CV


Basics

Name: Prajjwal Gupta
Label: Graduate Student
Email: gupta.praj@northeastern.edu
Url: https://ma1VAR3.github.io/
Summary: A graduate student at Northeastern University with research interests in machine learning privacy and security, explainable AI, and adversarial machine learning.

Work

  • 2023.08 - Present
    Teaching Assistant
    Northeastern University
    Reviewed course material, graded assignments, and held office hours for Introduction to Data Mining and Machine Learning (Fall 2023) and Foundations of AI (Spring 2024).
    • Data Mining
    • Reinforcement Learning
    • Deep Learning
    • Search and Optimization
    • AI Ethics
  • 2022.12 - 2023.07
    Research intern
    Indian Institute of Science (India Urban Data Exchange)
    - Developed a privacy-preserving mean estimation algorithm that provides user-level privacy in a non-IID setting, improving utility by 20% and privacy by 36% over existing approaches (an illustrative sketch follows this entry).
    - Designed a privacy-preserving approach for generative models based on latent-space quantization and the exponential mechanism, with differential privacy guarantees.
    • Differential Privacy
    • Privacy-Preserving Machine Learning
    • Generative Models
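The published estimator is not reproduced here; the minimal sketch below only illustrates the general idea of user-level differentially private mean estimation, clipping each user's local mean and adding Laplace noise. The function name, clipping bound, and noise mechanism are illustrative assumptions, not the algorithm developed at IUDX.

```python
import numpy as np

def user_level_dp_mean(user_samples, clip=1.0, epsilon=1.0, rng=None):
    """Illustrative user-level DP mean estimate (not the published algorithm).

    user_samples: list of 1-D arrays, one array per user; users may hold
    different numbers of samples (a non-IID setting).
    Each user's local mean is clipped to [-clip, clip]; replacing one user's
    entire data changes the average of clipped means by at most 2*clip/n,
    so Laplace noise of scale (2*clip/n)/epsilon gives epsilon-DP per user.
    """
    rng = np.random.default_rng() if rng is None else rng
    local_means = np.array([np.mean(s) for s in user_samples])
    clipped = np.clip(local_means, -clip, clip)
    n = len(clipped)
    sensitivity = 2.0 * clip / n
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

# Example: three users with different amounts of data
users = [np.array([0.2, 0.4, 0.1]), np.array([0.9]), np.array([0.5, 0.6])]
print(user_level_dp_mean(users, clip=1.0, epsilon=1.0))
```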
  • 2022.02 - 2022.07
    Federated Learning Consultant
    DynamoFL
    - Implemented a federated neural collaborative filtering pipeline from scratch for recommendation systems.
    - Integrated federated learning optimizers and aggregators for models such as XGBoost and K-Means clustering (a generic weighted-averaging sketch follows this entry).
    - Developed end-to-end solutions with the DynamoFL framework for diverse client use cases.
    • Federated Learning
    • Recommendation Systems
    • XGBoost
    • Clustering
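The DynamoFL framework's own APIs are not shown here; the sketch below only illustrates the generic sample-size-weighted averaging (FedAvg-style) that federated aggregators of this kind perform. Function and parameter names are assumptions for illustration.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Sample-size-weighted federated averaging (generic FedAvg-style aggregator).

    client_weights: one entry per client, each a list of numpy arrays with
    identical shapes across clients.
    client_sizes: number of local training samples per client, used as weights.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    aggregated = []
    for layer in range(n_layers):
        layer_sum = sum(
            (size / total) * weights[layer]
            for weights, size in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_sum)
    return aggregated

# Example: two clients, each with a tiny two-tensor model
c1 = [np.ones((2, 2)), np.zeros(2)]
c2 = [np.full((2, 2), 3.0), np.ones(2)]
print(fedavg([c1, c2], client_sizes=[10, 30]))
```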
  • 2021.08 - 2022.01
    Research Intern
    National Institute of Technology, Kurukshetra (Security and AI Laboratory)
    - Created a data poisoning attack for collaborative learning settings, achieving more than three times the adversarial success rate of existing error-generic attacks (a rough loss-inversion sketch follows this entry).
    - Investigated encoder-based models in distributed learning to achieve cross-domain generalization of network intrusion detection in a multi-client setting.
    • Federated Learning
    • Adversarial Machine Learning
    • Intrusion Detection
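The attack itself is described in the cited publication; the sketch below only illustrates the general loss-inversion idea, where a malicious client ascends the training loss before submitting its update. The model, data, and hyperparameters are illustrative assumptions, not the published attack.

```python
import tensorflow as tf

def poisoned_local_update(model, x, y, lr=0.01, steps=1):
    """Illustrative malicious client step: optimize the negated (inverted)
    training loss, i.e. perform gradient ascent on the clean objective,
    before the local weights are sent to the federated aggregator.
    """
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    opt = tf.keras.optimizers.SGD(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            inverted_loss = -loss_fn(y, model(x, training=True))  # sign flip
        grads = tape.gradient(inverted_loss, model.trainable_variables)
        opt.apply_gradients(zip(grads, model.trainable_variables))
    return model.get_weights()  # update shipped to the server

# Toy usage with an assumed 3-class model and random features
model = tf.keras.Sequential([tf.keras.layers.Dense(3, activation="softmax")])
x = tf.random.normal((8, 4))
y = tf.constant([0, 1, 2, 0, 1, 2, 0, 1])
poisoned_weights = poisoned_local_update(model, x, y)
```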

Education

  • 2023.09 - 2025.04
    MS, Artificial Intelligence
    Northeastern University
    Boston, MA, USA
    • Unsupervised Machine Learning
    • Foundations of AI
    • Algorithms
    • Program Design Paradigms
  • 2019.08 - 2023.04
    BTech, Computer Science and Engineering
    Vellore Institute of Technology
    Vellore, India
    • Machine Learning
    • Natural Language Processing
    • Parallel and Distributed Computing
    • Data Privacy

Awards

  • 2023.05.15
    Raman Research Award
    Vellore Institute of Technology
    Award for the publication 'A Novel Data Poisoning Attack in Federated Learning based on Inverted Loss Function'

Publications

  • 'A Novel Data Poisoning Attack in Federated Learning based on Inverted Loss Function' (recognized with the 2023 Raman Research Award)

Skills

Programming Languages: Python, R, Java, SQL
Frameworks: TensorFlow, PyTorch, Keras, Scikit-learn, Hugging Face Transformers, Flask
Libraries and Databases: VectorDB, NumPy, Pandas, NLTK, OpenCV, Plotly, Matplotlib, Seaborn
Tools: Git, Docker, Kubernetes, Kubeflow, BigQuery, Vertex AI
Skills: Deep Learning, Computer Vision, Reinforcement Learning, Natural Language Processing, Data Analysis

Languages

English: Fluent
Hindi: Native

Projects

  • 2021.10 - 2022.02
    DAIDNet IDS
    Conceptualized and implemented a novel deep learning model for two-stage classification of network intrusions. DAIDNet leverages multiple autoencoders to learn patterns from distinct data subsets and demonstrated state-of-the-art (SOTA) or near-SOTA performance on benchmark datasets (a generic single-autoencoder sketch follows this entry).
    • Autoencoders
    • TensorFlow
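DAIDNet's architecture is not reproduced here; the sketch below shows only a generic first stage of such a pipeline, a single dense autoencoder trained on benign traffic whose reconstruction error flags suspected intrusions. Feature counts, layer sizes, and the threshold are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

def build_autoencoder(n_features, latent_dim=8):
    """Small dense autoencoder used as a one-class anomaly scorer."""
    inputs = tf.keras.Input(shape=(n_features,))
    z = tf.keras.layers.Dense(latent_dim, activation="relu")(inputs)
    outputs = tf.keras.layers.Dense(n_features, activation="linear")(z)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

def reconstruction_error(autoencoder, x):
    """Per-sample mean squared reconstruction error."""
    recon = autoencoder.predict(x, verbose=0)
    return np.mean(np.square(x - recon), axis=1)

# Stage 1: train on benign traffic only, flag high-error flows as intrusions.
n_features = 20
benign = np.random.rand(512, n_features).astype("float32")
test = np.random.rand(64, n_features).astype("float32")

ae = build_autoencoder(n_features)
ae.fit(benign, benign, epochs=3, batch_size=64, verbose=0)

threshold = np.percentile(reconstruction_error(ae, benign), 95)
flags = reconstruction_error(ae, test) > threshold  # True => suspected intrusion
# Stage 2 (not shown) would classify the flagged flows into attack categories.
```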
  • 2023.01 - 2023.05
    FedPartial
    Built a framework that enables collaborative learning across multiple hospitals, each using a different model architecture. Achieved a 66% gain in communication efficiency while maintaining utility comparable to baseline models on a chest X-ray image classification task with the EfficientNet family of models (an illustrative shared-head aggregation sketch follows this entry).
    • Federated Learning
    • TensorFlow
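FedPartial's actual mechanism is not detailed above; as one hedged illustration of how communication can shrink when clients use different backbones, the sketch below averages only an identically shaped classification head and keeps each backbone local. The shapes, names, and shared-head assumption are illustrative, not the framework's design.

```python
import numpy as np

def aggregate_shared_head(client_heads, client_sizes):
    """Weighted average of a classification head shared by all clients,
    while each client's (possibly different) backbone never leaves the site.
    """
    total = float(sum(client_sizes))
    n_tensors = len(client_heads[0])
    return [
        sum((size / total) * head[i] for head, size in zip(client_heads, client_sizes))
        for i in range(n_tensors)
    ]

# Two hospitals with different backbones but an identically shaped head
# (a 1280-d feature vector feeding a 2-class dense layer is assumed here).
head_a = [np.random.rand(1280, 2).astype("float32"), np.zeros(2, dtype="float32")]
head_b = [np.random.rand(1280, 2).astype("float32"), np.ones(2, dtype="float32")]
shared_head = aggregate_shared_head([head_a, head_b], client_sizes=[200, 300])

# Communication saving: only the head crosses the network, not the backbone.
head_bytes = sum(w.nbytes for w in head_a)
print(f"per-round upload per hospital: {head_bytes / 1e3:.1f} kB (head only)")
```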