Eugene Bagdasaryan

Bio

I am a fourth-year PhD candidate in Computer Science at Cornell, working on privacy and security for machine learning systems. I am co-advised by Professors Deborah Estrin and Vitaly Shmatikov.

I study machine learning privacy and security, including Federated Learning, Differential Privacy, and backdoor attacks (NeurIPS'19, AISTATS'20). My other interests include the Ancile project, which introduces language-level control over data usage (WPES'19), and the OpenRec project, which proposes a modular design for modern recommender systems (WSDM'18). On the industry side, I have development experience with large-scale systems such as Amazon Alexa and OpenStack.

For Summer 2020, I am looking for research internships in the NYC area. I am happy to talk or consult on Federated Learning and ML privacy and security.

Recent news
  • Jan 2020, Our paper on backdooring Federated Learning was accepted to AISTATS'20!
  • Nov 2019, Became a PhD candidate. Title: "Evaluating privacy-preserving techniques in machine learning".
  • Sep 2019, Our paper showing that Differential Privacy has a disparate impact on model fairness was accepted to NeurIPS'19.
  • Aug 2019, Our work on the Ancile use-based privacy system was accepted to WPES'19.
  • June 2019, Selected as a Digital Life Initiative fellow for 2019-2020.
Research papers
  • Ancile: Enhancing Privacy for Ubiquitous Computing with Use-Based Privacy

    A platform that enables control over applications' data usage through language-level policies, implementing use-based privacy.

    [Paper], [Code], [Slides].
  • Differential Privacy Has Disparate Impact on Model Accuracy

    This project examines a new trade-off between privacy and fairness. We observe that training a machine learning model with Differential Privacy disproportionately reduces accuracy on underrepresented groups.

    [NeurIPS, 2019], [Code]
  • How To Backdoor Federated Learning

    We introduce a constrain-and-scale attack, a form of model poisoning that lets a single compromised participant stealthily inject a backdoor into the joint model during one round of Federated Learning training. The attack evades proposed defenses, and the global server then distributes the compromised model to all other participants; a minimal sketch of the underlying model-replacement step appears after this paper list.

    [AISTATS, 2020], [Code]
  • OpenRec: A Modular Framework for Extensible and Adaptable Recommendation Algorithms

    An open and modular Python framework that supports extensible and adaptable research in recommender systems.

    [WSDM, 2018], [Code]
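
The backdoor attack above hinges on a model-replacement step: the attacker scales its locally backdoored model so that the server's federated averaging lands the new global model on it. Below is a minimal sketch of that step, assuming plain NumPy vectors stand in for model weights; the names (federated_average, scaled_malicious_update, n_participants, lr_global) are illustrative and not taken from the paper or its released code.

    import numpy as np

    def federated_average(global_w, local_ws, lr_global):
        # Server step: move the global model toward the average of the
        # submitted local models (federated averaging).
        n = len(local_ws)
        return global_w + (lr_global / n) * sum(w - global_w for w in local_ws)

    def scaled_malicious_update(global_w, backdoored_w, n_participants, lr_global):
        # Attacker step: scale the backdoored model by n/eta so that, after
        # averaging, the new global model lands approximately on backdoored_w.
        gamma = n_participants / lr_global
        return gamma * (backdoored_w - global_w) + global_w

    # Toy round with one attacker among ten participants (illustrative only).
    rng = np.random.default_rng(0)
    global_w = rng.normal(size=5)
    backdoored_w = global_w + 1.0  # stand-in for a locally backdoored model
    honest = [global_w + 0.01 * rng.normal(size=5) for _ in range(9)]
    attacker = scaled_malicious_update(global_w, backdoored_w,
                                       n_participants=10, lr_global=1.0)

    new_global = federated_average(global_w, honest + [attacker], lr_global=1.0)
    print(np.allclose(new_global, backdoored_w, atol=0.05))  # close to backdoored_w

The sketch only shows the "scale" part; in the paper, the constrain-and-scale attack additionally shapes the attacker's local training objective so that the scaled update still looks benign to anomaly-detection defenses.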