Eugene Bagdasaryan


I am a 4th-year Cornell CS PhD candidate working on privacy and security applied to Machine Learning systems. I am co-advised by Professors Deborah Estrin and Vitaly Shmatikov.

I study Machine Learning privacy and security, including Federated Learning, Differential Privacy, and backdoor attacks (NeurIPS'19, AISTATS'20). My other interests include the Ancile project, which introduces language-level control over data usage (WPES'19), and the OpenRec project, which proposes a modular design for modern recommender systems. On the industry side, I have development experience with large-scale systems such as Amazon Alexa and OpenStack.

For Summer 2020, I will be at Google Research NYC.

Recent news
  • Jan 2020, Our paper on backdoor attacks in federated learning was accepted to AISTATS'20!
  • Nov 2019, Became a PhD candidate. Title: "Evaluating privacy-preserving techniques in machine learning".
  • Sep 2019, Our paper on the disparate impact of Differential Privacy on model fairness was accepted to NeurIPS'19.
  • Aug 2019, Our work on Ancile, a use-based privacy system, was accepted to WPES'19.
  • June 2019, Named a Digital Life Initiative fellow for 2019-2020.
Research papers
  • Salvaging Federated Learning by Local Adaptation

    Recovering participants' model performance on their own data when federated learning is combined with robustness and privacy techniques.

    [Paper], [Code].
  • Ancile: Enhancing Privacy for Ubiquitous Computing with Use-Based Privacy

    A platform that enables control over applications' data usage through language-level policies, implementing use-based privacy.

    [WPES'19], [Code], [Slides].
  • Differential Privacy Has Disparate Impact on Model Accuracy

    This project identifies a new trade-off between privacy and fairness: training a Machine Learning model with Differential Privacy reduces accuracy on underrepresented groups.

    [NeurIPS, 2019], [Code].
  • How To Backdoor Federated Learning

    We introduce a constrain-and-scale attack, a form of data poisoning that can stealthily inject a backdoor into the joint model during a single round of Federated Learning training. The attack evades proposed defenses, and the global server propagates the compromised model to the other participants.

    [AISTATS, 2020], [Code].
  • OpenRec: A modular framework for extensible and adaptable recommendation algorithms

    An open and modular Python framework that supports extensible and adaptable research in recommender systems.

    [WSDM, 2018], [Code].
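
As a rough illustration of the mechanism behind the Differential Privacy paper above, here is a minimal NumPy sketch of one DP-SGD step (per-example gradient clipping plus Gaussian noise, in the style of Abadi et al.). All function and parameter names are illustrative, not the paper's actual implementation:

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, clip_norm, noise_mult, lr, rng):
    """One DP-SGD step: clip each per-example gradient, then add noise.

    Clipping bounds every example's influence on the update; examples from
    underrepresented groups tend to produce larger gradients, so clipping
    affects them disproportionately -- the trade-off the paper observes.
    """
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noise = rng.normal(0.0, noise_mult * clip_norm, size=w.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(per_example_grads)
    return w - lr * noisy_mean
```

With `noise_mult=0` this reduces to SGD on clipped gradients, which already biases updates against examples with large gradients.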
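
The model-replacement idea behind the backdoor attack above can be sketched in a few lines, assuming a FedAvg-style server with learning rate eta and n participants; names are illustrative and this omits the constrain step (evading anomaly detection), showing only the scaling:

```python
import numpy as np

def fedavg(global_w, submitted, eta, n):
    # Server update: G_{t+1} = G_t + (eta / n) * sum_i (L_i - G_t)
    return global_w + (eta / n) * sum(l - global_w for l in submitted)

def model_replacement(global_w, backdoored_w, eta, n):
    # The attacker scales its backdoored model by gamma = n / eta so that,
    # after averaging, the global model approximately equals the backdoored
    # one (assuming benign updates roughly cancel out near convergence).
    gamma = n / eta
    return gamma * (backdoored_w - global_w) + global_w
```

In the idealized case where the other n-1 participants submit no change, a single scaled submission fully replaces the global model with the attacker's.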