Eugene Bagdasaryan


I am a 4th-year Cornell CS PhD candidate working in privacy and security, focusing on machine learning systems that use personal data. I am co-advised by Professors Deborah Estrin and Vitaly Shmatikov. My current research covers security analysis of Federated Learning, its tradeoffs with Differential Privacy, and ML security.

For the last year, I have been working on the Ancile project, which introduces language-level control over data usage (to appear at WPES'19). In another project, OpenRec, we proposed a modular design for building modern recommender systems. I also have industry experience with large-scale systems such as Amazon Alexa and OpenStack, as well as with smaller projects in Agile teams.

For Summer 2020, I am looking for research internships in the NYC area.

Recent news
  • Nov 2019, Became a PhD candidate. Thesis title: "Evaluating privacy preserving techniques in machine learning".
  • Sep 2019, Our paper on the disparate impact of Differential Privacy on model fairness was accepted to NeurIPS'19.
  • Aug 2019, Our work on the Ancile use-based privacy system was accepted to WPES'19.
  • June 2019, Became a Digital Life Initiative fellow for 2019-2020.
Research papers
  • Ancile: Enhancing Privacy for Ubiquitous Computing with Use-Based Privacy

    A platform that gives control over how applications use data through language-level policies, implementing use-based privacy.

    [Paper], [Code], [Slides].
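    The use-based idea can be illustrated with a minimal sketch: data travels together with a policy, and each operation is checked against that policy before it runs. All class and method names below are illustrative only; the real system enforces much richer, language-level policies.

    ```python
    class PolicyError(Exception):
        pass

    class DataPolicyPair:
        """Wraps a value with a use-based policy: an ordered list of allowed operations."""

        def __init__(self, value, policy):
            self._value = value
            self._policy = list(policy)  # remaining allowed operations, in order

        def use(self, op_name, fn):
            """Apply fn to the wrapped value only if op_name is the next permitted operation."""
            if not self._policy or self._policy[0] != op_name:
                raise PolicyError(f"operation '{op_name}' not permitted by policy")
            return DataPolicyPair(fn(self._value), self._policy[1:])

        def release(self):
            """Return the raw value only once the policy is fully satisfied."""
            if self._policy:
                raise PolicyError("policy not yet satisfied; cannot release")
            return self._value
    ```

    For example, a value wrapped with the policy ["double", "truncate"] can be released only after both operations have run, in that order.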
  • Differential Privacy Has Disparate Impact on Model Accuracy

    This project identifies a new trade-off between privacy and fairness: training a machine learning model with Differential Privacy disproportionately reduces accuracy on underrepresented groups.

    [NeurIPS, 2019], [Code]
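    The mechanism behind this trade-off is DP-SGD's per-example gradient clipping and noise addition, which can be sketched roughly as follows (a toy NumPy illustration; function and parameter names are illustrative, not from the paper):

    ```python
    import numpy as np

    def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
        """One DP-SGD aggregation step: clip each example's gradient, average, add noise."""
        rng = rng or np.random.default_rng(0)
        clipped = []
        for g in per_example_grads:
            norm = max(np.linalg.norm(g), 1e-12)
            # Scale down any gradient whose L2 norm exceeds clip_norm
            clipped.append(g * min(1.0, clip_norm / norm))
        avg = np.mean(clipped, axis=0)
        # Gaussian noise calibrated to the clipping bound
        noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                           size=avg.shape)
        return avg + noise
    ```

    Intuitively, examples from underrepresented groups produce larger, more distinctive gradients, so clipping and noise suppress their learning signal the most.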
  • How To Backdoor Federated Learning

    We introduce a constrain-and-scale attack, a form of data poisoning, that can stealthily inject a backdoor into the joint model during a single round of Federated Learning training. The attack evades proposed defenses and propagates the backdoor to the global server, which then distributes the compromised model to the other participants.

    [ArXiv, 2018], [Code]
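    The core model-replacement idea can be sketched in a few lines: the attacker scales up its submitted model so that federated averaging effectively replaces the global model with a backdoored one. This is a toy NumPy illustration of FedAvg with a single attacker, not the paper's actual implementation.

    ```python
    import numpy as np

    def fedavg(global_w, submitted_models, lr=1.0):
        """Server step: average participants' model deltas and apply them to the global model."""
        deltas = [w - global_w for w in submitted_models]
        return global_w + lr * np.mean(deltas, axis=0)

    def model_replacement(global_w, backdoored_w, n_participants, lr=1.0):
        """Attacker step: scale the backdoored model so averaging yields it exactly."""
        return n_participants / lr * (backdoored_w - global_w) + global_w
    ```

    If the other participants submit models close to the current global model, their deltas nearly cancel, and the scaled malicious update survives averaging intact.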
  • OpenRec: A Modular Framework for Extensible and Adaptable Recommendation Algorithms

    An open-source, modular Python framework that supports extensible and adaptable research in recommender systems.

    [WSDM, 2018], [Code]
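    The modular design can be illustrated with a minimal sketch in which the interaction module is a pluggable component that can be swapped without touching the rest of the model. The classes below are illustrative only, not the actual OpenRec API.

    ```python
    import numpy as np

    class DotInteraction:
        """Scores a user/item pair by embedding dot product; swap in another
        interaction module to change the model without rewriting the recommender."""

        def score(self, u, v):
            return float(np.dot(u, v))

    class MatrixFactorizationRecommender:
        """Minimal modular recommender: embedding tables plus a pluggable interaction module."""

        def __init__(self, n_users, n_items, dim, interaction, seed=0):
            rng = np.random.default_rng(seed)
            self.users = rng.normal(size=(n_users, dim))
            self.items = rng.normal(size=(n_items, dim))
            self.interaction = interaction

        def recommend(self, user_id, k=3):
            scores = [self.interaction.score(self.users[user_id], v) for v in self.items]
            return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    ```

    Replacing DotInteraction with, say, a small neural scoring module changes the algorithm while the training and serving scaffolding stays the same, which is the adaptability the framework aims for.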