I am a 4th year Cornell CS PhD candidate working in privacy and security, focusing on Machine Learning systems that use personal data. I am co-advised by Professors Deborah Estrin and Vitaly Shmatikov. My current research focuses on security analysis of Federated Learning, tradeoffs with Differential Privacy, and ML security.
For the last year, I have been working on the Ancile project, which introduces language-level control over data usage (to appear at WPES'19). In another project, OpenRec, we proposed a modular design for building modern recommender systems. I also have experience working on large-scale systems such as Amazon Alexa and OpenStack, and on smaller projects in Agile teams.
For Summer 2020, I am looking for research internships in the NYC area.
We introduce the constrain-and-scale attack, a form of data poisoning that can stealthily inject a backdoor through a single participant's model during one round of Federated Learning training. The attack evades proposed defenses and propagates the backdoor to the global server, which then distributes the compromised model to the other participants. [arXiv, 2018], [Code]
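The core "scale" step behind this kind of attack can be sketched in a few lines. The snippet below is an illustrative toy, not the project's actual implementation: `federated_average`, the 10-participant setup, and the learning-rate convention are all assumptions made for the example. The idea is that the attacker scales its update so that, after the server averages all updates, the global model is replaced by the attacker's backdoored model.

```python
import numpy as np

def federated_average(global_model, updates, lr=1.0):
    # Server step (assumed FedAvg form): G' = G + (lr / n) * sum_i (L_i - G)
    n = len(updates)
    return global_model + (lr / n) * sum(u - global_model for u in updates)

def scaled_malicious_update(global_model, backdoored_model, n, lr=1.0):
    # Attacker scales its submission so averaging yields the backdoored
    # model X: L_adv = (n / lr) * (X - G) + G
    return (n / lr) * (backdoored_model - global_model) + global_model

# Toy round: 9 honest participants whose updates equal the current global
# model (no drift), plus one attacker.
G = np.zeros(3)                          # current global model
X = np.array([1.0, -2.0, 0.5])           # attacker's backdoored model
honest = [np.zeros(3) for _ in range(9)]
updates = honest + [scaled_malicious_update(G, X, n=10)]
new_G = federated_average(G, updates)    # new_G equals X
```

In practice the honest updates are not identical to the global model, so replacement is approximate rather than exact, and the "constrain" part of the attack keeps the scaled update close enough to benign ones to evade anomaly detection.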