I study machine learning privacy and security, including Federated Learning, Differential Privacy, and backdoor attacks (NeurIPS'19, AISTATS'20). My other interests include the Ancile project, which introduces language-level control over data usage (WPES'19), and the OpenRec project, which proposes a modular design for modern recommender systems. On the industry side, I have development experience with large-scale systems such as Amazon Alexa and OpenStack.
For Summer 2020, I will be at Google Research NYC.
We introduce the constrain-and-scale attack, a form of data poisoning that can stealthily inject a backdoor into one of the participating models during a single round of Federated Learning training. The attack evades proposed defenses and propagates the backdoor to the global server, which then distributes the compromised model to the other participants. [AISTATS, 2020], [Code]
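The "scale" part of the attack can be illustrated with a minimal sketch: because the server averages participant updates, an attacker who scales up its own update can dominate the average and substitute its backdoored model for the global one. This sketch assumes plain equally-weighted FedAvg; the function name `scale_update` and the choice `gamma = n_participants` are illustrative simplifications, not the exact formulation from the paper.

```python
import numpy as np

def scale_update(global_weights, local_weights, n_participants):
    """Model-replacement sketch: scale the attacker's update so that,
    after the server averages it with the other participants' updates,
    the global model is (approximately) replaced by the attacker's model.

    Assumes equally-weighted FedAvg; gamma = n_participants is the
    idealized scaling factor for that setting.
    """
    gamma = n_participants
    return {name: gamma * (local_weights[name] - global_weights[name]) + global_weights[name]
            for name in global_weights}

# Toy example: one attacker among n participants; the benign participants
# submit the unchanged global model for simplicity.
n = 10
global_w = {"w": np.array([1.0, 2.0])}
backdoored_w = {"w": np.array([2.0, 0.0])}  # attacker's locally trained model

submitted = scale_update(global_w, backdoored_w, n)
new_global = ((n - 1) * global_w["w"] + submitted["w"]) / n  # FedAvg step
# new_global now equals the attacker's backdoored weights
```

The "constrain" part (not shown) adds an anomaly-detection term to the attacker's training loss so the scaled update still looks statistically close to benign updates.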