Christopher A. Choquette-Choo

I am a researcher at Google Brain, working on privacy and security. My focus is privacy-preserving and adversarial machine learning. I've worked on privacy auditing techniques, collaborative learning approaches, and methods for ownership verification.


Previously, I was an AI Resident at Google and a researcher in the CleverHans Lab at the Vector Institute. I graduated from the University of Toronto, where I held a full scholarship.

Email: choquette[dot]christopher[at]gmail[dot]com

Research

I'm broadly interested in Machine Learning, with a focus on its intersection with security and privacy. Specific areas include adversarial ML, data privacy, deep learning, and collaborative learning. See my Google Scholar for an up-to-date list.

The Fundamental Price of Secure Aggregation in Differentially Private Federated Learning
Wei-Ning Chen*, Christopher A. Choquette-Choo*, Peter Kairouz*, Ananda Theertha Suresh*
International Conference on Machine Learning (ICML), 2022 conference
Theory and Practice of Differential Privacy (TPDP) Workshop, 2022 workshop
* Equal contribution. The names are ordered alphabetically.

We characterize the fundamental communication costs of Federated Learning (FL) under Secure Aggregation (SecAgg) and Differential Privacy (DP), two privacy-preserving mechanisms commonly used with FL. We prove optimality in worst-case settings, significantly improving over prior work, and show that further improvements are possible under additional assumptions, e.g., data sparsity. Extensive empirical evaluations support our claims, showing costs as low as 1.2 bits per parameter on Stack Overflow with less than 4% relative decrease in test-time model accuracy.
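
To make the per-parameter cost concrete, here is a minimal numpy sketch (an illustration under simplifying assumptions, not the paper's protocol) of why SecAgg forces each coordinate into a finite group of size q, so every client sends log2(q) bits per parameter while the pairwise masks cancel in the modular sum:

    import numpy as np

    rng = np.random.default_rng(0)
    n_clients, d, q = 4, 8, 2 ** 12          # q fixes the cost: log2(q) = 12 bits per parameter

    # each client quantizes its (clipped) update into Z_q
    updates = rng.uniform(-1, 1, size=(n_clients, d))
    quantized = np.round((updates + 1) / 2 * (q - 1)).astype(np.int64)

    # pairwise random masks: client i adds m_ij, client j subtracts it, so they cancel mod q
    masked = quantized.copy()
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.integers(0, q, size=d)
            masked[i] = (masked[i] + m) % q
            masked[j] = (masked[j] - m) % q

    # the server only sees masked vectors, yet their modular sum equals the true sum
    assert np.array_equal(masked.sum(axis=0) % q, quantized.sum(axis=0) % q)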

Communication Efficient Federated Learning with Secure Aggregation and Differential Privacy
Wei-Ning Chen*, Christopher A. Choquette-Choo*, Peter Kairouz*
Privacy in Machine Learning Workshop at NeurIPS, 2021 workshop
* Equal contribution. The names are ordered alphabetically.

We show that, in the worst case, differentially private federated learning with secure aggregation requires Ω(d) bits of communication. Despite this, we discuss how to leverage the near-sparsity of model updates to compress them by more than 50x using sketching, with modest noise multipliers of 0.4.
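
As a rough illustration of the compress-then-noise idea (a hypothetical count-sketch variant, not the exact mechanism from the paper), one can project a near-sparse update into a much smaller sketch, add Gaussian noise there, and decode on the server:

    import numpy as np

    rng = np.random.default_rng(0)
    d, width, rows = 10_000, 40, 5            # 10,000-dim update into a 5 x 40 sketch (50x smaller)
    noise_multiplier = 0.4

    # a near-sparse model update: a few large coordinates, the rest zero
    x = np.zeros(d)
    x[rng.choice(d, 10, replace=False)] = rng.normal(0, 1, 10)

    buckets = rng.integers(0, width, size=(rows, d))   # hashed bucket of each coordinate per row
    signs = rng.choice([-1.0, 1.0], size=(rows, d))

    # sketch: each coordinate adds its signed value to one bucket per row
    sketch = np.zeros((rows, width))
    for r in range(rows):
        np.add.at(sketch[r], buckets[r], signs[r] * x)

    # add Gaussian noise to the (small) sketch instead of the full d-dim vector
    sketch += rng.normal(0, noise_multiplier, size=sketch.shape)

    # decode: median-of-rows estimate of each coordinate
    estimates = np.median(signs * sketch[np.arange(rows)[:, None], buckets], axis=0)
    print("compression:", d / (rows * width), "x; reconstruction error:", np.linalg.norm(estimates - x))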

Proof-of-Learning: Definitions and Practice
Hengrui Jia^, Mohammad Yaghini^, Christopher A. Choquette-Choo*, Natalie Dullerud*, Anvith Thudi*, Varun Chandrasekaran, Nicolas Papernot
Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA, 2021 conference
^,* Equal contribution. The names are ordered alphabetically.

How can we prove that a machine learning model owner trained their model? We define the problem of Proof-of-Learning (PoL) in machine learning and provide a mechanism for it that is robust to several spoofing attacks. This protocol enables model ownership verification and robustness against Byzantine workers (in a distributed learning setting).
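
The core mechanism can be sketched as follows (a simplified, hypothetical toy; the actual scheme handles hardware noise, hashing of data, and adversarial provers): the prover logs intermediate checkpoints together with the batches used between them, and a verifier replays a logged segment to check that it reproduces the next checkpoint.

    import numpy as np

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(256, 10)), rng.integers(0, 2, 256)

    def sgd_step(w, batch_idx, lr=0.1):
        # one deterministic logistic-regression SGD step on the given batch
        xb, yb = X[batch_idx], y[batch_idx]
        preds = 1 / (1 + np.exp(-xb @ w))
        return w - lr * xb.T @ (preds - yb) / len(batch_idx)

    # prover: train and log (checkpoint, batches used until the next checkpoint) every k steps
    w, proof, k = np.zeros(10), [], 8
    batches = [rng.choice(256, 32, replace=False) for _ in range(64)]
    for step, batch_idx in enumerate(batches):
        if step % k == 0:
            proof.append((w.copy(), batches[step:step + k]))
        w = sgd_step(w, batch_idx)
    proof.append((w.copy(), []))

    # verifier: replay each logged segment and check it reaches the next checkpoint
    for (w_start, segment), (w_end, _) in zip(proof, proof[1:]):
        w_replay = w_start.copy()
        for batch_idx in segment:
            w_replay = sgd_step(w_replay, batch_idx)
        assert np.allclose(w_replay, w_end)
    print("proof verified")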

CaPC Learning: Confidential and Private Collaborative Learning
Christopher A. Choquette-Choo*, Natalie Dullerud*, Adam Dziedzic*, Yunxiang Zhang*, Somesh Jha, Nicolas Papernot, Xiao Wang
9th International Conference on Learning Representations (ICLR), 2021 conference
* Equal contribution. The names are ordered alphabetically.

We design a protocol for collaborative learning that ensures both the privacy of the training data and the confidentiality of the test data. Our protocol can improve models by several percentage points of accuracy, especially on subpopulations where the model underperforms, with a modest privacy budget of less than 20. Unlike prior work, we enable collaborative learning among participants with heterogeneous models. Unlike differentially private federated learning, which requires on the order of 1 million participants, our protocol works in regimes of around 100 participants.
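
Stripped of the cryptography, the answering step reduces to private label aggregation, sketched hypothetically below; in the real protocol, homomorphic encryption and MPC additionally hide the query and the individual votes from every party.

    import numpy as np

    rng = np.random.default_rng(0)
    n_parties, n_classes, sigma = 100, 10, 40.0     # ~100 collaborating parties

    # each answering party runs its own (possibly heterogeneous) model and votes a label
    votes = rng.integers(0, n_classes, size=n_parties)   # stand-in for the models' predictions
    histogram = np.bincount(votes, minlength=n_classes)

    # Gaussian-noised argmax (PATE-style aggregation) bounds the privacy leakage of the answer
    noisy_label = int(np.argmax(histogram + rng.normal(0, sigma, n_classes)))
    print("label returned to the querying party:", noisy_label)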

Label-Only Membership Inference Attacks
Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot
38th International Conference on Machine Learning (ICML), 2021 conference

What defenses properly protect against all membership inference threats? We show that confidence masking -- defensively obfuscating confidence vectors -- is not a viable defense against Membership Inference. We do this by introducing three label-only attacks, which bypass this defense and match the performance of typical confidence-vector attacks. In an extensive evaluation of defenses, including the first evaluation of data augmentation and transfer learning as defenses, we further show that Differential Privacy can defend against average- and worst-case Membership Inference attacks.
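
For intuition, here is a minimal, hypothetical sketch of one label-only attack in the spirit of the paper: query the target model only for predicted labels on a point and several perturbed copies of it, and score membership by how robustly the point keeps its correct label (members tend to be more robust).

    import numpy as np

    rng = np.random.default_rng(0)

    # toy target model: nearest-centroid classifier trained on two Gaussian blobs
    train_x = np.concatenate([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
    train_y = np.array([0] * 50 + [1] * 50)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in (0, 1)])
    predict_label = lambda x: int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

    def label_only_score(x, y_true, n_aug=16, scale=0.5):
        # membership score: fraction of perturbed queries still labeled correctly
        queries = [x] + [x + rng.normal(0, scale, x.shape) for _ in range(n_aug)]
        return np.mean([predict_label(q) == y_true for q in queries])

    # points with a score above a threshold (calibrated, e.g., on shadow models)
    # are predicted to be members of the training set
    print("candidate member score:", label_only_score(train_x[0], train_y[0]))
    print("candidate non-member score:", label_only_score(rng.normal(0, 1, 5), 0))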

Entangled Watermarks as a Defense against Model Extraction
Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot
Proceedings of the 30th USENIX Security Symposium, 2021 conference

How can we enable an IP owner to reliably claim ownership of a stolen model? We entangle watermarks with the task data so that stolen models learn the watermarks as well. Our improved watermarks enable IP owners to claim ownership with 95% confidence using fewer than 10 queries to the stolen model.
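
Ownership verification itself reduces to a simple hypothesis test, sketched below with hypothetical counts: query the suspect model on a few watermarked inputs and check that it predicts the watermark label far more often than chance.

    from scipy.stats import binomtest

    # query the suspect model on k watermarked inputs and count how many it labels
    # with the watermark class (hypothetical counts: 8 hits out of 10 queries)
    k, hits, n_classes = 10, 8, 10

    # under the null (an independently trained model), each query hits by chance 1/n_classes
    test = binomtest(hits, k, p=1 / n_classes, alternative="greater")
    if test.pvalue < 0.05:
        print("ownership claim supported at 95% confidence (p = %.2g)" % test.pvalue)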

Machine Unlearning
Lucas Bourtoule*, Varun Chandrasekaran*, Christopher A. Choquette-Choo*, Hengrui Jia*, Adelin Travers*, Baiwu Zhang*, David Lie, Nicolas Papernot
Proceedings of the 42nd IEEE Symposium on Security and Privacy, San Francisco, CA, 2021 conference
* Equal contribution. The names are ordered alphabetically.

How can we enable efficient and guaranteed retraining of machine learning models? We define requirements for machine unlearning and study a stricter unlearning requirement whereby unlearning a data point is guaranteed to be equivalent to never having trained on it. To this end, we improve on the naive retrain-from-scratch approach to provide a better accuracy-efficiency tradeoff. We also study how a priori knowledge of the distribution of unlearning requests can further improve efficiency.
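
A minimal, hypothetical sketch of the sharded-training idea behind this approach: split the training set into shards, train one constituent model per shard, and aggregate predictions by voting; unlearning a point then only requires retraining the single shard that contained it.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    # split the data into shards and train one constituent model per shard
    n_shards = 6
    shard_idx = np.array_split(rng.permutation(len(X)), n_shards)
    models = [LogisticRegression().fit(X[idx], y[idx]) for idx in shard_idx]

    def predict(x):
        # aggregate the constituent models by majority vote
        votes = [m.predict(x.reshape(1, -1))[0] for m in models]
        return np.bincount(votes).argmax()

    def unlearn(point_id):
        # only the shard containing the point is retrained from scratch
        s = next(i for i, idx in enumerate(shard_idx) if point_id in idx)
        shard_idx[s] = shard_idx[s][shard_idx[s] != point_id]
        models[s] = LogisticRegression().fit(X[shard_idx[s]], y[shard_idx[s]])

    unlearn(42)                      # forget training point 42: 1 shard retrained, not 6
    print("prediction after unlearning:", predict(X[42]))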