P-Power Exponential Mechanisms for Differentially Private Machine Learning

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Differentially private stochastic gradient descent (DP-SGD), which perturbs the clipped gradients, is a popular approach for private machine learning. The Gaussian mechanism (GM), combined with the moments accountant (MA), has demonstrated a much better privacy-utility tradeoff than the advanced composition theorem. However, it is unclear whether the tradeoff can be further improved by mechanisms with other noise distributions. To this end, we extend GM (p = 2) to the generalized p-power exponential mechanism family (pEM, with p > 0) and establish its privacy guarantee. This allows the privacy-utility tradeoff of GM to be enhanced by searching for a noise distribution in the wider mechanism space. To implement pEM in practice, we design an effective sampling method and extend MA to pEM for tightly estimating the privacy loss. In addition, we formally prove the non-optimality of GM based on the variational method. Numerical experiments validate the properties of pEM and provide a comprehensive comparison between pEM and two other state-of-the-art methods. The results show that pEM is preferable when the noise variance is small relative to the signal and the dimension is not too high.
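The noise family behind pEM is the p-power exponential (generalized Gaussian) density, proportional to exp(-|x/s|^p), which reduces to a Gaussian at p = 2. The abstract mentions an effective sampling method without detailing it; a minimal, hypothetical sketch of one standard way to draw such noise (via a gamma transform, not necessarily the paper's method; the names `p`, `s`, and `p_exponential_noise` are illustrative) is:

```python
import numpy as np

def p_exponential_noise(p, s, size, rng):
    """Draw samples from the density f(x) ∝ exp(-|x / s|^p).

    Standard gamma-transform sampler: if G ~ Gamma(shape=1/p, scale=1),
    then sign * s * G**(1/p) has the density above. This is an
    illustrative sketch, not the sampling method proposed in the paper.
    """
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=size)
    sign = rng.choice([-1.0, 1.0], size=size)
    return sign * s * g ** (1.0 / p)

rng = np.random.default_rng(0)
x = p_exponential_noise(p=2.0, s=1.0, size=200_000, rng=rng)
# For p = 2 the density is ∝ exp(-x**2 / s**2), i.e. Gaussian with
# variance s**2 / 2, so the empirical variance should be near 0.5.
```

Varying p trades tail weight against concentration: p < 2 gives heavier tails (p = 1 is Laplace-like), while p > 2 concentrates the noise more tightly around zero.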

Original language: English
Pages (from-to): 155018-155034
Number of pages: 17
Journal: IEEE Access
Volume: 9
State: Published - 2021

Keywords

  • Gaussian mechanism
  • privacy protection
  • moments accountant
  • noise variance
  • privacy-utility trade-off
