Efficient Privacy-Preserving Stochastic Nonconvex Optimization. (arXiv:1910.13659v3 [cs.LG] UPDATED)
While many solutions for privacy-preserving convex empirical risk
minimization (ERM) have been developed, privacy-preserving nonconvex ERM
remains a challenge. We study nonconvex ERM, which takes the form of minimizing
a finite-sum of nonconvex loss functions over a training set. We propose a new
differentially private stochastic gradient descent algorithm for nonconvex ERM
that achieves strong privacy guarantees efficiently, and provide a tight
analysis of its privacy and utility guarantees, as well as its gradient
complexity. Our algorithm reduces gradient complexity while improves the best
previous utility guarantee given by Wang et al. (NeurIPS 2017). Our experiments
on benchmark nonconvex ERM problems demonstrate superior performance, in terms
of both training cost and utility, compared with previous differentially
private methods under the same privacy budgets.
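As a concrete illustration of the kind of update differentially private SGD methods use, the sketch below shows a generic DP-SGD step on a toy nonconvex finite-sum objective: each per-example gradient is clipped to a fixed L2 norm, the clipped gradients are averaged, and Gaussian noise calibrated to the clipping norm is added before the descent step. This is the standard DP-SGD template, not necessarily the paper's exact algorithm; the toy loss, data, and hyperparameter names (clip_norm, noise_multiplier, lr) are illustrative assumptions.

# Minimal sketch of a generic DP-SGD step on a toy nonconvex finite-sum objective.
# Illustrative only; not the specific algorithm analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset; per-example nonconvex loss l_i(w) = 1 - exp(-(w.x_i - y_i)^2 / 2).
n, d = 200, 5
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

def per_example_grad(w, x_i, y_i):
    # Gradient of 1 - exp(-r^2/2) with r = w.x_i - y_i.
    r = w @ x_i - y_i
    return np.exp(-0.5 * r**2) * r * x_i

def dp_sgd_step(w, batch_idx, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    # One DP-SGD step: clip per-example gradients, average, add Gaussian noise
    # scaled to the clipping norm (the per-example sensitivity bound).
    grads = []
    for i in batch_idx:
        g = per_example_grad(w, X[i], y[i])
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # L2 clipping
        grads.append(g)
    avg = np.mean(grads, axis=0)
    noise = rng.normal(scale=noise_multiplier * clip_norm / len(batch_idx), size=d)
    return w - lr * (avg + noise)

w = np.zeros(d)
for step in range(100):
    batch = rng.choice(n, size=32, replace=False)
    w = dp_sgd_step(w, batch)

The noise_multiplier would in practice be set by a privacy accountant for a target (epsilon, delta); the value 1.0 here is a placeholder for illustration.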