Shuffle Private Stochastic Convex Optimization. (arXiv:2106.09805v1 [cs.LG])

In shuffle privacy, each user sends a collection of randomized messages to a
trusted shuffler, the shuffler randomly permutes these messages, and the
resulting shuffled collection of messages must satisfy differential privacy.
Prior work in this model has largely focused on protocols that use a single
round of communication to compute algorithmic primitives like means,
histograms, and counts. In this work, we present interactive shuffle protocols
for stochastic convex optimization. Our optimization protocols rely on a new
noninteractive protocol for summing vectors of bounded $\ell_2$ norm. By
combining this sum subroutine with techniques including mini-batch stochastic
gradient descent, accelerated gradient descent, and Nesterov’s smoothing
method, we obtain loss guarantees for a variety of convex loss functions that
significantly improve on those of the local model and sometimes match those of
the central model.
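
To make the high-level recipe concrete, the sketch below shows mini-batch SGD in which each per-example gradient is clipped to a bounded $\ell_2$ norm and the batch gradient is estimated through a noisy vector-sum primitive. This is only an illustrative stand-in, not the paper's protocol: the function names, parameters, and the Gaussian noise used in `noisy_vector_sum` are assumptions; in the paper that sum would instead be computed by the noninteractive shuffle protocol.

```python
import numpy as np


def clip_l2(v, bound):
    """Rescale v so its l2 norm is at most `bound`."""
    norm = np.linalg.norm(v)
    return v if norm <= bound else v * (bound / norm)


def noisy_vector_sum(vectors, bound, noise_scale, rng):
    """Placeholder for a private vector-sum subroutine: sum the clipped
    vectors and add spherical Gaussian noise calibrated to the clip bound.
    (Illustrative only; the paper realizes this step in the shuffle model.)"""
    total = np.sum([clip_l2(v, bound) for v in vectors], axis=0)
    return total + rng.normal(scale=noise_scale * bound, size=total.shape)


def private_minibatch_sgd(grad_fn, data, dim, *, steps=100, batch_size=32,
                          lr=0.1, clip_bound=1.0, noise_scale=1.0, seed=0):
    """Mini-batch SGD where each batch gradient is aggregated via the
    noisy vector sum. `data` is assumed to be a NumPy array of examples and
    `grad_fn(theta, x)` returns the per-example gradient."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    for _ in range(steps):
        idx = rng.choice(len(data), size=batch_size, replace=False)
        grads = [grad_fn(theta, x) for x in data[idx]]
        avg_grad = noisy_vector_sum(grads, clip_bound, noise_scale, rng) / batch_size
        theta -= lr * avg_grad
    return theta
```

In this toy version, swapping `noisy_vector_sum` for a shuffle-model sum protocol (and adding acceleration or smoothing for non-smooth losses) is the kind of composition the abstract describes.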