Scalable Privacy-Preserving Distributed Learning. (arXiv:2005.09532v5 [cs.CR] UPDATED)

In this paper, we address the problems of privacy-preserving distributed
learning and of machine-learning model evaluation by analyzing them within the
widespread MapReduce abstraction, which we extend with privacy constraints. We
design SPINDLE (Scalable Privacy-preservINg Distributed LEarning), the first
distributed and privacy-preserving system that covers the complete ML
workflow: it enables the execution of cooperative gradient descent and the
evaluation of the resulting model, while preserving data and model
confidentiality in a passive-adversary model with up to N-1 colluding parties.
SPINDLE uses multiparty homomorphic encryption to execute parallel, high-depth
computations on encrypted data without significant overhead.
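
To make the extended abstraction concrete, here is a minimal plaintext sketch
of one cooperative gradient-descent step in MapReduce style; the function
names are hypothetical, and plain NumPy stands in for the ciphertexts that
SPINDLE would process under multiparty homomorphic encryption.

    import numpy as np

    def local_gradient(X, y, w):
        # MAP: each data provider computes a gradient on its own shard.
        # In SPINDLE, these values would be ciphertexts under multiparty
        # homomorphic encryption; plain NumPy stands in here.
        preds = 1.0 / (1.0 + np.exp(-X @ w))    # logistic link
        return X.T @ (preds - y) / len(y)

    def cooperative_gd_step(shards, w, lr=0.5):
        # REDUCE: aggregate the local (encrypted) gradients and update
        # the (encrypted) global model weights.
        grads = [local_gradient(X, y, w) for X, y in shards]   # MAP
        return w - lr * np.mean(grads, axis=0)                 # REDUCE
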
We instantiate SPINDLE for the training and evaluation of generalized linear
models on distributed datasets and show that it trains models accurately (on
par with non-secure, centrally trained models) and efficiently (owing to a
multi-level parallelization of the computations), even for models that require
many iterations on large inputs with thousands of features distributed among
hundreds of data providers. For instance, SPINDLE trains a logistic-regression
model on a dataset of one million samples with 32 features, distributed among
160 data providers, in less than three minutes.
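
As a hedged usage example, the sketch above can be driven on synthetic shards
shaped like the reported benchmark (160 providers, 32 features, one million
samples in total); this mirrors only the problem size, not SPINDLE's encrypted
implementation or its running time.

    rng = np.random.default_rng(0)
    d, n_providers, n_per = 32, 160, 6250      # 160 x 6250 = 1M samples
    w_true = rng.normal(size=d)

    shards = []
    for _ in range(n_providers):
        X = rng.normal(size=(n_per, d))
        y = (1.0 / (1.0 + np.exp(-X @ w_true)) > rng.random(n_per)).astype(float)
        shards.append((X, y))

    w = np.zeros(d)
    for _ in range(100):                       # many iterations, as in the paper
        w = cooperative_gd_step(shards, w)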