Federated Learning of User Verification Models Without Sharing Embeddings. (arXiv:2104.08776v2 [cs.LG] UPDATED)

We consider the problem of training User Verification (UV) models in a
federated setting, where each user has access to the data of only one class and
user embeddings cannot be shared with the server or other users. To address
this problem, we propose Federated User Verification (FedUV), a framework in
which users jointly learn a set of vectors and maximize the correlation of
their instance embeddings with a secret linear combination of those vectors. We
show that choosing the linear combinations from the codewords of an
error-correcting code allows users to collaboratively train the model without
revealing their embedding vectors. We present experimental results for user
verification with voice, face, and handwriting data and show that FedUV is on
par with existing approaches while not sharing embeddings with other users or
the server.
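
To make the described objective concrete, below is a minimal PyTorch sketch of training an embedding network so that the correlation between a projection of the instance embedding and a user's secret codeword is maximized. The network architecture, dimensions, codeword generation, and the softplus-based loss form are illustrative assumptions, not the paper's exact formulation; the key point is that only the shared parameters leave the device, never the embedding or the codeword.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch (assumed setup): each user holds a secret codeword
# c_u in {-1, +1}^k, which the paper draws from an error-correcting code.
# Users jointly learn a shared matrix W (k x d) and an embedding network f;
# user u trains locally so that the correlation <W f(x), c_u> is large,
# without ever sending f(x) or c_u to the server.

EMBED_DIM, CODE_LEN = 64, 32          # d and k (assumed values)

class EmbeddingNet(nn.Module):
    """Toy instance-embedding network; the real model depends on the modality."""
    def __init__(self, in_dim=128, out_dim=EMBED_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

# Shared, jointly learned vectors (one row per code-symbol position).
W = nn.Parameter(torch.randn(CODE_LEN, EMBED_DIM) * 0.01)

def correlation_loss(embeddings, codeword):
    """Encourage the correlation between W f(x) and the secret codeword to be large."""
    logits = embeddings @ W.t()                  # (batch, CODE_LEN)
    corr = (logits * codeword).sum(dim=-1)       # correlation with c_u
    return F.softplus(-corr).mean()              # minimizing this maximizes correlation

# One local step on a user's device (the codeword never leaves the device).
model = EmbeddingNet()
codeword = torch.randint(0, 2, (CODE_LEN,)).float() * 2 - 1   # placeholder; the paper
# instead assigns codewords from an error-correcting code to keep users separable.
opt = torch.optim.SGD(list(model.parameters()) + [W], lr=0.1)

x = torch.randn(8, 128)                          # a local batch of user data
loss = correlation_loss(model(x), codeword)
opt.zero_grad(); loss.backward(); opt.step()
# Only updates to the shared model and W are aggregated by the server.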