In text processing, most ML models are built on word embeddings. These
embeddings are themselves trained on datasets that potentially contain
sensitive data. In some cases this training is done independently; in others,
it occurs as part of training a larger, task-specific model. In either case,
it is of interest to consider membership inference attacks based on the
embedding layer as a way of understanding sensitive-information leakage.
Yet, somewhat surprisingly, membership inference attacks on word embeddings,
and their effect on other natural language processing (NLP) tasks that use
these embeddings, have remained relatively unexplored.
In this work, we show that word embeddings are vulnerable to black-box
membership inference attacks under realistic assumptions. Furthermore, we show
that this leakage persists through two other major NLP applications,
classification and text generation, even when the embedding layer is not
exposed to the attacker. We show that our MI attack achieves high attack
accuracy against both a classifier model and an LSTM-based language model.
Moreover, our attack is cheaper than prior membership inference attacks on
text-generative models: it requires neither knowledge of the target model
nor the expensive training of shadow text-generative models.
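To make the threat model concrete, the following is a minimal sketch of one generic way a black-box membership inference attack on an embedding table could work; the specific attack used in this work is not detailed here, and all names (`infer_membership`, the toy embedding table, the threshold value) are illustrative assumptions. The intuition is that words co-occurring in a training record tend to be pushed closer together in embedding space than unrelated words, so an attacker who can query embeddings can threshold on pairwise similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def infer_membership(embed, record_pairs, threshold=0.5):
    """Predict 'member' if the word pairs co-occurring in a candidate
    record are unusually similar in the embedding space (illustrative
    threshold attack, not the paper's exact method)."""
    sims = [cosine(embed[a], embed[b]) for a, b in record_pairs]
    return float(np.mean(sims)) > threshold

# Toy embedding table: words 0 and 1 were pulled toward a shared
# direction (as co-occurring words in a training record would be),
# while words 2 and 3 are independent random vectors.
dim = 16
base = rng.normal(size=dim)
embed = {
    0: base + 0.05 * rng.normal(size=dim),
    1: base + 0.05 * rng.normal(size=dim),
    2: rng.normal(size=dim),
    3: rng.normal(size=dim),
}

print(infer_membership(embed, [(0, 1)]))  # member-like pair
print(infer_membership(embed, [(2, 3)]))  # non-member-like pair
```

In this toy setup the member-like pair scores near-maximal cosine similarity while the unrelated pair does not; a real attack would calibrate the threshold on reference data rather than fix it by hand.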