Accumulative Poisoning Attacks on Real-time Data. (arXiv:2106.09993v1 [cs.LG])

Collecting training data from untrusted sources exposes machine learning
services to poisoning adversaries, who maliciously manipulate training data to
degrade the model accuracy. When models are trained on offline datasets,
poisoning adversaries must inject the poisoned data before training begins,
and the order in which these poisoned batches are fed to the model is
stochastic. In contrast, practical systems are more commonly trained or
fine-tuned on sequentially captured real-time data, in which case poisoning
adversaries can dynamically poison each data batch according to the current
model state. In this paper, we
focus on the real-time setting and propose a new attack strategy, which
couples an accumulative phase with poisoning attacks to secretly (i.e.,
without affecting model accuracy) magnify the destructive effect of a
(poisoned) trigger batch. By mimicking online learning and federated
learning on CIFAR-10, we show that model accuracy drops significantly after
a single update step on the trigger batch following the accumulative phase.
Our work demonstrates that a well-designed yet straightforward attack
strategy can dramatically amplify poisoning effects, without the need for
complex techniques.
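
To make the two-phase protocol concrete, below is a minimal PyTorch sketch
of one way such an attack loop could look. The surrogate objective
(maximizing the trigger-batch loss after a simulated one-step victim
update), the PGD-style inner loop, and all hyperparameters (lr, eps,
pgd_steps, pgd_lr) are illustrative assumptions rather than the paper's
exact method; the toy model and random data stand in for CIFAR-10.

import torch
import torch.nn.functional as F

def craft_accumulative_batch(model, x, y, x_trig, y_trig,
                             lr=0.1, eps=0.03, pgd_steps=5, pgd_lr=0.01):
    # Perturb one incoming batch so that, after the victim takes an SGD step
    # on it, an update on the trigger batch becomes more destructive. Uses a
    # one-step look-ahead (meta-gradient) through a manual SGD update; `lr`
    # should match the victim's learning rate.
    delta = torch.zeros_like(x, requires_grad=True)
    params = dict(model.named_parameters())
    for _ in range(pgd_steps):
        # Simulate the victim's update on the perturbed batch.
        loss = F.cross_entropy(model(x + delta), y)
        grads = torch.autograd.grad(loss, params.values(), create_graph=True)
        fast = {n: p - lr * g for (n, p), g in zip(params.items(), grads)}
        # Surrogate objective: a large trigger loss at the look-ahead
        # parameters means the victim's next step on the trigger batch
        # moves the model far.
        logits_trig = torch.func.functional_call(model, fast, (x_trig,))
        attack_loss = F.cross_entropy(logits_trig, y_trig)
        (g_delta,) = torch.autograd.grad(attack_loss, delta)
        with torch.no_grad():
            delta += pgd_lr * g_delta.sign()  # ascend on trigger damage
            delta.clamp_(-eps, eps)           # small perturbation keeps the
                                              # batch stealthy (accuracy intact)
    return (x + delta).detach()

def victim_step(model, x, y, lr=0.1):
    # The victim's ordinary SGD update on one batch.
    loss = F.cross_entropy(model(x), y)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad

if __name__ == "__main__":
    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(3 * 32 * 32, 10))
    # Toy stand-ins for a CIFAR-10-like real-time stream and trigger batch.
    stream = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
              for _ in range(3)]
    x_trig = torch.randn(8, 3, 32, 32)
    y_trig = torch.randint(0, 10, (8,))
    for x, y in stream:                      # accumulative phase
        x_p = craft_accumulative_batch(model, x, y, x_trig, y_trig)
        victim_step(model, x_p, y)
    victim_step(model, x_trig, y_trig)       # single trigger step

The key design point the sketch tries to capture is that the adversary
adapts each poisoned batch to the current model state, which is exactly
what the real-time setting permits and the offline setting does not.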