Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models. (arXiv:2104.02107v1 [cs.CR])

Advances in deep neural networks (DNNs) have shown tremendous promise in the
medical domain. However, the deep learning tools that are helping the domain
can also be used against it. Given the prevalence of fraud in the healthcare
domain, it is important to consider the adversarial use of DNNs in manipulating
sensitive data that is crucial to patient healthcare. In this work, we present
the design and implementation of a DNN-based image translation attack on
biomedical imagery. More specifically, we propose Jekyll, a neural style
transfer framework that takes as input a biomedical image of a patient and
translates it to a new image that indicates an attacker-chosen disease
condition. The potential for fraudulent claims based on such generated ‘fake’
medical images is significant, and we demonstrate successful attacks on both
X-rays and retinal fundus image modalities. We show that these attacks manage
to mislead both medical professionals and algorithmic detection schemes.
Lastly, we investigate defensive measures based on machine learning to
detect images generated by Jekyll.
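
The abstract does not describe Jekyll's architecture; as a rough illustration of the class of model it refers to (a conditional image-to-image translation network that maps a patient image to an attacker-chosen disease condition), a minimal PyTorch sketch follows. The class name, layer sizes, and the channel-concatenation conditioning scheme are assumptions made for illustration only and are not details taken from the paper.

    # Minimal sketch of a conditional image-to-image translation generator,
    # the general class of model the abstract describes. The small
    # encoder-decoder below, with the target-condition label broadcast as
    # extra input channels, is an illustrative assumption, not Jekyll's
    # published design.
    import torch
    import torch.nn as nn


    class ConditionalTranslator(nn.Module):
        def __init__(self, in_channels: int = 3, num_conditions: int = 2):
            super().__init__()
            self.num_conditions = num_conditions
            # Encoder: image plus one condition plane per target class.
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels + num_conditions, 64, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )
            # Decoder: upsample back to the input resolution.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, in_channels, 4, stride=2, padding=1),
                nn.Tanh(),  # outputs in [-1, 1], matching typical image normalization
            )

        def forward(self, image: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
            # Broadcast the one-hot condition vector to spatial planes and
            # concatenate it with the image, so the generator can be steered
            # toward an attacker-chosen disease label.
            b, _, h, w = image.shape
            cond_planes = condition.view(b, self.num_conditions, 1, 1).expand(
                b, self.num_conditions, h, w
            )
            x = torch.cat([image, cond_planes], dim=1)
            return self.decoder(self.encoder(x))


    if __name__ == "__main__":
        g = ConditionalTranslator(in_channels=3, num_conditions=2)
        fundus = torch.randn(1, 3, 256, 256)   # placeholder retinal fundus image
        target = torch.tensor([[0.0, 1.0]])    # attacker-chosen condition (one-hot)
        translated = g(fundus, target)
        print(translated.shape)                # torch.Size([1, 3, 256, 256])

In practice such a generator would be trained adversarially against a discriminator (and, per the abstract, evaluated against both human readers and algorithmic detectors); the sketch shows only the inference-time shape of the conditional translation step.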