Evaluating and Improving Adversarial Robustness of Machine Learning-Based Network Intrusion Detectors. (arXiv:2005.07519v4 [cs.CR] UPDATED)

Machine learning (ML), and especially deep learning (DL), techniques have been
increasingly used in anomaly-based network intrusion detection systems (NIDSs).
However, ML/DL has been shown to be extremely vulnerable to adversarial
attacks, which is particularly concerning in such security-sensitive systems.
Many adversarial attacks have been proposed to evaluate the robustness of
ML-based NIDSs. Unfortunately, existing attacks mostly focus on the feature
space and/or assume white-box access, both of which are impractical in
real-world scenarios, leaving practical gray/black-box attacks largely
unexplored.

To bridge this gap, we conduct the first systematic study of gray/black-box
traffic-space adversarial attacks for evaluating the robustness of ML-based
NIDSs. Our work improves on previous studies in the following aspects:
(i) practical: the proposed attack can automatically mutate original traffic
with extremely limited knowledge and affordable overhead while preserving its
functionality; (ii) generic: the proposed attack is effective for evaluating
the robustness of various NIDSs built on diverse ML/DL models and
non-payload-based features; (iii) explainable: we propose an explanation
method for the fragile robustness of ML-based NIDSs. Building on this
explanation, we also propose a defense scheme against adversarial attacks to
improve system robustness. We extensively evaluate the robustness of various
NIDSs using diverse feature sets and ML/DL models. Experimental results show
that our attack is effective (e.g., >97% evasion rate in half of the cases
against Kitsune, a state-of-the-art NIDS) with affordable execution cost, and
that the proposed defense method can effectively mitigate such attacks
(evasion rate reduced by >50% in most cases).