Constraining Adversarial Attacks On Network Intrusion Detection Systems: Transferability and Defense Analysis

Nour Alhussien, Ahmed Aleroud, Abdullah Melhem, Samer Y. Khamaiseh

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Adversarial attacks have been extensively studied in the domain of deep image classification, but their impact on other domains, such as machine learning- and deep learning-based Network Intrusion Detection Systems (NIDSs), has received limited attention. While generating adversarial examples against images is comparatively straightforward because the input domain imposes few constraints, generating them in the network domain is harder due to the diverse types of network traffic and the need to preserve its validity. Prior research has introduced constraints for generating adversarial examples against NIDSs, but their effectiveness across different attack settings, including transferability, targetability, defenses, and overall attack success, has not been thoroughly examined. In this paper, we propose a novel set of domain constraints for network traffic that preserve the statistical and semantic relationships between traffic features while ensuring the validity of the perturbed adversarial traffic. Our constraints fall into four categories: feature mutability constraints, feature value constraints, feature dependency constraints, and distribution-preserving constraints. We evaluated the impact of these constraints on white-box and black-box attacks using two intrusion detection datasets. Our results demonstrate that the introduced constraints have a significant impact on the success of white-box attacks. We also found that the transferability of adversarial examples depends on the similarity between the targeted models and the models to which the examples are transferred, regardless of the attack type or the presence of constraints. Finally, adversarial training enhanced the robustness of most machine learning- and deep learning-based NIDSs against unconstrained attacks while providing some resilience against constrained attacks. In practice, this suggests that pre-existing signatures of constrained attacks could be used to combat new variations or zero-day adversarial attacks in real-world NIDSs.
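The abstract does not include implementation details, but the four constraint categories can be illustrated with a minimal sketch of a post-perturbation projection step. Everything below is an assumption made for illustration: the function name, the feature indices, the 2-sigma distribution bound, and the bytes/packets dependency are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch: projecting a perturbed NIDS feature vector back into the
# valid traffic domain. Feature names, indices, and thresholds are invented for
# illustration and do not reflect the paper's actual constraint formulation.
import numpy as np

def enforce_constraints(x_orig, x_adv, mutable_mask, bounds, feature_std, max_sigma=2.0):
    """Apply the four constraint categories to an adversarial traffic record.

    x_orig       -- original (unperturbed) feature vector
    x_adv        -- perturbed feature vector produced by an attack (e.g., FGSM/PGD)
    mutable_mask -- boolean array; False marks features an attacker cannot alter
    bounds       -- (low, high) arrays of per-feature valid ranges
    feature_std  -- per-feature standard deviation estimated from benign traffic
    max_sigma    -- cap on perturbation size, in units of feature_std
    """
    x = x_adv.copy()

    # 1. Feature mutability constraints: revert features the attacker cannot
    #    realistically change (e.g., protocol flags).
    x[~mutable_mask] = x_orig[~mutable_mask]

    # 2. Feature value constraints: clip every feature to its valid range.
    low, high = bounds
    x = np.clip(x, low, high)

    # 3. Distribution-preserving constraints: bound the perturbation relative to
    #    the benign traffic distribution so the record stays statistically plausible.
    delta = np.clip(x - x_orig, -max_sigma * feature_std, max_sigma * feature_std)
    x = x_orig + delta

    # 4. Feature dependency constraints: recompute derived features so semantic
    #    relationships hold (hypothetical indices: 0 = total bytes, 1 = packet
    #    count, 2 = bytes per packet).
    x[2] = x[0] / max(x[1], 1.0)

    return x
```

In a typical attack pipeline, such a projection would be applied after each perturbation step (for example, after every PGD iteration), so the final adversarial record remains valid network traffic; the paper's actual constraint enforcement may differ.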

Original language: English (US)
Pages (from-to): 1
Number of pages: 1
Journal: IEEE Transactions on Network and Service Management
DOIs
State: Accepted/In press - 2024

Keywords

  • Adversarial attacks
  • Analytical models
  • Data models
  • Glass box
  • Perturbation methods
  • Robustness
  • Telecommunication traffic
  • Training
  • artificial intelligence
  • computer networks
  • network intrusion detection systems
  • neural networks

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Electrical and Electronic Engineering
