TY - GEN
T1 - Triggerability of Backdoor Attacks in Multi-Source Transfer Learning-based Intrusion Detection
AU - Alhussien, Nour
AU - Aleroud, Ahmed
AU - Rahaeimehr, Reza
AU - Schwarzmann, Alexander
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Network-based Intrusion Detection Systems (NIDSs) automate the monitoring of network events and analyze them for signatures of cyberattacks. With the advancement of machine learning algorithms, more organizations have started using machine learning-based IDSs (ML-IDSs) to identify and mitigate cyberattacks. However, the lack of training datasets is a major challenge when implementing ML-IDSs. Using training data from external sources or applying transfer learning models are possible solutions to this challenge. However, training data from external sources introduces the risk of backdoored datasets, especially when adversaries also have background knowledge of data sources inside the target organization. This work investigates the role of backdoor attacks on intrusion detection techniques trained using multi-source data. Backdoor examples are injected into one or more training data sources. Transfer learning models are then created by projecting data from different sources into a new subspace containing all source data. The backdoor is then triggered in the target data. An anomaly-based intrusion detection classifier is applied to examine the effectiveness of the introduced backdoors. The results show that backdoor attacks on multi-source transfer learning models are feasible, although they have less impact than backdoors on traditional machine learning models.
AB - Network-based Intrusion Detection Systems (NIDSs) automate the monitoring of network events and analyze them for signatures of cyberattacks. With the advancement of machine learning algorithms, more organizations have started using machine learning-based IDSs (ML-IDSs) to identify and mitigate cyberattacks. However, the lack of training datasets is a major challenge when implementing ML-IDSs. Using training data from external sources or applying transfer learning models are possible solutions to this challenge. However, training data from external sources introduces the risk of backdoored datasets, especially when adversaries also have background knowledge of data sources inside the target organization. This work investigates the role of backdoor attacks on intrusion detection techniques trained using multi-source data. Backdoor examples are injected into one or more training data sources. Transfer learning models are then created by projecting data from different sources into a new subspace containing all source data. The backdoor is then triggered in the target data. An anomaly-based intrusion detection classifier is applied to examine the effectiveness of the introduced backdoors. The results show that backdoor attacks on multi-source transfer learning models are feasible, although they have less impact than backdoors on traditional machine learning models.
KW - Intrusion detection
KW - backdoor attacks
KW - distributed systems
KW - poisoning attacks
KW - transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85150678568&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85150678568&partnerID=8YFLogxK
U2 - 10.1109/BDCAT56447.2022.00013
DO - 10.1109/BDCAT56447.2022.00013
M3 - Conference contribution
AN - SCOPUS:85150678568
T3 - Proceedings - 2022 IEEE/ACM 9th International Conference on Big Data Computing, Applications and Technologies, BDCAT 2022
SP - 40
EP - 47
BT - Proceedings - 2022 IEEE/ACM 9th International Conference on Big Data Computing, Applications and Technologies, BDCAT 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies, BDCAT 2022
Y2 - 6 December 2022 through 9 December 2022
ER -