The Effects of Autoencoders on the Robustness of Deep Learning Models (Saldırı Tespit Sistemlerindeki Otokodlayıcıların Derin Öğrenme Modellerinin Güçlülüğüne Etkisi)


DEĞİRMENCİ E., ÖZÇELİK İ., YAZICI A.

30th Signal Processing and Communications Applications Conference (SIU 2022), Safranbolu, Türkiye, 15-18 May 2022

  • Publication Type: Conference Paper / Full Text Paper
  • DOI: 10.1109/siu55565.2022.9864975
  • Published City: Safranbolu
  • Published Country: Türkiye
  • Keywords: adversarial attack, deep learning, intrusion detection systems
  • Eskişehir Osmangazi University Affiliated: Yes

Abstract

© 2022 IEEE. Adversarial attacks aim to deceive the target system, and deep learning methods have recently become a target of such attacks: even small perturbations can lead to classification errors in deep learning models. In an intrusion detection system that uses deep learning, an adversarial attack can therefore cause classification errors, allowing malicious traffic to be classified as benign. In this study, the effects of adversarial attacks on the accuracy of deep learning-based intrusion detection systems were examined, using the CICIDS2017 dataset to test the detection systems. First, DDoS attacks were detected using the Autoencoder, MLP, AEMLP, DNN, AEDNN, CNN, and AECNN methods. Then, the Fast Gradient Sign Method (FGSM) was used to craft the adversarial attacks. Finally, the sensitivity of each method to the adversarial attacks was examined. Our results showed that the classification performance of the deep learning-based detection methods decreased by up to 17% after the adversarial attacks. These results form a basis for verification and validation studies of learning-based intrusion detection systems.
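
The attack used in the study, FGSM, perturbs each input along the sign of the loss gradient: x_adv = x + ε · sign(∇x J(θ, x, y)). Below is a minimal PyTorch sketch of this step; the model architecture, the 78-feature input width, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal FGSM sketch (illustrative; not the paper's code).
# FGSM step: x_adv = x + epsilon * sign(grad_x J(theta, x, y))
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.05) -> torch.Tensor:
    """Return adversarial examples crafted from (x, y) with FGSM."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then detach.
    return (x + epsilon * x.grad.sign()).detach()

if __name__ == "__main__":
    # Hypothetical MLP over 78 flow features (a common CICIDS2017
    # feature count), binary output: benign vs. DDoS.
    model = nn.Sequential(nn.Linear(78, 64), nn.ReLU(), nn.Linear(64, 2))
    x = torch.rand(16, 78)               # stand-in for scaled flow features
    y = torch.randint(0, 2, (16,))       # stand-in labels
    x_adv = fgsm_attack(model, x, y)
    # Per-feature perturbation is bounded by epsilon.
    print((x_adv - x.detach()).abs().max())
```

Evaluating a trained detector's accuracy on such x_adv batches, versus on clean x, is the kind of sensitivity comparison the abstract describes.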