Effects of Untargeted Adversarial Attacks on Deep Learning Methods


DEĞİRMENCİ E., ÖZÇELİK İ., YAZICI A.

15th International Conference on Information Security and Cryptography, ISCTURKEY 2022, Ankara, Türkiye, 19 - 20 October 2022, pp. 8-12

  • Publication Type: Conference Paper / Full Text Conference Paper
  • DOI Number: 10.1109/iscturkey56345.2022.9931786
  • City of Publication: Ankara
  • Country of Publication: Türkiye
  • Page Numbers: pp. 8-12
  • Keywords: adversarial attack, deep learning, gradient based attack, intrusion detection
  • Affiliated with Eskişehir Osmangazi University: Yes

Abstract

© 2022 IEEE. The increasing connectivity of smart systems opens up new security vulnerabilities. Since smart systems are used in sectors such as healthcare, smart cities, and intelligent industry, security becomes a fundamental concern. For this reason, security measures such as authorization, authentication, encryption, Intrusion Prevention Systems (IPS), and Intrusion Detection Systems (IDS) are often added to these systems. However, they remain vulnerable to attacks such as DoS and DDoS. Recently, deep learning methods have been used to detect such attacks as early as possible. These methods, however, have their own vulnerabilities to adversarial attacks. This paper analyzes the effects of three types of adversarial attacks on deep learning models, using the open dataset CICIDS2017. First, DDoS attacks are detected using deep learning methods. Then, adversarial examples are created using adversarial attack methods. Three untargeted adversarial attacks are used in this study: FGSM, PGD, and BIM. The results show that all three attacks are effective against the deep learning models, with PGD and BIM being more effective than FGSM. In addition, deep learning model training is evaluated with K-fold cross-validation. The results show that the models obtained in different folds can achieve different accuracy results against adversarial attacks. Adversarial attacks may therefore be used within a K-fold cross-validation process as a criterion for best model selection.
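For illustration, the following is a minimal sketch (not the authors' implementation) of the untargeted, gradient-based attacks named in the abstract, written in PyTorch. The model, input tensors, and the epsilon/step-size values are illustrative assumptions; the paper itself works with CICIDS2017 flow features and its own deep learning models.

```python
# Hypothetical sketch of untargeted gradient-based attacks (FGSM and a
# BIM/PGD-style iterative variant); parameter values are illustrative.
import torch
import torch.nn as nn

def fgsm_untargeted(model, x, y, epsilon=0.05):
    """Single-step FGSM: perturb x to increase the loss of its true label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step once in the direction of the sign of the loss gradient.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def bim_untargeted(model, x, y, epsilon=0.05, alpha=0.01, steps=10):
    """Iterative variant: repeated small FGSM steps, kept inside an
    epsilon-ball around the original input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = torch.clamp(x_adv, x - epsilon, x + epsilon).detach()
    return x_adv
```

In a workflow like the one described, adversarial examples produced this way could be fed back to the model trained in each cross-validation fold, and the per-fold accuracy under attack used as an additional signal when selecting the final model.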