Vol. 18 No. 1 (2022)

Published: June 30, 2022

Pages: 1-8

Original Article

Face Recognition System Against Adversarial Attack Using Convolutional Neural Network

Abstract

Face recognition is a technology that verifies or identifies faces in images, videos, or real-time streams, and is used in applications such as security and employee attendance systems. Face recognition systems, however, are vulnerable to attacks that degrade their ability to recognize faces correctly: subtly perturbed images mixed in with original ones confuse the model's predictions. Attacks that exploit this weakness include the Fast Gradient Sign Method (FGSM), DeepFool, and Projected Gradient Descent (PGD). This paper proposes a method to protect a face recognition system against these attacks: images are first distorted with each attack, and the recognition model, a Convolutional Neural Network (CNN), is then trained on both the original and the distorted images. Diverse experiments combining original and distorted images were conducted to test the effectiveness of the system. The system achieved an accuracy of 93% under the FGSM attack, 97% under DeepFool, and 95% under PGD.
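To make the training scheme in the abstract concrete, the following is a minimal sketch of how the attack generation and adversarial training might look. It assumes PyTorch and a generic image classifier; the epsilon values, step sizes, loss weighting, and function names are illustrative assumptions rather than the paper's actual settings, and DeepFool would slot into the same pattern of producing distorted images before training.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=0.03):
    # FGSM: a single gradient-sign step of size eps (eps is an assumed value).
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()

def pgd_attack(model, images, labels, eps=0.03, alpha=0.007, steps=10):
    # PGD: iterated gradient-sign steps, each projected back into the eps-ball.
    orig = images.clone().detach()
    adv = orig.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = orig + (adv - orig).clamp(-eps, eps)  # projection step
        adv = adv.clamp(0, 1)
    return adv.detach()

def adversarial_training_step(model, optimizer, images, labels):
    # Train on a mix of clean and attacked images, as the abstract describes;
    # the 50/50 loss weighting here is an assumption.
    adv = pgd_attack(model, images, labels)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(images), labels) +
                  F.cross_entropy(model(adv), labels))
    loss.backward()
    optimizer.step()
    return loss.item()

Called once per batch in each training epoch, adversarial_training_step exposes the CNN to both the clean and the attacked version of every face image, which is the mixing of original and distorted images that the proposed defense relies on.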

References

  1. M. T. Rashid, "Modeling of self-organization fish school system by neural network system," Basrah Journal for Engineering Science, vol. 15, no. 1, pp. 14-19, 2015.
  2. I. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
  3. R. Huang, B. Xu, D. Schuurmans, and C. Szepesvári, "Learning with a strong adversary," CoRR, 2015.
  4. H. Kannan, A. Kurakin, and I. Goodfellow, "Adversarial logit pairing," CoRR, vol. abs/1803.06373, 2018.
  5. A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial machine learning at scale," arXiv preprint arXiv:1611.01236, 2016.
  6. A. Bharati, R. Singh, M. Vatsa, and K. W. Bowyer, "Detecting facial retouching using supervised deep learning," IEEE Transactions on Information Forensics and Security, vol. 11, no. 9, pp. 1903-1913, 2016.
  7. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
  8. A. Agarwal, R. Singh, M. Vatsa, and N. Ratha, "Are image-agnostic universal adversarial perturbations for face recognition difficult to detect?," in 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1-7, 2018.
  9. Y. Liu, S. Mao, X. Mei, T. Yang, and X. Zhao, "Sensitivity of adversarial perturbation in fast gradient sign method," in 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 433-436, 2019.
  10. S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "DeepFool: a simple and accurate method to fool deep neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574-2582, 2016.
  11. J. Xue, Y. Yang, and D. Jing, "Deceiving face recognition neural network with samples generated by DeepFool," Journal of Physics: Conference Series, vol. 1302, no. 2, 2019.
  12. A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, "Towards deep learning models resistant to adversarial attacks," arXiv preprint arXiv:1706.06083, 2017.
  13. A. Goel, A. Agarwal, M. Vatsa, R. Singh, and N. K. Ratha, "DNDNet: Reconfiguring CNN for adversarial robustness," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 22-23, 2020.
  14. N. Carlini and D. Wagner, "Towards evaluating the robustness of neural networks," in 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57, 2017.
  15. N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, "Distillation as a defense to adversarial perturbations against deep neural networks," in 2016 IEEE Symposium on Security and Privacy (SP), pp. 582-597, 2016.
  16. D. Deb, J. Zhang, and A. K. Jain, "AdvFaces: Adversarial face synthesis," in 2020 IEEE International Joint Conference on Biometrics (IJCB), pp. 1-10, 2020.
  17. M. Cisse, P. Bojanowski, E. Grave, Y. Dauphin, and N. Usunier, "Parseval networks: Improving robustness to adversarial examples," in International Conference on Machine Learning, pp. 854-863, 2017.
  18. A. Dubey, L. van der Maaten, Z. Yalniz, Y. Li, and D. Mahajan, "Defense against adversarial images using web-scale nearest-neighbor search," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8767-8776, 2019.
  19. G. B. Huang, M. Mattar, T. Berg, and E. Learned-Miller, "Labeled faces in the wild: A database for studying face recognition in unconstrained environments," in Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, 2008.