
                         VI. CONCLUSION

      To distinguish between original and adversarial images, a CNN
algorithm was used. Ten classes were defined for ten people, one class
per person, and each person has many captured images taken in different
positions. Each class was divided into training and test sets, so every
person has two folders, one for training images and one for test images.
The number of training images should be larger than the number of test
images to obtain reliable results. In the first method, the CNN was
trained on the original images, and the recognition rate between the
training and test images was 95%.
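      The training and evaluation step described above can be outlined
with the following sketch. The folder names, image size, and layer
choices here are illustrative assumptions rather than the exact
configuration used in this work; only the ten-class train/test folder
organisation and the reported 95% recognition rate come from the text.

    # Sketch of the ten-class training/evaluation step, assuming the images
    # are organised as dataset/train/<person>/ and dataset/test/<person>/
    # folders; paths, image size, and layers are illustrative assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers

    IMG_SIZE = (128, 128)

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "dataset/train", image_size=IMG_SIZE, batch_size=32)
    test_ds = tf.keras.utils.image_dataset_from_directory(
        "dataset/test", image_size=IMG_SIZE, batch_size=32)

    model = tf.keras.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),  # ten classes, one per person
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(train_ds, epochs=20)
    _, test_acc = model.evaluate(test_ds)  # the paper reports 95% for this step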
      For future work, the number of attacks used to distort images can
be increased, for example with the One Pixel attack, the Carlini &
Wagner (C&W) attack, the Visible Light-based Attack (VLA), the AdvHat
attack, and the Face Friend-safe attack. The number of classes can also
be increased with new images.
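      As an illustration of one of these attacks, the following sketch
outlines the idea behind the One Pixel attack, which searches for a
single pixel whose modification lowers the classifier's confidence in
the true class. The model interface, parameter values, and function
names are assumptions made for illustration and are not part of this
work.

    # Sketch of a One Pixel attack, assuming a trained Keras classifier
    # `model` over float images in [0, 1] of shape (H, W, 3); names and
    # parameters are illustrative, not the paper's implementation.
    import numpy as np
    from scipy.optimize import differential_evolution

    def one_pixel_attack(model, image, true_label, max_iter=75, pop_size=40):
        """Search for a single-pixel change that lowers confidence in true_label."""
        h, w, _ = image.shape

        def perturb(z):
            # z = (x, y, r, g, b): pixel position and replacement colour
            x, y = int(z[0]), int(z[1])
            adv = image.copy()
            adv[y, x] = z[2:5]
            return adv

        def objective(z):
            # Confidence assigned to the true class; lower is better for the attack.
            probs = model.predict(perturb(z)[np.newaxis], verbose=0)[0]
            return float(probs[true_label])

        bounds = [(0, w - 1), (0, h - 1), (0, 1), (0, 1), (0, 1)]
        result = differential_evolution(objective, bounds, maxiter=max_iter,
                                        popsize=pop_size, seed=0)
        return perturb(result.x)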
                      CONFLICT OF INTEREST

     The authors have no conflict of interest relevant to this article.