
                                                TABLE 1
                                  SUMMARY OF THE RELATED WORKS

  Authors                            FGSM   PGD   Deep Fool   Method
  (Deb, Zhang, et al., 2019)          ✓      ✓                AdvFaces, an automated adversarial face synthesis method that learns to generate minimal perturbations in the salient facial regions via a GAN.
  (Dubey, Abhimanyu, et al., 2019)           ✓                Adversarial perturbations move the image away from the image manifold; the aim is to project the image back onto the manifold before classification.
  (Aleksander, Madry et al., 2017)    ✓      ✓                The adversarial robustness of neural networks is studied through the lens of robust optimization.
  (Xue, Jingsong et al., 2019)        ✓             ✓         A method for deceiving face recognition neural networks based on the Deep Fool algorithm is proposed.
  (Gael, Agarwal, et al., 2020)       ✓      ✓      ✓         The use of a filter to generate noise in a network-agnostic manner is shown.
  Our Method                          ✓      ✓      ✓         The work focuses on recognizing faces even when they are distorted, by regenerating the dataset: distorted images produced with three attacks (FGSM, Deep Fool, and PGD) are added to it.

     Goodfellow and colleagues [9] proposed the FGSM attack. The technique uses the gradient of the loss function with respect to the input to modify the input data in the direction that maximizes the loss. In this way, an adversarial example is an instance in which tiny, deliberate feature perturbations lead a machine-learning model to produce an incorrect prediction. Machine-learning algorithms accept numeric vectors as inputs, and an adversarial attack is the deliberate design of such an input so that it causes the model to give the incorrect output. Harnessing this sensitivity in order to alter an algorithm's behavior is a significant issue in the field of Artificial Intelligence (AI) security. The evasion attack requires less control over the disturbance than before; as a result, the original image is no longer recognizable due to the disturbance, and there is still a possibility that the attacker will not be identified as the subject of the victim images. As the number of iterations increases, so does the number of recognitions, which is helpful for impersonation attacks but not for dodging attacks. Face classification is the classification problem that deep learning is trying to solve here, and the FGSM technique is suggested as an efficient way of generating adversarial samples that deceive and mislead the classifier. Table 2 illustrates three different granularities of perturbation: ε = 0.001, 0.01, and 0.1.

                                                TABLE 2
              FGSM ATTACK WITH VARIOUS VALUES OF GRANULARITY OF PERTURBATIONS

                         Dodging                             Impersonation
  Step      ε = 0.001   ε = 0.01   ε = 0.1     ε = 0.001   ε = 0.01   ε = 0.1
    1        81.62%      11.54%     1.74%       97.37%      28.95%     85.1%
    5        83.40%      55.38%     46.29%      98.72%      50.42%     69.7%
   10        88.43%      49.45%     44.26%      99.22%      57.21%     41.0%
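Because the FGSM update is described above only in prose, the following minimal PyTorch sketch illustrates the single-step rule x_adv = x + ε · sign(∇x L). This is an illustrative sketch rather than the authors' implementation; the names fgsm_attack, model, image, label, and epsilon are assumed placeholders, and the image tensor is assumed to be batched and scaled to [0, 1].

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.01):
        # Single-step FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y)).
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)     # classification loss on the clean input
        model.zero_grad()
        loss.backward()                                 # gradient of the loss w.r.t. the input pixels
        perturbation = epsilon * image.grad.sign()      # epsilon sets the perturbation granularity
        return torch.clamp(image + perturbation, 0.0, 1.0).detach()

Larger values of ε produce stronger but more visible perturbations; Table 2 reports results for ε = 0.001, 0.01, and 0.1.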
  B. Deep Fool attack

     Among adversarial attack techniques, Deep Fool is a well-known method that has been applied to images in a variety of settings. It was the first method to identify, for a given sample, the minimal perturbation and the model decision boundary that corresponds to it. This allows the perturbations of a deep classifier to be computed on large-scale data sets that include adversarial cases, which is very useful in machine learning [10].
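The iterative search described above can be sketched as follows. This is a simplified illustration of the Deep Fool idea rather than the authors' code; the names deepfool_attack, model, image, num_classes, max_iter, and overshoot are assumed placeholders, and the input is assumed to be a single batched image. At each step, the closest linearized decision boundary is located and a minimal, slightly overshot step is taken across it.

    import torch

    def deepfool_attack(model, image, num_classes=10, max_iter=50, overshoot=0.02):
        # Iteratively push the input across the closest (linearized) decision boundary.
        x_adv = image.clone().detach().requires_grad_(True)
        orig_label = model(image).argmax(dim=1).item()      # class predicted for the clean image

        for _ in range(max_iter):
            logits = model(x_adv)[0]                        # logits for the single image
            if logits.argmax().item() != orig_label:        # stop once the prediction flips
                break
            grad_orig = torch.autograd.grad(logits[orig_label], x_adv, retain_graph=True)[0]

            best_dist, best_w = None, None
            for k in range(num_classes):
                if k == orig_label:
                    continue
                grad_k = torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
                w_k = grad_k - grad_orig                          # normal of the boundary between class k and the original class
                f_k = (logits[k] - logits[orig_label]).item()     # signed logit gap to class k
                dist = abs(f_k) / (w_k.norm().item() + 1e-8)      # linearized distance to that boundary
                if best_dist is None or dist < best_dist:
                    best_dist, best_w = dist, w_k

            r_i = (best_dist + 1e-4) * best_w / (best_w.norm() + 1e-8)     # minimal step to the closest boundary
            x_adv = (x_adv + (1 + overshoot) * r_i).detach().requires_grad_(True)

        return x_adv.detach()

Because each step is proportional to the distance to the nearest boundary, the resulting perturbation is typically much smaller than an FGSM perturbation that causes the same misclassification.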
     In a recent study, researchers showed that deep classification networks are not robust when the data are subjected to hostile modifications. Although deep neural networks have shown remarkable performance in classification tasks, these attacks revealed many weaknesses in such systems. As a result, these algorithms have the potential to improve their results by filtering out noise in the images. In other words, machine-learning models can be driven to particular misclassifications by certain kinds of sample data: while deep networks have been shown to be very successful at classification tasks, they are often misled by small, almost undetectable changes in the data. Adversarial cases are thus sufficient to reveal the blind spots of deep learning models. Additionally, since machine-learning algorithms accept numeric vectors as inputs, the Deep Fool technique computes perturbations that fool deep networks in a short time, allowing their resilience to be quantified. After a few iterations, the researchers found that Deep Fool converges to a perturbation vector that deceives the classifier.