
classifier, thereby fooling it (i.e., fewer than three). The
perturbation vector is also more accurate than that of other
current models, which is another advantage. In contrast, the Fast
Gradient Sign Method generates a perturbed image with a larger
norm, whereas this method generates minimal adversarial
perturbations. Because of this, DeepFool is recommended for
creating adversarial samples that are capable of deceiving
current-generation classifiers.

Fig. 1: The structure of the proposed system.
Fig. 2: Part of the image obtained by cropping.
Fig. 3: Cropping and resizing of images with change of size.
Fig. 4: A sample of some images after processing.

  C. Projected Gradient Descent (PGD) attack
     PGD (Projected Gradient Descent) is a white-box attack,
which implies that the attacker has direct access to the model
gradients during the attack; in other words, the attacker holds
a copy of the model's weights. The PGD attack is almost
identical to the BIM (Basic Iterative Method) and I-FGSM
(Iterative Fast Gradient Sign Method) attacks. BIM executes
FGSM with a reduced step size and restricts the updated
adversarial sample to an allowed range over T iterations. PGD,
on the other hand, initializes the example at a random location
inside the ball of interest (defined by the Lp norm, centered
on the original input) and then performs random restarts,
whereas BIM initializes the example at the original starting
point.
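
      The PGD update can be sketched as follows. This is a
minimal illustrative sketch, not the code used in this work; it
assumes a PyTorch classifier "model", an input batch "x" with
pixel values in [0, 1], integer labels "y", an L-infinity ball
of radius eps, and a per-step size alpha.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Minimal L-infinity PGD sketch: random start inside the
    eps-ball, then iterative signed-gradient steps with projection."""
    # Random initialization inside the eps-ball; this random start
    # (optionally repeated as several random restarts) is what
    # distinguishes PGD from BIM, which starts from the original image.
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # One FGSM-like step with the small step size alpha.
        x_adv = x_adv.detach() + alpha * grad.sign()

        # Project back onto the eps-ball around the clean image,
        # then clip to the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)

    return x_adv.detach()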

      Consequently, the Fast Gradient Sign Method demonstrates
its efficacy by creating an adversarial example using the
neural network's gradients. A source image is utilized to
generate a new image that is as similar to the original as
feasible [12] while maximizing the model's loss.
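
      For comparison, the single-step FGSM perturbation can be
sketched as below, under the same PyTorch assumptions ("model",
"x" in [0, 1], labels "y"); this is an illustrative sketch, not
the authors' implementation.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    """Single-step FGSM: move each pixel by eps in the direction of
    the sign of the loss gradient, so the result stays close to the
    source image while the classification loss increases."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()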

                  IV. THE PROPOSED METHOD

    The proposed system is capable of recognizing faces even if
there are adversarial images in the databases. It distorts
images using three kinds of attacks: FGSM, DeepFool, and PGD.
The system is composed of four steps: pre-processing,
generation of adversarial images, feature extraction, and
building the classifier. Figure 1 shows the diagram of the
proposed system.

  A. Pre-processing

    Pre-processing is a crucial step that precedes any
recognition system, and it can affect the accuracy of the
system tremendously. The variety in the size and distance of
faces in images may lead to poor recognition. The LFW database
contains a class for each person, with images taken in
different situations. The original image size before
pre-processing was (250 x 250) pixels. In this work,
pre-processing includes face detection, cropping, and resizing.
As indicated in Figure 2, the image is edited by cropping it to
the face region.

      Before training, only the faces are cropped from the
images. The image size before cropping is (250 x 250) pixels,
whereas after cropping the size of the images becomes
(146 x 146) pixels. Figure 3 shows the process of cropping and
resizing the images so that all images have the same size.
Figure 4 shows images after processing.
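
      This pre-processing step can be sketched as follows. The
text does not name a specific face detector, so the OpenCV
Haar-cascade detector used here is an assumption made for
illustration; the (146 x 146) output size follows the cropped
size mentioned above.

import cv2

# Haar-cascade face detector shipped with OpenCV; an assumed
# detector, since the paper does not specify which one is used.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_face(path, out_size=(146, 146)):
    """Detect the face in an LFW image (250 x 250 pixels), crop it,
    and resize the crop so that all training images are equal."""
    img = cv2.imread(path)                       # BGR image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                      minNeighbors=5)
    if len(faces) == 0:
        return None                              # no face found
    x, y, w, h = faces[0]                        # first detection
    crop = img[y:y + h, x:x + w]
    return cv2.resize(crop, out_size)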