to add some improvements to the faces to recognize them. Some problems related to attacking the FR system were noticed, such as generating adversarial perturbations in images using different attacks such as FGSM, Deep Fool, and PGD [8]. To eliminate these distortions, an algorithm suitable for this purpose is proposed, one that also achieves high recognition accuracy.

The contributions of this paper focus on recognizing faces even when they are distorted, by using dataset regeneration. This is done by adding images distorted with three attacks: FGSM [9], Deep Fool [10, 11], and PGD [12]. The proposed system is therefore robust against these three attacks, and the FR system can successfully cope with difficult situations and environmental problems.
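As an illustration of the dataset regeneration step, the following is a minimal sketch, assuming a PyTorch classifier and a standard data loader; the function name regenerate_dataset and the attack callables are hypothetical and do not reproduce the authors' exact pipeline. Each attack is any function that maps a clean image batch to a distorted copy, for example a wrapper around an FGSM, Deep Fool, or PGD implementation.

import torch

def regenerate_dataset(model, loader, attacks):
    # Rebuild the training set as the clean images plus one adversarially
    # distorted copy of every batch per attack, keeping the identity labels.
    images, labels = [], []
    for x, y in loader:
        images.append(x)              # keep the original (clean) images
        labels.append(y)
        for attack in attacks:        # add a distorted copy for every attack
            images.append(attack(model, x, y))
            labels.append(y)          # the identity label stays the same
    return torch.cat(images), torch.cat(labels)

Training the recognizer on such a regenerated set exposes it during training to the same kinds of distortions that the attacks produce at test time, which is what gives the system its robustness.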
The remainder of this work is structured in the following manner. The review of the literature is presented in Section II. Section III provides an overview of the adversarial attacks. Section IV discusses the suggested techniques. The face database used in this study is described in Section V. Section VI presents the experimental results in more detail. Section VII provides an explanation of the results and their implications.
II. LITERATURE REVIEW

FR technology has been used to enhance results by applying the CNN algorithm to study people's faces. It is widely used on smartphones and in other forms of technology, such as robotics. The algorithm relies on mathematical results suited to this purpose, but its results can still be enhanced. Therefore, many methods are used to increase accuracy, such as FGSM with the MNIST and CIFAR-10 databases.

Gael, Agarwal, et al. [13] showed the use of a filter to generate noise in a network-agnostic manner. They suggested a defense layer that helps to protect against adversarial attacks such as FGSM. Three databases (MNIST, CIFAR-10, and PaSC) were used to obtain high results, and efficiency is improved by this defense layer without additional mathematical work.
Carlini, Nicholas, et al. [14] studied how image classification can be fooled by changing small parts of the input. Neural networks perform machine-learning tasks well, yet an input x can be turned into an adversarial example, which makes neural networks difficult to use, especially in security fields. Distance is considered a very important measure for judging such attacks. Defensive distillation was therefore proposed to increase the robustness of the network. Another proposal was to use symbols.

Papernot, Nicolas, et al. [15] addressed noise attacks on neural networks with a structure that contains two layers. Their layers are useful for removing (decreasing) noise from images. The distance between images is also necessary for FR technology to recognize faces clearly; hence, this distance should be kept small so that the face can be recognized clearly.
Szegedy, Christian, et al. [7] suggested that attack examples can be generated with box-constrained L-BFGS. This was done by using the L2 distance: finding a different image x′ that is similar to the original image x under the L2 distance.

Deb, Zhang, et al. [16] proposed adding noise to some areas (sets of pixels) of the face. They showed the dangers of adversarial examples in image classification; hence, a CNN might classify images wrongly when the pixels contain noise.

Cisse, Moustapha, et al. [17] observed that on many tasks, particularly in perception, the accuracy of neural networks is comparable to that of humans, but their robustness to changes in the inputs is limited during testing. They suggested that changes to the network architecture allow attack examples to be moved from one network to another. However, a transferable attack example creates a security threat to production systems and also reveals the lack of robustness of neural networks.
Dubey, Abhimanyu, et al. [18] surveyed several adversarial attacks that followed the first discovery of adversarial examples. Perturbing an image with small oscillations bounded by the L2 or L∞ norm changes the predictions of the model. PGD is related to the gradient-sign method and is considered a strong attack, and resistance to adversarial attacks can be increased by using defensive distillation.

In the context of making deep learning models resistant to adversarial attacks, Madry, Aleksander, et al. [12] showed that there are weaknesses in deep learning that facilitate such attacks. They evaluated loss values on the MNIST and CIFAR-10 databases, with the loss developed over 20 runs of PGD.

Xue, Jingsong, et al. [11] proposed a method for deceiving face recognition neural networks based on the Deep Fool algorithm, with FaceNet used to generate the adversarial samples. Table 1 shows the comparison of the related works.
III. BACKGROUND

This section explores potential adversarial attacks on facial recognition systems. Face recognition systems are vulnerable to a variety of attacks. Three types of attacks are considered in this paper: the Fast Gradient Sign Method (FGSM), Deep Fool, and Projected Gradient Descent (PGD). There are several other types of attacks, but these three were used in this paper because they are the most widely used and best-known attacks.

A. Fast Gradient Sign Method (FGSM) attack

FGSM is called the Fast Gradient Sign Method because it computes the gradients of a loss function (for example, mean-squared error or categorical cross-entropy) with respect to the input image and then uses the sign of the gradients to create a new image (the adversarial image) that maximizes the loss. Taking the sign of the gradient produces noise with the same structure as the gradient, and its magnitude is scaled by the epsilon constant; epsilon is typically kept small so that the perturbation remains subtle. FGSM is also a white-box attack, meaning that it is designed to target the specific structure of the attacked network.
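In the standard formulation of Goodfellow et al. [9], the adversarial image is x_adv = x + ε · sign(∇x J(θ, x, y)), where J is the loss, x the input image, y the true label, and θ the network parameters. The following is a minimal sketch of this step, assuming a PyTorch classifier with images scaled to [0, 1]; the function name fgsm_attack and the epsilon value are illustrative and not the paper's exact configuration.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps=0.01):
    # Perturb the input along the sign of the loss gradient (FGSM).
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss for the true label
    model.zero_grad()
    loss.backward()                               # gradient of the loss w.r.t. the input
    adv = image + eps * image.grad.sign()         # x_adv = x + eps * sign(grad)
    return torch.clamp(adv, 0.0, 1.0).detach()    # keep pixel values in a valid range

Because the whole perturbation is produced in a single gradient step, FGSM is fast, though generally weaker than iterative attacks such as PGD.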