Face recognition is an automatic approach for recognizing a person from digital images by representing those images mathematically as matrices. It can be applied to recognize facial appearance under different poses, facial expressions, ageing, and other changes. This paper presents an efficient face recognition model based on the integration of image preprocessing, the Co-occurrence Matrix of Local Average Binary Pattern (CMLABP), and Principal Component Analysis (PCA). The proposed model compares an input image with existing database images in order to display or record citizen information such as name, surname, and birth date. The recognition rate of the model exceeds 99%. Accordingly, the proposed face recognition system is suitable for criminal investigations. Furthermore, it has been compared with other works reported in the literature using diverse databases and training images.
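As an illustration of the PCA matching stage mentioned above, the following Python sketch projects database feature vectors into a PCA subspace and matches a query image by nearest neighbour. It is a minimal sketch under stated assumptions: the input vectors stand in for the paper's CMLABP descriptors, and the function names, parameters, and distance metric are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_pca_matcher(train_vectors, train_ids, n_components=50):
    """Fit PCA on database feature vectors (here generic per-image vectors,
    standing in for CMLABP descriptors) and return a nearest-neighbour matcher."""
    pca = PCA(n_components=n_components)
    projected = pca.fit_transform(train_vectors)          # database images in PCA space

    def match(query_vector):
        q = pca.transform(query_vector.reshape(1, -1))    # project the query image
        distances = np.linalg.norm(projected - q, axis=1)
        return train_ids[int(np.argmin(distances))]       # identity of the closest database image

    return match
```

In such a pipeline the returned identity would then be used to look up the stored citizen record (name, surname, birth date, etc.).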
Face recognition is a technology that verifies or recognizes faces in images, videos, or real-time streams, and can be used in security or employee attendance systems. Face recognition systems may encounter attacks that reduce their ability to recognize faces properly: noisy images mixed with the original ones lead to confusion in the results. Various attacks exploit this weakness, such as the Fast Gradient Sign Method (FGSM), DeepFool, and Projected Gradient Descent (PGD). This paper proposes a method to protect the face recognition system against these attacks by distorting images with the different attacks and then training the recognition deep network model, specifically a Convolutional Neural Network (CNN), on both the original and distorted images. Diverse experiments have been conducted using combinations of original and distorted images to test the effectiveness of the system. The system achieved an accuracy of 93% against the FGSM attack, 97% against DeepFool, and 95% against PGD.
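The following PyTorch sketch shows the general adversarial-training idea described above for the FGSM case: perturb a batch with FGSM, then train the CNN on the mixture of clean and perturbed images. The epsilon value, the [0, 1] pixel range, and the loss choice are assumptions for illustration, not the paper's exact settings.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """FGSM adversarial examples: x_adv = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0, 1).detach()          # assumes pixels normalized to [0, 1]

def train_step(model, optimizer, images, labels):
    """One adversarial-training step on a mixed clean + distorted batch."""
    adv_images = fgsm_perturb(model, images, labels)
    batch = torch.cat([images, adv_images])  # original and distorted images together
    targets = torch.cat([labels, labels])
    optimizer.zero_grad()
    loss = nn.CrossEntropyLoss()(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training against DeepFool or PGD would follow the same pattern, with the perturbation function swapped accordingly.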
Many assistive devices have been developed in recent years for visually impaired (VI) persons to solve the problems they face in their daily movement. Most research efforts address the obstacle avoidance or navigation problem, while others focus on helping the VI person recognize objects in the surrounding environment; however, few of them integrate both navigation and recognition capabilities in one system. In view of these needs, this paper presents an assistive device that combines both capabilities to help the VI person (1) navigate safely from his/her current location (pose) to a desired destination in an unknown environment, and (2) recognize the surrounding objects. The proposed system consists of low-cost sensors, a Neato XV-11 LiDAR, an ultrasonic sensor, and a Raspberry Pi camera (CameraPi), mounted on a white cane. Hector SLAM based on the 2D LiDAR is used to construct a 2D map of the unfamiliar environment, while the A* path planning algorithm generates an optimal path on the resulting Hector map. Moreover, temporary obstacles in front of the VI person are detected by the ultrasonic sensor. A recognition system based on a Convolutional Neural Network (CNN) is implemented to predict object classes and to enhance the navigation system. The interaction between the VI person and the assistive system is carried out through an audio module (speech recognition and speech synthesis). The performance of the proposed system has been evaluated in various real-time experiments conducted in indoor scenarios, showing its efficiency.
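To make the path-planning step concrete, the sketch below implements standard grid-based A* on a 2D occupancy grid such as one derived from a Hector SLAM map. It is a generic textbook formulation (4-connected moves, Manhattan heuristic), not the exact planner configuration used in the paper.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle); 4-connected moves,
    Manhattan-distance heuristic. Returns a list of (row, col) cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                          # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None
```

In the described system, the resulting cell path would be converted into audio guidance for the VI person, with the ultrasonic sensor handling temporary obstacles not present in the map.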
This work presents a neural- and fuzzy-based ECG signal recognition system built on the wavelet transform. The coefficients that serve as features for each fuzzy network or neural network are found using a proposed best-basis technique. Using the proposed best bases reduces the dimension of the input vector and hence the complexity of the classifier. The fuzzy network and the neural network parameters are learned using the backpropagation algorithm.
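The sketch below illustrates the dimension-reduction idea with PyWavelets: decompose an ECG beat with the discrete wavelet transform and keep only a small set of dominant coefficients as the classifier input. Selecting the largest-energy coefficients is only a stand-in here, since the paper's best-basis technique is not detailed in this abstract; the wavelet family, level, and feature length are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_features(ecg_beat, wavelet="db4", level=4, keep=32):
    """Decompose one ECG beat with the DWT and keep the largest-magnitude
    coefficients as a reduced-dimension feature vector (a simple stand-in
    for the proposed best-basis selection)."""
    coeffs = pywt.wavedec(ecg_beat, wavelet, level=level)
    flat = np.concatenate(coeffs)                       # all coefficients in one vector
    idx = np.argsort(np.abs(flat))[::-1][:keep]         # indices of the dominant coefficients
    return flat[np.sort(idx)]                           # keep them in their original order
```

The resulting short feature vector would then be fed to the fuzzy or neural classifier, which keeps the network small and backpropagation training fast.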
In this paper, a hierarchical Arabic phoneme recognition system is proposed in which Mel Frequency Cepstrum Coefficient (MFCC) features are used to train a hierarchical neural network architecture. Separate neural networks (subnetworks) are recursively trained to recognize subsets of phonemes, and the overall recognition is obtained by combining the outputs of these subnetworks. Experiments comparing the performance of the proposed hierarchical system with non-hierarchical (flat) baseline systems are also presented.
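A minimal sketch of the hierarchical idea is given below: MFCC features are extracted per phoneme segment, a top-level network selects a phoneme subset, and a per-subset subnetwork picks the phoneme. The two-level routing, the MFCC pooling, and the network sizes are illustrative assumptions; the paper's actual architecture may be deeper and recursive.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_vector(signal, sr=16000, n_mfcc=13):
    """Mean MFCC vector for one phoneme segment (a simple pooling choice)."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

class HierarchicalPhonemeClassifier:
    """Top-level network picks a phoneme subset; a subnetwork picks the phoneme.
    Expects X as a NumPy array of MFCC feature vectors."""
    def __init__(self, groups):
        self.groups = groups                                  # {group_id: [phonemes]}
        self.top = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
        self.sub = {g: MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
                    for g in groups}

    def fit(self, X, phoneme_labels, group_labels):
        self.top.fit(X, group_labels)                         # learn the subset routing
        for g in self.groups:
            mask = np.asarray(group_labels) == g
            self.sub[g].fit(X[mask], np.asarray(phoneme_labels)[mask])

    def predict(self, X):
        groups = self.top.predict(X)
        return [self.sub[g].predict(x.reshape(1, -1))[0] for g, x in zip(groups, X)]
```

A flat baseline would instead train a single network over all phoneme classes on the same MFCC vectors, which is the comparison reported in the experiments.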