Facial retouching, also referred to as digital retouching, is the process of modifying or enhancing facial characteristics in digital images or photographs. While it can be a valuable technique for fixing flaws or achieving a desired visual appeal, it also raises ethical considerations. This study categorizes genuine and retouched facial images from the standard ND-IIITD retouched faces dataset using a transfer learning methodology. The impact of three primary optimization algorithms, namely Adam, RMSprop, and Adadelta, used in conjunction with a fine-tuned ResNet50 model is examined to assess potential gains in classification effectiveness. Our proposed transfer learning ResNet50 model outperforms other existing approaches, particularly when the RMSprop and Adam optimizers are employed in the fine-tuning process. By training the transfer learning ResNet50 model on the ND-IIITD retouched faces dataset with ImageNet weights, we achieve a validation accuracy of 98.76%, a training accuracy of 98.32%, and an overall accuracy of 98.52% for classifying real and retouched faces in just 20 epochs. Comparative analysis indicates that the choice of optimizer during fine-tuning of the transfer learning ResNet50 model can further enhance classification accuracy.
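A minimal sketch of this kind of fine-tuning setup, assuming a Keras/TensorFlow implementation; the input size, learning rates, and the decision to unfreeze the entire backbone are illustrative assumptions rather than details reported in the study:

    import tensorflow as tf
    from tensorflow.keras.applications import ResNet50
    from tensorflow.keras import layers, models, optimizers

    def build_model(optimizer):
        # ResNet50 backbone initialized with ImageNet weights, top removed.
        base = ResNet50(weights="imagenet", include_top=False,
                        input_shape=(224, 224, 3), pooling="avg")
        base.trainable = True  # fine-tune the whole backbone (an assumption)
        out = layers.Dense(1, activation="sigmoid")(base.output)  # real vs. retouched
        model = models.Model(base.input, out)
        model.compile(optimizer=optimizer, loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    # Compare the three optimizers discussed in the paper.
    for name, opt in [("adam", optimizers.Adam(1e-4)),
                      ("rmsprop", optimizers.RMSprop(1e-4)),
                      ("adadelta", optimizers.Adadelta(1.0))]:
        model = build_model(opt)
        # model.fit(train_ds, validation_data=val_ds, epochs=20)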
Training a user of Brain-Computer Interface (BCI) systems driven by brain signals recorded as Electroencephalography Motor Imagery (EEG-MI) signals is a time-consuming process that fatigues the subject, so transfer learning (subject to subject or session to session) is a useful training approach that reduces the number of recorded training trials required for the target subject. Brain signals are recorded through channels, or electrodes; increasing the number of channels can raise classification accuracy, but this is costly and offers no guarantee of high accuracy. This paper introduces a transfer learning method that uses only two channels and a few training trials for both feature extraction and classifier training. Our results show that the proposed method, Independent Component Analysis with Regularized Common Spatial Pattern (ICA-RCSP), achieves about 70% accuracy for session-to-session transfer learning with few training trials. When the proposed method was used for subject-to-subject transfer, the accuracy was lower than for session-to-session transfer, but it was still better than that of other methods.
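As a rough illustration of how such a pipeline can be assembled, the sketch below combines FastICA unmixing with one common regularized-CSP variant (shrinking the target class covariances toward source-session or source-subject covariances); the regularization scheme, the lam value, and the log-variance/LDA feature and classifier choices are assumptions for illustration, not the paper's exact method:

    import numpy as np
    from scipy.linalg import eigh
    from sklearn.decomposition import FastICA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def class_cov(trials):
        # trials: (n_trials, n_channels, n_samples); mean trace-normalized covariance
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

    def rcsp_filters(tgt1, tgt2, src1, src2, lam=0.5):
        # Shrink the target class covariances toward the source ones.
        c1 = (1 - lam) * class_cov(tgt1) + lam * class_cov(src1)
        c2 = (1 - lam) * class_cov(tgt2) + lam * class_cov(src2)
        # Generalized eigenproblem: spatial filters maximizing the variance ratio.
        _, w = eigh(c1, c1 + c2)
        return w.T  # with two channels, this yields two filters

    def log_var_features(trials, w):
        # Standard CSP features: log-variance of the spatially filtered trials.
        return np.log(np.var(np.einsum("fc,ncs->nfs", w, trials), axis=2))

    # ICA unmixing fitted on source data, then applied to every trial first:
    # ica = FastICA(n_components=2).fit(np.concatenate(source_trials, axis=1).T)
    # unmixed = np.stack([ica.transform(t.T).T for t in target_trials])
    # clf = LinearDiscriminantAnalysis().fit(log_var_features(unmixed, w), labels)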
Due to their vital applications in many real-world situations, researchers continue to propose methods for better analysis of motor imagery (MI) electroencephalograph (EEG) signals. In general, however, EEG signals are complex owing to their nonstationarity and high dimensionality, so care must be taken in both feature extraction and classification. In this paper, several hybrid classification models are built and their performance is compared. Three well-known wavelet mother functions are used to generate scalograms from the raw signals. The scalograms are used for transfer learning with the well-known VGG-16 deep network, and one of six classifiers then determines the class of the input signal. The performance of different combinations of mother functions and classifiers is compared on two MI EEG datasets. Several evaluation metrics show that a model combining the VGG-16 feature extractor with a neural-network classifier, using the Amor mother wavelet function, outperforms the results of state-of-the-art studies.
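The best-performing combination could be sketched along the following lines; note that 'amor' (the analytic Morlet) is a MATLAB wavelet name with no direct PyWavelets equivalent, so the complex Morlet 'cmor1.5-1.0' is used here as a stand-in, and the scale range, image size, and classifier width are illustrative assumptions:

    import numpy as np
    import pywt
    import tensorflow as tf
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.applications.vgg16 import preprocess_input

    def scalogram(signal, scales=np.arange(1, 65), wavelet="cmor1.5-1.0"):
        # Continuous wavelet transform -> magnitude scalogram image.
        coefs, _ = pywt.cwt(signal, scales, wavelet)
        img = np.abs(coefs)
        img = 255 * (img - img.min()) / (np.ptp(img) + 1e-8)  # scale to 0..255
        img = tf.image.resize(img[..., None], (224, 224))     # VGG-16 input size
        return tf.repeat(img, 3, axis=-1)                     # replicate to 3 channels

    # Frozen VGG-16 as a fixed feature extractor (ImageNet weights).
    vgg = VGG16(weights="imagenet", include_top=False, pooling="avg",
                input_shape=(224, 224, 3))

    def extract_features(signals):
        imgs = np.stack([scalogram(s).numpy() for s in signals])
        return vgg.predict(preprocess_input(imgs))  # (n, 512) feature vectors

    # A small neural-network classifier on top of the VGG-16 features.
    n_classes = 2  # e.g., left- vs. right-hand motor imagery
    clf = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(512,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])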
SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) has caused widespread mortality. Infected individuals show specific radiographic visual features along with fever, dry cough, lethargy, dyspnea, and other symptoms. According to the study, the chest X-ray (CXR) is one of the essential non-invasive clinical adjuncts for detecting such visual reactions associated with SARS-CoV-2. Manual diagnosis is hindered by the limited availability of radiologists to interpret CXR images and by the faint appearance of the illness's radiographic responses. The paper describes an automatic COVID-19 detection system based on deep learning that applies transfer learning techniques to extract distinguishing features from CXR images. The system has three main parts. The first extracts CXR features with MobileNetV2. The second reduces the dimensionality of the extracted features using Linear Discriminant Analysis (LDA). The final part is a classifier that employs XGBoost to assign dataset images to the Normal, Pneumonia, or COVID-19 class. The proposed system achieves fast and strong results, with an overall accuracy of 0.96, a precision of 0.95, a recall of 0.94, and an F1 score of 0.94.
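The three-stage pipeline maps naturally onto a Keras/scikit-learn/xgboost stack; the sketch below is an illustrative reconstruction under that assumption, with hyperparameters assumed rather than taken from the paper:

    import numpy as np
    from tensorflow.keras.applications import MobileNetV2
    from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from xgboost import XGBClassifier

    # Stage 1: frozen MobileNetV2 feature extractor (ImageNet weights).
    backbone = MobileNetV2(weights="imagenet", include_top=False,
                           pooling="avg", input_shape=(224, 224, 3))

    def cxr_features(images):
        # images: (n, 224, 224, 3) float array of chest X-rays
        return backbone.predict(preprocess_input(images))  # (n, 1280) features

    # Stage 2: LDA reduces features to at most (n_classes - 1) = 2 dimensions.
    lda = LinearDiscriminantAnalysis(n_components=2)

    # Stage 3: XGBoost classifier over Normal / Pneumonia / COVID-19.
    xgb = XGBClassifier(n_estimators=200, objective="multi:softprob")

    # Training and prediction on extracted features:
    # feats = cxr_features(x_train)
    # xgb.fit(lda.fit_transform(feats, y_train), y_train)
    # preds = xgb.predict(lda.transform(cxr_features(x_test)))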