Automatic signature verification plays a significant role in providing secure and authenticated handwritten signatures in many applications, preventing forgery problems in financial institutions, legal document transactions, and similar settings. Handwritten signature verification methods are of two types: online (dynamic) and offline (static). In addition, signature verification approaches can be categorized into two styles: writer-dependent (WD) and writer-independent (WI). Offline signature verification demands highly representative features of the signature image. Although many studies have been proposed for WI offline signature verification, there is still a need to improve the overall accuracy. Therefore, the solution presented in this paper relies on deep learning via a convolutional neural network (CNN) for signature verification and optimizes the overall accuracy measurements. The proposed model is trained on an English signature dataset. For evaluation, the deployed model is used to make predictions on new data from an Arabic signature dataset, classifying whether a signature is genuine or forged. The overall accuracy obtained on the validation dataset is 95.36%. A minimal sketch of such a binary CNN classifier is given below.
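The following sketch illustrates the kind of CNN binary classifier described above, trained on one signature dataset and then evaluated on another. The input size (128x128 grayscale), layer widths, and the placeholder array names `x_english_train`, `y_english_train`, `x_arabic_test`, and `y_arabic_test` are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a CNN for offline signature verification (genuine vs. forged).
# Image size, layer sizes, and hyper-parameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_signature_cnn(input_shape=(128, 128, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # genuine (0) vs. forged (1)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage: train on the English signature set, evaluate on the Arabic set.
# model = build_signature_cnn()
# model.fit(x_english_train, y_english_train, epochs=20, validation_split=0.2)
# model.evaluate(x_arabic_test, y_arabic_test)
```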
COVID-19 is an infectious viral disease that mostly affects the lungs and spreads quickly across the world. Early detection of the virus boosts the chances of patients recovering quickly. Many radiographic techniques, such as X-rays, are used to diagnose infected patients, and deep learning technology based on large collections of chest X-ray images is used to diagnose COVID-19. Because of the scarcity of available COVID-19 X-ray images, the limited COVID-19 datasets are insufficient for training efficient deep learning detection models. Another problem with a limited dataset is that trained models suffer from over-fitting and their predictions do not generalize. To address these problems, in this paper we developed a Conditional Generative Adversarial Network (CGAN) to produce synthetic images close to the real COVID-19 images, combined with traditional augmentation, to expand the limited dataset, which was then used to train a customized deep detection model. The customized deep learning model obtained an excellent detection accuracy of 97% with only ten epochs, and the proposed augmentation outperforms other augmentation techniques. The augmented dataset includes 6988 high-quality, high-resolution COVID-19 X-ray images, whereas the original COVID-19 X-ray images number only 587. A sketch of the class-conditioned generator idea is given below.
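The sketch below shows how a CGAN generator can be conditioned on a class label so that synthetic COVID-19 X-ray images can be produced on demand. The output resolution (128x128), latent dimension, number of classes, and layer widths are assumptions for illustration; the paper's actual CGAN may differ.

```python
# Minimal sketch of a conditional GAN (CGAN) generator that produces
# class-conditioned chest X-ray images. Sizes and widths are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

LATENT_DIM = 100
NUM_CLASSES = 2  # e.g. COVID-19 vs. normal (assumed labelling)

def build_cgan_generator():
    noise = layers.Input(shape=(LATENT_DIM,))
    label = layers.Input(shape=(1,), dtype="int32")

    # Embed the class label and concatenate it with the noise vector,
    # so the generator can be steered toward the COVID-19 class.
    label_emb = layers.Flatten()(layers.Embedding(NUM_CLASSES, LATENT_DIM)(label))
    x = layers.Concatenate()([noise, label_emb])

    # Upsample from a small feature map to a 128x128 grayscale image.
    x = layers.Dense(16 * 16 * 128, activation="relu")(x)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)
    img = layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh")(x)
    return Model([noise, label], img, name="cgan_generator")
```

The generator is trained adversarially against a discriminator that also receives the class label; once trained, sampling noise with the COVID-19 label yields synthetic images that augment the limited real set.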
Aerial images have very high resolution, and automatic map generation and semantic segmentation of such images are challenging problems. The semantic segmentation process does not yield precise details of remote sensing images when the resolution of the aerial images is low. Hence, we propose the U-Net architecture to solve this problem. It consists of two paths. The contracting path (also called the encoder) is the first path and is used to capture the context of the image; the encoder is simply a stack of convolutional and max-pooling layers. The symmetric expanding path (also called the decoder) is the second path, and it enables precise localization by means of transposed convolutions. This task is commonly referred to as dense prediction, since a class is predicted for every pixel. The result is an end-to-end fully convolutional network (FCN): it contains only convolutional layers and no dense layers, and can therefore accept images of any size. The performance of the model is evaluated by segmenting the images with the proposed U-Net and comparing the obtained accuracy with that of previous methods. A minimal sketch of this encoder-decoder structure is given below.
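The sketch below follows the encoder-decoder structure described above: convolution and max-pooling blocks on the contracting path, transposed convolutions with skip connections on the expanding path, and a 1x1 convolution producing per-pixel class scores. The depth, filter counts, input size, and number of classes are illustrative assumptions rather than the exact configuration used in the paper.

```python
# Minimal sketch of a U-Net: encoder (conv + max pooling), decoder
# (transposed convolutions + skip connections), fully convolutional head.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 256, 3), num_classes=6):
    inputs = layers.Input(shape=input_shape)

    # Contracting path (encoder): capture context.
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 128)     # bottleneck

    # Expanding path (decoder): recover localization via transposed
    # convolutions and skip connections from the encoder.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.Concatenate()([u1, c1]), 32)

    # Fully convolutional head: per-pixel class probabilities (dense prediction).
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c5)
    return Model(inputs, outputs, name="unet")
```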
Brain tumors are collections of abnormal tissue within the brain, and the normal function of the brain may be affected as a tumor grows within the skull. Early and accurate detection of brain tumors is critical for improving treatment options and patient survival rates. Diagnosing cancer manually from numerous magnetic resonance imaging (MRI) images is a complex and time-consuming task, so brain tumor segmentation must be carried out automatically. A strategy for brain tumor segmentation is proposed in this paper. For this purpose, images are segmented using region-based and edge-based approaches, and the Brain Tumor Segmentation 2020 (BraTS2020) dataset is utilized. A comparative analysis of edge-based and region-based segmentation, both using a U-Net architecture with a ResNet50 encoder, is performed. The edge-based segmentation model performed better on all performance metrics than the region-based segmentation model, achieving a dice loss of 0.008768, an IoU score of 0.7542, an F1 score of 0.9870, an accuracy of 0.9935, a precision of 0.9852, a recall of 0.9888, and a specificity of 0.9951. A sketch of how the overlap metrics can be computed is given below.
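The following sketch shows one common way to compute the Dice coefficient, Dice loss, and IoU reported above from a predicted binary tumor mask and its ground truth. The function names and the assumption that `pred` and `truth` are 0/1 NumPy arrays of the same shape are illustrative; the paper's exact loss implementation may differ.

```python
# Minimal sketch of overlap metrics for binary segmentation masks.
# `pred` and `truth` are assumed to be NumPy arrays of 0/1 values, same shape.
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    intersection = np.sum(pred * truth)
    return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(truth) + eps)

def dice_loss(pred, truth):
    # Dice loss, as reported in the results, is 1 minus the Dice coefficient.
    return 1.0 - dice_coefficient(pred, truth)

def iou_score(pred, truth, eps=1e-7):
    intersection = np.sum(pred * truth)
    union = np.sum(pred) + np.sum(truth) - intersection
    return (intersection + eps) / (union + eps)
```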