COVID-19 is an infectious viral disease that primarily affects the lungs and has spread rapidly across the world. Early detection of the virus improves patients' chances of a quick recovery. Radiographic techniques such as chest X-rays are widely used to diagnose infected patients, and deep learning models trained on large collections of chest X-ray images can be used to diagnose COVID-19. However, because COVID-19 X-ray images are scarce, the available datasets are insufficient for training efficient deep learning detection models. A further problem with a limited dataset is that trained models suffer from over-fitting, so their predictions do not generalize. To address these problems, in this paper we developed a Conditional Generative Adversarial Network (CGAN) to produce synthetic images close to real COVID-19 X-rays, combined it with traditional augmentation to expand the limited dataset, and used the expanded dataset to train a customized deep detection model. The customized deep learning model achieved an excellent detection accuracy of 97% with only ten epochs, and the proposed augmentation outperforms other augmentation techniques. The augmented dataset contains 6,988 high-quality, high-resolution COVID-19 X-ray images, whereas the original dataset contains only 587 COVID-19 X-ray images.
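The traditional-augmentation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the transforms (horizontal flip, small translation, brightness jitter), the window sizes, and the stand-in random "images" are all assumptions chosen for a self-contained example.

```python
import numpy as np

def augment(image, rng):
    """Apply one randomly chosen traditional transform to a 2-D grayscale image."""
    choice = rng.integers(0, 3)
    if choice == 0:
        return np.fliplr(image)                      # horizontal flip
    if choice == 1:
        shift = int(rng.integers(-5, 6))
        return np.roll(image, shift, axis=1)         # small horizontal translation
    return np.clip(image * rng.uniform(0.9, 1.1), 0.0, 1.0)  # brightness jitter

def expand_dataset(images, target_size, seed=0):
    """Grow a small image set to target_size by repeatedly augmenting random originals."""
    rng = np.random.default_rng(seed)
    augmented = list(images)
    while len(augmented) < target_size:
        base = images[rng.integers(0, len(images))]
        augmented.append(augment(base, rng))
    return augmented

# hypothetical stand-ins for the scarce original X-rays (random 64x64 arrays)
originals = [np.random.default_rng(i).random((64, 64)) for i in range(5)]
expanded = expand_dataset(originals, target_size=20)
print(len(expanded))  # 20
```

In the paper this geometric/photometric augmentation is paired with CGAN-generated synthetic images, which supply entirely new samples rather than perturbed copies of existing ones.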
WiFi-based human activity and gesture recognition exploits the interaction between hand or body movements and the reflected WiFi signals to identify various activities. This type of recognition has received much attention in recent years because it requires neither wearable sensors nor cameras. This paper investigates human activity and gesture recognition schemes that use Channel State Information (CSI) provided by WiFi devices. To achieve high recognition accuracy, deep learning models such as AlexNet, VGG19, and SqueezeNet were used to extract features automatically and perform classification. First, outliers are removed from the amplitude of each CSI stream during the preprocessing stage using the Hampel identifier algorithm. Next, an RGB image is created for each activity and fed as input to the deep convolutional neural networks. After that, data augmentation is applied to reduce overfitting in the deep learning models. Finally, the proposed method is evaluated on a publicly available dataset called WiAR, which contains recordings of 10 volunteers, each of whom performs 16 activities. The experimental results demonstrate that AlexNet, VGG19, and SqueezeNet achieve high recognition accuracies of 99.17%, 96.25%, and 100%, respectively.
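The Hampel identifier used in the preprocessing stage flags a sample as an outlier when it deviates from the local median by more than a multiple of the scaled median absolute deviation (MAD), and replaces it with that median. The sketch below is a minimal, assumed implementation (the paper's window size and threshold are not given here; `window=5` half-width and `k=3` are conventional defaults), applied to a synthetic amplitude stream with one injected spike.

```python
import numpy as np

def hampel_filter(signal, window=5, k=3.0):
    """Replace outliers in a 1-D CSI amplitude stream with the local median.

    window: half-width of the sliding window; k: threshold in scaled-MAD units.
    """
    x = np.asarray(signal, dtype=float)
    cleaned = x.copy()
    n = len(x)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        med = np.median(x[lo:hi])
        mad = 1.4826 * np.median(np.abs(x[lo:hi] - med))  # scaled MAD
        if mad > 0 and abs(x[i] - med) > k * mad:
            cleaned[i] = med                              # replace the outlier
    return cleaned

# synthetic smooth amplitude stream with one large spike at index 25
stream = np.sin(np.linspace(0, 2 * np.pi, 50))
stream[25] += 10.0
cleaned = hampel_filter(stream)
print(abs(cleaned[25]) < 2.0)  # True: the spike is suppressed
```

The 1.4826 factor makes the MAD a consistent estimator of the standard deviation for Gaussian noise, so `k = 3` plays the role of a "three sigma" rule that is robust to the outliers it is meant to detect.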