Epilepsy, a neurological disorder characterized by recurring seizures, necessitates early and precise detection for effective management. Deep learning techniques have emerged as powerful tools for analyzing complex medical data, particularly electroencephalogram (EEG) signals, and have advanced epileptic seizure detection. This review comprehensively presents cutting-edge methodologies in deep learning-based epileptic detection systems. It begins with an overview of epilepsy's fundamental concepts and their implications for individuals and healthcare. The review then delves into deep learning principles and their application to processing EEG signals. Diverse research papers are surveyed to examine the architectures employed, including convolutional neural networks, recurrent neural networks, and hybrid models, emphasizing their strengths and limitations in detecting epilepsy. Preprocessing techniques for improving EEG data quality and reliability, such as noise reduction, artifact removal, and feature extraction, are discussed. Performance evaluation metrics used in epileptic detection, such as accuracy, sensitivity, specificity, and area under the curve, are also presented. The review anticipates future directions by highlighting challenges such as dataset size and diversity, model interpretability, and integration with clinical decision support systems. Finally, it demonstrates how deep learning can improve the precision, efficiency, and accessibility of early epileptic diagnosis, allowing more timely interventions and personalized treatment plans and potentially revolutionizing epilepsy management.
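As a minimal illustration of the ideas summarized above, the sketch below shows a small 1D convolutional network for binary seizure detection on fixed-length EEG windows together with the evaluation metrics the review discusses (accuracy, sensitivity, specificity). The channel count, window length, and layer sizes are illustrative assumptions and do not correspond to any specific reviewed model.

```python
# Illustrative sketch only: a tiny 1D CNN for per-window seizure detection,
# plus the standard confusion-matrix metrics. Shapes and hyperparameters are
# assumptions, not values from any reviewed paper.
import numpy as np
import torch
import torch.nn as nn

class EEG1DCNN(nn.Module):
    def __init__(self, n_channels=23, n_samples=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(64 * (n_samples // 16), 1)

    def forward(self, x):                       # x: (batch, channels, samples)
        z = self.features(x)
        return self.classifier(z.flatten(1))    # one raw logit per EEG window

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (seizure recall), and specificity from 0/1 labels."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "accuracy": (tp + tn) / max(tp + tn + fp + fn, 1),
        "sensitivity": tp / max(tp + fn, 1),
        "specificity": tn / max(tn + fp, 1),
    }
# Area under the curve is typically computed on the raw scores, e.g. with
# sklearn.metrics.roc_auc_score(y_true, scores).
```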
Many assistive devices have been developed for visually impaired (VI) persons in recent years to solve the problems they face during daily movement. Most research addresses the obstacle avoidance or navigation problem, while other work focuses on helping the VI person recognize objects in the surrounding environment; however, few systems integrate both navigation and recognition capabilities. To address these needs, this paper presents an assistive device that combines both capabilities to help the VI person (1) navigate safely from his/her current location (pose) to a desired destination in an unknown environment, and (2) recognize surrounding objects. The proposed system consists of low-cost sensors, namely a Neato XV-11 LiDAR, an ultrasonic sensor, and a Raspberry Pi camera (CameraPi), mounted on a white cane. Hector SLAM based on the 2D LiDAR is used to construct a 2D map of the unfamiliar environment, and the A* path planning algorithm generates an optimal path on the resulting Hector map. Temporary obstacles in front of the VI person are detected by the ultrasonic sensor. A recognition system based on a Convolutional Neural Network (CNN) is implemented to predict object classes and to enhance the navigation system. Interaction between the VI person and the assistive system is handled by an audio module (speech recognition and speech synthesis). The performance of the proposed system has been evaluated in various real-time experiments conducted in indoor scenarios, demonstrating its efficiency.
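To illustrate the planning step described above, the sketch below runs A* on a 2D occupancy grid of the kind that can be derived from a Hector SLAM map. The grid encoding (0 = free, 1 = occupied), 4-connected moves, and Manhattan heuristic are assumptions for the example and are not taken from the authors' implementation.

```python
# Minimal A* sketch on an assumed 2D occupancy grid (0 = free, 1 = occupied),
# 4-connected moves with a Manhattan-distance heuristic.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # admissible heuristic
    open_set = [(h(start), start)]          # priority queue ordered by f = g + h
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                     # reconstruct path back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):        # found a cheaper route
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None                             # no free path exists

# Example: plan around a wall segment on a small map.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```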