Hand gesture recognition is a rapidly developing field with many applications in human-computer interaction, sign language recognition, virtual reality, gaming, and robotics. This paper reviews approaches to modeling the hand, including vision-based, sensor-based, and data-glove-based techniques, and emphasizes the importance of accurate hand modeling and feature extraction for capturing and analyzing gestures. Key features such as motion, depth, color, shape, and pixel values, and their relevance to gesture recognition, are discussed. Challenges in hand gesture recognition include lighting variations, complex backgrounds, noise, and real-time performance. Machine learning algorithms are used to classify and recognize gestures from the extracted features. The paper argues that further research and advancements are needed to improve the robustness, accuracy, and usability of hand gesture recognition systems. This review offers insight into the current state of hand gesture recognition, its applications, and its potential to enable natural and intuitive interaction between humans and machines. In simpler terms, hand gesture recognition lets computers understand what people are saying with their hands; it could allow people to control computers without touching them, or help people with disabilities communicate.
A robust system that classifies various hand gestures would greatly help users of prosthetic limbs. Recently, emphasis has been placed on the features extracted from high-density surface electromyography (HD-sEMG) signals and on the size of the segmentation window, both of which affect recognition accuracy. This paper proposes a hand gesture identification system that employs HD-sEMG signals, supported by force myography (FMG) signals. Several feature types are extracted from the FMG and HD-sEMG signals, namely MEAN, RMS, MAD, STD, and variance. These features are validated with several classifiers, namely decision tree (DT), linear discriminant analysis (LDA), support vector machine (SVM), and k-nearest neighbor (KNN); the results show that the MEAN and RMS features are superior to the others and that SVM is the best classifier. Several experiments were conducted on the MATLAB platform to validate the proposed system, using a database of HD-sEMG signals comprising 65 isometric hand gestures, where two 8×8 electrode grids and 9 force sensors are used to collect the HD-sEMG and FMG data, respectively. The data was recorded from 20 intact participants, with the first preprocessing step applied during the recording stage. Ten of the 65 hand gestures were chosen for classification. The results demonstrate the success of the proposed system, with a classification accuracy of 99.1%.
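As a rough illustration of the windowed time-domain features named above (the paper itself uses MATLAB; the window length, step size, channel counts, and synthetic data below are assumptions made for the sketch, not the paper's exact setup), the following Python snippet computes MEAN, RMS, MAD, STD, and variance per channel over sliding segmentation windows and feeds them to an SVM:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def window_features(window):
    """Compute the five per-channel features named in the paper
    (MEAN, RMS, MAD, STD, variance) for one segmentation window.
    `window` has shape (samples, channels)."""
    mean = window.mean(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    mad = np.abs(window - mean).mean(axis=0)   # mean absolute deviation
    std = window.std(axis=0)
    var = window.var(axis=0)
    return np.concatenate([mean, rms, mad, std, var])

def segment(signal, win_len, step):
    """Slide a fixed-length window over a (samples, channels) recording."""
    return [signal[i:i + win_len]
            for i in range(0, len(signal) - win_len + 1, step)]

# Hypothetical recordings: 128 HD-sEMG channels (two 8x8 grids) + 9 FMG sensors.
rng = np.random.default_rng(0)
X, y = [], []
for gesture in range(10):                     # ten selected gestures
    rec = rng.standard_normal((2000, 128 + 9)) + 0.1 * gesture
    for w in segment(rec, win_len=200, step=100):
        X.append(window_features(w))
        y.append(gesture)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)       # SVM performed best in the paper
print("accuracy:", clf.score(X_te, y_te))
```

Concatenating all five statistics per channel, as done here, is one plausible reading of the feature pipeline; the paper's reported comparison suggests MEAN and RMS alone already carry most of the discriminative information.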
In recent years, the number of studies in the field of artificial limbs has increased significantly, aiming to improve how amputees use these limbs. During this period, high-density surface electromyography (HD-sEMG) signals have been employed for hand gesture identification, where classification performance can be improved by using robust spatial features extracted from the HD-sEMG signals. In this paper, several spatial feature extraction algorithms are proposed to increase the accuracy of an SVM classifier, with the histogram of oriented gradients (HOG) used for this purpose. Several feature sets are extracted from the HD-sEMG signals: features extracted with HOG, denoted H; features generated by combining an intensity feature with the H features, denoted HI; and features generated by combining the average intensity with the H features, denoted AIH. The proposed system is simulated in MATLAB to calculate the classification accuracy; in addition, it is validated in practice to show that the system can be used by amputees. The results show high real-time classification accuracy, which increases the feasibility of using this system in an artificial hand.
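To make the three feature sets concrete, here is a minimal Python sketch; the 16×8 activation-map layout (two stacked 8×8 grids), the RMS mapping from window to map, and the HOG parameters are illustrative assumptions, not the paper's exact configuration. HOG descriptors (H) are computed from a per-window HD-sEMG activation map, then concatenated with the per-electrode intensities (HI) or with their average (AIH):

```python
import numpy as np
from skimage.feature import hog

def hog_feature_sets(activation_map):
    """Build the three feature sets H, HI, and AIH from a per-window
    HD-sEMG activation map (here a 16x8 image)."""
    h = hog(activation_map,
            orientations=9,
            pixels_per_cell=(4, 4),
            cells_per_block=(2, 2),
            feature_vector=True)                   # H: HOG descriptor
    intensity = activation_map.ravel()             # per-electrode intensity
    hi = np.concatenate([h, intensity])            # HI: HOG + intensity
    aih = np.concatenate([h, [intensity.mean()]])  # AIH: HOG + average intensity
    return h, hi, aih

# Example: RMS activation map for one window of a hypothetical recording.
rng = np.random.default_rng(1)
emg_window = rng.standard_normal((200, 128))       # samples x channels
amap = np.sqrt((emg_window ** 2).mean(axis=0)).reshape(16, 8)
h, hi, aih = hog_feature_sets(amap)
print(len(h), len(hi), len(aih))
```

Any of the three vectors can then be passed to an SVM, as in the previous sketch.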
The evolution of wireless communication technology increases the capabilities of human-machine interaction, especially in controlling robotic systems. This paper introduces an effective wireless system for controlling the direction of a wheeled robot using online hand gestures. The hand gesture images are captured and processed, then recognized and classified by a neural network (NN). The NN is trained on extracted features to distinguish five different gestures and accordingly produces five different signals, which are transmitted to control the direction of the robot. The main contribution of this paper is a recognition technique that requires only two features, which can be extracted in a very short time using a simple methodology; this makes the proposed technique well suited to online interaction. In this methodology, the preprocessed image is partitioned column-wise into two half segments, and one feature is extracted from each half: the ratio of white to black pixels in the segment's histogram. The NN showed very high accuracy in recognizing all of the proposed gesture classes. The NN output signals are transmitted wirelessly to the robot's microcontroller over Bluetooth, which then steers the robot in the desired direction. The overall system showed high performance in controlling the robot's movement direction.
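The two-feature extraction step is simple enough to sketch directly. The image size and the random binary mask below are hypothetical stand-ins for a preprocessed gesture image, but the computation follows the description above: split the binary image column-wise into two halves, then take the white-to-black pixel ratio of each half:

```python
import numpy as np

def two_features(binary_img):
    """Extract the two features described above: split the preprocessed
    binary image column-wise into left and right halves, and for each half
    compute the ratio of white to black pixels (from the segment's histogram)."""
    mid = binary_img.shape[1] // 2
    feats = []
    for half in (binary_img[:, :mid], binary_img[:, mid:]):
        white = np.count_nonzero(half)
        black = half.size - white
        feats.append(white / max(black, 1))   # guard against division by zero
    return feats

# Example on a hypothetical 64x64 preprocessed gesture mask.
rng = np.random.default_rng(2)
mask = (rng.random((64, 64)) > 0.6).astype(np.uint8)
print(two_features(mask))
```

Because only two counts per image are needed, the feature vector can be produced in a single pass over the pixels, which is what makes the approach fast enough for the online control loop described in the paper.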