Hand gesture recognition is a rapidly developing field with applications in human-computer interaction, sign language recognition, virtual reality, gaming, and robotics. This paper reviews approaches to hand modeling, including vision-based, sensor-based, and data-glove-based techniques, and emphasizes the importance of accurate hand modeling and feature extraction for capturing and analyzing gestures. Key features such as motion, depth, color, shape, and pixel values, and their relevance to gesture recognition, are discussed. Challenges in hand gesture recognition include lighting variations, complex backgrounds, noise, and real-time performance requirements. Machine learning algorithms are used to classify and recognize gestures from the extracted features. The paper highlights the need for further research to improve the robustness, accuracy, and usability of hand gesture recognition systems. This review offers insights into the current state of hand gesture recognition, its applications, and its potential to revolutionize human-computer interaction by enabling natural and intuitive interaction between humans and machines. In simpler terms, hand gesture recognition is a way for computers to understand what people are expressing with their hands. It has many potential applications, such as letting people control computers without touching them or helping people with disabilities communicate. The paper reviews ways to build hand gesture recognition systems and discusses the challenges and opportunities in this area.
With the technological advances of the late 20th and early 21st centuries, the importance of digital marketing grew significantly as the need for digital customer experience, promotion, and distribution emerged. Since the term "Digital Marketing" first appeared in 1988, the business sector has grown drastically, from small startups to massive corporations on a global scale. Marketers must navigate a chaotic environment caused by the vast volume of generated data, and decision-makers must contend with user data that is dynamic and changes every day. Enterprises therefore need smart applications to better evaluate, classify, enhance, and target audiences, while tech-savvy customers push businesses toward larger financial investments and cutting-edge technologies. It was natural for marketing and trade to adopt such technology, which accelerates the spread of advertisements and makes it easier to reach and win customers. In this study, we applied machine learning (ML) algorithms, namely Decision Tree (DT), K-Nearest Neighbors (KNN), CatBoost, and Random Forest (RF), to classify customer data and improve the ability to forecast customer behavior, so that more business can be gained from customers more quickly and easily. The proposed system was tested on the aforementioned dataset. The results show that the system can accurately predict whether a customer will make a purchase: RF achieved an accuracy of 0.97, DT 0.95, and KNN 0.91, while CatBoost, with an execution time of 15.04 seconds, gave the best results, achieving the highest F1-score and accuracy (0.91 and 0.98, respectively). Finally, future work includes building a web page to help banking institutions with speed and forecast accuracy.
Future work also includes applying additional feature-selection techniques to the marketing dataset to improve prediction.
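The classifier comparison described above can be sketched as follows. This is a minimal illustration using a synthetic stand-in for the bank-marketing dataset (the real features and labels are not reproduced here); CatBoost is omitted because it lives in the third-party `catboost` package, but its `CatBoostClassifier` follows the same fit/predict interface.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary target standing in for "will the customer buy (1) or not (0)".
X, y = make_classification(n_samples=2000, n_features=16, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}

results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    results[name] = (accuracy_score(y_te, pred), f1_score(y_te, pred))

for name, (acc, f1) in results.items():
    print(f"{name}: accuracy={acc:.3f}, f1={f1:.3f}")
```

A feature-selection step such as `sklearn.feature_selection.SelectKBest` could be inserted before fitting to explore the future-work direction mentioned above.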
Advances in modern-day computing and architectures focus on harnessing parallelism and achieving high-performance computing, generating massive amounts of data in the process. The information produced must be represented and analyzed to address various challenges in technology and business domains. The radical expansion and integration of digital devices, networking, data storage, and computation systems are generating more data than ever. Because these data sets are massive and complex, traditional learning methods fall short, and researchers have in turn adopted machine learning techniques to mine the information hidden in the data. Interestingly, deep learning finds a natural place in big data applications; one of its major advantages is that its features are learned rather than human-engineered. In this paper, we survey machine learning algorithms that have already been applied to big-data-related problems with promising results, and we examine deep learning as a solution to big data issues that traditional methods do not address efficiently. Deep learning is finding its place in most applications characterized by the critical and dominant 5Vs of big data and is expected to perform better there.
Recently, numerous studies have emphasized the importance of professional inspection and repair when faults are suspected in photovoltaic (PV) systems. By leveraging electrical and environmental features, machine learning models can provide valuable insight into the operational status of PV systems. In this study, several machine learning models for PV fault detection were developed and evaluated using a simulated 0.25 MW PV power system. The training and testing datasets covered normal operation and various fault scenarios, including string-to-string, open-string, and string-to-ground faults. Multiple electrical and environmental variables, such as current, voltage, power, temperature, and irradiance, were measured and used as features. Four algorithms (Tree, LDA, SVM, and ANN) were tested using 5-fold cross-validation to identify faults in the PV system. The performance evaluation of the models revealed promising results, with all algorithms demonstrating high accuracy. The Tree and LDA algorithms performed best: the Tree model achieved 99.544% accuracy on the training data and 98.058% on the testing data, LDA achieved perfect accuracy (100%) on the testing data, and SVM and ANN achieved 95.145% and 89.320%, respectively. These findings underscore the potential of machine learning algorithms to accurately detect and classify various types of PV faults.
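The evaluation protocol described above, four models compared with 5-fold cross-validation, can be sketched as follows. The data here is a synthetic stand-in with five features mirroring the measured variables (current, voltage, power, temperature, irradiance) and four classes (normal plus three fault types); the simulated PV dataset itself is not public, so the accuracies will not match the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in: 5 features, 4 classes (normal, string-to-string,
# open-string, string-to-ground).
X, y = make_classification(n_samples=1200, n_features=5, n_informative=5,
                           n_redundant=0, n_classes=4,
                           n_clusters_per_class=1, random_state=0)

models = {
    "Tree": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "ANN": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(32,),
                                       max_iter=1000, random_state=0)),
}

# Mean accuracy over 5 folds, as in the paper's protocol.
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: mean 5-fold accuracy = {acc:.3f}")
```

Scaling inside a pipeline keeps the cross-validation honest: the scaler is refit on each training fold rather than on the full dataset.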
Kinship (familial relationship) detection is crucial in many fields, with applications in biometric security, adoption, forensic investigation, and more. It is also essential during wars and natural disasters such as earthquakes, since it may aid in reunions, missing-person searches, establishing emergency contacts, and providing psychological support. The most common method of determining kinship is DNA analysis, which is highly accurate. A noninvasive alternative uses facial photos with computer vision and machine learning algorithms for kinship estimation. Each part of the human body embeds information that can be extracted and used for the identification, verification, or classification of a person, and kinship recognition rests on finding traits shared within a family. We investigate the use of hand geometry for kinship detection, which is a new approach. Because available hand-image datasets do not contain kinship ground truth, we created our own. This paper describes the tools, methodology, and details of the collected MKH (Mosul Kinship Hand) image dataset. The MKH images were collected using a mobile phone camera with a suitable setup and consist of 648 images of 81 individuals from 14 families (8 hand poses per person). This paper also presents the use of this dataset for kinship prediction using machine learning. Google MediaPipe was used for hand detection, segmentation, and geometric key-point localization, and handcrafted feature extraction produced 43 distinctive geometrical features from each image. A neural network classifier was designed and trained to predict kinship, achieving about 93% prediction accuracy. The results of this novel approach demonstrate that the hand possesses biometric characteristics that can be used to establish kinship, and that the suggested method is a promising kinship indicator.
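The geometric-feature step can be illustrated as below. MediaPipe Hands returns 21 (x, y, z) landmarks per detected hand; the indices used here follow its documented convention (0 = wrist, 4/8/12/16/20 = fingertips). The paper's 43 features are not enumerated in the abstract, so this sketch computes only a few illustrative distance ratios, and the random array stands in for real landmarks.

```python
import numpy as np

WRIST = 0
FINGER_BASES = [1, 5, 9, 13, 17]   # thumb CMC and the four finger MCP joints
FINGER_TIPS = [4, 8, 12, 16, 20]   # fingertips, per MediaPipe's layout

def hand_features(landmarks):
    """Distance-based geometric features from a (21, 3) landmark array,
    normalised by palm width so they are scale-invariant."""
    lm = np.asarray(landmarks, dtype=float)
    palm = np.linalg.norm(lm[5] - lm[17])  # index base to pinky base
    feats = []
    for base, tip in zip(FINGER_BASES, FINGER_TIPS):
        feats.append(np.linalg.norm(lm[tip] - lm[base]) / palm)   # finger length
        feats.append(np.linalg.norm(lm[tip] - lm[WRIST]) / palm)  # tip-to-wrist
    return np.array(feats)

# Random landmarks standing in for a MediaPipe detection result:
rng = np.random.default_rng(0)
f = hand_features(rng.random((21, 3)))
print(f.shape)  # 10 features from this sketch; the paper extracts 43
```

Feature vectors like this, one per image, would then feed the neural network classifier described above.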
Reliance on networks and systems has grown rapidly in contemporary times, leading to increased vulnerability to cyber attacks. Among these, the Distributed Denial-of-Service (DDoS) attack is a threat that can cause great financial liability and reputational damage. To address this problem, machine learning (ML) algorithms have gained considerable attention for detecting and preventing DDoS attacks. In this study, we propose a novel security mechanism to counter DDoS attacks, using an ensemble learning methodology that differentiates normal network traffic from the malicious flood of DDoS attack traffic. The study also evaluates the performance of two well-known ML algorithms, the decision tree and the random forest, used to implement the proposed method for defending against DDoS attacks. We test the models on a publicly available dataset, the Time Series Dataset for Distributed Denial of Service Attack Detection, and compare their performance using a list of evaluation metrics. Developing the model involves fetching the data, preprocessing it, splitting it into training and testing subsets, selecting the model, and validating it. When applied to a database of nearly 11,000 time series, the proposed approach showed promising results, in some cases reaching an accuracy (ACC) of up to 100% on the dataset. Ultimately, the proposed method detects and mitigates distributed denial-of-service attacks, helping secure communication systems against this growing cyber threat by preventing attacks from succeeding.
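The pipeline described above, fetch, preprocess, split, train, and evaluate decision tree and random forest models, can be sketched as follows. The features here are synthetic per-window traffic statistics (the two columns are assumed stand-ins, e.g. mean packet rate and its variability), not the actual time-series dataset used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic traffic windows: label 1 = DDoS flood, 0 = normal.
rng = np.random.default_rng(0)
n = 1000
normal = rng.normal(loc=[100, 10], scale=[20, 3], size=(n, 2))
attack = rng.normal(loc=[900, 50], scale=[100, 10], size=(n, 2))
X = np.vstack([normal, attack])
y = np.array([0] * n + [1] * n)

# Split into training and testing subsets, as in the study's workflow.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

for name, model in [("DT", DecisionTreeClassifier(random_state=0)),
                    ("RF", RandomForestClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"prec={precision_score(y_te, pred):.3f}",
          f"rec={recall_score(y_te, pred):.3f}",
          f"f1={f1_score(y_te, pred):.3f}")
```

On cleanly separated synthetic clusters like these, both models approach perfect accuracy, which mirrors (but does not reproduce) the near-100% figure reported on the real dataset.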