Iraqi Journal for Electrical and Electronic Engineering

Search Results for self-learning

Article
Current Big Data Issues and Their Solutions via Deep Learning: An Overview

Asif Ali Banka, Roohie Naaz Mir

Pages: 127-138

PDF Full Text
Abstract

Advancements in modern computing and architectures focus on harnessing parallelism to achieve high-performance computing, resulting in the generation of massive amounts of data. The information produced must be represented and analyzed to address various challenges in technology and business domains. The radical expansion and integration of digital devices, networking, data storage, and computation systems are generating more data than ever. Because data sets are massive and complex, traditional analysis methods fall short, which has driven the adoption of machine learning techniques to mine the information hidden in the data. Deep learning, in particular, finds a natural place in big data applications; one of its major advantages is that its features are not human-engineered. In this paper, we survey machine learning algorithms that have already been applied to big data problems with promising results. We also examine deep learning as a solution to big data issues that traditional methods do not address efficiently. Deep learning is finding its place in most applications characterized by the critical and dominant 5Vs of big data and is expected to perform better there.

Article
A Comparative Study of Deep Learning Methods-Based Object/Image Categorization

Saad Albawi, Layth Kamil Almajmaie, Ali J. Abboud

Pages: 168-177

PDF Full Text
Abstract

In recent years, there has been a considerable rise in applications where object or image categorization is beneficial, for example, analyzing medical images, helping people organize their photo collections, recognizing the surroundings of self-driving vehicles, and many more. These applications require accurately labeled datasets, most of which involve an extensive diversity of image types, from cats or dogs to roads, landscapes, and so forth. The fundamental aim of image categorization is to predict the category or class to which an input image belongs. For human beings this is trivial; teaching computers to perceive, however, is a hard problem that has become a broad area of research interest, in which both computer vision techniques and deep learning algorithms have evolved. Conventional techniques use local descriptors to measure similarity between images; nowadays, however, progress in technology allows the use of deep learning algorithms, especially Convolutional Neural Networks (CNNs), to automatically extract representative image patterns and features for classification. The fundamental aim of this paper is to examine and explain how to use deep learning algorithms and technologies to accurately classify a dataset of images into their respective categories while keeping model complexity to a minimum, and to identify the best deep learning models for image processing and categorization. Custom CNN-based models are proposed, and several pre-trained models (VGG19, DenseNet201, ResNet152V2, MobileNetV2, and InceptionV3) are presented; all of these models are trained on the Caltech-101 and Caltech-256 datasets. Extensive comparative experiments were conducted on these datasets, and the obtained results demonstrate the effectiveness of the proposed models, with accuracies of 98.06% on Caltech-101 and 90% on Caltech-256.

Article
Optimal Learning Controller Design Using Particle Swarm Optimization: Applied to CSI System

Khulood Moosa Omran, Abdul-Basset A. Al-Hussein, Basil Hani Jassim

Pages: 104-112

PDF Full Text
Abstract

In this article, a PD-type iterative learning control (ILC) algorithm is proposed for a nonlinear time-varying system subject to measurement disturbances and initial state errors. The proposed control approach has a simple structure and is easy to implement. The iterative learning controller was used to control a constant current source inverter (CSI) with pulse width modulation (PWM); as a result, the output current trajectory converged to the sinusoidal reference signal at a constant switching frequency. The learning controller's parameters were tuned with a particle swarm optimization approach to obtain optimal control of the system output, and the tracking error bound was established through a convergence analysis. The proposed learning control scheme is robust against initial-condition errors and the disturbances that result from system modeling inaccuracies and uncertainties, and it corrects distortion of the inverter output current waveform with less computation and less complexity. The proposed algorithm was proved mathematically and verified through computer simulation, and the optimal learning method demonstrated good performance.
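The PD-type update at the heart of such a scheme can be sketched in a few lines. This is a minimal illustration on a toy first-order plant tracking a sinusoidal reference; the plant, horizon, and gains are illustrative stand-ins, not the paper's CSI model or its PSO-tuned parameters.

```python
# Minimal sketch of a PD-type iterative learning control (ILC) update.
# The first-order plant and the gains kp, kd are illustrative stand-ins.
import math

def run_trial(u):
    """Simulate the toy plant y[t+1] = 0.2*y[t] + u[t] over one trial."""
    y, out = 0.0, []
    for ut in u:
        out.append(y)
        y = 0.2 * y + ut
    return out

def ilc_pd(ref, iterations=30, kp=0.7, kd=0.2):
    n = len(ref)
    u = [0.0] * n
    max_errors = []
    for _ in range(iterations):
        y = run_trial(u)
        e = [r - yi for r, yi in zip(ref, y)]
        e_next = e[1:] + [0.0]  # forward-shifted error (plant has relative degree one)
        # PD-type update: u_{k+1}[t] = u_k[t] + kp*e_k[t+1] + kd*(e_k[t+1] - e_k[t])
        u = [u[t] + kp * e_next[t] + kd * (e_next[t] - e[t]) for t in range(n)]
        max_errors.append(max(abs(x) for x in e))
    return max_errors

ref = [math.sin(2 * math.pi * t / 50) for t in range(50)]
errs = ilc_pd(ref)
print(f"first-trial error {errs[0]:.3f}, last-trial error {errs[-1]:.2e}")
```

The same trial-to-trial update structure applies when the plant is a simulated inverter and the gains come from an optimizer rather than being hand-picked.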

Article
Epileptic detection based on deep learning: A review

Ola M. Assim, Ahlam F. Mahmood

Pages: 115-126

PDF Full Text
Abstract

Epilepsy, a neurological disorder characterized by recurring seizures, necessitates early and precise detection for effective management. Deep learning techniques have emerged as powerful tools for analyzing complex medical data, specifically electroencephalogram (EEG) signals, advancing epileptic detection. This review comprehensively presents cutting-edge methodologies in deep learning-based epileptic detection systems. It begins with an overview of epilepsy's fundamental concepts and their implications for individuals and healthcare, then delves into deep learning principles and their application to EEG signal processing. Architectures from a diverse set of research papers, including convolutional neural networks, recurrent neural networks, and hybrid models, are investigated, emphasizing their strengths and limitations in detecting epilepsy. Preprocessing techniques for improving EEG data quality and reliability, such as noise reduction, artifact removal, and feature extraction, are discussed, and the performance evaluation metrics used in epileptic detection, such as accuracy, sensitivity, specificity, and area under the curve, are presented. The review anticipates future directions by highlighting challenges such as dataset size and diversity, model interpretability, and integration with clinical decision support systems. Finally, it demonstrates how deep learning can improve the precision, efficiency, and accessibility of early epileptic diagnosis, allowing more timely interventions and personalized treatment plans and potentially revolutionizing epilepsy management.
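The evaluation metrics surveyed here follow directly from confusion-matrix counts. A small sketch, with made-up counts that are not taken from any reviewed study:

```python
# Standard seizure-detection metrics from confusion-matrix counts.
# The counts (tp, fp, tn, fn) are illustrative, not from any study.
def metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on seizure segments
    specificity = tn / (tn + fp)   # recall on non-seizure segments
    return accuracy, sensitivity, specificity

acc, sens, spec = metrics(tp=90, fp=5, tn=95, fn=10)
print(round(acc, 3), round(sens, 3), round(spec, 3))  # 0.925 0.9 0.95
```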

Article
Fairness Analysis in the Assessment of Several Online Parallel Classes using Process Mining

Rachmadita Andreswari, Ismail Syahputra

Pages: 25-34

PDF Full Text
Abstract

The learning process in online lectures through a Learning Management System (LMS) produces a learning flow recorded in the event log. Assessment in a group of parallel classes is expected to follow the same point of view, based on the semester lesson plan; however, the way each class is actually conducted may still produce unequal fairness. Factors considered to influence classroom assessment include the flow of learning, different lecturers, class composition, the time and type of assessment, and student attendance. Process mining is applied to fairness assessment to determine the extent to which the learning flow plays a role in the assessment of ten parallel classes, including international classes. Moreover, a decision tree algorithm is applied to determine the root cause in the student assessment analysis based on these causal factors. As a result, three variables affect student graduation and assessment: attendance, class, and gender. The lecturer variable does not have much impact on the assessment but does influence the learning flow.

Article
Learning the Quadruped Robot by Reinforcement Learning (RL)

A. A. Issa, A. A. Aldair

Pages: 117-126

PDF Full Text
Abstract

In this paper, simulation was used to create and test the suggested controllers and to investigate the capability of a quadruped robot built with the SimScape Multibody toolbox, using PID controllers and the deep deterministic policy gradient (DDPG) reinforcement learning (RL) technique. The quadruped robot was simulated in three scenarios based on two control methods, PID and DDPG. Instead of two links per leg, the robot was constructed with three links per leg to maximize movement versatility. The architecture uses twelve servomotors, three per leg, with one PID controller per servomotor (twelve in total). With the SimScape Multibody toolbox, the quadruped robot can be built without a mathematical model. The robustness of the developed controller is investigated by varying the walking robot's carrying load. First, the walking robot is designed as an open-loop system, and the results show that the robot falls at the start of the simulation. Second, auto-tuning is used to find the optimal PID parameters (KP, KI, and KD), and the results show that the robot can walk in a straight line. Finally, DDPG reinforcement learning is proposed to generate and improve the walking motion of the quadruped robot; the results show that the behaviour of the walking robot is improved compared with the previous cases, and that RL produces better results than the PID controllers.

Article
Multiple Object Detection-Based Machine Learning Techniques

Athraa S. Hasan, Jianjun Yi, Haider M. AlSabbagh, Liwei Chen

Pages: 149-159

PDF Full Text
Abstract

Object detection has become faster and more precise due to improved computer vision systems, and many object detectors have improved dramatically with the introduction of machine learning methods. This study incorporated cutting-edge object detection methods to obtain high-quality results in a competitive timeframe comparable to human perception. Object detection systems often suffer from poor performance; therefore, this study proposed a comprehensive method to address the problem using six distinct machine learning approaches: stochastic gradient descent, logistic regression, random forest, decision trees, k-nearest neighbors, and naive Bayes. The system was trained on Common Objects in Context (COCO), the most challenging publicly available dataset, on which a yearly object detection challenge is held. The resulting technology is quick and precise, making it ideal for applications requiring an object detection accuracy of 97%.

Article
Enhancing PV Fault Detection Using Machine Learning: Insights from a Simulated PV System

Halah Sabah Muttashar, Amina Mahmoud Shakir

Pages: 126-133

PDF Full Text
Abstract

Recently, numerous studies have emphasized the importance of professional inspection and repair when faults are suspected in photovoltaic (PV) systems. By leveraging electrical and environmental features, machine learning models can provide valuable insights into the operational status of PV systems. In this study, different machine learning models for PV fault detection were developed and evaluated using a simulated 0.25 MW PV power system. The training and testing datasets covered normal operation and various fault scenarios, including string-to-string, on-string, and string-to-ground faults. Multiple electrical and environmental variables, such as current, voltage, power, temperature, and irradiance, were measured and used as features. Four algorithms (Tree, LDA, SVM, and ANN) were tested with 5-fold cross-validation to identify faults in the PV system. The performance evaluation revealed promising results, with all algorithms demonstrating high accuracy. The Tree and LDA algorithms performed best, achieving accuracies of 99.544% on the training data and 98.058% on the testing data; LDA reached perfect accuracy (100%) on the testing data, while SVM and ANN achieved 95.145% and 89.320%, respectively. These findings underscore the potential of machine learning algorithms to accurately detect and classify various types of PV faults.
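The 5-fold cross-validation protocol used here can be sketched generically. In this minimal sketch a trivial majority-class predictor stands in for the Tree/LDA/SVM/ANN models, and the labels are illustrative, not the simulated PV data:

```python
# Sketch of a 5-fold cross-validation loop; a majority-class predictor
# stands in for the real fault-detection models.
def k_fold_accuracy(labels, k=5):
    folds = [labels[i::k] for i in range(k)]   # simple interleaved split
    scores = []
    for i in range(k):
        test = folds[i]
        train = [y for j in range(k) if j != i for y in folds[j]]
        majority = max(set(train), key=train.count)          # "fit"
        scores.append(sum(1 for y in test if y == majority) / len(test))
    return sum(scores) / k

# 80 normal samples vs 20 faulty samples (illustrative class balance)
labels = ["normal"] * 80 + ["fault"] * 20
print(k_fold_accuracy(labels))
```

In practice each fold would train and score a real classifier on feature vectors (current, voltage, power, temperature, irradiance) rather than on labels alone.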

Article
Detection of Covid-19 Using CAD System Depending on Chest X-Ray and Machine Learning Techniques

Sadeer Alaa Thamer, Mshari A. Alshmmri

Pages: 75-81

PDF Full Text
Abstract

SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) has caused widespread mortality. Infected individuals exhibit specific radiographic visual features along with fever, dry cough, lethargy, dyspnea, and other symptoms. The chest X-ray (CXR) is one of the essential non-invasive clinical adjuncts for detecting the visual responses associated with SARS-CoV-2. Manual diagnosis, however, is hindered by the limited availability of radiologists to interpret CXR images and by the faint appearance of the radiographic signs of the illness. This paper describes an automatic COVID-19 detection system based on deep learning that applies transfer learning techniques to extract distinguishing features from CXR images. The system has three main components: the first extracts CXR features with MobileNetV2; the second applies dimensionality reduction to the extracted features using LDA; and the third is a classifier that employs XGBoost to label dataset images as Normal, Pneumonia, or COVID-19. The proposed system achieved fast and high-quality results, with an overall accuracy of 0.96, precision of 0.95, recall of 0.94, and F1 score of 0.94.

Article
Transfer Learning Based Fine-Tuned Novel Approach for Detecting Facial Retouching

Kinjal R. Sheth, Vishal S. Vora

Pages: 84-94

PDF Full Text
Abstract

Facial retouching, also referred to as digital retouching, is the process of modifying or enhancing facial characteristics in digital images or photographs. While it can be a valuable technique for fixing flaws or achieving a desired visual appeal, it also gives rise to ethical considerations. This study categorizes genuine and retouched facial images from the standard ND-IIITD retouched faces dataset using a transfer learning methodology. The impact of different primary optimization algorithms, specifically Adam, RMSprop, and Adadelta, used in conjunction with a fine-tuned ResNet50 model is examined to assess potential enhancements in classification effectiveness. Our proposed transfer learning ResNet50 model demonstrates superior performance compared to other existing approaches, particularly when the RMSprop and Adam optimizers are employed in the fine-tuning process. By training the transfer learning ResNet50 model on the ND-IIITD retouched faces dataset with ImageNet weights, we achieve a validation accuracy of 98.76%, a training accuracy of 98.32%, and an overall accuracy of 98.52% for classifying real and retouched faces in just 20 epochs. Comparative analysis indicates that the choice of optimizer during fine-tuning of the transfer learning ResNet50 model can further enhance classification accuracy.

Article
Session to Session Transfer Learning Method Using Independent Component Analysis with Regularized Common Spatial Patterns for EEG-MI Signals

Zaineb M. Alhakeem, Ramzy S. Ali

Pages: 13-27

PDF Full Text
Abstract

Training the user in Brain-Computer Interface (BCI) systems based on brain signals recorded as Electroencephalography Motor Imagery (EEG-MI) signals is a time-consuming process and tires the trained subject, so transfer learning (subject to subject or session to session) is a very useful training method that decreases the number of recorded training trials needed for the target subject. Channels, or electrodes, are used to record the brain signals; increasing the number of channels could increase the classification accuracy, but this solution is expensive and offers no guarantee of high accuracy. This paper introduces a transfer learning method that uses only two channels and a few training trials for both feature extraction and classifier training. Our results show that the proposed method, Independent Component Analysis with Regularized Common Spatial Patterns (ICA-RCSP), produces about 70% accuracy for session-to-session transfer learning using few training trials. When the proposed method is used for subject-to-subject transfer, the accuracy is lower than for session to session, but it is still better than other methods.

Article
Agriculture based on Internet of Things and Deep Learning

Marwa Abdulla, Ali Marhoon

Pages: 1-8

PDF Full Text
Abstract

The Internet of Things (IoT) has had significant success in smart cities, health care, industrial production, and many other fields. Protected agriculture, a highly effective style of modern agricultural development that uses artificial means to manipulate climatic parameters such as temperature to create ideal conditions for the growth of animals and plants, has numerous IoT applications. The Convolutional Neural Network (CNN) is a deep learning approach that has made significant progress in image processing; from 2016 to the present, various applications have been developed for the automatic diagnosis of agricultural diseases, identification of plant pests, prediction of crop yields, and more. This paper presents the use of Internet of Things systems in agriculture and their deep learning applications. It summarizes the most essential sensors and the methods of communication between them, in addition to the most important deep learning algorithms devoted to intelligent agriculture.

Article
A Comparison of COVID-19 Cases Classification Based on Machine Learning Approaches

Oqbah Salim Atiyah, Saadi Hamad Thalij

Pages: 139-143

PDF Full Text
Abstract

COVID-19 emerged in China in 2019, spread rapidly worldwide, and caused many injuries and deaths. Accurate and early detection of COVID-19 can improve the long-term survival of patients and help prevent the spread of the epidemic, and case classification techniques help health organizations quickly identify and treat severe cases. Classification algorithms are essential for forecasting and decision-making that assist diagnosis, enable early identification of COVID-19, and identify cases that require intensive care so that treatment can be delivered at the appropriate time. This paper compares machine learning classification algorithms for diagnosing COVID-19 cases, measures their performance with several metrics, and measures mislabeling (false positives and false negatives) to identify the best algorithms for fast and accurate diagnosis. We load the dataset and perform data preparation, pre-processing, data analysis, feature selection, and data splitting before applying the classification algorithms. In the first group of four algorithms (Stochastic Gradient Descent, Logistic Regression, Random Forest, Naive Bayes), the accuracies were 99.61%, 94.82%, 98.37%, and 96.57%, respectively, with execution times of 0.01 s, 0.7 s, 0.20 s, and 0.04 s; Stochastic Gradient Descent had the least mislabeling. In the second group (eXtreme Gradient Boosting, Decision Tree, Support Vector Machines, K-Nearest Neighbors), the accuracies were 98.37%, 99%, 97%, and 88.4%, respectively, with execution times of 0.18 s, 0.02 s, 0.3 s, and 0.01 s; the Decision Tree had the least mislabeling. Using machine learning helps improve the allocation of medical resources to maximize their utilization, and classifying the clinical data of confirmed COVID-19 cases with a global dataset can help predict whether a patient will need to be advanced to the ICU, owing to its accuracy and quality.
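The compare-accuracy-runtime-and-mislabeling loop described above can be sketched generically. The two toy classifiers and the 1-D data below are illustrative stand-ins for the paper's eight algorithms and its COVID-19 dataset:

```python
# Generic sketch of a classifier comparison: time each model, then count
# accuracy and mislabeling (false positives / false negatives).
# Both classifiers and the 1-D data are illustrative stand-ins.
import time

def majority_fit_predict(train, xs):
    labels = [y for _, y in train]
    m = max(set(labels), key=labels.count)
    return [m] * len(xs)

def nn1_fit_predict(train, xs):
    return [min(train, key=lambda p: abs(p[0] - x))[1] for x in xs]

train = [(x, 0) for x in range(0, 12)] + [(x, 1) for x in range(20, 30)]
test = [(2, 0), (8, 0), (22, 1), (28, 1), (15, 0)]

for name, clf in [("majority", majority_fit_predict), ("1-NN", nn1_fit_predict)]:
    t0 = time.perf_counter()
    preds = clf(train, [x for x, _ in test])
    elapsed = time.perf_counter() - t0
    acc = sum(p == y for p, (_, y) in zip(preds, test)) / len(test)
    fp = sum(p == 1 and y == 0 for p, (_, y) in zip(preds, test))
    fn = sum(p == 0 and y == 1 for p, (_, y) in zip(preds, test))
    print(f"{name}: accuracy={acc:.2f} FP={fp} FN={fn} time={elapsed:.4f}s")
```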

Article
Expanding New Covid-19 Data with Conditional Generative Adversarial Networks

Haneen Majid, Khawla Hussein Ali

Pages: 103-110

PDF Full Text
Abstract

COVID-19 is an infectious viral disease that mostly affects the lungs and spreads quickly across the world. Early detection of the virus boosts patients' chances of a quick recovery. Among the radiographic techniques used to diagnose infected people, such as X-rays, deep learning technology based on large numbers of chest X-ray images is used to diagnose COVID-19. Because available COVID-19 X-ray images are scarce, the limited COVID-19 datasets are insufficient for training efficient deep learning detection models; another problem with a limited dataset is that trained models suffer from over-fitting and their predictions do not generalize. To address these problems, we developed a Conditional Generative Adversarial Network (CGAN) to produce synthetic images close to real images for the COVID-19 class, combined with traditional augmentation, to expand the limited dataset, which was then used to train a customized deep detection model. The customized deep learning model obtained an excellent detection accuracy of 97% with only ten epochs, and the proposed augmentation outperforms other augmentation techniques. The augmented dataset includes 6988 high-quality, high-resolution COVID-19 X-ray images, whereas there were only 587 original COVID-19 X-ray images.
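The "traditional augmentation" half of such a pipeline can be sketched with simple flips and shifts. The tiny nested-list image below stands in for an X-ray, and the CGAN side is omitted:

```python
# Sketch of traditional image augmentation: horizontal flips and shifts
# applied to a tiny 2-D array standing in for an X-ray image.
def hflip(img):
    return [row[::-1] for row in img]

def shift_right(img, px, fill=0):
    return [[fill] * px + row[:-px] for row in img]

img = [[1, 2, 3],
       [4, 5, 6]]
augmented = [img, hflip(img), shift_right(img, 1)]
print(augmented[1])  # [[3, 2, 1], [6, 5, 4]]
print(augmented[2])  # [[0, 1, 2], [0, 4, 5]]
```

Each transform produces a new labeled sample, which is how a small dataset is expanded before (or alongside) GAN-based synthesis.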

Article
Feature Deep Learning Extraction Approach for Object Detection in Self-Driving Cars

Namareq Odey, Ali Marhoon

Pages: 62-69

PDF Full Text
Abstract

Self-driving cars have been a fundamental research subject in recent years; the ultimate goal is to completely replace the human driver with automated systems. Deep learning techniques, meanwhile, have shown performance and effectiveness in several areas. Self-driving cars have been deeply investigated in many areas, including object detection, localization, and activity recognition. This paper provides a deep learning approach that combines the benefits of a convolutional neural network (CNN) with dense layers. The approach learns from features produced by linear discriminant analysis (LDA) combined with feature expansion techniques, namely standard deviation, minimum, maximum, mode, variance, and mean. The presented approach has proven successful on both training and testing data, achieving 100% accuracy on both.

Article
Deep learning and IoT for Monitoring Tomato Plant

Marwa Abdulla, Ali Marhoon

Pages: 70-78

PDF Full Text
Abstract

Agriculture is the primary food source for humans and livestock and the primary source of the economy for many countries; the majority of each country's population, and of the world, depends on it. Still, farmers currently face difficulty meeting the requirements of agriculture for many reasons, including varying and extreme weather conditions, water availability and quality, and more. This paper applies the Internet of Things and deep learning to establish a smart farming system that monitors, from a mobile phone, the environmental conditions affecting tomato plants. Deep learning networks were trained on a dataset taken from PlantVillage and collected from Google Images to classify tomato diseases, obtaining a test accuracy of 97%; this high accuracy led to publishing the model in the mobile application for classification. Using the IoT, a monitoring and automatic irrigation system was built and controlled remotely through the mobile phone to monitor the environmental conditions surrounding the plant, such as air temperature and humidity, soil moisture, water quality, and carbon dioxide concentration. The designed system proved efficient when tested in terms of disease classification, remote irrigation, and monitoring of the environmental conditions surrounding the plant, giving alerts when sensor values fall outside the range beyond which the plant is damaged, so the farmer can take appropriate action at the right time to prevent any damage and thus obtain a high-quality product.

Article
License Plate Detection and Recognition in Unconstrained Environment Using Deep Learning

Heba Hakim, Zaineb Alhakeem, Hanadi Al-Musawi, Mohammed A. Al-Ibadi, Alaa Al-Ibadi

Pages: 210-220

PDF Full Text
Abstract

Real-time detection and recognition systems for vehicle license plates present a significant design and implementation challenge, arising from factors such as low image resolution, data noise, and various weather and lighting conditions. This study presents an efficient automated system for the identification and classification of vehicle license plates, utilizing deep learning techniques. The system is specifically designed for Iraqi vehicle license plates, adapting to various backgrounds, different font sizes, and non-standard formats, and is intended to be integrated into an automated entrance gate security system. The system's framework encompasses two primary phases: license plate detection (LPD) and character recognition (CR). The advanced deep learning technique YOLOv4 is used for both phases owing to its adeptness at real-time data processing and its remarkable precision in identifying small entities such as the characters on license plates. The LPD phase focuses on identifying and isolating license plates in images, whereas the CR phase is dedicated to identifying and extracting the characters from the detected plates. A substantial dataset comprising Iraqi vehicle images captured under various lighting and weather circumstances was amassed for training and testing. The system attained a noteworthy accuracy of 95.07%, coupled with an average processing time of 118.63 milliseconds for complete end-to-end operation on the specified dataset, highlighting its suitability for real-time applications. The results suggest that the proposed system can significantly enhance the efficiency and reliability of vehicle license plate recognition in various environmental conditions, making it suitable for implementation in security and traffic management contexts.

Article
Wavelet-based Hybrid Learning Framework for Motor Imagery Classification

Z. T. Al-Qaysi, Ali Al-Saegh, Ahmed Faeq Hussein, M. A. Ahmed

Pages: 47-56

PDF Full Text
Abstract

Owing to their vital applications in many real-world situations, researchers continue to present numerous methods for better analysis of motor imagery (MI) electroencephalograph (EEG) signals. In general, however, EEG signals are complex because of their nonstationarity and high dimensionality, so great care is needed in both feature extraction and classification. In this paper, several hybrid classification models are built and their performance is compared. Three well-known mother wavelet functions are used to generate scalograms from the raw signals; the scalograms are used for transfer learning with the well-known VGG-16 deep network, and one of six classifiers then determines the class of the input signal. The performance of different combinations of mother functions and classifiers is compared on two MI EEG datasets. Several evaluation metrics show that a model combining the VGG-16 feature extractor with a neural network classifier using the Amor mother wavelet outperforms the results of state-of-the-art studies.

Article
Towards Designing an Intelligent Health Care System Based on Machine Learning

Nada Ali Noori, Ali A. Yassin

Pages: 120-128

PDF Full Text
Abstract

Health Information Technology (HIT) provides many opportunities for transforming and improving health care systems. HIT enhances the quality of health care delivery, reduces medical errors, increases patient safety, facilitates care coordination, monitors updated data over time, improves clinical outcomes, and strengthens the interaction between patients and health care providers. Living in modern large cities has a significant negative impact on people's health, for instance through the increased risk of chronic diseases such as diabetes. Given the rising morbidity of the last decade, the number of patients with diabetes worldwide will exceed 642 million in 2040, meaning that one in every ten adults will be affected. Previous research on diabetes mellitus indicates that early diagnosis can reduce death rates and overcome many problems. In this regard, machine learning (ML) techniques show promising results in using medical data to predict diabetes at an early stage and save people's lives. In this paper, we propose an intelligent health care system based on ML methods as a real-time monitoring system that detects diabetes mellitus and examines other health issues, such as patients' food and drug allergies. The proposed system uses five machine learning methods: K-Nearest Neighbors, Naïve Bayes, Logistic Regression, Random Forest, and Support Vector Machine (SVM), and selects the classification method with the highest accuracy to optimize the diagnosis of patients with diabetes. The experimental results show that in the proposed system, the SVM classifier has the highest accuracy, 83%.
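The system's select-the-best-classifier step can be sketched as follows. The two toy classifiers, the single glucose-like feature, and the fixed cutoff are hypothetical stand-ins for the paper's five models and clinical features:

```python
# Sketch of model selection: score each candidate classifier on held-out
# data and keep the most accurate one. Classifiers and data are toy
# stand-ins, not the paper's models or patient records.
def knn3(train, x):
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:3]
    votes = [y for _, y in nearest]
    return max(set(votes), key=votes.count)

def threshold_clf(train, x):
    return 1 if x >= 100 else 0   # illustrative fixed cutoff, ignores train

train = [(90, 0), (100, 0), (110, 0), (140, 1), (150, 1), (160, 1)]
test = [(95, 0), (105, 0), (145, 1), (130, 1), (120, 0)]

def accuracy(clf):
    return sum(clf(train, x) == y for x, y in test) / len(test)

scores = {"3-NN": accuracy(knn3), "threshold": accuracy(threshold_clf)}
best = max(scores, key=scores.get)
print(scores, "->", best)
```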

Article
Iraqi License Plate Detection and Segmentation based on Deep Learning

Ghida Yousif Abbass, Ali Fadhil Marhoon

Pages: 102-107

PDF Full Text
Abstract

Nowadays, the trend is to use Artificial Intelligence techniques to replace the human mind in problem solving. Vehicle License Plate Recognition (VLPR) is one such problem, in which the computer outperforms the human being in processing speed and accuracy of results, and the emergence of deep learning techniques enhances and simplifies the task. This work focuses on detecting Iraqi license plates with the SSD deep learning algorithm, then segmenting the plate using horizontal and vertical shredding; finally, the K-Nearest Neighbors (KNN) algorithm is used to determine the type of car. The proposed system was evaluated on a group of 500 different Iraqi vehicles, and the results show 98% accuracy for plate detection and 96% for the segmentation operation.

Article
A Dataset for Kinship Estimation from Image of Hand Using Machine Learning

Sarah Ibrahim Fathi, Mazin H. Aziz

Pages: 127-136

PDF Full Text
Abstract

Kinship (familial relationship) detection is crucial in many fields and has applications in biometric security, adoption, forensic investigations, and more. It is also essential during wars and natural disasters like earthquakes, since it may aid in reunions, missing-person searches, establishing emergency contacts, and providing psychological support. The most common method of determining kinship is DNA analysis, which is highly accurate. Another approach, which is noninvasive, uses facial photos with computer vision and machine learning algorithms for kinship estimation. Each part of the human body has its own embedded information that can be extracted and used for identification, verification, or classification of that person. Kinship recognition is based on finding traits that are shared within a family. We investigate the use of hand geometry for kinship detection, which is a new approach. Because the available hand image datasets do not contain kinship ground truth, we created our own dataset. This paper describes the tools, methodology, and details of the collected MKH (Mosul Kinship Hand) images dataset. The images of the MKH dataset were collected using a mobile phone camera with a suitable setup and consist of 648 images of 81 individuals from 14 families (8 hand poses per person). This paper also presents the use of this dataset for kinship prediction using machine learning. Google MediaPipe was used for hand detection, segmentation, and geometrical key-point finding. Handcrafted feature extraction was used to extract 43 distinctive geometrical features from each image. A neural network classifier was designed and trained to predict kinship, yielding about 93% prediction accuracy. The results of this novel approach demonstrate that the hand possesses biometric characteristics that may be used to establish kinship, and that the suggested method is a promising kinship indicator.
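The handcrafted-feature idea — geometric distances between hand key points, normalised so camera distance does not matter — can be sketched like this. The four named landmarks and the chosen pairs are hypothetical; MediaPipe actually returns 21 landmarks per hand, and the paper derives 43 features:

```python
import math

def hand_features(landmarks):
    """Normalised pairwise landmark distances as geometric features.
    landmarks: dict name -> (x, y). Distances are divided by the
    wrist-to-middle-fingertip span so features are scale invariant."""
    scale = math.dist(landmarks["wrist"], landmarks["middle_tip"])
    pairs = [("wrist", "thumb_tip"), ("wrist", "index_tip"),
             ("thumb_tip", "index_tip")]
    return [math.dist(landmarks[a], landmarks[b]) / scale for a, b in pairs]

# Hypothetical key points in normalised image coordinates.
lm = {"wrist": (0.0, 0.0), "middle_tip": (0.0, 1.0),
      "thumb_tip": (0.5, 0.5), "index_tip": (0.2, 0.9)}
print(hand_features(lm))
```

Feature vectors built this way would then be fed to the neural network classifier.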

Article
Content-Based Image Retrieval using Hard Voting Ensemble Method of Inception, Xception, and Mobilenet Architectures

Meqdam A. Mohammed, Zakariya A. Oraibi, Mohammed Abdulridha Hussain

Pages: 145-157

PDF Full Text
Abstract

Advancements in internet accessibility and the affordability of digital image sensors have led to the proliferation of extensive image databases utilized across a multitude of applications. Addressing the semantic gap between low-level attributes and human visual perception has become pivotal in refining Content-Based Image Retrieval (CBIR) methodologies, especially within this context. As this field is intensely researched, numerous efficient algorithms for CBIR systems have surfaced, precipitating significant progress in the artificial intelligence field. In this study, we propose employing a hard voting ensemble approach on features derived from three robust deep learning architectures: Inception, Xception, and MobileNet. This is aimed at bridging the divide between low-level image features and human visual perception. The Euclidean method is adopted to determine the similarity metric between the query image and the features database. The outcome was a noticeable improvement in image retrieval accuracy. We applied our approach to a practical dataset named CBIR 50, which encompasses categories such as mobile phones, cars, cameras, and cats, and the effectiveness of our method was thereby validated. Our approach outshone existing CBIR algorithms with superior accuracy (ACC), precision (PREC), recall (REC), and F1-score (F1-S), proving to be a noteworthy addition to the field of CBIR. Our proposed methodology could potentially be extended to various other sectors, including medical imaging and surveillance systems, where image retrieval accuracy is of paramount importance.

Article
Energy Demand Prediction Based on Deep Learning Techniques

Sarab Shanan Swide, Ali F. Marhoon

Pages: 83-89

PDF Full Text
Abstract

The development of renewable resources and the deregulation of the market have made forecasting energy demand more critical in recent years. Advanced intelligent models are created to ensure accurate power projections over several time horizons to address new difficulties. Intelligent forecasting algorithms are a fundamental component of smart grids and a powerful tool for reducing uncertainty in order to make more cost- and energy-efficient decisions about generation scheduling, system reliability and power optimization, and profitable smart grid operations. However, since many crucial tasks of power operators, such as load dispatch, rely on short-term forecasts, high prediction accuracy in forecasting algorithms is desired. This paper proposes a model for estimating Denmark's power use that can precisely forecast the month's demand, identifying factors that may influence the pattern of the city's direct electricity consumption. The paper also demonstrates how an ensemble deep learning technique and a random forest can dramatically increase prediction accuracy. In addition to their ensemble, we show how well the individual random forest performs.
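One simple way to combine a deep model with a random forest, as the abstract's ensemble does, is to average their per-step forecasts. This is a minimal sketch with hypothetical demand values; the paper's actual combination rule is not specified here:

```python
def ensemble_forecast(model_outputs, weights=None):
    """Combine per-model demand forecasts by (weighted) averaging.
    model_outputs: list of forecast lists, one per model."""
    n_models = len(model_outputs)
    weights = weights or [1 / n_models] * n_models
    horizon = len(model_outputs[0])
    return [sum(w * m[t] for w, m in zip(weights, model_outputs))
            for t in range(horizon)]

deep_pred = [310.0, 325.0, 340.0]   # hypothetical MWh forecasts
rf_pred   = [300.0, 335.0, 330.0]
print(ensemble_forecast([deep_pred, rf_pred]))  # -> [305.0, 330.0, 335.0]
```

Weighted variants let a validation set decide how much each model contributes.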

Article
A Self Learning Fuzzy Logic Controller for Ship Steering System

Ammar A. Aldair

Pages: 25-34

PDF Full Text
Abstract

A self-learning fuzzy logic controller for ship steering systems is proposed in this paper. Due to the high nonlinearity of the ship steering system, the performance of traditional control algorithms is not satisfactory in practice. An intelligent control system is designed for controlling the heading direction of ships to improve the efficiency of transportation, the convenience of manoeuvring ships, and the safety of navigation. The design of fuzzy controllers is usually performed in an ad hoc manner, where it is hard to justify the choice of some fuzzy control parameters, such as the parameters of the membership functions. In this paper, a self-tuning algorithm is used to adjust the parameters of the fuzzy controller. Simulation results demonstrate the efficiency of the proposed algorithm in designing a fuzzy controller for the ship steering system.
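The parameters being tuned are typically those of membership functions such as the triangular one below. The `tune_peak` update is only a stand-in for the paper's adjustment rule, which the abstract does not spell out:

```python
def tri_mf(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def tune_peak(b, error, rate=0.1):
    """One illustrative self-tuning step: shift the peak against the
    heading error (hypothetical rule, not the paper's exact law)."""
    return b - rate * error

print(tri_mf(0.5, 0.0, 1.0, 2.0))  # -> 0.5
print(tune_peak(1.0, 0.2))
```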

Article
Digital Marketing Data Classification by Using Machine Learning Algorithms

Noor Saud Abd, Oqbah Salim Atiyah, Mohammed Taher Ahmed, Ali Bakhit

Pages: 245-256

PDF Full Text
Abstract

Early in the 21st century, as a result of technological advancements, the importance of digital marketing significantly increased as the necessity for digital customer experience, promotion, and distribution emerged. Since 1988, when the term "Digital Marketing" first appeared, the business sector has undergone drastic growth, moving from small startups to massive corporations on a global scale. The marketer must navigate a chaotic environment caused by the vast volume of generated data, and decision-makers must contend with the fact that user data is dynamic and changes every day. Smart applications must be used within enterprises to better evaluate, classify, enhance, and target audiences. Tech-savvy customers are pushing businesses to make bigger financial investments and use cutting-edge technologies, so it was only natural for marketing and trade to follow this development, which speeds up the spread of advertisements and makes it easier to reach and win customers. In this study, we utilized machine learning (ML) algorithms (Decision Tree (DT), K-Nearest Neighbor (KNN), CatBoost, and Random Forest (RF)) for classifying customer data and improving the ability to forecast customer behavior, so that more business can be gained from customers more quickly and easily. The suggested system was tested on the aforementioned dataset. The results show that the system can accurately predict whether a customer will buy something or not: RF had an accuracy of 0.97, DT an accuracy of 0.95, and KNN an accuracy of 0.91, while the CatBoost algorithm had an execution time of 15.04 seconds and gave the best results, with the highest F1-score and accuracy (0.91 and 0.98, respectively). Finally, the study's future goals involve creating a web page, thereby helping banking institutions with speed and forecast accuracy, and using more feature-selection techniques in conjunction with the marketing dataset to improve classification.

Article
Face Recognition-Based Automatic Attendance System in a Smart Classroom

Ahmad S. Lateef, Mohammed Y. Kamil

Pages: 37-47

PDF Full Text
Abstract

The smart classroom is a fully automated classroom where repetitive tasks, including attendance registration, are automatically performed. Due to recent advances in artificial intelligence, traditional attendance registration methods have become challenging. These methods require significant time and effort to complete the process. Therefore, researchers have sought alternative ways to accomplish attendance registration. These methods include identification cards, radio frequency, or biometric systems. However, all of these methods have faced challenges in safety, accuracy, effort, time, and cost. The development of digital image processing techniques, specifically face recognition technology, has enabled automated attendance registration. Face recognition technology is considered the most suitable for this process due to its ability to recognize multiple faces simultaneously. This study developed an integrated attendance registration system based on the YOLOv7 algorithm, which extracts features and recognizes students’ faces using a specially collected database of 31 students from Mustansiriyah University. A comparative study was conducted by applying the YOLOv7 algorithm, a machine learning algorithm, and a combined machine learning and deep learning algorithm. The proposed method achieved an accuracy of up to 100%. A comparison with previous studies demonstrated that the proposed method is promising and reliable for automating attendance registration.

Article
Neural Network-Based Adaptive Control of Robotic Manipulator: Application to a Three Links Cylindrical Robot

Abdul-Basset A. AL-Hussein

Pages: 114-122

PDF Full Text
Abstract

A composite PD and sliding mode neural network (NN)-based adaptive controller for robotic manipulator trajectory tracking is presented in this paper. The designed neural networks are exploited to approximate the robot dynamics' nonlinearities and compensate for their effect, which enhances the performance of the filtered-error-based PD and sliding mode controller. The Lyapunov theorem is used to prove the stability of the system and the boundedness of the tracking error. An augmented Lyapunov function is used to derive the NN weight learning law. To reduce the effect of breaching the NN learning law's excitation condition due to external disturbances and measurement noise, a modified learning law is suggested based on the e-modification algorithm. The controller's effectiveness is demonstrated through computer simulation of a cylindrical robot manipulator.

Article
Human Activity and Gesture Recognition Based on WiFi Using Deep Convolutional Neural Networks

Sokienah K. Jawad, Musaab Alaziz

Pages: 110-116

PDF Full Text
Abstract

WiFi-based human activity and gesture recognition explore the interaction between human hand or body movements and the reflected WiFi signals to identify various activities. This type of recognition has received much attention in recent years since it does not require wearing special sensors or installing cameras. This paper aims to investigate human activity and gesture recognition schemes that use Channel State Information (CSI) provided by WiFi devices. To achieve high accuracy in the measurement, deep learning models such as AlexNet, VGG19, and SqueezeNet were used for classification and extracting features automatically. Firstly, outliers are removed from the amplitude of each CSI stream during the preprocessing stage by using the Hampel identifier algorithm. Next, RGB images are created for each activity to feed as input to Deep Convolutional Neural Networks. After that, data augmentation is implemented to reduce the overfitting problems in deep learning models. Finally, the proposed method is evaluated on a publicly available dataset called WiAR, which contains 10 volunteers, each of whom executes 16 activities. The experiment results demonstrate that AlexNet, VGG19, and SqueezeNet all have high recognition accuracy of 99.17%, 96.25%, and 100%, respectively.
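The Hampel identifier used in the preprocessing stage flags a sample as an outlier when it deviates from the local median by more than a few scaled median absolute deviations (MAD), then replaces it with that median. A minimal sketch on a hypothetical CSI amplitude stream:

```python
import statistics

def hampel_filter(series, window=3, n_sigmas=3.0):
    """Replace outliers with the local median. A point is an outlier when
    it deviates from the window median by more than n_sigmas * 1.4826 * MAD."""
    k = 1.4826  # scales MAD to a Gaussian-consistent sigma estimate
    out = list(series)
    for i in range(len(series)):
        lo, hi = max(0, i - window), min(len(series), i + window + 1)
        med = statistics.median(series[lo:hi])
        mad = statistics.median(abs(v - med) for v in series[lo:hi])
        if mad > 0 and abs(series[i] - med) > n_sigmas * k * mad:
            out[i] = med
    return out

csi = [1.0, 1.1, 0.9, 9.0, 1.0, 1.05, 0.95]  # 9.0 is a spike
print(hampel_filter(csi))
```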

Article
Internet of Things Based Oil Pipeline Spill Detection System Using Deep Learning and LAB Colour Algorithm

Muhammad H. Obaid, Ali H. Hamad

Pages: 137-148

PDF Full Text
Abstract

Given the role that pipelines play in transporting crude oil, the basis of the global economy, across different environments, hundreds of studies revolve around providing them with the necessary protection. Various technologies have been employed in this pursuit, differing in terms of cost, reliability, and efficiency, among other factors. Computer vision has emerged as a prominent technique in this field, albeit one requiring a robust image-processing algorithm for spill detection. This study employs image segmentation techniques to enable the computer to interpret visual information and images effectively. The research focuses on detecting spills from leaking oil pipes, utilizing images captured by a drone equipped with a Raspberry Pi and Pi camera. These images, along with their global positioning system (GPS) location, are transmitted to the base station using the message queuing telemetry transport (MQTT) Internet of Things (IoT) protocol. At the base station, deep learning techniques, specifically Holistically-Nested Edge Detection (HED) and extreme inception (Xception) networks, are employed for image processing to identify contours. The proposed algorithm can detect multiple contours in the images. To pinpoint a contour with a black color, representative of an oil spill, the CIELAB color space (LAB) algorithm effectively removes shadow effects. If a contour is detected, its area and perimeter are calculated to determine whether they exceed a certain threshold. The effectiveness of the proposed system was tested on Iraqi oil pipeline systems, demonstrating its capability to detect spills of different sizes.
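The final area/perimeter threshold check can be sketched with the shoelace formula on a detected contour's vertex list. The thresholds below are arbitrary placeholders, not values from the paper:

```python
import math

def contour_area(points):
    """Polygon area by the shoelace formula (vertices in order)."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

def contour_perimeter(points):
    """Sum of edge lengths around the closed contour."""
    n = len(points)
    return sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))

def is_spill(points, min_area=50.0, min_perimeter=20.0):
    """Flag a dark contour as a spill only if it is large enough."""
    return (contour_area(points) >= min_area
            and contour_perimeter(points) >= min_perimeter)

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(contour_area(square), contour_perimeter(square), is_spill(square))
```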

Article
Speed Control of BLDC Motor Based on Recurrent Wavelet Neural Network

Adel A. Obed, Ameer L. Saleh

Pages: 118-129

PDF Full Text
Abstract

In recent years, artificial intelligence techniques such as wavelet neural networks have been applied to control the speed of the BLDC motor drive. The BLDC motor is a multivariable and nonlinear system due to variations in stator resistance and moment of inertia; therefore, it is not easy to obtain good performance by applying a conventional PID controller. In this paper, a Recurrent Wavelet Neural Network (RWNN) is combined in parallel with a PID controller to produce a modified controller, called the RWNN-PID controller, which combines the capability of artificial neural networks to learn from the BLDC motor drive with the capability of wavelet decomposition for the identification and control of dynamic systems, while also having the ability of self-learning and self-adapting. The proposed controller is applied to control the speed of the BLDC motor and provides better performance than conventional controllers over a wide range of speeds. The parameters of the proposed controller are optimized using the Particle Swarm Optimization (PSO) algorithm. Simulation results show that the BLDC motor drive with the RWNN-PID controller achieves better performance and stability compared with conventional PID and classical WNN-PID controllers.
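A minimal PSO loop, of the kind used to tune controller parameters, can be sketched as below. The quadratic cost and the two "gains" are hypothetical stand-ins for the paper's RWNN-PID parameters and its control-performance cost:

```python
import random

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0, seed=1):
    """Minimal particle swarm optimisation minimising f over [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]           # each particle's best position
    gbest = min(pbest, key=f)[:]          # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Tune two hypothetical controller gains by minimising a quadratic cost.
cost = lambda g: (g[0] - 2.0) ** 2 + (g[1] + 1.0) ** 2
best = pso(cost, dim=2)
print(best)  # close to the optimum [2.0, -1.0]
```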

Article
PLC/HMI Based Portable Workbench for PLC and Digital Logic Learning and Application Development

Jawad Radhi Mahmood, Ramzy Salim Ali

Pages: 83-96

PDF Full Text
Abstract

A programmable logic controller (PLC) uses digital logic circuits and their operating concepts in its hardware structure and in its programming instructions and algorithms. Therefore, a deep understanding of these two items is essential for developing control applications using the PLC, and this is only possible through practical exposure to the various components and instructions of these two items and their applications. In this work, a user-friendly and re-configurable ladder and digital logic learning and application development platform has been designed and implemented using a programmable logic controller (PLC), a Human Machine Interface (HMI) panel, four magnetic contactors, one single-phase power line controller, and one Variable Frequency Drive (VFD) unit. The PLC's role is to implement the ladder and digital logic functions. The HMI's role is to establish the virtual circuit wiring and to drive and monitor the developed application in real time. The magnetic contactors play the role of industrial field actuators or link the developed application's control circuit to another field actuator, such as a three-phase induction motor. The single-phase power line controller supports applications like that of a soft starter. The VFD supports induction-motor-driven applications such as the cut-to-length process, in which steel coils are uncoiled and passed through a cutting blade to be cut into required lengths. The proposed platform has been tested through the development of 14 application examples, and the test results proved its validity.

Article
Shapley Value is an Equitable Metric for Data Valuation

Seyedamir Shobeiri, Mojtaba Aajami

Pages: 9-14

PDF Full Text
Abstract

Low-quality data can be dangerous for machine learning models, especially in critical situations. Some large-scale datasets contain low-quality data and false labels; image datasets, in particular, often contain artifacts and biases from measurement errors. Therefore, automatic algorithms that are able to recognize low-quality data are needed. In this paper, the Shapley value, a metric for data valuation, is used to quantify the value of training data to the performance of a classification algorithm on the large ImageNet dataset. We demonstrate the success of data Shapley in distinguishing low-quality from valuable data for classification. We find that model performance increases when data with low Shapley values are removed, whilst classification performance declines when data with high Shapley values are removed. Moreover, there were more true labels among high-Shapley-value data and more mislabeled samples among low-Shapley-value data. The results show that mislabeled or poor-quality images have low Shapley values, while data valuable for classification have high Shapley values.
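The data Shapley value of a training point is its average marginal contribution to a utility (e.g. model accuracy) over all orderings of the data. Exact enumeration is only feasible for tiny sets — practical methods sample permutations — but the definition itself fits in a few lines; the toy utility below is an assumption for illustration:

```python
from itertools import permutations

def shapley_values(points, utility):
    """Exact data Shapley: average marginal contribution of each point
    over all orderings (feasible only for tiny datasets)."""
    values = {p: 0.0 for p in points}
    perms = list(permutations(points))
    for order in perms:
        seen = []
        for p in order:
            before = utility(seen)
            seen.append(p)
            values[p] += utility(seen) - before
    return {p: v / len(perms) for p, v in values.items()}

# Toy utility: clean points add 1 each; the mislabeled point "bad" adds nothing.
utility = lambda subset: sum(1 for p in subset if p != "bad")
print(shapley_values(["a", "b", "bad"], utility))
```

As the paper observes at scale, the mislabeled point receives a low value while useful points receive high ones.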

Article
A Review on Voice-based Interface for Human-Robot Interaction

Ameer A. Badr, Alia K. Abdul-Hassan

Pages: 91-102

PDF Full Text
Abstract

With the recent developments of technology and the advances in artificial intelligence and machine learning techniques, it has become possible for the robot to understand and respond to voice as part of Human-Robot Interaction (HRI). The voice-based interface robot can recognize the speech information from humans so that it will be able to interact more naturally with its human counterpart in different environments. In this work, a review of the voice-based interface for HRI systems has been presented. The review focuses on voice-based perception in HRI systems from three facets, which are: feature extraction, dimensionality reduction, and semantic understanding. For feature extraction, numerous types of features have been reviewed in various domains, such as time, frequency, cepstral (i.e. implementing the inverse Fourier transform for the signal spectrum logarithm), and deep domains. For dimensionality reduction, subspace learning can be used to eliminate the redundancies of high-dimensional features by further processing extracted features to reflect their semantic information better. For semantic understanding, the aim is to infer from the extracted features the objects or human behaviors. Numerous types of semantic understanding have been reviewed, such as speech recognition, speaker recognition, speaker gender detection, speaker gender and age estimation, and speaker localization. Finally, some of the existing voice-based interface issues and recommendations for future works have been outlined.

Article
A Hybrid Lung Cancer Model for Diagnosis and Stage Classification from Computed Tomography Images

Abdalbasit Mohammed Qadir, Peshraw Ahmed Abdalla, Dana Faiq Abd

Pages: 266-274

PDF Full Text
Abstract

Detecting pulmonary cancers at early stages is difficult but crucial for patient survival. Therefore, it is essential to develop an intelligent, autonomous, and accurate lung cancer detection system that shows great reliability compared to previous systems and research. In this study, we have developed an innovative lung cancer detection system known as the Hybrid Lung Cancer Stage Classifier and Diagnosis Model (Hybrid-LCSCDM). This system simplifies the complex task of diagnosing lung cancer by categorizing patients into three classes: normal, benign, and malignant, by analyzing computed tomography (CT) scans using a two-part approach: First, feature extraction is conducted using a pre-trained model called VGG-16 for detecting key features in lung CT scans indicative of cancer. Second, these features are then classified using a machine learning technique called XGBoost, which sorts the scans into three categories. A dataset, IQ-OTH/NCCD - Lung Cancer, is used to train and evaluate the proposed model to show its effectiveness. The dataset consists of the three aforementioned classes containing 1190 images. Our suggested strategy achieved an overall accuracy of 98.54%, while the classification precision among the three classes was 98.63%. Considering the accuracy, recall, and precision as well as the F1-score evaluation metrics, the results indicated that when using solely computed tomography scans, the proposed (Hybrid-LCSCDM) model outperforms all previously published models.

Article
Advancements and Challenges in Hand Gesture Recognition: A Comprehensive Review

Bothina Kareem Murad, Abbas H. Hassin Alasadi

Pages: 154-164

PDF Full Text
Abstract

Hand gesture recognition is a quickly developing field with many uses in human-computer interaction, sign language recognition, virtual reality, gaming, and robotics. This paper reviews different ways to model hands, such as vision-based, sensor-based, and data glove-based techniques. It emphasizes the importance of accurate hand modeling and feature extraction for capturing and analyzing gestures. Key features like motion, depth, color, shape, and pixel values and their relevance in gesture recognition are discussed. Challenges faced in hand gesture recognition include lighting variations, complex backgrounds, noise, and real-time performance. Machine learning algorithms are used to classify and recognize gestures based on extracted features. The paper emphasizes the need for further research and advancements to improve hand gesture recognition systems’ robustness, accuracy, and usability. This review offers valuable insights into the current state of hand gesture recognition, its applications, and its potential to revolutionize human-computer interaction and enable natural and intuitive interactions between humans and machines. In simpler terms, hand gesture recognition is a way for computers to understand what people are saying with their hands. It has many potential applications, such as allowing people to control computers without touching them or helping people with disabilities communicate. The paper reviews different ways to develop hand gesture recognition systems and discusses the challenges and opportunities in this area.

Article
An Assessment of Ensemble Voting Approaches, Random Forest, and Decision Tree Techniques in Detecting Distributed Denial of Service (DDoS) Attacks

Mustafa S. Ibrahim Alsumaidaie, Khattab M. Ali Alheeti, Abdul Kareem Alaloosy

Pages: 16-24

PDF Full Text
Abstract

The reliance on networks and systems has grown rapidly in contemporary times, leading to increased vulnerability to cyber assaults. The Distributed Denial-of-Service (DDoS) attack is a threat that can cause great financial liability and reputation damage. To address this problem, Machine Learning (ML) algorithms have gained huge attention, enabling the detection and prevention of DDoS attacks. In this study, we propose a novel security mechanism, based on an ensemble learning methodology, that differentiates between normal network traffic and the malicious flood of DDoS attack traffic. The study also evaluates the performance of two well-known ML algorithms, the decision tree and the random forest, which were used to implement the proposed method for defending against DDoS attacks. We test the models using a publicly available dataset called TIME SERIES DATASET FOR DISTRIBUTED DENIAL OF SERVICE ATTACK DETECTION and compare their performance using a list of evaluation metrics. Developing the model involves fetching the data, preprocessing it, splitting it into training and testing subgroups, and performing model selection and validation. When applied to a database of nearly 11,000 time series, the proposed approach showed promising results, in some cases reaching an accuracy (ACC) of up to 100% on the dataset. Ultimately, the proposed method detects and mitigates distributed denial-of-service attacks, securing communication systems against this growing cyber threat by preventing attacks from succeeding.
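The evaluation metrics mentioned — accuracy, precision, recall, F1 — all derive from the binary confusion counts; a minimal sketch, with DDoS traffic as the positive class and hypothetical labels:

```python
def confusion_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall and F1 from binary labels
    (positive = DDoS traffic, negative = normal traffic)."""
    tp = sum(t == positive == p for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

y_true = [1, 1, 1, 0, 0, 0]   # hypothetical ground truth
y_pred = [1, 1, 0, 0, 0, 1]   # hypothetical classifier output
print(confusion_metrics(y_true, y_pred))
```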

Article
Distribution Networks Reconfiguration for Power Loss Reduction and Voltage Profile Improvement Using Hybrid TLBO-BH Algorithm

Arsalan Hadaeghi, Ahmadreza Abdollahi Chirani

Pages: 12-20

PDF Full Text
Abstract

In this paper, a new method based on the combination of the Teaching-learning-based-optimization (TLBO) and Black-hole (BH) algorithm has been proposed for the reconfiguration of distribution networks in order to reduce active power losses and improve voltage profile in the presence of distributed generation sources. The proposed method is applied to the IEEE 33-bus radial distribution system. The results show that the proposed method can be a very promising potential method for solving the reconfiguration problem in distribution systems and has a significant effect on loss reduction and voltage profile improvement.

Article
Deep Learning Video Prediction Based on Enhanced Skip Connection

Zahraa T. Al Mokhtar, Shefa A. Dawwd

Pages: 195-205

PDF Full Text
Abstract

Video prediction methods have progressed rapidly, especially after the great revolution in deep learning. Prediction architectures based on pixel generation produce blurry forecasts, but they are preferred in many applications because such models operate on frames alone and do not need supporting information such as segmentation or flow maps, which would make obtaining a suitable dataset very difficult. In this approach, we present a novel end-to-end video forecasting framework to predict the dynamic relationship between pixels in time and space. A 3D CNN encoder is used for estimating the dynamic motion, while the decoder reconstructs the next frame, aided by a 3D CNN ConvLSTM2D added in the skip connection. This novel skip connection plays an important role in reducing prediction blur and preserving spatial and dynamic information, which increases the accuracy of the whole model. KITTI and Cityscapes are used for training, and Caltech is used for inference. The proposed framework achieves better quality (PSNR = 33.14, MSE = 0.00101, SSIM = 0.924) with a small number of parameters (2.3 M).
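Two of the reported metrics are directly related: PSNR is computed from the MSE and the peak pixel value. A small sketch, assuming intensities scaled to [0, 1]:

```python
import math

def psnr(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error,
    with pixel intensities scaled to [0, max_val]."""
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

print(round(psnr(0.001), 2))  # 10 * log10(1000) = 30.0 dB
```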

Article
Classification Algorithms for Determining Handwritten Digit

Hayder Naser Khraibet AL-Behadili

Pages: 96-102

PDF Full Text
Abstract

Data-intensive science is a critical science paradigm that intersects with all other sciences. Data mining (DM) is a powerful and useful technology, with a wide base of potential users, that focuses on important meaningful patterns and discovers new knowledge from a collected dataset. Any predictive task in DM uses some attributes to classify an unknown class. Classification algorithms are a class of prominent mathematical techniques in DM, and constructing a model is their core aspect; however, their performance depends highly on the algorithm's behavior when manipulating data. Focusing on binarization as a preprocessing approach, this paper analyzes and evaluates different classification algorithms, based on accuracy, when constructing a model for the classification task. The Modified National Institute of Standards and Technology (MNIST) handwritten digits dataset provided by Yann LeCun is used for evaluation. The paper focuses on machine learning approaches for handwritten digit detection; machine learning provides classification methods such as K-Nearest Neighbor (KNN), Decision Tree (DT), and Neural Networks (NN). Results showed that the knowledge-based method, i.e. the NN algorithm, is more accurate in determining the digits, as it reduces the error rate. The implication of this evaluation is to provide essential insights for computer scientists and practitioners in choosing the DM technique that best fits their data.
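The binarization preprocessing the paper focuses on simply thresholds each grayscale pixel to 0 or 1 before the classifiers see it. A minimal sketch on a hypothetical 3x3 patch (MNIST images are 28x28 with pixel values 0-255):

```python
def binarize(image, threshold=128):
    """Map grayscale pixels (0-255) to 0/1 -- the binarization
    preprocessing step applied before classification."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

digit_patch = [[0, 200, 255],
               [30, 150, 40],
               [0, 255, 0]]
print(binarize(digit_patch))  # -> [[0, 1, 1], [0, 1, 0], [0, 1, 0]]
```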

Article
A Modified Wavenet-Based Link Status Predictor for Computer Networks

Jassim M. Abdul-Jabbar, Omar A. Hazim

Pages: 48-57

PDF Full Text
Abstract

In this paper, a modified wavelet neural network (WNN, or wavenet)-based predictor is introduced to predict the link status (congestion with load indication) of each link in a computer network. In contrast to previous wavenet-based predictors, the proposed modified wavenet-based link state predictor (MWBLSP) generates two indicating outputs, for the congestion and load status of each link, based on the premeasured power burden (squared values) of utilization on each link in previous time intervals. WNNs possess all the learning and generalization capabilities of traditional neural networks; in addition, their ability to deal with sudden changes and burst network load is efficiently enhanced by the local characteristics of wavelet functions. The use of power-burden utilization at the predictor input supports some nonlinear distributions of the predicted values in a more efficient manner. The proposed MWBLSP predictor can be used in the context of active congestion control and link load-balancing techniques to improve the performance of all links in the network with the best utilization of network resources.

Article
Fuzzy-Neural Petri Net Distributed Control System Using Hybrid Wireless Sensor Network and CAN Fieldbus

Ali A. Abed, Abduladhem A. Ali, Nauman Aslam, Ali F. Marhoon

Pages: 54-70

PDF Full Text
Abstract

The reluctance of industry to allow wireless paths to be incorporated in process control loops has limited the potential applications and benefits of wireless systems. The challenge is to maintain the performance of a control loop, which is degraded by slow data rates and delays in a wireless path. To overcome these challenges, this paper presents an application-level design for a wireless sensor/actuator network (WSAN) based on the "automated architecture". The resulting WSAN system is used in developing a wireless distributed control system (WDCS). The implementation of our wireless system involves building a wireless sensor network (WSN) for data acquisition and a controller area network (CAN) fieldbus system for plant actuation. The sensor/actuator system is controlled by an intelligent digital control algorithm built around a velocity PID-like Fuzzy Neural Petri Net (FNPN) controller. This control system satisfies two important real-time requirements, bumpless transfer and anti-windup, which are needed when manual/auto switching is adopted in the system. The intelligent controller is trained by a learning algorithm based on back-propagation. The concept of the Petri net is used in the development of the FNN to obtain a correlation between the error at the controller input and the number of rules of the fuzzy-neural controller, leading to a reduction in the number of active rules. The resulting controller, called the robust fuzzy neural Petri net (RFNPN) controller, was created as a software model developed in MATLAB. The developed concepts were evaluated through simulations and validated by real-time experiments on a plant with a water bath for temperature control. The effect of disturbance is also studied to prove the system's robustness.

Article
EEG Motor-Imagery BCI System Based on Maximum Overlap Discrete Wavelet Transform (MODWT) and Machine learning algorithm

Samaa S. Abdulwahab, Hussain K. Khleaf, Manal H. Jassim

Pages: 38-45

PDF Full Text
Abstract

The ability of the human brain to communicate with its environment has become a reality through the use of a Brain-Computer Interface (BCI)-based mechanism. Electroencephalography (EEG) has gained popularity as a non-invasive way of brain connection. Traditionally, the devices were used in clinical settings to detect various brain diseases. However, as technology advances, companies such as Emotiv and NeuroSky are developing low-cost, easily portable EEG-based consumer-grade devices that can be used in application domains such as gaming and education. This article discusses the areas in which EEG has been applied and how it has proven beneficial for those with severe motor disorders, for rehabilitation, and as a form of communicating with the outside world. This article examines the use of the SVM, k-NN, and decision tree algorithms to classify EEG signals. To minimize the complexity of the data, the maximum overlap discrete wavelet transform (MODWT) is used to extract EEG features. The mean inside each window sample is calculated using the sliding window technique. The support vector machine (SVM), k-nearest neighbor (k-NN), and optimized decision tree classifiers are then loaded with the feature vectors.
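The sliding-window mean step described above reduces each window of samples to a single feature. A minimal sketch on a synthetic 10 Hz mu-rhythm-like signal; the sampling rate, window length, and step are illustrative assumptions, not the paper's settings:

```python
import math

def sliding_window_means(signal, window, step):
    """Mean of each length-`window` slice, advanced by `step` samples."""
    return [sum(signal[i:i + window]) / window
            for i in range(0, len(signal) - window + 1, step)]

# synthetic 10 Hz oscillation sampled at 250 Hz for one second
fs = 250
sig = [math.sin(2 * math.pi * 10 * n / fs) for n in range(fs)]

# non-overlapping 100 ms windows: each spans one full 10 Hz period,
# so the mean of a pure oscillation is ~0 and only slower drifts survive
feats = sliding_window_means(sig, window=25, step=25)
```

In a real pipeline the windows would run over MODWT sub-band coefficients rather than the raw trace, so each feature summarizes one frequency band over one time window.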

Article
Semantic Segmentation of Aerial Images Using U-Net Architecture

Sarah Kamel Hussein, Khawla Hussein Ali

Pages: 58-63

PDF Full Text
Abstract

Aerial images have very high resolution, yet automated map generation and semantic segmentation of aerial images remain challenging problems: the segmentation process often fails to recover precise details from remote sensing imagery. Hence, we propose the U-Net architecture to address this problem. It consists of two paths. The contracting path (also called the encoder), the first path, is used to capture the image's context; the encoder is simply a stack of convolutional and max-pooling layers. The symmetric expanding path (also called the decoder), the second path, is used to enable exact localization through transposed convolutions. This task is commonly referred to as dense prediction. The network is an end-to-end fully convolutional network (FCN), i.e. it contains only convolutional layers and no dense (fully connected) layers, which is why it can accept images of any size. The performance of the model is evaluated by segmenting images with the proposed U-Net and comparing the resulting accuracy with that of previous methods.

Article
Backward Private Searchable Symmetric Encryption with Improved Locality

Salim S. Bilbul, Ayad I. Abdulsada

Pages: 17-26

PDF Full Text
Abstract

Searchable symmetric encryption (SSE) enables clients to outsource their encrypted documents to a remote server and allows them to search the outsourced data efficiently without violating the privacy of the documents and search queries. Dynamic SSE schemes (DSSE) additionally support update queries, where documents can be added or removed at the expense of leaking more information to the server. Two important privacy notions are addressed in DSSE schemes: forward and backward privacy. The first prevents associating newly added documents with previously issued search queries, while the second ensures that deleted documents cannot be linked with subsequent search queries. Backward privacy has three formal types of leakage, ordered from strong to weak security: Type-I, Type-II, and Type-III. In this paper, we propose a new DSSE scheme that achieves Type-II backward and forward privacy by generating fresh keys for each search query and preventing the server from learning the underlying operation (del or add) included in an update query. Our scheme improves I/O performance and search cost. We implement our scheme and compare its efficiency against the most efficient backward-private DSSE schemes in the literature with the same leakage: MITRA and MITRA*. Results show that our scheme outperforms the previous schemes in terms of efficiency in dynamic environments. In our experiments, the server takes 699 ms to search and return 100,000 results.
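The "fresh keys per search" idea can be illustrated with a toy forward-private index: update tokens for a keyword are derived from a per-keyword key, and after each search the client rotates that key, so tokens revealed by a past search say nothing about future updates. All names here are illustrative assumptions; this is not the paper's scheme, and it omits the deletion handling and re-encryption step a real Type-II construction needs:

```python
import hmac, hashlib, os

def prf(key, msg):
    # pseudorandom function instantiated with HMAC-SHA256
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

class ToyDSSEClient:
    def __init__(self):
        self.keys = {}    # keyword -> (current key, update counter)

    def update_token(self, keyword, doc_id):
        key, cnt = self.keys.get(keyword, (os.urandom(32), 0))
        self.keys[keyword] = (key, cnt + 1)
        # server stores address -> doc_id without learning the keyword
        return prf(key, str(cnt).encode()), doc_id

    def search_tokens(self, keyword):
        key, cnt = self.keys.get(keyword, (os.urandom(32), 0))
        tokens = [prf(key, str(i).encode()) for i in range(cnt)]
        # rotate: future updates use a fresh key old tokens cannot predict
        self.keys[keyword] = (os.urandom(32), 0)
        return tokens

class ToyServer:
    def __init__(self):
        self.store = {}
    def add(self, addr, doc_id):
        self.store[addr] = doc_id
    def search(self, tokens):
        return [self.store[t] for t in tokens if t in self.store]

client, server = ToyDSSEClient(), ToyServer()
for doc in ("d1", "d2"):
    server.add(*client.update_token("heart", doc))
hits = server.search(client.search_tokens("heart"))
```

In a complete scheme the client would also re-encrypt the matched entries under the fresh key at search time; the sketch only shows why per-search key rotation decouples past tokens from future updates.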

Article
Handwritten Signature Verification Method Using Convolutional Neural Network

Wijdan Yassen A. AlKarem, Eman Thabet Khalid, Khawla H. Ali

Pages: 77-84

PDF Full Text
Abstract

Automatic signature verification methods play a significant role in providing secure and authenticated handwritten signatures in many applications, to prevent forgery problems, particularly in financial institutions and legal-paper transactions. There are two types of handwritten signature verification methods: online (dynamic) and offline (static). Signature verification approaches can also be categorized into two styles: writer-dependent (WD) and writer-independent (WI). Offline signature verification demands highly representative features for the signature image. Although many studies have been proposed for WI offline signature verification, there is still a need to improve overall accuracy. Therefore, the solution proposed in this paper relies on deep learning via a convolutional neural network (CNN) for signature verification to optimize the overall accuracy. The introduced model is trained on an English signature dataset. For evaluation, the deployed model is used to make predictions on new data from an Arabic signature dataset to classify whether a signature is genuine or forged. The overall accuracy obtained is 95.36% on the validation dataset.

Article
CONDENSER AND DEAERATOR CONTROL USING FUZZY-NEURAL TECHNIQUE

Prof. Dr. Abduladhem A. Ali, A'ayad Sh. Mohammed

Pages: 79-96

PDF Full Text
Abstract

A model reference adaptive control of the condenser and deaerator of a steam power plant is presented. A fuzzy-neural identifier is constructed as an integral part of the fuzzy-neural controller. Both forward and inverse identification are presented. In the controller implementation, an indirect controller that propagates the error through the fuzzy-neural identifier using the Back Propagation Through Time (BPTT) learning algorithm, as well as an inverse control structure, are proposed. Simulation results are obtained using a multi-input multi-output (MIMO) fuzzy-neural network. The robustness of the plant is demonstrated through several tests and observations.

Article
Recognition of Cardiac Arrhythmia using ECG Signals and Bio-inspired AWPSO Algorithms

Jyothirmai Digumarthi, V. M. Gayathri, R. Pitchai

Pages: 95-103

PDF Full Text
Abstract

Studies indicate cardiac arrhythmia is one of the leading causes of death in the world. The risk of a stroke may be reduced when an irregular and fast heart rate is diagnosed early. Since it is non-invasive, the electrocardiogram is often used to detect arrhythmias, but manual interpretation is error-prone and time-consuming. For early detection of heart rhythm problems, deep learning models are well suited. In this paper, a hybrid bio-inspired algorithm is proposed by combining the whale optimization algorithm (WOA), a recently developed meta-heuristic, with adaptive particle swarm optimization (APSO), which is used to increase convergence speed. Compared with conventional optimization methods, the two techniques work better together. The MIT-BIH dataset is used for training, testing, and validating the model. Recall, accuracy, and specificity are used to measure the efficiency of the proposed method, which is compared with state-of-the-art methods and achieves an accuracy of 98.25%.
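The particle swarm half of such a hybrid can be sketched in a few lines: each particle is pulled toward its own best position and the swarm's best. This is a minimal plain PSO on a toy objective; the whale-optimization half, the adaptive inertia schedule, and the actual ECG fitness function are omitted, and the swarm size and coefficients are illustrative assumptions:

```python
import random

def pso(f, dim=2, n=20, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # each particle's best
    gbest = min(pbest, key=f)[:]                # swarm's best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)        # toy objective to minimize
best = pso(sphere)
```

An adaptive variant like APSO would replace the fixed 0.7 inertia with a schedule that shrinks it as the swarm converges, trading exploration for exploitation over time.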

Article
FINGERPRINTS IDENTIFICATION USING NEUROFUZZY SYSTEM

Emad S. Jabber, Maytham A. Shahed

Pages: 89-102

PDF Full Text
Abstract

This paper deals with a NeuroFuzzy System (NFS) used for fingerprint identification to determine a person's identity. Each fingerprint is represented by an 8-bit/pixel grayscale image acquired by a scanner. Several operations are performed on the input image before it is presented to the NFS: enhancement of noisy or distorted fingerprint images, and scaling the image to a suitable size, with the maximum pixel values of the grayscale image serving as the NFS inputs. The NFS is trained on one set of fingerprints and tested on another set to illustrate its efficiency in identifying new fingerprints. The results proved that the NFS is an effective and simple method, but several factors affect the efficiency of NFS learning, and changing any one of them affects the NFS results. These factors are: the number of training samples per person, the type and number of membership functions, and the type of fingerprint image used.

Article
Machine Learning Approach Based on Smart Ball COMSOL Multiphysics Simulation for Pipe Leak Detection

Marwa H. Abed, Wasan A. Wali, Musaab Alaziz

Pages: 100-110

PDF Full Text
Abstract

Due to the changing flow conditions during a pipeline's operation, erosion, damage, and failure occur at several locations. Leak prevention and early leak detection are the best pipeline risk mitigation measures, and pipeline models that can simulate these breaches are essential for reducing detection time. In this study, numerical modeling in COMSOL Multiphysics is proposed for different fluid types, velocities, pressure distributions, and temperature distributions. The system consists of 12 meters of 8-inch pipe with a movable 5-inch-diameter ball placed inside. The findings show that dead zones occur more often in oil than in gas, and that the gas phase's low thermal conductivity facilitates pipe insulation. Fluid mixing improves at 2.5 m/s, where the temperature is lowest. Oil's viscosity and dead zones lower the maximum pressure more than water and gas do, and pressure decreases as maximum velocity increases, and vice versa. The acquired oil data set is used to train Support Vector Machine and Decision Tree models in MATLAB R2021a. The classification results reveal that the Support Vector Machine (SVM) and Decision Tree (DT) models achieve average accuracies of 98.8% and 99.87%, respectively.

Article
Regeneration Energy for Nonlinear Active Suspension System Using Electromagnetic Actuator

Ammar A. Aldair, Eman Badee Alsaedee

Pages: 113-125

PDF Full Text
Abstract

The main purpose of using a suspension system in vehicles is to prevent road disturbances from being transmitted to the passengers; therefore, a precise controller should be designed to improve the suspension system's performance. This paper presents the modeling and control of a nonlinear full-vehicle active suspension system with a passenger seat using the Fuzzy Model Reference Learning Control (FMRLC) technique. The components of the suspension system, the damper, spring, and actuator, all behave nonlinearly, so the nonlinear forces they generate should be taken into account when designing the control system. The designed controller consumes high power, so that when the control system is used the vehicle consumes a large amount of fuel. Note that when a vehicle is driven on a rough road, there is a shock between the sprung and unsprung masses; this mechanical power is dissipated and converted into heat by the damper. In this paper, the wasted power is reclaimed by using an electromagnetic actuator, which converts the mechanical power into electrical power that can be used to drive the control system, reducing the vehicle's overall power consumption. Using the electromagnetic actuator yields three main advantages: first, the vehicle's fuel consumption is decreased; second, harmful emissions decrease, protecting the environment; and third, the performance of the suspension system is improved, as shown in the obtained results.

Article
Using Pearson Correlation and Mutual Information (PC-MI) to Select Features for Accurate Breast Cancer Diagnosis Based on a Soft Voting Classifier

Mohammed S. Hashim, Ali A. Yassin

Pages: 43-53

PDF Full Text
Abstract

Breast cancer is one of the most critical diseases suffered by many people around the world, making it the most common medical risk they will face. This disease is considered a leading cause of death around the world, and early detection is difficult. In the field of healthcare, where early diagnosis based on machine learning (ML) helps save patients' lives from the risks of diseases, better-performing diagnostic procedures are crucial, and ML models have been used to improve the effectiveness of early diagnosis. In this paper, we propose a new feature selection method that combines two filter methods, Pearson correlation and mutual information (PC-MI), to analyse the correlation amongst features and then select important features before passing them to a classification model. Our method is capable of early breast cancer prediction and depends on a soft voting classifier that combines a certain set of ML models (decision tree, logistic regression and support vector machine) to produce one model that carries the strengths of the models that have been combined, yielding the best prediction accuracy. Our work is evaluated on the Wisconsin Diagnostic Breast Cancer dataset. The proposed methodology outperforms previous work, achieving 99.3% accuracy, an F1 score of 0.9922, a recall of 0.9846, a precision of 1 and an AUC of 0.9923. Furthermore, the accuracy of 10-fold cross-validation is 98.2%.
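The two filter scores behind PC-MI measure complementary things: Pearson correlation captures linear association between a feature and the label, while mutual information captures any statistical dependence. A small sketch scoring one informative and one uninformative feature both ways; the toy data are illustrative, and the paper's exact combination rule is not reproduced:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mutual_information(x, y):
    """Mutual information in bits for two discrete sequences."""
    n = len(x)
    px, py, pxy = {}, {}, {}
    for a, b in zip(x, y):
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
    return sum(c / n * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

label       = [0, 0, 0, 0, 1, 1, 1, 1]
informative = [0, 0, 0, 1, 1, 1, 1, 1]   # mostly tracks the label
noise       = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of the label

r_inf, r_noise = pearson(informative, label), pearson(noise, label)
mi_inf, mi_noise = mutual_information(informative, label), mutual_information(noise, label)
```

Both scores rank the informative feature well above the noise feature, which is exactly the filtering behavior a PC-MI-style selector relies on before the soft voting classifier ever sees the data.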

Article
An Enhanced Deployment Approach of Adaptive Equalizer for Multipath Fading Channels

Haider Al-Kanan

Pages: 264-273

PDF Full Text
Abstract

Inter-symbol interference (ISI) is a major distortion effect that often appears in digital storage and wireless communication channels. The traditional decision feedback equalizer (DFE) is an efficient approach to mitigating the ISI effect, using an appropriate digital filter to subtract the ISI. However, error propagation in the DFE is a challenging problem that degrades equalization because distorted symbols alias into the feedback section of the traditional DFE. The aim of the proposed approach is to minimize error propagation and improve modeling stability by incorporating components that control the training and feedback modes of the DFE. The proposed enhanced DFE architecture consists of decision and controller components, integrated on both the transmitter and receiver sides of the communication system, that automatically alternate the DFE between training and feedback states based on the quality of the received signal in terms of signal-to-noise ratio (SNR). The modeling architecture and performance validation of the proposed DFE are implemented in MATLAB using a raised-cosine pulse filter on the transmitter side and a linear time-invariant channel model with additive Gaussian noise. The equalizer's capability to compensate for ISI is evaluated during the training and feedback stages over channels with different distortion characteristics in terms of SNR, using unit delay fractions of 0.75 and 1.5 symbol durations in the FIR filter. The simulated eye-diagram patterns show significant improvement in the DFE when using the lower unit delay fraction in the FIR filter, which better suppresses the overlaid trails of ISI. Finally, the proposed approach mitigates ISI with almost half the symbol errors of the traditional DFE.
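The core DFE mechanism described above (filter past symbol decisions and subtract the predicted trailing ISI before slicing) can be sketched in a few lines. The two-tap channel, BPSK alphabet, and fixed feedback tap are illustrative assumptions; a real DFE also has a feedforward section and adapts its taps during the training mode:

```python
def dfe(received, feedback_taps):
    """Feedback-only DFE: subtract ISI predicted from past decisions."""
    decisions, prev = [], [0.0] * len(feedback_taps)
    for r in received:
        isi = sum(b * a for b, a in zip(feedback_taps, prev))
        z = r - isi                        # feedback section removes ISI
        a_hat = 1.0 if z >= 0 else -1.0    # BPSK slicer decision
        decisions.append(a_hat)
        prev = [a_hat] + prev[:-1]         # shift decision history
    return decisions

# channel h = [1, 0.5]: each symbol leaks half its amplitude into the next
symbols = [1.0, -1.0, -1.0, 1.0, 1.0, -1.0, 1.0, 1.0]
received = [symbols[k] + 0.5 * (symbols[k - 1] if k else 0.0)
            for k in range(len(symbols))]

recovered = dfe(received, feedback_taps=[0.5])
```

With noiseless input and a correct tap the recovery is exact; the error-propagation problem the paper targets appears once noise flips one decision, because that wrong decision is then fed back and corrupts the ISI estimate for the following symbols.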

Article
Stochastic Local Search Algorithms for Feature Selection: A Review

Hayder Naser Khraibet Al-Behadili

Pages: 1-10

PDF Full Text
Abstract

In today’s world, the data generated by many applications is increasing drastically, and finding an optimal subset of features from the data has become a crucial task. The main objective of this review is to analyze and compare different stochastic local search algorithms for finding an optimal feature subset. Simulated annealing, tabu search, genetic programming, the genetic algorithm, particle swarm optimization, the artificial bee colony, grey wolf optimization, and the bat algorithm, all of which have been used in feature selection, are discussed. This review also covers the filter and wrapper approaches to feature selection. Furthermore, it describes the main components of stochastic local search algorithms, categorizes these algorithms by type, and discusses promising directions for future feature selection research.
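Simulated annealing, the first algorithm the review covers, searches the space of feature subsets by flipping one feature at a time and occasionally accepting worse subsets early on to escape local optima. A minimal wrapper-style sketch; the toy objective standing in for a classifier's cross-validated accuracy, and the cooling schedule, are illustrative assumptions:

```python
import math, random

def anneal_features(n_features, score, iters=2000, t0=1.0, alpha=0.995, seed=0):
    rng = random.Random(seed)
    current = [rng.random() < 0.5 for _ in range(n_features)]
    best, t = current[:], t0
    for _ in range(iters):
        neighbor = current[:]
        neighbor[rng.randrange(n_features)] ^= True   # flip one feature bit
        delta = score(neighbor) - score(current)
        # accept improvements always; worse moves with Boltzmann probability
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current = neighbor
            if score(current) > score(best):
                best = current[:]
        t *= alpha                                    # geometric cooling
    return best

relevant = {0, 3, 7}   # pretend only these features carry signal

def score(subset):
    hits = sum(1 for i in relevant if subset[i])
    cost = 0.1 * sum(subset)          # penalize large subsets
    return hits - cost

best = anneal_features(10, score)
```

As the temperature decays the search becomes greedy, and on this toy objective it settles on exactly the three relevant features; a real wrapper would evaluate each candidate subset with the downstream classifier instead.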

Article
A Novel Quantum-Behaved Future Search Algorithm for the Detection and Location of Faults in Underground Power Cables Using ANN

Hamzah Abdulkhaleq Naji, Rashid Ali Fayadh, Ammar Hussein Mutlag

Pages: 226-244

PDF Full Text
Abstract

This article introduces a novel Quantum-inspired Future Search Algorithm (QFSA), an amalgamation of the classical Future Search Algorithm (FSA) and principles of quantum mechanics. The QFSA was formulated to enhance both exploration and exploitation, aiming to pinpoint the optimal solution more effectively. A rigorous evaluation was conducted using seven distinct benchmark functions, and the results were compared with five well-known algorithms from the literature. Quantitatively, the QFSA outperformed its counterparts in a majority of the tested scenarios, indicating superior efficiency and reliability. In the subsequent phase, the utility of the QFSA was explored for fault detection in underground power cables. An Artificial Neural Network (ANN) was devised to identify and categorize faults in these cables, and by integrating the QFSA with the ANN, a hybrid QFSA-ANN model was developed to optimize the network's structure. The dataset, curated from MATLAB simulations, comprised diverse fault types at varying distances. The ANN had two primary units, one for fault location and another for detection, fed with nine input parameters: phase currents and voltages, zero-sequence current and voltage values, and negative-sequence voltage angles. The optimal ANN architecture was determined by varying the number of neurons in the first and second hidden layers and fine-tuning the learning rate. To assess the efficacy of the QFSA-ANN model, it was tested under multiple fault conditions, and a comparative analysis with established methods in the literature further confirmed its robustness in fault detection and location accuracy. This research not only augments the field of search algorithms with the QFSA but also showcases its practical application in enhancing fault detection in power distribution systems. Quantitative metrics, detailed in the main article, support the claim of the QFSA-ANN's superiority over conventional methods.

Article
Group Key Management Protocols for Non-Network: A Survey

Rituraj Jain, Dr. Manish Varshney

Pages: 214-225

PDF Full Text
Abstract

The phenomenal rise of the Internet in recent years, as well as the expansion of capacity in today’s networks, have provided both inspiration and incentive for the development of new services that combine voice, video, and text "over IP". Although unicast communication has been prevalent in the past, there is an increasing demand for multicast communication from both Internet Service Providers (ISPs) and content or media providers and distributors. Indeed, multicasting is increasingly being used as an efficient communication mechanism for group-oriented applications on the Internet, such as video conferencing, interactive games, video on demand (VoD), TV over the Internet, e-learning, software updates, database replication, and broadcasting stock quotes. However, the lack of security within the multicast communication model prevents the effective and large-scale adoption of such important group applications. This situation has prompted a number of research projects addressing the various issues related to multicast security, including confidentiality, authentication, watermarking, and access control. These issues should be viewed within the context of the security policies that apply in the specific conditions: in a public stock-quote broadcast, for example, authentication is a vital requirement while secrecy is not, whereas a video-conferencing application requires both authentication and confidentiality. This study gives a comprehensive examination and comparison of the issues of group key management, covering both network-dependent and network-independent approaches, and also addresses the advantages, disadvantages, and security problems of the various protocols.
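One family of protocols such surveys compare is the logical key hierarchy (LKH): members sit at the leaves of a binary key tree, and evicting one member requires replacing only the O(log n) keys on its path to the root rather than rekeying every member individually. A toy sketch under illustrative assumptions (the real protocol also encrypts each new key to the sibling subtrees so remaining members can learn it; that distribution step is omitted here):

```python
import os

class KeyTree:
    def __init__(self, n_leaves):
        self.n = n_leaves
        # heap layout: node 1 is the root (the group key), leaves at n..2n-1
        self.key = {i: os.urandom(16) for i in range(1, 2 * n_leaves)}

    def path_to_root(self, member):
        node, path = self.n + member, []
        while node >= 1:
            path.append(node)
            node //= 2          # parent in heap layout
        return path

    def evict(self, member):
        """Replace every key the leaving member knew; return those nodes."""
        refreshed = self.path_to_root(member)
        for node in refreshed:
            self.key[node] = os.urandom(16)
        return refreshed

tree = KeyTree(8)               # a group of 8 members
old_group_key = tree.key[1]
changed = tree.evict(3)         # member 3 leaves
```

For 8 members only 4 keys change (leaf, two internal nodes, root), versus rekeying all 7 remaining members pairwise in a flat scheme; this logarithmic rekey cost is the main efficiency argument made for tree-based group key management.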

Article
LabVIEW Venus Flytrap ANFIS Inverse Control System for Microwave Heating Cavity

Wasan A. Wali, Atheel K. Abdul Zahra, Hanady S. Ahmed

Pages: 189-198

PDF Full Text
Abstract

Growing interest in nature-inspired computing and bio-inspired optimization techniques has led to powerful tools for solving learning problems and analyzing large datasets. Several methods have been used to create high-performance optimization algorithms. However, certain applications, such as nonlinear real-time systems, are difficult to describe with accurate mathematical models; such large-scale, highly nonlinear modeling problems are solved using soft computing techniques. In this paper, the researchers incorporate one of the most advanced plant-inspired algorithms, the Venus Flytrap Optimization algorithm (VFO), together with a soft computing technique, an Adaptive Neuro-Fuzzy Inference System (ANFIS) inverse model, to control the real-time temperature of a microwave cavity that heats oil. MATLAB was integrated successfully with the LabVIEW platform, and wide ranges of input and output variables were tested experimentally. Problems were encountered due to heating system conditions such as reflected power, variations in oil temperature, oil inlet absorption and cavity temperatures affecting the oil temperature, and the temperature's effect on viscosity. The LabVIEW design is described, and the results demonstrate the performance of the VFO-inverse-ANFIS controller.

Article
Short Circuit Faults Identification and Localization in IEEE 34 Nodes Distribution Feeder Based on the Theory of Wavelets

Sara J. Authafa, Khalid M. Abdul-Hassan

Pages: 65-79

PDF Full Text
Abstract

In this paper, a radial distribution feeder protection scheme against short circuit faults is introduced. It is based on using the current signals measured at the substation to detect faults and to obtain useful information about their types and locations. To facilitate the extraction of important features from the measurement signals, so that better fault diagnosis can be achieved, the discrete wavelet transform is exploited. The captured features are then used for detecting faults, identifying the faulted phases (fault type), and locating the fault. When a fault occurs, the detection scheme decides to trip a circuit breaker at the feeder mains; this decision is based on criteria set to distinguish between the various system states reliably and accurately. The fault type and location are then predicted using the learning and generalization capabilities of cascade-forward neural networks. Useful information about the fault location can be obtained provided that the fault distance from the source, as well as whether the fault lies on the main feeder or on one of the laterals, can be predicted. Testing shows that the proposed scheme detects faults quickly and reliably from the viewpoint of power system protection relaying requirements, and that it overcomes the complexities the feeder structure introduces into the identification of fault types and locations. All simulations and analysis are performed in MATLAB R2016b.
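The wavelet feature-extraction idea above can be illustrated with a one-level Haar DWT: a sudden jump in a measured current (a fault-like transient) produces a large detail coefficient that a detection criterion can threshold, while steady operation produces none. The synthetic step signal is an illustrative assumption, not the paper's data:

```python
import math

def haar_dwt(signal):
    """One decomposition level: approximation and detail coefficients."""
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# steady "current" of 1.0, then a step to 5.0 at sample 9 (fault inception)
sig = [1.0] * 9 + [5.0] * 7
_, detail = haar_dwt(sig)

# the only large detail coefficient sits at the transient's location
spike = max(abs(d) for d in detail)
```

The spike's position localizes the transient in time. Note the decimated DWT is shift-sensitive (a jump falling exactly between coefficient pairs is attenuated), which is one reason practical schemes analyze several decomposition levels rather than a single one.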

Iraqi Journal for Electrical and Electronic Engineering

College of Engineering, University of Basrah

Licensing & Open Access

Licensed under CC-BY-4.0

This journal provides immediate open access to its content.


Peer-review powered by Elsevier’s Editorial Manager®

Copyright © 2025 College of Engineering, University of Basrah. All rights reserved, including those for text and data mining, AI training, and similar technologies.