Iraqi Journal for Electrical and Electronic Engineering

Search Results for feature-extraction

Article
FEATURE EXTRACTION METHODS FOR IC CHIP MARKING INSPECTION-A COMPARISON

R. Nagarajan, Sazali Yaacob, Paulraj Pandian, Mohamed Rizon, M. Karthigayan

Pages: 7-18

PDF Full Text
Abstract

In this paper, an industrial machine vision system incorporating Optical Character Recognition (OCR) is employed to inspect the marking on Integrated Circuit (IC) chips. The inspection is carried out as the ICs come off the manufacturing line. A TSSOP-DGG IC package from Texas Instruments is used in the investigation. The inspection must identify print errors such as illegible characters, missing characters, and upside-down printing. The vision inspection of the printed markings on the IC chip is carried out in three phases, namely image preprocessing, feature extraction, and classification. Projection profiles and moments are employed for feature extraction. A neural network is used as a classifier to detect defectively marked IC chips. The two feature extraction methods are compared in terms of marking inspection time.
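
The two feature types compared in this paper are easy to sketch: a projection profile counts ink pixels along each row and column of a binarized character image, and raw image moments summarize the ink distribution. A minimal numpy illustration (function names are ours, not from the paper):

```python
import numpy as np

def projection_profiles(binary_img):
    """Row and column projection profiles of a binarized character image.

    Each profile counts the foreground (ink) pixels along one axis; the
    concatenated profiles form a simple feature vector for OCR.
    """
    img = np.asarray(binary_img, dtype=np.float64)
    horizontal = img.sum(axis=1)  # ink pixels per row
    vertical = img.sum(axis=0)    # ink pixels per column
    return np.concatenate([horizontal, vertical])

def image_moments(binary_img):
    """Raw moments m00, m10, m01 and the centroid of the foreground."""
    img = np.asarray(binary_img, dtype=np.float64)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    m10 = (xs * img).sum()
    m01 = (ys * img).sum()
    return m00, (m10 / m00, m01 / m00)  # ink mass and centroid (x, y)
```

An upside-down print, for instance, flips the horizontal profile and moves the centroid, which is what makes such features usable for marking inspection.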

Article
Off-line Signature Recognition Using Weightless Neural Network and Feature Extraction

Ali Al-Saegh

Pages: 124-131

PDF Full Text
Abstract

The problem of automatic signature recognition and verification has been extensively investigated due to the vitality of this field of research. Handwritten signatures are broadly used in daily life as a secure means of personal identification. In this paper, a novel approach is proposed for handwritten signature recognition in an off-line environment based on a Weightless Neural Network (WNN) and feature extraction. This type of neural network is characterized by its simplicity of design and implementation, since no weights, transfer functions, or multipliers are required; implementing the WNN needs only Random Access Memory (RAM) slices. Moreover, the whole training process can be accomplished with a small number of training samples, each presented to the network only once. Employing the proposed approach in signature recognition yields promising results, with rates of 99.67% for recognizing signatures the network was trained on and 99.55% for rejecting signatures it was not trained on.
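
The abstract does not give the paper's exact memory layout, but the classic weightless (RAM-based) discriminator works as sketched below, in the WiSARD style: binary input bits are grouped into tuples, each tuple addresses one RAM node, training writes the addressed cell, and recall counts how many nodes fire. This is an illustrative assumption, not the paper's implementation.

```python
import random

class RAMDiscriminator:
    """One weightless-neural-network discriminator (WiSARD style).

    The binary input is split into fixed-size tuples of bits; each tuple
    addresses one RAM node. Training writes the addressed cell; recall
    counts how many nodes have seen the addressed pattern before.
    No weights, multipliers, or transfer functions are involved.
    """
    def __init__(self, input_bits, tuple_size, seed=0):
        rng = random.Random(seed)
        order = list(range(input_bits))
        rng.shuffle(order)  # random input-to-node mapping
        self.tuples = [order[i:i + tuple_size]
                       for i in range(0, input_bits, tuple_size)]
        self.rams = [set() for _ in self.tuples]  # sparse RAM cells

    def _addresses(self, bits):
        for t, ram in zip(self.tuples, self.rams):
            yield ram, tuple(bits[i] for i in t)

    def train(self, bits):          # a single presentation is enough
        for ram, addr in self._addresses(bits):
            ram.add(addr)

    def score(self, bits):          # fraction of RAM nodes that fire
        hits = sum(addr in ram for ram, addr in self._addresses(bits))
        return hits / len(self.rams)
```

One discriminator is trained per signer; an unknown signature is assigned to the discriminator with the highest score, or rejected if no score clears a threshold.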

Article
Comparative Long-Term Electricity Forecasting Analysis: A Case Study of Load Dispatch Centres in India

Saikat Gochhait, Deepak K. Sharma, Mrinal Bachute

Pages: 207-219

PDF Full Text
Abstract

Accurate long-term load forecasting (LTLF) is crucial for smart grid operations, but existing CNN-based methods face challenges in extracting essential features from electricity load data, resulting in diminished forecasting performance. To overcome this limitation, we propose a novel ensemble model that integrates a feature extraction module, a densely connected residual block (DCRB), a long short-term memory (LSTM) layer, and ensemble thinking. The feature extraction module captures the randomness and trends in climate data, enhancing the accuracy of load data analysis. Leveraging the DCRB, our model demonstrates superior performance by extracting features from multi-scale input data, surpassing conventional CNN-based models. We evaluate our model using hourly load data from Odisha and day-wise data from Delhi, and the experimental results exhibit low root mean square error (RMSE) values of 0.952 and 0.864 for Odisha and Delhi, respectively. This research contributes to a comparative long-term electricity forecasting analysis, showcasing the efficiency of our proposed model in power system management. Moreover, the model holds the potential to support decision-making processes, making it a valuable tool for stakeholders in the electricity sector.

Article
Advancements and Challenges in Hand Gesture Recognition: A Comprehensive Review

Bothina Kareem Murad, Abbas H. Hassin Alasadi

Pages: 154-164

PDF Full Text
Abstract

Hand gesture recognition is a quickly developing field with many uses in human-computer interaction, sign language recognition, virtual reality, gaming, and robotics. This paper reviews different ways to model hands, such as vision-based, sensor-based, and data glove-based techniques. It emphasizes the importance of accurate hand modeling and feature extraction for capturing and analyzing gestures. Key features like motion, depth, color, shape, and pixel values and their relevance in gesture recognition are discussed. Challenges faced in hand gesture recognition include lighting variations, complex backgrounds, noise, and real-time performance. Machine learning algorithms are used to classify and recognize gestures based on extracted features. The paper emphasizes the need for further research and advancements to improve hand gesture recognition systems’ robustness, accuracy, and usability. This review offers valuable insights into the current state of hand gesture recognition, its applications, and its potential to revolutionize human-computer interaction and enable natural and intuitive interactions between humans and machines. In simpler terms, hand gesture recognition is a way for computers to understand what people are saying with their hands. It has many potential applications, such as allowing people to control computers without touching them or helping people with disabilities communicate. The paper reviews different ways to develop hand gesture recognition systems and discusses the challenges and opportunities in this area.

Article
A Review on Voice-based Interface for Human-Robot Interaction

Ameer A. Badr, Alia K. Abdul-Hassan

Pages: 91-102

PDF Full Text
Abstract

With the recent developments of technology and the advances in artificial intelligence and machine learning techniques, it has become possible for a robot to understand and respond to voice as part of Human-Robot Interaction (HRI). A voice-based interface robot can recognize speech information from humans, allowing it to interact more naturally with its human counterpart in different environments. In this work, a review of voice-based interfaces for HRI systems is presented. The review focuses on voice-based perception in HRI systems from three facets: feature extraction, dimensionality reduction, and semantic understanding. For feature extraction, numerous types of features are reviewed across various domains, such as the time, frequency, cepstral (i.e., applying the inverse Fourier transform to the logarithm of the signal spectrum), and deep domains. For dimensionality reduction, subspace learning can be used to eliminate the redundancies of high-dimensional features by further processing the extracted features to better reflect their semantic information. For semantic understanding, the aim is to infer objects or human behaviors from the extracted features. Numerous types of semantic understanding are reviewed, such as speech recognition, speaker recognition, speaker gender detection, speaker gender and age estimation, and speaker localization. Finally, some existing voice-based interface issues and recommendations for future work are outlined.
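
The cepstral domain mentioned in the review is computed exactly as the parenthetical describes: the inverse Fourier transform of the log spectrum. A minimal numpy sketch of the real cepstrum (a general illustration, not any specific system from the review):

```python
import numpy as np

def real_cepstrum(signal, eps=1e-10):
    """Real cepstrum: inverse FFT of the log magnitude spectrum.

    eps guards against log(0) for spectral bins with no energy.
    """
    spectrum = np.fft.fft(signal)
    log_mag = np.log(np.abs(spectrum) + eps)
    return np.fft.ifft(log_mag).real
```

The low-quefrency coefficients of this sequence summarize the spectral envelope and are a common compact feature for speech tasks.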

Article
Epileptic detection based on deep learning: A review

Ola M. Assim, Ahlam F. Mahmood

Pages: 115-126

PDF Full Text
Abstract

Epilepsy, a neurological disorder characterized by recurring seizures, necessitates early and precise detection for effective management. Deep learning techniques have emerged as powerful tools for analyzing complex medical data, specifically electroencephalogram (EEG) signals, advancing epileptic detection. This review comprehensively presents cutting-edge methodologies in deep learning-based epileptic detection systems. It begins with an overview of epilepsy's fundamental concepts and their implications for individuals and healthcare, then delves into deep learning principles and their application to processing EEG signals. Diverse research papers are investigated to survey the architectures in use—convolutional neural networks, recurrent neural networks, and hybrid models—emphasizing their strengths and limitations in detecting epilepsy. Preprocessing techniques for improving EEG data quality and reliability, such as noise reduction, artifact removal, and feature extraction, are discussed. Common performance evaluation metrics in epileptic detection, such as accuracy, sensitivity, specificity, and area under the curve, are presented. The review anticipates future directions by highlighting challenges such as dataset size and diversity, model interpretability, and integration with clinical decision support systems. Finally, it demonstrates how deep learning can improve the precision, efficiency, and accessibility of early epileptic diagnosis, allowing for more timely interventions and personalized treatment plans and potentially revolutionizing epilepsy management.

Article
Automated Brain Tumor Detection Based on Feature Extraction from The MRI Brain Image Analysis

Ban Mohammed Abd Alreda, Hussain Kareem Khalif, Thamir Rashed Saeid

Pages: 58-67

PDF Full Text
Abstract

Brain tumors are among the common deadly illnesses that require early, reliable detection techniques; current identification and imaging methods depend on the decisions of neuro-specialists and radiologists, who are subject to human error, and manually identifying a brain tumor takes time. This work aims to design an intelligent model capable of diagnosing and predicting the severity of brain tumors from magnetic resonance imaging (MRI) so that an accurate decision can be made. The main contribution is a new multiclass classifier approach based on a collected real database with newly proposed features that reflect the precise state of the disease. In this work, two methods, a Feed Forward Back Propagation Neural Network (FFBPNN) and a Support Vector Machine (SVM), are used to predict the level of brain tumors. The results show that the FFBPNN outperforms the other method in classification time, reaching automatic classification with a 3-class accuracy of 97%, which is considered excellent. The software simulation and results of this work were implemented in MATLAB (R2012b).

Article
Feature Deep Learning Extraction Approach for Object Detection in Self-Driving Cars

Namareq Odey, Ali Marhoon

Pages: 62-69

PDF Full Text
Abstract

Self-driving cars have been a fundamental research subject in recent years; the ultimate goal is to completely replace the human driver with automated systems. Deep learning techniques, meanwhile, have demonstrated performance and effectiveness in several areas. The strengths of self-driving cars have been deeply investigated in many areas, including object detection, localization, and activity recognition. This paper provides a deep learning approach that combines the benefits of a convolutional neural network (CNN) with dense layers. The approach learns from features extracted with linear discriminant analysis (LDA) combined with feature expansion techniques, namely standard deviation, minimum, maximum, mode, variance, and mean. The presented approach achieved 100% accuracy on both the training and testing data.
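
The feature-expansion step listed in the abstract (standard deviation, minimum, maximum, mode, variance, mean) is straightforward to sketch; the mode computation here is an illustrative choice, since the paper does not specify how it handles continuous values.

```python
import numpy as np

def expand_features(x):
    """Summary-statistic expansion of one feature vector.

    Returns [std, min, max, mode, variance, mean], matching the
    expansion techniques the paper lists. The mode is taken as the
    most frequent value after rounding (an illustrative assumption).
    """
    x = np.asarray(x, dtype=np.float64)
    rounded = np.round(x).astype(int)
    values, counts = np.unique(rounded, return_counts=True)
    mode = values[np.argmax(counts)]
    return np.array([x.std(), x.min(), x.max(), mode, x.var(), x.mean()])
```

The expanded statistics would then be concatenated with the LDA-projected features before classification.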

Article
Second-Order Statistical Methods GLCM for Authentication Systems

Mohammed A. Taha, Hanaa M. Ahmed

Pages: 88-93

PDF Full Text
Abstract

Biometric systems have gained considerable attention for many uses. Iris identification is one of the most powerful and sophisticated biometric techniques for effective and confident authentication. Current iris identification systems offer accurate and reliable results based on near-infrared (NIR) images when the images are taken in a restricted area at a fixed distance with user cooperation. However, for color eye images obtained under visible wavelength (VW) without collaboration from the users, the efficiency of iris recognition degrades because of noise such as blurring, eyelashes, occlusion, and reflection. This work aims to use the Gray-Level Co-occurrence Matrix (GLCM) to retrieve the iris's characteristics in both NIR and visible-spectrum iris images. The GLCM is a second-order statistical method for texture analysis. The GLCM-based extraction was applied after preprocessing to extract the characteristics of the pure iris region. The energy, entropy, correlation, homogeneity, and contrast second-order statistical features are computed from the generated co-occurrence matrix and stored as a numerical feature vector. The approach is evaluated on the CASIA v1 and ITTD v1 databases as NIR iris images and on UBIRIS v1 as color images. The results showed high accuracy rates of 99.2% on CASIA v1, 99.4% on ITTD v1, and 87% on UBIRIS v1 when compared with other methods.
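
A GLCM is a histogram of co-occurring gray-level pairs at a fixed pixel offset, and the features named in the abstract are simple sums over the normalized matrix. A minimal numpy sketch (a single offset, no angular averaging; the paper's exact preprocessing and parameters are not given here):

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Second-order texture features from a gray-level co-occurrence matrix.

    img: 2-D array of integer gray levels in [0, levels).
    offset: (dy, dx) neighbour displacement; (0, 1) pairs each pixel
    with its right-hand neighbour. Offsets must be non-negative here.
    Returns energy, entropy, contrast, and homogeneity, the kind of
    features the paper stores in its iris feature vector.
    """
    img = np.asarray(img)
    dy, dx = offset
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)  # count co-occurring pairs
    p = glcm / glcm.sum()                       # normalize to probabilities
    i, j = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "energy": (p ** 2).sum(),
        "entropy": -(nz * np.log2(nz)).sum(),
        "contrast": ((i - j) ** 2 * p).sum(),
        "homogeneity": (p / (1.0 + np.abs(i - j))).sum(),
    }
```

In practice the features are computed at several offsets and angles and concatenated into the final vector.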

Article
A Comprehensive Review of Image Segmentation Techniques

Salwa Khalid Abdulateef, Mohanad Dawood Salman

Pages: 166-175

PDF Full Text
Abstract

Image segmentation is a wide research topic, and a huge amount of research has been performed in this context. Image segmentation is a crucial procedure, since most object detection, image recognition, feature extraction, and classification tasks depend on the quality of the segmentation process. Image segmentation is the division of an image into a number of homogeneous segments; representing an image in simple, easy forms increases the effectiveness of pattern recognition. The effectiveness of approaches varies according to object arrangement, lighting, shadow, and other factors. There is no generic approach that successfully segments all images, though some approaches have proven more effective than others. The major goal of this study is to summarize the disadvantages and advantages of each of the reviewed image segmentation approaches.
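
As a concrete instance of the simplest family of approaches such reviews cover, thresholding, here is Otsu's method in numpy: it picks the gray level that maximizes the between-class variance of foreground and background. This is a standard illustration, not a method proposed by the paper.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's global threshold: maximize between-class variance.

    Returns the gray level t; pixels > t form the foreground segment.
    """
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(levels))   # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)        # empty classes contribute 0
    return int(np.argmax(sigma_b))
```

Thresholding illustrates the review's point well: it is fast and effective on bimodal images but fails under uneven lighting and shadow, which is exactly where region- and learning-based approaches take over.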

Article
A Hybrid Lung Cancer Model for Diagnosis and Stage Classification from Computed Tomography Images

Abdalbasit Mohammed Qadir, Peshraw Ahmed Abdalla, Dana Faiq Abd

Pages: 266-274

PDF Full Text
Abstract

Detecting pulmonary cancers at early stages is difficult but crucial for patient survival. Therefore, it is essential to develop an intelligent, autonomous, and accurate lung cancer detection system that shows great reliability compared to previous systems and research. In this study, we have developed an innovative lung cancer detection system known as the Hybrid Lung Cancer Stage Classifier and Diagnosis Model (Hybrid-LCSCDM). This system simplifies the complex task of diagnosing lung cancer by categorizing patients into three classes: normal, benign, and malignant, by analyzing computed tomography (CT) scans using a two-part approach: First, feature extraction is conducted using a pre-trained model called VGG-16 for detecting key features in lung CT scans indicative of cancer. Second, these features are then classified using a machine learning technique called XGBoost, which sorts the scans into three categories. A dataset, IQ-OTH/NCCD - Lung Cancer, is used to train and evaluate the proposed model to show its effectiveness. The dataset consists of the three aforementioned classes containing 1190 images. Our suggested strategy achieved an overall accuracy of 98.54%, while the classification precision among the three classes was 98.63%. Considering the accuracy, recall, and precision as well as the F1-score evaluation metrics, the results indicated that when using solely computed tomography scans, the proposed (Hybrid-LCSCDM) model outperforms all previously published models.

Article
Wavelet-based Hybrid Learning Framework for Motor Imagery Classification

Z. T. Al-Qaysi, Ali Al-Saegh, Ahmed Faeq Hussein, M. A. Ahmed

Pages: 47-56

PDF Full Text
Abstract

Due to their vital applications in many real-world situations, researchers are still presenting numerous methods for better analysis of motor imagery (MI) electroencephalograph (EEG) signals. In general, however, EEG signals are complex because of their nonstationarity and high dimensionality; therefore, careful consideration is needed in both feature extraction and classification. In this paper, several hybrid classification models are built and their performance is compared. Three well-known wavelet mother functions are used to generate scalograms from the raw signals. The scalograms are used for transfer learning with the well-known VGG-16 deep network, and one of six classifiers then determines the class of the input signal. The performance of different combinations of mother functions and classifiers is compared on two MI EEG datasets. Several evaluation metrics show that a model with a VGG-16 feature extractor and a neural network classifier using the Amor mother wavelet function has outperformed the results of state-of-the-art studies.
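
A scalogram is the magnitude of a continuous wavelet transform, one row per scale. The following is a rough numpy sketch using a Morlet-type wavelet (library CWTs add proper normalization and boundary handling, and the paper's Amor wavelet and parameters are not reproduced here):

```python
import numpy as np

def morlet_scalogram(signal, scales, w0=6.0):
    """Magnitude scalogram via convolution with scaled Morlet wavelets.

    Each row is the response at one scale; larger scales respond to
    lower frequencies. A minimal illustration of how a 1-D EEG trace
    becomes the 2-D image fed to a CNN such as VGG-16.
    """
    rows = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2)
        wavelet /= np.sqrt(s)
        rows.append(np.abs(np.convolve(signal, wavelet, mode="same")))
    return np.array(rows)    # shape (len(scales), len(signal))
```

The resulting 2-D array is what gets rendered as an image and passed to the pretrained network for feature extraction.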

Article
Session to Session Transfer Learning Method Using Independent Component Analysis with Regularized Common Spatial Patterns for EEG-MI Signals

Zaineb M. Alhakeem, Ramzy S. Ali

Pages: 13-27

PDF Full Text
Abstract

Training the user in Brain-Computer Interface (BCI) systems based on brain signals recorded using electroencephalography motor imagery (EEG-MI) is a time-consuming process and tires the trained subject, so transfer learning (subject-to-subject or session-to-session) is a very useful training method that decreases the number of recorded training trials for the target subject. Channels, or electrodes, are used to record the brain signals. Increasing the number of channels can increase the classification accuracy, but this solution is expensive and offers no guarantee of high accuracy. This paper introduces a transfer learning method that uses only two channels and a few training trials for both feature extraction and classifier training. Our results show that the proposed method, Independent Component Analysis with Regularized Common Spatial Patterns (ICA-RCSP), produces about 70% accuracy for session-to-session transfer learning using few training trials. When the proposed method is used for subject-to-subject transfer, the accuracy is lower than for session-to-session, but it is still better than other methods.
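
The RCSP stage in the proposed pipeline builds on ordinary Common Spatial Patterns, which finds spatial filters maximizing the variance ratio between two classes. A minimal unregularized CSP sketch (the paper's ICA stage and regularization terms are omitted):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common Spatial Patterns via whitening plus eigendecomposition.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns spatial filters (rows): the first n_filters maximize
    variance for class A, the last n_filters for class B.
    """
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in
    # the whitened space (solves ca w = lambda (ca + cb) w).
    evals, evecs = np.linalg.eigh(ca + cb)
    whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, u = np.linalg.eigh(whiten @ ca @ whiten.T)
    order = np.argsort(d)[::-1]
    w = u[:, order].T @ whiten
    return np.vstack([w[:n_filters], w[-n_filters:]])
```

The log-variance of each trial after filtering is the usual feature vector handed to the classifier; regularized variants shrink the covariance estimates toward data from other sessions or subjects, which is what enables the transfer.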

Article
Semantic Segmentation of Aerial Images Using U-Net Architecture

Sarah Kamel Hussein, Khawla Hussein Ali

Pages: 58-63

PDF Full Text
Abstract

Aerial images are of very high resolution, yet automated map generation and semantic segmentation of aerial images remain challenging problems, and the semantic segmentation process does not give precise details of remote sensing images when resolution is limited. Hence, we propose the U-Net architecture to solve this problem. It comprises two paths. The contracting path (also called the encoder) is the first path and is used to capture the image's context; the encoder is simply a stack of convolutional and max-pooling layers. The symmetric expanding path (also called the decoder) is the second path and is used to enable exact localization via transposed convolutions. This task is commonly referred to as dense prediction. The network is an end-to-end fully convolutional network (FCN): it contains only convolutional layers and no dense layers, so it can accept images of any size. The performance of the model is evaluated by segmenting images with the proposed U-Net method and comparing the resulting accuracy with that of previous methods.

Article
BRAIN MACHINE INTERFACE: ANALYSIS OF SEGMENTED EEG SIGNAL CLASSIFICATION USING SHORT-TIME PCA AND RECURRENT NEURAL NETWORKS

Hema C.R., Paulraj M.P., Nagarajan R., Sazali Yaacob, Abdul Hamid Adom

Pages: 77-85

PDF Full Text
Abstract

A brain machine interface provides a communication channel between the human brain and an external device. Brain interfaces are studied to provide rehabilitation to patients with neurodegenerative diseases; such patients lose all communication pathways except their sensory and cognitive functions. One possible rehabilitation method for these patients is a brain machine interface (BMI) for communication; the BMI uses the electrical activity of the brain detected by scalp EEG electrodes. Classifying EEG signals extracted during mental tasks is one technique for designing a BMI. In this paper, a BMI design using five mental tasks from two subjects is studied, with a combination of two tasks studied per subject. An Elman recurrent neural network is proposed for classification of the EEG signals. Two feature extraction algorithms, using overlapped and non-overlapped signal segments, are analyzed. Principal component analysis is used to extract features from the EEG signal segments. The classification performance of overlapping EEG signal segments is observed to be better in terms of average classification, with a range of 78.5% to 100%, while the non-overlapping EEG signal segments show better classification in terms of maximum classification.
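
The overlapped-versus-non-overlapped segmentation compared in the paper, followed by PCA over the segments, can be sketched in a few lines of numpy (window lengths and component counts here are illustrative, not the paper's):

```python
import numpy as np

def segment(signal, length, overlap):
    """Split a 1-D signal into fixed-length segments.

    overlap is the number of samples shared by consecutive segments;
    overlap=0 gives non-overlapping segments.
    """
    step = length - overlap
    n = (len(signal) - overlap) // step
    return np.stack([signal[i * step: i * step + length] for i in range(n)])

def pca_features(segments, n_components=4):
    """Project mean-centred segments onto their top principal components."""
    x = segments - segments.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T   # (n_segments, n_components)
```

Overlapping windows yield more (correlated) training segments from the same recording, which is one plausible reason the paper sees better average classification with them.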

Article
Interactive Real-Time Control System for The Artificial Hand

Hanadi Abbas Jaber, Mofeed Turky Rashid, Luigi Fortuna

Pages: 62-71

PDF Full Text
Abstract

In recent years, the amount of research in the field of artificial limbs has increased significantly in order to improve how amputees use these limbs. During this period, high-density surface electromyography (HD-sEMG) signals have been employed for hand gesture identification, where the performance of the classification process can be improved by using robust spatial features extracted from the HD-sEMG signals. In this paper, several spatial feature extraction algorithms are proposed to increase the accuracy of an SVM classifier, with the histogram of oriented gradients (HOG) used to achieve this mission. Several feature sets are extracted from the HD-sEMG signals: features extracted based on HOG, denoted (H); features generated by combining an intensity feature with the H features, denoted (HI); and features generated by combining an average intensity with the H features, denoted (AIH). The proposed system is simulated in MATLAB to calculate the accuracy of the classification process, and it is also validated practically to show that the system can be used by amputees. The results show high real-time classifier accuracy, which increases the possibility of using this system as an artificial hand.
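
HOG treats the HD-sEMG activation map as an image and histograms gradient orientations weighted by gradient magnitude. A deliberately simplified global version (real HOG implementations, and presumably the paper's, histogram per cell and normalize per block):

```python
import numpy as np

def hog_descriptor(img, bins=9):
    """Simplified histogram of oriented gradients over a whole image.

    Shows the core idea only: bin unsigned gradient orientations,
    weighted by gradient magnitude, then normalize.
    """
    img = np.asarray(img, dtype=np.float64)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180      # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    total = hist.sum()
    return hist / total if total else hist
```

The HI and AIH sets of the paper would append intensity statistics of the same activation map to this orientation histogram.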

Article
A Dataset for Kinship Estimation from Image of Hand Using Machine Learning

Sarah Ibrahim Fathi, Mazin H. Aziz

Pages: 127-136

PDF Full Text
Abstract

Kinship (familial relationship) detection is crucial in many fields and has applications in biometric security, adoption, forensic investigations, and more. It is also essential during wars and natural disasters such as earthquakes, since it may aid in reunions, missing person searches, establishing emergency contacts, and providing psychological support. The most common method of determining kinship is DNA analysis, which is highly accurate. Another, noninvasive approach uses facial photos with computer vision and machine learning algorithms for kinship estimation. Each part of the human body has its own embedded information that can be extracted and adopted for identification, verification, or classification of a person. Kinship recognition is based on finding traits shared within a family. We investigate the use of hand geometry for kinship detection, which is a new approach. Because the available hand image datasets do not contain kinship ground truth, we created our own dataset. This paper describes the tools, methodology, and details of the collected MKH (Mosul Kinship Hand) image dataset. The images of the MKH dataset were collected using a mobile phone camera with a suitable setup and consist of 648 images of 81 individuals from 14 families (8 hand poses per person). This paper also presents the use of this dataset for kinship prediction using machine learning. Google MediaPipe was used for hand detection, segmentation, and geometrical key point finding. Handcrafted feature extraction was used to extract 43 distinctive geometrical features from each image. A neural network classifier was designed and trained to predict kinship, yielding about 93% prediction accuracy. The results of this novel approach demonstrate that the hand possesses biometric characteristics that may be used to establish kinship and that the suggested method is a promising kinship indicator.
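
Geometric features from MediaPipe-style hand landmarks are typically normalized distances between key points. The paper's exact 43 features are not published in the abstract; this sketch only shows the general recipe, using MediaPipe's standard landmark ordering (index 0 is the wrist; 4, 8, 12, 16, 20 are fingertips).

```python
import numpy as np

def geometric_features(landmarks):
    """Distance features from 2-D hand landmarks (MediaPipe ordering).

    Distances are divided by the wrist-to-middle-fingertip length so
    the features are scale-invariant (camera distance cancels out).
    An illustrative subset, not the paper's 43-feature set.
    """
    pts = np.asarray(landmarks, dtype=np.float64)
    tips = [4, 8, 12, 16, 20]
    ref = np.linalg.norm(pts[12] - pts[0])   # normalizing length
    feats = [np.linalg.norm(pts[t] - pts[0]) / ref for t in tips]
    feats += [np.linalg.norm(pts[a] - pts[b]) / ref
              for a, b in zip(tips, tips[1:])]   # adjacent fingertip gaps
    return np.array(feats)
```

Vectors like this, computed per image, are what a small neural network classifier can be trained on to predict family membership.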

1 - 17 of 17 items


College of Engineering, University of Basrah

Licensing & Open Access

Licensed under CC BY 4.0

This journal provides immediate open access to its content.


Peer-review powered by Elsevier’s Editorial Manager®

Copyright © 2025 College of Engineering, University of Basrah. All rights reserved, including those for text and data mining, AI training, and similar technologies.