Iraqi Journal for Electrical and Electronic Engineering

Search Results for recognition

Article
Advancements and Challenges in Hand Gesture Recognition: A Comprehensive Review

Bothina Kareem Murad, Abbas H. Hassin Alasadi

Pages: 154-164

PDF Full Text
Abstract

Hand gesture recognition is a rapidly developing field with many applications in human-computer interaction, sign language recognition, virtual reality, gaming, and robotics. This paper reviews different ways to model hands, including vision-based, sensor-based, and data-glove-based techniques. It emphasizes the importance of accurate hand modeling and feature extraction for capturing and analyzing gestures. Key features such as motion, depth, color, shape, and pixel values, and their relevance to gesture recognition, are discussed. Challenges in hand gesture recognition include lighting variations, complex backgrounds, noise, and real-time performance. Machine learning algorithms are used to classify and recognize gestures based on the extracted features. The paper stresses the need for further research to improve the robustness, accuracy, and usability of hand gesture recognition systems. This review offers valuable insights into the current state of hand gesture recognition, its applications, and its potential to revolutionize human-computer interaction and enable natural and intuitive interaction between humans and machines. In simpler terms, hand gesture recognition is a way for computers to understand what people are saying with their hands; it could allow people to control computers without touching them or help people with disabilities communicate.

Article
The Effect of Using Projective Cameras on View-Independent Gait Recognition Performance

Fatimah S. Abdulsattar

Pages: 22-29

PDF Full Text
Abstract

Gait as a biometric can be used to identify subjects at a distance, and it therefore receives great attention from the research community for security and surveillance applications. One of the challenges that affects gait recognition performance is view variation, and much work has been done to tackle it. However, most of that work assumes that gait silhouettes are captured by affine cameras, where only the height of the silhouettes changes and the difference in viewing angle across one gait cycle is relatively small. In this paper, we analyze the variation in gait recognition performance when using silhouettes from projective cameras and from affine cameras at different distances from the center of a walking path. This is done by using 3D models of walking people in the gallery set and 2D gait silhouettes from independent (single) cameras in the probe set. Different factors that affect matching 3D human models with 2D gait silhouettes from single cameras for view-independent gait recognition are analyzed. In all experiments, we use 258 multi-view sequences belonging to 46 subjects from the Multi-View Soton gait dataset. We evaluate matching performance for 12 different views using the Gait Energy Image (GEI) as the gait feature. We then analyze the effect of different camera configurations for 3D model reconstruction, GEIs from cameras with different settings, the upper and lower body parts used for recognition, and different GEI resolutions. The results show that recognition performance degrades when using gait silhouettes from affine cameras and degrades further when using silhouettes from projective cameras.
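The Gait Energy Image used as the gait feature above is the per-pixel average of the aligned binary silhouettes over one gait cycle. A minimal numpy sketch (the 2x2 "silhouettes" are toy data, not from the dataset):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned binary silhouettes over one
    gait cycle to produce a single Gait Energy Image (GEI)."""
    stack = np.asarray(silhouettes, dtype=float)  # (frames, H, W)
    return stack.mean(axis=0)

# Toy example: two 2x2 "silhouettes" from one cycle.
gei = gait_energy_image([np.array([[1, 0], [1, 1]]),
                         np.array([[1, 1], [0, 1]])])
```

Pixels that are foreground in every frame stay at 1.0 in the GEI; pixels covered only part of the cycle take intermediate values, which is what encodes the motion.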

Article
Face Recognition Approach Based on the Integration of Image Preprocessing, CMLABP and PCA Methods

Yaqeen S. Mezaal

Pages: 104-113

PDF Full Text
Abstract

Face recognition is an automatic approach for recognizing a person from digital images, which are represented mathematically as matrices. It can be adopted to recognize facial appearance across different poses, facial expressions, ageing, and other changes. This paper presents an efficient face recognition model based on the integration of image preprocessing, the Co-occurrence Matrix of Local Average Binary Pattern (CMLABP), and Principal Component Analysis (PCA), applied in that order. The proposed model can be used to compare an input image with existing database images in order to display or record citizen information such as name, surname, and birth date. The recognition rate of the model is better than 99%; accordingly, the proposed face recognition system is suitable for criminal investigations. Furthermore, it has been compared with other reported works in the literature that use diverse databases and training images.

Article
Face Recognition System Against Adversarial Attack Using Convolutional Neural Network

Ansam Kadhi, Salah Al-Darraji

Pages: 1-8

PDF Full Text
Abstract

Face recognition is the technology that verifies or recognizes faces from images, videos, or real-time streams. It can be used in security or employee-attendance systems. Face recognition systems may encounter attacks that reduce their ability to recognize faces properly: noisy images mixed with original ones lead to confusion in the results. Various attacks exploit this weakness, such as the Fast Gradient Sign Method (FGSM), DeepFool, and Projected Gradient Descent (PGD). This paper proposes a method to protect the face recognition system against these attacks by distorting images with the different attacks and then training the recognition model, a Convolutional Neural Network (CNN), on both the original and distorted images. Diverse experiments have been conducted using combinations of original and distorted images to test the effectiveness of the system. The system achieved an accuracy of 93% under the FGSM attack, 97% under DeepFool, and 95% under PGD.
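The FGSM attack mentioned above perturbs each input pixel by a small step in the direction of the sign of the loss gradient. A toy sketch, using a hand-derived gradient for a logistic model rather than the paper's CNN (the weights, input, and epsilon below are made-up values):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: move each input feature by
    epsilon in the direction that increases the loss."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy logistic model p = sigmoid(w.x); for cross-entropy loss with
# true label y, the gradient of the loss w.r.t. x is (p - y) * w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([0.2, 0.8, 0.5])
y = 1.0
p = 1.0 / (1.0 + np.exp(-w @ x))
grad = (p - y) * w
x_adv = fgsm_perturb(x, grad, epsilon=0.1)
```

Training on such perturbed samples alongside the originals, as the paper does, is the standard adversarial-training recipe.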

Article
Human Activity and Gesture Recognition Based on WiFi Using Deep Convolutional Neural Networks

Sokienah K. Jawad, Musaab Alaziz

Pages: 110-116

PDF Full Text
Abstract

WiFi-based human activity and gesture recognition exploits the interaction between human hand or body movements and the reflected WiFi signals to identify various activities. This type of recognition has received much attention in recent years since it does not require wearing special sensors or installing cameras. This paper investigates human activity and gesture recognition schemes that use the Channel State Information (CSI) provided by WiFi devices. To achieve high accuracy, the deep learning models AlexNet, VGG19, and SqueezeNet were used for classification and automatic feature extraction. First, outliers are removed from the amplitude of each CSI stream during the preprocessing stage using the Hampel identifier algorithm. Next, RGB images are created for each activity to feed as input to the deep convolutional neural networks. Data augmentation is then applied to reduce overfitting in the deep learning models. Finally, the proposed method is evaluated on a publicly available dataset called WiAR, which contains 10 volunteers, each executing 16 activities. The experimental results demonstrate that AlexNet, VGG19, and SqueezeNet achieve high recognition accuracies of 99.17%, 96.25%, and 100%, respectively.
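The Hampel identifier used in the preprocessing stage flags a sample as an outlier when it deviates from the local median by more than a multiple of the (scaled) median absolute deviation. A minimal sketch (the window size and threshold are illustrative defaults, not necessarily the paper's settings):

```python
import numpy as np

def hampel_filter(x, window=3, n_sigmas=3.0):
    """Replace samples that deviate from the local median by more
    than n_sigmas * (scaled MAD) with that median."""
    x = np.asarray(x, dtype=float).copy()
    k = 1.4826  # scales MAD to the std. dev. of Gaussian data
    for i in range(len(x)):
        lo, hi = max(0, i - window), min(len(x), i + window + 1)
        med = np.median(x[lo:hi])
        mad = k * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            x[i] = med
    return x

# The spike at index 3 is replaced by the local median.
clean = hampel_filter([1.0, 1.1, 0.9, 25.0, 1.0, 1.05, 0.95])
```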

Article
Off-line Signature Recognition Using Weightless Neural Network and Feature Extraction

Ali Al-Saegh

Pages: 124-131

PDF Full Text
Abstract

The problem of automatic signature recognition and verification has been extensively investigated due to the vitality of this field of research. Handwritten signatures are broadly used in daily life as a secure means of personal identification. In this paper, a novel approach is proposed for handwritten signature recognition in an off-line environment based on a Weightless Neural Network (WNN) and feature extraction. This type of neural network is characterized by its simplicity in design and implementation, since no weights, transfer functions, or multipliers are required; implementing the WNN needs only Random Access Memory (RAM) slices. Moreover, the whole training process can be accomplished with a small number of training samples, each presented to the network only once. Employing the proposed approach in signature recognition yields promising results, with rates of 99.67% for recognizing signatures the network has been trained on and 99.55% for rejecting signatures it has not been trained on.
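A weightless, RAM-based neural network of the kind described can be sketched as a WiSARD-style n-tuple classifier: each RAM node is addressed by a small tuple of input bits, one-shot training marks the addressed cells, and recognition counts how many nodes fire. This is a generic illustration of the technique, not the paper's exact design:

```python
import random

def make_tuples(n_bits, tuple_size, seed=0):
    """Randomly partition input bit positions into n-tuples;
    each tuple addresses one RAM node."""
    rng = random.Random(seed)
    order = list(range(n_bits))
    rng.shuffle(order)
    return [order[i:i + tuple_size] for i in range(0, n_bits, tuple_size)]

class Discriminator:
    """One weightless discriminator: a set per RAM node recording
    which addresses were seen during (one-shot) training."""
    def __init__(self, tuples):
        self.tuples = tuples
        self.rams = [set() for _ in tuples]

    def _addresses(self, bits):
        for t, ram in zip(self.tuples, self.rams):
            yield ram, tuple(bits[i] for i in t)

    def train(self, bits):
        for ram, addr in self._addresses(bits):
            ram.add(addr)

    def score(self, bits):
        # Number of RAM nodes whose addressed cell was set in training.
        return sum(addr in ram for ram, addr in self._addresses(bits))

tuples = make_tuples(n_bits=8, tuple_size=2)
d = Discriminator(tuples)
d.train([1, 1, 1, 1, 0, 0, 0, 0])  # one presentation suffices
```

A full recognizer would keep one discriminator per signature class and pick the highest score, which matches the "train once, no weights" property the abstract highlights.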

Article
License Plate Detection and Recognition in Unconstrained Environment Using Deep Learning

Heba Hakim, Zaineb Alhakeem, Hanadi Al-Musawi, Mohammed A. Al-Ibadi, Alaa Al-Ibadi

Pages: 210-220

PDF Full Text
Abstract

Real-time detection and recognition systems for vehicle license plates present a significant design and implementation challenge, arising from factors such as low image resolution, data noise, and varying weather and lighting conditions. This study presents an efficient automated system for the identification and classification of vehicle license plates using deep learning techniques. The system is specifically designed for Iraqi vehicle license plates, adapting to various backgrounds, different font sizes, and non-standard formats, and is intended for integration into an automated entrance-gate security system. The framework encompasses two primary phases: license plate detection (LPD) and character recognition (CR). The deep learning technique YOLOv4 is used for both phases owing to its real-time processing speed and its precision in detecting small objects such as license-plate characters. The LPD phase identifies and isolates license plates from images, whereas the CR phase identifies and extracts characters from the detected plates. A substantial dataset comprising Iraqi vehicle images captured under various lighting and weather conditions was collected for training and testing. The system attained an accuracy of 95.07%, with an average end-to-end processing time of 118.63 milliseconds on the dataset, highlighting its suitability for real-time applications. The results suggest that the proposed system can significantly enhance the efficiency and reliability of vehicle license plate recognition in various environmental conditions, making it suitable for security and traffic management contexts.

Article
A k-Nearest Neighbor Based Algorithm for Human Arm Movements Recognition Using EMG Signals

Mohammed Z. Al-Faiz, Abduladhem A. Ali, Abbas H. Miry

Pages: 158-166

PDF Full Text
Abstract

In a human-robot interface, prediction of motion based on the context information of a task has the potential to improve the robustness and reliability of motion classification used to control human-assisting manipulators. The objective of this work is to achieve better classification with multiple parameters using the K-Nearest Neighbor (K-NN) algorithm for different movements of a prosthetic arm. The proposed structure is simulated using MATLAB R2009a, and satisfactory results are obtained compared with a conventional recognition method based on an Artificial Neural Network (ANN). Results show the proposed K-NN technique achieves uniformly good performance relative to the ANN in terms of time, which is important in recognition systems, and better recognition accuracy when applied to lower Signal-to-Noise Ratio (SNR) signals.
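The K-NN classifier at the core of the approach assigns a movement class by majority vote among the nearest training feature vectors. A minimal sketch (the EMG-style feature vectors and class labels below are made up for illustration):

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k
    nearest training samples (Euclidean distance)."""
    dists = np.linalg.norm(np.asarray(train_X) - np.asarray(x), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D features (e.g. mean absolute value, zero crossings).
X = [[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8], [1.1, 1.1]]
y = ["rest", "rest", "grip", "grip", "grip"]
label = knn_classify(X, y, [0.95, 0.9], k=3)
```

Unlike an ANN, this needs no training phase at all, which is one plausible source of the timing advantage the abstract reports.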

Article
Hierarchical Arabic Phoneme Recognition Using MFCC Analysis

Intessar T. Hwaidy, Prof. Dr. Abduladhem A. Ali

Pages: 97-106

PDF Full Text
Abstract

In this paper, a hierarchical Arabic phoneme recognition system is proposed in which Mel Frequency Cepstrum Coefficient (MFCC) features are used to train a hierarchical neural network architecture. Separate neural networks (subnetworks) are recursively trained to recognize subsets of phonemes, and the overall recognition is a combination of the outputs of these subnetworks. Experiments exploring the performance of the proposed hierarchical system in comparison to non-hierarchical (flat) baseline systems are also presented.

Article
Emotion Recognition Based on Mining Sub-Graphs of Facial Components

Suhaila N. Mohammed, Alia K. Abdul Hassan

Pages: 39-48

PDF Full Text
Abstract

Facial emotion recognition has many real applications in daily life, such as human-robot interaction, eLearning, healthcare, and customer services. The task is not easy due to the difficulty of determining an effective feature set that can accurately recognize the emotion conveyed by a facial expression. Graph mining techniques are exploited in this paper to solve the facial emotion recognition problem. After determining the positions of facial landmarks in the face region, twelve different graphs are constructed from four facial components to serve as the source for a sub-graph mining stage using the gSpan algorithm. In each group, a discriminative set of sub-graphs is selected and fed to a Deep Belief Network (DBN) for classification. The results obtained from the different groups are then fused using a Naïve Bayes classifier to make the final decision regarding the emotion class. Tests performed on the Surrey Audio-Visual Expressed Emotion (SAVEE) database show that the system achieves the desired accuracy (100%) when fusing the decisions of the facial groups, outperforming state-of-the-art results on the same database.

Article
Indoor Low Cost Assistive Device using 2D SLAM Based on LiDAR for Visually Impaired People

Heba Hakim, Ali Fadhil

Pages: 115-121

PDF Full Text
Abstract

Many assistive devices have been developed in recent years for visually impaired (VI) people to solve the problems they face in daily movement. Most research tries to solve the obstacle avoidance or navigation problem, while other work focuses on helping the VI person recognize the objects in the surrounding environment; few systems integrate both navigation and recognition capabilities. Accordingly, this paper presents an assistive device that achieves both capabilities, aiding the VI person to (1) navigate safely from his/her current location (pose) to a desired destination in an unknown environment, and (2) recognize surrounding objects. The proposed system consists of low-cost sensors, a Neato XV-11 LiDAR, an ultrasonic sensor, and a Raspberry Pi camera (CameraPi), mounted on a white cane. Hector SLAM based on 2D LiDAR is used to construct a 2D map of the unfamiliar environment, and the A* path planning algorithm generates an optimal path on the given 2D Hector map. Temporary obstacles in front of the VI person are detected by the ultrasonic sensor. A recognition system based on a Convolutional Neural Network (CNN) is implemented to predict object classes and enhance the navigation system. Interaction between the VI person and the assistive system is done through an audio module (speech recognition and speech synthesis). The system's performance has been evaluated in various real-time experiments conducted in indoor scenarios, showing its efficiency.
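The A* planner used on the 2D map expands grid cells in order of cost-so-far plus a heuristic estimate to the goal. A minimal sketch on a toy occupancy grid (4-connected moves, Manhattan heuristic; the grid is illustrative, not a real Hector SLAM map):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start, [start])]
    seen = {start: 0}
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
               and grid[nr][nc] == 0 and g + 1 < seen.get((nr, nc), 1e9):
                seen[(nr, nc)] = g + 1
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # must route around the obstacle row
```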

Article
Face Recognition-Based Automatic Attendance System in a Smart Classroom

Ahmad S. Lateef, Mohammed Y. Kamil

Pages: 37-47

PDF Full Text
Abstract

The smart classroom is a fully automated classroom where repetitive tasks, including attendance registration, are performed automatically. With recent advances in artificial intelligence, traditional attendance registration methods, which require significant time and effort, are being replaced. Researchers have sought alternative ways to accomplish attendance registration, including identification cards, radio frequency, and biometric systems; however, all of these methods face challenges in safety, accuracy, effort, time, and cost. The development of digital image processing techniques, specifically face recognition technology, has enabled automated attendance registration. Face recognition is considered the most suitable approach because it can recognize multiple faces simultaneously. This study developed an integrated attendance registration system based on the YOLOv7 algorithm, which extracts features and recognizes students' faces using a specially collected database of 31 students from Mustansiriyah University. A comparative study was conducted by applying the YOLOv7 algorithm, a machine learning algorithm, and a combined machine learning and deep learning algorithm. The proposed method achieved an accuracy of up to 100%. A comparison with previous studies demonstrated that the proposed method is promising and reliable for automating attendance registration.

Article
Design and Implementation of Locations Matching Algorithm for Multi-Object Recognition and Localization

Abdulmuttalib T. Rashid, Wael H. Zayer, Mofeed T. Rashid

Pages: 10-21

PDF Full Text
Abstract

A new algorithm for multi-object recognition and localization is introduced in this paper. The algorithm deals with objects that have different reflectivity factors and colors distinguishable from the other objects. Two beacons scan the multi-colored objects using long-distance IR sensors to estimate their absolute locations; these beacon nodes are placed at two corners of the environment. Objects are recognized by matching the location of each object with respect to the two beacons. A look-up table containing distance information for the different colored objects is used to convert the long-distance IR sensor reading from voltage to distance units. The locations of invisible objects are computed using the absolute-locations method. The performance of the introduced algorithm is tested in several experimental scenarios implemented with colored objects.
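Converting the IR sensor reading from voltage to distance via a look-up table amounts to interpolating between calibration points. A sketch with hypothetical calibration values (real tables would come from measuring each sensor against known distances):

```python
import numpy as np

# Hypothetical calibration: IR sensor voltage (V) vs. distance (cm);
# for these sensors, voltage typically falls as distance grows.
voltages = [2.5, 2.0, 1.5, 1.0, 0.5]
distances = [20.0, 30.0, 45.0, 70.0, 120.0]

def voltage_to_distance(v):
    """Linearly interpolate the calibration table; np.interp needs
    ascending x values, so the table is reversed."""
    return float(np.interp(v, voltages[::-1], distances[::-1]))

d = voltage_to_distance(1.75)  # halfway between the 1.5 V and 2.0 V entries
```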

Article
ECG Signal Recognition Based on Wavelet Transform Using Neural Networks and Fuzzy Systems

Haider Mehdi Abdul-Ridha, Abduladhem A. Ali

Pages: 86-91

PDF Full Text
Abstract

This work presents a neural- and fuzzy-based ECG signal recognition system built on the wavelet transform. The coefficients suitable as features for each fuzzy network or neural network are found using a proposed best-basis technique. Using the proposed best bases reduces the dimension of the input vector and hence the complexity of the classifier. The fuzzy network and neural network parameters are learned using the backpropagation algorithm.

Article
Bin Object Recognition Using Image Matrix Decomposition and Neural Networks

Hema CR, Paulraj M., R. Nagarajan, Sazali Yaacob

Pages: 60-64

PDF Full Text
Abstract

Bin-picking robots require vision sensors capable of recognizing objects in the bin irrespective of their orientation and pose. Bin-picking systems remain a challenge to the robot vision research community due to the complexity of segmenting occluded industrial objects and of recognizing segmented objects with irregular shapes. In this paper, a simple object recognition method is presented for a bin-picking vision system, using singular value decomposition of the object image matrix and a functional link neural network. The results of the functional link net are compared with those of a simple feed-forward net; both networks are trained using the error backpropagation procedure. The proposed method is robust for object recognition.
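Singular value decomposition of the image matrix, as used here, yields a small set of singular values that can serve as a compact feature vector for the network. A minimal numpy sketch (the 2x2 "image" is a toy example):

```python
import numpy as np

def svd_features(image, k=2):
    """Use the k largest singular values of the image matrix as a
    compact feature vector; they are invariant to image transposition
    and orthogonal transforms, which helps with varying object pose."""
    s = np.linalg.svd(np.asarray(image, dtype=float), compute_uv=False)
    return s[:k]

img = np.array([[3.0, 0.0],
                [0.0, 2.0]])
feats = svd_features(img)
```

The feature vector would then be fed to the functional link or feed-forward network for classification.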

Article
New Architectures and Algorithm for Optical Pattern Recognition using Joint Transform Correlation Technique

Prof. Dr. R. S. Fyath, Kh. N. Darraj

Pages: 33-50

PDF Full Text
Abstract

Recently, there has been increasing interest in using the joint transform correlation (JTC) technique for optical pattern recognition. In this technique, the target and reference images are joined together in the input plane, and no matched filter is required. In this paper, the JTC is investigated through simulation. A new discrimination decision algorithm is proposed to recognize the correlation output for different (dissimilar) object shapes. New architectures are also proposed to overcome the main problems of the conventional JTC.

Article
A Biometric System for Iris Recognition Based on Fourier Descriptors and Principle Component Analysis

Muthana H. Hamd, Samah K. Ahmed

Pages: 180-187

PDF Full Text
Abstract

The iris pattern is one of the most important biological traits of humans. In recent years, the iris pattern has been used for human verification because of the uniqueness of its texture. In this paper, a biometric system based on iris recognition is designed and implemented using two comparative approaches. The first is Fourier descriptors, in which the iris features are extracted in the frequency domain: the low spectrum captures the general description of the iris pattern, while the high spectrum describes the fine detail. The second is principal component analysis, a statistical technique that selects the most important feature values by reducing dimensionality. The biometric system is tested by applying one-to-one pattern matching for 50 persons, using Manhattan, Euclidean, and Cosine distance classifiers for comparison. With all three classifiers, Fourier descriptors consistently outperformed principal component analysis, achieving 96%, 94%, and 86% correct matching against 94%, 92%, and 80% for principal component analysis with the Manhattan, Euclidean, and Cosine classifiers, respectively.
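Fourier descriptors generally treat a sampled contour as a complex sequence and keep the low-frequency FFT magnitudes as features, exactly the low-spectrum/high-spectrum split the abstract describes. A generic sketch on a synthetic circular boundary (not iris data, and not necessarily the paper's exact formulation):

```python
import numpy as np

def fourier_descriptors(boundary, n=4):
    """Treat boundary points (x, y) as complex numbers, take the FFT,
    and keep magnitudes of the lowest n non-DC coefficients.
    Dropping phase and the DC term gives translation and
    starting-point tolerance."""
    z = np.asarray([complex(x, y) for x, y in boundary])
    spectrum = np.fft.fft(z)
    return np.abs(spectrum[1:n + 1]) / len(z)

# Unit circle sampled at 8 points: all energy lands in the first
# non-DC coefficient, so the descriptor is [1, 0, 0, 0].
t = np.linspace(0, 2 * np.pi, 8, endpoint=False)
fd = fourier_descriptors(list(zip(np.cos(t), np.sin(t))))
```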

Article
Identifying Discourse Elements in Writing by Longformer for NER Token Classification

Alia Salih Alkabool, Sukaina Abdul Hussain Abdullah, Sadiq Mahdi Zadeh, Hani Mahfooz

Pages: 87-92

PDF Full Text
Abstract

Current automatic writing feedback systems cannot distinguish between different discourse elements in students' writing. Without this ability, the guidance these systems provide is too general for what students are trying to achieve. This is cause for concern because automated writing feedback systems are a valuable tool for combating declines in student writing: according to the National Assessment of Educational Progress, less than 30 percent of high school graduates are proficient writers. If we can improve automatic writing feedback systems, we can improve the quality of student writing. Proposed solutions to this problem most commonly fine-tune Bidirectional Encoder Representations from Transformers (BERT) models to recognize the various discourse elements in student written assignments. However, these methods have drawbacks: they do not compare the strengths and weaknesses of different models, and they train models over sequences (sentences) rather than entire essays. In this article, I restructure the Persuasive Essays for Rating, Selecting, and Understanding Argumentative and Discourse Elements corpus so that models can be trained on entire essays, and I evaluate BERT, the Longformer (Long-Document Transformer), and the Generative Pre-trained Transformer 2 (GPT-2) models on discourse classification framed as a named entity recognition (NER) token classification problem. Overall, the BERT model trained using my sequence-merging preprocessing method outperforms the standard model by 17% and 41% in overall accuracy. I also found that the Longformer model performed best on discourse classification, with an overall F1 score of 54%. However, the increase in validation loss from 0.54 to 0.79 indicates that the model is overfitting. Further improvements could address this overfitting, for example by implementing early stopping and by providing more training examples of rare discourse elements.

Article
Second-Order Statistical Methods GLCM for Authentication Systems

Mohammed A. Taha, Hanaa M. Ahmed

Pages: 88-93

PDF Full Text
Abstract

Biometric systems have gained considerable attention for many uses. Iris identification is one of the most powerful biometric techniques for effective and confident authentication. Current iris identification systems offer accurate and reliable results on near-infrared (NIR) images taken in a restricted area with fixed-distance user cooperation. However, for color eye images obtained under visible wavelength (VW) light without user collaboration, the efficiency of iris recognition degrades because of noise such as blur, eyelashes, occlusion, and reflection. This work uses the Gray-Level Co-occurrence Matrix (GLCM), a second-order statistical method for texture analysis, to extract the iris's characteristics in both NIR and visible-spectrum iris images. GLCM-based extraction is applied after preprocessing to extract the characteristics of the pure iris region. The second-order statistical features Energy, Entropy, Correlation, Homogeneity, and Contrast are computed from the generated co-occurrence matrix and stored as a numerical feature vector. The approach is evaluated on the CASIA v1 and IITD v1 databases as NIR iris images and on UBIRIS v1 as color images. Compared with other methods, the results showed high accuracy rates of 99.2% on CASIA v1, 99.4% on IITD v1, and 87% on UBIRIS v1.
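A GLCM tallies how often each pair of gray levels co-occurs at a fixed pixel offset; features such as Contrast are then computed from the normalized matrix. A minimal sketch for a 2-level toy image and a horizontal offset (real iris images would use more gray levels and several offsets):

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one pixel offset,
    normalized to a joint probability table."""
    img = np.asarray(image)
    m = np.zeros((levels, levels))
    h, w = img.shape
    for r in range(h - dy):
        for c in range(w - dx):
            m[img[r, c], img[r + dy, c + dx]] += 1
    return m / m.sum()

def contrast(p):
    """One of the second-order features: weights co-occurrences
    by the squared gray-level difference."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

img = [[0, 0, 1],
       [1, 1, 0]]
p = glcm(img, levels=2)
```

Energy, Entropy, Correlation, and Homogeneity are computed from the same matrix `p` with analogous weighted sums.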

Article
A Comprehensive Review of Image Segmentation Techniques

Salwa Khalid Abdulateef, Mohanad Dawood Salman

Pages: 166-175

PDF Full Text
Abstract

Image segmentation is a wide research topic, and a huge amount of research has been performed in this context. Image segmentation is a crucial procedure because most object detection, image recognition, feature extraction, and classification tasks depend on the quality of the segmentation process. Image segmentation divides an image into a number of homogeneous segments; representing an image in simple, easy forms increases the effectiveness of pattern recognition. The effectiveness of approaches varies according to object arrangement, lighting, shadow, and other factors, and there is no generic approach that successfully segments all images, though some approaches have proven more effective than others. The major goal of this study is to summarize the advantages and disadvantages of each of the reviewed image segmentation approaches.

Article
A Review on Voice-based Interface for Human-Robot Interaction

Ameer A. Badr, Alia K. Abdul-Hassan

Pages: 91-102

PDF Full Text
Abstract

With recent developments in technology and advances in artificial intelligence and machine learning, it has become possible for robots to understand and respond to voice as part of Human-Robot Interaction (HRI). A voice-based interface robot can recognize speech from humans and thus interact more naturally with its human counterpart in different environments. This work reviews voice-based interfaces for HRI systems, focusing on voice-based perception from three facets: feature extraction, dimensionality reduction, and semantic understanding. For feature extraction, numerous types of features are reviewed across the time, frequency, cepstral (i.e., the inverse Fourier transform of the logarithm of the signal spectrum), and deep domains. For dimensionality reduction, subspace learning can eliminate the redundancies of high-dimensional features by further processing the extracted features to better reflect their semantic information. For semantic understanding, the aim is to infer objects or human behaviors from the extracted features; the types reviewed include speech recognition, speaker recognition, speaker gender detection, speaker gender and age estimation, and speaker localization. Finally, existing voice-based interface issues and recommendations for future work are outlined.

Article
Iraqi License Plate Detection and Segmentation based on Deep Learning

Ghida Yousif Abbass, Ali Fadhil Marhoon

Pages: 102-107

PDF Full Text
Abstract

Nowadays, the trend is to utilize Artificial Intelligence techniques in place of the human mind in problem solving. Vehicle License Plate Recognition (VLPR) is one such problem, in which the computer outperforms the human being in processing speed and accuracy of results, and the emergence of deep learning techniques enhances and simplifies the task. This work focuses on detecting Iraqi license plates with the SSD deep learning algorithm, segmenting the plate using horizontal and vertical shredding, and finally applying the K-Nearest Neighbors (KNN) algorithm to specify the type of car. The proposed system was evaluated on a group of 500 different Iraqi vehicles, achieving 98% accuracy for plate detection and 96% for the segmentation operation.
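Horizontal and vertical "shredding" of the detected plate can be read as projection-profile segmentation: summing the binary plate image along rows or columns and cutting at the zero-valued gaps between characters. A minimal sketch (the tiny binary "plate" is illustrative, not a real plate image):

```python
import numpy as np

def split_by_projection(binary, axis=1, min_run=1):
    """Segment a binary image by finding runs of nonzero projection.
    axis=1 sums each row (horizontal profile, splits into lines);
    axis=0 sums each column (vertical profile, splits into characters)."""
    profile = np.asarray(binary).sum(axis=axis)
    segments, start = [], None
    for i, v in enumerate(profile):
        if v > 0 and start is None:
            start = i                       # run of ink begins
        elif v == 0 and start is not None:
            if i - start >= min_run:
                segments.append((start, i))  # run of ink ends
            start = None
    if start is not None:
        segments.append((start, len(profile)))
    return segments

# Two "characters" separated by a blank column.
plate = np.array([[1, 1, 0, 1],
                  [1, 0, 0, 1]])
cols = split_by_projection(plate, axis=0)
```

Each returned (start, end) column range would then be cropped and passed to the KNN classifier.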

Article
Feature Extraction Methods for IC Chip Marking Inspection: A Comparison

R. Nagarajan, Sazali Yaacob, Paulraj Pandian, Mohamed Rizon, M. Karthigayan

Pages: 7-18

PDF Full Text
Abstract

In this paper, an industrial machine vision system incorporating Optical Character Recognition (OCR) is employed to inspect the marking on Integrated Circuit (IC) chips as they come off the manufacturing line. A TSSOP-DGG IC package from Texas Instruments is used in the investigation. The inspection has to identify print errors such as illegible characters, missing characters, and upside-down printing. Vision inspection of the printed markings is carried out in three phases: image preprocessing, feature extraction, and classification. Projection profiles and moments are employed for feature extraction, and a neural network is used as a classifier to detect defectively marked IC chips. The two feature extraction methods are compared in terms of marking inspection time.

Article
Recognition of Cardiac Arrhythmia using ECG Signals and Bio-inspired AWPSO Algorithms

Jyothirmai Digumarthi, V. M. Gayathri, R. Pitchai

Pages: 95-103

PDF Full Text
Abstract

Studies indicate cardiac arrhythmia is one of the leading causes of death in the world. The risk of a stroke may be reduced when an irregular and fast heart rate is diagnosed early. Since they are non-invasive, electrocardiograms are often used to detect arrhythmias; however, manual interpretation of the recordings is error-prone and time-consuming, so deep learning models are well suited to early detection of heart rhythm problems. In this paper, a hybrid bio-inspired algorithm is proposed by combining whale optimization (WOA) with adaptive particle swarm optimization (APSO). The WOA is a recently developed meta-heuristic algorithm, while APSO is used to increase convergence speed. Compared to conventional optimization methods, the two techniques work better together. The MIT-BIH dataset has been utilized for training, testing, and validating this model. Recall, accuracy, and specificity are used to measure the efficiency of the proposed method. The proposed method is compared with state-of-the-art methods and achieves 98.25% accuracy.
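The APSO half of the hybrid can be sketched as a generic particle swarm with a decaying inertia weight standing in for the paper's adaptation scheme; the WOA component and the ECG pipeline are omitted, and the toy objective below is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def apso_step(pos, vel, pbest, gbest, w, c1=2.0, c2=2.0):
    """One adaptive-PSO update: inertia w is passed per iteration so it
    can be adapted (here, linearly decreased) to speed convergence."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

# Minimise a toy objective f(x) = x^2 with 5 one-dimensional particles
f = lambda x: x ** 2
pos = rng.uniform(-5, 5, (5, 1))
vel = np.zeros((5, 1))
pbest = pos.copy()                               # per-particle best positions
gbest = pos[np.argmin(f(pos).ravel())].copy()    # swarm-wide best position
for it in range(50):
    w = 0.9 - 0.5 * it / 50                      # adaptive (decaying) inertia
    pos, vel = apso_step(pos, vel, pbest, gbest, w)
    improved = f(pos) < f(pbest)
    pbest = np.where(improved, pos, pbest)
    if f(pos).min() < f(gbest).min():
        gbest = pos[np.argmin(f(pos).ravel())].copy()
print(gbest[0])                                  # best position found
```

In the paper's setting the objective would score classifier performance on ECG features rather than a closed-form function.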

Article
Control of Robot Directions Based on Online Hand Gestures

Mohammed A. Tawfeeq, Ayam M. Abbass

Pages: 41-50

PDF Full Text
Abstract

The evolution of wireless communication technology increases human-machine interaction capabilities, especially in controlling robotic systems. This paper introduces an effective wireless system for controlling the directions of a wheeled robot based on online hand gestures. The hand gesture images are captured and processed to be recognized and classified using a neural network (NN). The NN is trained using extracted features to distinguish five different gestures; accordingly, it produces five different signals. These signals are transmitted to control the directions of the cited robot. The main contribution of this paper is that the technique used to recognize hand gestures requires only two features, which can be extracted in a very short time using a quite simple methodology; this makes the proposed technique well suited for online interaction. In this methodology, the preprocessed image is partitioned column-wise into two half segments, and one feature is extracted from each half. This feature represents the ratio of white to black pixels of the segment histogram. The NN showed very high accuracy in recognizing all of the proposed gesture classes. The NN output signals are transmitted to the robot microcontroller wirelessly using Bluetooth; accordingly, the microcontroller guides the robot in the desired direction. The overall system showed high performance in controlling the robot's movement directions.
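The two-feature extraction described above can be sketched directly (a minimal sketch, assuming a binarized input where white pixels are 1):

```python
import numpy as np

def gesture_features(binary_img):
    """Split the preprocessed (binarized) image column-wise into two
    halves and return, for each half, the ratio of white to black
    pixels -- the two features fed to the classifier."""
    h, w = binary_img.shape
    feats = []
    for half in (binary_img[:, : w // 2], binary_img[:, w // 2 :]):
        white = int(half.sum())                  # white pixels in this half
        black = half.size - white                # black pixels in this half
        feats.append(white / black if black else float("inf"))
    return feats

img = np.array([[1, 1, 0, 0],
                [1, 0, 0, 0]])
print(gesture_features(img))                     # [3.0, 0.0]
```

Because each feature is a single sum and a division, the whole descriptor costs one pass over the image, which is why the method suits online interaction.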

Article
Feature Deep Learning Extraction Approach for Object Detection in Self-Driving Cars

Namareq Odey, Ali Marhoon

Pages: 62-69

PDF Full Text
Abstract

Self-driving cars have been a fundamental research subject in recent years; the ultimate goal is to completely replace the human driver with automated systems. At the same time, deep learning techniques have demonstrated performance and effectiveness in several areas. The strength of self-driving cars has been deeply investigated in many areas, including object detection, localization, and activity recognition. This paper provides a deep learning approach that combines the benefits of a convolutional neural network (CNN) with dense layers. The approach learns from features extracted with linear discriminant analysis (LDA) combined with feature expansion techniques, namely: standard deviation, min, max, mode, variance, and mean. The presented approach has proven its success on both training and testing data, achieving 100% accuracy on both.
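The feature-expansion stage can be sketched as follows (the LDA step and the CNN are omitted, and the raw vector below is a toy assumption):

```python
import numpy as np

def expand_features(x):
    """Append the statistical expansion terms named in the paper
    (std, min, max, mode, variance, mean) to a raw feature vector."""
    vals, counts = np.unique(x, return_counts=True)
    mode = vals[np.argmax(counts)]               # most frequent value
    stats = [x.std(), x.min(), x.max(), mode, x.var(), x.mean()]
    return np.concatenate([x, stats])

x = np.array([1.0, 2.0, 2.0, 5.0])               # toy LDA output vector
print(expand_features(x).shape)                  # (10,): 4 raw + 6 expansion
```

The expanded vector, rather than the raw one, is what the CNN-plus-dense classifier would consume.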

Article
A Comparative Study of Deep Learning Methods-Based Object/Image Categorization

Saad Albawi, Layth Kamil Almajmaie, Ali J. Abboud

Pages: 168-177

PDF Full Text
Abstract

In recent years, there has been a considerable rise in applications for which object or image categorization is beneficial, for example, analyzing medical images, assisting people in organizing their photo collections, recognizing what is around self-driving vehicles, and many more. These applications necessitate accurately labeled datasets, which in most cases involve an extensive diversity of image types, from cats or dogs to roads, landscapes, and so forth. The fundamental aim of image categorization is to predict the category or class of an input image by specifying to which class it belongs. For human beings this is not a considerable task; however, teaching computers to perceive is a hard problem that has become a broad area of research interest, and both computer vision techniques and deep learning algorithms have evolved to address it. Conventional techniques utilize local descriptors for finding likeness between images; nowadays, however, progress in technology has enabled the utilization of deep learning algorithms, especially Convolutional Neural Networks (CNNs), to auto-extract representative image patterns and features for classification. The fundamental aim of this paper is to inspect and explain how to utilize deep learning algorithms and technologies to accurately classify a dataset of images into their respective categories while keeping model complexity to a minimum, and to specify the best deep learning-based models for image processing and categorization. CNN-based models have been developed, and several pre-trained models (VGG19, DenseNet201, ResNet152V2, MobileNetV2, and InceptionV3) have been presented; all these models are trained on the Caltech-101 and Caltech-256 datasets.
Extensive comparative experiments were conducted on these datasets, and the obtained results demonstrate the effectiveness of the proposed models: the accuracy for the Caltech-101 and Caltech-256 datasets was 98.06% and 90%, respectively.
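All of the architectures compared above rest on the same core operation; a minimal sketch of that operation (a single "valid" convolution pass, not any of the listed models) shows how a learned kernel auto-extracts a local pattern:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation -- the operation a CNN layer applies
    at every position to extract local patterns such as edges."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i : i + kh, j : j + kw] * kernel).sum()
    return out

edge = np.array([[1.0, -1.0]])                   # horizontal-edge kernel
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
print(conv2d(img, edge))                         # responds at the 0->1 boundary
```

In a trained CNN the kernel values are learned from data rather than hand-set, and hundreds of such kernels are stacked per layer.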

Article
Human Activity Recognition Using The Human Skeleton Provided by Kinect

Heba A. Salim, Musaab Alaziz, Turki Y. Abdalla

Pages: 183-189

PDF Full Text
Abstract

In this paper, a new method is proposed for people tracking using the human skeleton provided by the Kinect sensor. Our method is based on skeleton data, which includes the coordinate value of each joint in the human body. For data classification, the Support Vector Machine (SVM) and Random Forest techniques are used. To achieve this goal, 14 classes of movements are defined, and the Kinect sensor is used to extract data containing 46 features, which are then used to train the classification models. The system was tested on 12 subjects, each of whom performed the 14 movements in each experiment. Experimental results show that the best average accuracy is 90.2% for the SVM model and 99% for the Random Forest model. From the experiments, we concluded that the best distance between the Kinect sensor and the human body is one meter.
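The paper's 46 features are not enumerated here; as an illustrative stand-in, a skeleton pose can be turned into a feature vector of pairwise joint distances:

```python
import numpy as np

def joint_distance_features(skeleton):
    """Turn an (n_joints, 3) array of joint coordinates into a pose
    descriptor: all pairwise Euclidean distances between joints.
    (Illustrative only -- not the paper's exact 46 features.)"""
    n = skeleton.shape[0]
    feats = [np.linalg.norm(skeleton[i] - skeleton[j])
             for i in range(n) for j in range(i + 1, n)]
    return np.array(feats)

# Hypothetical (x, y, z) coordinates for three joints
pose = np.array([[0.0, 0.0, 1.0],                # e.g. head
                 [0.0, 0.5, 1.0],                # e.g. shoulder
                 [0.3, 0.5, 1.0]])               # e.g. elbow
print(joint_distance_features(pose))             # 3 pairwise distances
```

Such distance vectors are scale-sensitive but position-invariant, which is one reason skeleton-based descriptors work across subjects standing at a fixed range (here, one meter).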

Article
A Fast and Accurate Method for Power System Voltage Sag Detection

Adnan Romi Diwan, Khalid M. Abdulhassan, Falih M. Alnahwi

Pages: 78-84

PDF Full Text
Abstract

In order to mitigate the effect of voltage sag on sensitive loads, a dynamic voltage restorer (DVR) should be used. The DVR should be accompanied by a fast and accurate sag detection circuit or algorithm to determine the sag information as quickly as possible with acceptable precision. This paper presents the numerical matrix method as a distinctive candidate for voltage sag detection. The design steps of this method are demonstrated in detail in this work. The simulation results exhibit the superiority of this technique over other detection techniques in terms of detection speed and accuracy, simplicity of implementation, and memory size. The results also accentuate the recognition capability of the proposed method in distinguishing different types of voltage sag by testing three different voltage sag scenarios.
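For comparison, the baseline the paper improves upon can be sketched with the classic sliding one-cycle RMS detector; note this is a stand-in for illustration, not the paper's numerical matrix method:

```python
import numpy as np

def detect_sag(samples, fs, f0=50.0, vrms_nominal=1.0, threshold=0.9):
    """Flag voltage sag wherever the sliding one-cycle RMS drops below
    90% of nominal -- the classic RMS detection method."""
    n = int(fs / f0)                             # samples per fundamental cycle
    rms = np.sqrt(np.convolve(samples ** 2, np.ones(n) / n, "valid"))
    return rms < threshold * vrms_nominal

fs, f0 = 5000, 50
t = np.arange(0, 0.1, 1 / fs)
v = np.sqrt(2) * np.sin(2 * np.pi * f0 * t)      # 1.0 p.u. RMS sine
v[len(v) // 2 :] *= 0.5                          # 50% sag in the second half
flags = detect_sag(v, fs)
print(flags.any(), flags[:10].any())             # True False
```

The RMS method's weakness is visible in the sketch: the one-cycle averaging window delays detection by up to a full cycle, which is the latency a faster detector aims to beat.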

Article
FINGERPRINTS IDENTIFICATION USING NEUROFUZZY SYSTEM

Emad S. Jabber, Maytham A. Shahed

Pages: 89-102

PDF Full Text
Abstract

This paper deals with a NeuroFuzzy System (NFS) used for fingerprint identification to determine a person's identity. Each fingerprint is represented by an 8-bit/pixel grayscale image acquired by a scanner device. Several operations are performed on the input image before it is presented to the NFS: enhancement of the noisy or distorted fingerprint image, and scaling the image to a suitable size; the maximum values of the pixels in the grayscale image then represent the inputs to the NFS. The NFS is trained on one set of fingerprints and tested on another set to illustrate its efficiency in identifying new fingerprints. The results proved that the NFS is an effective and simple method, but there are several factors that affect the efficiency of NFS learning, and it has been noticed that changing one of these factors affects the NFS results. These factors are: the number of training samples for each person, the type and number of membership functions, and the type of fingerprint image used.
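As an illustration of one of those factors, a triangular membership function (one common choice in neuro-fuzzy systems; the paper does not specify which shapes it used) can be written as:

```python
def tri_mf(x, a, b, c):
    """Triangular fuzzy membership function with feet a, c and peak b.
    Returns the degree (0..1) to which x belongs to the fuzzy set."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which a normalized pixel value 0.6 is "medium"
print(round(tri_mf(0.6, 0.25, 0.5, 0.75), 3))    # 0.6
```

Swapping this for, say, a Gaussian membership function, or changing how many such sets partition each input, is exactly the kind of change the paper observes to shift NFS accuracy.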

Article
A Dataset for Kinship Estimation from Image of Hand Using Machine Learning

Sarah Ibrahim Fathi, Mazin H. Aziz

Pages: 127-136

PDF Full Text
Abstract

Kinship (familial relationship) detection is crucial in many fields and has applications in biometric security, adoption, forensic investigations, and more. It is also essential during wars and natural disasters such as earthquakes, since it may aid in reunion, missing-person searches, establishing emergency contacts, and providing psychological support. The most common method of determining kinship is DNA analysis, which is highly accurate. Another, noninvasive approach uses facial photos with computer vision and machine learning algorithms for kinship estimation. Each part of the human body has its own embedded information that can be extracted and adopted for identification, verification, or classification of a person, and kinship recognition is based on finding traits shared within a family. We investigate the use of hand geometry for kinship detection, which is a new approach. Because the available hand image datasets do not contain kinship ground truth, we created our own dataset. This paper describes the tools, methodology, and details of the collected MKH (Mosul Kinship Hand) image dataset. The images of the MKH dataset were collected using a mobile phone camera with a suitable setup and consist of 648 images of 81 individuals from 14 families (8 hand poses per person). This paper also presents the use of this dataset for kinship prediction using machine learning. Google MediaPipe was used for hand detection, segmentation, and geometrical key-point finding. Handcrafted feature extraction was used to extract 43 distinctive geometrical features from each image. A neural network classifier was designed and trained to predict kinship, yielding about 93% prediction accuracy. The results of this novel approach demonstrate that the hand possesses biometric characteristics that may be used to establish kinship, and that the suggested method is a promising kinship indicator.
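A simplified sketch of geometric feature extraction from hand keypoints follows; the landmark coordinates and the wrist-distance features below are illustrative assumptions, not the paper's 43 features:

```python
import numpy as np

def geometric_features(landmarks):
    """Distances from the wrist landmark (index 0) to every other
    landmark -- a simplified stand-in for handcrafted hand-geometry
    features computed from detected keypoints."""
    wrist = landmarks[0]
    return np.array([np.linalg.norm(p - wrist) for p in landmarks[1:]])

# Hypothetical normalized (x, y) keypoints: wrist, thumb tip, index tip
hand = np.array([[0.5, 0.9],
                 [0.3, 0.6],
                 [0.5, 0.3]])
print(geometric_features(hand))
```

In the full pipeline, a keypoint detector such as MediaPipe supplies the landmarks, the 43-dimensional feature vector replaces this toy one, and the neural network classifier consumes it.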

Article
A Survey on Segmentation Techniques for Image Processing

Wala’a N. Jasim, Rana Jassim Mohammed

Pages: 73-93

PDF Full Text
Abstract

The segmentation methods for image processing are studied in the presented work. Image segmentation can be defined as a vital step in digital image processing, used in various applications including object co-segmentation, recognition tasks, medical imaging, content-based image retrieval, object detection, machine vision, and video surveillance. Many approaches have been created for image segmentation; the main goal of segmentation is to facilitate and alter the image representation into something more meaningful and simpler to analyze. Image segmentation approaches split an image into parts on the basis of image features such as texture, color, and pixel intensity value. In the presented study, many image segmentation approaches are reviewed and discussed. The techniques can be categorized into six classes. First, thresholding techniques, comprising global thresholding (iterative thresholding, minimum error thresholding, Otsu's method, optimal thresholding, histogram concave analysis, and entropy-based thresholding), local thresholding (Sauvola's approach, T. R. Singh's approach, Niblack's approach, Bernsen's approach, Yanowitz and Bruckstein's method, and local adaptive automatic binarization), and dynamic thresholding. Second, edge-based techniques such as the gray-histogram technique and gradient-based approaches (Laplacian of Gaussian, the differential coefficient approach, and the Canny, Prewitt, Roberts, and Sobel approaches). Third, region-based approaches, including region growing techniques (seeded region growing (SRG), statistical region growing, and unseeded region growing (UsRG)) as well as region splitting and merging approaches. Fourth, clustering approaches, including soft clustering (fuzzy C-means (FCM)) and hard clustering (K-means).
Fifth, deep neural network techniques such as convolutional neural networks, recurrent neural networks (RNNs), encoder-decoder and autoencoder models, and support vector machines. Finally, hybrid techniques such as evolutionary approaches, fuzzy logic, and swarm intelligence (PSO and ABC techniques). The pros and cons of each method are discussed.
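Of the global thresholding techniques surveyed, Otsu's method is compact enough to sketch directly (a plain implementation of its between-class-variance criterion):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's global threshold: pick the gray level that maximizes the
    between-class variance of the foreground/background split."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                        # normalized histogram
    best_t, best_var = 0, 0.0
    for thr in range(1, 256):
        w0, w1 = p[:thr].sum(), p[thr:].sum()    # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(thr) * p[:thr]).sum() / w0        # class means
        mu1 = (np.arange(thr, 256) * p[thr:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, thr
    return best_t

# Bimodal toy image: dark blob on a bright background
img = np.full((10, 10), 200, dtype=np.uint8)
img[3:7, 3:7] = 30
t = otsu_threshold(img)
print(t)                                         # a level between the two modes
```

The same histogram-scanning pattern underlies several of the other global techniques listed; they differ mainly in the criterion maximized or minimized at each candidate level.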

Iraqi Journal for Electrical and Electronic Engineering

College of Engineering, University of Basrah

Licensing & Open Access

Licensed under CC-BY-4.0

This journal provides immediate open access to its content.


Peer-review powered by Elsevier’s Editorial Manager®

Copyright © 2025 College of Engineering, University of Basrah. All rights reserved, including those for text and data mining, AI training, and similar technologies.