Independent Component Analysis (ICA) has been successfully applied to a variety of problems, from speaker identification and image processing to functional magnetic resonance imaging (fMRI) of the brain. In particular, it has been applied to analyze EEG data in order to estimate the sources from the measurements. However, it soon became clear that for EEG signals the solutions found by ICA often depend on the particular ICA algorithm, and that the solutions may not always have a physiologically plausible interpretation. Therefore, many researchers nowadays use ICA largely for artifact detection and removal from EEG, but not for the actual analysis of signals from cortical sources. However, a recent modification of an ICA algorithm has been applied successfully to resting-state EEG signals. The key idea was to perform a particular preprocessing step and then apply a complex-valued ICA algorithm. In this paper, we consider multiple complex-valued ICA algorithms and compare their performance on real-world resting-state EEG data. Such a comparison is problematic because the way the original sources are mixed (the "ground truth") is not known. We address this by developing proper measures to compare the results from multiple algorithms. The comparisons consider the ability of an algorithm to find interesting independent sources, i.e. those related to brain activity rather than artifact activity. The performance of locating a dipole for each separated independent component is considered in the comparison as well. Our results suggest that when complex-valued ICA algorithms are applied to preprocessed signals, resting-state EEG activity can be analyzed in terms of physiological properties. This reestablishes the suitability of ICA for EEG analysis beyond the detection and removal of artifacts with real-valued ICA applied to time-domain signals.
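To make the preprocessing-plus-complex-ICA idea concrete, here is a minimal sketch, assuming analytic-signal (Hilbert transform) preprocessing and a one-unit complex FastICA fixed-point update in the style of Bingham and Hyvärinen; the channel count, nonlinearity, and whitening step are illustrative choices, not the exact algorithms compared in the paper.

```python
import numpy as np
from scipy.signal import hilbert

def whiten(Z):
    """Zero-mean and whiten complex data Z of shape (n_channels, n_samples)."""
    Zc = Z - Z.mean(axis=1, keepdims=True)
    C = (Zc @ Zc.conj().T) / Zc.shape[1]
    d, E = np.linalg.eigh(C)
    return (E @ np.diag(d ** -0.5) @ E.conj().T) @ Zc

def complex_fastica_one_unit(X, n_iter=200, eps=1e-6):
    """One-unit complex FastICA fixed-point iteration on whitened data X."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(X.shape[0]) + 1j * rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    g = lambda u: 1.0 / (2.0 * np.sqrt(eps + u))        # nonlinearity G'(u)
    dg = lambda u: -1.0 / (4.0 * (eps + u) ** 1.5)      # G''(u)
    for _ in range(n_iter):
        y = w.conj() @ X                                # current source estimate
        u = np.abs(y) ** 2
        w_new = (X * (y.conj() * g(u))).mean(axis=1) - (g(u) + u * dg(u)).mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new.conj() @ w) - 1.0) < 1e-10:    # converged up to phase
            return w_new
        w = w_new
    return w

rng = np.random.default_rng(1)
eeg = rng.standard_normal((8, 5000))      # stand-in for (channels, samples) EEG
Z = whiten(hilbert(eeg, axis=1))          # analytic-signal (Hilbert) preprocessing
w = complex_fastica_one_unit(Z)           # one separating filter; deflate for more
```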
The main problem of a line follower robot is how to make the mobile robot follow a desired path (a line drawn on the floor) smoothly and accurately in the shortest time. In this paper, the design and implementation of a complex line follower mission is presented using the Matlab Simulink toolbox. The motion of the mobile robot on the complex path is simulated using a robot simulator programmed in Matlab to design and test the performance of the proposed line follower algorithm and the designed PID controller. Due to the complexity of selecting the parameters of the PID controller, the Particle Swarm Optimization (PSO) algorithm is used to select and tune the parameters of the designed PID controller. Five infrared (IR) sensors are used to collect information about the location of the mobile robot with respect to the desired path (a black line). Depending on the collected information, the steering angle of the mobile robot is controlled to keep the robot on the desired path by controlling the speed of the actuators (two DC motors). The obtained simulation results show that the motion of the mobile robot remains stable even when complex maneuvers are performed. The hardware design of the robot system is realized using the Arduino Mobile Robot (AMR). The Simulink Support Package for Arduino and the control system toolbox are used to program the AMR. The practical results show that the performance of the real mobile robot matches that of the simulated mobile robot.
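As an illustration of the tuning loop described above, here is a minimal PSO-over-PID sketch in Python; the first-order stand-in plant, the ITAE-like cost, and all swarm constants are hypothetical placeholders for the paper's Matlab robot simulator.

```python
import numpy as np

def simulate_pid(gains, setpoint=1.0, dt=0.01, steps=500):
    """Illustrative first-order plant under PID control; returns an ITAE-like cost."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for k in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                 # stand-in plant dynamics dy/dt = -y + u
        prev_err = err
        cost += (k * dt) * abs(err) * dt   # time-weighted absolute error
    return cost

def pso(cost_fn, n_particles=20, n_iter=50, bounds=(0.0, 10.0)):
    """Basic global-best PSO over the three PID gains."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, 3))        # particle positions = PID gains
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost_fn(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([cost_fn(p) for p in x])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

kp, ki, kd = pso(simulate_pid)
print("tuned gains:", kp, ki, kd)
```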
In this paper, new semi-empirical formulas are developed to evaluate the variation of both the real and imaginary parts of soil complex permittivity with depth below the earth's surface. Values computed using these models show good agreement with published measured values for soils of the same texture over the same frequency band. These models may yield more accurate results, especially in ground probing radar (GPR) applications and other applications related to the detection of buried objects, where the use of a single average value of the soil complex permittivity does not necessarily lead to accurate results for the electromagnetic fields propagated inside the earth's surface.
This work aims to gain insight into the complex biological endocrine glucose-insulin regulatory system, where the interactions among components of the metabolic system and the time delays inherent in biological systems give rise to complex dynamics. Such modeling has attracted increasing interest and importance in physiological research and has enhanced medical treatment protocols. This brief presents a new model using delay differential equations, which gives accurate results by utilizing two explicit time delays. A bifurcation analysis has been conducted to find the bifurcation values of the main system parameters and the corresponding system behaviors. The results are consistent with biological experimental results.
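Since the abstract does not state the model equations, the following sketch shows only the numerical pattern involved: fixed-step Euler integration of a hypothetical two-delay glucose-insulin system using history buffers. The right-hand sides and parameters are illustrative, not the paper's model.

```python
import numpy as np

# Hypothetical two-delay glucose-insulin form (not the paper's exact equations):
#   dG/dt = Gin - a*G(t) - b*G(t)*I(t - tau2)
#   dI/dt = c*G(t - tau1) - d*I(t)
def integrate_dde(tau1=5.0, tau2=15.0, dt=0.01, t_end=500.0,
                  Gin=1.0, a=0.05, b=0.02, c=0.03, d=0.1):
    n = int(t_end / dt)
    lag1, lag2 = int(tau1 / dt), int(tau2 / dt)
    G = np.full(n, 4.0)   # history buffers; constant initial history
    I = np.full(n, 1.0)
    for k in range(max(lag1, lag2), n - 1):
        G[k + 1] = G[k] + dt * (Gin - a * G[k] - b * G[k] * I[k - lag2])
        I[k + 1] = I[k] + dt * (c * G[k - lag1] - d * I[k])
    return G, I

G, I = integrate_dde()   # sweep tau1, tau2 to locate bifurcation values
```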
This review article puts forward the phenomenon of chaotic oscillation in electrical power systems. The aim is to present short summaries of work by distinguished researchers in the field of chaotic oscillation in power systems. The reviewed papers are classified according to the phenomena that cause chaotic oscillations in electrical power systems. Modern electrical power systems are evolving day by day from small networks toward large-scale grids. Electrical power systems are composed of multiple interlinked elements, such as synchronous generators, transformers, transmission lines, linear and nonlinear loads, and many other devices. Most of these components are inherently nonlinear, rendering the whole electrical power system a complex nonlinear network. Nonlinear systems can exhibit very complex dynamics, such as static and dynamic bifurcations, and may also behave chaotically. Chaos in electrical power systems is highly undesirable, as it can drive bus voltages to instability, lead to voltage collapse, and ultimately cause a general blackout.
Due to their vital applications in many real-world situations, researchers continue to propose numerous methods for better analysis of motor imagery (MI) electroencephalograph (EEG) signals. In general, however, EEG signals are complex because of their nonstationarity and high dimensionality. Therefore, careful consideration is needed in both feature extraction and classification. In this paper, several hybrid classification models are built and their performance is compared. Three well-known mother wavelet functions are used to generate scalograms from the raw signals. The scalograms are used for transfer learning of the well-known VGG-16 deep network, and then one of six classifiers determines the class of the input signal. The performance of different combinations of mother functions and classifiers is compared on two MI EEG datasets. Several evaluation metrics show that a model combining the VGG-16 feature extractor with a neural network classifier, using the Amor mother wavelet function, outperforms the results of state-of-the-art studies.
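A rough sketch of the scalogram-plus-VGG-16 pipeline is given below, assuming PyWavelets and Keras; 'cmor1.5-1.0' stands in for Matlab's analytic Morlet ('amor'), and the resizing is a crude nearest-neighbour placeholder rather than the paper's exact image preparation.

```python
import numpy as np
import pywt
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

def scalogram_image(signal, fs, size=224):
    """CWT scalogram of a 1-D EEG trial, resized to a 3-channel VGG input."""
    scales = np.arange(1, size + 1)
    coeffs, _ = pywt.cwt(signal, scales, 'cmor1.5-1.0', sampling_period=1.0 / fs)
    mag = np.abs(coeffs)
    rows = np.linspace(0, mag.shape[0] - 1, size).astype(int)   # crude resize
    cols = np.linspace(0, mag.shape[1] - 1, size).astype(int)
    img = mag[np.ix_(rows, cols)]
    img = 255.0 * (img - img.min()) / (img.max() - img.min() + 1e-12)
    return np.stack([img] * 3, axis=-1)     # replicate to 3 channels for VGG

vgg = VGG16(weights='imagenet', include_top=False, pooling='avg')  # frozen extractor

def extract_features(trials, fs=250):
    imgs = np.stack([scalogram_image(t, fs) for t in trials])
    return vgg.predict(preprocess_input(imgs))   # (n_trials, 512) feature vectors
```

The resulting feature vectors would then be fed to any of the six downstream classifiers compared in the paper.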
Epilepsy, a neurological disorder characterized by recurring seizures, necessitates early and precise detection for effective management. Deep learning techniques have emerged as powerful tools for analyzing complex medical data, specifically electroencephalogram (EEG) signals, advancing epileptic detection. This review comprehensively presents cutting-edge methodologies in deep learning-based epileptic detection systems. It begins with an overview of epilepsy's fundamental concepts and their implications for individuals and healthcare. The review then delves into deep learning principles and their application to processing EEG signals. Diverse research papers covering the main architectures (convolutional neural networks, recurrent neural networks, and hybrid models) are investigated, emphasizing their strengths and limitations in detecting epilepsy. Preprocessing techniques for improving EEG data quality and reliability, such as noise reduction, artifact removal, and feature extraction, are discussed. Common performance evaluation metrics for epileptic detection, such as accuracy, sensitivity, specificity, and area under the curve, are presented. The review anticipates future directions by highlighting challenges such as dataset size and diversity, model interpretability, and integration with clinical decision support systems. Finally, it demonstrates how deep learning can improve the precision, efficiency, and accessibility of early epileptic diagnosis. This advancement allows for more timely interventions and personalized treatment plans, potentially revolutionizing epilepsy management.
It is not easy to implement the mixed H2/H∞ optimal controller for a high-order system, since in the conventional mixed H2/H∞ optimal feedback design the order of the controller is much higher than that of the plant. This difficulty has been addressed by using a structure-specified PID controller. The merit of PID controllers comes from their simple structure, their suitability for industrial processes, and a certain degree of robustness. Even so, it is hard for a PID controller alone to cope with complex control problems such as uncertainty and disturbance effects. Current approaches suggest combining modern control theories with the PID controller to handle these complicated control problems. One such idea is presented in this paper: tuning the PID parameters to achieve mixed H2/H∞ optimal performance by using an Intelligent Genetic Algorithm (IGA). A simple modification is added to the IGA in this paper to speed up the optimization search process. Two MIMO examples, each with a different control problem, are used in the investigation.
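For illustration, a plain real-coded GA for tuning the three PID gains might look as follows; the selection, crossover, and mutation operators here are generic, and the paper's IGA additionally uses an intelligent (orthogonal-array-based) crossover that is not reproduced. Any closed-loop cost, for example a mixed H2/H∞ objective evaluated on the plant model, can serve as cost_fn.

```python
import numpy as np

def genetic_tune(cost_fn, pop_size=30, n_gen=60, bounds=(0.0, 10.0),
                 rng=np.random.default_rng(3)):
    """Plain real-coded GA over three PID gains (a simplified stand-in for IGA)."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, 3))
    for _ in range(n_gen):
        cost = np.array([cost_fn(p) for p in pop])
        parents = pop[np.argsort(cost)[:pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(3)
            child = w * a + (1 - w) * b                   # arithmetic crossover
            child += rng.normal(0, 0.2, 3)                # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    cost = np.array([cost_fn(p) for p in pop])
    return pop[cost.argmin()]
```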
Brain tumors are collections of abnormal tissue within the brain. The regular function of the brain may be affected as a tumor grows within the region of the skull. Early and accurate detection of brain tumors is critical for improving treatment options and patient survival rates. Diagnosing cancer manually from numerous magnetic resonance imaging (MRI) images is a complex and time-consuming task, so brain tumor segmentation should be carried out automatically. A strategy for brain tumor segmentation is developed in this paper: images are segmented using both region-based and edge-based approaches. The Brain Tumor Segmentation 2020 (BraTS2020) dataset is utilized in this study. A comparative analysis of edge-based and region-based segmentation is performed using a U-Net architecture with a ResNet50 encoder. The edge-based segmentation model performed better on all performance metrics than the region-based segmentation model, achieving a Dice loss of 0.008768, an IoU score of 0.7542, an F1 score of 0.9870, an accuracy of 0.9935, a precision of 0.9852, a recall of 0.9888, and a specificity of 0.9951.
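As a sketch of the segmentation backbone being compared, the segmentation_models_pytorch package (an assumption; the paper does not name its framework) can instantiate a U-Net with a ResNet50 encoder in a few lines; the Dice-plus-BCE loss is a common choice, not necessarily the paper's exact setup.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with a ResNet50 encoder, pre-trained on ImageNet, one tumor-mask output.
model = smp.Unet(encoder_name="resnet50", encoder_weights="imagenet",
                 in_channels=3, classes=1)
dice = smp.losses.DiceLoss(mode="binary")
bce = torch.nn.BCEWithLogitsLoss()

def loss_fn(logits, masks):
    return dice(logits, masks) + bce(logits, masks)

# BraTS slices (240x240) padded to 256x256 so the encoder strides divide evenly.
x = torch.randn(2, 3, 256, 256)
logits = model(x)                        # (2, 1, 256, 256) tumor logits
```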
Nowadays, it is difficult to imagine cryptanalyzing and attacking a powerful cryptographic algorithm without the use of unconventional techniques. Although some substitution algorithms are old, such as the Vigenère, Alberti, and Trithemius ciphers, they are still considered powerful and hard to break. In this paper, we present a novel algorithm that uses biological computation as an unconventional search tool, combined with an analysis method, the vertical probabilistic model, which makes attacking and analyzing these ciphers feasible and transforms the problem from a complex one into a linear one. The letters of the encoded message are processed in segments of equal length to suit the available hardware components. Each letter codon represents a region of the memory strand, and the letters computed for it are represented within the probabilistic model so that each pair has a triple encoding: the first is given as a memory-strand encoding, and the others are its complements in the sticker encoding; these encodings differ from one region to another. The solution space is calculated, and then the parallel search process begins. Some memory complexes are excluded, even though they lie on the solution paths formed, because the natural language does not contain their sequences. The precision of the solution and the time required to reach it depend on the length of the processed text, and the precision is often inversely proportional to the speed of reaching it. On average, a text of 200 cipher characters needs approximately 15 minutes to yield 98% of the correct components of the specified hardware. The aim of the paper is to transform OTP substitution analysis from an NP problem into an O(nm) problem, which makes it easier to find solutions with the available capabilities and to develop methods that can be harnessed to attack difficult and powerful ciphers that differ in class and type from OTP polyalphabetic substitution ciphers.
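The DNA sticker-model search itself cannot be reproduced in a few lines of conventional code, but as a classical analogue of the 'vertical' (column-wise) probabilistic analysis, the following sketch recovers a Vigenère key of known length by chi-squared scoring of each column against English letter frequencies.

```python
ENGLISH_FREQ = [0.082, 0.015, 0.028, 0.043, 0.127, 0.022, 0.020, 0.061, 0.070,
                0.002, 0.008, 0.040, 0.024, 0.067, 0.075, 0.019, 0.001, 0.060,
                0.063, 0.091, 0.028, 0.010, 0.024, 0.002, 0.020, 0.001]

def chi_squared(column, shift):
    """Score a candidate key letter (shift) against English letter frequencies."""
    counts = [0] * 26
    for ch in column:
        counts[(ord(ch) - 65 - shift) % 26] += 1
    total = len(column)
    return sum((counts[i] - total * ENGLISH_FREQ[i]) ** 2 / (total * ENGLISH_FREQ[i])
               for i in range(26))

def recover_key(ciphertext, key_len):
    """Recover a Vigenere key of known length by per-column frequency analysis."""
    text = [c for c in ciphertext.upper() if c.isalpha()]
    key = ""
    for j in range(key_len):
        column = text[j::key_len]                    # the j-th 'vertical' segment
        best = min(range(26), key=lambda s: chi_squared(column, s))
        key += chr(65 + best)
    return key
```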
Pre-processing is very useful in a variety of situations, since it helps to suppress information that is not related to the exact image processing or analysis task. Mathematical morphology is used for analysis, understanding, and image processing. It is an influential method in geometric morphological analysis and image understanding, and it has become a new theory in the digital image processing domain. Edge detection and noise reduction are crucial pre-processing steps. Classical edge detection and filtering methods are less accurate in detecting complex edges and filtering various types of noise. This paper proposes some useful mathematical morphology techniques to detect edges and filter noise in metal part images. The experimental results show that the proposed algorithm helps to increase the accuracy of a metal parts inspection system.
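As a concrete example of the kind of operators involved, the following OpenCV sketch denoises a metal-part image by opening and closing and then extracts edges with a morphological gradient; the structuring-element shape and size, and the file name, are illustrative rather than the paper's exact design.

```python
import cv2

img = cv2.imread("metal_part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

# Noise filtering: opening removes bright speckle, closing fills dark pits.
denoised = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
denoised = cv2.morphologyEx(denoised, cv2.MORPH_CLOSE, kernel)

# Morphological gradient (dilation minus erosion) highlights object edges.
edges = cv2.morphologyEx(denoised, cv2.MORPH_GRADIENT, kernel)
```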
In this paper, a new nonlinear dynamic system, a three-dimensional fractional-order complex chaotic system, is presented. This new system can display hidden chaotic attractors or self-excited chaotic attractors. The dynamic behaviors of this system have been examined analytically and numerically. Different means, including the equilibria, chaotic attractor phase portraits, the Lyapunov exponents, and bifurcation diagrams, are investigated to show the chaotic behavior of this new system. Also, a synchronization technique between two identical new systems has been developed in a master-slave configuration, and the two identical systems synchronize quickly. Furthermore, the master-slave synchronization is applied in a secure communication scheme based on the chaotic masking technique. In this application, the message is encrypted and transmitted with high security on the transmitter side, while the original message is recovered with high accuracy on the receiver side. The corresponding numerical simulation results prove the efficacy and practicability of the developed synchronization technique and its application.
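The paper's fractional-order complex system is not specified in the abstract, so the sketch below demonstrates the chaotic-masking application with an ordinary integer-order Lorenz transmitter-receiver pair (in the Cuomo-Oppenheim style) as a stand-in; the message amplitude and Euler step size are illustrative.

```python
import numpy as np

def lorenz_masking(message, dt=0.001, sigma=10.0, r=28.0, b=8 / 3):
    """Chaotic masking and recovery with a synchronized Lorenz pair."""
    n = len(message)
    x = np.array([1.0, 1.0, 1.0])     # transmitter state
    xr = np.array([0.5, 0.5, 0.5])    # receiver state (different initial condition)
    recovered = np.empty(n)
    for k in range(n):
        s = x[0] + message[k]                        # masked transmitted signal
        dx = np.array([sigma * (x[1] - x[0]),
                       r * x[0] - x[1] - x[0] * x[2],
                       x[0] * x[1] - b * x[2]])
        dxr = np.array([sigma * (xr[1] - xr[0]),
                        r * s - xr[1] - s * xr[2],   # receiver driven by s
                        s * xr[1] - b * xr[2]])
        x += dt * dx
        xr += dt * dxr
        recovered[k] = s - xr[0]                     # unmask once synchronized
    return recovered

t = np.arange(0, 20, 0.001)
m = 0.01 * np.sin(2 * np.pi * t)      # small-amplitude message riding on the chaos
m_hat = lorenz_masking(m)
```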
The decline in the marketing volume of Rabobank Group ICT is a serious incident, as it can hinder the implementation of an increasing number of software releases for business development. The Service Desk Agent records the activities that occur, in the form of an event log, to identify the problems experienced. Process mining can be used to generate process model visualizations based on event logs to explicitly monitor the business. The Fuzzy Miner and Heuristic Miner algorithms can be used to handle complex event logs. In this study, an analysis of the Rabobank Group ICT incidents was carried out with process mining using the Fuzzy Miner and Heuristic Miner algorithms. Process mining is done through discovery, conformance, and enhancement. Based on the results of the study, the division of work areas is found to be unbalanced: one team works on a large number of events while other teams work on only one event. Therefore, a clear and balanced division of domains and workloads is needed so that incidents do not recur.
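For the Heuristic Miner side, the pm4py library offers discovery and conformance in a few calls (the Fuzzy Miner is available in tools such as ProM and Disco rather than pm4py); the log file name, the threshold value, and the returned fitness key below are assumptions based on pm4py's simplified interface.

```python
import pm4py

# Hypothetical file name; the Rabobank ITIL incident log is distributed as a
# BPI Challenge XES file.
log = pm4py.read_xes("rabobank_incidents.xes")

# Heuristic Miner: discover a heuristics net and a Petri net from the log.
heu_net = pm4py.discover_heuristics_net(log, dependency_threshold=0.9)
net, im, fm = pm4py.discover_petri_net_heuristics(log)

# Conformance checking by token-based replay scores how well the model fits.
fitness = pm4py.fitness_token_based_replay(log, net, im, fm)
print(fitness["log_fitness"])
```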
The rapid progress in mobile computing necessitates energy-efficient solutions to support substantially diverse and complex workloads. Heterogeneous many-core platforms are progressively being adopted in contemporary embedded implementations for high performance at low power cost. These implementations experience diverse workloads that offer significant opportunities to improve energy efficiency. In this paper, we propose a novel per-core power gating (PCPG) approach based on workload classification (WLC) for substantial energy cost minimization in the dark silicon era. The core of our paradigm is integrated sleep-mode management based on workload classes indicated by performance counters. A set of real application benchmarks (PARSEC) is adopted as a practical example of diverse workloads, including memory- and CPU-intensive ones. These applications are exercised on a Samsung Exynos 5422 heterogeneous many-core system, showing up to 37% and 110% better energy efficiency when compared with our most recent published work and the ondemand governor, respectively. Furthermore, we present a low-complexity, low-cost runtime per-core power gating algorithm that consistently maximizes IPS/Watt across the whole state space.
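A toy version of the policy logic is sketched below, with hypothetical counter thresholds and the Linux CPU-hotplug sysfs interface (which requires root) standing in for the platform-specific gating mechanism; the big-core numbering matches the usual Exynos 5422 layout but is still an assumption.

```python
def classify(ipc, mem_ratio):
    """Toy workload classifier from performance-counter ratios (thresholds hypothetical)."""
    if ipc < 0.5 and mem_ratio > 0.3:
        return "memory-bound"
    return "cpu-bound"

def set_core_online(core, online):
    """Per-core gating via the Linux CPU hotplug interface (needs root)."""
    with open(f"/sys/devices/system/cpu/cpu{core}/online", "w") as f:
        f.write("1" if online else "0")

def policy_step(counters, big_cores=(4, 5, 6, 7)):
    """Gate the big cores when the workload is memory-bound, since extra compute
    cores then add power without adding instructions per second."""
    kind = classify(counters["ipc"], counters["mem_ratio"])
    for core in big_cores:
        set_core_online(core, online=(kind == "cpu-bound"))
```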
The conventional multilevel inverter (MLI) is divided into three types: diode-clamped MLI, cascaded H-bridge MLI, and flying-capacitor MLI. The main disadvantage of these types is the larger number of components required as the number of levels increases, which results in more switching losses, higher system cost, a more complex control circuit, and lower accuracy. The work in this paper proposes two topologies of nonconventional diode-clamped MLI: three-phase nine-level and eleven-level. The first proposed topology has ten switches and six diodes per phase, while the second topology has nine switches and four diodes per phase. The pulse width modulation (PWM) control method is used to drive the gate switches. The THD of the two proposed topologies is analyzed and calculated for different values of the modulation index, and the power loss and efficiency are obtained and plotted.
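THD is computed from the spectrum of the output waveform; a minimal sketch follows, with a quantized sine standing in for an actual nine-level PWM output.

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=20):
    """Total harmonic distortion of an inverter output from its FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    def mag_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]
    fund = mag_at(f0)
    harmonics = [mag_at(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(sum(h ** 2 for h in harmonics)) / fund

# Illustrative staircase waveform standing in for a nine-level inverter output:
fs, f0 = 50000, 50
t = np.arange(0, 0.2, 1 / fs)
levels = np.round(4 * np.sin(2 * np.pi * f0 * t)) / 4   # quantized to nine levels
print(f"THD = {100 * thd(levels, fs, f0):.2f}%")
```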
The advancements in modern-day computing and architectures focus on harnessing parallelism to achieve high-performance computing, resulting in the generation of massive amounts of data. The information produced needs to be represented and analyzed to address various challenges in technology and business domains. Radical expansion and integration of digital devices, networking, data storage, and computation systems are generating more data than ever. Data sets are massive and complex; hence, traditional learning methods cannot rescue researchers, which has in turn led to the adoption of machine learning techniques to mine the information hidden in unseen data. Interestingly, deep learning finds its place in big data applications. One of the major advantages of deep learning is that it is not human-engineered. In this paper, we look at various machine learning algorithms that have already been applied to big data problems and have shown promising results. We also look at deep learning as a rescue and a solution to big data issues that are not efficiently addressed using traditional methods. Deep learning is finding its place in most applications where the critical and dominating 5Vs of big data are encountered and is expected to perform better.
Detecting pulmonary cancers at early stages is difficult but crucial for patient survival. Therefore, it is essential to develop an intelligent, autonomous, and accurate lung cancer detection system that shows great reliability compared to previous systems and research. In this study, we have developed an innovative lung cancer detection system known as the Hybrid Lung Cancer Stage Classifier and Diagnosis Model (Hybrid-LCSCDM). This system simplifies the complex task of diagnosing lung cancer by categorizing patients into three classes: normal, benign, and malignant. It analyzes computed tomography (CT) scans using a two-part approach: first, feature extraction is conducted using a pre-trained model called VGG-16, which detects key features in lung CT scans indicative of cancer; second, these features are classified using a machine learning technique called XGBoost, which sorts the scans into the three categories. The IQ-OTH/NCCD - Lung Cancer dataset is used to train and evaluate the proposed model and show its effectiveness. The dataset covers the three aforementioned classes and contains 1190 images. Our suggested strategy achieved an overall accuracy of 98.54%, while the classification precision among the three classes was 98.63%. Considering the accuracy, recall, precision, and F1-score evaluation metrics, the results indicate that, when using solely computed tomography scans, the proposed Hybrid-LCSCDM model outperforms all previously published models.
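A minimal sketch of the two-part pipeline follows, assuming Keras for the VGG-16 feature extractor and the xgboost package for the classifier, with random stand-in arrays in place of the IQ-OTH/NCCD data.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from xgboost import XGBClassifier

vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")   # frozen extractor

def features(ct_images):
    """ct_images: (n, 224, 224, 3) arrays; grayscale CT slices are replicated
    to three channels before this call."""
    return vgg.predict(preprocess_input(ct_images.astype("float32")))

# Stand-in arrays; the real pipeline would load IQ-OTH/NCCD slices and labels
# (0 = normal, 1 = benign, 2 = malignant).
X_train = np.random.rand(32, 224, 224, 3) * 255
y_train = np.random.randint(0, 3, 32)

clf = XGBClassifier(objective="multi:softprob", n_estimators=300, max_depth=4)
clf.fit(features(X_train), y_train)
```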
Mathematical modeling is a very effective method to investigate the interaction between insulin and glucose. In this paper, a new mathematical model of the insulin-glucose regulation system is introduced based on the well-known Lotka-Volterra model. Previous studies have shown that chaos is a common property of complex biological systems. The results here are in accordance with previous ones, indicating that the insulin-glucose regulating system exhibits many different dynamics in different situations. The overall result of this paper may be helpful for a better understanding of the diabetes mellitus regulation system, including diseases such as hyperinsulinemia and Type 1 DM.
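Since the abstract does not give the fitted equations, the sketch below integrates a hypothetical Lotka-Volterra-style glucose-insulin pair with SciPy; the parameters and initial conditions are illustrative only, not the paper's values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical Lotka-Volterra-style interaction: glucose G plays the role of
# 'prey', insulin I the 'predator' (coefficients are stand-ins).
def rhs(t, s, a=1.0, b=0.4, c=0.3, d=0.5):
    G, I = s
    return [a * G - b * G * I,        # glucose input, consumed via insulin action
            c * G * I - d * I]        # insulin secretion driven by glucose, then cleared

sol = solve_ivp(rhs, (0, 50), [5.0, 1.0], dense_output=True, rtol=1e-8)
t = np.linspace(0, 50, 2000)
G, I = sol.sol(t)                     # trajectories to inspect for rich dynamics
```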
This paper presents a low-cost Brushless DC (BLDC) motor drive system with fewer switches. BLDC motors are widely utilized in variable speed drives and industrial applications due to their high efficiency, high power factor, high torque, low maintenance, and ease of control. The proposed control strategy for robust speed control depends on two feedback loops: a speed sensor loop regulated by a Sliding Mode Controller (SMC) and a current sensor loop regulated by a Proportional-Integral (PI) controller, boosting the drive system's adaptability. In this work, the BLDC motor is driven by a four-switch three-phase inverter emulating a six-switch three-phase inverter, to reduce switching losses with a low-complexity control strategy. To achieve robust performance of the proposed control strategy, the Lévy Flight Distribution (LFD) technique is used to tune the gains of the PI and SMC parameters, with the Integral Time Absolute Error (ITAE) as the fitness function. The simulation results show that the SMC with the LFD technique is superior to the conventional SMC and the optimized PI controller in terms of fast tracking of the desired value, reduction of the speed error to zero, and low overshoot under sudden-change conditions.
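The LFD technique relies on Lévy-distributed steps; a common way to draw them is Mantegna's algorithm, sketched here together with a hypothetical greedy tuning loop (the paper's full LFD optimizer has more structure than this).

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1, rng=np.random.default_rng()):
    """Levy-stable step via Mantegna's algorithm, the usual way Levy-flight
    optimizers perturb candidate solutions."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def tune(cost_fn, x0, n_iter=200, scale=0.1):
    """Greedy Levy-flight search over controller gains, keeping improvements."""
    x, best = np.asarray(x0, float), cost_fn(x0)
    for _ in range(n_iter):
        cand = x + scale * levy_step(size=x.shape[0])
        c = cost_fn(cand)
        if c < best:
            x, best = cand, c
    return x

# gains = tune(itae_cost, x0=[1.0, 0.5, 0.01])   # itae_cost: a closed-loop ITAE
# evaluation of the SMC/PI loops (hypothetical helper, not defined here)
```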
Fuzzy PID controller design is still a complex task due to the large number of parameters involved in defining the fuzzy rule base. To reduce the huge number of fuzzy rules required in a conventional fuzzy PID design, the fuzzy PID controller is represented as a Proportional-Derivative Fuzzy (PDF) controller and a Proportional-Integral Fuzzy (PIF) controller connected in parallel through a summer. The PIF controller design is simplified by replacing the PIF controller with a PDF controller with accumulated output. In this paper, the modified fuzzy PID controller design for a bench-top helicopter is presented. The proposed fuzzy PID controller is described using the Very High Speed Integrated Circuit Hardware Description Language (VHDL) and implemented on a Field Programmable Gate Array (FPGA) board. The bench-top helicopter is used to test the proposed controller, and the results are compared with a conventional PID controller and an Internal Model Control Tuned PID (IMC-PID) controller. Simulation results show that the modified fuzzy PID controller produces superior control performance to the other two controllers in handling the nonlinearity of the helicopter system. The output signal from the FPGA board is compared with the output of the modified fuzzy PID controller to show that the FPGA board behaves like the fuzzy PID controller. The results show that the plant responses with the FPGA board closely match the plant responses obtained with the simulation-software-based controller.
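The structural idea, a PDF branch in parallel with a PIF branch realized as a PDF block with accumulated output, can be sketched as follows; the crisp linear surrogate stands in for the actual two-input fuzzy inference, which the paper implements with membership functions and a rule base.

```python
def fuzzy_pd(e, de):
    """Stand-in for a two-input PDF inference block; a real design would use
    triangular membership functions and a rule table here."""
    return 0.8 * e + 0.2 * de          # crisp surrogate of the fuzzy control surface

class ModifiedFuzzyPID:
    """Fuzzy PID = PDF + PIF in parallel; the PIF block is realized as a PDF
    block whose output is accumulated, as described in the paper."""
    def __init__(self, dt):
        self.dt, self.acc, self.prev_e = dt, 0.0, 0.0

    def step(self, e):
        de = (e - self.prev_e) / self.dt
        self.prev_e = e
        u_pd = fuzzy_pd(e, de)                   # PDF branch
        self.acc += fuzzy_pd(e, de) * self.dt    # PIF branch: accumulated PDF output
        return u_pd + self.acc                   # parallel summer
```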
Path-planning is a crucial part of robotics, enabling robots to move autonomously through challenging environments. In this paper, we introduce an innovative approach to robot path-planning that combines the power of the Genetic Algorithm (GA) and the Probabilistic Roadmap (PRM) to enhance efficiency and reliability. Our method takes into account the challenges caused by moving obstacles, making it adept at navigating complex environments. By merging GA's exploration abilities with PRM's global planning strengths, our GA-PRM algorithm improves computational efficiency and finds optimal paths. To validate our approach, we conducted rigorous evaluations against well-known algorithms, including A*, RRT, the Genetic Algorithm, and PRM, in simulated environments. The results were remarkable: our GA-PRM algorithm outperformed existing methods, achieving an average path length of 25.6235 units and an average computational time of 0.6881 seconds, demonstrating its speed and effectiveness. Additionally, the paths generated were notably smoother, with an average smoothness value of 0.3133. These findings highlight the potential of the GA-PRM algorithm in real-world applications, especially in crucial sectors like healthcare, where efficient path-planning is essential. This research contributes significantly to the field of path-planning and offers valuable insights for the future design of autonomous robotic systems.
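A compact PRM baseline, on top of which a GA layer would mutate and recombine waypoints, might look like the following sketch using networkx; the workspace bounds, sampling counts, and circular obstacles are illustrative, not the paper's test environments.

```python
import numpy as np
import networkx as nx

def collision_free(p, q, obstacles):
    """Check segment p-q against circular obstacles (cx, cy, r) by sampling."""
    for t in np.linspace(0, 1, 20):
        pt = (1 - t) * p + t * q
        if any(np.hypot(*(pt - np.array(o[:2]))) < o[2] for o in obstacles):
            return False
    return True

def prm_path(start, goal, obstacles, n_samples=300, k=8, rng=np.random.default_rng(2)):
    """Probabilistic roadmap: sample, connect k nearest neighbours, run Dijkstra."""
    pts = [np.asarray(start, float), np.asarray(goal, float)]
    pts += [rng.uniform(0, 100, 2) for _ in range(n_samples)]
    G = nx.Graph()
    for i, p in enumerate(pts):
        nearest = sorted(range(len(pts)), key=lambda j: np.linalg.norm(pts[j] - p))
        for j in nearest[1:k + 1]:
            if collision_free(p, pts[j], obstacles):
                G.add_edge(i, j, weight=np.linalg.norm(pts[j] - p))
    idx = nx.shortest_path(G, 0, 1, weight="weight")
    return [pts[i] for i in idx]

# A GA layer (not shown) would then crossover and mutate these roadmap waypoints
# to shorten and smooth the initial path under moving-obstacle constraints.
path = prm_path((5, 5), (95, 95), obstacles=[(50, 50, 15)])
```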
In this paper, a model of a PI-speed-controlled, current-driven induction motor based on indirect field-oriented control (IFOC) is addressed. To assess the complex dynamics of the system, different dynamical properties, such as the stability of equilibrium points, bifurcation diagrams, the Lyapunov exponent spectrum, and phase portraits, are characterized. It is found that the induction motor model exhibits chaotic behavior when its parameters fall into a certain region. Small variations of the PI parameters and the load torque affect the dynamics and stability of this electric machine. A chaotic attractor has been observed, and the speed of the motor oscillates chaotically. Numerical simulation results validate the theoretical analysis.
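As an example of how one of the listed diagnostics is computed, the sketch below estimates the largest Lyapunov exponent by Benettin's two-trajectory method; the Rössler system is used as a stand-in because the abstract does not give the IFOC model equations.

```python
import numpy as np

def rossler(s, a=0.2, b=0.2, c=5.7):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def largest_lyapunov(f, x0, dt=0.01, n_steps=200000, d0=1e-8):
    """Benettin's method: evolve a reference and a perturbed trajectory,
    renormalize the separation each step, and average the log stretching rate."""
    x = np.asarray(x0, float)
    xp = x + d0 * np.array([1.0, 0.0, 0.0])
    acc = 0.0
    for _ in range(n_steps):
        x = x + dt * f(x)                  # simple Euler steps keep the sketch short
        xp = xp + dt * f(xp)
        d = np.linalg.norm(xp - x)
        acc += np.log(d / d0)
        xp = x + (d0 / d) * (xp - x)       # rescale perturbation back to d0
    return acc / (n_steps * dt)

print(largest_lyapunov(rossler, [1.0, 1.0, 1.0]))   # roughly 0.07 for these parameters
```

A positive value signals chaos; repeating the estimate over a sweep of the PI gains traces out the chaotic region described in the paper.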
Hand gesture recognition is a quickly developing field with many uses in human-computer interaction, sign language recognition, virtual reality, gaming, and robotics. This paper reviews different ways to model hands, such as vision-based, sensor-based, and data glove-based techniques. It emphasizes the importance of accurate hand modeling and feature extraction for capturing and analyzing gestures. Key features like motion, depth, color, shape, and pixel values and their relevance in gesture recognition are discussed. Challenges faced in hand gesture recognition include lighting variations, complex backgrounds, noise, and real-time performance. Machine learning algorithms are used to classify and recognize gestures based on extracted features. The paper emphasizes the need for further research and advancements to improve hand gesture recognition systems’ robustness, accuracy, and usability. This review offers valuable insights into the current state of hand gesture recognition, its applications, and its potential to revolutionize human-computer interaction and enable natural and intuitive interactions between humans and machines. In simpler terms, hand gesture recognition is a way for computers to understand what people are saying with their hands. It has many potential applications, such as allowing people to control computers without touching them or helping people with disabilities communicate. The paper reviews different ways to develop hand gesture recognition systems and discusses the challenges and opportunities in this area.
Unmanned aerial vehicles (UAVs) have enormously important applications in many fields. The Quanser three-degree-of-freedom (3-DOF) helicopter is a benchmark laboratory model for testing and validating various flight control algorithms. The elevation control of a 3-DOF helicopter is a complex task due to the system's nonlinearity, uncertainty, and strongly coupled dynamical model. In this paper, an RBF neural network model reference adaptive controller is used, employing the great approximation capability of the neural network to match the unknown nonlinearity in order to build a robust MRAC adaptive control algorithm. The control law and a stable neural network updating law are derived using Lyapunov theory.
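A scalar sketch of the RBF-MRAC structure is given below: the RBF term adaptively cancels the unknown nonlinearity while linear terms enforce model following, with the weight update law taken from a standard Lyapunov argument; the plant, reference model, and gains are illustrative, not the Quanser 3-DOF dynamics.

```python
import numpy as np

def rbf(x, centers, width=1.0):
    """Gaussian radial basis vector phi(x)."""
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * width ** 2))

# Scalar elevation-channel stand-in: plant dx/dt = f(x) + u with f unknown,
# reference model dxm/dt = -am*xm + am*r.
centers = np.linspace(-2.0, 2.0, 15).reshape(-1, 1)
W = np.zeros(15)                      # adaptive RBF weights, estimate of f(x)
gamma_, am, k, dt = 5.0, 2.0, 3.0, 0.001
x, xm = 0.0, 0.0

for step in range(50000):
    t = step * dt
    r = np.sin(0.5 * t)               # reference command
    f = 0.5 * x * abs(x)              # "unknown" plant nonlinearity (stand-in)
    phi = rbf(np.array([x]), centers)
    e = xm - x                        # model-following error
    u = -W @ phi + am * (r - x) + k * e   # cancel f-hat, then linear tracking terms
    x += dt * (f + u)                 # plant update
    xm += dt * (-am * xm + am * r)    # reference model update
    W += -dt * gamma_ * phi * e       # Lyapunov-derived weight update law
```

With the update law W-dot = -gamma*phi*e, the candidate V = e^2/2 + ||W-tilde||^2/(2*gamma) gives V-dot = -(am + k)*e^2 <= 0, which is the standard stability argument the paper invokes.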