As an artificial-intelligence control technique, an efficient approach to indoor robot localization is presented. In this paper, a new mathematical algorithm for a robot localization system using light sensors is proposed. The technique addresses the localization (position identification) problem using a grid of LEDs distributed uniformly in the environment and a mobile robot equipped with a multi-LDR sensor, of which only two LDRs are activated within the robot's visibility. The proposed method estimates the robot's position by drawing two virtual circles for each pair of LDR sensors; one circle is valid and the other is discarded according to several suggested equations. The center of the valid circle is taken as the robot's position. The new system is simulated in an environment with an (n*n) LED array, and the simulation results show good performance in the localization process.
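As a rough illustration of the two-circle construction described in the abstract, the short Python sketch below (with assumed grid coordinates and an assumed circle radius, neither taken from the paper) computes the two candidate centres of a circle passing through a pair of activated sensor points; one candidate would be kept by the validity equations and its centre used as the robot's position estimate.

import math

def candidate_circle_centers(p1, p2, r):
    """Return the two candidate centres of a circle of radius r passing
    through the two detected sensor points p1 and p2 (illustrative only)."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.dist(p1, p2)
    if d == 0 or d > 2 * r:
        raise ValueError("no circle of radius r passes through both points")
    # midpoint of the chord p1-p2
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    # distance from the chord midpoint to each candidate centre
    h = math.sqrt(r * r - (d / 2) ** 2)
    # unit vector perpendicular to the chord
    ux, uy = -(y2 - y1) / d, (x2 - x1) / d
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

# Example: two activated sensor points on a unit LED grid, assumed radius 1.0
c_a, c_b = candidate_circle_centers((2.0, 3.0), (3.0, 3.0), 1.0)
print(c_a, c_b)   # one of these centres is kept as the robot position estimate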
In this paper, a new technique for multi-robot localization in an unknown environment, called the leader-follower localization algorithm, is presented. The framework uses one robot that acts as a leader, while the other robots are followers distributed randomly in the environment. Each robot is equipped with an RP-LIDAR sensor to scan the environment and gather information about every other robot. This information is used by the leader to identify and localize each robot in the environment. The problem of non-visible robots is solved by comparing their distances to the leader, and the equal-distance robot problem is solved using a permutation algorithm. Several simulation scenarios with different positions and orientations are implemented on 3-7 robots to show the performance of the introduced technique.
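The equal-distance disambiguation can be illustrated with a brute-force permutation search, as in the hedged Python sketch below; the scanned and reported distance values are invented for the example, and the matching criterion (minimum total mismatch) is an assumption rather than the paper's exact rule.

import itertools

def match_followers(scanned, reported):
    """Find the assignment of the leader's scanned range readings to follower
    identities that minimises the total distance mismatch.
    scanned  - distances measured by the leader's lidar (one per detected robot)
    reported - dict {follower_id: distance to leader reported by that follower}"""
    ids = list(reported)
    best, best_err = None, float("inf")
    for perm in itertools.permutations(ids):
        err = sum(abs(s - reported[f]) for s, f in zip(scanned, perm))
        if err < best_err:
            best, best_err = perm, err
    return dict(zip(best, scanned))   # follower_id -> matched scan distance

# Example with three followers, two of which are nearly equidistant from the leader
print(match_followers([2.0, 2.02, 3.5], {"R1": 3.49, "R2": 2.01, "R3": 2.0}))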
A new algorithm for the localization and identification of multi-node systems is introduced in this paper. The algorithm is based on the idea of using a beacon equipped with a distance sensor and an IR sensor to calculate the location and determine the identity of each visible node during scanning. The beacon is fixed at the middle of the frame's bottom edge for a better view of the nodes. Any detected node starts to communicate with its neighboring nodes using the IR sensors distributed on its perimeter; this information is later used for the localization of invisible nodes. The performance of the algorithm is demonstrated through several simulations.
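A minimal Python sketch of the beacon's scan geometry is given below; the beacon position, sweep angle, and range value are illustrative assumptions used only to show how a visible node's absolute location could be recovered from one angle/distance sample.

import math

def node_position(beacon_xy, angle_deg, distance):
    """Convert one beacon scan sample (sweep angle + distance reading) into an
    absolute node position; the beacon is assumed fixed at the middle of the
    frame's bottom edge, sweeping 0-180 degrees (an illustrative assumption)."""
    bx, by = beacon_xy
    a = math.radians(angle_deg)
    return bx + distance * math.cos(a), by + distance * math.sin(a)

# Example: a beacon at (5, 0) on a 10x10 frame sees a node at 60 deg, 4 units away
print(node_position((5.0, 0.0), 60.0, 4.0))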
This paper proposes a low-cost Light Emitting Diode (LED) system with a novel arrangement that enables indoor multi-robot localization. The proposed system uses only a matrix of low-cost LEDs installed uniformly on the ground of an environment and a low-cost Light Dependent Resistor (LDR) mounted on the bottom of each robot for detection. The matrix of LEDs, driven by a modified binary search algorithm, serves as a set of active beacons. Each robot localizes itself based on the signals it receives from a group of neighboring LEDs. The minimum bounded circle algorithm is used to draw a virtual circle from the information collected from the neighboring LEDs, and the center of this circle represents the robot's location. The proposed system is practically implemented in an environment with a (16*16) matrix of LEDs. The experimental results show good performance in the localization process.
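The minimum bounded circle step can be sketched as follows; this brute-force Python version (checking every pair and triple of activated LED points) is an illustrative stand-in, not the implementation used in the paper.

import itertools, math

def _circle_two(p, q):
    # smallest circle with p and q on its boundary
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2), math.dist(p, q) / 2

def _circle_three(a, b, c):
    # circumcircle of three points (None if they are collinear)
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay) + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx) + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy), math.dist((ux, uy), a)

def min_enclosing_circle(points, eps=1e-9):
    """Brute-force minimum bounding circle of the activated neighbour LEDs."""
    candidates = [_circle_two(p, q) for p, q in itertools.combinations(points, 2)]
    candidates += [c for t in itertools.combinations(points, 3)
                   if (c := _circle_three(*t)) is not None]
    best = None
    for centre, r in candidates:
        if all(math.dist(centre, p) <= r + eps for p in points):
            if best is None or r < best[1]:
                best = (centre, r)
    return best   # (centre, radius); the centre is the robot's estimated location

# Example: four neighbour LEDs detected under the robot on a unit grid
print(min_enclosing_circle([(3, 3), (4, 3), (3, 4), (4, 4)]))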
In this paper, a new algorithm called table-based matching is proposed for the localization and orientation of multiple robots (nodes). The environment is provided with two distance sensors fixed on two beacons at the bottom corners of the frame. These beacons scan the environment, estimate the location and orientation of the visible nodes, and save the results in matrices that are later used to construct a visible-node table. This table is matched against a visible-robot table, which is constructed from the results of each robot scanning its neighbors with a distance sensor rotating through 360°; at this point, the location and identity of all visible nodes are known. The localization and orientation of invisible robots rely on matching further tables obtained from the information of the visible robots. Several simulations are carried out on different numbers of nodes to demonstrate the performance of the introduced algorithm.
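A simplified Python sketch of the table-matching idea is shown below; the tolerance, the table layout, and the example distances are assumptions introduced purely for illustration and do not reproduce the paper's tables.

def match_tables(beacon_table, robot_tables, tol=0.1):
    """Match anonymous nodes seen by the beacons with self-reported robot scans.
    beacon_table - {node_idx: sorted tuple of distances to its visible neighbours}
    robot_tables - {robot_id: sorted tuple of distances measured by that robot}"""
    identities = {}
    for node, ndists in beacon_table.items():
        for rid, rdists in robot_tables.items():
            if rid in identities.values() or len(ndists) != len(rdists):
                continue
            if all(abs(a - b) <= tol for a, b in zip(ndists, rdists)):
                identities[node] = rid   # anonymous node index -> robot identity
                break
    return identities

# Example: node 0 matches robot "B", node 1 matches robot "A"
print(match_tables({0: (1.0, 2.5), 1: (1.0, 3.1)},
                   {"A": (1.02, 3.08), "B": (0.98, 2.52)}))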
A new algorithm for multi-object recognition and localization is introduced in this paper. The algorithm deals with objects that have different reflectivity factors and colors that distinguish them from one another. Two beacons scan the multi-color objects using long-distance IR sensors to estimate their absolute locations; these two beacon nodes are placed at two corners of the environment. The objects are recognized by matching the locations of each object with respect to the two beacons. A look-up table containing distance information for objects of different colors is used to convert the long-distance IR sensor readings from voltage to distance units. The locations of invisible objects are computed using the absolute-locations method for invisible objects. The performance of the introduced algorithm is tested with several experimental scenarios implemented on colored objects.
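The voltage-to-distance conversion can be illustrated with a small colour-indexed look-up table and linear interpolation, as in the Python sketch below; the calibration values are invented for the example and do not come from the paper.

import bisect

# Illustrative calibration look-up table (voltage, distance_cm) per object colour
LUT = {
    "red":   [(0.4, 150.0), (1.0, 80.0), (2.0, 40.0), (3.0, 20.0)],
    "white": [(0.5, 150.0), (1.2, 80.0), (2.3, 40.0), (3.2, 20.0)],
}

def voltage_to_distance(colour, volts):
    """Convert a long-distance IR sensor reading to centimetres by linear
    interpolation in the colour-specific look-up table."""
    table = LUT[colour]
    volt_axis = [v for v, _ in table]
    i = bisect.bisect_left(volt_axis, volts)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (v0, d0), (v1, d1) = table[i - 1], table[i]
    return d0 + (d1 - d0) * (volts - v0) / (v1 - v0)

print(voltage_to_distance("red", 1.5))   # about 60 cm with the assumed table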
Independent Component Analysis (ICA) has been successfully applied to a variety of problems, from speaker identification and image processing to functional magnetic resonance imaging (fMRI) of the brain. In particular, it has been applied to analyze EEG data in order to estimate the sources from the measurements. However, it soon became clear that for EEG signals the solutions found by ICA often depend on the particular ICA algorithm, and that the solutions may not always have a physiologically plausible interpretation. Therefore, many researchers nowadays use ICA largely for artifact detection and removal from EEG, but not for the actual analysis of signals from cortical sources. However, a recent modification of an ICA algorithm has been applied successfully to resting-state EEG signals. The key idea was to perform a particular preprocessing and then apply a complex-valued ICA algorithm. In this paper, we consider multiple complex-valued ICA algorithms and compare their performance on real-world resting-state EEG data. Such a comparison is problematic because the way the original sources are mixed (the “ground truth”) is not known. We address this by developing proper measures to compare the results from multiple algorithms. The comparisons consider the ability of an algorithm to find interesting independent sources, i.e., those related to brain activity rather than to artifact activity. The performance of locating a dipole for each separated independent component is considered in the comparison as well. Our results suggest that when complex-valued ICA algorithms are used on preprocessed signals, resting-state EEG activity can be analyzed in terms of physiological properties. This re-establishes the suitability of ICA for EEG analysis beyond the detection and removal of artifacts with real-valued ICA applied to time-domain signals.
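A hedged Python sketch of the preprocessing idea (time-frequency transform followed by complex whitening) is given below; the STFT parameters and the synthetic data are assumptions, and an actual complex-valued ICA algorithm would subsequently be applied to the whitened matrix.

import numpy as np
from scipy.signal import stft

def preprocess_for_complex_ica(eeg, fs=250.0, nperseg=256):
    """Transform each EEG channel to the time-frequency domain with an STFT,
    flatten the complex coefficients, and whiten them.  A complex-valued ICA
    algorithm would then be run on the returned whitened matrix."""
    # eeg: array of shape (n_channels, n_samples), real-valued time series
    _, _, Z = stft(eeg, fs=fs, nperseg=nperseg)   # (n_channels, n_freqs, n_frames)
    X = Z.reshape(Z.shape[0], -1)                 # complex observations per channel
    X = X - X.mean(axis=1, keepdims=True)
    # whitening via the complex covariance matrix
    C = (X @ X.conj().T) / X.shape[1]
    vals, vecs = np.linalg.eigh(C)
    W = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.conj().T
    return W @ X                                  # whitened complex data

# Example with synthetic 8-channel "EEG"
rng = np.random.default_rng(0)
X_white = preprocess_for_complex_ica(rng.standard_normal((8, 5000)))
print(X_white.shape, X_white.dtype)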
With the recent developments of technology and the advances in artificial intelligence and machine learning techniques, it has become possible for robots to understand and respond to voice as part of Human-Robot Interaction (HRI). A voice-based interface robot can recognize speech information from humans so that it is able to interact more naturally with its human counterpart in different environments. In this work, a review of voice-based interfaces for HRI systems is presented. The review focuses on voice-based perception in HRI systems from three facets: feature extraction, dimensionality reduction, and semantic understanding. For feature extraction, numerous types of features have been reviewed in various domains, such as the time, frequency, cepstral (i.e., applying the inverse Fourier transform to the logarithm of the signal spectrum), and deep domains. For dimensionality reduction, subspace learning can be used to eliminate the redundancies of high-dimensional features by further processing the extracted features so that they better reflect their semantic information. For semantic understanding, the aim is to infer from the extracted features the objects or human behaviors involved. Numerous types of semantic understanding have been reviewed, such as speech recognition, speaker recognition, speaker gender detection, speaker gender and age estimation, and speaker localization. Finally, some existing issues with voice-based interfaces and recommendations for future work are outlined.
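As a small illustration of the cepstral-domain feature-extraction stage, the Python sketch below computes Mel-frequency cepstral coefficients for a synthetic signal; the use of librosa, the sampling rate, and the 13-coefficient choice are assumptions for the example, not details taken from the review.

import numpy as np
import librosa

# A 1-second synthetic tone stands in for a real voice recording
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
speech = 0.5 * np.sin(2 * np.pi * 220 * t)

# 13 Mel-frequency cepstral coefficients per frame, i.e. the cepstral-domain
# features discussed above (a common front-end choice for voice interfaces)
mfcc = librosa.feature.mfcc(y=speech.astype(np.float32), sr=sr, n_mfcc=13)
print(mfcc.shape)   # (13, n_frames)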
In this paper, a radial distribution feeder protection scheme against short-circuit faults is introduced. It is based on using the current signals measured at the substation to detect faults and obtain useful information about their types and locations. The discrete wavelet transform is exploited to extract important features from the measured signals so that better fault diagnosis can be achieved. The captured features are then used for fault detection, identification of the faulted phases (fault type), and fault location. In case of a fault, the detection scheme decides to trip a circuit breaker located at the feeder mains. This decision is based on criteria set to distinguish between the various system states in a reliable and accurate manner. The fault type and location are then predicted by exploiting the learning and generalization capabilities of cascade-forward neural networks. Useful information about the fault location is obtained by predicting the fault distance from the source as well as whether the fault lies on the main feeder or on one of the laterals. Tests of the proposed scheme show that fault detection is performed quickly and reliably from the viewpoint of power system protection relaying requirements, and that the scheme overcomes the complexities that the feeder structure imposes on the accuracy of identifying fault types and locations. All simulations and analyses are performed using the MATLAB R2016b software package.
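The wavelet-based feature-extraction stage can be sketched as below; the wavelet family, decomposition level, and toy current signals are assumptions chosen only to illustrate how detail-band energies could serve the detection criterion and feed the neural network.

import numpy as np
import pywt

def dwt_features(phase_current, wavelet="db4", level=3):
    """Decompose one phase-current window with the discrete wavelet transform
    and summarise each detail band by its energy."""
    coeffs = pywt.wavedec(phase_current, wavelet, level=level)
    return [float(np.sum(c ** 2)) for c in coeffs[1:]]   # detail-band energies

# Toy example: a clean 50 Hz current vs. one with a high-frequency disturbance
t = np.arange(0, 0.1, 1 / 10_000)                      # 10 kHz sampling
clean = np.sin(2 * np.pi * 50 * t)
faulty = clean + 0.4 * np.sin(2 * np.pi * 1500 * t) * (t > 0.05)
print("clean :", dwt_features(clean))
print("faulty:", dwt_features(faulty))
# A threshold on these energies plays the role of the detection criterion; the
# feature vector would then feed the cascade-forward neural network for fault
# type and location estimation.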
Self-driving cars have been a fundamental research subject in recent years; the ultimate goal is to completely replace the human driver with automated systems. At the same time, deep learning techniques have demonstrated strong performance and effectiveness in several areas. Self-driving cars have been deeply investigated in many areas, including object detection, localization, and activity recognition. This paper provides a deep learning approach that combines the benefits of a convolutional neural network (CNN) with dense layers. The approach learns from features extracted with linear discriminant analysis (LDA) combined with feature expansion techniques, namely: standard deviation, minimum, maximum, mode, variance, and mean. The presented approach proves successful on both training and testing data, achieving 100% accuracy in both cases.
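A minimal Python sketch of the feature expansion and LDA stage is given below; the toy data, the library calls, and the dimensionality choices are assumptions for illustration and are not the paper's implementation.

import numpy as np
from scipy import stats
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def expand_features(X):
    """Append per-sample statistics (std, min, max, mode, variance, mean)
    to each feature vector, as listed in the abstract."""
    mode = np.asarray(stats.mode(X, axis=1).mode).reshape(-1, 1)
    extra = np.hstack([X.std(1, keepdims=True), X.min(1, keepdims=True),
                       X.max(1, keepdims=True), mode,
                       X.var(1, keepdims=True), X.mean(1, keepdims=True)])
    return np.hstack([X, extra])

# Toy data standing in for frames from a driving dataset (an assumption)
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 32))
y = rng.integers(0, 3, 200)

# LDA reduces the expanded features before they would enter the CNN + dense model
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(expand_features(X), y)
print(X_lda.shape)   # (200, 2)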
Aerial images have very high resolution, and the automation of map generation and semantic segmentation of aerial images are challenging problems in semantic segmentation. The semantic segmentation process does not give precise details of remote sensing images due to the low resolution of the aerial imagery available. Hence, we propose a U-Net architecture to solve this problem. It is organized into two paths. The compression path (also called the encoder) is the first path and is used to capture the image's context; the encoder is simply a stack of convolutional and max-pooling layers. The symmetric expanding path (also called the decoder) is the second path and is used to enable exact localization through transposed convolutions. This task is commonly referred to as dense prediction. The model is thus an end-to-end fully convolutional network (FCN): it contains only convolutional layers and no dense (fully connected) layers, which allows it to accept images of any size. The performance of the model is evaluated by measuring the accuracy of the proposed U-Net method and comparing it with the accuracy of previous methods.
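A deliberately small Keras sketch of such an encoder-decoder U-Net is shown below; the depth, filter counts, and input size are assumptions kept small for readability rather than the configuration evaluated in the paper.

import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(input_shape=(256, 256, 3), n_classes=2):
    """A two-level U-Net sketch showing the contracting path, the
    transposed-convolution expanding path, and the skip connections."""
    inp = layers.Input(input_shape)

    # Encoder (compression path): convolutions + max pooling
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(c2)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Decoder (expanding path): transposed convolutions + skip connections
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    u2 = layers.Conv2D(32, 3, padding="same", activation="relu")(
        layers.Concatenate()([u2, c2]))
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(u2)
    u1 = layers.Conv2D(16, 3, padding="same", activation="relu")(
        layers.Concatenate()([u1, c1]))

    # Per-pixel class scores: dense prediction with a fully convolutional head
    out = layers.Conv2D(n_classes, 1, activation="softmax")(u1)
    return tf.keras.Model(inp, out)

tiny_unet().summary()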
Formation is an important task in the coordination of a group of mobile robots in a real environment. Multi-mobile-robot formations in global-knowledge environments are achieved using small robots with limited hardware capabilities. To perform formation, localization, orientation, path planning, and obstacle and collision avoidance must be accomplished. Finally, several static and dynamic strategies for polygon-shape formation are implemented. For these formations, minimizing the energy spent by the robots or the time needed to achieve the task has been investigated. These strategies complete the formation more efficiently, since they use the cluster matching algorithm instead of the triangulation algorithm.
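One way to sketch a cluster-matching style assignment of robots to formation vertices is via the Hungarian method, as in the Python example below; this is an illustrative substitute for the strategies described above, with invented robot and vertex coordinates.

import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_to_formation(robot_xy, target_xy):
    """Pair each robot with a polygon vertex so that the total travelled
    distance is minimised, using the Hungarian assignment method."""
    robots = np.asarray(robot_xy)
    targets = np.asarray(target_xy)
    cost = np.linalg.norm(robots[:, None, :] - targets[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist()))   # robot index -> vertex index

# Example: three robots assigned to the vertices of a small triangle
print(assign_to_formation([(0, 0), (2, 0), (1, 2)],
                          [(1, 0), (0.5, 1), (1.5, 1)]))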
A robot is a smart machine that can help people in their daily lives and keep everyone safe. The three general steps for accomplishing any robot task are mapping the environment, localization, and navigation (path planning with obstacle avoidance). Since the goal of the robot is to reach its target without colliding, navigation is the most important and challenging task of a mobile robot. In this paper, the robot navigation problem is solved by two proposed algorithms using low-cost IR receiver sensors arranged as an array, with the robot equipped with a single IR transmitter. First, the shortest-orientation algorithm is proposed, in which the robot's direction is corrected at each movement step based on an angle calculation. Second, an active-orientation algorithm is presented to overcome the weakness of the preceding algorithm: a chain of active sensors in the environment, within the sensing range of the virtual path, is activated and scanned during the robot's movement. In each algorithm, the initial position of the robot is detected using the modified binary search algorithm, and several stages are used to avoid obstacles through suitable equations, focusing on finding the shortest and safest path for the robot. Simulation results in multi-resolution environments show the efficiency of the algorithms: they are compatible with the designed environment and provide safe movement (without hitting obstacles) with good control performance. A comparison table is also provided.
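The angle-correction step of the shortest-orientation algorithm can be illustrated with the small Python sketch below; the coordinate convention and example values are assumptions used only to show the signed-turn computation at each movement step.

import math

def heading_correction(robot_xy, robot_heading_deg, target_xy):
    """Compute the signed turn (in degrees) that aligns the robot's heading
    with the straight line to the target at the current movement step."""
    dx = target_xy[0] - robot_xy[0]
    dy = target_xy[1] - robot_xy[1]
    desired = math.degrees(math.atan2(dy, dx))
    turn = (desired - robot_heading_deg + 180.0) % 360.0 - 180.0
    return turn   # positive = turn left, negative = turn right

# Robot at (0, 0) heading east (0 deg), target at (1, 1): turn +45 degrees
print(heading_correction((0.0, 0.0), 0.0, (1.0, 1.0)))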