Navigational sensors are evolving at both the commercial and research levels. However, the limitation still lies in the accuracy of the respective sensors. For a navigation system to reach a certain accuracy, multiple sensors or sensor fusion are used. In this paper, a framework of fuzzy sensor data fusion is proposed to obtain an optimised navigation system. Different types of sensors without a known state of inaccuracy can be fused using the same proposed method. This is demonstrated by fusing compass/accelerometer and GPS signals. GPS is prone to inaccuracies due to environmental factors, and these inaccuracies are available in the extracted NMEA protocols as SNR and HDOP. Dead-reckoning sensors, on the other hand, do not depend on external radio-signal coverage and can be used in areas with low coverage, but their errors are unbounded and accumulate over time.
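The HDOP figure mentioned above can be read directly out of a standard GGA sentence. The sketch below shows this extraction together with a purely illustrative confidence weight; the weighting function is an assumption for demonstration, not the paper's fuzzy membership functions.

```python
def parse_gpgga_hdop(sentence: str) -> float:
    """Extract HDOP (comma-separated field 8) from a $GPGGA NMEA sentence."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    return float(fields[8])

def gps_weight(hdop: float) -> float:
    """Hypothetical confidence weight: trust the GPS fix less as HDOP grows."""
    return 1.0 / (1.0 + hdop)

# Example GGA sentence with HDOP = 0.9
gga = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
```

A fusion stage could then scale the GPS contribution by `gps_weight` and give the remainder to the dead-reckoning estimate.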
In the city of Basrah, there is an urgent need to use water for irrigation more efficiently, for several reasons: one is the high temperature during the long summer season, and another is the lack of fresh-water sources. In this work, a smart irrigation system based on wireless sensor networks (WSNs) is implemented. This system consists of a main unit, represented by an Arduino Uno board that includes an ATmega328 microcontroller, together with different sensors such as moisture, temperature, and humidity sensors, XBee modules, and a solenoid valve. ZigBee technology is used in this project to implement the wireless links. The system has two modes: a manual mode and a smart mode. The set points must be changed manually according to the specified season to satisfy the given conditions for proper irrigation, and the smart operation of the system proceeds according to these set points.
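The two-mode, set-point-driven behaviour described above can be sketched as a small decision function. The threshold values and field names here are illustrative assumptions, not the paper's actual seasonal set points.

```python
# Illustrative seasonal set points (assumed values, not from the paper)
SET_POINTS_SUMMER = {"moisture_min": 30.0, "temp_max": 48.0}

def valve_open(mode, manual_on, moisture, temperature, set_points):
    """Decide whether the solenoid valve should be opened.
    In manual mode the operator's switch decides; in smart mode
    the seasonal set points decide."""
    if mode == "manual":
        return manual_on
    # Smart mode: irrigate when the soil is drier than the seasonal
    # set point and the air temperature is within the safe range.
    return moisture < set_points["moisture_min"] and temperature < set_points["temp_max"]
```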
The gyroscope and accelerometer are the basic sensors used by most Unmanned Aerial Vehicles (UAVs), such as quadcopters, for self-control. In this paper, fault detection for the angular and linear states measured by the gyroscope and accelerometer sensors is presented. Measurement uncertainties and the physical sensors themselves are the main sources of noise that cause faults in the measured states. Most previous solutions process either the angular or the linear states to improve quadcopter performance. Also, most previous solutions use KF and EKF filters, which are inefficient in dealing with highly nonlinear systems such as the quadcopter. The proposed algorithm is built around a robust nonlinear filter, the Unscented Kalman Filter (UKF), as an angular and linear estimation filter. Simulation results show that the proposed algorithm efficiently decreases the effect of sensor noise and estimates accurate angular and linear states, while also improving the stability and performance properties of the quadcopter. In addition, the new algorithm increases the range of nonlinear movements that the quadcopter can perform.
The scarcity of clean water resources around the globe has generated a need for their optimum utilization. Internet of Things (IoT) solutions, based on application-specific sensor data acquisition and intelligent processing, are bridging the gaps between the cyber and physical worlds. IoT-based smart irrigation management systems can help achieve optimum water-resource utilization in the precision-farming landscape. This paper presents an open-source-technology-based smart system to predict the irrigation requirements of a field by sensing ground parameters such as soil moisture and soil temperature and environmental conditions, along with weather-forecast data from the Internet. The sensing nodes, involved in ground and environmental sensing, measure the soil moisture, air temperature, and relative humidity of the crop field. The system mainly targets water wastage, a major concern of the modern era. It is also time-saving, allows a user to monitor environmental data for agriculture using a web browser and email, and offers cost-effectiveness, environmental protection, low maintenance and operating costs, and an efficient irrigation service. The proposed system is made up of two parts: hardware and software. The hardware consists of a Base Station Unit (BSU) and several Terminal Nodes (TNs). The software is made up of the programming of the Wi-Fi network and the system protocol. In this paper, an MQTT (Message Queuing Telemetry Transport) broker was built on the BSU and TN boards.
Using an artificial intelligent control technique, a new efficient approach for an indoor robot localization system is arranged. In this paper, a new mathematical calculation for the robot localization system using light sensors is proposed. This technique addresses the problem of localization (position identification) when using a grid of LEDs distributed uniformly in the environment and a mobile robot equipped with multiple LDR sensors, only two of which are activated by visibility of the robot at a time. The proposed method estimates the robot's position by drawing two virtual circles for each pair of LDR sensors; one of them is valid and the other is discarded according to several suggested equations. The midpoint of the valid circle is taken as the robot's centre. The new framework is simulated in an environment with an (n*n) LED array, and the simulation results show good performance in the localization procedure.
In maze maneuvering, a mobile robot needs to feasibly plan the shortest path from its initial posture to the desired destination in a given environment. To achieve that, the mobile robot is equipped with multiple distance sensors to assist navigation while avoiding obstructing obstacles and following the shortest path toward the target. Additionally, a vision sensor is used to detect and track colored objects. A new algorithm is proposed based on the different types of utilized sensors to aid the maneuvering of a differential-drive mobile robot in an unknown environment. In the proposed algorithm, the robot has the ability to traverse surrounding hindrances and seek a particular object based on its color. Six infrared sensors are used to detect any located obstacles, and one color-detection sensor is used to locate the colored object. The Mobile Robotics Simulation Toolbox in Matlab is used to test the proposed algorithm. Three different scenarios are studied to prove the efficiency of the proposed algorithm. The simulation results demonstrate that the mobile robot successfully accomplishes the tracking and locating of a colored object without colliding with hurdles.
A robot is a smart machine that can help people in their daily lives and keep everyone safe. The three general stages to accomplish any robot task are mapping the environment, localization, and navigation (path planning with obstacle avoidance). Since the goal of the robot is to reach its target without colliding, the most important and challenging task of the mobile robot is navigation. In this paper, the robot navigation problem is solved by proposing two algorithms using low-cost IR receiver sensors arranged as an array, with the robot equipped with one IR transmitter. Firstly, the shortest-orientation algorithm is proposed, in which the robot direction is corrected at each step of movement depending on an angle calculation. Secondly, an active-orientation algorithm is presented to overcome the weakness of the preceding algorithm: a chain of the active sensors in the environment within the sensing range of the virtual path is activated to be scanned during the robot movement. In each algorithm, the initial position of the robot is detected using a modified binary search algorithm, and various stages are used to avoid obstacles through suitable equations, focusing on finding the shortest and safest path for the robot. Simulation results with multi-resolution environments demonstrate the efficiency of the algorithms: they are compatible with the designed environment, provide safe movements (without hitting obstacles), and give good system control performance. A comparison table is also provided.
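The binary-search idea for initial position detection can be sketched as follows. The group-query interface `detects(lo, hi)` (whether any receiver in a range sees the transmitter) is a hypothetical abstraction; the paper's "modified" variant is not specified here, so this shows only the plain binary-search principle.

```python
def locate_transmitter(detects, n):
    """Binary-search for the index of the single IR receiver that sees
    the robot's transmitter, among n receivers arranged as an array."""
    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if detects(lo, mid):   # transmitter seen in the lower half?
            hi = mid
        else:
            lo = mid
    return lo

# Toy environment: 16 receivers, only receiver 11 detects the robot.
receivers = [False] * 16
receivers[11] = True

def probe(lo, hi):
    return any(receivers[lo:hi])
```

This finds the robot among n receivers with about log2(n) group queries instead of scanning all of them.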
Energy consumption is an essential problem in today's wireless sensor networks, where advances have made sensors and batteries small enough to be placed in a patient's body for remote monitoring. These sensors have limited resources, such as battery power that is difficult to replace or recharge. Therefore, researchers should be concerned with saving and controlling the energy consumed by these sensors efficiently, to keep the network alive as long as possible and increase its lifetime. In this paper, an energy-efficient and fault-tolerant strategy is proposed, adopting a fault-tolerance technique based on a self-checking process and a sleep-scheduling mechanism to avoid faults that may increase power consumption, while remaining energy-efficient across the whole network. This is done by improving the LEACH protocol with the proposed strategies. Simulation results show that the recommended method has higher efficiency than the LEACH protocol in power consumption and can prolong the network lifetime. In addition, it can detect and recover from potential errors that consume high energy.
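For context, the baseline LEACH protocol that the paper improves elects cluster heads randomly each round using the classic threshold T(n) = P / (1 - P * (r mod 1/P)). The sketch below shows only this standard baseline; the paper's self-checking and sleep-scheduling additions are not represented.

```python
import random

def leach_threshold(p: float, r: int) -> float:
    """Classic LEACH cluster-head election threshold for round r,
    for nodes that have not yet been cluster head in the current epoch.
    p is the desired fraction of cluster heads (e.g. 0.1)."""
    return p / (1 - p * (r % round(1 / p)))

def elects_as_cluster_head(p: float, r: int, rng=random.random) -> bool:
    """A node draws a uniform random number and becomes cluster head
    if it falls below the round's threshold."""
    return rng() < leach_threshold(p, r)
```

Note how the threshold grows toward 1.0 at the end of each epoch, guaranteeing that every surviving node eventually serves as cluster head.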
It can be said that the system sensing the tilt angle and speed of a multi-rotor copter ranks first among all the other sensors on multi-rotor copters and other aircraft, due to its important role in stabilization. The MPU6050 sensor is one of the most popular sensors in this field. It has an embedded 3-axis accelerometer and a 3-axis gyroscope, and it is simple to interface with and to extract accurate data from. Everything changes when this sensor is placed on the aircraft: it becomes very complicated to deal with due to the vibration of the motors on the multirotor copter. In this study, two main problems were diagnosed and solved that appear in most sensors when they are applied in a high-frequency vibrating environment. The first problem is how to get a precise angle from the sensor despite the presence of vibration. The second problem is how to overcome the errors that appear when the multirotor copter revolves around its vertical axis while tilting in the x or y direction, or both. The first problem was solved in two steps. The first step involves mixing the gyroscope data with the accelerometer data through a mathematical equation based on a complementary filter optimized using the grey wolf optimization (GWO) algorithm. The second step involves designing a suitable FIR filter for the data. The second problem was solved by finding a nonlinear mathematical relationship between the angles of the copter in both the X and Y directions and the rotation around the vertical axis of the multirotor copter frame.
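The complementary-filter mixing step referred to above can be sketched in a few lines. The fixed gain `alpha = 0.98` here is a common textbook default; the paper instead tunes this gain with GWO and adds an FIR stage, both of which are omitted in this minimal sketch.

```python
import math

def accel_angle(ax: float, ay: float, az: float) -> float:
    """Tilt angle about the X axis from accelerometer readings (radians);
    valid when the copter is quasi-static (no strong linear acceleration)."""
    return math.atan2(ay, az)

def complementary(angle, gyro_rate, acc_ang, dt, alpha=0.98):
    """One step of a complementary filter: integrate the gyroscope for
    short-term accuracy, and pull slowly toward the accelerometer angle
    to cancel gyro drift."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * acc_ang
```

Run repeatedly with a stationary accelerometer reading, the estimate converges to the accelerometer angle while staying immune to short accelerometer spikes.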
A wireless body area network (WBAN) connects separate sensors at many places on the human body, such as on clothing or under the skin. WBANs can be used in many domains such as health care, sports, and control systems. In this paper, a scheme focused on managing a patient’s health care is presented, based on building a WBAN that consists of three components: biometric sensors, mobile applications related to the patient, and a remote server. A scheme is proposed for the patient’s device, such as a mobile phone or a smartwatch, which can classify the signal coming from a biometric sensor into two types, normal and abnormal. For an abnormal signal, the device can carry out appropriate activities for the patient without requiring a doctor in the first instance. If the patient does not respond to the warning message in a critical case, the personal device sends an alert to the patient’s family, including his/her location. The proposed scheme can preserve the privacy of the patient's sensitive data in a protected way and can support several security features such as mutual authentication, key management, anonymous passwords, and resistance to malicious attacks. These features have been proven using the Automated Validation of Internet Security Protocols and Applications. Moreover, the computation and communication costs are efficient compared with other related schemes.
This paper presents two algorithms for designing robust formation control of multiple robots, called the Leader-Neighbor algorithm and the Neighbor-Leader algorithm, in an unknown environment. The main function of the robot group is to use the RP lidar sensor attached to each robot to form a static geometric polygon. The algorithms consist of two phases implemented to establish the polygon formation. In the Leader-Neighbor algorithm, the first stage is the leader alignment and the second stage is the neighbor alignment. The first stage uses the information gathered by the leader's RP lidar sensor to determine and compute the direction of each adjacent robot; the adjacent RP lidar sensors are then used to align the adjacent robots with the leader by moving these adjacent robots toward the leader. After this stage, the neighboring robots remain away from the leader. The second stage uses the information gathered by the adjacent RP sensors to reposition the robots so that the distances between them are equal. On the other hand, in the Neighbor-Leader algorithm, the adjacent robots are rearranged in a regular distribution by moving in a circular path around the leader, with equal angles between every two neighboring robots. A new distribution is generated in this paper using one leader and four adjacent robots to validate the suggested Leader-Neighbor and Neighbor-Leader algorithms.
The main problem of the line-follower robot is how to make the mobile robot follow a desired path (a line drawn on the floor) smoothly and accurately in the shortest time. In this paper, the design and implementation of a complex line-follower mission is presented using the Matlab Simulink toolbox. The motion of the mobile robot on the complex path is simulated using the Robot Simulator, programmed in Matlab, to design and test the performance of the proposed line-follower algorithm and the designed PID controller. Due to the complexity of selecting the parameters of the PID controller, the Particle Swarm Optimization (PSO) algorithm is used to select and tune the parameters of the designed PID controller. Five infrared (IR) sensors are used to collect information about the location of the mobile robot with respect to the desired path (the black line). Depending on the collected information, the steering angle of the mobile robot is controlled to keep the robot on the desired path by controlling the speed of the actuators (two DC motors). The obtained simulation results show that the motion of the mobile robot remains stable even when a complex maneuver is performed. The hardware design of the robot system is carried out using the Arduino Mobile Robot (AMR). The Simulink Support Package for Arduino and the Control System Toolbox are used to program the AMR. The practical results show that the performance of the real mobile robot closely matches that of the simulated mobile robot.
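A common way to turn five IR sensor readings into a steering error, plus a textbook PID step, is sketched below. The sensor weights and gains are illustrative assumptions; the paper tunes the actual gains with PSO and implements the controller in Simulink.

```python
def line_error(ir):
    """Weighted-average position error from five IR sensors.
    ir[i] == 1 when sensor i sees the black line; weights are
    illustrative offsets (in sensor pitches) from the robot centre."""
    weights = [-2, -1, 0, 1, 2]
    hits = sum(ir)
    if hits == 0:
        return None  # line lost
    return sum(w * s for w, s in zip(weights, ir)) / hits

class PID:
    """Discrete PID; gains here are placeholders to be tuned (e.g. by PSO)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The PID output would then be applied as a differential speed command to the two DC motors.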
Nowadays, renewable energy is being used increasingly because of global warming and the destruction of the environment. Therefore, studies are concentrating on gaining maximum power from such energy sources, including solar energy. A sun tracker is a device that rotates a photovoltaic (PV) panel toward the sun to obtain maximum power. Disturbances caused by passing clouds are one of the great challenges in the design of the controller, in addition to the power losses due to energy consumption in the motors and the lifetime limitation of the sun tracker. In this paper, a neuro-fuzzy controller has been designed and implemented on a Field Programmable Gate Array (FPGA) board for a dual-axis sun tracker based on optical sensors, orienting the PV panel using two linear actuators. The experimental results reveal that the proposed controller is more robust than a fuzzy logic controller and a proportional-integral (PI) controller, since it has been trained offline using the Matlab toolbox to overcome those disturbances. The proposed controller can track the sun's trajectory effectively: the experimental results reveal that the dual-axis sun tracker can collect 50.6% more daily power than a fixed-angle panel, while a one-axis sun tracker can collect 39.4% more daily power than a fixed-angle panel. Hence, the dual-axis sun tracker collects 8% more daily power than the one-axis sun tracker.
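A common optical-sensor arrangement for dual-axis trackers is four LDRs in a 2x2 cross, with the controller driven by differential light readings. The sketch below assumes that layout (the paper's exact sensor geometry may differ) and shows only the error-signal computation that would feed the controller.

```python
def tracker_errors(ldr_tl, ldr_tr, ldr_bl, ldr_br):
    """Differential errors for a four-LDR dual-axis tracker
    (top-left, top-right, bottom-left, bottom-right readings).
    Positive azimuth error means the right side is brighter, so the
    panel should rotate right; similarly for elevation and 'up'."""
    azimuth = (ldr_tr + ldr_br) - (ldr_tl + ldr_bl)    # right minus left
    elevation = (ldr_tl + ldr_tr) - (ldr_bl + ldr_br)  # top minus bottom
    return azimuth, elevation
```

When all four sensors are equally lit, both errors are zero and the actuators hold position.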
The monitoring of COVID-19 patients has been greatly aided by the Internet of Things (IoT). Vital signs, symptoms, and mobility data can be gathered and analyzed by IoT devices, including wearables, sensors, and cameras. This information can be utilized to spot early infection symptoms, monitor the illness’s development, and stop the virus from spreading. It is critical to take the vital signs of hospitalized patients in order to assess their health. Although early-warning scores are often calculated three times a day, they might not indicate decompensation symptoms right away, and death rates are higher when deterioration is not properly diagnosed. By employing wearable technology, ongoing assessments may be able to spot clinical deterioration early and facilitate prompt therapies. This research describes the use of the Internet of Things (IoT) to follow fatal events in high-risk COVID-19 patients. These patients’ vital signs, which include blood pressure, heart rate, respiration rate, blood oxygen level, and fever, are taken and fed to a central server on a regular basis so that the information may be processed, stored, and published instantly. After processing, the data is utilized to monitor the patients’ condition and send Short Message Service (SMS) alerts when the patients’ vital signs rise above predetermined thresholds. The system’s design, which is based on two ESP32 controllers, sensors for the vital signs listed above, and a gateway, provides real-time reports, high-risk alerts, and patient status information. Clinicians, the patient’s family, or any other authorized person can keep an eye on the patient’s status at any time and from any location. The main contribution of this work is the algorithm designed for the gateway: the manner in which the gateway collects, analyzes, processes, and sends the patient’s data to the IoT server on one side, and the manner in which it deals with the IoT server on the other side. The proposed method reduces the cost and the time the system takes to produce the patient’s status report.
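The threshold-based alerting described above can be sketched as a simple range check per vital sign. The threshold values below are illustrative assumptions for demonstration only, not clinical guidance and not the paper's configured limits.

```python
# Illustrative adult limits (assumed values, not clinical guidance)
THRESHOLDS = {
    "heart_rate":  (50, 120),    # beats/min
    "spo2":        (92, 100),    # %
    "temperature": (35.0, 38.5)  # degrees Celsius
}

def check_vitals(sample):
    """Return the list of vitals outside their (low, high) window;
    a non-empty list would trigger the gateway's SMS alert."""
    alerts = []
    for name, value in sample.items():
        lo, hi = THRESHOLDS[name]
        if not (lo <= value <= hi):
            alerts.append(name)
    return alerts
```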
This work presents a healthcare monitoring system that can be used in an intensive care room. Biological information represented by ECG signals is acquired by the ECG acquisition part. An AD620 instrumentation amplifier was selected due to its low current noise. The ECG signals of patients in the intensive care room are measured through wireless nodes. A base node is connected to the nursing-room computer via a USB port and is programmed with specific firmware. The ECG signals are transferred wirelessly to the base node using an nRF24L01+ wireless module, so the nursing staff has real-time information for each patient in the intensive care room. A star wireless sensor network is designed for collecting the ECG signals; an ATmega328 MCU on the Arduino Uno board is used for this purpose. The Internet of Things is used for transferring ECG signals to the remote doctor, and a Virtual Private Network is established to connect the nursing-room computer and the doctor's computer, so the patients' information is kept secure. Although the constructed network is tested for ECG monitoring, it can be used to monitor any other signals.

INTRODUCTION

For elderly people, or patients suffering from cardiac disease, it is vital to perform accurate and quick diagnosis, so putting such a person under continuous monitoring is very necessary. The electrocardiogram (ECG) is one of the critical health indicators that directly benefits from long-term monitoring. The ECG signal is a time-varying signal representing the electrical activity of the heart; it is an effective, non-invasive diagnostic tool for cardiac monitoring [1]. In this medical field, a big improvement has been achieved in the last few years. In the past, several remote monitoring systems using wired communications were available, while nowadays the evolution of wireless communication enables these systems to operate everywhere in the world by expanding Internet benefits, applications, and services [2].
Wireless Sensor Networks (WSNs), as the name suggests, consist of a network of wireless nodes that have the capability to sense a parameter of interest such as temperature, humidity, vibration, etc. [3,4]. The healthcare applications of wireless sensor networks have attracted many researchers nowadays [5-7]; among these applications are ECG monitoring using smartphones [6,8], wearable body sensors [9], remote patient monitoring [10], etc. This paper presents a wireless ECG monitoring system for patients lying in an intensive care room. In this room, ECG signals for every patient are measured using wireless nodes, then these signals are transmitted to the nursing room for remote monitoring. The nursing-room computer is then connected to the doctor's computer, which may be at any location in the world, by a Virtual Private Network (VPN), such that the patients' information is kept secure and inaccessible to unauthorized persons.

II. MOTE HARDWARE ARCHITECTURE

The proposed mote, as shown in Fig. 1, consists of two main sections: the digital section, which is represented by the Arduino Uno board and the wireless module, and the analog section. The analog section consists of an AD620 instrumentation amplifier, a bandpass filter, and an operational amplifier for the gain stage, in addition to a right-leg-drive circuit. The required power is supplied by an internal 3800 mAh lithium-ion (Li-ion) battery with a 3.7 V output voltage.
In this paper, a new technique for multi-robot localization in an unknown environment, called the leader-follower localization algorithm, is presented. The framework uses one robot that acts as a leader, while the other robots are considered followers distributed randomly in the environment. Every robot is equipped with RP lidar sensors to scan the environment and gather information about every other robot. This information is utilized by the leader to identify and localize every robot in the environment. The issue of non-visible robots is solved by comparing their distances to the leader. Moreover, the equal-distance robot issue is resolved by utilizing a permutation algorithm. Several simulation scenarios with different positions and orientations are implemented on 3-7 robots to show the performance of the introduced technique.
A new algorithm for multi-object recognition and localization is introduced in this paper. This algorithm deals with objects that have different reflectivity factors and distinct colors with respect to the other objects. Two beacons scan the multi-color objects using long-distance IR sensors to estimate their absolute locations; these two beacon nodes are placed at two corners of the environment. The recognition of the objects is performed by matching the locations of each object with respect to the two beacons. A look-up table containing distance information for the different color objects is used to convert the reading of the long-distance IR sensor from voltage to distance units. The locations of invisible objects are computed using an absolute-location method for invisible objects. The performance of the introduced algorithm is tested with several experimental scenarios implemented on color objects.
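Locating an object from two corner beacons reduces to intersecting two range circles. The sketch below assumes the beacons sit at (0, 0) and (w, 0) along one edge and that objects lie inside the environment (y >= 0); the paper's exact geometry and matching step are not reproduced here.

```python
import math

def locate(d1: float, d2: float, w: float):
    """Intersect two range circles from beacons at (0, 0) and (w, 0).
    d1, d2 are the distances measured by each beacon to the object."""
    x = (d1 * d1 - d2 * d2 + w * w) / (2 * w)
    y_squared = d1 * d1 - x * x
    if y_squared < 0:
        raise ValueError("inconsistent range readings")
    return x, math.sqrt(y_squared)  # take y >= 0: inside the environment
```

For example, an object at (3, 4) with beacons 10 units apart yields ranges 5 and sqrt(65), and the intersection recovers (3, 4).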
The proposed design offers a complete solution for supporting and surveilling vehicles remotely. The offered algorithm allows a monitoring center to track vehicles, diagnose faults remotely, manage traffic, and control CO emissions. The system is programmed to scan the on-board diagnostics (OBD) periodically, or on request, to check whether there are any faults and to read all the available sensors, then make an early fault prediction based on the sensor readings, experience with the vehicle type, and the fault history. This is very useful for people who are not familiar with fault diagnosis, as well as for the maintenance center. The system offers remote vehicle tracking, which protects the vehicle against theft and warns the driver when the speed limit for the current location is exceeded. Finally, it allows the user to report any traffic congestion and allows a vehicle navigator to stay up to date with the traffic conditions based on other users' feedback.
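Reading sensors over OBD follows the standard SAE J1979 request/response format. As a concrete example, the sketch below decodes engine RPM from a mode-01 PID 0x0C reply; this is the standardized formula, not the paper's fault-prediction logic.

```python
def decode_rpm(response: bytes) -> float:
    """Decode engine RPM from a mode-01 PID 0x0C reply.
    Per SAE J1979: reply bytes are 41 0C A B, RPM = (256*A + B) / 4."""
    if response[0] != 0x41 or response[1] != 0x0C:
        raise ValueError("not a mode-01 PID 0x0C response")
    a, b = response[2], response[3]
    return (256 * a + b) / 4.0
```

The periodic scanner would issue such requests for each supported PID and feed the decoded values to the prediction stage.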
Agriculture is the primary food source for humans and livestock in the world and the primary source of the economy for many countries. The majority of many countries' populations, and of the world, depends on agriculture. Still, at present, farmers face difficulty in meeting the requirements of agriculture, due to many reasons, including different and extreme weather conditions and issues with water availability and quality. This paper applies the Internet of Things and a deep learning system to establish a smart farming system that monitors the environmental conditions affecting tomato plants using a mobile phone. Deep learning networks were trained on a dataset taken from PlantVillage and collected from Google Images to classify tomato diseases, obtaining a test accuracy of 97%, which led to deploying the model in the mobile application for classification due to its high accuracy. Using the IoT, a monitoring and automatic irrigation system was built, controlled remotely through the mobile phone, to monitor the environmental conditions surrounding the plant, such as air temperature and humidity, soil moisture, water quality, and carbon dioxide concentration. The designed system has proven its efficiency when tested in terms of disease classification, remote irrigation, and monitoring of the environmental conditions surrounding the plant, giving alerts when the sensor values fall outside the minimum or maximum values and could damage the plant. The farmer can then take the appropriate action at the right time to prevent any damage to the plant and thus obtain a high-quality product.
This paper presents a new strategy for sticky-bomb detection. The detection strategy is based on measuring the magnetic field around the targeted car using a compass device. The compass measures the Earth's magnetic field around the car as (x, y, z) components, and a threshold value for the magnetic field around the targeted car is recorded. If a difference is detected in any of the (x, y, z) components, an alert SMS message is sent to the car's owner. The detection system presented in this paper has been implemented on an Arduino board. The alarm signal is a Short Message Service (SMS) message sent through a Global System for Mobile Communication (GSM) module. The proposed method gives people in unstable countries a chance to discover whether their car has been booby-trapped with an IED or is still safe.
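The per-axis threshold comparison at the heart of the detector can be sketched as follows. The threshold value is an assumption to be calibrated per vehicle and sensor; the paper records it during an initial baseline measurement.

```python
def field_changed(baseline, reading, threshold):
    """Compare a new (x, y, z) magnetometer reading against the recorded
    baseline; any axis deviating by more than `threshold` raises the alarm
    (which would trigger the SMS via the GSM module)."""
    return any(abs(b - r) > threshold for b, r in zip(baseline, reading))
```

A ferromagnetic object attached near the sensor distorts the local field, pushing at least one component past the calibrated threshold.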
Wavelet-based algorithms are increasingly used in the source coding of remote sensing, satellite, and other geospatial imagery. At the same time, wavelet-based coding applications have also increased in robust communication and network transmission of images. Although wireless multimedia sensors are widely used to deliver multimedia content thanks to the availability of inexpensive CMOS cameras, their computational and memory resources are still typically very limited. It is known that allowing a low-cost camera sensor node with limited RAM to perform a multi-level wavelet transform will in turn limit the size of the acquired image. Recently, the fractional wavelet filter technique has become an interesting solution to reduce communication energy and wireless bandwidth for resource-constrained devices (e.g., digital cameras). However, the reduction in required memory in these fractional wavelet transforms is achieved at the expense of image quality. In this paper, an adaptive fractional-artifact reduction approach is proposed for efficient filtering operations according to the desired compromise between the effectiveness of artifact reduction and algorithm simplicity, using local image features to reduce the boundary artifacts caused by the fractional wavelet. Applying this technique to different types and sizes of images using CDF 9/7 wavelet filters yields good performance.
Nowadays, the Wireless Sensor Network (WSN) has established working areas including environmental engineering, the agriculture sector, industrial and business applications, the military, intelligent buildings, etc. Sensor networks emerge as an attractive technology with great promise for the future. Nevertheless, issues remain to be resolved in the areas of coverage and deployment, scalability, quality of service, size, energy consumption, and security. The purpose of this paper is to present the integration of WSNs into IoT networks with the intention of exchanging information and applying security and configuration. These aspects are the challenges of network construction, in which authentication, confidentiality, availability, integrity, and network development must be addressed. This review sheds some light on the potential challenges imposed by the integration of WSNs into the IoT, which are reflected in the differences in traffic features.
A wireless sensor network consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion, or pollutants. Different approaches have been used for the simulation and modeling of SNs (Sensor Networks) and WSNs. Traditional approaches consist of various simulation tools based on different languages such as C, C++, and Java. In this paper, MATLAB (7.6) Simulink was used to build a complete WSN system. The simulation procedure includes building the hardware architecture of the transmitting nodes and modeling both the communication channel and the receiving master-node architecture. Bluetooth was chosen to undertake the physical-layer communication with respect to different channel parameters (i.e., signal-to-noise ratio, attenuation, and interference). The simulation model was examined using different topologies under various conditions, and numerous results were collected. This new simulation methodology proves the ability of Simulink MATLAB to be a useful and flexible approach for studying the effect of different physical-layer parameters on the performance of wireless sensor networks.
WiFi-based human activity and gesture recognition explores the interaction between human hand or body movements and the reflected WiFi signals to identify various activities. This type of recognition has received much attention in recent years since it does not require wearing special sensors or installing cameras. This paper aims to investigate human activity and gesture recognition schemes that use the Channel State Information (CSI) provided by WiFi devices. To achieve high measurement accuracy, deep learning models such as AlexNet, VGG19, and SqueezeNet were used for classification and automatic feature extraction. Firstly, outliers are removed from the amplitude of each CSI stream during the preprocessing stage using the Hampel identifier algorithm. Next, RGB images are created for each activity to feed as input to deep convolutional neural networks. After that, data augmentation is applied to reduce overfitting in the deep learning models. Finally, the proposed method is evaluated on a publicly available dataset called WiAR, which contains 10 volunteers, each of whom executes 16 activities. The experimental results demonstrate that AlexNet, VGG19, and SqueezeNet achieve high recognition accuracies of 99.17%, 96.25%, and 100%, respectively.
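The Hampel identifier used in the preprocessing stage flags a sample as an outlier when it deviates from the local median by more than a multiple of the scaled median absolute deviation (MAD). A minimal sketch follows; the window half-width and sigma multiplier are common defaults and may differ from the paper's settings.

```python
import statistics

def hampel(x, k=3, n_sigma=3.0):
    """Hampel identifier over a list of samples: replace any sample that
    deviates from the median of its +/-k window by more than
    n_sigma * 1.4826 * MAD with that local median."""
    y = list(x)
    for i in range(len(x)):
        lo, hi = max(0, i - k), min(len(x), i + k + 1)
        window = x[lo:hi]
        med = statistics.median(window)
        mad = statistics.median(abs(v - med) for v in window)
        if abs(x[i] - med) > n_sigma * 1.4826 * mad:
            y[i] = med
    return y
```

The 1.4826 factor makes the MAD a consistent estimator of the standard deviation for Gaussian data, so `n_sigma` behaves like a conventional sigma threshold.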
A new algorithm for the localization and identification of multi-node systems is introduced in this paper. The algorithm is based on the idea of using a beacon equipped with a distance sensor and an IR sensor to calculate the location and determine the identity of each visible node during scanning. Furthermore, the beacon is fixed at the middle of the frame's bottom edge for a better view of the nodes. Any detected node will start to communicate with the neighboring nodes using the IR sensors distributed on its perimeter; that information is used later for the localization of invisible nodes. The performance of this algorithm is demonstrated through several simulations.
Bin picking robots require vision sensors capable of recognizing objects in the bin irrespective of the orientation and pose of the objects inside the bin. Bin picking systems are still a challenge to the robot vision research community due to the complexity of segmenting occluded industrial objects as well as recognizing segmented objects that have irregular shapes. In this paper, a simple object recognition method is presented using singular value decomposition (SVD) of the object image matrix and a functional link neural network for a bin picking vision system. The results of the functional link net are compared with those of a simple feed-forward net. The network is trained using the error back-propagation procedure. The proposed method is robust for the recognition of objects.
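The two ingredients named above can be sketched briefly: singular values of the image matrix serve as compact pose-tolerant features, and a functional-link expansion augments the input with non-linear terms so that a single-layer net (no hidden layer) can separate the classes. The feature count and trigonometric expansion below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def svd_features(image, k=8):
    """Top-k singular values of the image matrix, normalised to unit energy.

    Singular values are invariant to row/column permutations and robust to
    small pose changes, which makes them usable as object features.
    """
    s = np.linalg.svd(np.asarray(image, dtype=float), compute_uv=False)
    s = s[:k]
    return s / (np.linalg.norm(s) + 1e-12)

def functional_link_expand(x):
    """Functional-link expansion: augment the input with trigonometric
    terms so a single-layer net can model non-linear class boundaries."""
    return np.concatenate([x, np.sin(np.pi * x), np.cos(np.pi * x)])

# Toy example: two synthetic 16x16 "objects" with different rank structure.
rng = np.random.default_rng(0)
obj_a = np.outer(rng.random(16), rng.random(16))  # rank-1 object
obj_b = rng.random((16, 16))                      # full-rank clutter
fa = functional_link_expand(svd_features(obj_a))
fb = functional_link_expand(svd_features(obj_b))
```

The expanded vectors `fa`/`fb` would then be the inputs trained by error back-propagation, as the abstract states.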
Among all control methods for induction motor drives, Direct Torque Control (DTC) seems particularly interesting, being independent of machine rotor parameters and requiring no speed or position sensors. The DTC scheme is characterized by the absence of PI regulators, coordinate transformations, current regulators and PWM signal generators. In spite of its simplicity, DTC allows good torque control to be obtained in steady-state and transient operating conditions. However, the presence of hysteresis controllers for flux and torque can cause torque and current ripple and variable switching-frequency operation of the voltage source inverter. This paper aims to analyze DTC principles, the problems related to its implementation, especially torque ripple, and possible improvements to reduce this torque ripple using a proposed fuzzy-based duty cycle controller. The effectiveness of the duty-ratio method was verified by simulation using the Matlab/Simulink software package. The results are compared with those of traditional DTC models.
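To make the ripple mechanism concrete, a minimal sketch of a classical DTC torque hysteresis comparator and of the duty-ratio idea (applying the active voltage vector for only a fraction of the sample period). This is a generic illustration of the two concepts, not the paper's fuzzy controller; the band and scaling are assumptions:

```python
def torque_comparator(err, band, prev):
    """Three-level torque hysteresis comparator used in classical DTC.

    Returns +1 (apply a torque-increasing vector), -1 (decreasing), and
    holds the previous decision while the error stays inside the band --
    this bang-bang behaviour is the source of the torque ripple.
    """
    if err > band:
        return 1
    if err < -band:
        return -1
    return prev  # inside the hysteresis band: keep the last decision

def duty_ratio(err, band):
    """Duty-ratio ripple reduction: scale the active-vector on-time by the
    relative torque error instead of applying it for the full period."""
    return max(0.0, min(1.0, abs(err) / band))

state = torque_comparator(0.5, band=0.2, prev=0)  # error above band -> +1
d = duty_ratio(0.1, band=0.2)                     # half-period active vector
```

In the paper's scheme the duty ratio is produced by a fuzzy controller rather than this linear scaling.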
The Internet of Things (IoT) has had significant success in smart cities, health care, industrial production, and many other fields. Protected agriculture, a highly effective form of modern agricultural development that uses artificial means to manipulate climatic parameters such as temperature to create ideal conditions for the growth of animals and plants, has numerous IoT applications. Convolutional Neural Networks (CNNs) are a deep learning approach that has made significant progress in image processing. From 2016 to the present, various applications for the automatic diagnosis of agricultural diseases, the identification of plant pests, the prediction of crop yields, etc., have been developed. This paper presents the Internet of Things system in agriculture and its deep learning applications. It summarizes the most essential sensors used and the methods of communication between them, in addition to the most important deep learning algorithms devoted to intelligent agriculture.
Most Internet of Vehicles (IoV) applications are delay-sensitive and require resources for data storage and task processing, which are very difficult for vehicles to afford. Such tasks are often offloaded to more powerful entities, like cloud and fog servers. Fog computing is a decentralized infrastructure located between the data source and the cloud that supplies several benefits, making it a non-frivolous extension of the cloud. The high volume of data generated by vehicles' sensors and the limited computation capabilities of vehicles have imposed several challenges on VANET systems. Therefore, VANETs are integrated with fog computing to form a paradigm named Vehicular Fog Computing (VFC), which provides low-latency services to mobile vehicles. Several studies have tackled the task offloading problem in the VFC field. However, recent studies have not carefully addressed the transmission path to the destination node and have not considered the energy consumption of vehicles. This paper aims to optimize the task offloading process in the VFC system with respect to latency and energy objectives under a deadline constraint by adopting a Multi-Objective Evolutionary Algorithm (MOEA). A Road Side Units (RSUs) x-Vehicles Multi-Objective Computation offloading method (RxV-MOC) is proposed, where an elite set of vehicles is utilized as fog nodes for task execution and all vehicles in the system are utilized for task transmission. The well-known Dijkstra's algorithm is adopted to find the minimum path between each pair of nodes. The simulation results show that RxV-MOC significantly reduces the energy consumption and latency of the VFC system in comparison with the First-Fit algorithm, the Best-Fit algorithm, and the MOC method.
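The minimum-path step named above is standard Dijkstra over the vehicle/RSU connectivity graph. A minimal sketch, where the node names and edge weights (interpreted as per-hop link latencies) are purely illustrative:

```python
import heapq

def dijkstra(graph, src):
    """Shortest transmission cost from src to every reachable node.

    graph: {node: [(neighbour, cost), ...]} -- costs could model per-hop
    latency between vehicles and RSUs.
    """
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Toy vehicular graph: edge weights are hypothetical link latencies (ms).
g = {
    "v1": [("v2", 4), ("rsu", 10)],
    "v2": [("rsu", 3), ("v3", 1)],
    "v3": [("rsu", 1)],
    "rsu": [],
}
dist = dijkstra(g, "v1")  # relaying via v2 and v3 beats the direct link
```

In the RxV-MOC setting these shortest paths would feed the MOEA's latency/energy evaluation of each candidate offloading assignment.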
Many assistive devices have been developed in recent years for visually impaired (VI) persons to solve the problems they face in their daily movement. Most research tries to solve the obstacle avoidance or navigation problem, while other work focuses on helping the VI person recognize objects in the surrounding environment. However, few systems integrate both navigation and recognition capabilities. In view of these needs, an assistive device is presented in this paper that achieves both capabilities to help the VI person (1) navigate safely from the current location (pose) to a desired destination in an unknown environment, and (2) recognize surrounding objects. The proposed system consists of low-cost sensors, a Neato XV-11 LiDAR, an ultrasonic sensor, and a Raspberry Pi camera (CameraPi), which are mounted on a white cane. Hector SLAM based on the 2D LiDAR is used to construct a 2D map of the unfamiliar environment, while the A* path planning algorithm generates an optimal path on the given 2D Hector map. Moreover, temporary obstacles in front of the VI person are detected by the ultrasonic sensor. A recognition system based on the Convolutional Neural Network (CNN) technique is implemented in this work to predict the object class and to enhance the navigation system. Interaction between the VI person and the assistive system is performed through an audio module (speech recognition and speech synthesis). The proposed system's performance has been evaluated in various real-time experiments conducted in indoor scenarios, showing the efficiency of the proposed system.
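The A* planner plays the role described above on the 2D occupancy map produced by Hector SLAM. A minimal grid sketch with a Manhattan-distance heuristic; the grid, connectivity, and coordinates are illustrative assumptions, not the paper's map format:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    pq = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    seen = set()
    while pq:
        f, g, node, path = heapq.heappop(pq)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in seen:
                    heapq.heappush(pq, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # detours around the blocked row
```

In the full system, ultrasonic detections of temporary obstacles would mark extra cells occupied before replanning.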
Over the previous decade, significant research has been conducted in the field of healthcare services and their technological advancement. To be more precise, the Internet of Things (IoT) has demonstrated potential for connecting numerous medical devices, sensors, and healthcare professionals in order to deliver high-quality medical services in remote locations. This has resulted in an increase in patient safety, a decrease in healthcare expenses, an increase in the accessibility of healthcare services, and an increase in the operational efficiency of the healthcare industry. This paper provides an overview of the possible healthcare uses of IoT-based technologies. The evolution of the Healthcare IoT (HIoT) application is discussed in terms of enabling technologies, healthcare services, and applications for resolving different healthcare challenges. Additionally, the difficulties and drawbacks of HIoT systems are explored. In summary, this study provides a complete source of information on the many applications of HIoT, with the aim of helping future researchers who are interested in working in the field and making advances gain insight into the issue.
Linearization of sensor characteristics has become a field of great interest to researchers due to its importance in enhancing system performance and measurement accuracy, simplifying system design (hardware and software), reducing system cost, etc. In this paper, two approaches are introduced to linearize sensor characteristics. The first is a signal conditioning circuit based on a look-up table (LUT), applied to linearize the NTC thermistor characteristic. The second is a ratiometric measurement equation, applied to linearize the LVDT sensor characteristic. The proposed methods were simulated in MATLAB and then implemented using an Anadigm AN221E04 Field Programmable Analog Array (FPAA) development kit, with several experiments performed to evaluate the performance of these approaches.
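The two approaches can be sketched compactly: the LUT method stores calibration pairs and interpolates between them, and the ratiometric LVDT equation divides the difference of the secondary voltages by their sum so the excitation amplitude cancels. The NTC calibration values below are typical textbook figures for a 10 kΩ thermistor, assumed for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical NTC calibration points: resistance (kOhm) vs temperature (degC).
# The LUT replaces the non-linear exponential NTC law with stored pairs plus
# piecewise-linear interpolation in the signal-conditioning path.
R_LUT = np.array([32.65, 10.00, 3.893, 1.700, 0.8284])  # falls as T rises
T_LUT = np.array([0.0,   25.0,  50.0,  75.0,  100.0])

def ntc_temperature(r_kohm):
    """LUT lookup with linear interpolation; np.interp needs ascending x,
    so we interpolate over the reversed (ascending-resistance) table."""
    return float(np.interp(r_kohm, R_LUT[::-1], T_LUT[::-1]))

def lvdt_ratiometric(va, vb):
    """Ratiometric LVDT reading (Va - Vb) / (Va + Vb): the ratio cancels
    the common excitation amplitude, linearising the displacement output."""
    return (va - vb) / (va + vb)

t = ntc_temperature(10.0)        # on a calibration point -> 25 degC
x = lvdt_ratiometric(3.0, 1.0)   # normalised core displacement
```

In the paper both operations run in analog hardware on the FPAA; the sketch only shows the arithmetic each block performs.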
Advancements in internet accessibility and the affordability of digital image sensors have led to the proliferation of extensive image databases utilized across a multitude of applications. Addressing the semantic gap between low-level attributes and human visual perception has become pivotal in refining Content-Based Image Retrieval (CBIR) methodologies, especially within this context. As this field is intensely researched, numerous efficient algorithms for CBIR systems have surfaced, precipitating significant progress in the artificial intelligence field. In this study, we propose employing a hard voting ensemble approach on features derived from three robust deep learning architectures: Inception, Xception, and MobileNet. This is aimed at bridging the divide between low-level image features and human visual perception. The Euclidean method is adopted to determine the similarity metric between the query image and the features database. The outcome was a noticeable improvement in image retrieval accuracy. We applied our approach to a practical dataset named CBIR 50, which encompasses categories such as mobile phones, cars, cameras, and cats, and thereby validated the effectiveness of our method. Our approach outshone existing CBIR algorithms with superior accuracy (ACC), precision (PREC), recall (REC), and F1-score (F1-S), proving to be a noteworthy addition to the field of CBIR. Our proposed methodology could potentially be extended to various other sectors, including medical imaging and surveillance systems, where image retrieval accuracy is of paramount importance.
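The two mechanisms named above, Euclidean retrieval over a feature database and hard (majority) voting across the three backbones, can be sketched as follows. The feature vectors and labels are toy placeholders, not CBIR 50 data:

```python
import numpy as np
from collections import Counter

def euclidean_retrieve(query_feat, db_feats, k=3):
    """Indices of the k database images closest to the query in feature
    space, ranked by Euclidean distance."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(d)[:k]

def hard_vote(predictions):
    """Hard voting: each backbone (e.g. Inception, Xception, MobileNet)
    casts one class label; the most frequent label wins."""
    return Counter(predictions).most_common(1)[0][0]

# Toy example: three hypothetical model outputs for one query image.
label = hard_vote(["car", "car", "cat"])  # majority -> "car"

# Toy 2-D feature database and query vector.
db = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0]])
nearest = euclidean_retrieve(np.array([0.08, 0.0]), db, k=2)
```

In the real system `db` would hold deep features extracted by the three CNNs from every database image, and the vote would combine their per-query predictions.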
In this paper, a new algorithm called table-based matching is suggested for the localization and orientation of multi-robot (node) systems. The environment is provided with two distance sensors fixed on two beacons at the bottom corners of the frame. These beacons are able to scan the environment, estimate the location and orientation of the visible nodes, and save the results in matrices that are later used to construct a visible-node table. This table is matched against the visible-robot table, which is constructed from the result of each robot scanning its neighbors with a distance sensor that rotates through 360°; at this point, the location and identity of all visible nodes are known. The localization and orientation of invisible robots rely on matching further tables obtained from the information of the visible robots. Several simulations were run on different numbers of nodes to demonstrate the performance of the introduced algorithm.
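The geometric core of such a beacon scan is converting each (range, bearing) measurement into frame coordinates before it is entered into the visible-node table. A minimal sketch; the angle convention and frame size are illustrative assumptions, not the paper's exact setup:

```python
import math

def node_position(beacon_xy, distance, angle_deg):
    """Convert a beacon's (range, bearing) measurement of a visible node
    into Cartesian frame coordinates; the bearing is measured from the
    +x axis of the frame."""
    bx, by = beacon_xy
    a = math.radians(angle_deg)
    return (bx + distance * math.cos(a), by + distance * math.sin(a))

# Beacon at the bottom-left corner of a 10x10 frame sees a node at
# range 2 on a bearing of 90 degrees (straight "up" into the frame).
pos = node_position((0.0, 0.0), 2.0, 90.0)
```

One such entry per visible node, from each of the two corner beacons, is what populates the tables that the matching step then compares.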