Adaptive filtering constitutes one of the core technologies in digital signal processing and finds numerous applications in science and industry. Adaptive filtering techniques are used in a wide range of applications, such as noise cancellation, which is a common requirement in today's telecommunication systems. The LMS algorithm, one of the most efficient criteria for determining the values of the adaptive noise-cancellation coefficients, is very important in communication systems, but LMS adaptive noise cancellation suffers from response degradation and a slow convergence rate under low signal-to-noise ratio (SNR) conditions. This paper presents an adaptive noise-canceller algorithm based on fuzzy logic and neural networks. The major advantages of the proposed system are its ease of implementation and fast convergence. The proposed algorithm is applied to the noise-cancellation problem of a long-distance communication channel. The simulation results show that the proposed model is effective.
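For reference, the baseline that the abstract's fuzzy/neural scheme improves upon can be sketched as a classic LMS adaptive noise canceller (a minimal illustration, not the paper's proposed variant; the step size `mu` and tap count are illustrative):

```python
import numpy as np

def lms_noise_canceller(primary, reference, n_taps=4, mu=0.01):
    """Classic LMS adaptive noise canceller (baseline sketch).

    primary:   signal + noise, d[n]
    reference: noise correlated with the noise in `primary`, x[n]
    Returns the error e[n] = d[n] - y[n], which converges to the clean signal.
    """
    w = np.zeros(n_taps)                           # adaptive filter weights
    e = np.zeros(len(primary))
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # newest sample first
        y = w @ x                                  # filter output: noise estimate
        e[n] = primary[n] - y                      # cleaned sample
        w += 2 * mu * e[n] * x                     # LMS weight update
    return e
```

The slow convergence under low SNR mentioned above stems from this gradient-descent update: the effective step toward the optimum shrinks as the noise estimate degrades.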
Independent Component Analysis (ICA) has been successfully applied to a variety of problems, from speaker identification and image processing to functional magnetic resonance imaging (fMRI) of the brain. In particular, it has been applied to analyze EEG data in order to estimate the sources from the measurements. However, it soon became clear that for EEG signals the solutions found by ICA often depend on the particular ICA algorithm, and that the solutions may not always have a physiologically plausible interpretation. Therefore, nowadays many researchers use ICA largely for artifact detection and removal from EEG, but not for the actual analysis of signals from cortical sources. However, a recent modification of an ICA algorithm has been applied successfully to EEG signals from the resting state. The key idea was to perform a particular preprocessing and then apply a complex-valued ICA algorithm. In this paper, we consider multiple complex-valued ICA algorithms and compare their performance on real-world resting-state EEG data. Such a comparison is problematic because the way the original sources are mixed (the "ground truth") is not known. We address this by developing proper measures to compare the results from multiple algorithms. The comparisons consider the ability of an algorithm to find interesting independent sources, i.e. those related to brain activity rather than to artifact activity. The performance of locating a dipole for each separated independent component is considered in the comparison as well. Our results suggest that when complex-valued ICA algorithms are used on preprocessed signals, the resting-state EEG activity can be analyzed in terms of physiological properties. This reestablishes the suitability of ICA for EEG analysis beyond the detection and removal of artifacts with real-valued ICA applied to the signals in the time domain.
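One simple measure for comparing ICA runs without ground truth, of the kind the abstract alludes to, is to match components across two runs by maximum absolute correlation and report the mean agreement (a hypothetical sketch, not the paper's actual measure; `component_agreement` is an invented name):

```python
import numpy as np

def component_agreement(S1, S2):
    """Match components from two ICA runs by absolute correlation.

    S1, S2: (n_components, n_samples) source estimates from two algorithms.
    Returns the mean |correlation| of a greedy one-to-one matching; values
    near 1 mean the runs recovered essentially the same sources (up to
    permutation and sign, which ICA cannot resolve).
    """
    k = len(S1)
    C = np.abs(np.corrcoef(S1, S2)[:k, k:])   # cross-correlation block
    used, scores = set(), []
    for i in range(k):                        # greedy best match per row
        j = max((j for j in range(k) if j not in used), key=lambda j: C[i, j])
        used.add(j)
        scores.append(C[i, j])
    return float(np.mean(scores))
```

Permutation and sign ambiguity are inherent to ICA, which is why the matching uses absolute correlation rather than raw correlation.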
Multiplication-accumulation (MAC) operation plays a crucial role in digital signal processing (DSP) applications, such as image convolution and filters, especially when performed on floating-point numbers to achieve a high level of accuracy. The performance of a MAC module relies heavily on the performance of the multiplier it employs. This work offers a distinctive and efficient floating-point Vedic multiplier (VM), called adjusted-VM (AVM), to be utilized in the MAC module to meet modern DSP demands. The proposed AVM is based on the Urdhva-Tiryakbhyam-Sutra (UT-Sutra) approach and utilizes an enhanced design for the Brent-Kung carry-select adder (EBK-CSLA) to generate the final product. A (6*6)-bit AVM is designed first; it is then extended to a (12*12)-bit AVM, which in turn is used to design a (24*24)-bit AVM. Moreover, pipelining is used to optimize the speed of the offered (24*24)-bit AVM design. The proposed (24*24)-bit AVM can be used to achieve efficient multiplication of the mantissa part in a binary single-precision (BSP) floating-point MAC module. The proposed AVM architectures are modeled in VHDL, simulated, and synthesized with the Xilinx-ISE14.7 tool using several FPGA families. The implementation results demonstrate a noticeable reduction in delay and area occupation of 33.16% and 42.42%, respectively, compared with the most recent existing unpipelined design, and a reduction in delay of 44.78% compared with the existing pipelined design.
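The Urdhva-Tiryakbhyam ("vertically and crosswise") scheme underlying the AVM can be modeled in software: every output column k sums all digit products a[i]*b[j] with i + j == k, and the carries are resolved afterwards; in hardware all columns are formed in parallel. A behavioral sketch (the hardware design itself is in VHDL, so this is only a reference model):

```python
def ut_multiply(a_digits, b_digits, base=2):
    """Urdhva-Tiryakbhyam multiplication reference model.

    a_digits, b_digits: least-significant-digit-first lists of equal length n.
    Returns 2n output digits, least-significant first.
    """
    n = len(a_digits)
    cols = [0] * (2 * n)
    for i in range(n):                # vertical and crosswise products:
        for j in range(n):            # column k collects all i + j == k
            cols[i + j] += a_digits[i] * b_digits[j]
    carry, out = 0, []
    for c in cols:                    # ripple the column carries
        c += carry
        out.append(c % base)
        carry = c // base
    return out
```

In the actual AVM, the final carry resolution is what the enhanced Brent-Kung carry-select adder (EBK-CSLA) accelerates.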
The Fast Fourier Transform (FFT) and Inverse FFT (IFFT) are used in most digital signal processing applications, and real-time implementation of the FFT/IFFT is required in many of them. In this paper, an FPGA reconfigurable fixed-point implementation of the FFT/IFFT is presented. VHDL code is written manually to model the proposed FFT/IFFT processor. Two CORDIC-based FFT/IFFT processors, based on radix-2 and radix-4 architectures, are designed; each has one butterfly processing unit. An efficient in-place memory assignment and addressing scheme for the shared memory of the FFT/IFFT processors is proposed to reduce the complexity of the memory scheme. With the "in-place" strategy, the outputs of each butterfly operation are stored back to the same memory locations as its inputs. Because a DIF FFT is used, the outputs appear in bit-reversed order. To solve this issue, the block RAM used for storing the input samples is reused as a reordering unit, reducing the hardware cost of the proposed processor. A Spartan-3E FPGA of 500,000 gates is employed to synthesize and implement the proposed architecture. The CORDIC-based processors save 40% of power consumption compared with the Xilinx logic-core architectures of System Generator.
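The in-place DIF strategy and the final bit-reversal reorder described above can be modeled in software: each butterfly writes its two outputs back over its two inputs, and a reorder pass at the end plays the role of reusing the input RAM as a reordering unit (a behavioral sketch only; the processor itself is fixed-point VHDL with CORDIC rotations instead of the complex multiplies used here):

```python
import numpy as np

def dif_fft_inplace(x):
    """Radix-2 DIF FFT software model: butterfly outputs overwrite their
    inputs (the in-place shared-memory strategy), then a bit-reversal pass
    restores natural order (the RAM-reuse reordering step)."""
    x = np.asarray(x, dtype=complex).copy()
    n = len(x)
    span = n
    while span > 1:
        half = span // 2
        w = np.exp(-2j * np.pi * np.arange(half) / span)  # twiddle factors
        for start in range(0, n, span):
            a = x[start:start + half]
            b = x[start + half:start + span]
            a[:], b[:] = a + b, (a - b) * w               # in-place butterfly
        span = half
    # outputs are now in bit-reversed order; reorder in place
    bits = n.bit_length() - 1
    rev = [int(format(i, f'0{bits}b')[::-1], 2) for i in range(n)]
    return x[rev]
```

The memory saving comes from the overwrite in the butterfly line: no second buffer of size N is ever allocated for intermediate stages.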
In the last couple of decades, several successful steganography approaches have been proposed. The Least Significant Bit (LSB) insertion technique has been widely deployed due to its simplicity of implementation and reasonable payload capacity. The most important design parameter in LSB techniques is the embedding-location selection criterion. In this work, an LSB insertion technique is proposed that selects the embedding locations based on the weights of coefficients in the Cosine domain (2D DCT). The cover image is transformed to the Cosine domain (by 2D DCT), and a predefined number of coefficients is selected to embed the secret message (which is in binary form). The weights are the outputs of an adaptive algorithm that analyzes the cover image in two domains (Haar and Cosine); coefficients in the Cosine transform domain with small weights are selected. The proposed approach is tested with samples from BOSSbase and a custom-built database. Two metrics are utilized to show the effectiveness of the technique, namely Root Mean Squared Error (RMSE) and Peak Signal-to-Noise Ratio (PSNR). In addition, human visual inspection of the resulting image is also considered. As shown in the results, the proposed approach performs better, in terms of RMSE and PSNR, than commonly employed truncation- and energy-based methods.
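The core mechanism, LSB insertion into selected 2D-DCT coefficients, can be sketched as follows. This is a simplified illustration: the selection here uses small coefficient magnitude directly, whereas the paper derives the weights from an adaptive Haar/Cosine two-domain analysis that is not reproduced. Function names are invented for the sketch:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_lsb_dct(cover, bits):
    """Embed message bits into the LSBs of small-magnitude 2D-DCT
    coefficients (simplified selection rule)."""
    C = np.round(dctn(cover.astype(float), norm='ortho')).astype(int)
    flat = C.ravel()                       # view: edits write through to C
    order = np.argsort(np.abs(flat))       # smallest-magnitude coefficients first
    slots = [i for i in order if i != 0][:len(bits)]  # skip the DC coefficient
    for i, b in zip(slots, bits):
        flat[i] = (flat[i] & ~1) | b       # replace the LSB with a message bit
    stego = idctn(C.astype(float), norm='ortho')
    return stego, slots

def extract_lsb_dct(stego, slots):
    """Recover the message bits from the stego image's DCT coefficients."""
    C = np.round(dctn(stego.astype(float), norm='ortho')).astype(int)
    return [int(C.ravel()[i]) & 1 for i in slots]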
In this paper, a robust wavelet based watermarking scheme has been proposed for digital audio. A single bit is embedded in the approximation part of each frame. The watermark bits are embedded in two subsets of indexes randomly generated by using two keys for security purpose. The embedding process is done in adaptively fashion according to the mean of each approximation part. The detection of watermark does not depend on the original audio. To measure the robustness of the algorithm, different signal processing operations have been applied on the watermarked audio. Several experimental results have been conducted to illustrate the robustness and efficiency of the proposed watermarked audio scheme.
Electrical issues such as old wires and faulty connections are the most common causes of arc faults. Arc faults cause electrical fires by generating high temperatures and discharging molten metal. Every year, such fires cause a considerable deal of destruction and loss. This paper proposes a new method for detecting residential series and parallel arc faults. A simulation model for the arc is employed to simulate the arc faults in series and parallel circuits. The fault features are then retrieved using a signal processing approach called Discrete Wavelet Transform (DWT) designed in MATLAB/Simulink based on the fault detection algorithm. Then db2 and one level were found appropriate mother and level of wavelet transform for extracting arc-fault features. MATLAB Simulink was used to build and simulate the arc-fault model.
Brain machine interface provides a communication channel between the human brain and an external device. Brain interfaces are studied to provide rehabilitation to patients with neurodegenerative diseases; such patients loose all communication pathways except for their sensory and cognitive functions. One of the possible rehabilitation methods for these patients is to provide a brain machine interface (BMI) for communication; the BMI uses the electrical activity of the brain detected by scalp EEG electrodes. Classification of EEG signals extracted during mental tasks is a technique for designing a BMI. In this paper a BMI design using five mental tasks from two subjects were studied, a combination of two tasks is studied per subject. An Elman recurrent neural network is proposed for classification of EEG signals. Two feature extraction algorithms using overlapped and non overlapped signal segments are analyzed. Principal component analysis is used for extracting features from the EEG signal segments. Classification performance of overlapping EEG signal segments is observed to be better in terms of average classification with a range of 78.5% to 100%, while the non overlapping EEG signal segments show better classification in terms of maximum classifications.
Arc problems are most commonly caused by electrical difficulties such as worn cables and improper connections. Electrical fires are caused by arc faults, which generate tremendous temperatures and discharge molten metal. Every year, flames of this nature inflict a great lot of devastation and loss. A novel approach for identifying residential series and parallel arc faults is presented in this study. To begin, arc faults in series and parallel are simulated using a suitable simulation arc model. The fault characteristics are then recovered using a signal processing technique based on the fault detection technique called Discrete Wavelet Transform (DWT), which is built in MATLAB/Simulink. Then came db2, and one level was discovered for obtaining arc-fault features. The suitable mother and level of wavelet transform should be used, and try to compare results with conventional methods (FFT-Fast Fourier Transform). MATLAB was used to build and simulate arc-fault models with these techniques.
Recently, chaos theory has been widely used in multimedia and digital communications due to its unique properties that can enhance security, data compression, and signal processing. It plays a significant role in securing digital images and protecting sensitive visual information from unauthorized access, tampering, and interception. In this regard, chaotic signals are used in image encryption to empower the security; that’s because chaotic systems are characterized by their sensitivity to initial conditions, and their unpredictable and seemingly random behavior. In particular, hyper-chaotic systems involve multiple chaotic systems interacting with each other. These systems can introduce more randomness and complexity, leading to stronger encryption techniques. In this paper, Hyper-chaotic Lorenz system is considered to design robust image encryption/ decryption system based on master-slave synchronization. Firstly, the rich dynamic characteristics of this system is studied using analytical and numerical nonlinear analysis tools. Next, the image secure system has been implemented through Field-Programmable Gate Arrays (FPGAs) Zedboard Zynq xc7z020-1clg484 to verify the image encryption/decryption directly on programmable hardware Kit. Numerical simulations, hardware implementation, and cryptanalysis tools are conducted to validate the effectiveness and robustness of the proposed system.
The futuristic age requires progress in handwork or even sub-machine dependency and Brain-Computer Interface (BCI) provides the necessary BCI procession. As the article suggests, it is a pathway between the signals created by a human brain thinking and the computer, which can translate the signal transmitted into action. BCI-processed brain activity is typically measured using EEG. Throughout this article, further intend to provide an available and up-to-date review of EEG-based BCI, concentrating on its technical aspects. In specific, we present several essential neuroscience backgrounds that describe well how to build an EEG-based BCI, including evaluating which signal processing, software, and hardware techniques to use. Individuals discuss Brain-Computer Interface programs, demonstrate some existing device shortcomings, and propose some eld’s viewpoints.