Self-driving cars have become a fundamental research subject in recent years; the ultimate goal is to completely replace the human driver with automated systems. Meanwhile, deep learning techniques have demonstrated strong performance and effectiveness in several areas, and their strength has been investigated extensively in self-driving applications, including object detection, localization, and activity recognition. This paper presents a deep learning approach that combines the benefits of a convolutional neural network (CNN) with dense (fully connected) layers. The approach learns from features extracted with linear discriminant analysis (LDA) and expanded with statistical feature expansion techniques, namely the standard deviation, minimum, maximum, mode, variance, and mean. The presented approach has proven successful, achieving 100% accuracy on both the training and testing data.
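For illustration, the pipeline described above can be sketched in Python as follows. This is a minimal sketch assuming scikit-learn, SciPy, and Keras; the synthetic data, layer sizes, and hyperparameters are illustrative placeholders, not the paper's actual configuration.

```python
# Minimal sketch of the LDA + feature-expansion + CNN/Dense pipeline.
# All data and hyperparameters below are placeholder assumptions.
import numpy as np
from scipy import stats
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from tensorflow import keras
from tensorflow.keras import layers

def expand_features(X):
    """Append the per-sample statistics named in the paper:
    standard deviation, min, max, mode, variance, and mean."""
    mode = stats.mode(X, axis=1, keepdims=True).mode  # SciPy >= 1.9
    extras = [X.std(axis=1, keepdims=True), X.min(axis=1, keepdims=True),
              X.max(axis=1, keepdims=True), mode,
              X.var(axis=1, keepdims=True), X.mean(axis=1, keepdims=True)]
    return np.hstack([X] + extras)

# Placeholder data; in the paper these would be the extracted dataset features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = rng.integers(0, 5, size=500)

# LDA projects the expanded features onto at most (n_classes - 1) components.
X_exp = expand_features(X)
X_lda = LinearDiscriminantAnalysis().fit_transform(X_exp, y)

# CNN combined with dense layers, applied to the 1-D LDA feature vectors.
model = keras.Sequential([
    layers.Input(shape=(X_lda.shape[1], 1)),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(np.unique(y)), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_lda[..., np.newaxis], y, epochs=20, validation_split=0.2)
```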
In recent years, self-driving cars and the reduction of accident casualties have drawn considerable attention. Although it is crucial to increase driver awareness on the road, autonomous vehicles can emulate human driving and deliver improved levels of road safety. Artificial intelligence (AI) technologies are often employed for this purpose. However, deep learning, a subset of AI, is prone to numerous errors, exposed to a wide range of threats, and must handle vast amounts of data, which imposes high-performance hardware requirements. This study proposes a deep learning model for object recognition that describes the data with extracted features rather than raw images. Our model uses the COCO dataset as its training foundation, with features extracted using the principal component analysis (PCA) method. The current results demonstrate the efficacy and precision of our model, with an accuracy of 99.96%. Furthermore, the performance indices, i.e., recall, precision, and F1-score, reach approximately 1 for most of the COCO classes in the training phase and show promising results in the testing phase.
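The PCA-based variant can be sketched in the same way. Again, this is a hedged sketch assuming scikit-learn and Keras; the random placeholder data stands in for features derived from COCO images, and the dimensions and classifier architecture are illustrative assumptions rather than the authors' setup.

```python
# Minimal sketch of PCA feature extraction feeding a classifier, with
# per-class precision/recall/F1 reporting. Placeholder data throughout.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder for flattened image descriptors; in the paper these would
# be derived from the COCO dataset (80 object classes).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4096))
y = rng.integers(0, 80, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# PCA keeps the top principal components, so the model learns from
# compact feature vectors rather than raw images.
pca = PCA(n_components=128)
X_tr_p = pca.fit_transform(X_tr)
X_te_p = pca.transform(X_te)

model = keras.Sequential([
    layers.Input(shape=(128,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(80, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_tr_p, y_tr, epochs=10, validation_split=0.1)

# Per-class precision, recall, and F1-score, the indices cited above.
y_pred = model.predict(X_te_p).argmax(axis=1)
print(classification_report(y_te, y_pred, zero_division=0))
```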