
      Fig. 2. Flow chart of our deep model

max. Our model also contains one dropout layer with a rate of 0.6 and one flatten layer. The Adam optimizer is used to update the weights of our deep network; the model is summarized in Figure 2.
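As a purely illustrative sketch (not the authors' published code), the layers mentioned above could be expressed in a TensorFlow/Keras model tail as follows; the convolutional backbone, input size, and class count are placeholder assumptions:

```python
# Illustrative sketch only (assumes TensorFlow/Keras). The backbone, input
# shape, and number of classes are placeholders, not the authors' exact model.
from tensorflow.keras import layers, models

num_classes = 5  # assumed: biker, car, pedestrian, traffic light, truck

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),  # placeholder backbone
    layers.MaxPooling2D(),
    layers.Flatten(),                                 # flatten layer mentioned in the text
    layers.Dropout(0.6),                              # dropout with a rate of 0.6
    layers.Dense(num_classes, activation="softmax"),  # softmax output
])

# Adam optimizer updates the network weights during training.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```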

                      V. RESULTS

   In this study, 720,984 features were extracted from 2,687 images, which were obtained after applying image augmentation and preprocessing operations to the original 6,499 images downloaded from the Udacity website.
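The paper does not list its augmentation code; as a hedged illustration of such a preprocessing step, a common Keras-based augmentation pass might look like the following, with the image batch, transforms, and sizes all assumed:

```python
# Illustrative only: a typical Keras augmentation pass. The placeholder arrays
# below stand in for the downloaded Udacity images; the chosen transforms are
# assumptions, not the authors' documented pipeline.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

images = np.random.rand(8, 64, 64, 3).astype("float32")   # placeholder image batch
labels = np.eye(5)[np.random.randint(0, 5, size=8)]       # placeholder one-hot labels

datagen = ImageDataGenerator(
    rotation_range=15,       # small random rotations
    width_shift_range=0.1,   # horizontal shifts
    height_shift_range=0.1,  # vertical shifts
    horizontal_flip=True,    # mirror images
)

# Draw one augmented batch from the generator.
augmented_batch, batch_labels = next(datagen.flow(images, labels, batch_size=8))
```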

    Our deep model has 3,305,095 parameters and is trained for 100 epochs. The main advantage of the proposed model is its reduced computational time, as the training time for one epoch is between 10 and 15 seconds, as shown in Figure 3.

         Fig. 3. Computational time for epoch training
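One way such per-epoch timing could be recorded (an assumption for illustration, not necessarily the authors' procedure) is with a small Keras callback:

```python
# Illustrative sketch: record wall-clock time for each training epoch.
import time
import tensorflow as tf

class EpochTimer(tf.keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        self._start = time.time()  # note when the epoch starts

    def on_epoch_end(self, epoch, logs=None):
        print(f"Epoch {epoch + 1} took {time.time() - self._start:.1f} s")

# Usage (assumes `model`, `x_train`, and `y_train` are already defined):
# model.fit(x_train, y_train, epochs=100, callbacks=[EpochTimer()])
```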

    Our model learns based on features: 70% of the features were used for training the model and the remaining 30% were used for testing it. The model was evaluated in terms of accuracy, recall, precision, and F1 score, and it shows promising results of 100% accuracy in both the training and testing phases, with a value of 1 for the other metrics for each class, as shown in Figure 4.

         Fig. 4. Values of evaluating parameters
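The split and the reported metrics could, for example, be computed with scikit-learn; the sketch below is an assumption-laden illustration (placeholder features, labels, and a stand-in classifier), not the authors' code:

```python
# Illustrative sketch only (assumes scikit-learn). The feature matrix, labels,
# and classifier are stand-ins for the paper's extracted features and trained
# deep model, so the snippet runs on its own.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
features = rng.random((2687, 20))         # placeholder feature matrix
labels = rng.integers(0, 5, size=2687)    # placeholder labels for 5 classes

# 70% of the features for training, the remaining 30% for testing.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.30, random_state=42)

clf = LogisticRegression(max_iter=200).fit(X_train, y_train)  # stand-in model
y_pred = clf.predict(X_test)

print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred, average="macro", zero_division=0))
print("Recall   :", recall_score(y_test, y_pred, average="macro", zero_division=0))
print("F1 score :", f1_score(y_test, y_pred, average="macro", zero_division=0))
```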
   Furthermore, our model is also executed in real-time mode; although a low-resolution camera and noisy, uncertain YouTube data were used, it shows promising results. Samples of the execution results are shown in Figure 5(a-f).

   A comparison between our model and some models provided by previous related works, in terms of accuracy, datasets, and procedures, is shown in Table I.

                      VI. CONCLUSION

   In this work, a comprehensive deep learning model has been proposed for recognizing and identifying bikers, cars, pedestrians, traffic lights, and trucks, aiming to develop autonomous driving technologies. Our approach shows promising results in terms of accuracy, reduced training time, and lower computational resource usage. The given model is a deep 37-layer network architecture, which has been adopted after many extensive trials. The learning process was accom-