
                                          Fig. 1. Framework of the proposed method

[23], [24], [25], the LDA can be computed using the following equations:
\operatorname{trace}\left( (X^{T} S_{W} X)^{-1} (X^{T} S_{b} X) \right)    (5)

S_{b} = \frac{1}{n} \sum_{i=1}^{m} K_{i} (c_{i} - c)(c_{i} - c)^{T}    (6)

S_{W} = \frac{1}{m} \sum_{i=1}^{m} \sum_{x \in X_{i}} (x - c_{i})(x - c_{i})^{T}    (7)
Where
    X is the sample of our data.
    S_W is the within-class scatter matrix.
    S_b is the between-class scatter matrix.
    c is the overall mean of the data and c_i is the mean of class i.
    K_i is the number of samples in class i, m is the number of distinct classes, and n is the total number of samples.
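As a concrete illustration, the sketch below accumulates S_b and S_W of Eqs. (6) and (7) directly from their definitions. It is not the authors' implementation; the 0..m-1 label encoding and the plain-vector matrix type are assumptions made here for self-containment.

// Minimal sketch (not the authors' code): computing the LDA scatter
// matrices S_b (Eq. 6) and S_W (Eq. 7) for d-dimensional samples.
// Class labels are assumed to be encoded as 0..m-1.
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;              // one d-dimensional sample
using Mat = std::vector<std::vector<double>>; // d x d matrix

// Accumulate a weighted outer product: M += w * (a - b)(a - b)^T.
static void addOuter(Mat& M, const Vec& a, const Vec& b, double w) {
    const std::size_t d = a.size();
    for (std::size_t r = 0; r < d; ++r)
        for (std::size_t c = 0; c < d; ++c)
            M[r][c] += w * (a[r] - b[r]) * (a[c] - b[c]);
}

void scatterMatrices(const std::vector<Vec>& X, const std::vector<int>& y,
                     int m, Mat& Sb, Mat& Sw) {
    const std::size_t n = X.size(), d = X[0].size();
    // Per-class means c_i, class sizes K_i, and the overall mean c.
    std::vector<Vec> ci(m, Vec(d, 0.0));
    std::vector<double> Ki(m, 0.0);
    Vec c(d, 0.0);
    for (std::size_t j = 0; j < n; ++j) {
        Ki[y[j]] += 1.0;
        for (std::size_t k = 0; k < d; ++k) {
            ci[y[j]][k] += X[j][k];
            c[k] += X[j][k];
        }
    }
    for (int i = 0; i < m; ++i)
        for (std::size_t k = 0; k < d; ++k) ci[i][k] /= Ki[i];
    for (std::size_t k = 0; k < d; ++k) c[k] /= double(n);

    Sb.assign(d, Vec(d, 0.0));
    Sw.assign(d, Vec(d, 0.0));
    // Eq. (6): S_b = (1/n) * sum_i K_i (c_i - c)(c_i - c)^T
    for (int i = 0; i < m; ++i)
        addOuter(Sb, ci[i], c, Ki[i] / double(n));
    // Eq. (7): S_W = (1/m) * sum_i sum_{x in X_i} (x - c_i)(x - c_i)^T
    for (std::size_t j = 0; j < n; ++j)
        addOuter(Sw, X[j], ci[y[j]], 1.0 / double(m));
}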
2) Features Expansion Techniques
   Features extracted from LDA are expanded using statistical feature techniques to maximize the number of deep model probabilities, so that our deep model will be more efficient and robust. The statistical features used in this stage are the mean, max, min, standard deviation, variance, and mode (Mod) operations [26]; these operations are applied as a sliding window over the LDA features using C++ functions.
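The paper does not specify the window length or step, so both are assumptions in this minimal sketch of the expansion step; for each window it emits the six statistics named above:

// Sliding-window feature expansion sketch: for each window of `win`
// consecutive LDA features, emit mean, max, min, standard deviation,
// variance, and mode. Window size and step are assumed defaults.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <map>
#include <vector>

std::vector<double> expandFeatures(const std::vector<double>& lda,
                                   std::size_t win = 8, std::size_t step = 1) {
    std::vector<double> out;
    for (std::size_t s = 0; s + win <= lda.size(); s += step) {
        double sum = 0.0, mn = lda[s], mx = lda[s];
        std::map<double, int> counts;            // frequency table for the mode
        for (std::size_t j = s; j < s + win; ++j) {
            sum += lda[j];
            mn = std::min(mn, lda[j]);
            mx = std::max(mx, lda[j]);
            ++counts[lda[j]];
        }
        const double mean = sum / double(win);
        double var = 0.0;
        for (std::size_t j = s; j < s + win; ++j)
            var += (lda[j] - mean) * (lda[j] - mean);
        var /= double(win);                      // population variance
        double mode = mn;
        int best = 0;
        for (const auto& kv : counts)            // most frequent value wins
            if (kv.second > best) { best = kv.second; mode = kv.first; }
        out.insert(out.end(), {mean, mx, mn, std::sqrt(var), var, mode});
    }
    return out;
}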
IV. DEEP LEARNING MODEL

   Our deep model is a 37-layer network comprising 11 deep layers formed by one-dimensional convolutional layers, 6 fully connected layers formed by dense layers, and the remaining layers consist of 9 max pooling layers, 1 normalization layer represented by a flatten layer, 10 leaky rectified linear unit (leaky ReLU) activation layers, and one dropout layer.

   The 720,984 features are divided into receptive fields that feed into a convolutional layer. The network has an input size of 24 features, and all the layers are stacked one after the other. The convolutional layers are based on filters of different counts, namely 16, 32, 64, 128, 256, 512, 1024, 1024, 512, 512, and 50, respectively, with a kernel size of 3, a stride of 1, and "same" padding. The max pooling layers have a stride of 1, a size of 2, and "same" padding.

   The linear collectors, or dense layers, have different kernel sizes of 128, 512, 1024, 32, 16, and 5, respectively, and different activation functions, namely the linear and softmax functions.
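To make the stacking concrete, the sketch below tabulates the layer widths quoted above and estimates the trainable parameter count. Only the filter counts, kernel size 3, dense sizes, and the 24-feature input come from the text; the single input channel and the length-preserving effect of stride-1 "same"-padded pooling are assumptions.

// Architecture sketch: enumerate the quoted Conv1D and dense widths
// and estimate trainable parameters under the stated assumptions.
#include <cstdio>
#include <vector>

int main() {
    const int kernel = 3;
    std::vector<int> conv = {16, 32, 64, 128, 256, 512,
                             1024, 1024, 512, 512, 50};   // 11 Conv1D layers
    std::vector<int> dense = {128, 512, 1024, 32, 16, 5}; // 6 dense layers

    long long params = 0;
    int channels = 1;                  // assumed single input channel
    for (int f : conv) {               // Conv1D: k*in*out weights + out biases
        params += 1LL * kernel * channels * f + f;
        channels = f;
    }
    // Stride 1 with "same" padding (conv and pooling) keeps the
    // sequence length at 24; flatten before the dense stack.
    long long width = 24LL * channels;
    for (int u : dense) {              // Dense: in*out weights + out biases
        params += width * u + u;
        width = u;
    }
    std::printf("estimated trainable parameters: %lld\n", params);
    return 0;
}

The final dense layer of 5 units with a softmax activation is consistent with a 5-class output, which is why it closes the stack in this sketch.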