Page 91 - 2023-Vol19-Issue2
87 | Swide & Marhoon
a dataset on power energy consumption to show how well the suggested model performed in comparison to rival benchmarks. The main goal of this effort is to implement a reliable power energy consumption prediction system with low prediction error and high accuracy. The suggested model combines LSTM and RNN, two deep learning models that are effective at forecasting short-term energy use. Table I lists the parameters for the deep learning recurrent neural network prediction algorithm. The RF machine learning model is also well suited to forecasting energy use. Using GridSearchCV, we selected the ideal parameter for this model to obtain the best maximum depth, as shown in Table II. In this study, the Mean Square Error (MSE) and the coefficient of determination (R2) are the statistical measures chosen to evaluate the accuracy of the deep learning recurrent neural network predictive model against two state-of-the-art approaches to residential energy consumption. Equations (7) and (8) define these statistical measurements [23].

TABLE II.
RANDOM FOREST MODEL EVALUATION METRICS

Method  Parameters     Time Resolution  MSE   R2 score
RF      Max depth=10   Hourly           3.33  0.913
        Max depth=12                    2.3   0.946
        Max depth=18                    0.8   0.975
        Max depth=25                    0.52  0.986
        Max depth=30                    0.51  0.987
        Max depth=35                    0.51  0.987
        Max depth=40                    0.51  0.987
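The max-depth search described above can be sketched briefly. This is a minimal illustration assuming scikit-learn and a synthetic stand-in for the hourly consumption data (the paper's actual features and targets are not shown here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the hourly consumption dataset (illustrative only).
rng = np.random.default_rng(0)
X = rng.random((200, 5))                          # e.g. lagged-load / weather features
y = X @ rng.random(5) + 0.1 * rng.standard_normal(200)

# Search the same max-depth grid reported in Table II,
# scoring by (negative) MSE as in the paper's evaluation.
param_grid = {"max_depth": [10, 12, 18, 25, 30, 35, 40]}
search = GridSearchCV(
    RandomForestRegressor(n_estimators=50, random_state=0),
    param_grid,
    scoring="neg_mean_squared_error",
    cv=3,
)
search.fit(X, y)
best_depth = search.best_params_["max_depth"]
```

On the real data, the best depth found this way corresponds to the row of Table II with the lowest MSE.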
MSE = \frac{1}{N} \sum_{i=1}^{N} (X_i - Y_i)^2    (7)

R^2 = 1 - \frac{\sum_{i=1}^{N} (X_i - Y_i^{pred})^2}{\sum_{i=1}^{N} (X_i - Y_i^{mean})^2}    (8)

where N is the number of data points in the test dataset, X_i is the actual energy consumption at time step i, and Y_i is the predicted energy consumption.

Fig. 5. Loss curve of LSTM model
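Equations (7) and (8) can be computed directly from the actual and predicted series. A small NumPy sketch, with hypothetical helper names mse and r2_score chosen here for illustration:

```python
import numpy as np

def mse(actual, predicted):
    # Equation (7): mean of squared prediction errors over N test points.
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean((actual - predicted) ** 2)

def r2_score(actual, predicted):
    # Equation (8): 1 minus residual sum of squares over total sum of squares.
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A perfect forecast gives MSE = 0 and R2 = 1; R2 falls toward zero (or below) as predictions approach the naive mean predictor.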
TABLE I.
DEEP LEARNING MODELS EVALUATION METRICS

Method  Parameters                  Time resolution  MSE    R2 Score
RNN     Activation function="tanh"  Hourly           0.012  0.94
        Alpha=0.7
        Epochs=10
        Batch size=70
LSTM    Activation function="tanh"  Hourly           0.008  0.95
        Alpha=0.5
        Epochs=10
        Batch size=70

Fig. 6. Loss curve of RNN model

As shown in Figs. 5 and 6, the blue line reflects the training loss, while the orange line represents the validation loss. The total number of epochs is shown on the X-axis, and the training and validation losses for each epoch are shown on the Y-axis. As shown in Fig. 6, the maximum validation loss is 0.001 and the highest training loss is 0.012, both of which decrease as the number of epochs rises. According to Fig. 5, the highest training loss is 0.008 and the highest validation loss