1 Introduction
2 Deep learning model
2.1 Data
2.2 Recurrent neural network
3 Model training
4 Experiments
- CPU: AMD Ryzen Threadripper 2950X (16 cores / 32 threads)
- GPU: 2× NVIDIA RTX 2080, 8 GB
- RAM: 64 GB
Architecture | Accuracy (%) |
---|---|
Bi-LSTM | 99.14 |
LSTM | 99.12 |
ANN | 88.11 |
GRU-RNN | 99.081 |
4.1 Finding the best training algorithm
- Adamax: the network learned to predict values with an accuracy of 96.29%;
- Adam: the network reached 97.78% accuracy; however, it struggled to correctly predict the highest and the lowest values;
- RMSprop: results were better than Adam's, reaching 98.60%; however, the network had the same trouble with the highest and the lowest values as Adam;
- NAdam: the predictor reached over 99.12% accuracy, making it the best-fitting algorithm for our task.
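The optimizers above differ mainly in how they combine gradient momentum with per-parameter scaling; NAdam is Adam with a Nesterov-style look-ahead on the first moment. The NumPy sketch below illustrates a single NAdam update step — the hyperparameters and the toy objective are illustrative only, not the settings used in our experiments:

```python
import numpy as np

def nadam_step(theta, grad, m, v, t, lr=0.002, beta1=0.9, beta2=0.999, eps=1e-8):
    """One NAdam update: Adam's bias-corrected moment estimates,
    with a Nesterov look-ahead applied to the first moment."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (scale)
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    # Nesterov look-ahead: mix corrected momentum with the current gradient
    m_nes = beta1 * m_hat + (1 - beta1) * grad / (1 - beta1 ** t)
    theta = theta - lr * m_nes / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Sanity check: minimise f(x) = x^2 starting from x = 5.0
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 501):
    x, m, v = nadam_step(x, 2 * x, m, v, t)   # gradient of x^2 is 2x
```

With Adam, the update would use `m_hat` directly in place of `m_nes`; the look-ahead term is the only difference.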
4.2 Searching for the best time step
Time Step | Accuracy (%) |
---|---|
10 | 96.41 |
11 | 89.27 |
12 | 97.26 |
13 | 90.48 |
14 | 49.41 |
15 | 91.22 |
16 | 97.59 |
17 | 89.94 |
18 | 97.56 |
19 | 49.44 |
20 | 98.29 |
21 | 93.07 |
22 | 98.20 |
23 | 50.50 |
24 | 50.44 |
25 | 91.71 |
26 | 98.17 |
27 | 50.49 |
28 | 50.21 |
29 | 50.28 |
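The time step searched above is the length of the input window fed to the network at each prediction. A minimal NumPy sketch of turning a 1-D series into (window, next value) training pairs — the function name and toy series are ours, for illustration:

```python
import numpy as np

def make_windows(series, time_step):
    """Split a 1-D series into overlapping windows of length `time_step`,
    each paired with the value that immediately follows it."""
    X, y = [], []
    for i in range(len(series) - time_step):
        X.append(series[i:i + time_step])   # input window
        y.append(series[i + time_step])     # value to predict
    return np.array(X), np.array(y)

series = np.arange(100, dtype=float)
X, y = make_windows(series, time_step=20)
# X has shape (80, 20); y has shape (80,)
```

Larger time steps give the network more context per sample but yield fewer training pairs from the same series.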
4.3 Computing overall Accuracy based on error margin experiments
Error margin | Accuracy (%) |
---|---|
0.05 | 75.66 |
0.1 | 99.12 |
0.15 | 99.91 |
0.2 | 99.99 |
0.3 | 100.0 |
0.4 | 100.0 |
0.5 | 100.0 |
0.6 | 100.0 |
0.7 | 100.0 |
0.8 | 100.0 |
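Overall accuracy here counts a prediction as correct when it falls within the error margin of the true value. A sketch of that metric, assuming normalised targets (the toy values below are illustrative):

```python
import numpy as np

def margin_accuracy(y_true, y_pred, margin):
    """Percentage of predictions within `margin` of the true value."""
    hits = np.abs(y_true - y_pred) <= margin
    return 100.0 * hits.mean()

y_true = np.array([0.50, 0.60, 0.70, 0.80])
y_pred = np.array([0.52, 0.75, 0.69, 0.81])
acc = margin_accuracy(y_true, y_pred, margin=0.05)
# → 75.0: three of the four predictions lie within 0.05 of the target
```

Widening the margin can only increase this score, which is why the table saturates at 100% from a margin of 0.3 onwards.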
4.4 Training times
Algorithm | Accuracy | Training Time |
---|---|---|
Adamax | 97.29% | 56 min 3.90 s |
Adam | 97.78% | 56 min 7.82 s |
RMSprop | 98.60% | 48 min 32.06 s |
NAdam | 99.12% | 48 min 35.86 s |
4.5 Non-machine learning prediction methods
Solution | Method | MAE | MSE |
---|---|---|---|
Ours | Recurrent Neural Network with Long Short-Term Memory neurons | 0.019 | 0.00056 |
Camero et al. [1] | Recurrent Neural Network based on the Mean Absolute Error random sampling | 0.036 | N/A |
Qin et al. [14] | Dual-stage attention-based Recurrent Neural Network | 0.19 | 0.014 |
Sahoo et al. [15] | Recurrent Neural Network with Long Short-Term Memory neurons | 0.361 | N/A |
Liu et al. [9] | Dual-Stage Two-Phase attention-based Recurrent Neural Network | 0.0661 | N/A |
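The MAE and MSE figures in the comparison follow the standard definitions, sketched below with toy values for illustration:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of the prediction errors."""
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    """Mean Squared Error: average of the squared errors,
    penalising large deviations more heavily than MAE does."""
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
# errors are 0.1, 0.1, 0.2 → MAE ≈ 0.133, MSE = 0.02
```

Because MSE squares the errors, two models with the same MAE can differ sharply in MSE when one makes occasional large mistakes — which is why we report both where available.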