1 Introduction

- A novel deep learning model, built on the Temporal Fusion Transformer (TFT), has been developed for forecasting in aquaponics environments.
- The TFT network improves hourly forecasting performance for the aquaponics environment over previous works in terms of Mean Absolute Error (MAE) and Explained Variance.
- The high forecasting accuracy opens opportunities for automated aquaponics processes.
2 Related work
Study | Dataset | Method | Metrics | Gaps |
---|---|---|---|---|
Proposed approach | Sensor-based aquaponics | TFT | MAE, RMSE | Multi-variate time-series analysis with multi-head attention-based TFT architecture |
Arvind et al. [7] | PASCAL VOC 2017 and 2012 | AutoML | ROC, F1 | Non-temporal analysis |
Mehra et al. [8] | Proprietary | ANN, Bayesian Network | Accuracy | Small number of observations; non-temporal analysis |
Lauguico et al. [11] | Proprietary | Logistic regression, KNN, L-SVM | F1 | Small number of observations; non-temporal analysis |
Liu et al. [14] | Water quality dataset | Bi-S-SRU | MSE | Non-temporal analysis |
Cardenas et al. [12] | Proprietary | MLP, LSTM, GRU | MSE | Method is susceptible to small-scale noise |
Dhal et al. [15] | Proprietary | XGBoost, extra-trees classifier | F-Score | Small number of observations; non-temporal analysis |
Thai-Nghe et al. [13] | Water quality—Tomales Bay | LSTM, SVM | RMSE | Only univariate analysis |
3 Materials and methods
3.1 Dataset
Attribute | Definition | Average | SD |
---|---|---|---|
Time | Timestamp | – | – |
Temperature | Temperature sensor (DS18B20) | 24.565268 | 0.899205 |
Turbidity | DFRobot turbidity sensor | 69.490202 | 43.233901 |
Dissolved oxygen | DFRobot dissolved oxygen sensor | 10.583218 | 10.673741 |
pH | DFRobot pH sensor V2.2 | 6.033098 | 2.949616 |
Ammonia | MQ-137 ammonia sensor | 229841283.6471 | 9104453144.3300 |
Nitrate | MQ-135 nitrate sensor | 699.520674 | 550.081504 |
3.2 Data preparation
3.3 LSTM
3.4 Encoder–decoder networks
3.5 Attention mechanism
3.6 The temporal fusion transformer
3.6.1 Gated residual network (GRN)
3.6.2 Variable selection network (VSN)
3.6.3 Interpretable multi-head attention
3.7 ELM
3.8 Experimental environment
3.9 Evaluation metrics
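The metrics reported in Sect. 4 (RMSE, MAE, Explained Variance, and \(R^{2}\)) can be sketched as follows. This is an illustrative implementation, not the authors' code, and the function names are assumptions; in practice, equivalent functions exist in scikit-learn's `sklearn.metrics` module.

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error: penalizes large errors quadratically
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the errors
    return np.mean(np.abs(y_true - y_pred))

def explained_variance(y_true, y_pred):
    # Fraction of the target's variance captured by the prediction;
    # unlike R^2, a constant offset in the errors is not penalized
    return 1.0 - np.var(y_true - y_pred) / np.var(y_true)

def r2_score(y_true, y_pred):
    # Coefficient of determination: 1 minus the ratio of residual
    # sum of squares to the total sum of squares
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```

Note that Explained Variance and \(R^{2}\) coincide only when the residuals have zero mean, which is why the two rows in the results table differ slightly.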
4 Experimental results
4.1 Hyperparameter optimization
Hyperparameter | Search space | Selected value |
---|---|---|
Learning rate | [0.00001, 30.0] | 0.0912 |
Epochs | [10, 400] | 27 |
Dropout rate | [0.1, 0.5] | 0.36411 |
Gradient clip value | [0.01, 1.0] | 0.03279 |
Number of attention heads | [1, 4] | 2 |
Hidden size | [8, 128] | 13 |
Hidden continuous size | [8, 128] | 12 |
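The search over the ranges in the table above can be illustrated with a minimal random-search sketch. This is not the authors' optimization code; the sampling strategy (log-uniform for scale-like parameters such as the learning rate and gradient clip value) and all function names are assumptions for illustration.

```python
import math
import random

# Search spaces taken from the hyperparameter table: (low, high, kind).
# Log-uniform sampling for "log" parameters is an assumption.
SPACE = {
    "learning_rate": (1e-5, 30.0, "log"),
    "epochs": (10, 400, "int"),
    "dropout": (0.1, 0.5, "float"),
    "gradient_clip": (0.01, 1.0, "log"),
    "attention_heads": (1, 4, "int"),
    "hidden_size": (8, 128, "int"),
    "hidden_continuous_size": (8, 128, "int"),
}

def sample_config(rng=random):
    """Draw one configuration from the search space."""
    cfg = {}
    for name, (lo, hi, kind) in SPACE.items():
        if kind == "int":
            cfg[name] = rng.randint(lo, hi)
        elif kind == "log":
            cfg[name] = math.exp(rng.uniform(math.log(lo), math.log(hi)))
        else:
            cfg[name] = rng.uniform(lo, hi)
    return cfg

def random_search(objective, n_trials=50, rng=random):
    """Keep the configuration with the lowest validation loss."""
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        loss = objective(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss
```

In practice the objective would train a TFT with the sampled configuration and return its validation loss; dedicated tools (e.g. Bayesian optimizers) typically converge faster than pure random search.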
4.2 Discussion of results
Columns 15–120 denote the forecasting time window.

Method | Metric | 15 | 30 | 45 | 60 | 90 | 120 |
---|---|---|---|---|---|---|---|
LSTM | RMSE | 0.03559 | 0.03904 | 0.04524 | 0.04830 | 0.04928 | 0.05732 |
| MAE | 0.02341 | 0.02686 | 0.03246 | 0.03495 | 0.03404 | 0.04329 |
| Explained variance | 0.76193 | 0.72133 | 0.63764 | 0.58409 | 0.51928 | 0.43771 |
| \(R^{2}\) | 0.70776 | 0.64832 | 0.52807 | 0.46250 | 0.44105 | 0.24616 |
Attention | RMSE | 0.03441 | 0.04220 | 0.04275 | 0.04416 | 0.04679 | 0.05352 |
| MAE | 0.02132 | 0.03049 | 0.02941 | 0.02951 | 0.03033 | 0.03864 |
| Explained variance | 0.73827 | 0.68339 | 0.64295 | 0.02951 | 0.49679 | 0.44146 |
| \(R^{2}\) | 0.72683 | 0.58925 | 0.57854 | 0.55075 | 0.49607 | 0.34273 |
Encoder–decoder | RMSE | 0.03553 | 0.04136 | 0.04466 | 0.04523 | 0.05119 | 0.05615 |
| MAE | 0.02300 | 0.02962 | 0.03132 | 0.02906 | 0.03508 | 0.04111 |
| Explained variance | 0.75628 | 0.71764 | 0.63622 | 0.55144 | 0.43935 | 0.39943 |
| \(R^{2}\) | 0.70885 | 0.60529 | 0.54000 | 0.52870 | 0.39686 | 0.27651 |
ELM | RMSE | 0.04487 | 0.04808 | 0.04850 | 0.05113 | 0.05379 | 0.05730 |
| MAE | 0.02757 | 0.03167 | 0.03131 | 0.03449 | 0.03591 | 0.03961 |
| Explained variance | 0.53905 | 0.48314 | 0.45864 | 0.40324 | 0.35113 | 0.28430 |
| \(R^{2}\) | 0.53550 | 0.46666 | 0.45758 | 0.39770 | 0.33413 | 0.24656 |
TFT | RMSE | 0.02556 | 0.02658 | 0.03171 | 0.03222 | 0.03792 | 0.04039 |
| MAE | 0.01665 | 0.01850 | 0.02233 | 0.02261 | 0.02687 | 0.02899 |
| Explained variance | 0.81492 | 0.80844 | 0.70583 | 0.67483 | 0.55179 | 0.46643 |
| \(R^{2}\) | 0.81491 | 0.79612 | 0.69783 | 0.67474 | 0.52035 | 0.44584 |
Method | Param. size | Training time (s) | Prediction time (s) |
---|---|---|---|
Proposed TFT model | 31.9k | 12828.23 | 11.97 |
Attention LSTM | 34.2k | 147.65 | 0.40 |
Encoder–decoder LSTM | 4.5k | 102.73 | 0.24 |
LSTM | 31k | 215.05 | 0.38 |
ELM | – | 0.05 | 0.05 |
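As a quick sanity check on the gains reported above, the relative error reduction of the TFT over a baseline can be computed directly from the tabulated values (the helper function below is illustrative, not from the paper):

```python
def relative_improvement(baseline, proposed):
    # Percentage reduction of the proposed model's error vs. a baseline
    return 100.0 * (baseline - proposed) / baseline

# MAE at the 15-step time window, taken from the results table
lstm_mae, tft_mae = 0.02341, 0.01665
print(round(relative_improvement(lstm_mae, tft_mae), 1))  # ≈ 28.9 (% lower MAE)
```

This roughly 29% MAE reduction over the LSTM comes at a cost visible in the table above: the TFT's training and prediction times are orders of magnitude larger than those of the recurrent baselines.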