
2018 | Original Paper | Book Chapter

5. Industrial Prediction Intervals with Data Uncertainty

Authors: Jun Zhao, Wei Wang, Chunyang Sheng

Published in: Data-Driven Prediction for Industrial Processes and Their Applications

Publisher: Springer International Publishing


Abstract

The construction of prediction intervals (PIs) is a comprehensive prediction technique that provides not only point estimates of industrial variables but also the reliability of those predictions, indicated by an interval. After reviewing the conventional PI construction methods (e.g., the delta method, mean and variance-based estimation, the Bayesian method, and the bootstrap technique), this chapter presents several recently developed approaches. First, a bootstrapping-based echo state network ensemble (BESNE) model is proposed to produce reliable PIs for industrial time series, for which a simultaneous training method based on Bayesian linear regression is developed. Second, to cope with the error accumulation caused by the traditional iterative mode of time series prediction, a non-iterative granular ESN is reported for PI construction, in which the network connections are represented by interval-valued information granules. Third, a mixed Gaussian kernel-based regression model is presented for constructing PIs, together with a gradient descent algorithm derived to optimize the hyper-parameters of the mixed Gaussian kernel. Finally, to tackle the problem of incomplete testing inputs, a kernel-based high-order dynamic Bayesian network (DBN) model for industrial time series is proposed that deals directly with missing points in the inputs. Case studies verify the effectiveness of these approaches.
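
To make the bootstrap approach concrete, below is a minimal sketch of the classic bootstrap-pairs scheme for PI construction: an ensemble is trained on resampled data, the spread of its predictions estimates model uncertainty, and a residual-based noise variance widens the interval. This is an illustration under simplifying assumptions, not the chapter's code; a plain ridge regressor stands in for the echo state networks of the BESNE model, and all function names are invented for the example.

```python
# Bootstrap-pairs prediction intervals: a minimal, self-contained sketch.
# Assumption: a ridge regressor replaces the chapter's echo state networks.
import numpy as np

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge weights: (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def bootstrap_pi(X, y, X_new, n_models=200, seed=None):
    """Approximate 95% prediction intervals from a bootstrap ensemble."""
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = np.empty((n_models, X_new.shape[0]))
    for b in range(n_models):
        idx = rng.integers(0, n, size=n)      # resample (input, target) pairs
        preds[b] = X_new @ fit_ridge(X[idx], y[idx])
    mean = preds.mean(axis=0)                 # ensemble point estimate
    model_var = preds.var(axis=0, ddof=1)     # model (epistemic) uncertainty
    resid = y - X @ fit_ridge(X, y)
    noise_var = resid.var(ddof=1)             # single homoscedastic noise estimate
    half = 1.96 * np.sqrt(model_var + noise_var)  # 1.96 = 95% normal quantile
    return mean - half, mean + half

# Toy usage on noisy linear data.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
lower, upper = bootstrap_pi(X, y, X[:5], n_models=100, seed=1)
```

The half-width decomposition into model variance plus noise variance is the standard bootstrap recipe; per the abstract, the BESNE model builds on the same idea with echo state networks in place of the ridge regressors and a simultaneous training method based on Bayesian linear regression.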

Metadata
Title
Industrial Prediction Intervals with Data Uncertainty
Authors
Jun Zhao
Wei Wang
Chunyang Sheng
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-94051-9_5