
Prediction intervals for industrial data with incomplete input using kernel-based dynamic Bayesian networks

Authors: Long Chen, Ying Liu, Jun Zhao, Wei Wang, Quanli Liu

Published in: Artificial Intelligence Review | Issue 3/2016


Abstract

Reliable construction of prediction intervals (PIs) for industrial time series is of substantial significance for decision-making in production practice. Given that industrial data typically exhibit high noise levels and incomplete input, a high-order dynamic Bayesian network (DBN)-based PI construction method for industrial time series is proposed in this study. To avoid having to specify the number and type of basis functions in advance, a linear combination of kernel functions is designed to describe the relationships between the nodes in the network, and a learning method based on a scoring criterion, the sparse Bayesian score, is then presented to obtain suitable model parameters such as the weights and the variances. To verify the performance of the proposed method, two types of time series are employed: the classical Mackey-Glass data corrupted by additive noise and a real-world industrial data set. The results indicate the effectiveness of the proposed method for PI construction on industrial data with incomplete input.
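To make the idea of a kernel-based DBN node with prediction intervals concrete, the sketch below fits a lagged time-series target as a linear combination of Gaussian (RBF) kernel functions and wraps the point prediction in a Gaussian PI. This is an illustrative approximation, not the authors' implementation: it substitutes a ridge-regularised least-squares solve and a homoscedastic noise estimate for the paper's sparse Bayesian score learning, and the function names (fit_kernel_node, prediction_interval) and the toy lagged-input series are hypothetical.

import numpy as np
from scipy.stats import norm

def rbf_kernel(X, centers, width=1.0):
    # Gaussian kernel design matrix: one basis function per centre.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_kernel_node(X, y, width=1.0, ridge=1e-3):
    # Fit y ~ Phi(X) w; the ridge solve stands in for the paper's
    # sparse Bayesian score learning of the kernel weights.
    Phi = rbf_kernel(X, X, width)
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]), Phi.T @ y)
    sigma2 = (y - Phi @ w).var(ddof=1)   # simple homoscedastic noise estimate
    return {"centers": X, "width": width, "w": w, "sigma2": sigma2}

def prediction_interval(model, X_new, alpha=0.05):
    # Gaussian PI around the kernel-regression mean.
    Phi = rbf_kernel(X_new, model["centers"], model["width"])
    mean = Phi @ model["w"]
    half = norm.ppf(1 - alpha / 2) * np.sqrt(model["sigma2"])
    return mean - half, mean + half

# Toy usage on a noisy nonlinear series: predict x_t from (x_{t-1}, x_{t-2}),
# i.e. a second-order lag structure analogous to a high-order DBN node.
rng = np.random.default_rng(0)
t = np.arange(300)
x = np.sin(0.3 * t) + 0.1 * rng.standard_normal(t.size)
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
model = fit_kernel_node(X[:250], y[:250])
lo, hi = prediction_interval(model, X[250:])
coverage = np.mean((y[250:] >= lo) & (y[250:] <= hi))
print(f"empirical 95% PI coverage on held-out points: {coverage:.2f}")

In the paper's setting, the sparse Bayesian learning step would additionally prune kernel weights and yield per-node variances, which is what removes the need to fix the number and type of basis functions beforehand.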


Metadata
Title
Prediction intervals for industrial data with incomplete input using kernel-based dynamic Bayesian networks
Authors
Long Chen
Ying Liu
Jun Zhao
Wei Wang
Quanli Liu
Publication date
01-10-2016
Publisher
Springer Netherlands
Published in
Artificial Intelligence Review / Issue 3/2016
Print ISSN: 0269-2821
Electronic ISSN: 1573-7462
DOI
https://doi.org/10.1007/s10462-016-9465-y
