
2018 | Original Paper | Book Chapter

7. Parameter Estimation and Optimization

Authors: Jun Zhao, Wei Wang, Chunyang Sheng

Published in: Data-Driven Prediction for Industrial Processes and Their Applications

Publisher: Springer International Publishing


Abstract

The selection of parameters or hyper-parameters has a great impact on the performance of a data-driven model. This chapter introduces some commonly used parameter optimization and estimation methods, including gradient-based methods (e.g., gradient descent, Newton's method, and the conjugate gradient method) and intelligent optimization methods (e.g., the genetic algorithm, the differential evolution algorithm, and particle swarm optimization). In particular, the conjugate gradient method is employed to optimize the hyper-parameters of a least squares support vector machine (LSSVM) model based on noise estimation, which helps alleviate the impact of noise on the performance of the LSSVM. For dynamic models, this chapter introduces nonlinear Kalman filter methods for parameter estimation; well-known examples include the extended Kalman filter, the unscented Kalman filter, and the cubature Kalman filter. Here, a dual estimation model based on two Kalman filters is illustrated, which simultaneously estimates the uncertainties of the internal state and the output. In addition, probabilistic methods for parameter estimation are introduced, where a Bayesian model, in particular a variational inference framework, is elaborated in detail. Within this framework, a variational relevance vector machine (RVM) model based on an automatic relevance determination kernel is introduced, which provides approximate posterior distributions over the kernel parameters. Finally, we present several case studies using industrial data.
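As a concrete illustration of the gradient-based methods the chapter surveys, the following is a minimal Python sketch of the Fletcher-Reeves nonlinear conjugate gradient method. The convex quadratic test objective and the Armijo backtracking line search are illustrative assumptions for this sketch, not the chapter's actual LSSVM hyper-parameter setup.

```python
# Minimal sketch of the Fletcher-Reeves nonlinear conjugate gradient method.
# The quadratic objective and backtracking line search below are illustrative
# assumptions, not the chapter's LSSVM noise-estimation formulation.
import numpy as np

def conjugate_gradient(f, grad, x0, max_iter=100, tol=1e-8):
    x = x0.astype(float)
    g = grad(x)
    d = -g  # initial search direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking line search (Armijo condition) for the step size.
        alpha, rho, c = 1.0, 0.5, 1e-4
        while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Fletcher-Reeves coefficient: beta = ||g_new||^2 / ||g||^2.
        beta = (g_new @ g_new) / (g @ g)
        d = -g_new + beta * d  # new conjugate search direction
        x, g = x_new, g_new
    return x

# Usage: minimize f(x) = 0.5 x'Ax - b'x, whose minimizer solves Ax = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(conjugate_gradient(f, grad, np.zeros(2)))  # approx. [1/11, 7/11]
```

On a quadratic objective the method reaches the solution of Ax = b in at most as many iterations as there are dimensions; the beta term, which reuses the previous search direction, is what distinguishes it from plain gradient descent.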

Metadata
Title
Parameter Estimation and Optimization
Authors
Jun Zhao
Wei Wang
Chunyang Sheng
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-94051-9_7