
01.12.2016 | Original Article

A review of automatic selection methods for machine learning algorithms and hyper-parameter values

Author: Gang Luo

Published in: Network Modeling Analysis in Health Informatics and Bioinformatics | Issue 1/2016


Abstract

Machine learning studies automatic algorithms that improve themselves through experience. It is widely used for analyzing and extracting value from large biomedical data sets, or “big biomedical data,” advancing biomedical research, and improving healthcare. Before a machine learning model is trained, the user of a machine learning software tool typically must manually select a machine learning algorithm and set one or more model parameters termed hyper-parameters. The algorithm and hyper-parameter values used can greatly impact the resulting model’s performance, but their selection requires special expertise as well as many labor-intensive manual iterations. To make machine learning accessible to lay users with limited computing expertise, computer science researchers have proposed various automatic selection methods for algorithms and/or hyper-parameter values for a given supervised machine learning problem. This paper reviews these methods, identifies several of their limitations in the big biomedical data environment, and provides preliminary thoughts on how to address these limitations. These findings establish a foundation for future research on automatically selecting algorithms and hyper-parameter values for analyzing big biomedical data.
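As a concrete illustration of the selection problem the abstract describes, the sketch below (not taken from the paper) uses scikit-learn's cross-validated grid search to pick hyper-parameter values for one algorithm automatically; the dataset, the candidate algorithm, and the search space are illustrative assumptions.

```python
# A minimal sketch of automatic hyper-parameter selection via cross-validated
# grid search in scikit-learn. The dataset (breast cancer), the algorithm (SVC),
# and the candidate values below are illustrative assumptions, not the paper's method.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Candidate hyper-parameter values that a user would otherwise tune by hand.
param_grid = {
    "C": [0.1, 1, 10],
    "gamma": ["scale", 0.01, 0.001],
    "kernel": ["rbf"],
}

# GridSearchCV evaluates every combination with 5-fold cross-validation and
# keeps the best-performing one, replacing labor-intensive manual iterations.
search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print("Best hyper-parameter values:", search.best_params_)
print("Cross-validated accuracy: %.3f" % search.best_score_)
```

The automatic methods surveyed in the paper generalize this idea from a fixed grid over one algorithm to searching over many algorithms and their hyper-parameter spaces at once.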


Metadata
Title
A review of automatic selection methods for machine learning algorithms and hyper-parameter values
Author
Gang Luo
Publication date
01.12.2016
Publisher
Springer Vienna
Published in
Network Modeling Analysis in Health Informatics and Bioinformatics / Issue 1/2016
Print ISSN: 2192-6662
Electronic ISSN: 2192-6670
DOI
https://doi.org/10.1007/s13721-016-0125-6
