
2020 | OriginalPaper | Chapter

Learning a Formula of Interpretability to Learn Interpretable Formulas

Authors: Marco Virgolin, Andrea De Lorenzo, Eric Medvet, Francesca Randone

Published in: Parallel Problem Solving from Nature – PPSN XVI

Publisher: Springer International Publishing


Abstract

Many risk-sensitive applications require Machine Learning (ML) models to be interpretable. Attempts to obtain interpretable models typically rely on tuning, by trial and error, hyper-parameters of model complexity that are only loosely related to interpretability. We show that it is instead possible to take a meta-learning approach: an ML model of non-trivial Proxies of Human Interpretability (PHIs) can be learned from human feedback, and this model can then be incorporated into an ML training process to directly optimize for interpretability. We demonstrate this for evolutionary symbolic regression. We first design and distribute a survey aimed at finding a link between features of mathematical formulas and two established PHIs, simulatability and decomposability. Next, we use the resulting dataset to learn an ML model of interpretability. Lastly, we query this model to estimate the interpretability of evolving solutions within bi-objective genetic programming. We perform experiments on five synthetic and eight real-world symbolic regression problems, comparing against the traditional minimization of solution size. The results show that using our model leads to formulas that are, for the same level of accuracy-interpretability trade-off, significantly more accurate or equally accurate. Moreover, the formulas are also arguably more interpretable. Given these very positive results, we believe that our approach represents an important stepping stone for the design of next-generation interpretable (evolutionary) ML algorithms.
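The pipeline the abstract describes, querying a learned model of interpretability and using its prediction as a second objective alongside error in bi-objective genetic programming, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the formula features (`size`, `n_nonlinear_ops`, `n_consecutive_compositions`) and the linear coefficients are hypothetical stand-ins for the model fitted on the survey data, and the Pareto-front filter stands in for full NSGA-II selection.

```python
def predicted_interpretability(features):
    """Hypothetical learned proxy: a linear model mapping formula features to
    an interpretability score. Larger formulas, more non-arithmetic operations,
    and deeper compositions are assumed harder to interpret (lower score).
    The weights below are illustrative, not the paper's fitted coefficients."""
    w = {"size": -0.5, "n_nonlinear_ops": -1.0, "n_consecutive_compositions": -0.8}
    return 100.0 + sum(w[k] * v for k, v in features.items())

def dominates(a, b):
    """Bi-objective Pareto dominance: minimize error, maximize interpretability.
    a and b are (error, interpretability) pairs."""
    err_a, interp_a = a
    err_b, interp_b = b
    no_worse = err_a <= err_b and interp_a >= interp_b
    strictly_better = err_a < err_b or interp_a > interp_b
    return no_worse and strictly_better

def pareto_front(population):
    """Keep candidates dominated by no other candidate (the first
    non-dominated front, as in NSGA-II-style selection).
    population: list of (formula_id, (error, interpretability))."""
    return [p for p in population
            if not any(dominates(q[1], p[1]) for q in population if q is not p)]
```

In use, each evolving formula would be scored as `(error, predicted_interpretability(extract_features(formula)))`, where `extract_features` is a hypothetical helper computing the feature values from the expression tree; selection then proceeds on the resulting fronts rather than on error and raw size.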


Metadata
Title
Learning a Formula of Interpretability to Learn Interpretable Formulas
Authors
Marco Virgolin
Andrea De Lorenzo
Eric Medvet
Francesca Randone
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-58115-2_6
