
2021 | Original Paper | Book Chapter

Partially Monotonic Learning for Neural Networks

Authors: Joana Trindade, João Vinagre, Kelwin Fernandes, Nuno Paiva, Alípio Jorge

Published in: Advances in Intelligent Data Analysis XIX

Publisher: Springer International Publishing


Abstract

In the past decade, we have witnessed the widespread adoption of Deep Neural Networks (DNNs) in several Machine Learning tasks. However, in many critical domains, such as healthcare, finance, or law enforcement, transparency is crucial. In particular, the inability to conform to prior knowledge greatly undermines the trustworthiness of predictive models. This paper contributes to the trustworthiness of DNNs by promoting monotonicity. We develop a multi-layer learning architecture that handles a subset of features which, according to prior knowledge, have a monotonic relation with the response variable. We use two alternative approaches: (i) imposing constraints on the model’s parameters, and (ii) adding a component to the loss function that penalises non-monotonic gradients. Our method is evaluated on classification and regression tasks using two datasets. Our model is able to conform to known monotonic relations, improving trustworthiness in decision making, while keeping the degradation in predictive performance small and controllable.
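To make the two strategies concrete, the following is a minimal Python/PyTorch sketch; the network architecture, feature indices, penalty weight, and function names are illustrative assumptions, not the implementation described in the chapter. `monotonicity_penalty` corresponds to approach (ii), penalising negative partial derivatives of the output with respect to features assumed to be monotonically increasing; `clamp_nonnegative` sketches approach (i), constraining parameters so that the sub-network handling the monotonic features cannot reverse their effect.

```python
import torch
import torch.nn as nn

# Illustrative network; layer sizes and names are assumptions, not the
# architecture proposed in the paper.
class MLP(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def monotonicity_penalty(model, x, monotone_idx):
    """Soft-constraint variant: penalise negative partial derivatives of the
    output w.r.t. features assumed to be monotonically increasing."""
    x = x.clone().requires_grad_(True)
    y = model(x)
    grads, = torch.autograd.grad(y.sum(), x, create_graph=True)
    return torch.relu(-grads[:, monotone_idx]).mean()

def clamp_nonnegative(layer):
    """Hard-constraint variant (sketch): keep a layer's weights non-negative
    after each optimiser step, so increasing a monotonic input cannot
    decrease that layer's output."""
    with torch.no_grad():
        layer.weight.clamp_(min=0.0)

# Combined objective: task loss plus a weighted monotonicity penalty.
model = MLP(n_features=5)
x, y_true = torch.randn(64, 5), torch.randn(64, 1)
task_loss = nn.MSELoss()(model(x), y_true)
loss = task_loss + 1.0 * monotonicity_penalty(model, x, monotone_idx=[0, 2])
loss.backward()
```

In such a setup, the weight on the penalty term (1.0 above) is the knob that trades monotonicity against predictive performance, i.e. the small and controllable degradation the abstract refers to.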


Footnotes
3. In most real-world problems, including the ones illustrated in this paper, domain expertise is essential to distinguish between true and spurious monotonic relations.
 
Metadata
Title
Partially Monotonic Learning for Neural Networks
Authors
Joana Trindade
João Vinagre
Kelwin Fernandes
Nuno Paiva
Alípio Jorge
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-74251-5_2