Attention-like feature explanation for tabular data

Authors: Andrei V. Konstantinov, Lev V. Utkin

Published in: International Journal of Data Science and Analytics, Issue 1/2023 | Regular Paper | 02-08-2022

Abstract

A new method for local and global explanation of machine learning black-box model predictions on tabular data is proposed. It is implemented as a system called AFEX (Attention-like Feature EXplanation) consisting of two main parts. The first part is a set of one-feature neural subnetworks, which learn a specific representation of every feature in the form of a basis of shape functions. The subnetworks use shortcut connections with trainable parameters to improve training performance. The second part of AFEX produces the shape function of each feature as a weighted sum of the basis shape functions, where the weights are computed by an attention-like mechanism. The most important advantage of AFEX is that it identifies pairwise interactions between features on the basis of pairwise multiplications of shape functions corresponding to different features. A modification of AFEX that incorporates an additional surrogate model approximating the black-box model is also proposed. AFEX is trained end-to-end on the whole dataset only once, so the neural network does not need to be retrained at the explanation stage. Numerical experiments with synthetic and real data illustrate AFEX. The corresponding code implementing the method is publicly available.
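To make the architecture described in the abstract concrete, the following PyTorch sketch is one possible reading of it. This is an illustrative assumption, not the authors' implementation (which is available in their public repository): the class names `OneFeatureSubnet` and `AFEXSketch`, the basis size, the hidden-layer width, and the use of static softmax logits as the attention-like weights are all hypothetical simplifications; in particular, the paper's attention mechanism may condition the weights on the input rather than keep them fixed.

```python
# Minimal sketch of the AFEX idea from the abstract (assumptions noted above).
import torch
import torch.nn as nn


class OneFeatureSubnet(nn.Module):
    """Maps a single scalar feature to a basis of shape functions."""

    def __init__(self, n_basis: int, hidden: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_basis),
        )
        # Trainable shortcut from the raw feature to the basis, echoing the
        # abstract's "shortcut connections with trainable parameters".
        self.shortcut = nn.Linear(1, n_basis)

    def forward(self, x):                       # x: (batch, 1)
        return self.body(x) + self.shortcut(x)  # (batch, n_basis)


class AFEXSketch(nn.Module):
    def __init__(self, n_features: int, n_basis: int = 8):
        super().__init__()
        self.subnets = nn.ModuleList(
            [OneFeatureSubnet(n_basis) for _ in range(n_features)]
        )
        # Attention-like mixing weights: one trainable logit per feature and
        # basis function (a static simplification; see lead-in).
        self.logits = nn.Parameter(torch.zeros(n_features, n_basis))

    def forward(self, x):                       # x: (batch, n_features)
        bases = torch.stack(
            [net(x[:, i:i + 1]) for i, net in enumerate(self.subnets)],
            dim=1,
        )                                        # (batch, n_features, n_basis)
        attn = torch.softmax(self.logits, dim=-1)   # (n_features, n_basis)
        shape = (bases * attn).sum(dim=-1)          # per-feature shape functions
        # Pairwise interactions as products of shape functions of distinct
        # features, as the abstract describes.
        i, j = torch.triu_indices(shape.shape[1], shape.shape[1], offset=1)
        inter = shape[:, i] * shape[:, j]           # (batch, n_pairs)
        # GAM-style prediction: sum of main effects and pairwise interactions.
        return shape.sum(dim=-1) + inter.sum(dim=-1)


# Usage: fit AFEXSketch once to the black-box outputs on the whole dataset;
# afterwards, `shape` and `inter` can be inspected per instance without retraining.
model = AFEXSketch(n_features=5)
y = model(torch.randn(32, 5))  # (32,)
```

Training such a model once against the black-box predictions would match the abstract's claim that explanations require no retraining: the per-feature shape functions and the interaction terms are read directly off the fitted network.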

Metadata

Title: Attention-like feature explanation for tabular data
Authors: Andrei V. Konstantinov, Lev V. Utkin
Publication date: 02-08-2022
Publisher: Springer International Publishing
Published in: International Journal of Data Science and Analytics, Issue 1/2023
Print ISSN: 2364-415X
Electronic ISSN: 2364-4168
DOI: https://doi.org/10.1007/s41060-022-00351-y
