
2021 | Original Paper | Book Chapter

AI Explainability. A Bridge Between Machine Vision and Natural Language Processing


Abstract

This paper presents an appraisal review of explainable Artificial Intelligence (XAI) research, with a focus on building a bridge between the image processing and natural language processing (NLP) communities. It highlights the implicit link between the two disciplines, as exemplified by the emergence of automatic image annotation systems, visual question-answering systems, text-to-image generation, and multimedia analytics. The paper then identifies a set of NLP fields in which visual-based explainability can boost the local NLP task, including sentiment analysis, automatic text summarization, argumentation systems, and topical analysis, among others, which are expected to fuel prominent future research in the field.
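To make the bridge concrete: many of the model-agnostic explanation methods surveyed in work of this kind (e.g. LIME-style perturbation analysis) transfer directly from vision to NLP by occluding input tokens instead of image patches. The sketch below is purely illustrative, not from the paper; the toy lexicon classifier and its weights are hypothetical stand-ins for a real sentiment model.

```python
# Illustrative sketch: perturbation (occlusion) based explanation of a toy
# text classifier. The lexicon and scoring function are hypothetical; a real
# system would wrap an opaque model's prediction function instead.
WEIGHTS = {"good": 1.0, "great": 2.0, "bad": -1.5, "boring": -1.0}

def score(tokens):
    """Toy sentiment score: sum of per-word lexicon weights (0 if unknown)."""
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def occlusion_importance(tokens):
    """Importance of each token = change in score when that token is removed,
    the textual analogue of occluding an image region."""
    base = score(tokens)
    return {
        t: base - score(tokens[:i] + tokens[i + 1:])
        for i, t in enumerate(tokens)
    }

imp = occlusion_importance("a great plot but boring ending".split())
# 'great' contributes +2.0 to the prediction; 'boring' contributes -1.0
```

The same loop works unchanged for any black-box scorer, which is why such perturbation-based explanations are a natural meeting point between the vision and NLP explainability literatures.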


Metadata
Title
AI Explainability. A Bridge Between Machine Vision and Natural Language Processing
Author
Mourad Oussalah
Copyright year
2021
DOI
https://doi.org/10.1007/978-3-030-68796-0_19