Published in: Artificial Intelligence Review | Special Issue 1/2023

Open Access 22.06.2023

Human-centric and semantics-based explainable event detection: a survey

Authors: Taiwo Kolajo, Olawande Daramola


Abstract

In recent years, there has been a surge of interest in Artificial Intelligence (AI) systems that can provide human-centric explanations for decisions or predictions. No matter how good and efficient an AI model is, users or practitioners find it difficult to trust it if they cannot understand the AI model or its behaviours. Incorporating human-centric explainability in event detection systems is significant for building a decision-making process that is more trustworthy and sustainable. Human-centric and semantics-based explainable event detection will achieve trustworthiness, explainability, and reliability, which are currently lacking in AI systems. This paper provides a survey on human-centric explainable AI, explainable event detection, and semantics-based explainable event detection by answering research questions that concern the characteristics of human-centric explanations, the state of explainable AI, methods for human-centric explanations, the essence of human-centricity in explainable event detection, research efforts in explainable event detection solutions, and the benefits of integrating semantics into explainable event detection. The findings from the survey show the current state of human-centric explainability, the potential of integrating semantics into explainable AI, the open problems, and the future directions that can guide researchers in the explainable AI domain.

1 Introduction

Event detection requires automatically answering questions pertaining to events, such as when, where, what, and by whom (Panagiotou et al. 2016). Known applications of event detection from social media include identification of first stories/breaking news, anomaly detection (online abuse, online bullying, hate speech, fake news), emergency detection, security intelligence alerts (crime, terrorism, etc.), and event summarisation (profiling of upcoming events that pertain to a place within a specific time window) (Giatrakos et al. 2017; Sreenivasulu and Sridevi 2018; Win and Aung 2018). The literature contains a series of event detection models, but fewer attempts have been made to provide human-centric explainable event detection (Evans et al. 2022; Khan et al. 2021). An explainable event contains information that covers the 5W1H dimensions (who did what, when, where, why, and how) (Chen and Li 2019). An event is considered complete only when all the components of 5W1H can be deduced and answers are provided to who did what, where, when, why, and how (Miller 2019). An explainable event detection system can provide information that covers the 5W1H dimensions of an event in a human-comprehensible way (Chakman et al. 2020). Explainable event detection can be realised through the integration of Explainable AI (XAI) methods and semantics for event detection.
XAI provides relevant explanations that justify the decisions or predictions made by AI systems (Gunning et al. 2019; Hall et al. 2022). More precisely, XAI is a developing area of AI research that supports a collection of instruments, methods, and algorithms that can produce more interpretable, intuitive, and human-comprehensible justifications for AI actions (Das and Rad 2020). XAI has been identified as essential for adopting AI solutions in several real-world domains, including security, healthcare, business, and commerce (Arya et al. 2020). The transparency and explainability of AI results are as important as their accuracy; hence, XAI has received huge attention recently (Shin 2021). Although the potential of XAI to facilitate explainable event detection has been highlighted in the literature, it is still incapable of facilitating human-centric explanations (Khan et al. 2021).
The synergistic integration of XAI and semantic technologies has been identified as one of the most efficient ways to improve the explainability of AI and machine learning (ML) systems (Pesquita 2021). Studies have shown that human comprehensibility of the results of ML systems can be significantly enhanced by introducing additional knowledge aspects, such as domain knowledge, common sense knowledge, and case-based knowledge. This type of additional knowledge can be found in external sources such as ontologies, knowledge graphs, and open data/knowledge sources (Ammar and Shaban-Nejad 2020; Confalonieri et al. 2021; Donadello and Dragoni 2021; Ribeiro and Leite 2021). Thus, explainable event detection can be realised by integrating ML and semantic technologies. Integrating semantics into XAI can provide a human-centric explanation in a more understandable way (Pesquita 2021) and can go a long way towards capturing the 5W1H dimensions required for a human-comprehensible explanation. A human-centric explanation is adaptive to the user's context, understandable, appealing, and gives a basis for trust through provenance (Li et al. 2022). As in other critical domains, the advent of human-centric explainable event detection will increase the uptake of event detection solutions by news agencies, security organisations, health and emergency units, and other relevant public organisations (Bhatt et al. 2020).
Many review/survey papers on XAI in various fields exist in the literature (Alicioglu and Sun 2022; Chaddad et al. 2023). However, none has focused on the application of XAI for event detection. In addition, none of the currently available event detection systems has captured the six dimensions of 5W1H to provide explanations while, at the same time, emphasising human-centricity. An understanding of the approaches, methods, and efforts made so far in the area of XAI for event detection is therefore needed. Thus, in this paper, we present a review of human-centric and semantics-based event detection. As a contribution, this paper reveals the state of the art, challenges, and the way forward regarding human-centric explainable AI and semantics-based event detection, which is currently lacking in the literature.
The remaining parts of the paper are arranged as follows. The related work is presented in Sect. 2, while the survey’s methodology is described in Sect. 3. Section 4 presents the answers to the survey’s research questions, while the survey findings are discussed in Sect. 5. In Sect. 6, the future research directions and open issues are discussed. The paper’s conclusion is presented in Sect. 7.
2 Related work

This section summarises previous review papers on explainable AI (XAI).
Many researchers have conducted survey studies on XAI approaches. Islam et al. (2021) surveyed explainable AI approaches; the authors showcased and analysed popular XAI methods to provide meaningful insights into quantifying explainability and made recommendations towards human-centred AI. Similarly, Chaddad et al. (2023) surveyed explainable AI techniques focusing on healthcare and related medical imaging applications, providing a summary and categorisation of XAI types, algorithms, and challenging XAI problems. Guidotti et al. (2018) examined ways to explain black box models and provided a classification of the problems addressed in the literature concerning the explainability of black box models. Adadi and Berrada (2018) surveyed the existing XAI approaches for black-box models and presented the trends surrounding the XAI sphere and future research directions.
Dosilovic et al. (2018) summarised recent developments in explainable AI concerning supervised learning techniques, their connection with artificial intelligence, and future research directions. Alicioglu and Sun (2022) presented a survey of visual analytics for XAI methods that can better interpret neural networks, covering the current state, obstacles, and potential future paths. Saeed and Omlin (2023) conducted a meta-survey on XAI's challenges and research directions, focusing on the general difficulties and future directions of AI and XAI research concerning the machine learning life cycle.
While many research efforts focused on XAI techniques only, some surveys incorporated human-centricity in explaining AI. Liao and Varshney (2022) conducted a survey on human-centered explainable AI that focused on algorithms for user experience; the survey looked at human-centered approaches for designing, evaluating, and providing conceptual and methodological tools for explainable AI. Ehsan and Riedl (2020) investigated the perception of non-expert users of automatically generated rationales behind AI systems, focusing on human-centricity, confidence, understandability, and adequate justification. Rong et al. (2022) conducted a survey providing an overview of user studies in XAI, a summary of XAI design details, an overview of current XAI technology, and potential paradigms for AI systems in understanding human context. Damfeh et al. (2022) examined various theoretical principles and paradigms to investigate human-centered AI concepts; according to the authors, there is an inherent need to strike a balance between advancing XAI systems and human involvement.
Our survey does not focus only on XAI techniques, human-centred XAI, and explainable event detection; we further explore how to integrate semantics into XAI systems. Semantics-based XAI can provide a human-centric explanation in a more understandable way. Our survey covers three main aspects: human-centric explanation, explainable event detection, and semantics-based explainable event detection. The taxonomy of our survey is presented in Fig. 1, while the comparison of the related work is provided in Table 1.
Table 1
Comparison of related work

Reference | XAI | Domain | Human-centricity | Semantics
Guidotti et al. (2018) | Yes | Explainability of black box models | Yes | No
Adadi and Berrada (2018) | Yes | XAI for black-box models and trends in XAI research | No | No
Dosilovic et al. (2018) | Yes | Recent developments in XAI and supervised learning techniques | No | No
Ehsan and Riedl (2020) | Yes | XAI and the perception of non-expert users | Yes | No
Islam et al. (2021) | Yes | XAI approaches, credit default prediction | Yes | No
Alicioglu and Sun (2022) | Yes | Visual analytics for XAI methods | Yes | No
Liao and Varshney (2022) | Yes | Human-centered XAI and user experience | Yes | No
Rong et al. (2022) | Yes | User studies in XAI | Yes | No
Damfeh et al. (2022) | Yes | Theoretical principles and paradigms for human-centered AI | Yes | No
Chaddad et al. (2023) | Yes | XAI approaches, healthcare | Yes | No
Saeed and Omlin (2023) | Yes | XAI's challenges and research directions | Yes | No
Our survey | Yes | XAI approaches, event detection | Yes | Yes

3 Methodology

This paper focuses on three main aspects: human-centric explanations, explainable event detection, and semantic-based explainable event detection. We gathered research papers on the topic of interest to realise our study’s objectives. The answers to the research questions posed are presented as results in Sect. 4. The research questions used in this survey are presented subsequently.

3.1 Research questions

Using the three main aspects of the survey, we asked some research questions, which are presented subsequently.

3.1.1 Human-centric explanations

1. What are the characteristics of human-centric explanations?
2. What is explainable AI?
3. What is the state of XAI approaches or methods for human-centric explanations?
4. What are the evaluation metrics used for explainable AI?

3.1.2 Explainable event detection

1. What is explainable event detection?
2. Why is explainable event detection important?
3. Why is human-centricity important in explainable event detection?
4. To what extent have the existing explainable event solutions addressed the human-centricity aspect?

3.1.3 Semantics-based explainable event detection

1. What is semantics-based XAI, and what are its capabilities?
2. How do we integrate semantics into explainable event detection?

4 Results

This section presents the answers to the research questions in Sects. 3.1.1–3.1.3.

4.1 Human-centric explanations

Artificial intelligence methods are becoming harder for users to explain due to their complexity (Sejr and Schneider-Kamp 2021). AI systems increasingly mediate our lives algorithmically, and their application has been extended to critical domains such as criminal justice, healthcare, automated driving, finance, and more (Ehsan et al. 2021). AI technology has great potential to provide professionals with results, building the capacity to enhance decision-making (Alsagheer et al. 2021). Despite the swift achievements of AI, the absence of a human-centred approach and the lack of explainability for practitioners while developing AI systems have remained obstacles to AI adoption. As a result, only a small portion of these achievements has been transferred from the laboratory to practice (Abdul et al. 2018; Arya et al. 2020). Take, for instance, autonomous drones that assist farmers: farmers need to know when, where, and why a drone decides to spray pesticides or water.
Despite the cross-disciplinary challenge of building XAI, few human-centric applications and studies have been conducted (Evans et al. 2022), and AI solutions are often implemented in controlled settings rather than real-world scenarios (Okolo 2022). There is a need for AI systems to make the mechanics underlying their decisions comprehensible to affected humans. XAI can speed up the adoption of AI solutions because it fosters crucial transparency and trust with potential users (Adadi and Berrada 2018).
The most frequently identified value propositions of model explanation, attributed to different stakeholders including decision-makers, end-users, and researchers (Arrieta et al. 2020; Bhatt et al. 2020), are:
Trust and confidence
It is difficult for a user who does not understand how models are trained or evaluated to be confident about the model's predictions. Non-technical users can only develop trust if they can comprehend the model and recognise patterns from their own domain.
Transferability
The technical users should be aware of the environment(s) in which a model has been tested and the patterns the model recognises to make predictions. Such knowledge helps the technical user determine the usable model settings. With trust and transferability, it is possible to identify the primary target of the model.
Informativeness and causality
Explanations can benefit the user's comprehension of the underlying data and decision-making ability. Explanations, coupled with existing domain knowledge, help the causal effects in the domain to be understood.
Fair and ethical decision making
Identifying who to hold responsible for the model decisions is challenging with black box models. This is because the internal rules of the black box are not even visible to the developer who wrote the code that generated the model. Understanding the model’s prediction patterns will help the decision-makers determine if such decisions are ethical and fair.
Model debugging, adjustment, and monitoring
Explanations are useful to the end-users and data scientists. Comprehension of the model by data scientists will help them explain the performance level and how to improve it. An explanation can also help the user tune and monitor the deployed model to ensure consistent performance (Burkart and Huber 2021; Sejr and Schneider-Kamp 2021).

4.1.1 Explainable AI

Researchers often confuse explainability with terms such as intelligibility, transparency, comprehensibility, and interpretability, and scholars often disagree on the scope and intersection of these terminologies. An important distinction between explainability and interpretability is that an explanation does not generally elucidate how a model works but provides users and practitioners with useful information in an accessible manner (Ehsan and Riedl 2020). While interpretability and explainability both have human-centric properties, interpretability describes how a model works, whereas explainability concerns what, why, and how the output/decision of the model is made (Hall et al. 2022).
Roscher et al. (2020) made a clear distinction between interpretability, explainability, and transparency: transparency takes into account the AI/machine learning strategy, interpretability considers the AI alongside the data, and explainability takes into account the model, the data, and human involvement. Model transparency, design transparency, and algorithm transparency are all subcategories of transparency, which refers to the processes of building models. Model transparency refers to the transparency of the model structure, such as the number of layers, activation function, splitting criteria, and the decision trees in a random forest. Design transparency is related to understandable, replicable, and well-motivated choices during the AI algorithm's construction. Algorithm transparency is related to the uniqueness of the final solution. The user's comprehension of the AI model is the focus of interpretability. According to Montavon et al. (2018), interpretability converts an abstract concept, such as a predicted class, into an easily comprehensible domain. Interpretability methods reveal the crucial characteristics responsible for model predictions. Explainability combines interpretation with additional contextual information and addresses the what, how, why, and causality questions (Miller 2019). A comprehensive comprehension of AI systems and their actions is one broad definition of explainable AI (Vaughan and Wallach 2020). According to Gunning et al. (2019), the true goal of XAI is to ensure that end users can see the results, which will help them make better decisions. Given this definition of explainability, it can be inferred that existing research has focused on interpretability, and much work is still needed to achieve explainable AI models.
Users must occupy a central place, and development must go beyond technology to comprehend contextual usage when building AI systems (Inkpen et al. 2019). Keeping a human-in-the-loop allows the creation of dynamic AI solutions (Syed et al. 2020). In addition, users must know the capacity of AI systems, what they can and cannot accomplish, the data on which they were trained, and what they have been optimised for (Ontika et al. 2022). Human values such as responsibility, transparency, trustworthiness, and fairness must be integrated into the design of AI solutions (Friedman and Hendry 2019). When humans are engaged in the design process of AI systems, the resulting systems will be safe, useful, ethical, reliable, fair, and adaptable (Bond et al. 2019; Liao and Varshney 2022).

4.1.2 Explainable AI techniques

XAI techniques fall into two main categories: ante-hoc and post-hoc. Ante-hoc explainability incorporates the explanation into the model itself, while post-hoc explainability attempts to generate explanations of the model's results. Examples of ante-hoc explainable models include fuzzy inference systems, decision trees, and linear regression. Post-hoc explanations are usually applied to black box models such as neural networks (Guidotti et al. 2018). The choice between the two techniques is premised on explainability versus performance: unfortunately, the highest-performing AI models are the least explainable and vice versa (Kelly et al. 2019). According to Guidotti et al. (2018), both ante-hoc and post-hoc XAI techniques can be further subdivided into global model explanations, outcome explanations, and counterfactual inspection. Global model explanation focuses on the model's overall logic; the distillation technique (Wood-Doughty et al. 2022) can be used to achieve it. Outcome explanations aim at explaining a specific model output; two main algorithms used for outcome explanations are LIME (Local Interpretable Model-agnostic Explanations) (Zafar and Khan 2021) and SHAP (SHapley Additive exPlanations) (Lundberg and Lee 2017; Mangalathu et al. 2020). For deep neural network explanations, additional techniques include propagation- and gradient-based methods (Linardatos et al. 2021) and occlusion (Kakogeorgiou and Karantzalos 2021). Counterfactual inspection provides an understanding of the model's behaviour under alternative inputs; techniques such as the Partial Dependence Plot (PDP) (Szepannek and Lubke 2022) and Individual Conditional Expectation (ICE) (Rai 2020) can be used for this purpose. While LIME and SHAP can explain any machine learning model, methods such as Deep Learning Important Features (DeepLIFT) (Shrikumar et al. 2017) and Gradient-weighted Class Activation Mapping (Grad-CAM) (Selvaraju et al. 2017) are used for deep learning models.
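To make the distinction concrete, the following is a minimal sketch, assuming the scikit-learn, lime, and shap Python packages, of post-hoc outcome explanations for a single prediction of a tabular classifier; the dataset and model choices are illustrative only, not part of the surveyed systems.

```python
# Minimal sketch: post-hoc outcome explanations with LIME and SHAP
# for one prediction of a black-box tabular classifier (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer
import shap

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# LIME: fit a local surrogate model around a single instance (outcome explanation).
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top local feature contributions for this prediction

# SHAP: additive feature attributions for the same instance.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])
print(shap_values)  # per-feature contributions that sum to the prediction offset
```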
Explainability is now more than just trying to understand models; it is now a crucial requirement for people to trust and use AI solutions in various fields (Liao and Varshney 2022). Value Sensitive Design (VSD) is an approach to XAI that assumes that designers should design all technologies principally and comprehensively, accounting for human values throughout the design process (Friedman and Hendry 2019). It has been argued that VSD will influence how solutions are designed in the future (Friedman et al. 2017; Umbrello and de Bellis 2018). A thorough understanding of XAI and hands-on expertise in XAI techniques are needed to make informed decisions (Gill et al. 2022). Figure 2 provides an overview of XAI approaches, and the XAI approaches are described in Table 2.
Table 2
Description of explainable AI approaches

Ante-hoc/Intrinsic vs. Post-hoc: Intrinsic models incorporate explainability directly into their structure; explainability is achieved by finding large-coefficient features that play a significant role in the prediction. Ante-hoc models such as decision trees, K-nearest neighbours, and linear and logistic regression are interpretable and are useful when training a new model for which comprehension is essential. With post-hoc explainability, a second model is required to explain the existing one. Support vector machines, ensemble algorithms, and neural networks are examples of intrinsically uninterpretable models, although post-hoc explanations can also be supplied for intrinsically interpretable models. Post-hoc methods are useful for leveraging already trained or proven machine learning techniques.

Model-specific vs. Model-agnostic: Post-hoc explanations can be divided into model-specific and model-agnostic explanations. Model-specific explanations are sometimes called white-box explanations because they are based on the model's internals; saliency maps, which highlight features perceived to influence the classification, are an example. However, model-specific explanations are designed for certain types of models. Model-agnostic explanations are decoupled from the model; partial dependence plots (PDP) and individual conditional expectation (ICE) are examples, both of which explain the whole model through visual interactions with the model under investigation.

Surrogate: Surrogates are another kind of post-hoc explanation that can be used to explain more complicated methods. Combined Multiple Models (CMM), which explains ensemble models, is an illustration of a surrogate method.

Global vs. Local: Global explanations try to explain the whole model; tree-based models are commonly used. Local explanations explain a region around a single prediction, which is much simpler than global explanation. Popular methods in this category are Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), although SHAP is computationally costlier than LIME. DeepLIFT and Grad-CAM are used specifically to explain deep learning models.

4.1.3 Comparison of XAI techniques

A human-centric explanation is key when measuring or comparing XAI tools and techniques. Future AI must exhibit properties such as transparency, accountability, fairness, performance, trustworthiness, and causality (Shin 2021). Such properties will promote human comprehensibility and the operability of specific applications. Supporting wider acceptance requires explainability, which encompasses more than interpretability (Hall et al. 2022).
Much research has been done to provide explainability, especially for black box models. Some of the existing explainable models include Local Interpretable Model-agnostic Explanations (LIME), GraphLIME, Deep Taylor Decomposition (DTD), Anchors, SHapley Additive exPlanations (SHAP), Prediction Difference Analysis (PDA), Layer-wise Relevance Propagation (LRP), Asymmetric Shapley Values (ASV), Explainable Graph Neural Network (XGNN), Break-Down, Testing with Concept Activation Vectors (TCAV), Shapley Flow, X-NeSyL, Integrated Gradients, Meaningful Perturbations, Causal Models, Textual Explanations of Visual Models, and more (Holzinger et al. 2022a). This section compares XAI techniques in terms of the extent to which they satisfy human-centric explanations. Figure 3 presents popular XAI toolboxes, and Table 3 presents an analysis of different XAI techniques and the extent to which they cover human-centricity properties.
Table 3
Analysis of different XAI techniques and properties of human-centricity (the last five columns are the human-centricity properties: Trustworthiness, Fairness, Transferability, Rationale, Causality)

XAI technique (ante-hoc/post-hoc) | Model specificity / explanation scope | Idea | Strength | Weakness | Application (target audience) | Trustworthiness | Fairness | Transferability | Rationale | Causality
LIME (post-hoc) | Model-agnostic / local | Uses a surrogate-based explanation to explain a complex model's prediction | No knowledge of the model's internals needed; offers an interpretable representation; provides local fidelity | Explanation quality is directly proportional to the quality of the surrogate fit; high computational cost | Text and image analysis (domain experts) | No | No | No | Yes | No
SHAP (post-hoc) | Model-agnostic / local or global | Analyses each feature's contribution to the model prediction to determine its significance; features that do not contribute receive zero | The model's predictions are broken down additively into components related to specific features | High computational complexity; SHAP values are symmetrical | Tabular data (AI layman) | Yes | No | No | Yes | No
Anchors (ante-hoc) | Model-agnostic / local | Finds a decision rule that sufficiently "anchors" a prediction | Can have high coverage and high precision; applicable to different domains | Computationally intensive; unbalanced classification problems lead to trivial decision rules | Text, image, and tabular data analysis (domain experts) | No | No | No | Yes | Yes
GraphLIME (post-hoc) | Non-linear model-agnostic / local | Computes the K most representative features from the N-hop neighbourhood of a node to produce a non-linear interpretable model | Can filter useless features and select informative features as explanations | Explains only node features, ignoring graph structure (nodes and edges); not suitable for graph classification problems | Graph analysis (domain experts) | No | No | No | Yes | No
LRP (post-hoc) | Model-specific / local or global | Exploits the network structure and redistributes the explanation from the model's output to the input, layer by layer | Uses additional internals of the model to provide a better explanation; high computational efficiency | Difficult to adapt to novel model architectures | NLP, computer vision, meteorology, games, video, morphing, EEG/fMRI analysis (domain experts) | No | No | No | Yes | No
DTD (post-hoc) | Model-agnostic / local | Employs first-order Taylor expansion to redistribute the neural network's output to the input variables layer-wise | Computationally efficient | Difficult to adapt to novel model architectures | Image analysis (domain experts) | No | No | No | Yes | No
PDA (post-hoc) | Model-agnostic / local | Measures the change in prediction when a feature is unknown to determine its relevance | Can map uncertainty in model prediction to model inputs | Computationally expensive; can suffer from saturated classifiers | Image analysis (domain experts) | No | No | No | Yes | No
TCAV (post-hoc) | Model-agnostic / global or local | Explains how neural activations are affected by the absence or presence of a user-specified concept | Usable by users without prior machine learning experience | Not suitable for tabular data, text, or shallower neural networks | Concept sensitivity in images, fairness analysis (domain experts) | No | No | No | Yes | No
XGNN (post-hoc) | Model-agnostic / local | Applies reinforcement learning to generate graphs that are important for GNN model predictions | Operates at the model level; no need to provide individual-level explanations | Absence of ground truth results in non-concrete explanations | Graph classification (AI layman) | Yes | No | No | Yes | Yes
ASV (post-hoc) | Model-agnostic / global | Uses cause-effect relationships to redistribute feature attributions so that a source feature that affects the model's predictions and the other dependent features receives higher attribution | Considers only features consistent with the causal features; no model retraining required for feature selection | Requires domain knowledge | Fairness analysis (domain experts) | No | Yes | No | Yes | Yes
Break-Down (post-hoc) | Model-agnostic / local | Uses greedy heuristics to identify and visualise the model's interactions and determine the final attributions based on a single ordering | Variable contributions are provided concisely | The part of the prediction attributed to a variable depends on the order in which the explanatory variables are set | Tabular data (AI layman) | No | No | No | Yes | Yes
Shapley Flow (post-hoc) | Model-agnostic / local | Uses a dependency structure between features for explanation, as in ASV, but assigns attributions to the relationships between features rather than to the features themselves | Carries rich information about the structure of relationships between features and explanation boundaries | Requires familiarity with the dependency structure and knowledge of the background case, i.e. the reference observation | Graphs | No | No | No | Yes | Yes
Textual Explanations of Visual Models (post-hoc) | Model-specific | Finds discriminative characteristics to generate explanations | More straightforward to analyse and verify than attribution maps | No way to verify that the generated explanation matches domain expertise; data artefacts can degrade performance and explanation quality | Text and image | No | No | No | Yes | No
Integrated Gradients (post-hoc) | Model-agnostic / local | Uses sensitivity and implementation-invariance properties to explain the model's predictions | Computationally efficient; uses gradient information at a few specific locations | Requires a baseline observation; suitable only for differentiable models; suffers from the gradient-shattering problem | Text and image analysis | No | No | No | Yes | No
Causal Models | Model-agnostic / global | Uses reinforcement learning theory for counterfactual explanations that provide causal chains up to the reward-receiving state | Addresses the "what", "how", and "why" questions | Applicable only to finite domains | Text | No | No | No | Yes | Yes
Meaningful Perturbations | Model-agnostic / local | Generates explanations based on the model's reaction to a perturbed input sample | Very flexible | Computationally expensive | Text, image, and tabular data analysis | No | No | No | Yes | No
EXplainable Neural-Symbolic Learning (X-NeSyL) | Model-agnostic / local | Aligns the symbolic knowledge of domain experts (a knowledge graph) with the neural network's explanations, corresponding to the human classification method | Boosts explainability and sometimes performance | Requires domain-specific knowledge | Text and image analysis | No | Yes | No | Yes | Yes
Saliency Maps (post-hoc) | Model-specific / local | Calculates feature importance from gradients, visualising and emphasising the important pixels that influence the final CNN decision | Can analyse the image regions that stand out across the whole dataset | Saturation problem | Image analysis (end users) | Yes | No | No | Yes | No
CAM (post-hoc) | Model-agnostic / local | A gradient-based explanation approach that uses global average pooling for class activation maps in CNNs | Identifies important regions in an image; can explain graph classification models | Requires a specific CNN architecture without fully connected layers; cannot be applied directly to node classification | Image analysis | No | No | No | Yes | No
DeepLIFT (post-hoc) | Model-agnostic / local | Propagates a reference (neutral or default) input to obtain a reference output and allocates importance scores from the difference between the actual and reference outputs | Can reveal dependencies | Not implementation-invariant, i.e. two identical models with different internal wiring could produce different outputs | Image analysis | No | No | No | Yes | Yes
Bayesian Rule Lists (ante-hoc) | Model-specific / global | Creates IF-THEN rule sets to provide an explanation | Reduces model space by using pre-mined rules | Rules can overlap | Text (end users) | Yes | No | No | Yes | Yes
As Table 3 shows, no existing XAI model has captured all the dimensions of human-centric event detection. In addition, most of the explanations provided by these XAI methods are logical axioms understandable by experts rather than common users. More research effort is needed to provide explanations that are comprehensible to users.
Moreover, more research effort is needed to invent explainable AI methods that address causal dependencies, as few XAI methods currently capture causal relationships (Holzinger et al. 2020). Research should also be geared towards XAI models that encourage contextual understanding and answer questions and counterfactuals such as "what if". This allows a human-in-the-loop approach where conceptual knowledge and human experience are utilised in the AI process (Holzinger et al. 2021). The existing XAI methods barely scratch the surface of the 'black box' (by stressing features or localities within an image, for instance) and do not provide explanations understandable to humans. This is quite different from how humans reason, evaluate similarities, make decisions, draw analogies, or make associations (Angelov et al. 2021). The best AI algorithms still lack conceptual understanding, so there is room for the XAI research community to contribute to this open problem.

4.1.4 Evaluation of explainable AI

Explainable AI is still in its infancy. As such, there is no standard agreement on how human-centric explanations should be evaluated (Li et al. 2022) due to the subjectivity of the explainability concept and the perceptions and interests of users (Carvalho et al. 2019). Because the model's inner workings are unknown, there is no ground truth for evaluating post-hoc explanations (Samek et al. 2019). Clearly defining evaluation goals and metrics is necessary to advance research on explainability, and additional efforts in this area are still required (Ribera and Lapedriza 2019). Most existing systems skip evaluation or provide only an informal one (Danilevsky et al. 2020). According to Mohseni et al. (2021), three human-centred evaluation methods for XAI are user satisfaction and trust, usefulness, and mental models. User interviews allow for the measurement of user trust and satisfaction. The user's performance can be used to determine usefulness, for example, in event detection with the aid of XAI systems. Mental models show how the user comprehends the system, and they can be measured by asking the user to predict the output of the model. Future research on human-centric XAI evaluation should centre on investigating innovative and effective methods for collecting subjective measures in user-experiment designs for explanation evaluation (Zhou et al. 2021).
The only available tool for the quantitative evaluation of explanation methods is Quantus (Hedstrom et al. 2022). Despite the numerous explanation strategies that have been developed, it is necessary to quantify their quality and determine whether or not they achieve the established goals of explainability. More research efforts are still needed to fill this gap. The common evaluation metrics for explainable AI are Accuracy, Fidelity, Sparsity, Contrastivity, and Robustness (Li et al. 2022).
Accuracy refers to the proportion of correct explanations and can be measured in two ways. First, it can be computed as the ratio of the important features identified by the explanation method to the truly important features (Luo et al. 2020). However, due to the absence of ground truth explanations in datasets, this is typically not possible in real-world situations. The accuracy measure is depicted as follows:
$$Accuracy=\frac{1}{N}\sum _{i=1}^{N}\frac{\left|{s}_{i}\right|}{|{S}_{i}{|}_{gt}}$$
where \(\left|{s}_{i}\right|\) represents the important features identified by the explainable method, \(|{S}_{i}{|}_{gt}\) is the truly important number of features, and \(N\) is the total number of samples. Second, accuracy can be derived from the perspective of the model's prediction (Yu et al. 2022).
Fidelity measures how faithful the provided explanations are to the prediction of the model (Yuan et al. 2021). The main idea behind fidelity is that removing salient features should degrade the performance of AI models, for example by producing higher prediction error rates or lower classification accuracy. Fidelity is formally defined as follows:
$$Fidelity=\frac{1}{N}\sum _{i=1}^{N}\left(f{({G}_{i})}_{{y}_{i}}-f{(\hat{G}_{i})}_{{y}_{i}}\right)$$
where \(f{({G}_{i})}_{{y}_{i}}\) is the model's prediction for the true class \({y}_{i}\) on the original input \({G}_{i}\), \(f{(\hat{G}_{i})}_{{y}_{i}}\) is the prediction when the identified salient features are removed from \({G}_{i}\), and \(N\) is the total number of samples.
Sparsity measures the proportion of important features identified by the explanation method (Pope et al. 2019), defined as follows:
$$Sparsity=\frac{1}{N}\sum _{i=1}^{N}\left(1-\frac{\left|{s}_{i}\right|}{|{S}_{i}{|}_{total}}\right)$$
where \(\left|{s}_{i}\right|\) represents the important features identified by the explainable method (a subset of \({S}_{i}\)), \(|{S}_{i}{|}_{total}\) is the total number of features in the model, and \(N\) is the total number of samples.
According to Pope et al. (2019), contrastivity is the ratio of the Hamming distance between binarised heat maps for negative and positive classes. Contrastivity is based on the assumption that an explanation method’s highlighted features should vary across classes.
Robustness looks at the consistency of explanations despite input perturbation/corruption, model manipulation, and adversarial attack (Zhang et al. 2021).
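The following is a minimal NumPy sketch of the Accuracy, Fidelity, and Sparsity metrics defined above; the toy feature sets and prediction scores are hypothetical placeholders, and a real evaluation would obtain them from an explanation method and a trained model.

```python
# Minimal sketch of the Accuracy, Fidelity, and Sparsity metrics defined above.
import numpy as np

def accuracy(identified, ground_truth):
    """Mean ratio of identified important features |s_i| to truly important features |S_i|_gt."""
    return np.mean([len(s) / len(gt) for s, gt in zip(identified, ground_truth)])

def fidelity(original_scores, masked_scores):
    """Mean drop in the true-class prediction after removing the identified salient features."""
    return np.mean(np.asarray(original_scores) - np.asarray(masked_scores))

def sparsity(identified, total_feature_count):
    """Mean proportion of features NOT marked as important: 1 - |s_i| / |S_i|_total."""
    return np.mean([1 - len(s) / total_feature_count for s in identified])

# Toy usage with three samples and ten features per sample (hypothetical values).
identified = [{0, 3}, {1}, {2, 5}]             # features each explanation marked important
ground_truth = [{0, 3}, {1, 4}, {2, 5}]        # truly important features per sample
original = [0.92, 0.85, 0.78]                  # true-class scores on the original inputs
masked = [0.40, 0.60, 0.30]                    # scores after removing the salient features

print(accuracy(identified, ground_truth))      # ~0.83
print(fidelity(original, masked))              # ~0.42
print(sparsity(identified, total_feature_count=10))  # ~0.83
```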

4.2 Explainable event detection

Social media, for example, accounts for a significant portion of human interaction (MacAvaney et al. 2019). Machine learning techniques are frequently the foundation of automatic event detection. Despite the superior performance of deep learning models (also known as black boxes), they lack transparency due to self-learning and intricate algorithms, which leads to a tradeoff between explainability and performance (Arrieta et al. 2020). This challenge necessitated the development of XAI to explain black box models without sacrificing performance (Bunde 2021; Gunning and Aha 2019; Machlev et al. 2022).
Anomaly event detection, for instance, is context-dependent: anomalies can only be detected through interaction with end users, domain expertise, and algorithmic insight. Explanation, interpretation, and user involvement can provide the missing link to transform complex anomaly detection algorithms into real-life applications (Sejr and Schneider-Kamp 2021).
Event detection is a good medium for reporting breaking news, terror attacks, outbreaks of communicable diseases, protests, election campaigns, etc. (Win and Aung 2018; Kolajo and Daramola 2017). An event has largely been defined as a major incident that occurred at a specific location and time (Panagiotou et al. 2016; Kolajo et al. 2022). From this definition, we can infer only four of the dimensions of an explainable event (5W1H): what, who, where, and when. The definition does not capture the other two dimensions (why and how) and is therefore incomplete for the purpose of providing an explainable event. As established earlier, for an event to be humanly understandable, it must cover all six (5W1H) dimensions. A proper definition that fits the 5W1H concept, as given by Chen and Li (2019), is that an event is "an action or a series of actions or change that happens at a specific time due to specific reasons, with associated entities such as objects, humans, and locations." However, the issue with the social media feeds from which events are detected is incompleteness. Since there is no style restriction on this user-generated content, the information is usually incomplete. As such, it is difficult to infer the six dimensions from social media feeds without using external knowledge sources. Hence, there is a need for semantics and ontologies to complement the information provided by social media (Ai et al. 2018).
A truly explainable event detection system must answer the 5W1H questions in a human-comprehensible explanation (Chakman et al. 2020; Chen and Li 2019). A human-comprehensible explanation of events detected from social media streams cannot be achieved without incorporating domain knowledge. In a broader sense, social media feed characteristics such as short messages, grammatical and spelling errors, mixed languages, ambiguity, and improper sentence structure necessitate harnessing the potential of semantics and semantic web technologies for improved human comprehension (Ai et al. 2018; Cherkassky and Dhar 2015; Kolajo et al. 2020; Islam et al. 2021). Existing event detection systems that have tried to provide explanations used only the limited information in the social media streams, and none of them has captured the six dimensions of 5W1H to provide explanations.

4.2.1 Formal definition of explainable event detection

An event is considered complete when all the 5W1H components can be deduced and the questions of who did what, when, where, why, and how are answered. We define these components according to Muhammed et al. (2021).
Definition 1
(Event e). An event refers to a natural, political, social, or other occurring phenomenon at a specific location Le and time Te, with a semantic textual description Se, one or more participants Pe, a cause Ce, and a method Me. It is depicted as follows:
$$e=\left({L}_{e},{T}_{e},{S}_{e},{P}_{e},{C}_{e},{M}_{e}\right)$$
where Le is the spatial information about the location (where) of the event (for example, longitude-latitude pairs); Te is the temporal information about when the event occurs, such as the content creation time; Se represents the textual semantic description of what occurred, such as a name, title, tag, or content description; Pe stands for the participants (such as a person or organisation) involved in the event; Ce is a causal description that explains why the event occurred (or which event is causing it) and thus demonstrates the relationship between two events, with event1 identified as the cause and event2 as the effect; and Me is the textual detail about how the event was carried out.
Definition 2
(Spatial Dimension (Le)). The event spatial dimension Le defines the location where the event was detected using latitude (φ), longitude (λ) and altitude (h). Formally, it is represented as in Eq. 5:
$${L}_{e}=<\phi ,\lambda ,h>$$
Definition 3
(Temporal Dimension (Te)). It indicates the date/time when an event occurred. We understand that social media platforms usually have several timestamps, such as when an event was shared, uploaded, or modified. However, we want to stick to the specific time an event occurred in this paper because this will provide the actual time the event occurred.
Definition 4
(Semantic Dimension (Se)). The semantic dimension contains the concept of the description of the event detected. It is usually represented as a graph with three attributes as presented in Eq. 6:
$$G=(N,E,R)$$
where N is the collection of concepts represented by nodes, E is the collection of edges connecting nodes, and R is the collection of semantic relationships.
Definition 5
(Participant Dimension (Pe)). Event participant dimension Pe refers to an actor (e.g., person/organisation) participating during the event. Extracting a participant can be done by applying Named Entity Recognition on the content description.
Definition 6
(Causal/Reason Dimension (Ce)). A causal dimension Ce is a set of causal knowledge representing the causes of the effect. The causal dimension determines the relationships among events as cause and effect. Causal dimension is represented in Eq. 7:
$${C}_{e}={<E}_{i},{E}_{j},{R}_{n}>$$
where \({C}_{e}\) represents causal dimension; \({E}_{i}\) represents the causal event, \({E}_{j}\) represents the effect of the causal event (\({E}_{i}\)), and \({R}_{n}\) represents the relationship among events.
Definition 7
(Manner Dimension (Me)). A manner dimension Me is defined as a set of textual information representing how an event was performed using the method (or How) Mi. Manner dimension is represented in Eq. 8:
$${M}_{e}= <{E}_{i},{M}_{i}>$$
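The tuple structure of Definitions 1–7 can be made concrete with a small data model. The following is a minimal Python sketch with hypothetical field names; it illustrates the 5W1H event tuple e = (Le, Te, Se, Pe, Ce, Me) under the assumptions above and is not an implementation from any surveyed system.

```python
# Minimal data model for the 5W1H event tuple e = (Le, Te, Se, Pe, Ce, Me).
from dataclasses import dataclass
from datetime import datetime
from typing import List, Tuple

@dataclass
class SpatialDimension:          # Le: where (Definition 2)
    latitude: float
    longitude: float
    altitude: float = 0.0

@dataclass
class SemanticDimension:         # Se: what, as a concept graph G = (N, E, R) (Definition 4)
    nodes: List[str]             # N: concepts
    edges: List[Tuple[str, str]] # E: edges connecting concepts
    relations: List[str]         # R: semantic relationships

@dataclass
class CausalDimension:           # Ce = <Ei, Ej, Rn>: why (Definition 6)
    cause_event: str
    effect_event: str
    relation: str

@dataclass
class Event:                     # e = (Le, Te, Se, Pe, Ce, Me) (Definition 1)
    location: SpatialDimension   # where
    time: datetime               # when (Te, Definition 3)
    semantics: SemanticDimension # what
    participants: List[str]      # who (Pe, Definition 5), e.g. extracted via NER
    cause: CausalDimension       # why
    manner: str                  # how (Me, Definition 7)
```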

4.2.2 The essence of explainable event detection

No doubt, the existing machine learning tools and algorithms for event detection have recorded success. However, most machine learning approaches use latent features to detect events effectively, yet they cannot explain why a piece of social media text is classified as an event (Shu et al. 2019). Capturing an explainable event's six dimensions (5W1H) is important because new knowledge and insights that are originally hidden from users and practitioners can be derived from such explanations. In addition, extracting explainable features can further improve event detection performance and gain users' trust.
Explainable event detection can generate human-understandable explanations without detailed knowledge of the underlying models used for event detection (Ribeiro et al. 2016). Explainable event detection can increase users' comprehensibility and trust (Adadi and Berrada 2018).
The need for explainable systems, according to Samek et al. (2017), is to (1) verify the system or comprehend the rules governing the decision-making process to identify biases; (2) improve the system and avoid failures by comparing different models with the intended model and dataset; (3) extract the distilled knowledge from the system by learning from it; and (4) comply with applicable legislation by responding to legal inquiries and informing those whom the model's decisions will impact. By revealing the logic behind a model's decision, an explanation can be used to prevent errors and ascertain the appropriate use of certain criteria. An explanation may, however, force trade secrets to be revealed (Novelli et al. 2023).

4.2.3 Human-centric explainable event detection

Machine learning systems that can provide human-centred explanations for decisions or predictions have recently gained much attention. No matter how good and efficient a model is, it is difficult for users or practitioners to trust it if they cannot understand the model or its behaviours (Mishima and Yamana 2022). In event detection, explainability is crucial for practitioners and users to ensure that models are widely accepted and trusted. Human-centric explainable event detection will achieve trustworthiness, explainability, and reliability, which are currently lacking. Achieving such a human-centric event detection system will necessitate designing and developing more explainable models. Optimising models or regularisers is only worthwhile if they can solve the human-centric task of providing explanations (Narayanan et al. 2018). Incorporating human-centric explainability in event detection systems is significant for building a decision-making process that is more trustworthy and sustainable (Vemula 2022). Unfortunately, even the developer who wrote the code for a model often does not understand why a decision was made and, therefore, cannot assure the user that the model can be trusted. This gap necessitates the development of human-centric explainable event detection models to promote trust and wider adoption.

4.3 Semantics-based explainable AI for event detection

The growing integration of AI capabilities across consumer applications and industries has led to a high demand for explainability. XAI is an emerging approach that promotes accountability, credibility, and trust. It combines machine learning with explainability techniques to show how, where, when, by whom, what, and why a decision, such as a detected event, is made (Ammar and Shaban-Nejad 2020). One way to achieve this is by leveraging semantic information (Donadello and Dragoni 2021) or formal ontologies to provide contextual knowledge (Confalonieri et al. 2021; Ribeiro and Leite 2021). A language with human-understandable concepts and meaningful relationships between those concepts is required to justify the output of XAI. The result is a comprehensible description of the reasoning behind the outputs provided by the XAI system. In other words, an ontology can be used to define concepts and relations that convey justifications of XAI outputs. However, presenting the internals of the XAI system in a human-understandable way requires mapping them to the existing concepts in the ontology. Justification of XAI output can be achieved by using logic-based reasoning methods coupled with the ontology and the observations made with respect to each mapped concept. When contextual information provided by an ontology fortifies AI, the result is greater trustworthiness; such AI becomes easier to train with minimal maintenance (Battaglia et al. 2018). The requirements for semantics-based explainable event detection and possible ways semantic technology can be integrated into XAI for event detection are discussed subsequently.

4.3.1 Requirement for semantic-based explainable event detection

Many early AI systems used the expert system (i.e., rule-based) approach to provide explanations that address the what, why, and how of AI system decisions. However, such systems did not address user context when they generated explanations. Today, there are XAI approaches (based on deep learning) that focus on explaining the underlying mechanisms of these black boxes. While this is commendable, it is not enough to provide tailored, personalised, and trustworthy explanations to the users or consumers of AI systems (Yang et al. 2022). Moreover, machine learning models use scores for their predictions. While a score may be useful to gain some confidence level, it lacks context and is therefore inadequate for explanation without additional information. Semantic web and reasoning technologies are well suited to fill this gap (Chari et al. 2020). AI systems also need to include provenance to improve confidence and trust. The strength of the semantic web combined with AI will contribute significantly to XAI systems.
A user-centric semantic-based explanation should be (1) understandable: explanation should include capabilities that will define terms for unfamiliar terminologies; (2) appeal to the user: a resourceful explanation should appeal to the user’s current need and mental cognition; (3) adapt to users’ context: explanations should be tailored to the current user’s context and scenario by leveraging on the user information; and (4) include provenance: a property that is either absent or has not yet received the proper emphasis. Provenance aims to include domain knowledge utilised along with the method used in obtaining the knowledge (Chari et al. 2020).
Doran et al. (2017) presented a better perspective on explainability. They opine that rather than focusing solely on mathematical mappings, XAI systems should provide justifications or reasons for their outputs. In addition, they argue that to produce explanations that humans can understand, truly explainable AI systems must use reasoning engines that run on knowledge bases containing explicit semantics. This notion is also supported by Chari et al. (2020). Using logical reasoning on the model's output alone is inadequate, as the explanation is then guided only by the knowledge base axioms; no explicit link is made between the knowledge base concepts and the model's learned features. Hence, logical reasoning must be tied to semantics to obtain a human-centric explanation, as this links the model's output with human concepts. Semantics-based XAI can present a sufficient explanation comprehensible by a human. We believe that human-comprehensible explanations cannot be achieved without domain knowledge and that data analysis alone is insufficient for full-fledged explainability. There is a need to integrate semantic web technologies with AI systems to provide explanations in natural language (Ai et al. 2018; Lecue 2020).
Linking explanations to ontologies (i.e., structured knowledge) has multiple advantages, such as enriched explanations with semantic information, facilitation of effective knowledge transmission to users, and provision of potential for customisation of the level of specificity and generality of explanations to specific user profiles (Hind 2019). Integrating a domain background knowledge, such as a knowledge graph with AI models, can provide more insightful, meaningful, and trustworthy explanations (Tiddi and Schlobach 2022). Semantic technologies provide easy access to web knowledge sources. In contrast, symbolic representations such as knowledge bases, ontologies, and graph databases formalise and capture data and knowledge for specific or general domain knowledge.
XAI systems aim to provide a link between semantic and learned features. The connection between an AI system, or more specifically a DNN model, and its semantic features can be formalised by defining the comprehension axiom, presented as follows:
Definition 8
(Comprehension axiom). Given a First-Order Logic (FOL) language with its set of predicate symbols, a comprehension axiom is of the form presented in Eq. 9 (Donadello and Dragoni 2021):
$$\bigwedge _{i=1}^{k}{O}_{i}\left(x\right)\leftrightarrow \bigwedge _{i=1}^{l}{A}_{i}\left(x\right)$$
where \({\left\{{O}_{i}\right\}}_{i=1}^{k}\) is the set of output symbols of the AI model and \({\left\{{A}_{i}\right\}}_{i=1}^{l}\) is the corresponding set of semantic attributes or features. Let us see how this applies to an AI system's main tasks, such as regression, multiclass classification, and multi-label classification.
Regression
A predicate \({O}_{i}\) computed by the model could be, for example, the asking price or the real value of a house; the semantic features \({A}_{i}\) are the properties of interest when buying a house.
Multiclass classification
Here \({O}_{i}\) represents a class, for example, pounded yam with okra soup; the semantic features \({A}_{i}\) are the ingredients contained in the recognised dish.
Multi-label classification
Here \({O}_{i}\) can be part of the list of predicates computed by the model, for example, dinner and party; the semantic features \({A}_{i}\) are objects in the scene, such as tables, pizzas, persons, and balloons.
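As a hypothetical instantiation of Eq. 9 for the multi-label case above, the output predicates dinner and party could be tied to scene-level semantic attributes as follows:
$$Dinner\left(x\right)\wedge Party\left(x\right)\leftrightarrow Table\left(x\right)\wedge Pizza\left(x\right)\wedge Person\left(x\right)\wedge Balloon\left(x\right)$$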

4.3.2 Integrating semantics into explainable event detection models

Integrating semantics into XAI aims to facilitate situational understanding by human analysts (Holzinger et al. 2022b). Exploiting semantic relationships between model outputs and human-in-the-loop processes can improve generated explanations (Harbone et al. 2018). It has already been established that for AI solutions to reach their full potential in terms of usability, they must be explainable, which requires semantic context (Pesquita 2021). Semantic technologies and artefacts such as knowledge graphs and ontologies can provide a human-centric explanation.
While progress is being made in the AI community to address explainability issues, AI systems are still far from self-explainability. A self-explainable system can adapt automatically to any machine learning algorithm, data, model, application, user, and context (Lecue 2020). Semantic representations and connections in the form of knowledge graphs such as DBpedia, OpenCyc, Wikidata, Freebase, YAGO, NELL, ConceptNet, WordNet, and the Google Knowledge Graph can be harnessed to move XAI closer to human comprehension (Tiddi and Schlobach 2022). Knowledge graphs natively expose connections and relations, encode contexts, and support inference and causation (see Fig. 4). Integrating knowledge graphs with XAI systems therefore holds great potential (d'Amato 2020). Knowledge-driven structures can adapt to constraints, variables, and search spaces. Knowledge graphs can also capture knowledge from heterogeneous domains, which makes them strong candidates for explanation (Pakti et al. 2019). In addition, semantic descriptions inspired by knowledge graphs can bridge the gap left by brute-force machine learning approaches to text analysis and thereby improve explainability. This has yielded positive results on natural language processing tasks such as event extraction, relation extraction, and text classification (Ribeiro et al. 2018).
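As an illustration of how such a knowledge graph can be tapped, the short Python sketch below queries the public DBpedia SPARQL endpoint for the abstract of an event concept so that it can be attached to an explanation. The chosen resource (dbr:Earthquake) and property (dbo:abstract) are only examples, and the use of the SPARQLWrapper package is an assumption of this sketch rather than a tool employed by the surveyed works.

```python
# Illustrative only: pull background context for an event concept from
# DBpedia so it can be attached to an explanation. Requires the
# SPARQLWrapper package and network access to the public endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

def fetch_abstract(resource="Earthquake", endpoint="https://dbpedia.org/sparql"):
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(f"""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        PREFIX dbr: <http://dbpedia.org/resource/>
        SELECT ?abstract WHERE {{
            dbr:{resource} dbo:abstract ?abstract .
            FILTER (lang(?abstract) = "en")
        }}
    """)
    sparql.setReturnFormat(JSON)
    bindings = sparql.query().convert()["results"]["bindings"]
    return bindings[0]["abstract"]["value"] if bindings else None

if __name__ == "__main__":
    print(fetch_abstract("Earthquake"))
```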
With knowledge graph resources, event causality inference can be achieved. The causal relations among events can be inferred in several ways. The simplest approach is to determine the likelihood that event y occurs after event x has occurred (Zhao 2021). The cause-effect relationship can also be formulated as a classification task, where the cause and effect candidate events are the inputs, enriched with contextual information from knowledge sources (Kruengkrai et al. 2017). Other methods employ NLP techniques to identify causal cues such as causal prepositions, connectives, and verbs (Cekinel and Karagoz 2022). Cause-and-effect identification using these methods often generalises poorly. One way to improve this is to adopt an ontology or external knowledge base to establish the underlying relationships among event candidates. The similarity between two cause-and-effect pairs \(\left({c}_{i},{e}_{i}\right)\) and \(\left({c}_{j},{e}_{j}\right)\) is computed as:
$$\sigma \left(\left({c}_{i},{e}_{i}\right),\left({c}_{j},{e}_{j}\right)\right)=\frac{\sigma \left({c}_{i},{c}_{j}\right)+\sigma \left({e}_{i},{e}_{j}\right)}{2}$$
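A direct implementation of this averaging scheme is sketched below. The `concept_similarity` function is a stand-in for whatever ontology-based measure an actual system would use (e.g., a WordNet or knowledge-graph similarity); the Jaccard overlap shown here is purely illustrative.

```python
# Minimal sketch of the pairwise similarity above. `concept_similarity`
# is a hypothetical placeholder: here a Jaccard overlap of attribute sets,
# but any ontology-based measure could be plugged in.

def concept_similarity(a, b):
    """Hypothetical sigma(a, b): Jaccard overlap of two attribute sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def pair_similarity(cause_i, effect_i, cause_j, effect_j):
    """sigma((c_i, e_i), (c_j, e_j)) = (sigma(c_i, c_j) + sigma(e_i, e_j)) / 2."""
    return (concept_similarity(cause_i, cause_j)
            + concept_similarity(effect_i, effect_j)) / 2.0

# Example: two cause-effect candidates described by attribute sets.
print(pair_similarity({"rain", "storm"}, {"flood"},
                      {"rain", "wind"}, {"flood", "damage"}))
```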
One method of incorporating semantics into XAI involves the following steps: ontology selection, semantic annotation, semantic integration, and semantic explanation. Ontology selection determines the optimal set of ontologies that adequately describe the data. Semantic annotation links the data to the ontologies. Semantic integration establishes links between the ontologies, while semantic explanation exploits the background knowledge afforded by the resulting knowledge graphs (Pesquita 2021).
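The following skeleton, loosely following the four steps described by Pesquita (2021), shows how such a pipeline could be organised in code. All function bodies, names, and data structures are simplified placeholders of ours rather than an implementation from the cited work; a real system would call an ontology repository, an annotator, and a knowledge graph.

```python
# Skeleton of the four-step semantic pipeline (after Pesquita 2021).
# Every function body is a simplified placeholder.

def select_ontologies(dataset_terms, candidate_ontologies):
    """Ontology selection: keep ontologies covering the most dataset terms."""
    coverage = {o: len(terms & dataset_terms) for o, terms in candidate_ontologies.items()}
    return [o for o, c in sorted(coverage.items(), key=lambda kv: -kv[1]) if c > 0]

def annotate(tokens, ontology_terms):
    """Semantic annotation: link raw tokens to ontology terms."""
    return {t: t for t in tokens if t in ontology_terms}

def integrate(annotations, cross_ontology_mappings):
    """Semantic integration: follow mappings between ontologies."""
    return {t: cross_ontology_mappings.get(term, term) for t, term in annotations.items()}

def explain(prediction, integrated_terms, background_knowledge):
    """Semantic explanation: attach background facts to the prediction."""
    facts = [background_knowledge[t] for t in integrated_terms.values() if t in background_knowledge]
    return f"Predicted '{prediction}' because of: {'; '.join(facts) or 'no matching background facts'}"

if __name__ == "__main__":
    ontologies = {"FoodOn": {"yam", "okra"}, "EnvO": {"flood"}}
    chosen = select_ontologies({"yam", "okra"}, ontologies)
    ann = annotate(["yam", "okra", "tasty"], set().union(*(ontologies[o] for o in chosen)))
    integrated = integrate(ann, {"okra": "okra_pod"})
    print(explain("pounded_yam_with_okra_soup", integrated,
                  {"yam": "yam is a starchy tuber", "okra_pod": "okra is a green vegetable"}))
```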

5 Findings from the Survey

AI has made considerable strides in the last ten years in solving a wide range of problems. This apparent success, however, has been accompanied by a rise in model complexity and the use of opaque black-box models (Saeed and Omlin 2023). These issues call for human-centric XAI that will enable end users to understand, trust, and effectively operate the emerging breed of AI systems. So far, findings from the literature reveal that more needs to be done in developing robust and human-centric explainable AI, improving explainability, establishing evaluation metrics for XAI, and advancing explainable event detection.
Robust and human-centric explainable AI
Interpretability has received the majority of attention in existing research, and additional work is still required to develop robust explainable AI. In-depth knowledge of XAI and practical experience with XAI methodologies are prerequisites for using AI to make well-informed decisions. So far, the explanations provided by XAI methods are understandable only by experts, not by common users (Holzinger et al. 2021). More research effort is needed to provide human-comprehensible explanations for AI systems.
Improved explainability
AI systems are still far from self-explaining, despite the AI community's progress in addressing explainability issues. According to Lecue (2020), self-explainability means adapting automatically to any machine learning algorithm, data, model, application, user, or context. The synergistic incorporation of XAI and semantic technologies has been recognised as one of the most promising ways of extending the explainability of AI and machine learning (ML) systems (Pesquita 2021). Combining semantic layers from knowledge graphs and ontologies could lead to explainability in AI, and the integration of data and AI results with knowledge graphs and ontologies can serve as background knowledge for XAI applications. According to Pesquita (2021), this kind of integration can provide the semantic contexts necessary for human-centric explanation. So far, few XAI methods have captured causal relationships (Holzinger et al. 2020); hence there is a need for explainable AI methods that address causal dependencies. XAI models that encourage contextual understanding and answer counterfactual questions such as "what-if" are currently lacking. Also, more XAI methods are needed that emulate how humans reason, evaluate similarities, make decisions, draw analogies, or make associations (Angelov et al. 2021). The existing XAI methods barely scratch the surface of the 'black box' (by stressing features or localities, for instance, within an image) and do not provide explanations understandable to humans.
XAI evaluation
Explainable artificial intelligence is still at an early stage. As a result, there is no universally accepted method for evaluating human-centred explanations (Li et al. 2022), owing to the subjectivity of the explainability concept and the varying expertise and interests of users (Carvalho et al. 2019). Most existing frameworks skip evaluation or provide only an informal assessment (Danilevsky et al. 2020). Mohseni et al. (2021) identify three human-centred evaluation techniques for XAI: user satisfaction and trust, usefulness, and mental models. The trust and satisfaction of the user can be measured through user interviews. The user's performance can be used to determine usefulness, for instance, event detection with the aid of XAI systems. By asking the user to predict the model's output, mental models demonstrate how well the user comprehends the system. The primary focus of research efforts ought to be the creation and standardisation of metrics for evaluating the quality of explanations produced by XAI systems. Future human-centric XAI evaluation research should centre on innovative methods for collecting subjective measures for explanation evaluation and on effective user experiment designs.
Explainable event detection
Various event detection models have been described in the literature, but less effort has been made to provide human-centric explainable event detection, and none of the few attempts has covered the 5W1H dimensions required for it (Evans et al. 2022; Khan et al. 2021). A truly explainable event detection framework that can provide a human-readable answer to the 5W1H questions is still required. A blueprint for a human-centric explainable event detection framework is currently unavailable, and no existing explainable event detection framework is at once fair, transparent, trustworthy, reliable, and transferable. A standard human-centric event detection framework must have these essential characteristics together with the properties of the 5W1H dimensions.
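As a rough illustration of what such a framework would have to populate, the sketch below defines a hypothetical 5W1H event container; the field names and the completeness check are our own illustrative assumptions rather than an existing standard.

```python
# Hypothetical container for the 5W1H dimensions an explainable event
# detection framework would need to populate; field names are illustrative.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ExplainableEvent:
    who: Optional[str] = None    # actor(s) involved
    what: Optional[str] = None   # event type / description
    when: Optional[str] = None   # time or time window
    where: Optional[str] = None  # location
    why: Optional[str] = None    # inferred cause (e.g. from a knowledge graph)
    how: Optional[str] = None    # manner / mechanism

    def is_complete(self) -> bool:
        """An event is complete only when all six dimensions are filled."""
        return all(v is not None for v in asdict(self).values())
```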

6 Open issues and future research directions

This section presents the open issues and future research directions for XAI and explainable event detection.

6.1 Need for human-centric explanation in event detection

While human-centric explainable AI is promising, theory-driven XAI is only beginning to show its future potential (Antoniadi et al. 2021). Many topics in social, cognitive, and behavioural theories remain unexplored. Various stakeholders must map out the domain and shape the AI discourse (Langer et al. 2021). Operationalising the philosophical, methodological, and technological human-centric views of XAI will be challenging. In other words, we still do not know how to reliably translate the systems we design into real-world, socially and culturally situated AI systems (Saeed and Omlin 2023). Researchers from the social sciences, psychology, law, and human-computer interaction must collaborate for human-centric explainable AI to succeed (Adadi and Berrada 2018). It has also been argued that value sensitive design (VSD) will influence how such solutions are designed in the future.
Although several explanation techniques exist, the choice of explanation depends on its purpose and audience. The people who create AI systems, the decision-makers who use them, and those who are ultimately affected by the results of those decisions should all be considered (Arrieta et al. 2020). To communicate effectively with a wide range of audiences, precise and relevant explanations must be created (Belle and Papantonis 2021). Future AI must include qualities such as transparency, trustworthiness, and comprehensibility to be widely adopted. A truly human-centric explainable event detection system should provide a human-readable response to the 5W1H questions. Future research should focus on harnessing the human-centric properties required for explainable event detection.

6.2 Need for robust explainable approaches

Current explainability strategies mainly address model interpretability and the post-hoc interpretation of a model's input data or conclusions. More knowledge or intuition is required to fully understand AI systems' inputs, operations, and outputs. Building stronger model-specific techniques is one area that could receive more attention in the future (Belle and Papantonis 2021). Exploring this avenue can produce ways to exploit a model's unique properties in order to provide explanations, potentially boosting fidelity and enabling closer examination of the model's internal workings instead of only explaining its conclusions. This would presumably also make it easier to develop efficient algorithmic implementations, because expensive approximations would no longer be needed.
Investigating further linkages between AI and the semantic web or statistics, which would allow a variety of well-researched techniques to be used, is another promising option for overcoming robustness difficulties (Kumar et al. 2020). Additionally, knowledge-graph-inspired semantic descriptions can fill the gaps left by brute-force machine learning techniques and increase explainability. There is a need for robust explainable event detection approaches with human-in-the-loop properties that yield better and more understandable explanations.

6.3 Need for a human-centric explainable event detection framework

A general framework for explainable event detection is required since it would direct the creation of comprehensive explainable strategies (Adadi and Berrada 2018). Future research should concentrate on end-to-end frameworks that are explainable from conception to implementation (Saeed and Omlin 2023).
Data quality communication should be considered when creating design elements (Markus et al. 2021). Knowledge infusion (Messina et al. 2022), rule extraction (He et al. 2020), approaches that support explaining the training process (Gunning et al. 2019), explainability for models and model comparison (Chatzimparmpas et al. 2020), and interpretability for natural language processing (Madsen et al. 2022) should all be specified during the development stage. Human-machine teaming (Islam et al. 2021), security (Liang et al. 2021), machine-machine explanation (Weller 2019), privacy (Longo et al. 2020), planning (Adadi and Berrada 2018), and improving explanations with ontologies (Burkart and Huber 2021) should all be considered during the deployment stage.

6.4 Need for standard evaluation metrics for explanation methods

Quantus (Hedstrom et al. 2022) is currently the only dedicated tool available for the quantitative evaluation of explanation methods. Accuracy, fidelity, sparsity, contrastivity, and robustness are the typical assessment criteria for explainable AI (Li et al. 2022). Although many explanation strategies have been created, it is important to evaluate their effectiveness and whether they meet predetermined standards for explainability. Further research is still required to close this gap.
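For illustration, the sketch below shows simplified, library-agnostic versions of two of these criteria, fidelity (via feature deletion) and sparsity, for a feature-importance explanation. The exact definitions vary across the cited works, so these should be read as assumptions rather than standard formulas.

```python
# Simplified, library-agnostic sketches of two common criteria.
# `model` is any callable returning a scalar; `explanation` is a
# feature-importance vector. Both definitions are illustrative.
import numpy as np

def fidelity(model, x, explanation, top_k=3, baseline=0.0):
    """Remove the top-k most important features and measure how much the
    model's output changes; a larger change suggests a more faithful explanation."""
    x = np.asarray(x, dtype=float)
    x_perturbed = x.copy()
    top_features = np.argsort(-np.abs(explanation))[:top_k]
    x_perturbed[top_features] = baseline
    return abs(model(x) - model(x_perturbed))

def sparsity(explanation, eps=1e-6):
    """Fraction of features with (near-)zero attribution; higher means sparser."""
    explanation = np.asarray(explanation)
    return float(np.mean(np.abs(explanation) < eps))

# Example with a toy linear model and a hand-crafted attribution vector.
print(fidelity(lambda v: float(v @ np.array([1.0, 2.0, 0.5])),
               [1, 1, 1], [0.3, 0.9, 0.1], top_k=1))
print(sparsity([0.0, 0.9, 0.0]))
```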
Much past research has focused on creating new explainability techniques without considering whether these methods satisfy stakeholders (Adadi and Berrada 2018). In reality, only a small percentage of studies proposing explainability approaches evaluate the proposed techniques. Future studies on human-centric XAI assessment should identify novel approaches that efficiently gather subjective measures to evaluate experimental user designs (Zhou et al. 2021).

7 Conclusion

In this paper, we have provided a comprehensive survey of human-centric explainable AI, explainable event detection, and semantics-based explainable event detection by answering research questions on the characteristics of human-centric explanations, the state of explainable AI, methods for human-centric explanations, the essence of human-centricity in explainable event detection, research efforts in explainable event detection solutions, and how semantics can be integrated into explainable event detection to achieve human-comprehensible event detection solutions. We have argued that semantics-based XAI can provide a human-centric explanation in a more comprehensible manner. To realise these objectives, we reviewed papers relevant to the topics of interest. We found that current explainability techniques mainly focus on model interpretability; additional information or intuition is needed to explain AI systems' inputs, workings, and conclusions. In addition, none of the existing event detection systems captures the six dimensions of 5W1H to provide explanations while simultaneously emphasising human-centricity.
While human-centric explainable AI is hugely promising, theory-driven XAI is just beginning to display signs of its future potential. Many areas of social, cognitive, and behavioural theories are yet to be explored and remain open for further research. There is a need to chart the domain and shape the discourse of AI with diverse stakeholders. The major challenge will lie in how to operationalise the human-centric perspectives of XAI at the conceptual, methodological, and technical levels. AI could be explained by combining semantic layers from knowledge graphs and ontologies. The integration of data and AI results with knowledge graphs and ontologies can serve as background knowledge for XAI applications. Such integration can provide the semantic contexts necessary for a human-centric explanation, greatly impacting the adoption of event detection solutions by users and decision-makers. In other words, future AI must exhibit properties such as trustworthiness, transparency, and explainability for such systems to be widely adopted. Making informed decisions with AI will be premised on a thorough understanding of XAI and hands-on expertise in XAI techniques.

Acknowledgements

The work is supported by the National Research Foundation (NRF), South Africa, Cape Peninsula University of Technology, South Africa, and Federal University Lokoja, Nigeria.

Declarations

Financial interests

The authors declare that there is no financial interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
Abdul A, Vermeulen J, Wang D, Lim BY, Kankanhalli M (2018) Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–8), Montreal QC, Canada, April 21–26. https://doi.org/10.1145/3173574.3174156
Ai Q, Azizi V, Chen X, Zhang Y (2018) Learning heterogeneous knowledge base embeddings for explainable recommendation. Algorithms 11(9):137
Ammar N, Shaban-Nejad A (2020) Explainable artificial intelligence recommendation system by leveraging the semantics of adverse childhood experiences: proof of concept prototype development. JMIR Med Inf 8(11):e18752
Arya V, Bellamy RKE, Chen P, Dhurandhar A, Hind M, Hoffman SC, Houde S, Liao QV, Luss R, Mojsilovic A, Mourad S, Pedemonte P, Raghavendra R, Richards J, Sattigeri P, Shanmugam K, Singh M, Varshney KR, Wei D, Zhang Y (2020) AI explainability 360: an extensible toolkit for understanding data and machine learning models. J Mach Learn Resour 21:1303
Belle V, Papantonis I (2021) Principles and practice of explainable machine learning. Mach Learn Front Big Data 4:688969
Bhatt U, Xiang A, Sharma S, Weller A, Taly A, Jia Y, Ghosh J, Puri R, Moura JMF, Eckersley P (2020) Explainable machine learning in deployment. In: FAT* '20, Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 648–657). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3351095.3375624
Bond RR, Mulvenna M, Wang H (2019) Human-centered artificial intelligence: weaving UX into algorithmic decision making. RoCHI 2019: International Conference on Human-Computer Interaction (pp. 2–9). Bucharest, Romania
Bunde E (2021) AI-assisted and explainable hate speech detection for social media moderators – a design science approach. Proceedings of the 54th Hawaii International Conference on System Sciences (pp. 1264–1273). 5–8 January, Grand Wailea, Maui, Hawaii
Carvalho DV, Pareira EM, Cardoso JS (2019) Machine learning interpretability: a survey on methods and metrics. Electronics 8:832
Chakman K, Swamy SD, Das A, Debbarma S (2020) 5W1H-based semantic segmentation of tweets for event detection using BERT. In: Bhattacharjee A, Borgohain S, Soni B, Verma G, Gao XZ (eds) Machine Learning, Image Processing, Network Security and Data Sciences. MIND 2020. Communications in Computer and Information Science 1240:57–72. Springer, Singapore. https://doi.org/10.1007/978-981-15-6315-7_5
Chari S, Gruen DM, Seneviratne O, McGuinness DL (2020) Foundations of explainable knowledge-enabled systems. arXiv:2003.07520v1 [cs.AI] 17 Mar 2020
Cherkassky V, Dhar S (2015) Interpretation of black-box predictive models. Measures of complexity. Springer, pp 267–286
Damfeh EA, Wayori BA, Appiahene P, Mensah J, Awarayi NS (2022) Human-centered artificial intelligence: a review. Int J Advancements Technol 13(8):1000202
Danilevsky M, Qian K, Aharonov R, Katsis Y, Kawas B, Sen P (2020) A survey of the state of explainable AI for natural language processing. Proc. 1st Conf. of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th Int'l Joint Conf. on Natural Language Processing 1:447–459
Das A, Rad P (2020) Opportunities and challenges in explainable AI (XAI): a survey. arXiv:2006.11371v2 [cs.CV] 23 Jun 2020
Doran D, Schulz S, Besold TR (2017) What does explainable AI really mean? A new conceptualisation of perspectives. In: Besold TR, Kutz O (eds) Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML 2017, co-located with the 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017), Bari, Italy
Dosilovic FK, Brcic M, Hlupic N (2018) Explainable artificial intelligence: a survey. 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO) (pp. 0210–0215). Opatija, Croatia. https://doi.org/10.23919/MIPRO.2018.8400040
Ehsan U, Riedl MO (2020) Human-centered explainable AI: towards a reflective sociotechnical approach. arXiv:2002.01092v2 [cs.HC] February 5 2020
Ehsan U, Wintersberger P, Liao QV, Mara M, Streit M, Wachter S, Riener A, Riedl MO (2021) Operationalizing human-centered perspectives in explainable AI. CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA. https://doi.org/10.1145/3411763.3441342
Friedman B, Hendry DG (2019) Value sensitive design: shaping technology with moral imagination. MIT Press
Giatrakos N, Artikis A, Deligiannakis A, Garofalakis M (2017) Complex event recognition in big data era. Proceedings of the VLDB Endowment 10(12):1996–1999
Gunning D, Aha D (2019) DARPA's explainable Artificial Intelligence (XAI) program. AI Magazine 40(2):44–58
Harbone D, Willis C, Tomsett R, Preece A (2018) Integrating learning and reasoning services for explainable information fusion. International Conference on Pattern Recognition and Artificial Intelligence, Montreal, Canada, 14–17 May
Hedstrom A, Weber L, Bareeva D, Motzkus F, Samek W, Lapuschkin S, Hohne MMC (2022) Quantus: an explainable AI toolkit for responsible evaluation of neural network explanations. arXiv:2202.06861v1 [cs.LG] February 14 2022
Holzinger A, Carrington A, Mueller H (2020) Measuring the quality of explanations: the System Causability Scale (SCS). Comparing human and machine explanations. KI – Künstliche Intelligenz (German Journal of Artificial Intelligence), Special Issue on Interactive Machine Learning 34(2):193–198
Holzinger A, Malle B, Saranti A, Pfeifer B (2021) Towards multi-modal causality with graph neural networks enabling information fusion for explainable AI. Inform Fusion 71(7):28–37
Holzinger A, Saranti A, Molnar C, Biecek P, Samek W (2022a) Explainable AI methods – a brief overview. In: Holzinger A, Goebel R, Fong R, Moon T, Muller KR, Samek W (eds) xxAI – beyond explainable AI. xxAI 2020. Lecture Notes in Computer Science, vol 13200. Springer, Cham. https://doi.org/10.1007/978-3-031-04083-2_2
Islam SR, Eberle W, Ghafoor SK, Ahmed M (2021) Explainable artificial intelligence approaches: a survey. arXiv:2101.09429v1 [cs.AI]
Kruengkrai C, Torisawa K, Hashimoto C, Kloetzer J, Oh J, Tanaka M (2017) Improving event causality recognition with multiple background knowledge sources using multi-column convolutional neural networks. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17) (pp. 3466–3473). https://doi.org/10.1609/aaai.v31i1.11005
Kumar IE, Venkatasubramanian S, Scheidegger C, Friedler S (2020) Problems with Shapley-value-based explanations as feature importance measures. ICML
Langer M, Oster D, Speith T, Hermanns H, Kastner L, Schmidt E, Sesing A, Baum K (2021) What do we want from explainable artificial intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artif Intell 296:103473
Li Y, Zhou J, Verma S, Chen F (2022) A survey of explainable graph neural networks: taxonomy and evaluation metrics. arXiv:2207.12599v1 [cs.LG] July 26 2022
Liao V, Varshney KR (2022) Human-centered explainable AI (XAI): from algorithms to user experiences. arXiv:2110.10790v5 [cs.AI] April 19 2022
Longo L, Goebel R, Lecue F, Kieseberg P, Holzinger A (2020) Explainable artificial intelligence: concepts, applications, research challenges and visions. Machine Learning and Knowledge Extraction. Springer, Cham, pp 1–16
Lundberg SM, Lee S-I (2017) A unified approach to interpreting model predictions. In: Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 4768–4777). Red Hook, NY, USA: Curran Associates Inc
Luo D, Cheng W, Xu D, Yu W, Zong B, Chen H, Zhang X (2020) Parametrised explainer for graph neural network. Proceedings of the 34th International Conference on Neural Information Processing Systems (pp. 19620–19631). Red Hook, NY, USA: Curran Associates Inc
MacAvaney S, Yao HR, Yang E, Russell K, Goharian N, Frieder O (2019) Hate speech detection: challenges and solutions. PLoS ONE 14(8):1–16
Messina P, Pino P, Parra D, Soto A, Besa C, Uribe S, Andía M, Tejos C, Prieto C, Capurro D (2022) A survey on deep learning and explainability for automatic report generation from medical images. ACM Comput Surv 54(10s). https://doi.org/10.1145/3522747
Mishima K, Yamana NH (2022) A survey on explainable fake news detection. IEICE Trans Inf Syst E105-D(7):1249–1257
Muhammed S, Getahun F, Chbeir R (2021) 5W1H aware framework for representing and detecting real events from multimedia digital ecosystems. In: Bellatreche L, Dumas M, Karras P, Matulevicius R (eds) Advances in Databases and Information Systems 2021. Lecture Notes in Computer Science 12843:57–70. Springer. https://doi.org/10.1007/978-3-030-82472-3_6
Narayanan M, Chen E, He J, Kim B, Gershman S, Doshi-Velez F (2018) How do humans understand explanations from machine learning systems? An evaluation of the human-interpretability of explanation. arXiv:1802.00682v1 [cs.AI] February 2 2018
Ontika NN, Syed HA, Sabmannshausen SM, Harper RHR, Chen Y, Park SY, …, Pipek V (2022) Exploring human-centered AI in healthcare: diagnosis, explainability, and trust. Proceedings of the 20th European Conference on Computer Supported Cooperative Work: The International Venue on Practice-centered Computing on the Design of Cooperation Technologies – Workshops, Reports of the European Society for Socially Embedded Technologies (ISSN 2510-2591). https://doi.org/10.48340/ecscw2022_ws06
Panagiotou N, Katakis I, Gunopulos D (2016) Detecting events in online social networks: definitions, trends and challenges. In: Michaelis S (ed) Solving Large Scale Learning Tasks: Challenges and Algorithms. Springer, Cham, pp 42–84
Pesquita C (2021) Towards semantic integration for explainable artificial intelligence in the biomedical domain. Proceedings of the 14th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2021) 5:747–753. https://doi.org/10.5220/0010389707470753
Pope PE, Kolouri S, Rostami M, Martin CE, Hoffmann H (2019) Explainability methods for graph convolutional neural networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10772–10781)
Ribeiro MS, Leite J (2021) Aligning artificial neural networks and ontologies towards explainable AI. Association for the Advancement of Artificial Intelligence (AAAI-21), Tech Track 6, 35(6):4932–4940
Ribeiro MT, Singh S, Guestrin C (2016) Why should I trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144)
Ribeiro MT, Singh S, Guestrin C (2018) Anchors: high precision model-agnostic explanations. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eight AAAI Symposium on Educational Advances in Artificial Intelligence (pp. 1527–1535), February 2–7, New Orleans, Louisiana, USA
Ribera M, Lapedriza A (2019) Can we do better explanations? A proposal of user-centered explainable AI. In: Joint Proceedings of the ACM IUI 2019 Workshops, Los Angeles, USA, March 20. ACM, New York, NY, USA
Rong Y, Leemann T, Nguyen T, Fiedler L, Qian P, Unhelkar V, Seidel T, Kasneci G, Kasneci E (2022) Towards human-centered explainable AI: user studies for model explanations. arXiv:2210.11584v2 [cs.AI]
Samek W, Montavon G, Vedaldi A, Hansen LK, Müller KR (eds) (2019) Explainable AI: interpreting, explaining and visualising deep learning. Lecture Notes in Artificial Intelligence, Lecture Notes in Computer Science State-of-the-Art Surveys. Springer, Berlin/Heidelberg, Germany. ISBN 978-3-030-28953-9
Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2017) Grad-CAM: visual explanations from deep networks via gradient-based localisation. IEEE International Conference on Computer Vision (pp. 618–626)
Shrikumar A, Greenside P, Kundaje A (2017) Learning important features through propagating activation differences. International Conference on Machine Learning, PMLR, pp 3145–3153
Shu K, Cui L, Wang S, Lee D, Liu H (2019) dEFEND: explainable fake news detection. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 395–405). New York, NY, USA: Association for Computing Machinery
Sreenivasulu M, Sridevi M (2018) A survey on event detection methods on various social media. In: Sa P, Bakshi S, Hatzilygeroudis I, Sahoo M (eds) Findings in Intelligent Computing Techniques. Advances in Intelligent Systems 709:87–93. Springer, Singapore
Syed HA, Schorch M, Pipek V (2020) Disaster learning aid: a chatbot-centric approach for improved organisational disaster resilience. Proceedings of the 17th Information Systems for Response and Management Conference (ISCRAM 2020) (pp. 448–457). Blacksburg, VA, USA
Umbrello S, de Bellis AF (2018) A value-sensitive design approach to intelligent agents. In: Yampolskiy RV (ed) Artificial Intelligence Safety and Security. Chapman and Hall/CRC
Vaughan JW, Wallach H (2020) A human-centered agenda for intelligible machine learning. In: Pelillo M, Scantamburlo T (eds) Machines We Trust: Perspectives on Dependable AI. The MIT Press, London
Vemula S (2022) Human-centered explainable artificial intelligence for anomaly detection in quality inspection: a collaborative approach to bridge the gap between humans and AI. Dissertation, University of the Incarnate Word. https://athenaeum.uiw.edu/uiw_etds/397
Win SSM, Aung TN (2018) Automated text annotation for social media data during natural disasters. Adv Sci Technol Eng Syst J 3(2):119–127
Yuan H, Yu H, Wang J, Li K, Ji S (2021) On explainability of graph neural networks via subgraph explorations. arXiv:2102.05152v2 [cs.LG]
Zhang Y, Defazio D, Ramesh A (2021) RelEx: a model-agnostic relational model explainer. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (pp. 1042–1049). New York, NY, USA: Association for Computing Machinery