Symbolic-neural rule based reasoning and explanation

https://doi.org/10.1016/j.eswa.2015.01.068

Highlights

  • Neurules, a type of neuro-symbolic rules, retain naturalness and modularity.

  • Reasoning and explanation mechanisms for neurules are presented.

  • Two types of inference processes: a connectionism-oriented and a symbolism-oriented one.

  • Symbolism-oriented inference is more efficient than connectionism-oriented inference.

  • More efficient and natural explanation compared to connectionist expert systems.

Abstract

In this paper, we present neurule-based inference and explanation mechanisms. A neurule is a kind of integrated rule that integrates a symbolic rule with neurocomputing: each neurule is considered an adaline neural unit. Thus, a neurule base consists of a number of autonomous adaline units (neurules), expressed in a symbolic-oriented syntax. There are two inference processes for neurules: the connectionism-oriented process, which gives pre-eminence to the neurocomputing approach, and the symbolism-oriented process, which gives pre-eminence to a symbolic, backward-chaining-like approach. The symbolism-oriented process proves to be more efficient than the connectionism-oriented one, in terms of the number of required computations (56.47–63.88% average reduction) and the mean runtime gain (59.95–64.89% on average), although both require almost the same average number of input values. The neurule-based explanation mechanism provides three types of explanations: ‘how’ a conclusion was derived, ‘why’ a value for a specific input variable was asked of the user and ‘why-not’ a variable has not acquired a specific value. As shown by experiments, the neurule-based explanation mechanism is superior to that provided by known connectionist expert systems, another neuro-symbolic integration category. It provides fewer (64.38–69.28% average reduction) and more natural explanation rules, thus increasing the efficiency (mean runtime gain of 56.65–56.73%) and comprehensibility of explanations.

Introduction

Most intelligent methods have advantages as well as disadvantages. Research in artificial intelligence (AI) has shown that approaches integrating (or combining) two or more intelligent methods may provide benefits (Hatzilygeroudis, 2011a, Tweedale and Jain, 2014). This is accomplished by exploiting the advantages of the integrated methods to overcome their disadvantages. Complementarity in the advantages and disadvantages of the combined methods is usually the basis for the success of such integrations. Popular example types of integrations include, among others, neuro-symbolic approaches, integrating neural networks with symbolic methods (Garcez D’Avila and Lamb, 2011, Hatzilygeroudis and Prentzas, 2004a), neuro-fuzzy approaches, integrating neural networks with fuzzy methods (Evans and Kennedy, 2014, Lin et al., 2012, Zhang et al., 2015), approaches combining neural networks and genetic algorithms (Huang, Li, & Xiao, 2015) and approaches combining case-based reasoning with rule-based reasoning (Prentzas & Hatzilygeroudis, 2007) or other intelligent methods (Chuang and Huang, 2011, Prentzas and Hatzilygeroudis, 2009).

A number of neuro-symbolic formalisms have been introduced during the last decade (Garcez D’Avila et al., 2002, Garcez D’Avila et al., 2009, Hatzilygeroudis and Prentzas, 2004a). Combinations of symbolic rules (of propositional type) and neural networks constitute a large proportion of neuro-symbolic approaches (Gallant, 1993, Hatzilygeroudis and Prentzas, 2000, Hatzilygeroudis and Prentzas, 2001, Holldobler and Kalinke, 1994, Towell and Shavlik, 1994). Efforts integrating rules and neural networks may yield effective formalisms by exploiting the complementary advantages and disadvantages of the integrated components (Hatzilygeroudis & Prentzas, 2004a). Symbolic rule-based systems possess positive aspects such as naturalness and modularity of the rule base, an interactive reasoning process and the ability to explain reasoning results. Neural networks lack the naturalness and modularity of symbolic rules, and it is also difficult (or impossible) for them to provide explanations. Explanations are crucial in certain domains such as medicine and finance. On the other hand, symbolic rules have disadvantages such as difficulty in acquiring rules from experts (known as the ‘knowledge acquisition bottleneck’), inability to draw conclusions when there are missing values in the input data and possible problems with unexpected input values or combinations of them, whereas neural networks provide generalization, representation of complex and imprecise knowledge and knowledge acquisition from training examples.

Neurules are a type of integrated rules combining symbolic rules and neurocomputing (Hatzilygeroudis and Prentzas, 2000, Hatzilygeroudis and Prentzas, 2001, Prentzas and Hatzilygeroudis, 2011). Neurules belong to the neuro-symbolic representations that result in a uniform, seamless combination of the two integrated components. Most existing such approaches give pre-eminence to connectionism. As a consequence, they do not offer important advantages of symbolic rules, like naturalness and modularity, and also do not provide interactive inference and explanation. Neurules follow a different direction by giving priority to the symbolic rather than the connectionist framework. Therefore, the knowledge base exhibits characteristics such as naturalness and modularity to a large degree. Furthermore, neurule-based systems provide interactive inference and explanation.

Integration in neurules involves all knowledge representation aspects: syntax, semantics and reasoning. Hybridism in syntax and semantics has been presented in most of our past works on neurules and, for the sake of completeness, is briefly presented here too. Reasoning via neurules can be performed via two different inference processes. One gives pre-eminence to neurocomputing (connectionism-oriented inference), whereas the other gives pre-eminence to symbolic reasoning (symbolism-oriented inference). Both inference processes are integrated in nature. The connectionism-oriented inference process has been presented in Hatzilygeroudis and Prentzas (2010) and compared to two alternative inference mechanisms used in connectionist expert systems (Gallant, 1993, Ghalwash, 1998), a type of neuro-symbolic systems. An initial version of the symbolism-oriented inference has been presented in Hatzilygeroudis and Prentzas (2000).

In this paper, we present an improved symbolism-oriented inference process. Improvement refers to the number of required computations to produce conclusions, the ability to work with any order of neurule conditions and the ability to work with two different sets of discrete values for representing ‘true’, ‘false’ and ‘unknown’ states. We present experimental results comparing the performance of the new symbolism-oriented process with the connectionism-oriented one.

However, the main contribution of this paper is the introduction of an explanation mechanism for neurule-based inference. The explanation mechanism provides three types of explanations: ‘how’, ‘why’ and ‘why-not’. We also present experimental results comparing the ‘how’ explanation mechanism with the corresponding mechanism used in connectionist expert systems.

This paper is structured as follows. Section 2 briefly discusses related work. Section 3 presents neurules. Section 4 discusses the two alternative inference processes. Section 5 presents the explanation mechanism. Section 6 presents explanation examples. Section 7 presents experimental results involving inference and explanation. Section 8 concludes.

Section snippets

Related work

The objective of our work is to remain on the symbolic ground and incorporate techniques from the connectionist approach into propositional type symbolic rules to improve their representation capabilities and performance, without significantly reducing features, like naturalness and modularity, or sacrificing functionalities, like interactive inference and explanation. Many attempts based on the connectionist ground, which simulate or translate symbolic processes within a neural network, have

Neurules: syntax and semantics

Neurules are a kind of integrated rules. The form of a neurule is depicted in Fig. 1a. Each condition Ci is assigned a number sfi, called its significance factor. Moreover, each neurule itself is assigned a number sf0, called its bias factor. Internally, each neurule is considered as an adaline neural unit (Fig. 1b). The inputs Ci (i = 1, …, n) of the unit are the conditions of the neurule. The weights of the unit are the significance factors of the neurule and its bias is the bias factor of the
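The adaline view of a single neurule can be sketched in a few lines of Python. This is a minimal illustration with class and field names of our own choosing (not from the paper): the output is a threshold activation over the weighted sum sf0 + Σ sfi·Ci, with condition values in {1, −1, 0} for ‘true’, ‘false’ and ‘unknown’.

```python
from dataclasses import dataclass

@dataclass
class Neurule:
    """A neurule viewed as an adaline unit (illustrative sketch)."""
    bias: float            # the bias factor sf0
    sfs: list              # significance factors sf1..sfn
    conditions: list       # the condition texts C1..Cn

    def output(self, values):
        """Threshold activation over the weighted sum.

        `values` holds the condition truth values C1..Cn,
        encoded as 1 (true), -1 (false) or 0 (unknown).
        """
        s = self.bias + sum(sf * v for sf, v in zip(self.sfs, values))
        return 1.0 if s >= 0 else -1.0

# Example: a two-condition neurule with made-up factors.
r = Neurule(bias=-0.4, sfs=[1.2, 0.8],
            conditions=["pain is night-pain", "fever is no-fever"])
print(r.output([1, 1]))    # both conditions true: -0.4 + 1.2 + 0.8 >= 0
```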

Integrated inference engine

The inference engine associated with neurules implements the way neurules co-operate to derive conclusions. It can support two alternative integrated inference processes. One gives pre-eminence to neurocomputing and is called the connectionism-oriented inference process, whereas the other gives pre-eminence to symbolic reasoning and is called the symbolism-oriented inference process. In the connectionism-oriented process, the choice of the next rule to be considered is based on a neurocomputing measure, but the
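One source of the efficiency gains reported for the symbolism-oriented process is that a neurule's outcome can often be settled before all condition values are known. The sketch below is our reading of that idea, not the paper's algorithm: conditions are visited in order of decreasing |sfi|, and evaluation stops as soon as the sign of the accumulated weighted sum can no longer change.

```python
def eval_neurule(bias, sfs, fetch):
    """Early-stopping neurule evaluation (illustrative sketch).

    `fetch(i)` returns the value of condition i in {1, -1, 0},
    possibly by asking the user; it is only called when needed.
    """
    # Visit conditions by decreasing absolute significance factor.
    order = sorted(range(len(sfs)), key=lambda i: -abs(sfs[i]))
    total = bias
    remaining = sum(abs(sf) for sf in sfs)   # max possible swing left
    for i in order:
        if abs(total) > remaining:           # sign already settled
            break
        total += sfs[i] * fetch(i)
        remaining -= abs(sfs[i])
    return 1 if total >= 0 else -1
```

With factors [2.0, 0.3] and bias -0.5, a ‘true’ answer to the first condition already fixes the output, so the second condition is never requested.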

Explanation mechanism

The explanation mechanism is used to provide some type of explanation related to an inference session. More specifically, the explanation mechanism associated to neurules provides the following types of explanation:

  • How: explanation of how a conclusion has been derived.

  • Why-not: explanation of why an inferable (intermediate or output) variable has not acquired a specific value.

  • Why: explanation of why the user is asked to give a value for a particular input variable.

In this section, the processes
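The ‘how’ type of explanation can be pictured as walking back from a conclusion to the rules and user-supplied values that produced it. The sketch below is our own illustration (the data shapes, `fired` mapping a conclusion to a rule identifier and its used conditions, are hypothetical, not the paper's internals); intermediate conclusions are explained recursively.

```python
def how_explanation(conclusion, fired, facts):
    """Build a 'how' explanation for a derived conclusion (sketch).

    fired: maps a derived conclusion to (rule_id, conditions_used).
    facts: maps user-supplied input variables to their values.
    """
    lines = []
    def explain(c, depth=0):
        pad = "  " * depth
        if c in fired:
            rule, conds = fired[c]
            lines.append(f"{pad}{c} was concluded by {rule} because:")
            for cond in conds:
                explain(cond, depth + 1)     # recurse on intermediates
        else:
            lines.append(f"{pad}{c} = {facts[c]} (given by the user)")
    explain(conclusion)
    return "\n".join(lines)
```

A ‘why-not’ explanation would traverse the same structures in the opposite spirit, reporting which rules concluding the missing value failed to fire.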

Explanation examples

In the following, we provide certain examples concerning the aforementioned explanation processes. The explanation examples are based on results produced by the symbolism-oriented inference for the (partial) neurule base shown in Table 1.

Let us suppose that the following values are given for the input variables (in the order specified): (pain, night-pain), (fever, no-fever), (antinflam-reaction, none), (gender, man) and (age, 30) and stored in WM. Given these inputs, neurules R4, R5 and R6 are
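Checking which neurules fire against such working-memory contents amounts to evaluating each rule's weighted sum over condition matches. Since Table 1's neurules are not reproduced in this excerpt, the snippet below uses a stand-in rule with made-up significance factors purely to show the mechanics.

```python
# Working memory as given above.
wm = {"pain": "night-pain", "fever": "no-fever",
      "antinflam-reaction": "none", "gender": "man", "age": 30}

def cond(var, value):
    """Condition value: 1 if WM matches, -1 otherwise (bipolar encoding)."""
    return 1 if wm.get(var) == value else -1

# Hypothetical rule (NOT the actual R4 of Table 1): sf0 + sf1*C1 + sf2*C2.
s = -1.0 + 2.5 * cond("pain", "night-pain") + 1.1 * cond("fever", "fever")
fires = s >= 0   # the rule fires despite one failing condition
```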

Experimental results and discussion

In this section, experimental results regarding the performance of neurules are presented with the help of a number of conducted experiments. These refer to the following:

  • The symbolism-oriented process is compared to the connectionism-oriented process for the alternative sets of input values, {1, −1, 0} and {1, 0, 0.5}.

  • For the set {1, 0, 0.5}, alternative versions of the symbolism-oriented inference process are compared.

  • The neurule-based explanation mechanism is compared to the explanation mechanism
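The two sets of discrete input values compared above encode the same three truth states differently; only the numbers fed into the weighted sum change. A small sketch of the two encodings (the dictionary layout is ours):

```python
# 'true' / 'false' / 'unknown' under the two value sets compared.
BIPOLAR = {"true": 1, "false": -1, "unknown": 0}      # the set {1, -1, 0}
BINARY  = {"true": 1, "false": 0,  "unknown": 0.5}    # the set {1, 0, 0.5}

def weighted_sum(bias, sfs, states, enc):
    """sf0 + sum(sf_i * C_i) under a chosen truth-value encoding."""
    return bias + sum(sf * enc[s] for sf, s in zip(sfs, states))

# The same rule and the same answers yield different sums per encoding:
print(weighted_sum(-0.5, [2.0, 1.0], ["true", "false"], BIPOLAR))  # 0.5
print(weighted_sum(-0.5, [2.0, 1.0], ["true", "false"], BINARY))   # 1.5
```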

Conclusions

In this paper, we present inference and explanation mechanisms for neurules, a type of hybrid rules integrating symbolic rules with neurocomputing. Each neurule is considered as an adaline neural unit expressed in symbolic oriented syntax. An attractive feature of neurules is that compared to other similar neuro-symbolic approaches, like the one followed in connectionist expert systems, they retain the modularity and to some degree the naturalness of symbolic rules; also they support

References (36)

  • C.-L. Chuang et al., A hybrid neural network approach for credit scoring, Expert Systems (2011)

  • S.I. Gallant, Neural network learning and expert systems (1993)

  • A.S. Garcez D’Avila et al., Neural-symbolic learning systems: Foundations and applications

  • A.S. Garcez D’Avila et al., Cognitive algorithms and systems: Reasoning and knowledge representation

  • A.S. Garcez D’Avila et al., Neural-symbolic cognitive reasoning (2009)

  • A.Z. Ghalwash, A recency inference engine for connectionist knowledge bases, Applied Intelligence (1998)

  • I. Hatzilygeroudis et al., Neurules: Improving the performance of symbolic rules, International Journal on Artificial Intelligence Tools (2000)

  • I. Hatzilygeroudis et al., Constructing modular hybrid knowledge bases for expert systems, International Journal on Artificial Intelligence Tools (2001)
The names of the authors appear in alphabetic order.