
2020 | Book

Artificial Intelligence for Customer Relationship Management

Keeping Customers Informed


About this book

This research monograph brings AI to the field of Customer Relationship Management (CRM) to make a customer’s experience with a product or service smart and enjoyable. AI is here to help customers get a refund for a canceled flight, unfreeze a banking account or obtain a health test result. Today, CRM has evolved from storing and analyzing customers’ data to predicting and understanding their behavior by putting a CRM system in a customer’s shoes. Hence advanced reasoning, learning from small data, reasoning about customers’ attitudes and introspection, reading between the lines of customer communication, and explainability all need to come into play.

Artificial Intelligence for Customer Relationship Management leverages a number of Natural Language Processing (NLP), Machine Learning (ML), simulation and reasoning techniques to enable CRM with intelligence. An effective and robust CRM needs to be able to chat with customers, providing desired information, completing their transactions and resolving their problems. The book introduces a systematic means of ascertaining a customer’s frame of mind, intents and attitudes to determine when to provide a thorough answer, a recommendation, an explanation, a proper argument, timely advice, or a promotion or compensation. The author employs a spectrum of ML methods, from deterministic to statistical to deep, to predict customer behavior and anticipate possible complaints, assuring efficient customer retention.

Providing a forum for the exchange of ideas in AI, this book offers concise yet comprehensive coverage of methodologies, tools, issues, applications and future trends for professionals, managers and researchers in the CRM field, together with AI and IT professionals.

Table of Contents

Frontmatter
Chapter 1. Introduction to Volume 1 and Volume 2
Abstract
This chapter is an introduction to both volumes of this Artificial Intelligence for Customer Relationship Management book: Volume 1 “Keeping Customers Informed” and Volume 2 “Solving Customer Problems”. We analyze AI adoption in Customer Relationship Management (CRM), briefly survey current trends, introduce AI CRM companies and discuss what kind of machine learning (ML) is best suited to support CRM. We explore where CRM is today and identify a lack of intelligence as a major bottleneck of current CRM systems. Hints are given to the reader on how to navigate this book.
Boris Galitsky
Chapter 2. Distributional Semantics for CRM: Making Word2vec Models Robust by Structurizing Them
Abstract
Distributional Semantics has become extremely popular in natural language processing (NLP) and language learning tasks. We subject it to critical evaluation and observe its limitations in expressing semantic relatedness of words. We spot-check representation of meaning by Distributional Semantics and also obtain anecdotal evidence about various systems employing it. To overcome the revealed limitations of Distributional Semantics, we propose to use it on top of a linguistic structure, not in a stand-alone mode. In a phrase similarity task, we assess word2vec similarity between aligned words only once the phrases are aligned and a syntactic, semantic and entity-based map is established. We add the Distributional Semantics feature to the results of syntactic, semantic and entity/attribute-based generalizations and observe an improvement in the similarity assessment task. Structurized word2vec improves the similarity assessment task performed by the integrated syntactic, abstract meaning representation (AMR) and entity comparison baseline system by more than 10%.
Boris Galitsky
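The structurized word2vec idea above can be sketched minimally: cosine similarity is computed only between word pairs that a syntactic/semantic alignment has already matched (verb with verb, object with object), rather than between unordered bags of words. The embedding table and the alignment below are hypothetical toy values standing in for a trained word2vec model and a real alignment step:

```python
import math

# Toy embedding table standing in for a word2vec model (hypothetical values).
EMBEDDINGS = {
    "cancel": [0.9, 0.1, 0.0],
    "revoke": [0.8, 0.2, 0.1],
    "flight": [0.1, 0.9, 0.2],
    "booking": [0.2, 0.8, 0.3],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def structurized_similarity(aligned_pairs):
    """Average word2vec cosine over word pairs that a syntactic/semantic
    alignment has already matched, instead of comparing whole phrases."""
    sims = [cosine(EMBEDDINGS[a], EMBEDDINGS[b]) for a, b in aligned_pairs]
    return sum(sims) / len(sims)

# "cancel flight" vs "revoke booking": the alignment pairs the verbs
# and the objects before any embedding similarity is taken.
score = structurized_similarity([("cancel", "revoke"), ("flight", "booking")])
```

In a full system the aligned pairs would come from the syntactic/AMR/entity generalization described in the abstract; here they are given by hand.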
Chapter 3. Employing Abstract Meaning Representation to Lay the Last-Mile Toward Reading Comprehension
Abstract
We propose a Machine reading comprehension (MRC) method based on the Abstract Meaning Representation (AMR) framework and a universal graph alignment algorithm. We combine syntactic, semantic and entity-based graph representations of a question to match it with a combined representation of an answer. The alignment algorithm is applied for combining various representations of the same text as well as for matching (generalization) of two different texts such as a question and an answer. We explore a number of Question Answering (Q/A) configurations and select a scenario where the proposed AMR generalization-based algorithm AMRG detects and rectifies the errors of a traditional neural MRC. When the state-of-the-art neural MRC is applied and delivers the correct answer in almost 90% of cases, the proposed AMRG verifies each answer and if it determines that it is incorrect, attempts to find a correct one. This error-correction scenario boosts the state-of-the-art performance of a neural MRC by at least 4%.
Boris Galitsky
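The error-correction scenario of Chapter 3 can be sketched as a simple control flow: accept the neural reader's answer only if an independent verifier (the AMRG alignment step in the chapter) confirms it, and otherwise search for a corrected answer. Both components below are hypothetical stand-ins (a lookup table instead of a neural MRC, an exact-match check instead of graph alignment):

```python
def neural_mrc(question):
    # Stand-in for a neural reading-comprehension model; this one
    # deliberately returns a wrong answer to exercise the correction path.
    return {"Who wrote The Trial?": "Max Brod"}.get(question)

def amrg_verify(question, answer, knowledge):
    # Stand-in for AMR-graph generalization of question and answer:
    # here, simply check the candidate against a small knowledge table.
    return knowledge.get(question) == answer

def amrg_answer(question, knowledge):
    # Stand-in for the AMRG search for a correct answer.
    return knowledge.get(question)

def answer_with_correction(question, knowledge):
    """Error-correction scenario: verify the neural answer; if it is
    rejected, attempt to find a correct one independently."""
    candidate = neural_mrc(question)
    if candidate is not None and amrg_verify(question, candidate, knowledge):
        return candidate
    return amrg_answer(question, knowledge)

KB = {"Who wrote The Trial?": "Franz Kafka"}
result = answer_with_correction("Who wrote The Trial?", KB)
```

The point of the pattern is that the verifier only needs to recognize wrong answers; the expensive fallback search runs on the small fraction of cases the neural reader gets wrong.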
Chapter 4. Summarized Logical Forms for Controlled Question Answering
Abstract
Providing high-end question answering (Q/A) requires a special knowledge representation technique, based on phrasing-independent partial formalization of the essential information contained in domain answers. We introduce Summarized Logical Forms (SLFs): formalized expressions which contain the most important knowledge extracted from answers to be queried. Instead of indexing the whole answer, we index only its SLFs as a set of expressions for the most important topics in this answer. By doing so, we achieve substantially higher Q/A precision, since foreign answers will not be triggered. We also introduce Linked SLFs, connected by entries of a domain-specific ontology, to increase the coverage of a knowledge domain. A methodology for constructing SLFs and Linked SLFs is outlined and the achieved Q/A accuracy is evaluated. Meta-programming issues of matching query representations (QRs) against SLFs and the domain taxonomy are addressed as well.
Boris Galitsky
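The core indexing idea of Chapter 4 can be illustrated with a minimal sketch: each answer is indexed only by its SLFs, simplified here to small sets of essential terms rather than logical expressions, so an answer whose SLF shares nothing with the query is never triggered. The index contents below are hypothetical:

```python
# Each answer is indexed only by its summarized logical forms (SLFs),
# simplified here to tuples of essential terms, not by its full text.
INDEX = {
    ("refund", "flight", "cancel"): "To get a refund for a canceled flight, ...",
    ("unfreeze", "banking", "account"): "To unfreeze your banking account, ...",
}

def match_slf(query_terms):
    """Return the answer whose SLF is best covered by the query terms.
    Answers whose SLFs overlap the query in no term are never returned,
    which is how foreign answers are kept out."""
    best_answer, best_overlap = None, 0
    for slf, answer in INDEX.items():
        overlap = len(set(slf) & set(query_terms))
        if overlap > best_overlap:
            best_answer, best_overlap = answer, overlap
    return best_answer
```

In the book's actual method the SLFs are logical forms matched against a query representation; set overlap is used here only to make the precision argument concrete.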
Chapter 5. Summarized Logical Forms Based on Abstract Meaning Representation and Discourse Trees
Abstract
We propose a way to build summarized logical forms (SLFs) automatically, relying on semantic and discourse parsers as well as syntactic and semantic generalization. Summarized logical forms represent the main topic and the essential part of answer content and are designed to be matched with a logical representation of a question. We explore the possibilities of building SLFs from the Abstract Meaning Representation (AMR) of sentences supposed to be the most important, by selecting AMR subgraphs. In parallel, we leverage discourse analysis of answer paragraphs to highlight the more important elementary discourse units (EDUs) to convert into SLFs (less important EDUs are not converted into SLFs). The third source of SLFs is a pair-wise generalization of answers with each other. The proposed methodology is designed to improve question answering (Q/A) precision by avoiding matching questions with less important, uninformative parts of a given answer (to avoid delivering foreign answers). A stand-alone evaluation of each of the three sources of SLFs is conducted, as well as an assessment of a hybrid SLF generation system. We conclude that indexing the most important text fragments of an answer as SLFs instead of the whole answer improves Q/A precision by almost 10% as measured by F1 and 8% as measured by NDCG.
Boris Galitsky
Chapter 6. Acquiring New Definitions of Entities
Abstract
We focus on understanding natural language (NL) definitions of new entities via the ones available in the current version of an ontology. Once a definition is acquired from a user or from documents, this and other users can rely on the newly defined entity to formulate new questions. We first explore how to automatically build a taxonomy from a corpus of documents or from the web, and use this taxonomy to filter out irrelevant answers. We then develop a logical form approach to building definitions from text and outline an algorithm to build a step-by-step definition representation that is consistent with the current ontology. A nonmonotonic reasoning-based method is also proposed to correct semantic representations of a query. In addition, we focus on a special case of definitions: NL descriptions of algorithms for the NL programming paradigm. Finally, search relevance improvement is evaluated based on the acquired taxonomy and ontology.
Boris Galitsky
Chapter 7. Inferring Logical Clauses for Answering Complex Multi-hop Open Domain Questions
Abstract
We enable a conventional Question Answering (Q/A) system with formal reasoning that relies on clauses learned from texts and on the decomposition of complex questions into simple queries that can run against various sources, from a local index to an intranet to the web. We integrate the reasoning, multi-hop querying and machine reading comprehension (MRC) so that a value can be extracted from the search result of one simple query and substituted into another simple query, until the answer is obtained via the recomposition of simple queries. The integrated approach boosts Q/A performance in a number of domains and approaches the state of the art for some of them, where deep learning (DL) Q/A systems are trained to answer complex, multi-hop queries. We assess the contribution of two reasoning components, ontology building from text and entity association, as well as the multi-hop query decomposer and MRC, and observe that these components are necessary and complement each other in answering complex questions. We apply a similar multi-hop framework to the problem of natural language (NL) access to a database. The conclusion is that the proposed architecture, with its focus on formal reasoning, is well suited for industrial applications where performance must be guaranteed in cases of no or limited training sets.
Boris Galitsky
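The substitute-and-recompose loop described in Chapter 7 can be sketched in a few lines: each simple query is run in order, and the value extracted from one hop fills a slot in the next. The knowledge table, the `{x}` slot convention and the example question below are hypothetical illustrations, not the book's actual decomposer:

```python
def search(query, kb):
    # Stand-in for running one simple query against some source
    # (local index, intranet, or the web).
    return kb.get(query)

def answer_multi_hop(simple_queries, kb):
    """Run the simple queries in order, substituting each extracted
    value into the {x} slot of the next query template."""
    value = None
    for template in simple_queries:
        query = template.format(x=value) if value is not None else template
        value = search(query, kb)
        if value is None:
            return None  # a hop failed; the complex question is unanswered
    return value

KB = {
    "capital of France": "Paris",
    "population of Paris": "2.1 million",
}
# "What is the population of the capital of France?" decomposes
# into two hops: find the capital, then query its population.
result = answer_multi_hop(["capital of France", "population of {x}"], KB)
```

The same loop structure works whether each `search` call hits a local index or a remote source, which is why the chapter can mix sources per hop.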
Chapter 8. Managing Customer Relations in an Explainable Way
Abstract
We explore how to validate the soundness of textual explanations in a domain-independent manner. We further assess how people perceive explanations of their opponents and what factors determine whether explanations are acceptable or not. We discover that what we call a hybrid discourse tree (hybrid DT) determines the acceptability of an explanation. A complete DT is the sum of a traditional DT for a paragraph of actual text and an imaginary DT for a text about entities used in, but not explicitly defined in, the actual text. Logical validation of explanation is evaluated: we confirm that a representation of an explanation chain via complete DTs is an adequate tool to validate explanation soundness. We then proceed to a Decision Support scenario where a human expert and a machine learning (ML) system need to make a decision together, explaining how they used the parameters for their decisions. We refer to this scenario as bi-directional, since the involved decision-makers need to negotiate the decision by providing explanations. We also address explanations in behavioral scenarios that involve conflicting agents. In these scenarios, implicit or explicit conflict can be caused by contradictory agents’ interests, as communicated in their explanations for why they behaved in a particular way, by a lack of knowledge of the situation, or by a mixture of explanations of multiple factors. We argue that in many cases, to assess the plausibility of explanations, we must analyze the following two components and their interrelations: (1) explanation at the actual object level (the explanation itself) and (2) explanation at the higher level (meta-explanation). A comparative analysis of the roles of both is conducted to assess the plausibility of how agents explain the scenarios of their interactions.
Object-level explanation assesses the plausibility of individual claims by using a traditional approach to handle an argumentative structure of a dialogue (Galitsky and Kuznetsov in International Conference on Conceptual Structures, pp. 282–296, 2008a). Meta-explanation links the structure of a current scenario with that of previously learned scenarios of multi-agent interaction. The scenario structure includes agents’ communicative actions and argumentation defeat relations between the subjects of these actions. We also define a ratio between object-level and meta-explanation as the relative accuracy of plausibility assessment based on the former and latter sources. We then observe that groups of scenarios can be clustered based on this ratio; hence, such a ratio is an important parameter of human behavior associated with explaining something to other humans.
Boris Galitsky
Chapter 9. Recognizing Abstract Classes of Text Based on Discourse
Abstract
The problem of classifying shorter and longer texts into abstract classes is formulated and its application areas for CRM are proposed. These classes include reasoning patterns expressed in text, such as object-level versus metalanguage; document styles, such as public versus containing sensitive information; and texts containing a description of a problem versus a solution to it or instructions on how to solve it. What these classification tasks have in common is that keyword analysis is insufficient and an emphasis on higher-level (discourse) representation is necessary. To do that, we define thicket kernels as an extension of parse tree kernels from the level of individual sentences toward the level of paragraphs, to classify texts at a high level of abstraction (Galitsky et al. in Graph structures for knowledge representation and reasoning, pp 39–57, 2014). We build a set of extended trees for a paragraph of text from the individual parse trees for sentences. This is performed based on anaphora and rhetorical structure relations between the phrases in different sentences. Tree kernel learning is applied to extended trees to take advantage of additional discourse-related information. We evaluate our approach in the security-related domain of design documents. These are documents which contain a formal, well-structured presentation of how a system is built. Design documents need to be differentiated from product requirements, architectural notes, general design notes, templates, research results and other types of documents, which can share the same keywords. We also evaluate classification in the literature domain, classifying text in Kafka’s novel “The Trial” as metalanguage versus the novel’s description in scholarly studies (a mixture of metalanguage and language-object).
Boris Galitsky
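A tree kernel of the kind Chapter 9 extends can be sketched as counting shared tree fragments between two trees; the extended trees of the chapter additionally link sentences through anaphora and rhetorical relations. The fragment definition below (a node label plus its immediate children's labels) is a simplified stand-in for a real parse tree kernel, and the rhetorical-relation node labels are hypothetical:

```python
def fragments(tree):
    """Enumerate (label, children-labels) fragments of a tree given as
    (label, [child, ...]) tuples -- a simplified production-level view."""
    label, children = tree
    out = [(label, tuple(child[0] for child in children))]
    for child in children:
        out.extend(fragments(child))
    return out

def extended_tree_kernel(tree1, tree2):
    """Count fragments of tree1 that also occur in tree2; more shared
    structure (including discourse-level nodes) means a higher kernel value."""
    frags2 = fragments(tree2)
    return sum(1 for frag in fragments(tree1) if frag in frags2)

# Two toy extended trees whose sentence parse trees are joined under a
# rhetorical relation node (ELABORATION), as in a discourse-linked paragraph.
t1 = ("ELABORATION", [("S", [("NP", []), ("VP", [])]),
                      ("S", [("NP", []), ("VP", [])])])
t2 = ("ELABORATION", [("S", [("NP", []), ("VP", [])]),
                      ("S", [("VP", [])])])
k = extended_tree_kernel(t1, t2)
```

A real thicket kernel counts all common subtrees (typically with a decay factor) rather than single-level fragments; this sketch only shows why discourse nodes add matching structure beyond individual sentences.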
Chapter 10. Conversational Explainability
Abstract
In this chapter, we focus on an interface that makes the explainability of a machine learning (ML) system convincing and effective. We propose a conversational interface to the decision log of an abstract ML system that enumerates the decision steps and features employed to arrive at a given decision. Conversational content augments the decision log with background knowledge to better handle questions and provide complete answers. As a result, users who can chat with the decision-making system (such as a loan application approval) develop substantially higher trust in it than in a black-box ML system or a conventional report-based explanation. Conversational explainability (CE) allows users to get into as much detail as they wish concerning the decision process. The proposed CE system delivers a meaningful explanation in 65% of cases, whereas conventional report-based explanation does so in 49% of cases (for the same decision sessions).
Boris Galitsky
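The conversational interface to a decision log can be sketched as retrieval over the log entries: a user question is answered with the decision step that mentions the factor asked about. The log schema, feature names and answer phrasing below are hypothetical illustrations of the pattern, not the book's system:

```python
# Hypothetical decision log for a loan-approval decision: each entry
# records one step, the feature it used, and its effect on the score.
DECISION_LOG = [
    {"step": 1, "feature": "income", "value": 45000, "effect": "+0.30"},
    {"step": 2, "feature": "credit_history", "value": "short", "effect": "-0.45"},
]

def explain(question, log):
    """Answer a user question about the decision by retrieving the log
    entry whose feature the question mentions; a real CE system would
    add background knowledge and support follow-up questions."""
    for entry in log:
        if entry["feature"] in question:
            return (f"Step {entry['step']}: {entry['feature']}={entry['value']} "
                    f"contributed {entry['effect']} to the decision score.")
    return "No decision step mentions that factor."

answer = explain("why did my credit_history matter?", DECISION_LOG)
```

The contrast with a report-based explanation is that the user drives which steps are surfaced, one question at a time, rather than reading the whole log.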
Metadata
Title
Artificial Intelligence for Customer Relationship Management
Author
Boris Galitsky
Copyright Year
2020
Electronic ISBN
978-3-030-52167-7
Print ISBN
978-3-030-52166-0
DOI
https://doi.org/10.1007/978-3-030-52167-7
