About This Book

This book constitutes the refereed proceedings of the Third International Workshop on Chatbot Research and Design, CONVERSATIONS 2019, held in Amsterdam, The Netherlands, in November 2019. The 18 revised full papers presented in this volume were carefully reviewed and selected from 31 submissions.
The papers are grouped in the following topical sections: user and communication studies, user experience and design, chatbots for collaboration, chatbots for customer service, and chatbots in education.



User and Communication Studies


Conversational Agents in Healthcare: Using QCA to Explain Patients’ Resistance to Chatbots for Medication

Complete information is essential for accurate diagnosis in healthcare. The idea of using conversational agents to record relevant information and provide it to healthcare facilities is therefore of rising interest. A promising use case is medication, as medication data is often fragmented or incomplete. This paper examines the hindrances that keep patients from sharing their medication list with a chatbot. Based on established theories and using fuzzy-set qualitative comparative analysis (QCA), we identify bundles of factors that explain patients' lack of willingness to interact with a chatbot. These patient typologies can be used to address the hindrances specifically, providing useful insights for theory and for healthcare facilities.
Lea Müller, Jens Mattke, Christian Maier, Tim Weitzel
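At the core of the fuzzy-set QCA mentioned above is a subset-consistency measure: how consistently membership in a condition bundle implies membership in the outcome. A minimal sketch in Python; the membership scores are invented for illustration and are not the paper's data:

```python
# Ragin-style subset consistency: sum of min(x_i, y_i) over sum of x_i.
# Scores are fuzzy memberships in [0, 1]; the values below are made up.

def consistency(condition, outcome):
    """Consistency of 'condition is a subset of outcome'."""
    num = sum(min(x, y) for x, y in zip(condition, outcome))
    den = sum(condition)
    return num / den if den else 0.0

# Hypothetical memberships: a condition bundle (e.g. high privacy concern
# AND low trust) and the outcome (resistance to the medication chatbot).
condition = [0.8, 0.6, 0.9, 0.2, 0.7]
outcome = [0.9, 0.7, 0.8, 0.4, 0.9]

print(round(consistency(condition, outcome), 3))  # high values suggest the
# bundle is (nearly) sufficient for the outcome
```

A configuration is typically retained when its consistency exceeds a chosen cut-off (often around 0.8).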

An Approach for Ex-Post-Facto Analysis of Knowledge Graph-Driven Chatbots – The DBpedia Chatbot

As chatbots are gaining popularity for simplifying access to information and community interaction, it is essential to examine whether these agents are serving their intended purpose and catering to the needs of their users. Therefore, we present an approach to perform an ex-post-facto analysis over the logs of knowledge base-driven dialogue systems. Using the DBpedia Chatbot as our case study, we inspect three aspects of the interactions: (i) user queries and feedback, (ii) the bot’s response to these queries, and (iii) the overall flow of the conversations. We discuss key implications based on our findings. All the source code used for the analysis can be found at https://github.com/dice-group/DBpedia-Chatlog-Analysis.
Rricha Jalota, Priyansh Trivedi, Gaurav Maheshwari, Axel-Cyrille Ngonga Ngomo, Ricardo Usbeck
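A first step in this kind of ex-post-facto log analysis is bucketing logged user utterances into coarse categories (queries, feedback, other) before inspecting each bucket. A minimal sketch under assumed inputs; the heuristics and log lines are illustrative, not the actual analysis in the linked repository:

```python
# Coarse categorization of chat-log utterances for ex-post-facto analysis.
# The regex heuristics and the sample log below are invented examples.
import re
from collections import Counter

def categorize(query: str) -> str:
    """Roughly bucket a logged user utterance."""
    q = query.strip().lower()
    if re.search(r"^(who|what|when|where|why|how)\b", q) or q.endswith("?"):
        return "question"
    if re.search(r"\b(thanks|thank you|wrong|useless|great)\b", q):
        return "feedback"
    return "other"

log = [
    "Who is the mayor of Amsterdam?",
    "thanks, that helped",
    "tell me about DBpedia",
]
counts = Counter(categorize(q) for q in log)
print(counts)  # distribution of utterance types across the log
```

Per-category counts like these make it easy to see, for example, what fraction of traffic is explicit feedback versus information-seeking queries.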

Privacy Concerns in Chatbot Interactions

Chatbots are increasingly used in a commercial context to make product- or service-related recommendations. In doing so, they collect personal information from the user, similar to other online services. While privacy concerns in an online (website) context are widely studied, research in the context of chatbot interaction is lacking. This study investigates the extent to which chatbots with human-like cues influence perceptions of anthropomorphism (i.e., attribution of human-like characteristics), privacy concerns, and consequently, information disclosure, attitudes, and recommendation adherence. Findings show that a human-like chatbot leads to more information disclosure and recommendation adherence than a machine-like chatbot, mediated by higher perceived anthropomorphism and, subsequently, lower privacy concerns. This result does not hold in comparison to a website; the human-like chatbot and the website were perceived as equally high in anthropomorphism. The results show the importance of both mediating concepts with regard to attitudinal and behavioral outcomes when interacting with chatbots.
Carolin Ischen, Theo Araujo, Hilde Voorveld, Guda van Noort, Edith Smit

User Experience and Design


Creating Humanlike Chatbots: What Chatbot Developers Could Learn from Webcare Employees in Adopting a Conversational Human Voice

Currently, conversations with chatbots are perceived as unnatural and impersonal. One way to enhance the feeling of humanlike responses is by implementing an engaging communication style (i.e., Conversational Human Voice (CHV); Kelleher 2009) which positively affects people’s perceptions of the organization. This communication style contributes to the effectiveness of online communication between organizations and customers (i.e., webcare), and is of high relevance to chatbot design and development. This project aimed to investigate how insights on the use of CHV in organizations’ messages and the perceptions of CHV can be implemented in customer service automation. A corpus study was conducted to investigate which linguistic elements are used in organizations’ messages. Subsequently, an experiment was conducted to assess to what extent linguistic elements contribute to the perception of CHV. Based on these two studies, we investigated whether the amount of CHV can be identified automatically. These findings could be used to design humanlike chatbots that use a natural and personal communication style like their human conversation partner.
Christine Liebrecht, Charlotte van Hooijdonk

The Conversational Agent “Emoty” Perceived by People with Neurodevelopmental Disorders: Is It a Human or a Machine?

This research explores the anthropomorphic perception of Emoty by its target user. Emoty is a Conversational Agent specifically designed as an emotional facilitator and trainer for individuals with neurodevelopmental disorders (NDD). NDD is a group of conditions that are characterized by severe deficits in the cognitive, emotional, and motor areas and produce severe impairments in communication and social functioning. Our application promotes skills of emotion expression and recognition and was developed in cooperation with psychologists and therapists as a supporting tool for regular interventions.
We conducted an empirical study with 19 people with NDD. We observed their behavior while interacting with the system and recorded the comments they made and the questions they asked when the session was over. Starting from this, we discovered a twofold nature of Emoty: in some respects it is perceived more like a machine, while in others it is more human-like. In this regard, we discuss some relevant points about gender, fallibility, interaction, and sensitivity of the agent, and we pave the way towards a better understanding of how people with NDD perceive conversational technology.
Fabio Catania, Eleonora Beccaluva, Franca Garzotto

Gender Bias in Chatbot Design

A recent UNESCO report reveals that most popular voice-based conversational agents are designed to be female. In addition, it outlines the potentially harmful effects this can have on society. However, the report focuses primarily on voice-based conversational agents, and the analysis did not include chatbots (i.e., text-based conversational agents). Since chatbots can also be gendered in their design, we used an automated gender analysis approach to investigate three gender-specific cues in the design of 1,375 chatbots listed on a chatbot platform. We leveraged two gender APIs to identify the gender of the name, a face recognition API to identify the gender of the avatar, and a text mining approach to analyze gender-specific pronouns in the chatbot’s description. Our results suggest that gender-specific cues are commonly used in the design of chatbots and that most chatbots are – explicitly or implicitly – designed to convey a specific gender. More specifically, most of the chatbots have female names, female-looking avatars, and are described as female chatbots. This is particularly evident in three application domains (i.e., branded conversations, customer service, and sales). Therefore, we find evidence of a tendency to prefer one gender (i.e., female) over the other (i.e., male). Thus, we argue that there is a gender bias in the design of chatbots in the wild. Based on these findings, we formulate propositions as a starting point for future discussions and research to mitigate the gender bias in the design of chatbots.
Jasper Feine, Ulrich Gnewuch, Stefan Morana, Alexander Maedche
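The third cue above — mining gender-specific pronouns from a chatbot's description — can be sketched as a simple token count. The pronoun lists, decision rule, and example descriptions are assumptions for illustration, not the paper's actual pipeline:

```python
# Pronoun-based gender-cue detection over a chatbot description.
# Pronoun sets and the tie-breaking rule are illustrative choices.
import re
from collections import Counter

FEMALE = {"she", "her", "hers", "herself"}
MALE = {"he", "him", "his", "himself"}

def pronoun_gender_cue(description: str) -> str:
    """Classify a description as female, male, or neutral by pronoun counts."""
    tokens = re.findall(r"[a-z']+", description.lower())
    counts = Counter(t for t in tokens if t in FEMALE | MALE)
    f = sum(counts[t] for t in FEMALE)
    m = sum(counts[t] for t in MALE)
    if f > m:
        return "female"
    if m > f:
        return "male"
    return "neutral"

print(pronoun_gender_cue("Ask her anything -- she is happy to help."))
# -> female
```

Run over a corpus of descriptions, the per-label counts give the kind of distribution the study reports (e.g., a majority of female-cued descriptions).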

Conversational Web Interaction: Proposal of a Dialog-Based Natural Language Interaction Paradigm for the Web

This paper lays the foundation for a new delivery paradigm for web-accessible content and functionality, i.e., conversational interaction. Instead of asking users to read text, click through links and type on the keyboard, the vision is to enable users to “speak to a website” and to obtain natural language, spoken feedback. The paper describes how state-of-the-art chatbot technology can enable a dialog between the user and the website, proposes a reference architecture for the automated inference of site-specific chatbots able to mediate between the user and the website, and discusses open challenges and research questions. The envisioned, bidirectional dialog paradigm advances current screen reader technology and aims to benefit both regular users in eyes-free usage scenarios as well as visually impaired users in everyday scenarios.
Marcos Baez, Florian Daniel, Fabio Casati

Chatbots for Collaboration


Designing Chatbots for Guiding Online Peer Support Conversations for Adults with ADHD

This paper presents an exploratory study of how conversational interfaces can be used to facilitate peer support among adults with ADHD participating in an online self-help program. Peer support is an important feature of group therapy, but it is often disregarded in online programs. Using a research-through-design approach, a low-fidelity chatbot prototype named Terabot was designed and evaluated. The goal of Terabot is to guide participants through peer support conversations related to a specific exercise in an online self-help program. The prototype was evaluated and refined through two Wizard of Oz (WoOz) trials. Based on design workshops, analysis of chat logs, data from online questionnaires, and an evaluation interview, the findings indicate that a chatbot guiding a peer support conversation between adults with ADHD in an online self-help program is a promising approach. We believe that a chatbot can help establish structure, predictability, and encouragement in a peer support conversation. This study contributes experiences from an exploration of how to design conversational interfaces for peer support in mental health care, how conversational interfaces can facilitate peer support by structuring conversation, and how the WoOz approach can be used to inform the design of chatbots.
Oda Elise Nordberg, Jo Dugstad Wake, Emilie Sektnan Nordby, Eivind Flobak, Tine Nordgreen, Suresh Kumar Mukhiya, Frode Guribye

Towards Chatbots to Support Bibliotherapy Preparation and Delivery

Chatbots have become an increasingly popular choice for organisations in delivering services to users. They are also becoming popular in mental health applications, where they are seen as an accessible strategy for delivering mental health support alongside clinical/therapy treatment. The aim of this exploratory study was to investigate bibliotherapy facilitators' initial perceptions of the use and acceptance/adoption of text-based chatbots, and their perceptions of a bibliotherapy support chatbot through usability testing. Interviews were conducted to explore the facilitators' relationship with chatbots; the facilitators were then asked to complete a usability study using an early prototype chatbot. Follow-up interviews were conducted after the usability study to discuss further improvements and requirements. Analysis of the interview transcripts reveals that bibliotherapy facilitators were keen on using chatbots to guide them in preparing for and delivering their bibliotherapy sessions. Facilitators stated that the reliability of the chatbot content was a concern and that it is important to design meaningful and quick conversational exchanges. The analysis also stressed the importance of encoding a personality into the chatbot, along with appropriate content, to effectively guide facilitators. Facilitators stressed that appropriate onboarding and affordance measures should be integrated into the system to ensure that users can interact with the chatbot correctly and understand its purpose, and that personalisation should be used for meaningful conversational exchanges.
Patrick McAllister, James Kerr, Michael McTear, Maurice Mulvenna, Raymond Bond, Karen Kirby, Joseph Morning, Danni Glover

CivicBots – Chatbots for Supporting Youth in Societal Participation

Supporting young people in participating in societal development is an important factor in achieving a sustainable future. Digital solutions can be designed to help youth participate in civic activities, such as city planning and legislation. To this end, we are using a human-centered approach to study how digital tools can help youth discuss their ideas on various societal issues. Chatbots are conversational agents that have the potential to trigger and support thought processes as well as online activities. In this context, we are exploring how chatbots – which we call CivicBots – can be used to support youth (16–27 years) in societal participation. We created three scenarios for CivicBots and evaluated them with youth in an online survey (N = 54). Positive perceptions of the youth concerning CivicBots suggest that CivicBots can advance equality and may be able to reach youth better than a real person. On the negative side, CivicBots may cause unpleasant interactions through over-proactive behaviour, and trustworthiness is affected by fears that the bot does not respect users' privacy, or that it provides biased or limited information about societally important issues.
Kaisa Väänänen, Aleksi Hiltunen, Jari Varsaluoma, Iikka Pietilä

Using Theory of Mind to Assess Users’ Sense of Agency in Social Chatbots

Technological advancements in the field of chatbot research are booming. Despite this, it is still difficult to assess which social characteristics a chatbot needs for the user to interact with it as if it had a mind of its own. Review studies have highlighted that the main cause is the low number of research papers dedicated to this question, and the lack of a consistent protocol within the papers that do address it. In the current paper, we suggest the use of a Theory of Mind task to measure the implicit social behaviour users exhibit towards a text-based chatbot. We present preliminary findings suggesting that participants adapt towards this basic chatbot significantly more than when they conduct the task alone (p < .017). This task is quick to administer and does not require a second chatbot for comparison, making it an efficient universal task. With it, a database could be built with scores for all existing chatbots, allowing fast and efficient meta-analyses to discover which characteristics make a chatbot appear more ‘human’.
Evelien Heyselaar, Tibor Bosse

Application Area: Chatbots for Customer Service


Exploring Age Differences in Motivations for and Acceptance of Chatbot Communication in a Customer Service Context

This qualitative interview study explores age differences in perceptions of chatbot communication in a customer service context. Socioemotional selectivity theory and research into technology acceptance suggest that older adults may differ from younger adults in motivations to use chatbots, and in perceived complexity and security of this chatbot communication. The in-depth interviews with older adults (54–81 years; N = 7) and younger adults (19–30 years; N = 7) revealed that both groups were aligned in their prime motivation: They used chatbots to get their (simple) customer queries answered in a fast and convenient manner. However, they seemed to differ in their need for additional human contact. In both age groups, there were participants for whom it was easy to communicate with chatbots, and the two groups were united in their frustrations when the chatbot did not understand and answer their queries. They were aligned as well in the difficulty they experienced in assessing the security of the chatbot. The two age groups may differ in the factors that contribute to perceived ease of use and perceived security. Directions for future research and implications for the implementation of chatbots for customer service are discussed.
Margot J. van der Goot, Tyler Pilgrim

Improving Conversations: Lessons Learnt from Manual Analysis of Chatbot Dialogues

Analysing and improving chatbot dialogues – so-called chatbot ‘training’ – is key to the successful implementation and maintenance of chatbots for customer service. Nevertheless, the details of this practice and what service providers may learn from the analysis of such dialogues are not investigated in current research on chatbots. As a first step towards bridging this gap in existing knowledge, we present a study of the qualitative analysis of chatbot dialogues in the context of the customer service department of a large telecom provider. In total, 406 dialogues, randomly sampled from all chatbot dialogues during a four-week period, were included in the analysis. The analysis concerned the chatbot’s ability to resolve customers’ requests, the quality of the chatbot dialogues, and suggestions for improvements of the chatbot knowledge base generated through the analysis. The findings shed light on the characteristics of successful and unsuccessful chatbot dialogues and the kinds of improvements that may be derived from such analysis. On the basis of the findings, we summarize implications for theory and practice, and suggest future research.
Knut Kvale, Olav Alexander Sell, Stig Hodnebrog, Asbjørn Følstad

Conversational Repair in Chatbots for Customer Service: The Effect of Expressing Uncertainty and Suggesting Alternatives

Due to the complexity of natural language, chatbots are prone to misinterpreting user requests. Such misinterpretations may lead the chatbot to provide answers that are not adequate responses to the user's request – so-called false positives – potentially leading to conversational breakdown. A promising repair strategy in such cases is for the chatbot to express uncertainty and suggest likely alternatives when prediction confidence falls below a threshold. However, little is known about how such repair affects chatbot dialogues. We present findings from a study where a solution for expressing uncertainty and suggesting likely alternatives was implemented in a live chatbot for customer service. Chatbot dialogues (N = 700) were sampled at two points in time – immediately before and after implementation – and compared by conversational quality. Preliminary analyses suggest that introducing such a solution for conversational repair may substantially reduce the proportion of false positives in chatbot dialogues. At the same time, expressing uncertainty and suggesting likely alternatives does not seem to strongly affect the dialogue process or the likelihood of reaching a successful outcome. Based on the findings, we discuss theoretical and practical implications and suggest directions for future research.
Asbjørn Følstad, Cameron Taylor
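The repair strategy described above — answer directly when intent confidence is high, otherwise express uncertainty and offer likely alternatives — can be sketched in a few lines. The intent names, scores, threshold, and message wording below are invented examples, not the deployed solution:

```python
# Confidence-threshold conversational repair: below the threshold, the bot
# expresses uncertainty and suggests the top alternatives instead of
# risking a false positive. All names and values are hypothetical.

def respond(predictions, threshold=0.7, n_alternatives=2):
    """predictions: list of (intent, confidence), sorted by confidence desc."""
    top_intent, confidence = predictions[0]
    if confidence >= threshold:
        return f"Answering: {top_intent}"
    options = ", ".join(intent for intent, _ in predictions[:n_alternatives])
    return f"I'm not sure I understood. Did you mean one of: {options}?"

# Confident prediction -> direct answer; low confidence -> repair move.
print(respond([("billing_question", 0.91), ("cancel_subscription", 0.05)]))
print(respond([("billing_question", 0.41), ("cancel_subscription", 0.35)]))
```

The threshold trades false positives against extra clarification turns, which is exactly the balance the before/after dialogue comparison examines.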

Working Together with Conversational Agents: The Relationship of Perceived Cooperation with Service Performance Evaluations

Conversational agents are gradually being deployed by organizations in service settings to communicate with and solve problems together with consumers. The current study investigates how consumers’ perceptions of cooperation with conversational agents in a service context are associated with their perceptions about agents’ anthropomorphism, social presence, the quality of the information provided by an agent, and the agent service performance. An online experiment was conducted in which participants performed a service-oriented task with the assistance of conversational agents developed specifically for the study and evaluated the performance and attributes of the agents. The results suggest a direct positive link between perceiving a conversational agent as cooperative and perceiving it to be more anthropomorphic, with higher levels of social presence and providing better information quality. Moreover, the results also show that the link between perceiving an agent as cooperative and the agent’s service performance is mediated by perceptions of the agent’s anthropomorphic cues and the quality of the information provided by the agent.
Guy Laban, Theo Araujo

Application Area: Chatbots in Education


Chatbots for the Information Acquisition at Universities – A Student’s View on the Application Area

Chatbots are currently widely used in many different application areas. Especially for workplace-relevant topics, e.g., customer support or information acquisition, they represent a new type of natural language-based human-computer interface. Nonetheless, chatbots in university settings – e.g., for providing organizational support about studies, courses, and examinations – have received only limited attention. This branch of research is just emerging in the scientific community. Therefore, we conducted a questionnaire-based survey among 166 students of various disciplines and educational levels at a German university. In doing so, we wanted to identify (1) the requirements for implementing a chatbot as well as (2) relevant topics and corresponding questions that chatbots should address. Our findings indicate that chatbots are suitable for the university context and that many students are willing to use them.
Raphael Meyer von Wolff, Jonas Nörtemann, Sebastian Hobert, Matthias Schumann

A Configurable Agent to Advance Peers’ Productive Dialogue in MOOCs

Chatbot technology can greatly contribute to the creation of personalized and engaging learning activities. Still, more experimentation is needed on how to integrate and use such agents in real-world educational settings and, especially, in large-scale learning environments such as MOOCs. This paper presents the prototype design of a teacher-configurable conversational agent service aiming to scaffold synchronous collaborative activities in MOOCs. The architecture of the conversational agent system is followed by a pilot evaluation study, which was conducted in the context of a postgraduate computer science course on Learning Analytics. The preliminary study findings reveal an overall favorable student opinion regarding the ease of use and user acceptance of the system.
Stergios Tegos, Stavros Demetriadis, Georgios Psathas, Thrasyvoulos Tsiatsos

Small Talk Conversations and the Long-Term Use of Chatbots in Educational Settings – Experiences from a Field Study

In this paper, we analyze the use of small talk conversations based on a dialogue analysis of a long-term field study in which university students regularly interacted with a chatbot over a 3-month period in an educational setting. In particular, we analyze (1) how often the students engage with small talk topics during the field study, and (2) whether a larger amount of small talk conversations correlates with the students’ engagement in learning activities within our chatbot-based learning system, i.e., whether engaging in small talk correlates with a more intensive use of the chatbot during our field test. Our results suggest that small talk conversations might play an important role in the design of our chatbot, as students who chat about small talk topics also frequently chat about learning-related topics. Nevertheless, the overall impact of the small talk capabilities of chatbots should not be overestimated.
Sebastian Hobert, Florian Berens
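The correlation question above — do students who engage in more small talk also send more learning-related messages? — reduces to a correlation over per-student message counts. A minimal sketch; the counts below are invented, not the field-study data:

```python
# Pearson correlation between per-student small-talk and learning-related
# message counts. The two count vectors are hypothetical examples.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

small_talk = [2, 5, 1, 8, 4, 0]      # small-talk messages per student
learning = [10, 22, 6, 30, 18, 3]    # learning-related messages per student

print(round(pearson(small_talk, learning), 3))  # close to 1 in this toy data
```

A strong positive coefficient in real log data would support the observation that small-talk users are also the more intensive learners, though, as the abstract cautions, correlation alone does not establish that small talk drives engagement.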

