Comparing Data from Chatbot and Web Surveys: Effects of Platform and Conversational Style on Survey Response Quality
Kim et al. CHI 2019, May 4--9, 2019, Glasgow, Scotland, UK

ABSTRACT
This study explores the feasibility of a text-based virtual agent as a new survey method for overcoming web surveys' common response-quality problems, which stem from respondents' inattention. To this end, we conducted a 2 (platform: web vs. chatbot) × 2 (conversational style: formal vs. casual) experiment and drew on satisficing theory to compare the data quality of the responses. We found that participants in the chatbot survey, compared with those in the web survey, were more likely to produce differentiated responses and less likely to satisfice; the chatbot survey thus yielded higher-quality data. Moreover, when a casual conversational style was used, participants were less likely to satisfice, although this effect appeared only in the chatbot condition. These results imply that conversational interactivity arises when a chat interface is paired with messages delivered in an effective tone. Based on an analysis of the qualitative responses, we also show that a chatbot can perform part of a human interviewer's role by applying effective communication strategies.