DOI: 10.1145/3411764.3445781 · CHI Conference Proceedings
Research Article · Honorable Mention

ArgueTutor: An Adaptive Dialog-Based Learning System for Argumentation Skills

Published: 07 May 2021

ABSTRACT

Techniques from natural language processing (NLP) offer opportunities to design new dialog-based forms of human-computer interaction and to analyze the argumentation quality of texts. This can be leveraged to provide students with adaptive tutoring during persuasive writing exercises. To test whether individual tutoring of students’ argumentation helps them write more convincing texts, we developed ArgueTutor, a conversational agent that tutors students with adaptive argumentation feedback throughout their learning journey. In a study with 55 students, we compared ArgueTutor to a traditional writing tool and found that students using ArgueTutor wrote more convincing texts with higher argumentation quality than those using the alternative approach. The measured levels of enjoyment and ease of use are promising for the use of our tool in traditional learning settings. Our results indicate that dialog-based learning applications combined with NLP text feedback can foster students’ writing skills.
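As a rough illustration of the general idea of combining a dialog turn with NLP-based argumentation feedback, consider the following minimal Python sketch. It is hypothetical and heavily simplified (a marker-based claim/premise heuristic plus canned feedback rules); it is not ArgueTutor’s actual implementation, whose NLP models and dialog management are described in the full paper.

# Hypothetical sketch: one tutoring turn that scores a student's draft with a
# naive, discourse-marker-based claim/premise heuristic and phrases adaptive feedback.
import re

CLAIM_MARKERS = {"therefore", "thus", "hence", "in my opinion", "i believe"}
PREMISE_MARKERS = {"because", "since", "for example", "due to"}

def label_sentence(sentence: str) -> str:
    """Very rough argument-component label based on discourse markers."""
    s = sentence.lower()
    if any(m in s for m in PREMISE_MARKERS):
        return "premise"
    if any(m in s for m in CLAIM_MARKERS):
        return "claim"
    return "other"

def feedback_turn(draft: str) -> str:
    """One dialog turn: analyze the draft and return adaptive feedback text."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", draft.strip()) if s]
    labels = [label_sentence(s) for s in sentences]
    claims, premises = labels.count("claim"), labels.count("premise")
    if claims == 0:
        return "I could not find a clear claim yet. Try stating your position explicitly."
    if premises < claims:
        return "You state claims, but some lack support. Can you add a reason or an example?"
    return "Nice! Your claims are supported. Consider addressing a counterargument next."

if __name__ == "__main__":
    draft = ("Remote learning should stay optional. I believe flexibility matters. "
             "Because students have different schedules, a fixed format excludes many of them.")
    print(feedback_turn(draft))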


Supplemental Material

3411764.3445781_videopreview.mp4 (MP4, 12.2 MB)


Published in

CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
May 2021, 10862 pages
ISBN: 9781450380966
DOI: 10.1145/3411764
            Copyright © 2021 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Qualifiers

            • research-article
            • Research
            • Refereed limited

            Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%
