DOI: 10.1145/3547522.3547678
Extended abstract

XAI for learning: Narrowing down the digital divide between “new” and “old” experts

Published: 8 October 2022

ABSTRACT

Conventional eXplainable AI (XAI) approaches are often ineffective at supporting decision-makers across domains. In some instances, they can even induce automation bias or algorithmic aversion, or are simply ignored as a redundant feature. Drawing on the cognitive psychology literature, we outline a strategy for tailoring XAI interface design so that it has long-lasting educational value. We suggest features that could support the development of domain-related and technical skills, thereby narrowing the digital divide between “new” and “old” experts. Lastly, we propose an intermittent explainability approach that could help strike a balance between seamless and cognitively engaging explanations.


Published in

NordiCHI '22: Adjunct Proceedings of the 2022 Nordic Human-Computer Interaction Conference
October 2022, 216 pages
ISBN: 9781450394482
DOI: 10.1145/3547522

Copyright © 2022 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Qualifiers

• Extended abstract
• Research
• Refereed limited

Acceptance rates

Overall acceptance rate: 379 of 1,572 submissions, 24%
