research-article
DOI: 10.1145/3173574.3174052

Vibrational Artificial Subtle Expressions: Conveying System's Confidence Level to Users by Means of Smartphone Vibration

Published: 21 April 2018

ABSTRACT

Artificial subtle expressions (ASEs) are machine-like expressions used to convey a system's confidence level to users intuitively. So far, auditory ASEs using beep sounds, visual ASEs using LEDs, and motion ASEs using robot movements have been implemented and shown to be effective. In this paper, we propose a novel type of ASE that uses vibration (vibrational ASEs). We implemented the vibrational ASEs on a smartphone and conducted experiments to confirm whether they can convey a system's confidence level to users in the same way as the other types of ASEs. The results clearly showed that vibrational ASEs were able to accurately and intuitively convey the designed confidence level to participants, demonstrating that ASEs can be applied in a variety of applications in real environments.
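As an illustration of the idea above, the following is a minimal sketch of how a confidence level might be mapped to a smartphone vibration pattern. The concrete parameters (segment count, durations, amplitudes, and the flat-vs-decaying design) are illustrative assumptions in the spirit of the earlier auditory and visual ASEs, not the paper's actual implementation.

```python
def vibration_ase(confident: bool, steps: int = 5, step_ms: int = 100):
    """Return a vibration pattern as (duration_ms, amplitude 0-255) segments.

    Hypothetical design: a flat pattern signals high confidence, while a
    decaying pattern signals low confidence, analogous to the flat vs.
    decreasing auditory ASEs described in prior ASE work.
    """
    if confident:
        # High confidence: constant-amplitude segments.
        return [(step_ms, 200)] * steps
    # Low confidence: amplitude decays linearly across the segments.
    return [(step_ms, int(200 * (steps - i) / steps)) for i in range(steps)]
```

On Android, such a segment list could be passed to a waveform-style vibration API; the function itself only constructs the pattern, so the design stays platform-independent.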


Published in

CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
April 2018, 8489 pages
ISBN: 9781450356206
DOI: 10.1145/3173574
Copyright © 2018 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
