Vibrational Artificial Subtle Expressions: Conveying System's Confidence Level to Users by Means of Smartphone Vibration

ABSTRACT
Artificial subtle expressions (ASEs) are machine-like expressions used to convey a system's confidence level to users intuitively. So far, auditory ASEs using beep sounds, visual ASEs using LEDs, and motion ASEs using robot movements have been implemented and shown to be effective. In this paper, we propose a novel type of ASE that uses vibration (vibrational ASEs). We implemented the vibrational ASEs on a smartphone and conducted experiments to confirm whether they can convey a system's confidence level to users in the same way as the other types of ASEs. The results clearly showed that vibrational ASEs were able to accurately and intuitively convey the designed confidence level to participants, demonstrating that ASEs can be applied in a variety of applications in real environments.
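Prior audio ASEs used a flat sound to signal high confidence and a sound with a decreasing contour to signal low confidence. As a rough illustration of how a vibrational analogue might be encoded on a smartphone, the sketch below computes per-step vibration amplitudes (on Android's 0–255 amplitude scale, as consumed by `VibrationEffect.createWaveform`). The flat-versus-decreasing mapping and the function name are assumptions for illustration, not the paper's actual implementation.

```python
def ase_envelope(confident: bool, steps: int = 10, max_amp: int = 255) -> list[int]:
    """Return per-step vibration amplitudes for a vibrational ASE.

    Flat envelope = high confidence; linearly decreasing envelope =
    low confidence (an assumed mapping, mirroring prior audio ASEs).
    Amplitudes are in 0-255, the range Android's
    VibrationEffect.createWaveform accepts.
    """
    if confident:
        # High confidence: constant amplitude throughout.
        return [max_amp] * steps
    # Low confidence: amplitude ramps down linearly toward zero.
    return [round(max_amp * (steps - i) / steps) for i in range(steps)]
```

On a real device, the returned list would be paired with a list of equal-length segment durations and passed to the platform's waveform-vibration API.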