Abstract
This article describes three experiments that investigate the possibility of using structured nonspeech audio messages, called earcons, to provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and four levels was created, with an earcon for each node, and rules were defined for the creation of the hierarchical earcon at each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results of the first experiment showed that participants could identify their location with 81.5% accuracy, indicating that earcons are a powerful method of communicating hierarchy information. One proposed use for such navigation cues is in telephone-based interfaces (TBIs), where navigation is a problem. The first experiment did not address the particular problems of earcons in TBIs, such as "does the lower quality of sound over the telephone reduce recall rates?", "can users remember earcons over a period of time?", and "what effect does training type have on recall?" A second experiment was conducted, and results showed that sound quality did lower the recall of earcons. However, a redesign of the earcons overcame this problem, with 73% recalled correctly. Participants could still recall the earcons at this level after a week had passed. Training type also affected recall: with personal training, participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide good navigation cues for TBIs. The final experiment used compound, rather than hierarchical, earcons to represent the hierarchy from the first experiment. Results showed that with sounds constructed in this way participants could recall 97% of the earcons. These experiments have developed our general understanding of earcons: a hierarchy three times larger than any previously created was tested, and this was also the first test of the recall of earcons over time.
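The core idea behind hierarchical earcons can be illustrated with a minimal sketch: each level of the hierarchy contributes one musical attribute, and a node's earcon inherits and extends the motifs of its ancestors, so a listener can recover their location level by level. The node names, motif labels, and construction rule below are illustrative assumptions, not the actual rules used in the experiments.

```python
# Sketch of hierarchical earcon construction (illustrative only).
# Assumption: each level adds one attribute (e.g., timbre at level 1,
# rhythm at level 2), and a node's earcon is the motif sequence
# accumulated along the path from the root to that node.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    motif: str                       # attribute contributed at this level
    children: list = field(default_factory=list)

def earcon(path):
    """An earcon for a node is the sequence of motifs along its path,
    so deeper nodes produce longer, more specific sounds."""
    return [node.motif for node in path]

# Toy two-level fragment of a menu hierarchy (the paper's hierarchy
# had 27 nodes over four levels; only the structure matters here).
root = Node("Menu", "piano")
fax = Node("Fax", "piano+rhythm-A")
email = Node("Email", "piano+rhythm-B")
root.children = [fax, email]

print(earcon([root, fax]))           # ['piano', 'piano+rhythm-A']
```

Under this scheme, sibling nodes share all but their last motif, which is what lets listeners judge both their depth and their branch from a single sound.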