
Open Access 2022 | Original Paper | Book Chapter

Accessible User Interface Concept for Business Meeting Tool Support Including Spatial and Non-verbal Information for Blind and Visually Impaired People

Authors: Reinhard Koutny, Klaus Miesenberger

Published in: Computers Helping People with Special Needs

Publisher: Springer International Publishing


Abstract

Business meetings play an essential role in many people’s work life. Although business meetings have changed over time, with the tools used to support them slowly moving from traditional means such as flipcharts to more modern, digital alternatives, some aspects have stayed the same: visual information is used to gather thoughts, support arguments and lead the discussion. This kind of information used to be completely inaccessible to blind and visually impaired people (BVIP) and, for the most part, still is. Even though the movement towards digitalization facilitates accessibility, no fully accessible tool support for business meetings is available. Additionally, non-verbal communication and spatial information are heavily used as well: people use facial expressions and gestures, and they refer to objects or other people by pointing at them. BVIP miss out on this type of information as well. Ultimately, BVIP are at a significant disadvantage during business meetings and very often throughout their professional life. Research efforts have tried to mitigate single aspects of this situation, but no comprehensive user interface approach has been developed. This paper presents a user interface approach, developed as part of the MAPVI project [1], that allows BVIP to access the visual, non-verbal and spatial information of business meetings in a user-friendly manner, using only off-the-shelf hardware. Additionally, it presents results of user tests of this novel user interface.

1 Introduction

Communication between multiple humans talking to each other in person does not consist only of spoken language and the auditory channel; it includes multiple channels. The most obvious one is the visual sense, which provides additional information on multiple levels and is often referred to as a key part of “non-verbal communication”. People constantly refer to the real world, often by gestures, by referring to an object’s visual appearance or its location in space, or by a combination of both, commonly called deictic gestures. Common artifacts described or referred to during conversations are objects, persons, locations, or processes/actions.
Non-verbal communication also plays a crucial role in face-to-face conversations. In particular, facial expressions, postures, gestures and body language in general convey manifold complementary information, describing the underlying meaning of the spoken words and the feelings and emotions of the speaker and of the other people taking part in a group conversation. They often put statements into perspective and sometimes even negate their literal meaning (e.g. sarcasm).
However, not every person has the same capabilities, and some are restricted in the degree to which they can use their senses for communication. For instance, BVIP cannot use, or are limited in using, their visual sense and are therefore at risk of missing important information. Group conversations in particular are a major problem. In a conversation involving only two persons, one blind and one sighted, the sighted person will most likely adapt to the situation and try to avoid gestures that require the visual sense, or additionally describe important information, e.g. gestures, verbally.
Group conversations, on the other hand, where most of the participants are sighted, tend to be much more challenging in terms of equal access. Sighted people frequently fall back on behavioral patterns and make heavy use of gestures and body language when they specifically talk to another sighted person in the group, while the blind person still participates passively, listening and missing out on a decisive chunk of information.
In particular, group conversations at business meetings, with all their visual artifacts and references to the real world, introduce a vast multi-dimensional information space, which blind persons cannot adequately explore by traditional methods using only their acoustic sense, due to the enormous complexity of this kind of information.
A broad variety of tools to support business meetings is available. They help to record discussions, structure thoughts and support presenters with their arguments. Traditionally, analog whiteboards, sketching tables or flipcharts were used, while in recent years a slow shift towards digital means has taken place. Modern meeting rooms are equipped, as a minimum standard, with digital projectors to show PowerPoint presentations to other participants, but wall-mounted touchscreens are commonly used as well. While the shift to digital tools theoretically provides accessibility for BVIP to some extent, the reality looks very different. Most tools, file formats and documents do not consider this target group adequately, which leads to a situation where sometimes only textual information, and alt-texts if provided, can be read, but the spatial arrangement of items on a slide or whiteboard remains inaccessible. However, this arrangement is crucial to understanding the context of the textual information, which is therefore confusing at best and simply inaccessible most of the time.
For decades, researchers have developed approaches to solve or at least mitigate these issues. Earlier approaches used mice and keyboards, some even optically tracked pens [2] and PDAs [3], to feed information into the system and share it on an electronic whiteboard. More recently, researchers have developed an approach using a back-projected interactive table and interactive vertical screens. A more detailed overview can be found in [3, 4].
Still, the visual aids of business meetings are only one source of information. Spatial aspects of the meeting and the venue, including the positions of physical objects and other participants, are much harder to present understandably, but they give important context to the conversation as well. Research efforts have been concerned with world exploration techniques for BVIP [5–9]. One issue that has been identified repeatedly, also in previous projects [10], is that the auditory channel of a BVIP can be overloaded quickly, as this channel is already used more intensely in general than by a sighted person. Therefore, it is of utmost importance for a user interface to limit the amount of information transmitted via this channel. Otherwise, the user ends up having to choose between operating the user interface and listening to the conversation; in business meetings, this is barely an option. Consequently, other approaches were explored. One direction is the adaptation of regular braille displays, which usually display one-dimensional text in a haptic manner. HyperBraille was used in this context [11]; in contrast to regular braille displays, it offers a two-dimensional array of braille elements, which allows 2D content to be displayed, and it was even extended with audio notifications by adopting tangible interaction concepts [10]. This device, however, comes with some drawbacks, most notably the relatively low resolution, the size and weight, and the high price.
Therefore, the goal of this work is to provide a user interface concept for accessing business meeting information, including spatial and non-verbal aspects. The tracking, especially of non-verbal information, is part of the MAPVI project but out of the scope of this paper. Functional prototypes, co-developed and tested with the BVIP community and built only from off-the-shelf hardware, help to determine the benefit of this approach.

3 The Scenario

Business meetings can look different depending on the context, the company and the profession. Acknowledging this fact, the MAPVI project has defined a scenario as an example business meeting which is considered to offer a broad variety of types of information: a brainstorming meeting. In this scenario, multiple persons are gathered in the meeting room, and multiple persons participate remotely. One or more of these participants can be BVIP. The meeting room is equipped with multiple screens holding information. In particular, these screens contain notes arranged on 2D planes. The position of these notes carries a certain meaning, and the notes can also be grouped and linked to each other. In the meeting room, participants are primarily gathered around a table, but they can also walk up to the screens and point at them to emphasize an argument or to moderate the discussion.
Looking at this scenario, two different 2D information spaces can be identified:
One information space is the meeting room itself, where participants have certain positions, perform gestures and refer to objects in the room by pointing at them.
The second information space is the whiteboards: notes are arranged such that their arrangement carries a certain meaning, depending on the context (a minimal data-model sketch of both spaces is given below).
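
To make the distinction concrete, the following sketch models the two information spaces as simple data structures. It is a minimal illustration under assumed names and units (Note, Participant, normalized board coordinates, room positions in metres); it is not the MAPVI data model.

```kotlin
// Hypothetical, simplified data model of the two 2D information spaces.
// Class names, fields and units are illustrative assumptions, not the MAPVI schema.

/** A note on a whiteboard; its position on the board plane carries meaning. */
data class Note(
    val id: String,
    val title: String,
    val body: String,
    val x: Float,                             // horizontal position on the board (0..1, normalized)
    val y: Float,                             // vertical position on the board (0..1, normalized)
    val groupId: String? = null,              // optional grouping of related notes
    val linkedTo: List<String> = emptyList()  // ids of linked notes
)

/** A whiteboard: the second information space. */
data class Whiteboard(val id: String, val notes: List<Note>)

/** A participant in the meeting room: part of the first information space. */
data class Participant(
    val name: String,
    val x: Float, val y: Float,               // position in the room (metres)
    val currentGesture: String? = null        // e.g. "pointing at screen 1"
)

/** The meeting room itself, with people and the boards they refer to. */
data class MeetingRoom(
    val participants: List<Participant>,
    val whiteboards: List<Whiteboard>
)
```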

4 User Interface Concept

While parts of this information can be described textually and therefore presented to BVIP in an accessible way, usually with the help of screen readers in combination with text-to-speech output or braille displays, the implicit information deriving from the arrangement of entities in the two 2D information spaces cannot be conveyed in an understandable manner this way. Therefore, the user interface concept adds dedicated user interfaces to address this issue.
In particular, three user interfaces have been created to let participants of meetings, including blind and visually impaired ones, not only gain access to all information of such meetings, but also contribute equally to the conversation.

4.1 Web Interface

The first user interface is the web interface, which displays a visual whiteboard to sighted people but also gives blind people access to all textual information of the whiteboard, i.e. the notes. Besides that, it allows blind people in particular to retrieve information about real-world aspects of the meeting, including the participants, their names and the gestures they are currently performing. This web interface is fully accessible for BVIP using screen readers as well as for sighted keyboard users. Furthermore, it is responsive, which means that users can operate it on tablets, smartphones or regular screens with high zoom levels to compensate for weak eyesight.

4.2 Smartphone

While the web interface grants access to a broad variety of information and is fully accessible, some types of information, in particular spatial information, simply cannot be conveyed to BVIP in an understandable manner this way. As there are two different 2D information spaces, two separate user interfaces were created to make both of them accessible. The first uses off-the-shelf smartphones running Google ARCore [12] to create a haptic, virtual representation of the whiteboard in front of the user. This representation can be described as a virtual wall erected in front of the user, with objects on it. These objects correspond in position to the notes on the whiteboard of the web interface. The user can explore the same whiteboard with the smartphone in hand by simply moving the hand. If the hand approaches a note, the phone starts vibrating in short bursts that get stronger the closer it gets. If the phone virtually touches a note, the device vibrates continuously and additionally announces the name of the note via text-to-speech. In addition to exploring the whiteboard, the user can perform several actions on a virtual note using the hardware volume buttons of the phone with short and long presses. One action is to retrieve additional information, such as the body text of the note. Another is moving and rotating the note by pressing and holding a button while moving the phone. This way, blind and visually impaired users can intuitively manipulate the position and orientation of notes, which would not be possible otherwise. Last but not least, users can highlight single notes at the press of a button, or show a visual cursor for sighted people to highlight whole regions. This is helpful and necessary so that BVIP are not limited to the role of retrieving information and participating in meetings, but can also moderate the meeting, create their own information and manipulate existing information.
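
To make the exploration feedback described above more concrete, the following sketch maps the distance between the tracked phone position and the nearest virtual note to vibration bursts and a text-to-speech announcement. It is a minimal sketch using standard Android APIs (Vibrator, VibrationEffect, TextToSpeech); the distance thresholds, the Vec3 helper and the overall structure are illustrative assumptions, not the MAPVI implementation.

```kotlin
import android.os.VibrationEffect
import android.os.Vibrator
import android.speech.tts.TextToSpeech
import kotlin.math.sqrt

// Hypothetical sketch of the proximity feedback described in Sect. 4.2.
// Thresholds, names and structure are illustrative assumptions.

data class Vec3(val x: Float, val y: Float, val z: Float)

fun distance(a: Vec3, b: Vec3): Float {
    val dx = a.x - b.x; val dy = a.y - b.y; val dz = a.z - b.z
    return sqrt(dx * dx + dy * dy + dz * dz)
}

class NoteFeedback(private val vibrator: Vibrator, private val tts: TextToSpeech) {

    private var lastAnnounced: String? = null

    /** Called once per tracking frame with the current phone position (metres). */
    fun onFrame(phonePos: Vec3, notes: Map<String, Vec3>) {
        val nearest = notes.minByOrNull { distance(phonePos, it.value) } ?: return
        val d = distance(phonePos, nearest.value)
        when {
            d < 0.05f -> {
                // Virtually touching a note: continuous vibration and announce its title once.
                vibrator.vibrate(VibrationEffect.createOneShot(500, VibrationEffect.DEFAULT_AMPLITUDE))
                if (lastAnnounced != nearest.key) {
                    tts.speak(nearest.key, TextToSpeech.QUEUE_FLUSH, null, nearest.key)
                    lastAnnounced = nearest.key
                }
            }
            d < 0.30f -> {
                // Approaching a note: short bursts that get stronger the closer the hand gets.
                val amplitude = (255 * (1f - d / 0.30f)).toInt().coerceIn(1, 255)
                vibrator.vibrate(VibrationEffect.createOneShot(50, amplitude))
            }
            else -> lastAnnounced = null
        }
    }
}
```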

4.3 Smartwatch

Since the smartphone UI only covers the information space of the whiteboards, a third UI has been developed using off-the-shelf smartwatches. It allows blind and visually impaired participants to explore the physical meeting room. The user wears the smartwatch on his or her hand and receives information about persons or objects he or she is pointing at. By sweeping the hand across the room, the user can explore the whole room: the smartwatch starts to vibrate if a person or object lies in the pointing direction. It is also possible to search for specific persons or objects; this can be triggered in the web interface. The smartwatch then switches to a dedicated mode in which the vibration gets stronger the closer the pointing direction is to the actual direction of the object or person.
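
The following sketch illustrates the pointing logic: the angle between the tracked pointing direction of the hand and the direction from the hand towards a tracked person or object determines whether, and how strongly, the watch vibrates. The vector type, the angular tolerance and the amplitude mapping are illustrative assumptions; the actual tracking of positions and directions is provided by the MAPVI tracking system and is out of scope here.

```kotlin
import kotlin.math.acos
import kotlin.math.sqrt

// Hypothetical sketch of the smartwatch pointing feedback (Sect. 4.3).
// Vector type, thresholds and amplitude mapping are illustrative assumptions.

data class Vec3(val x: Float, val y: Float, val z: Float)

fun dot(a: Vec3, b: Vec3) = a.x * b.x + a.y * b.y + a.z * b.z
fun norm(a: Vec3) = sqrt(dot(a, a))

/** Angle in radians between the pointing direction and the direction to a target. */
fun angleTo(pointing: Vec3, handPos: Vec3, targetPos: Vec3): Float {
    val toTarget = Vec3(targetPos.x - handPos.x, targetPos.y - handPos.y, targetPos.z - handPos.z)
    val cos = dot(pointing, toTarget) / (norm(pointing) * norm(toTarget))
    return acos(cos.coerceIn(-1f, 1f))
}

/**
 * Exploration mode: returns a vibration amplitude (1..255) when some target lies
 * roughly in the pointing direction, or 0 for no vibration. Search mode would use
 * the same mapping, but for a single selected target only.
 */
fun vibrationAmplitude(pointing: Vec3, handPos: Vec3, targets: List<Vec3>): Int {
    val maxAngle = Math.toRadians(15.0).toFloat()   // assumed "pointing at" tolerance
    val best = targets.minOfOrNull { angleTo(pointing, handPos, it) } ?: return 0
    if (best > maxAngle) return 0
    return (255 * (1f - best / maxAngle)).toInt().coerceIn(1, 255)
}
```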

5 Methodology

User tests are undertaken in three stages:

5.1 Stage 1

In collaboration with two peer researchers, the first prototypes of these three user interfaces were tested, examined and improved.

5.2 Stage 2

The user interfaces described above will be tested in the next step by a small group (about five persons) of other blind users. The goal is to further improve usability and accessibility and to avoid an “over-optimization” for only a few users (the peer researchers). Two of these user studies are considered useful for each of the three user interfaces (six user studies in total). Peer researchers are involved and support the user studies. Between the individual user studies, the user interfaces will be further improved based on the findings of the previous study. A single iteration of a user study consists of five parts:
  • Training: The project is explained to the participants of the user studies and a short introduction to the concept of operation of the user interface is given.
  • Evaluation: Participants are given tasks to perform. Recordings are made with their consent (video, time, error rate, …).
  • Feedback: Participants fill out a questionnaire. Here they can state what they liked, what they did not like, and what parts of the user interface they would improve and how.
  • Analysis: The feedback as well as the recordings are analyzed and aggregated. Suggestions for improvement are derived and weighted based on this.
  • UI improvement: The improvement suggestions are implemented for the next iteration of the user studies, depending on impact, effort and other criteria.

5.3 Stage 3

Finally, a large user study will be conducted (possibly divided into several sessions, depending on the COVID-19 regulations in force at the time). The goal is to have the user interfaces tested by a group of users that is as large as possible (about 10 people) in order to obtain quantitatively meaningful insights into the added value of these user interfaces. Peer researchers are involved and support the user studies.

Current Status of User Tests and Preliminary Results

User studies are being held and are about to enter stage 2. The first stage showed promising results for the overall user interface concept as well as for the three individual user interfaces. The web interface, while still a novelty as a fully accessible whiteboard and meeting support tool, is the most traditional of the three in terms of concept of operation. Therefore, it is considered by both peer researchers to be on a very high level of accessibility and usability, with only minor suggestions for improvement, e.g. regarding the naming of menu elements. These suggestions are being implemented in the next iteration of the prototype.
The feedback regarding the smartphone user interface, which allows whiteboards to be explored haptically, has been positive as well. Both peer researchers see great potential in the user interface concept for exploring 2D spatial information in general and have suggested adapting it to other contexts as well, for example education. However, due to the nature of this user interface, there is more room for improvement, and multiple suggestions were made, which will be implemented after prioritization. One suggestion was that the range of motion should be configurable by the user, i.e. the user should be able to define how large the area in front of him or her is that is used to represent the whiteboard. A smaller area has the benefit that less space around the user is required to explore the whole whiteboard, but it also means that notes are closer to each other, which seems to increase the error rate, especially with more notes on the whiteboard. Another request was the option of a guided walkthrough through all of the notes.
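
The configurable range of motion essentially changes the scale factor between the normalized whiteboard coordinates and the physical exploration area in front of the user. The following minimal sketch, with assumed names and units, makes the trade-off explicit: halving the exploration area also halves the physical distance between neighbouring notes, which is consistent with the observation that smaller areas seem to increase the error rate.

```kotlin
import kotlin.math.sqrt

// Hypothetical sketch: map normalized whiteboard coordinates (0..1) onto a
// configurable physical exploration area in front of the user (in metres).
// Names and units are illustrative assumptions.

data class BoardPoint(val x: Float, val y: Float)      // normalized, 0..1
data class PhysicalPoint(val x: Float, val y: Float)   // metres, relative to a start pose

class RangeOfMotion(var widthMetres: Float = 0.8f, var heightMetres: Float = 0.6f) {

    /** Where on the virtual wall a note ends up, given its board position. */
    fun toPhysical(p: BoardPoint) = PhysicalPoint(p.x * widthMetres, p.y * heightMetres)

    /** Physical distance between two notes; shrinks linearly with the configured area. */
    fun physicalDistance(a: BoardPoint, b: BoardPoint): Float {
        val pa = toPhysical(a); val pb = toPhysical(b)
        val dx = pa.x - pb.x; val dy = pa.y - pb.y
        return sqrt(dx * dx + dy * dy)
    }
}

fun main() {
    val wide = RangeOfMotion(0.8f, 0.6f)
    val narrow = RangeOfMotion(0.4f, 0.3f)
    val n1 = BoardPoint(0.2f, 0.5f); val n2 = BoardPoint(0.3f, 0.5f)
    // Halving the area halves the physical distance between the same two notes.
    println(wide.physicalDistance(n1, n2))    // 0.08 m
    println(narrow.physicalDistance(n1, n2))  // 0.04 m
}
```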
Regarding the smartwatch user interface, which allows the actual meeting room to be explored, the feedback was again quite positive and the interface was found to be rather intuitive. However, one suggestion was to add, in addition to the vibrations, acoustic feedback when an object or person is directly in the pointing direction.
Currently, stage one of the user tests has ended and stage two is about to start. Due to COVID-related issues, stages two and three were postponed, with stage two starting in mid-April.

6 Conclusion

In this extended abstract, we presented a complete user interface concept for accessible business meetings with blind and visually impaired people as equal collaborators. This concept does not treat blind and visually impaired participants as persons who can only receive information that others have created; it allows them to understand spatial information as well as to create and manipulate it. In addition to the concept, we presented functional prototypes for each user interface and preliminary results of the ongoing user tests. We expect the full results of the user tests, including stages two and three, to be available in summer.

Acknowledgements

This project (MAPVI) including this publication was funded by the Austrian Science Fund (FWF): I 3741-N31.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References
1. Gunther, S., et al.: MAPVI. In: Makedon, F. (ed.) Proceedings of PETRA '19, the 12th ACM International Conference on Pervasive Technologies Related to Assistive Environments, Rhodes, Greece, 5–7 June 2019, pp. 343–352. ACM, New York (2019). https://doi.org/10.1145/3316782.3322747
2. Elrod, S., et al.: Liveboard. In: Bauersfeld, P., Bennett, J., Lynch, G. (eds.) Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '92), pp. 599–607. ACM Press, New York (1992). https://doi.org/10.1145/142750.143052
3. Magerkurth, C., Prante, T.: „Metaplan“ für die Westentasche: mobile Computerunterstützung für Kreativitätssitzungen. In: Oberquelle, H., Oppermann, R., Krause, J. (eds.) Mensch & Computer 2001: 1. Fachübergreifende Konferenz, pp. 163–171. Vieweg+Teubner Verlag, Wiesbaden (2001). https://doi.org/10.1007/978-3-322-80108-1_18
4. Lahlou, S. (ed.): Designing User Friendly Augmented Work Environments: From Meeting Rooms to Digital Collaborative Spaces. Computer Supported Cooperative Work. Springer, London (2009)
5. Bolt, R.A.: Put-that-there. In: Thomas, J.J., Ellis, R.A., Kriloff, H.Z. (eds.) Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '80), pp. 262–270. ACM Press, New York (1980). https://doi.org/10.1145/800250.807503
6. Brock, M., Kristensson, P.O.: Supporting blind navigation using depth sensing and sonification. In: Mattern, F., Santini, S., Canny, J.F., Langheinrich, M., Rekimoto, J. (eds.) Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication, pp. 255–258. ACM, New York (2013). https://doi.org/10.1145/2494091.2494173
8. Guo, A., et al.: VizLens. In: Rekimoto, J., Igarashi, T., Wobbrock, J.O., Avrahami, D. (eds.) Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pp. 651–664. ACM, New York (2016). https://doi.org/10.1145/2984511.2984518
11. Pölzer, S., Miesenberger, K.: A tactile presentation method of mind maps in co-located meetings (2014)
Metadata
Title: Accessible User Interface Concept for Business Meeting Tool Support Including Spatial and Non-verbal Information for Blind and Visually Impaired People
Authors: Reinhard Koutny, Klaus Miesenberger
Copyright year: 2022
DOI: https://doi.org/10.1007/978-3-031-08648-9_37