
2007 | Book

Universal Access in Human-Computer Interaction. Ambient Interaction

4th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2007 Held as Part of HCI International 2007 Beijing, China, July 22-27, 2007 Proceedings, Part II

Edited by: Constantine Stephanidis

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The 12th International Conference on Human-Computer Interaction, HCI International 2007, was held in Beijing, P.R. China, 22-27 July 2007, jointly with the Symposium on Human Interface (Japan) 2007, the 7th International Conference on Engineering Psychology and Cognitive Ergonomics, the 4th International Conference on Universal Access in Human-Computer Interaction, the 2nd International Conference on Virtual Reality, the 2nd International Conference on Usability and Internationalization, the 2nd International Conference on Online Communities and Social Computing, the 3rd International Conference on Augmented Cognition, and the 1st International Conference on Digital Human Modeling.

A total of 3403 individuals from academia, research institutes, industry and governmental agencies from 76 countries submitted contributions, and 1681 papers, judged to be of high scientific quality, were included in the program. These papers address the latest research and development efforts and highlight the human aspects of design and use of computing systems. The papers accepted for presentation thoroughly cover the entire field of Human-Computer Interaction, addressing major advances in knowledge and effective use of computers in a variety of application areas.

This volume, edited by Constantine Stephanidis, contains papers in the thematic area of Universal Access in Human-Computer Interaction, addressing the following major topics:

• Intelligent Ambients
• Access to the Physical Environment, Mobility and Transportation
• Virtual and Augmented Environments
• Interaction Techniques and Devices

Table of contents

Frontmatter

Part I: Intelligent Ambients

Frontmatter
Creating Smart and Accessible Ubiquitous Knowledge Environments

Digital libraries offer substantial volumes of declarative knowledge to the information society. This paper explores the extent to which current and future digital libraries, also known as ubiquitous knowledge environments, can be made sufficiently usable, accessible and smart to support an inclusive information society and the aspiration of universal access. Using a range of converging methods to evaluate a random sample of such digital library websites, it is concluded that, whilst they act as substantial and functional repositories for knowledge, there is potential to improve, particularly in accessibility and smartness. The current methods are validated through the substantial statistical significance levels and by the meaningful patterns found in the resulting data. A new measure of system smartness is introduced and found to provide a useful metric for present purposes, though it is clear that further work will be needed.

Ray Adams, Andrina Granić
Coupling Interaction Resources and Technical Support

Coupling is the action of binding two entities so that they can operate together to provide new functions. In this article, we propose a formal definition for coupling and present a graph theoretic notation so that the side-effects of the creation of a coupling can be analyzed in a formal and systematic way. We then describe I-AM (Interaction Abstract Machine), a middleware that supports the dynamic coupling of interaction resources such as screens, keyboards and mice, to form a unified interactive space. Using our notation, we illustrate how couplings are supported in I-AM.
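To make the notion of coupling more concrete, the following minimal Python sketch represents interaction resources as graph nodes and couplings as edges, so that one side effect of coupling, the formation of a unified interactive space, can be read off as a connected component. This is only an illustration of the general idea; it is not the authors' notation or the I-AM middleware, and all class names and resources are invented.

```python
# Illustrative sketch only: interaction resources as graph nodes, couplings as edges.
# Not the authors' I-AM middleware; names are hypothetical.
class CouplingGraph:
    def __init__(self):
        self.edges = {}          # resource -> set of coupled resources

    def add_resource(self, r):
        self.edges.setdefault(r, set())

    def couple(self, a, b):
        """Bind two resources so that they can operate together."""
        self.add_resource(a)
        self.add_resource(b)
        self.edges[a].add(b)
        self.edges[b].add(a)

    def interactive_space(self, r):
        """Side effect of couplings: the connected component containing r
        forms one unified interactive space."""
        seen, stack = set(), [r]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(self.edges.get(n, ()))
        return seen

g = CouplingGraph()
g.couple("screen-1", "keyboard-A")
g.couple("screen-1", "mouse-A")
print(g.interactive_space("mouse-A"))   # members: screen-1, keyboard-A, mouse-A
```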

Nicolas Barralon, Joëlle Coutaz, Christophe Lachenal
Learning Situation Models for Providing Context-Aware Services

In order to provide information and communication services without disrupting human activity, information services must implicitly conform to the current context of human activity. However, the variability of human environments and of human preferences makes it impossible to preprogram the appropriate behaviors for a context-aware service. One approach to overcoming this obstacle is to have services adapt their behavior to individual preferences through feedback from users. This article describes a method for learning situation models to drive context-aware services. With this approach, an initial simplified situation model is adapted to accommodate user preferences by a supervised learning algorithm using feedback from users. To bootstrap this process, the initial situation model is acquired by applying an automatic segmentation process to sample observations of human activities. This model is subsequently adapted to different operating environments and human preferences through interaction with users, using a supervised learning algorithm.
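As a rough illustration of the kind of feedback-driven adaptation described above, the hypothetical Python sketch below maps observed situations to services and adopts the service a user most often indicates as appropriate. The situation labels, services and update rule are assumptions for illustration only and do not reproduce the paper's segmentation or learning algorithm.

```python
# Hypothetical sketch: adapting a situation -> service mapping from user feedback.
# The paper's segmentation and supervised learning steps are not reproduced here.
from collections import defaultdict, Counter

class SituationModel:
    def __init__(self, initial_mapping):
        # e.g. {"meeting": "mute_phone"} -- invented example situations/services
        self.mapping = dict(initial_mapping)
        self.feedback = defaultdict(Counter)   # situation -> counts of preferred services

    def predict(self, situation):
        return self.mapping.get(situation, "do_nothing")

    def give_feedback(self, situation, preferred_service):
        """Supervised signal: the user indicates which service was appropriate."""
        self.feedback[situation][preferred_service] += 1
        # Adapt: adopt the most frequently preferred service for this situation.
        self.mapping[situation] = self.feedback[situation].most_common(1)[0][0]

m = SituationModel({"meeting": "mute_phone"})
m.give_feedback("meeting", "forward_to_voicemail")
print(m.predict("meeting"))   # forward_to_voicemail
```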

O. Brdiczka, J. L. Crowley, P. Reignier
Ambient Intelligence and Multimodality

Ambient Intelligence (AmI) scenarios place strong emphasis on the fact that interaction takes place through natural interfaces, in such a way that people can perceive the presence of smart objects only when needed. As a possible solution to achieving relaxed and enjoyable interaction with the intelligent environments depicted by AmI, the ambient could be equipped with suitably designed multimodal interfaces bringing up the opportunity to communicate using multiple natural interaction modes. This paper discusses challenges to be faced when trying to design multimodal interfaces that allow for natural interaction with systems, with special attention to speech-based interfaces. It describes an application that was built to serve as a test bed and to conduct evaluation sessions in order to ascertain the impact of multimodal natural interfaces on users and to assess their usability and accessibility.

Laura Burzagli, Pier Luigi Emiliani, Francesco Gabbanini
Is the Intelligent Environment Smart Enough?

Ambient Intelligence (AmI) is considered a likely future embodiment of the information society, according to development scenarios proposed worldwide. With reference to the scenarios developed in Europe by the Information Society Technology Advisory Group (ISTAG), this position paper maintains that, up to now, the attention has been more on the smart objects that are supposed to populate the environment and their interfaces than on the “intelligence” of the entire system. The latter is of paramount importance if users are to be served according to the specifications of the AmI environments. Correspondingly, the current notion of Design for All needs to be revised in order to take into account the additional complexity of the emerging information society. Examples of the main points to be considered are offered in order to elicit discussion.

Laura Burzagli, Pier Luigi Emiliani, Francesco Gabbanini
An Agent-Based Framework for Context-Aware Services

A major challenge of Ambient Intelligence lies in building middleware that can ease service implementation by allowing the application developer to focus only on the service logic. In this paper we describe the architecture of an Ambient Intelligence system established in the scope of the European research project CHIL (Computers in the Human Interaction Loop). CHIL aims at developing and realizing computer services that are delivered to humans in an implicit and unobtrusive way. The framework presented here supports the implementation of human-centric context-aware applications. This includes the presentation of the sensors used in CHIL spaces, the mechanisms employed for controlling sensors and actuating devices, as well as the perceptual components and the middleware approach for combining them in the scope of applications. Special focus lies on the design and implementation of an agent-based framework that supports “pluggable” service logic in the sense that the service developer can concentrate on coding the service logic independently of the underlying middleware. Following the description of the framework, we elaborate on how it has been used to support two prototype context-aware, human-centric and non-obtrusive services.
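The idea of “pluggable” service logic can be pictured as a service interface that the developer implements while a framework handles event delivery. The following Python sketch is a hypothetical illustration of that separation, not the CHIL agent framework itself; the event fields and class names are invented.

```python
# Illustrative sketch of "pluggable" service logic: the developer implements only
# the service interface; a (hypothetical) framework wires in perception events.
from abc import ABC, abstractmethod

class ContextAwareService(ABC):
    @abstractmethod
    def on_context_event(self, event: dict) -> None:
        """Called by the framework whenever perceptual components emit an event."""

class MeetingMemoService(ContextAwareService):
    def on_context_event(self, event):
        if event.get("type") == "speech" and event.get("location") == "meeting-room":
            print(f"Memo: {event['speaker']} said: {event['text']}")

class MiniFramework:
    def __init__(self):
        self.services = []
    def plug(self, service: ContextAwareService):
        self.services.append(service)   # service logic added without touching middleware
    def dispatch(self, event):
        for s in self.services:
            s.on_context_event(event)

fw = MiniFramework()
fw.plug(MeetingMemoService())
fw.dispatch({"type": "speech", "location": "meeting-room",
             "speaker": "Alice", "text": "action item: send agenda"})
```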

Axel Bürkle, Wilmuth Müller, Uwe Pfirrmann, Nikolaos Dimakis, John Soldatos, Lazaros Polymenakos
An MDE-SOA Approach to Support Plastic User Interfaces in Ambient Spaces

User interface (UI) plasticity denotes UI adaptation to the context of use (user, platform, physical and social environment) while preserving usability. Our approach to this problem is to bring together MDE (Model Driven Engineering) and SOA (Service Oriented Approach) within a unified framework that covers both the development stage and the runtime phase of plastic UIs. In particular, an interactive system is modelled as a graph of models that can be dynamically manipulated by, and/or encapsulated as, services.

J. Coutaz, L. Balme, X. Alvaro, G. Calvary, A. Demeure, J. -S. Sottet
Whole-System Programming of Adaptive Ambient Intelligence

Ambient intelligence involves synthesising data from a range of sources in order to exhibit meaningful adaptive behaviour without explicit user direction, driven by inputs from largely independent devices and data sources. This immediately raises questions of how such behaviours are to be specified and programmed, in the face of uncertainty both in the data being sensed and the tasks being supported. We explore the issues that impact the stability and flexibility of systems, and use these issues to derive constraints and targets for the next generation of programming frameworks.

Simon Dobson, Paddy Nixon
Informative Art Display Metaphors

Informative Art display systems have been proposed to provide users with information considered relevant at arbitrary points of work or living engagement, originating from many different – mostly geographically dislocated – sources and presented at the periphery of human (visual) perception. Having the displays operate at the periphery of a user’s attention allows other user tasks to remain primary. Much like the information presented by wall clocks, posters, paintings or windows, peripheral displays move to the center of attention only when appropriate and desirable. Abstract art has been proposed to serve as the visualization paradigm, encoding information into graphical or pictorial artwork by subtly modifying its shape, color and appearance details or its overall impression. This paper approaches a systematic elaboration of visual metaphors able to deliver information via peripheral displays in an aesthetic, artful way. In our approach, the choice of metaphors is driven by the aesthetic appeal of the visual appearance of the display as a whole, out of which certain dynamic emblems or symbols are used to connote information in a visual style. From experiments we find that such metaphors are considered by users as a means of personal emotional expression, and that controllable aesthetic attractiveness turns out to be the dominant factor of display appreciation. The choice of aesthetic themes, as well as the control of emblem and symbol dynamics, are supported and implemented in our peripheral display framework, a general purpose software framework for informative art display systems.

Alois Ferscha
Usable Multi-Display Environments: Concept and Evaluation

The number of conference or meeting rooms with multiple displays available is on the rise. While this increased availability of displays opens up many new opportunities, the management of information across them is not trivial, especially when multiple users with diverging interests have to be considered. This particularly applies for dynamic ensembles of displays. We propose to cast the Display Mapping problem as an optimization task, where we define an explicit criterion for the global quality of a display mapping and then use computer support to calculate the optimum. We argue that in dynamic multi-user, multi-display environments, an automatic – or at least computer supported – document-display assignment improves the user experience in multi-display environments.
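As an illustration of casting display mapping as an optimization task, the toy Python sketch below enumerates document-to-display assignments and keeps the one that maximizes a global quality criterion. The criterion, interest scores and exhaustive search are illustrative assumptions; the paper's actual criterion and solver are not reproduced.

```python
# Minimal sketch of display mapping as optimization: enumerate document -> display
# assignments and keep the one maximizing a global quality criterion.
from itertools import product

documents = ["agenda", "slides", "minutes"]
displays = ["wall-left", "wall-right"]

def quality(assignment, interests):
    """Toy global criterion: sum of aggregated user interest in seeing each
    document on the display it was assigned to (higher is better)."""
    return sum(interests.get((doc, disp), 0) for doc, disp in assignment.items())

# Hypothetical per-user interest scores aggregated into one table.
interests = {("agenda", "wall-left"): 3, ("slides", "wall-right"): 5,
             ("slides", "wall-left"): 2, ("minutes", "wall-right"): 1}

best = max(
    (dict(zip(documents, combo)) for combo in product(displays, repeat=len(documents))),
    key=lambda a: quality(a, interests),
)
print(best)   # agenda on wall-left, slides and minutes on wall-right
```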

Thomas Heider, Thomas Kirste
Ambient Intelligence in Assisted Living: Enable Elderly People to Handle Future Interfaces

Ambient Assisted Living is currently one of the important research and development areas where accessibility, usability and learning play a major role and where future interfaces are an important concern for applied engineering. The general goal of ambient assisted living solutions is to apply ambient intelligence technology to enable people with specific demands, e.g. handicapped or elderly people, to live in their preferred environment longer. Because of the high potential for emergencies, sound emergency assistance is required; assisting elderly people with comprehensive ambient assisted living solutions therefore sets high demands on the overall system quality and consequently on software and system engineering, and user acceptance and support by various user interfaces are an absolute necessity. In this article, we present an Assisted Living Laboratory that is used to train elderly people to handle modern interfaces for Assisted Living and to evaluate the usability and suitability of these interfaces in specific situations, e.g. emergency cases.

Thomas Kleinberger, Martin Becker, Eric Ras, Andreas Holzinger, Paul Müller
Multi-modal Authentication for Ubiquitous Computing Environments

In ubiquitous computing environments, computer technology will recede into the background of our lives in pursuit of its ultimate goal, invisibility. To ensure security and privacy in those environments, both human beings and surrounding devices should be authenticated through the interaction methods that are used for the ubiquitous services. However, the invisibility of devices, the adaptiveness of interactions, and the varying performance of devices make this difficult to achieve. In this paper, we reconsider authentication for ubiquitous computing environments and propose a conceptual framework for resolving these difficulties.

Taekyoung Kwon, Sang-ho Park, Sooyeon Shin
Secure Authentication and Accounting Mechanism on WLAN with Interaction of Mobile Message Service

In a wireless network that uses 802.1X Port Access Control, the wireless station plays the role of the Remote User and the wireless AP plays the role of the Network Access Server (NAS). However, user authentication has become a growing security problem on existing IEEE 802.11 wireless networks: brute force dictionary attacks can be launched against the shared secret in existing IEEE 802.1x protocols and security systems (EAP-MD5, EAP-TLS, EAP-TTLS). Therefore, we review the main problems of the existing EAP-MD5 authentication mechanism on wireless LANs and propose an SMS (Short Message Service) based secure authentication and accounting mechanism that provides security-enhanced wireless network transactions resistant to those attacks.

Hyung-Woo Lee
Dynamic Conflict Detection and Resolution in a Human-Centered Ubiquitous Environment

In this paper, a Conflict Control Manager (CCM) for a ubiquitous services system is presented to prevent mode confusion among humans. The CCM consists of a lock-based conflict detection module and a D-PRI (dynamic priority)-based conflict resolution module. By means of the CCM, mode confusion can be drastically reduced, and, as a result, the CCM can assist in designing and implementing a human-centered ubiquitous environment. Through a case study, it is observed that the CCM can successfully detect and resolve the runtime conflicts caused by multiple devices interconnected in a ubiquitous environment. It can also be used to detect potential conflict risks during the service registration phase, so that computerized devices are deployed in a way that improves human interaction with them.
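A minimal sketch of the general mechanism, assuming a lock per device and a numeric dynamic priority per request, is given below in Python. It is only an illustration of lock-based detection plus priority-based resolution; the device names, services, priorities and behaviour are not taken from the paper.

```python
# Hedged sketch of the CCM idea: a per-device lock detects conflicting service
# requests; a dynamic priority decides which request wins. Names are invented.
class ConflictControlManager:
    def __init__(self):
        self.locks = {}   # device -> (service, priority) currently holding it

    def request(self, device, service, priority):
        holder = self.locks.get(device)
        if holder is None:                     # no conflict: grant the lock
            self.locks[device] = (service, priority)
            return f"{service} granted {device}"
        held_service, held_priority = holder
        if priority > held_priority:           # resolve by dynamic priority
            self.locks[device] = (service, priority)
            return f"conflict: {service} preempts {held_service} on {device}"
        return f"conflict: {service} denied, {held_service} keeps {device}"

ccm = ConflictControlManager()
print(ccm.request("living-room-light", "movie-mode", priority=2))
print(ccm.request("living-room-light", "emergency-alert", priority=9))
```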

Haining Lee, Jaeil Park, Peom Park, Myungchul Jung, Dongmin Shin
From Ambient Devices to Smart Care for Blind People: A Metaphor of Independent Living with Responsive Service Scenarios

We present a metaphor showing that blind people (users) often live in a perplexing contexture – a chain of barriers affecting their ability to live independently. In such a context, current technologies may not be intuitive enough to support users’ tasks in real time in this kind of real-world application. The increasingly specialised devices and rapidly advancing assistive technologies require a composite architecture of scalable non-textual reading services. We illustrate this requirement with three user scenarios at the scale of a device, of object awareness, and of a real-time situated meaningful response.

Ying Liu, Roger Wilson-Hinds
Crisis Rooms Are Ambient Intelligence Digital Territories

The study of Digital Territories provides a way to conceptualize the interactions happening in Pervasive Computing Environments. This paper addresses Crisis Rooms as Digital Territories. Based on the concepts stemming from Digital Territories, we attempt to give a high-level overview of issues that can be applicable in the context of future Crisis Rooms and of the interactions that happen within them.

Irene Mavrommati, Achilles Kameas
Learning Topologies of Situated Public Displays by Observing Implicit User Interactions

In this paper we present a procedure to learn a topological model of Situated Public Displays from data of people traveling between these displays. This model encompasses the distance between different displays in seconds for different ways and/or different travel modes. It also shows how many people travel between displays in each direction. Thus, the model can be used to predict where and when people will appear next after showing up in front of one display. This can be used for example to create continuous ‘shows’ spanning multiple displays while people pass them. To create the model, we use Bluetooth connection data of mobile phones people carry, and employ the EM algorithm to estimate mean travel times for different paths people take.
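A simplified Python sketch of the underlying idea, estimating a mean travel time per display-to-display edge from timestamped Bluetooth sightings, is shown below. The sighting log is invented, and the paper's EM estimation of several travel modes per edge is deliberately not reproduced; a plain per-edge mean stands in for it.

```python
# Simplified sketch: estimate mean travel time between displays from timestamped
# Bluetooth sightings of the same device. The paper's EM estimation of several
# travel modes per edge is not reproduced here.
from collections import defaultdict

# (device, display, timestamp in seconds) -- hypothetical sighting log
sightings = [
    ("phone-1", "entrance", 0), ("phone-1", "cafeteria", 95),
    ("phone-2", "entrance", 10), ("phone-2", "cafeteria", 130),
    ("phone-1", "cafeteria", 400), ("phone-1", "library", 460),
]

by_device = defaultdict(list)
for device, display, t in sorted(sightings, key=lambda s: s[2]):
    by_device[device].append((display, t))

travel_times = defaultdict(list)
for device, track in by_device.items():
    for (a, t0), (b, t1) in zip(track, track[1:]):
        if a != b:
            travel_times[(a, b)].append(t1 - t0)   # one observed trip on edge a -> b

model = {edge: sum(ts) / len(ts) for edge, ts in travel_times.items()}
print(model)   # e.g. {('entrance', 'cafeteria'): 107.5, ('cafeteria', 'library'): 60.0}
```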

Hans Jörg Müller, Antonio Krüger
A Context-Aware Service Platform to Support Continuous Care Networks for Home-Based Assistance

Efficient and effective treatment of chronic disease conditions requires the implementation of emerging continuous care models. These models pose several technology-oriented challenges for home-based continuous care, requiring assistance services based on collaboration among different stakeholders: health operators, patient relatives, as well as social community members. This work describes a context-aware service platform designed for improving patient quality of life by supporting care team activity, intervention and cooperation. Leveraging an ontology-based context management middleware, the proposed architecture exploits information coming from biomedical and environmental sensing devices and from the patient’s social context in order to automate context-aware patient case management, especially for alarm detection and management purposes.

Federica Paganelli, Dino Giuli
Architectural Backpropagation Support for Managing Ambiguous Context in Smart Environments

The evolution to ubiquitous information and communication networks is evident. Technology is emerging that connects everyday objects and embeds intelligence in our environment. In the Internet of Things, smart objects collect context information from various sources to turn a static environment into a smart and proactive one. Managing the ambiguous nature of context information will be crucial to select relevant information for the tasks at hand. In this paper we present a vector space model that uses context quality parameters to manage context ambiguity and to identify irrelevant context providers. We also discuss backpropagation applied in the network architecture to filter unused context information in the network as close to the source as possible. Experiments show that our contribution reduces not only the amount of useless information a smart object deals with, but also the distribution of unused context information throughout the network architecture.
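As a rough illustration of using a vector space of context quality parameters to identify irrelevant providers, the following Python sketch compares each provider's quality vector to a task requirement vector by cosine similarity and flags low-relevance providers for upstream filtering. The parameters, values and threshold are assumptions, not the paper's model.

```python
# Hedged sketch: represent each context provider by a vector of quality
# parameters, compare it to what the task requires, and flag providers whose
# relevance falls below a threshold so they can be filtered near the source.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# (freshness, precision, trust) quality vectors per provider -- invented values
providers = {
    "gps":           (0.9, 0.8, 0.9),
    "old-wifi-scan": (0.1, 0.4, 0.6),
}
task_requirement = (1.0, 0.7, 0.8)     # what the current task needs

RELEVANCE_THRESHOLD = 0.9
for name, q in providers.items():
    rel = cosine(q, task_requirement)
    verdict = "keep" if rel >= RELEVANCE_THRESHOLD else "filter upstream"
    print(f"{name}: relevance={rel:.2f} -> {verdict}")
```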

Davy Preuveneers, Yolande Berbers
Managing Disclosure of Personal Health Information in Smart Home Healthcare

Recent advances in ubiquitous computing have evoked the prospect of real-time monitoring of people’s health in context-aware homes. Home is the most private place for people and health information is of a highly intimate nature. Therefore, users-at-home must have means to benefit from home healthcare and preserve privacy as well. However, most smart home healthcare systems currently lack support for privacy management for home inhabitants. In this paper, we analyze the privacy needs of smart home inhabitants utilizing a healthcare system and present a conceptual framework to manage disclosure of their personal health information. The proposed framework supports sharing the most meaningful detail of personal health information at different time granularities with different recipients in different contexts. To relieve the burden of configuration, default disclosure settings are provided, and to ensure end-users’ control over disclosure, the option to override default settings is included.
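To illustrate the flavour of such disclosure settings, the hypothetical Python sketch below stores per-recipient defaults (detail level and time granularity) and per-context overrides. Recipients, levels and contexts are invented examples, not the framework proposed in the paper.

```python
# Illustrative sketch (not the authors' framework) of disclosure settings that
# release health information at different levels of detail and time granularity
# per recipient, with per-context overrides on top of defaults.
DEFAULTS = {
    # recipient: (detail level, time granularity)
    "physician":    ("full-readings", "hourly"),
    "family":       ("summary",       "daily"),
    "acquaintance": ("none",          None),
}
OVERRIDES = {
    # (recipient, context): setting that replaces the default in that context
    ("family", "emergency"): ("full-readings", "real-time"),
}

def disclosure(recipient, context="normal"):
    return OVERRIDES.get((recipient, context), DEFAULTS.get(recipient, ("none", None)))

print(disclosure("family"))                 # ('summary', 'daily')
print(disclosure("family", "emergency"))    # ('full-readings', 'real-time')
```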

Umar Rashid, Hedda Schmidtke, Woontack Woo
Intelligent Privacy Support for Large Public Displays

This paper presents a novel concept for personalized privacy support on large public displays. In a first step, a formative evaluation was conducted in order to analyze the requirements of potential users regarding the protection of private information on large public displays. The insights gained in this evaluation were used to design a system, which automatically adapts the information visible on public displays according to the current social situation and the individual privacy preferences of the user working on the display. The developed system was evaluated regarding its appropriateness for daily usage and its usefulness to protect privacy.

Carsten Röcker, Steve Hinske, Carsten Magerkurth
Universal Access Issues in an Ambient Intelligence Research Facility

An Ambient Intelligence Research Facility is being set up at ICS-FORTH, with the goal of providing an experimentation platform for Ambient Intelligence (AmI) technologies and for studying their potential impact on users as individuals and as society. This paper discusses the opportunities that such a facility will offer towards the investigation of AmI from a Universal Access perspective, focusing in particular on issues related to Design for All.

Constantine Stephanidis, Margherita Antona, Dimitrios Grammenos
Designing Ubiquitous Shopping Support Systems Based on Human-Centered Approach

We introduce our human-centered approach for developing a ubiquitous computing system aimed at providing better experiences for shoppers at a supermarket. We focus on shopping processes by using ethnographic research techniques, understand the process in detail, and construct the TPM, which classifies a shopper’s behaviors and changes in state of mind into three phases. We also describe our concept design of service types for a prototype system and deal with the allocation and configuration of the service types corresponding to the TPM.

Hiroshi Tamura, Tamami Sugasaka, Satoko Horikawa, Kazuhiro Ueda
CSCL at Home: Affordances and Challenges of Ubiquitous Computing

Starting from an analysis of how ubiquitous computing technologies have afforded the design of novel learning experiences in different domains, we consider how such technologies can support domestic learning, thus conceiving the family as a community of practice. We exemplify such a vision with the Living Cookbook appliance: This relies on the video capture and retrieval of family members’ cooking sessions, so as to enable the creation and sharing of personalized, multimedia cooking instructions. By augmenting the cooking activity with novel social and entertaining aspects, our goal is to motivate cooking and the learning thereof. We report on the implementation and evaluation of the appliance and in conclusion we discuss our results in light of their possible implications for the design of domestic technology.

Lucia Terrenghi, Armin Prosch
Non-homogenous Network, Control Hub and Smart Controller (NCS) Approach to Incremental Smart Homes

The rapid increase in memory and processing power of even simple devices is opening up new opportunities for intelligent devices and environments. However, major barriers and practical limitations exist. Many “smart environments” are currently more complex to either set up or operate than their predecessors. Environments which are simpler to use are often very complex to set up. They also often require wholesale re-engineering of the environment. We propose a model that uses a mixture of non-homogeneous network technologies, a control hub and a smart controller to provide a way for users to gradually transition both themselves and their houses from current technologies to smart technologies and environments.

Gregg Vanderheiden, Gottfried Zimmermann
Accessibility of Internet Portals in Ambient Intelligent Scenarios: Re-thinking Their Design and Implementation

Internet portals are gateways to the World Wide Web, which offer an amalgamation of services, like search engines, online shopping information, email, news, weather reports, stock quotes, community forums, maps, travel information, etc. Furthermore, with the arrival of the Mobile Web, they are also frequently used in Ambient Intelligence scenarios. This paper discusses basic design considerations inspired by fundamental principles of systems theory, where the portal as a whole and its components (known as portlets) are analyzed. This analysis also includes a set of user requirements for people with special needs gathered in previous user studies by the authors.

Evangelos Vlachogiannis, Carlos A. Velasco, Henrike Gappa, Gabriele Nordbrock, Jenny S. Darzentas
Engineering Social Awareness in Work Environments

A growing interest is seen in designing intelligent environments that support personally meaningful, sociable and rich everyday experiences. In this paper we describe an intelligent, large screen display called Panorama that is aimed at supporting and enhancing social awareness within an academic work environment. Panorama is not intended to provide instrumental or other productivity related information. Rather, the goal of Panorama is to enhance social awareness by providing interpersonal and rich information related to co-workers and their everyday interactions in the department. A two-phase assessment showed that Panorama promotes curiosity and interest in exploring different activities in the environment.

Dhaval Vyas, Marek R. van de Watering, Anton Eliëns, Gerrit C. van der Veer
Case Study of Human Computer Interaction Based on RFID and Context-Awareness in Ubiquitous Computing Environments

Context-awareness is becoming the key technology in human computer interaction for ubiquitous computing. The paper discusses the characteristics, significance and function of context, and the properties of human computer interaction in ubiquitous environments where the physical space fuses with the information space. These characteristics bring new requirements, namely mobility, tractability, predictability and personality. To satisfy these demands, we present a method to realize context-awareness and wireless interaction by using pervasive RFID tags to track the context and using Bluetooth as the contact-less communication medium. We also construct a prototype system composed of RFID tags, BTEnableReaders and Bluetooth-enabled mobile terminals. One application scenario is given, and the experimental results show that the performance and robustness of the device are suitable for ubiquitous applications and that the interaction is experienced more positively by users than with the conventional method. The devices we design can also be extended to other application areas such as wearable computing, health care, assistance for disabled people, and road navigation.

Ting Zhang, Yuanxin Ouyang, Yang He, Zhang Xiong, Zhenyong Chen

Part II: Access to the Physical Environment, Mobility and Transportation

Frontmatter
Accessibility and Usability Evaluation of MAIS Designer: A New Design Tool for Mobile Services

This paper reports the results of a study to evaluate the accessibility and usability of services developed with the MAIS Designer, a new design tool that provides services suited to different mobile devices. The discussion is aimed at highlighting the methodology adopted, which is tailored to the characteristics of mobile computing, and the corresponding results obtained.

Laura Burzagli, Marco Billi, Enrico Palchetti, Tiziana Catarci, Giuseppe Santucci, Enrico Bertini
Enhancing the Safety Feeling of Mobility Impaired Travellers Through Infomobility Services

This paper describes the health emergency module (HEM) of ASK-IT, a European project, co-funded by the EC 6th Framework Program, within the e-Inclusion area. It identifies the functionalities and specifications of the HEM, as well as its scenarios of application, its requirements derived from the technical and legal analysis and how it interacts with other ASK-IT modules and the whole platform. Special emphasis is given to the User Interface designed, according to the specific user groups’ functional characteristics.

Maria Fernanda Cabrera-Umpierrez, Juan Luis Villalar, Maria Teresa Arredondo, Eugenio Gaeta, Juan Pablo Lazaro
Handling Uni- and Multimodal Threat Cueing with Simultaneous Radio Calls in a Combat Vehicle Setting

We investigated uni- and multimodal cueing of horizontally distributed threat directions in an experiment requiring each of twelve participants to turn a simulated combat vehicle towards the cued threat as quickly and accurately as possible, while identifying simultaneously presented radio call information. Four display conditions of cued threat directions were investigated: 2D visual, 3D audio, tactile, and combined cueing of 2D visual, 3D audio, and tactile. During the unimodal visual and tactile indications of threat directions, an alerting mono sound was also presented. This alerting sound function was naturally present for the unimodal 3D audio and multimodal conditions, with the 3D audio simultaneously alerting for and cueing direction to the threat. The results show no differences between conditions in identification of radio call information. In contrast, the 3D audio generated greater errors in localization of threat direction compared to both 2D visual and multimodal cueing. Reaction times to threats were also slower with both the 3D audio and 2D visual than with the tactile and the multimodal cueing, respectively. In conclusion, the results might reflect some of the benefits of employing multimodal displays for certain operator environments and tasks.

Otto Carlander, Lars Eriksson, Per-Anders Oskarsson
Necropolis as a Material Remembrance Space

Contemporary town planning and architecture create numerous public, private, production, recreation, and remembrance spaces, in order to comply with the material and spiritual needs of individuals and large communities alike. Remembrance places – necropolises – are important structural elements of cities that strongly affect the human psyche. Modern forms of spatial arrangement of necropolises search for solutions which will not only provide a rational (ergonomic, economic, ecological) material shape of the burial place, but also satisfy man’s mental needs connected with the burial, funeral, veneration of the dead, and visits to the cemetery, irrespective of man’s age and physical fitness level.

Necropolises built over the centuries and still existing today are a material and spiritual cultural heritage left to us by past generations. Mostly built of symbolic stones, ”remembrance stones”, they constitute specific ”libraries” with ”stone books” for the present and future generations.

J. Charytonowicz, T. Lewandowski
Reconsumption and Recycling in the Ergonomic Design of Architecture

One of the characteristics of human activity is the ability to transform the environment and create new structures. Such actions include various forms of building activities. The adjustment of the whole material surroundings to the needs and possibilities of man is dealt with by ergonomics. The practical and specific application of the general principles of ergonomics, on the other hand, is dealt with by architecture, i.e. by architects designing the material framework for human life. The quality of this ”framework” determines the quality of human life. Design, widely understood, increasingly moves away from the creation of a defined, finished work or object towards initiating and sustaining the development process and the different activities connected with the creation of space. This approach is related to sustainable design, which is generally defined as design that meets the needs of the present without compromising the ability of future generations to meet their own needs.

Much waste comprises valuable raw materials for further utilization, and the best way to exploit them is to reuse the waste at the same level as its original usage. Measures to reduce material consumption in the construction industry are to be sought in the implementation of novel renewable materials of natural origin, as well as of non-renewable materials that can nevertheless be regenerated and reused, that is, in reconsumption and recycling applied, among other areas, in the ergonomic design of architecture.

Jerzy Charytonowicz
Listen! There Are Other Road Users Close to You – Improve the Traffic Awareness of Truck Drivers

As the amount of goods transported on roads increases, accidents involving heavy trucks and other road users also increase. Making the truck driver aware of other road users close to the truck is very important for avoiding accidents. The present study tested different auditory icons representing different road users, presented in three dimensions in the truck cockpit, to see whether such a design could improve drivers’ traffic awareness in trucks. A prototype system including four different sound themes was developed to present road users such as pedestrians, cyclists, motorcycles and other vehicles. The setting was tested on subjects and integrated in a truck simulation at Volvo Technology Corporation. An experiment was conducted to test whether these 3D sounds can improve the driver’s traffic situation awareness. The results suggest that natural or realistic sounds (auditory icons) are most suitable for this application due to their intuitiveness, distinguishability and relatively low degree of disturbance.

Fang Chen, Georg Qvint, Johan Jarlengrip
HMI Principles for Lateral Safe Applications

LATERAL SAFE is a subproject of the PREVENT Integrated Project, co-funded by the European Commission under the 6th Framework Programme. LATERAL SAFE introduces a cluster of safety applications for future vehicles, in order to prevent lateral/rear related accidents and assist the driver in adverse or low visibility conditions and blind spot areas. LATERAL SAFE applications include a lateral and rear monitoring system (LRM), a lane change assistant (LCA) and a lateral collision warning (LCW). An effective Human Machine Interface (HMI) is being developed for each application, on the basis of the results that emerged from mock-up tests realised at three sites (one in Greece and two in Sweden), aiming to determine the best HMI solution to be provided in each case. In the current paper, the final HMI principles, adopted and demonstrated for each application, are presented.

Lars Danielsson, Henrik Lind, Evangelos Bekiaris, Maria Gemou, Angelos Amditis, Maurizio Miglietta, Per Stålberg
INSAFES HCI Principles for Integrated ADAS Applications

In order to integrate several time critical warning systems, e.g. Collision Warning and Lane Departure Warning, in the same vehicle, one has to deal with the problem of warning management, so as not to overload the driver in critical situations and to make sure that the driver’s focus is directed to the right place. This paper presents INSAFES integration schemes to ensure these issues, and gives general as well as specific use cases based on warning systems integrated in one of the INSAFES demonstrator vehicles. Requirements on warning management regarding prioritization schemes are then derived from these use cases. The requirements culminate in a proposed extension of the warning management concepts derived in the AIDE project.

Lars Danielsson, Henrik Lind, Stig Jonasson
Sonification System of Maps for Blind

Presentation of graphical information is very important for blind people. This information helps them better understand the surrounding world. The developed system is devoted to the investigation of graphical information by blind users using a digitiser. The SVG language with additional elements is used for describing maps. Non-speech sounds are used to transfer information about colour. An alerting sound signal is issued near the boundary between two regions.

Gintautas Daunys, Vidas Lauruska
An Accesible and Collaborative Tourist Guide Based on Augmented Reality and Mobile Devices

The goal of this project is to provide support for a system of geolocation powered by augmented reality, also offering advanced services such as context-aware mobile applications and natural interaction, related to the concept of ambient intelligence, which favour the creation of intelligent environments whose services dynamically fit the demands of the user, even when these are not made explicit. The design and development of a location system is presented that provides extra services based on the positional information of the different system users. In this way, the user receives specific information about the place where he or she is located. This service is based on the Global Positioning System (GPS). The aim of this platform is to locate, guide and give information to blind people, although it is open to any kind of user. It allows users to see information related to a place, to write comments about it and to leave objects for the rest of the users to read and see. The information is presented as both written and spoken text, and the location of the user is traced at all times thanks to his or her virtual positioning on a map.

Fidel Díez-Díaz, Martín González-Rodríguez, Agueda Vidau
The Use of Kaizen Continuous Improvement Approach for Betterment of Ergonomic Standards of Workstations

The paper describes: elements of a continuous improvement system in an enterprise, teamwork as an approach towards solving problems at workstations (especially problems concerning ergonomic issues), and methods and techniques used in improving ergonomic standards at the stages of problem identification as well as search for and implementation of solutions. Requirements and conditions for efficient implementation are substantiated for elements such as: the training system, the motivational system, the system for submitting and evaluating improvement applications, financial support of the implementation process, and the 5S program as a starting point for ergonomic improvements. Theoretical considerations are illustrated with examples of improvements implemented in Polish enterprises.

Ewa Gorska, Anna Kosieradzka
Development and Application of a Universal, Multimodal Hypovigilance-Management-System

States of hypovigilance cause severe accidents. Technical compensation can be provided by hypovigilance management systems (HVMS). In this paper, existing HVMS are discussed and the need for the development of a novel universal, multimodal HVMS is deduced. The development of such a system is presented and its application is illustrated with two application scenarios.

Lorenz Hagenmeyer, Pernel van den Hurk, Stella Nikolaou, Evangelos Bekiaris
Towards Cultural Adaptability to Broaden Universal Access in Future Interfaces of Driver Information Systems

This paper elucidates and discusses some aspects of cultural adaptability which aid usability and universal access. We describe the concept, influence and use cases of cultural adaptability in driver information and assistance systems, exemplified by a portable navigation system. Thereby, the reasons, advantages and problems of using adaptability with regard to driving safety and driver preferences are addressed. Differences in the amount of information for map display and in interaction behavior, which depend on the cultural background of the users (e.g. attitude, preference, skill etc.), are discussed. We explain how cultural adaptability can improve usability and how it contributes to universal access. Finally, a short outlook into the future of adaptive driver information and assistance systems closes our reflections.

Rüdiger Heimgärtner, Lutz-Wolfgang Tiede, Jürgen Leimbach, Steffen Zehner, Nhu Nguyen-Thien, Helmut Windl
A Multi-modal Architecture for Intelligent Decision Making in Cars

This paper describes a software architecture named “Gatos” engineered for intelligent decision making. The architecture is built on Cougaar, a distributed multi-agent system. Gatos provides a solution for sensor fusion. We propose using multiple sensors to monitor driver status, driving performance, and the driving environment in order to address bad driving behavior. Our approach for a Driving Monitor is based on both monitoring and regulating driver behavior. The system is designed to intervene and to interact with the driver in real time (if possible) to regulate their behavior and promote safe driving. A prototype is implemented using a driving simulator, but infrastructure buildup and new in-vehicle technologies make this a feasible solution for vehicles on the road.

Qamir Hussain, Ing-Marie Jonsson
Usability in Location-Based Services: Context and Mobile Map Navigation

The paper discusses usability and communicative capability of mobile multimodal systems. It reports on the evaluation of one particular interactive multimodal route navigation system and discusses the challenges encountered in this task. The main questions concerned the user’s preference of one input mode over the other (speech vs. tactile/graphics input), usefulness of the system in completing the task (route navigation), and user satisfaction (willingness to use the system in the future). The user’s expectations and real experience of the system were analysed by comparing the users’ assessments before and after the system use. Conclusions concerning system design are drawn and discussed from the perspective of the system’s communicative capability, based on the view of the computer as an interactive agent.

Kristiina Jokinen
Performance Analysis of Acoustic Emotion Recognition for In-Car Conversational Interfaces

The automotive industry is integrating more technologies into the standard new car kit. New cars often provide speech enabled communications such as voice-dial, as well as control over the car cockpit including entertainment systems, climate and satellite navigation. In addition, there is the potential for a richer interaction between driver and car by automatically recognising the emotional state of the driver and responding intelligently and appropriately. Driver emotion and driving performance are often intrinsically linked, and knowledge of the driver's emotion can enable the car to support the driving experience and encourage better driving. Automatically recognising driver emotion is a challenge, and this paper presents a performance analysis of our in-car acoustic emotion recognition system.

Christian Martyn Jones, Ing-Marie Jonsson
In-Vehicle Information System Used in Complex and Low Traffic Situations: Impact on Driving Performance and Attitude

This paper describes a study in which drivers’ responses to an in-vehicle information system were tested in high and low density traffic. There were 17 participants in a study that was run using a driving simulator. Data were gathered to compare how drivers react to an in-vehicle information system in low density traffic, in complex traffic, and without the system. Participants were also asked for their subjective evaluation of trust in the system and of how they perceived it to influence their driving performance. Results show gender differences for both driving performance and attitude.

Ing-Marie Jonsson, Fang Chen
Changing Interfaces Using Natural Arm Posture – A New Interaction Paradigm for Pedestrian Navigation Systems on Mobile Devices

This paper presents a new interaction technique, based on arm posture recognition, for mobile computing devices to switch between different visualization modes seamlessly. We implemented a pedestrian navigation system on a Pocket PC, which is connected to a GPS receiver and an inertial orientation tracker. In the global coordinate system, the user’s position is tracked with GPS data, and in the local coordinate system the user’s arm posture is mapped into two application dependent states with inertial orientation tracker data. Hence, natural interaction and different levels of information are provided by processing orientation tracker data. As unnecessary computation and rendering increase power consumption in small devices, we introduced another state into our system, which saves battery power when the user’s arm posture is idle.
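A minimal sketch of the posture-to-state mapping idea, assuming the inertial tracker reports a pitch angle in degrees, is shown below in Python. The thresholds and state names are illustrative assumptions rather than the system's actual values.

```python
# Hedged sketch: map the orientation tracker's pitch angle to application states,
# e.g. arm raised -> detailed view, arm hanging/idle -> power-saving state.
def visualization_state(pitch_degrees: float) -> str:
    if pitch_degrees > 60:        # device held up in front of the eyes
        return "detail-view"
    if pitch_degrees > 20:        # device held at waist level
        return "overview"
    return "idle-power-save"      # arm hanging down: stop rendering, save battery

for pitch in (75, 35, 5):
    print(pitch, "->", visualization_state(pitch))
```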

Ceren Kayalar, Selim Balcisoy
Ergonomics of Contemporary Urban Necropolises

Contemporary ergonomics can accurately describe the psychophysical capabilities of the human body, thus greatly contributing to the process of improving the quality and parameters of living. Many everyday activities, relating to man’s work, leisure, communication, or social relations, are subject to ergonomic rules and principles, and the same is true of the urban and architectural space of urbanized centres as the material space of such activities. It is here that man actively satisfies his need to engineer his space and the facilities necessary for him as an individual – e.g. dwelling houses – and as a community – e.g. necropolises. Modern forms of spatial arrangement of necropolises search for solutions which will not only provide a rational, ergonomic material shape of the burial place, but also satisfy the mental needs of man connected with the burial, funeral, cult of the dead, and visits to the cemetery, irrespective of man’s age and ability. Among the important problems to be solved, special attention should be paid to the question of accessibility of the cemetery space to the elderly and the disabled. Therefore, all elements constituting the structure of a necropolis must allow for ergonomic design factors.

T. Lewandowski, J. Charytonowicz
The Use of Virtual Reality to Train Older Adults in Processing of Spatial Information

The present study examined the effect of virtual reality (VR) on training older adults in spatial-based performance. Navigating emergency escape routes in a local hospital served as the task domain. 15 older adults and 15 college students participated in an experiment in which VR, VR plus a bird’s-eye-view map, and two-dimensional (2D) map presentations were manipulated as within-subject treatment levels of training media. The results indicated that the older subjects were less advantaged in identifying the correct turns leading to the emergency exits. While the older subjects were also found to have more difficulty in recalling route landmarks, the 2D and VR-plus-map presentations produced significantly stronger spatial memory than the pure VR medium for both age groups. When mental rotation was evaluated, the older subjects were able to achieve comparable performance if emergency routes were trained with the VR and VR-plus-map presentations. Detailed implications are discussed for the design of training media with age considerations.

Dyi-Yih Michael Lin, Po-Yuan Darren Yang
Using Personas and Scenarios as an Interface Design Tool for Advanced Driver Assistance Systems

When looking at the traditional way of conducting human factors research within the active safety area, the focus often tends to be on drivers’ cognitive capacities, such as situation awareness, workload and behavioural adaptation. This research is of course invaluable, but other important issues that tend to be forgotten are: What are the drivers’ needs, and how should an interface be designed to satisfy those needs? This paper describes the process of defining requirements for a dynamic graphical interface for ADAS using a rather new method, Personas, as a starting point in the design process. Based on the Personas, different scenarios and narratives were created and used in a workshop to specify user needs and requirements for the interface design of Advanced Driver Assistance Systems.

Anders Lindgren, Fang Chen, Per Amdahl, Per Chaikiat
Pedestrian Navigation System Implications on Visualization

With the technical advances in mobile computing, electronic maps and guiding systems become widely available for use everywhere. The computing power allows for guides and even decision support systems. But mobile devices are used on the move, and interacting with them therefore becomes a secondary task. To reduce cognitive load and attention demands, visualizations and interaction patterns are needed that are fast and comprehensible. We present a pedestrian navigation system that uses a zoomable interface together with the Halo visualization approach for off-screen locations. A user trial with 24 participants indicates that this approach reduces device interaction immensely, leaving more attention to the primary task.
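As a rough illustration of the Halo technique referred to above, the Python sketch below computes a ring radius for an off-screen target so that an arc of the ring intrudes a fixed number of pixels into the visible area; farther targets get larger, flatter arcs. The screen geometry and intrusion margin are invented values, not those of the described system.

```python
# Simplified sketch of the Halo idea: an off-screen target is indicated by a ring
# centred on the target whose radius is just large enough for an arc to intrude
# into the visible map area; a larger radius (flatter arc) means a farther target.
def clamp(v, lo, hi):
    return max(lo, min(v, hi))

def halo_radius(target, screen, intrusion=30.0):
    """target = (x, y); screen = (xmin, ymin, xmax, ymax) in pixels."""
    xmin, ymin, xmax, ymax = screen
    # closest point of the screen rectangle to the off-screen target
    cx, cy = clamp(target[0], xmin, xmax), clamp(target[1], ymin, ymax)
    dist = ((target[0] - cx) ** 2 + (target[1] - cy) ** 2) ** 0.5
    return dist + intrusion          # arc reaches `intrusion` pixels into the screen

screen = (0, 0, 480, 640)
print(halo_radius((700, 320), screen))   # nearby target -> small, strongly curved arc
print(halo_radius((2000, 320), screen))  # distant target -> large, nearly flat arc
```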

Thorsten Mahler, Markus Reuff, Michael Weber
A DIYD (Do It Yourself Design) e-Commerce System for Vehicle Design Based on Ontologies and 3D Visualization

The state of the art in vehicle configuration is still very much characterized by a face-to-face sales situation. In addition, web browsers are becoming market places, but direct sales over the internet, without contact with a sales person, still constitute a small segment of the market, of only a few percent for European manufacturers. The internet is used more as a medium to gather information. A standardised DIYD vehicle configuration is thus a must for European manufacturers today. This paper presents an intelligent DIY e-commerce system for vehicle design, based on ontologies and 3D visualization, that aims at enabling a suitable representation of products with the most realistic visualisation outcome possible. The platform, designed for the automotive sector, includes all the practicable electronic commerce variants, and its on-line product configuration process is controlled by an ontology that was created using the OWL Web Ontology Language.

L. Makris, N. Karatzoulis, D. Tzovaras
WATCH-OVER HMI for Vulnerable Road Users’ Protection

WATCH-OVER is a European project aiming at the enhancement of road safety and the reduction of traffic accidents involving vulnerable road users (VRUs), such as pedestrians, bicyclists and motorcyclists, in urban and extra-urban areas. The project carries out research and development activities in order to design an integrated cooperative system for accident prevention. In this paper, the concept of the Human Machine Interface of the WATCH-OVER system is discussed and its user-centred approach, based on a user requirement survey, is described. Regarding the HMI, the basic functionalities and elements, as well as the preliminary guidelines that endorse the WATCH-OVER system approach, are presented.

Katrin Meinken, Roberto Montanari, Mark Fowkes, Anny Mousadakou
Improvement Approach of the Automation System in Aviation for Flight Safety

A next generation cockpit concept aiming to reduce the risk of pilot-error-induced accidents was studied. This new cockpit concept, called the Human-Centered Cockpit, incorporates several ideas which aim to improve the pilot’s situation awareness of the terrain and the aircraft situation without increasing the pilot’s cognitive workload. The concept is built on task analysis and accident analysis; through several rounds of airline pilot reviews using partial task simulations of new functions, design issues were identified and the design was refined. A fully functional cockpit simulator was finally developed to evaluate the effectiveness of this cockpit concept in a realistic commercial aircraft operational environment from preflight to spot-in, including ATC. Six pilots participated in the final evaluation, and the results showed that this cockpit concept enhances the pilot’s situation awareness in the actual operational environment and reduces the pilot’s cognitive workload in flight.

Takafumi Nakatani, Kenichiro Honda, Yukihiko Nakata
Addressing Concepts for Mobile Location-Based Information Services

Emerging mobile location-based information services enable people to place digital content into the physical world. Based on three technical components, (1) mobile devices, (2) wireless networking and (3) location-sensing, the implementation of location-based services can be considered state of the art. In contrast, we observe a lack of conceptual work in terms of user interface issues, such as designing indirect (one-to-any) addressing models, handling information overflow and avoiding spam. Every user is able to arbitrarily place information anywhere without structure or restrictions, and is confronted with an information mess in return. The focus of this paper is on a novel addressing concept for mobile location-based information services, which systematically structures both direct and indirect addressing methods and supports users in finding or filtering the information they are interested in.
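To make the contrast between direct and indirect (one-to-any) addressing concrete, the hypothetical Python sketch below stores posts that name either explicit recipients or a predicate over user attributes, and lets a reader retrieve only nearby posts addressed to them. All field names, coordinates and the distance check are illustrative assumptions, not the paper's concept.

```python
# Hedged sketch of direct vs. indirect (one-to-any) addressing for location-based
# messages: a post names either explicit recipients or a predicate over user
# attributes, and a reader only retrieves nearby posts addressed to them.
posts = [
    {"pos": (48.30, 14.29), "text": "Meet at gate 3", "to": {"alice"}},          # direct
    {"pos": (48.30, 14.29), "text": "Stroller-friendly entrance around back",    # indirect
     "to_predicate": lambda user: "parent" in user["tags"]},
]

def visible_posts(user, radius=0.01):
    ux, uy = user["pos"]
    for p in posts:
        px, py = p["pos"]
        if abs(px - ux) > radius or abs(py - uy) > radius:
            continue                                   # too far away: filtered out
        if "to" in p and user["name"] in p["to"]:
            yield p["text"]
        elif "to_predicate" in p and p["to_predicate"](user):
            yield p["text"]

bob = {"name": "bob", "tags": {"parent"}, "pos": (48.301, 14.291)}
print(list(visible_posts(bob)))   # only the indirectly addressed post
```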

Wolfgang Narzt, Gustav Pomberger, Alois Ferscha, Dieter Kolb, Reiner Müller, Horst Hörtner, Ronald Haring
Ergonomic Design of Children’s Play Spaces in the Urban Environment

Any space available to children can be used as a playground. Such places are becoming more and more diminished and isolated from their nearby surroundings. Creating spatial enclaves, apart from undeniable measurable advantages (e.g. safety), causes various negative social and organizational consequences (age discrimination, monotony, uniformization, loosened and deteriorated interpersonal relationships). However, arranged playgrounds may become a means of effective psychophysical and social development and rehabilitation of handicapped children. The paper discusses the following issues: the evolution of the housing needs of children of all ages, with special concern for spatial requirements connected with children’s increased mobility; the role of a dwelling, the importance of a child’s room and of the conditions for acquiring independence and autonomy; and the importance of the play environment in open urban space, the role it plays in family life and in the lives of individual children, and the problems of its evolution under progressing urbanization.

Przemysław Nowakowski, Jerzy Charytonowicz
Towards an Accessible Europe

Mobility is a right that we all have. However, being able to travel by yourself, without another person’s assistance, is not always possible for mobility-impaired (MI) users. The reason for this is the non-accessible environment, which prevents an MI person from moving around, using and changing transportation means, and having access to proper information (on timetables, routes, etc.). Nevertheless, certain accessible points and transportation means are available in most European countries, but the people most in need of them do not have the proper information about them. ASK-IT IP aims to eliminate these barriers by offering information about accessible content (transportation means, points of interest, etc.), following a ‘design for all’ concept and taking advantage of both location-based and infomobility services.

Maria Panou, Evangelos Bekiaris, María García Robledo
Nomad Devices Adaptation for Offering Computer Accessible Infomobility Services

This paper describes the approach for adapting nomad devices to users with disabilities within the ASK-IT European project, funded by the EC 6th Framework Program within the e-Inclusion area. The devices, software and hardware modules involved are described. The User Interface (UI) configuration, defined according to the functional characteristics of specific user groups, is analysed along with the technical specifications of the devices and the provided services. Finally, the current mock-ups of the system for different nomad devices are illustrated.

Laura Pastor, María García Robleda, Luis Reigosa, Maria Fernanda Cabrera-Umpierrez, Alexandros Mourouzis, Brigitte Ringbauer
GOOD ROUTE HMI for Actors Involved in Dangerous Goods Transportation

GOOD ROUTE is a European project developing a cooperative system for routing, monitoring, re-routing, enforcement and driver support of dangerous goods vehicles, based upon dynamic, real-time data, in order to minimise the societal risks related to their movements while still generating the most cost-efficient solution for all actors involved in the logistic chain. In this paper the theoretical background for the Human-Machine Interface of the GOOD ROUTE system is discussed, the different actors are characterised and their user needs are described. Basic functionalities and elements, as well as the preliminary guidelines that underpin the GOOD ROUTE system approach, are presented.

Marco Santi, Katrin Meinken, Harald Widlroither, Evangelos Bekiaris
An Empirical Study of Developing an Adaptive Location-Based Services Interface on Smartphone

The global LBS (Location-Based Service) market is significant and continues to grow rapidly. With the spread of mobile applications, the requirements on the small screen interface (SSI) become even stronger because more functions and content are needed on the devices. This research presents an empirical study of user access to PoI (Point of Interest) information through a map view display (MVD) and a list view display (LVD), meeting users’ needs based on the principles of adaptive and intuitive visualization on smartphones. A prototype of LBS on a smartphone was emulated with a VB.Net program, and its interfaces were evaluated through objective measurement and subjective investigation. Our results show that the cognition of symbols affects operating performance, suggesting that LVD can be used more effectively than MVD in LBS applications. The findings of the study will be helpful for enriching the functionality and customization of LBS appearance on smartphones.

Kuo-Wei Su, Ching-Chang Lee, Li-Kai Chen
Augmented Ambient: An Interactive Mobility Scenario

This paper presents the Augmented Ambient project, which aims to construct a highly interactive mobility scenario based on augmented reality applications running on heterogeneous multimedia devices. Mobility is made available through ambient networks, which are dynamic computer networks. A case study was performed on a virtual museum, where users join a service network that includes art piece visualization, broadcast interviews, chat, and remote live auctions. These services are implemented on Desktop, Pocket PC and Symbian OS platforms. Each platform has its own limitations related to processing power and content exhibition, which are considered during media exchange. The text details the application development process for each supported platform and also presents some libraries built to simplify and speed up development, namely OgreAR, an OGRE port for Pocket PC, and CIDA, in addition to the ambient-network-related software infrastructure.

Veronica Teichrieb, Severino Gomes Neto, Thiago Farias, João Marcelo Teixeira, João Paulo Lima, Gabriel Almeida, Judith Kelner
A New Approach for Pedestrian Navigation for Mobility Impaired Users Based on Multimodal Annotation of Geographical Data

Although much effort is spent on developing navigation systems for pedestrians, many users with special needs are largely excluded due to a lack of appropriate geographical data such as landmarks, waypoints, or obstacles. Such data is necessary for computing suitable routes, which may differ from the shortest or fastest one. In this paper, the concept of multimodal annotation of geographical data for personalized navigation is described. Direct input by the user is combined with data derived from the observation of the user’s LOM modality (Location, Orientation, and Movement) to annotate geographical data. Based on this data and on data derived from other users of the same user group, suitable routes can be calculated even in unknown territory.

Thorsten Völkel, Gerhard Weber
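
To make the routing idea concrete, here is a minimal, purely illustrative sketch (not taken from the paper) of accessibility-aware routing over an annotated pedestrian graph with networkx; the attribute names ("length", "obstacle", "kerb_height") and penalty values are hypothetical stand-ins for the multimodal annotations described above.

```python
# Illustrative only: route computation over an annotated pedestrian graph.
# Attribute names and thresholds are hypothetical.
import networkx as nx

def accessible_weight(u, v, data):
    """Edge cost = length, penalised or blocked by accessibility annotations."""
    cost = data.get("length", 1.0)
    if data.get("obstacle") == "stairs":       # impassable for this user profile
        return None                            # None hides the edge from Dijkstra
    if data.get("kerb_height", 0.0) > 0.03:    # high kerbs incur a detour penalty
        cost *= 5.0
    return cost

G = nx.Graph()
G.add_edge("A", "B", length=50.0)
G.add_edge("B", "C", length=20.0, obstacle="stairs")
G.add_edge("B", "D", length=40.0, kerb_height=0.05)
G.add_edge("D", "C", length=30.0)

print(nx.shortest_path(G, "A", "C", weight=accessible_weight))
# -> ['A', 'B', 'D', 'C']: the stairs edge is avoided despite being shorter.
```
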
A Proposal for Distance Information Displaying Method of a Walking Assistive Device for the Blind

In this paper, we propose a device that indicates the direction of an obstacle encroaching into the path of a visually impaired person who is walking. Our proposed system, which would resemble a pair of eyeglasses, first detects an obstacle and then indicates its direction and distance to the wearer through a puff of air to the forehead. This paper describes the preliminary testing of this method of presenting distance information.

Chikamune Wada, Miki Asonuma
Specification of Information Needs for the Development of a Mobile Communication Platform to Support Mobility of People with Functional Limitations

Opportunities for people with functional limitations are increasing. ICT provides a number of possibilities to receive care, to travel, to work, to educate oneself, to inform oneself and to meet other people. In this paper, the methodology for defining user requirements for supporting people with functional limitations through ICT (the ASK-IT concept) is presented. The methodology covers various domains. A case example illustrates the process: a communication platform to support social relations and communities. The methodology is built upon the definition of user groups and the elaboration and implementation of relevant action and activity theory principles, and is successively developed with the content modelling procedure, in order to provide a formal description of user information needs in a computer-understandable and interoperable format.

Marion Wiethoff, Sascha M. Sommer, Sari Valjakka, Karel Van Isacker, Dionisis Kehagias, Evangelos Bekiaris
Intuitive Map Navigation on Mobile Devices

In this paper, we propose intuitive motion-based interfaces for map navigation on mobile devices with built-in cameras. The interfaces are based on the visual detection of the device’s self-motion. This gives people the experience of navigating maps with a virtual looking glass. We conducted a user study to evaluate the accuracy, sensitivity and responsiveness of our proposed system. Results show that users appreciate our motion-based user interface and find it more intuitive than traditional key-based controls, even though there is a learning curve.

Stefan Winkler, Karthik Rangaswamy, ZhiYing Zhou

Part III: Virtual and Augmented Environments

Frontmatter
An Interactive Entertainment System Usable by Elderly People with Dementia

As the population profile in most parts of the world becomes increasingly weighted towards older people, the incidence of dementia will continue to increase. Dementia is marked by a general cognitive decline, and in particular the impairment of working (short-term) memory. Finding ways to engage people with dementia in stimulating but safe activities which they can do without the help of a carer would be beneficial both to them and to their caregivers. We are developing an interactive entertainment system designed to be used alone by a person with dementia without caregiver assistance. We have piloted a number of interactive virtual environments and activities both with people with dementia and with professionals in the field of dementia care. We report the results of this pilot work and consider the further questions to be addressed in developing an engaging multimedia activity for people with dementia to use independently.

Norman Alm, Arlene Astell, Gary Gowans, Richard Dye, Maggie Ellis, Phillip Vaughan, Alan F. Newell
VRfx – A User Friendly Tool for the Creation of Photorealistic Virtual Environments

By using VR, industrial designs and architectural studies can be evaluated in early stages of development. In order to judge visual appearances and surface materials, high visual quality is crucial. Today’s programmable graphics hardware allows rendering of photorealistic effects in real time. In principle, this functionality can be exploited in VR, but the amount of work for model creation must be orders of magnitude lower than what is acceptable for computer games. Thus, a tool is needed which allows efficient preparation of design models from the digital process chain as high-fidelity VR models and which is easy to use for people who are familiar with modeling or CAD software. In this article, we describe the software tool VRfx, which addresses this task.

Matthias Bues, Günter Wenzel, Manfred Dangelmaier, Roland Blach
Effects of Virtual Reality Display Types on the Brain Computer Interface System

This paper presents a study evaluating the effect of VR display types on Brain-Computer Interface (BCI) performance. In this study, a configurable virtual reality BCI system was used to let users control a virtual environment representing ubiquitous computing home facilities. The study evaluated various VR display types: 2D arrow cue, 3D virtual reality, 3D fully immersive CAVE system, and 3D CAVE cue. The task required users to imagine left or right arm movements to rotate their heading in the virtual environment and to move forward using a direction-locking device. The results show that there was no significant improvement in the BCI classification rate even when the immersion of the VR display was enhanced. Instead, the level of simulator sickness increased. This result indicates that a new, improved display type is needed for a BCI system controlling a ubiquitous computing environment.

Hyun Sang Cho, Kyoung Shin Park, Yongkag Kim, Chang S. Kim, Minsoo Hahn
A First Person Visuo-Haptic Environment

In real life, most of the tasks we perform throughout the day are first person tasks. Shouldn’t these same tasks be realized from a first person point of view in virtual reality? This paper presents a first person Projection-based Visuo-Haptic Environment, and virtual prototyping and data exploration applications taking advantage of the first person visuo-haptic features of this configuration.

Sabine Coquillart
AKROPHOBIA Treatment Using Virtual Environments: Evaluation Using Real-Time Physiology

In the present paper a VR (Virtual Reality) exposure treatment program for acrophobia (fear of heights) is introduced and evaluated against an in vivo exposure with the same success rate. During the VR exposure, psychophysiological parameters (heart rate, respiratory rate) are collected. VR offers a good opportunity to study psychophysiological effects under almost standardized conditions. The findings partly reflect the somatic correlates of an anxiety attack. Besides standardized conditions, other advantages of VR techniques are discussed (cost effectiveness, enhancement of the narration process, higher user acceptance).

Marcel Delahaye, Ralph Mager, Oliver Stefani, Evangelos Bekiaris, Michael Studhalter, Martin Traber, Ulrich Hemmeter, Alexander H. Bullinger
Multimodal Augmented Reality in Medicine

The driving force of our current research is the development of medical training systems using augmented reality techniques. To provide multimodal feedback for the simulation, haptic interfaces are integrated into the framework. In this setting, high accuracy and stability are a prerequisite. Misalignment of overlaid virtual objects would greatly compromise manipulative fidelity and the sense of presence, and thus reduce the overall training effect. Therefore, our work targets the precise integration of haptic devices into the augmented environment and the stabilization of the tracking process. This includes a distributed system structure which is able to handle multiple users in a collaborative augmented world. In this paper we provide an overview of related work in medical augmented reality and give an introduction to our developed system.

Matthias Harders, Gerald Bianchi, Benjamin Knoerlein
New HCI Based on a Collaborative 3D Virtual Desktop for Surgical Planning and Decision Making

Today, the diagnosis of cancer and the choice of therapy imply strongly structured meetings between specialized practitioners. These complex, non-standardized meetings generally take place at a single location and require substantial preparation time. In this context, we assume that efficient collaborative tools could help to reduce decision time and improve the reliability of the chosen treatments. The European project Odysseus investigates how to design a Collaborative Decision Support System (CDSS) for surgical planning. We present here an activity analysis and the first outcomes of a participatory design method involving end users. In particular, a new Graphical User Interface (GUI) concept is proposed, which makes use of Virtual Reality technologies to overcome issues encountered with common collaborative tools.

Pascal Le Mer, Dominique Pavy
Measurement and Prediction of Cybersickness on Older Users Caused by a Virtual Environment

With the rapid development of network technology, more and more VEs can be browsed on the Web, such as video games, digital museums and electronic shops. Older web users can therefore easily immerse themselves in a VE at home and have become the fastest-growing group of internet users. In general, these visitors browse web VEs on TFT-LCD displays. This study found that SSQ scores for cybersickness increased significantly with increasing navigation rotation speed and exposure duration for older participants when a TFT-LCD display was used to present the VE. Therefore, a cybersickness prediction model based on fuzzy sets, including the speed of navigation rotation, the angle of navigation rotation and the exposure duration, was designed to evaluate cybersickness symptoms for older users viewing a VE on a TFT-LCD display.

Cheng-Li Liu, Shiaw-Tsyr Uang
VR, HF and Rule-Based Technologies Applied and Combined for Improving Industrial Safety

Industrial safety can be regarded as a major issue in industrial environments nowadays. This is why industries are currently spending huge amounts of resources to improve safety at all levels by reducing the risk of equipment damage, human injury or even fatalities. This paper describes how Virtual Reality, Human Factors and rule-based technologies are used in the framework of the VIRTHUALIS Integrated Project for industrial training, safety management and accident investigation. The paper focuses mainly on the VR system specification and basic modules, while at the same time it presents the main system modules that make up the tool as a whole.

Konstantinos Loupos, Luca Vezzadini, Wytze Hoekstra, Waleed Salem, Paul Chung, Matthaios Bimpas
Adaptive Virtual Reality Games for Rehabilitation of Motor Disorders

This paper describes the development of a Virtual Reality (VR) based therapeutic training system aimed at encouraging stroke patients with upper limb motor disorders to practise physical exercises. The system contains a series of physically-based VR games. Physically-based simulation provides realistic motion of virtual objects by modelling their behaviour and their responses to external force and torque according to the laws of physics. We present opportunities for applying physics simulation techniques in VR therapy and discuss their potential therapeutic benefits for motor rehabilitation. A framework for physically-based VR rehabilitation systems is described, consisting of functional tasks and game scenarios designed to encourage patients’ physical activity in highly motivating, physics-enriched virtual environments, where factors such as gravity can be scaled to adapt to individual patients’ abilities and in-game performance.

Minhua Ma, Michael McNeill, Darryl Charles, Suzanne McDonough, Jacqui Crosbie, Louise Oliver, Clare McGoldrick
Controlling an Anamorphic Projected Image for Off-Axis Viewing

We modify a projected image so as to compensate for changes in the viewer’s location. We use the concept of a virtual camera in the viewing space to achieve a transformable display with improved visibility. The 3D space and virtual camera are initialized and then the image is translated, rotated, scaled and projected. The user can modify the position and size of the image freely within the allowable projection area. They can also change its orientation as seen from their viewpoint, which can be off the axis of projection.

Jiyoung Park, Myoung-Hee Kim
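
As a rough sketch of the kind of correction described above (not the authors' implementation), the following snippet pre-warps an image with a planar homography so that it appears rectangular from an off-axis viewpoint; the file name and destination corners are placeholders that a real system would derive from the tracked viewer position.

```python
# Illustrative anamorphic pre-distortion with a planar homography.
# Destination corner coordinates are placeholder values.
import cv2
import numpy as np

img = cv2.imread("slide.png")                        # hypothetical input image
h, w = img.shape[:2]

src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])   # original image corners
dst = np.float32([[80, 40], [w - 20, 0], [w, h], [0, h - 60]])  # from viewer model

H = cv2.getPerspectiveTransform(src, dst)            # 3x3 homography
warped = cv2.warpPerspective(img, H, (w, h))         # image sent to the projector
cv2.imwrite("slide_prewarped.png", warped)
```
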
An Anthropomorphic AR-Based Personal Information Manager and Guide

The use of personal electronic equipment has significantly increased during recent years. Augmented Reality (AR) technology enables mobile devices to provide a very rich user experience by combining mobile computing with connectivity and location-awareness. In this paper we discuss the approach and development of an Augmented Reality-based personal assistant, combining the familiar interface of a human person with the functionality of a location-aware digital information system. The paper discusses the main components of the system, including the anthropomorphic user interface as well as the results of an initial prototype evaluation.

Andreas Schmeil, Wolfgang Broll
Merging of Next Generation VR and Ambient Intelligence – From Retrospective to Prospective User Interfaces

In this paper we present current and future approaches to merging intelligent interfaces with immersive Virtual Environments (VEs). The aim of this paper is to substantiate the introductory presentation in the session “Facing Virtual Environments with innovative interaction techniques” at HCI 2007. Although VEs and multimodal interfaces have tried to make Human-Computer Interaction as natural as possible, they have shown serious usability problems. We describe concepts to aid users by supporting their personal cognitive and perceptual capabilities, where the Virtual Environment adapts dynamically and in real time to the user’s physiological constitution, previous behaviour and desires. With our concept, human performance can be significantly enhanced by adapting interfaces and environments to the users’ mental condition and their information management capacity. Health and usability problems caused by stress, workload and fatigue will be avoided. We intend to encourage discussion on this topic among the experts gathered in this session.

Oliver Stefani, Ralph Mager, Evangelos Bekiaris, Maria Gemou, Alex Bullinger
Steady-State VEPs in CAVE for Walking Around the Virtual World

The human brain activity of steady-state visual evoked potentials, induced by a virtual panorama and two objects, was recorded for two subjects in an immersive virtual environment. Linear discriminant analysis on 1.0-second single-trial EEG data yielded an average recognition rate of 74.2% in inferring three gaze directions. The possibility of online interaction with 3D images in the CAVE will be addressed for walking applications or the remote control of a robotic camera.

Hideaki Touyama, Michitaka Hirose
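
The classification step can be pictured with a small, purely illustrative sketch (not the authors' pipeline): linear discriminant analysis over synthetic single-trial SSVEP features for three gaze targets, where the feature layout and injected class structure are invented for the example.

```python
# Illustrative only: three-class LDA on synthetic single-trial SSVEP features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 90, 8                 # e.g. band power at flicker frequencies
X = rng.normal(size=(n_trials, n_features))
y = np.repeat([0, 1, 2], n_trials // 3)      # three gaze directions
X[y == 1, 0] += 1.0                          # inject class-dependent structure
X[y == 2, 1] += 1.0

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"average recognition rate: {scores.mean():.2f}")
```
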

Part IV: Interaction Techniques and Devices

Frontmatter
An Eye-Gaze Input System Using Information on Eye Movement History

We have developed an eye-gaze input system for people with severe physical disabilities such as amyotrophic lateral sclerosis. The system utilizes a personal computer and a home video camera to detect eye gaze under natural light. It also compensates for measurement errors caused by head movements; in other words, it can detect the eye gaze with a high degree of accuracy. We have also developed a new gaze selection method based on the eye movement history of a user. Using this method, users can rapidly input text using eye gazes.

Kiyohiko Abe, Shoichi Ohi, Minoru Ohyama
Handheld Haptic Display with Braille I/O

This paper describes the implementation of a handheld haptic display using verbal and nonverbal communication technologies for visually impaired pedestrians. Wearable and mobile human-computer-interface technologies provide the means to use the display in daily life. Six ring-mounted vibrators for the finger-braille method, one of the commonly used communication methods among deaf-blind people in Japan, and a textual input interface designed on the basis of the braille input method, are adopted as the verbal I/O interface. As the non-verbal I/O interface, a perceptual force attraction method, which can convey “pull” or “push” sensations on handheld devices, is adopted for intuitive way-finding. The handheld haptic display with these technologies integrated has the potential to support wayfinding not only for blind people but also for sighted people.

Tomohiro Amemiya
Nonverbally Smart User Interfaces: Postural and Facial Expression Data in Human Computer Interaction

We suggest that User Interfaces (UIs) can be designed to serve as cognitive tools based on a model of nonverbal human interaction. Smart User Interfaces (SUIs) have the potential to support the human user when and where appropriate and thus indirectly facilitate higher mental processes without the need for end-user programming or external actuation. Moreover, graphic nonverbally sensitive SUIs are expected to be less likely to interfere with ongoing activity and disrupt the user. We present two non-invasive methods to assess postural and facial expression components and propose a contextual analysis to guide SUI actuation and supportive action. The approach is illustrated in a possible redesign of the Microsoft helper agent “Clippit” ®.

G. Susanne Bahr, Carey Balaban, Mariofanna Milanova, Howard Choe
Towards a Physical Based Interaction-Model for Information Visualization

The ongoing collection and storage of knowledge with computer technology leads to highly complex information environments. Efficient access to information, and the structure itself, becomes more and more complicated. The presented work investigates the usefulness of physically based interaction and representation behaviour in immersive environments for information visualization. A framework is presented for the mapping of physical behaviour onto abstract data entities and interactions. This framework is applied to an early prototype for market research.

Roland Blach, Günter Wenzel, Manfred Dangelmaier, Jörg Frohnmayer
A Note on Brain Actuated Spelling with the Berlin Brain-Computer Interface

Brain-Computer Interfaces (BCIs) are systems capable of decoding neural activity in real time, thereby allowing a computer application to be directly controlled by the brain. Since the characteristics of such direct brain-to-computer interaction are limited in several aspects, one major challenge in BCI research is intelligent front-end design. Here we present the mental text entry application ‘Hex-o-Spell’ which incorporates principles of Human-Computer Interaction research into BCI feedback design. The system utilises the high visual display bandwidth to help compensate for the extremely limited control bandwidth which operates with only two mental states, where the timing of the state changes encodes most of the information. The display is visually appealing, and control is robust. The effectiveness and robustness of the interface was demonstrated at the CeBIT 2006 (world’s largest IT fair) where two subjects operated the mental text entry system at a speed of up to 7.6 char/min.

Benjamin Blankertz, Matthias Krauledat, Guido Dornhege, John Williamson, Roderick Murray-Smith, Klaus-Robert Müller
EOG Pattern Recognition Trial for a Human Computer Interface

The setup of a human-computer interaction electrooculography (EOG) measurement trial for developing pattern recognition algorithms is described. With an easy-to-wear EOG measurement device, we realized performance tests with a group of normal individuals as well as with one individual suffering from multiple sclerosis (MS). The individuals had to perform different eye movement patterns to encode information for controlling the environment. Different pattern recognition approaches in the time domain were tried and implemented to perform online performance tests. The aim is to develop an EOG-based communication device based on pattern recognition algorithms for users with limited functionality.

Sara Brunner, Sten Hanke, Siegfried Wassertheuer, Andreas Hochgatterer
Continuous Recognition of Human Facial Expressions Using Active Appearance Model

Recognizing human facial expressions continuously is useful, since it has many potential applications. We have implemented a continuous facial expression recognition system using the Active Appearance Model (AAM). AAM has been widely used in face tracking, face recognition, and object recognition tasks. In this study, we adopt an independent AAM using the Inverse Compositional Image Alignment method. The evaluation of this system was carried out with the standard Cohn-Kanade facial expression database. Results show that it could be useful for many applications.

Kyoung-Sic Cho, Yong-Guk Kim
Robust Extraction of Moving Objects Based on Hue and Hue Gradient

This paper presents a new method for robustly extracting moving objects in an environment with varying illumination. The proposed method applies a background subtraction scheme based on hue and hue gradient to minimize the effect of illumination changes. First, we train on background images in the HSI color space and build a Gaussian background model with respect to hue and hue gradient. Next, image subtraction is performed between the trained background image and the current input image based on the Gaussian background model. Finally, morphological operations are applied to remove background noise. In this paper, we compare previous subtraction schemes with our method, applied to both hand and body tracking, in order to demonstrate the robustness of the proposed method under sudden illumination changes.

Yoo-Joo Choi, Je-Sung Lee, We-Duke Cho
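
The pipeline in the abstract above can be sketched roughly as follows; this is a simplified illustration (hue only, without the hue-gradient channel or the tracking stage), and the threshold factor is an assumed value.

```python
# Simplified illustration of hue-based Gaussian background subtraction.
import cv2
import numpy as np

def train_background(frames):
    """Per-pixel mean/std of hue over a stack of background frames."""
    hues = np.stack([cv2.cvtColor(f, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.float32)
                     for f in frames])
    return hues.mean(axis=0), hues.std(axis=0) + 1e-3

def foreground_mask(frame, mean_h, std_h, k=2.5):
    hue = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.float32)
    d = np.abs(hue - mean_h)
    d = np.minimum(d, 180.0 - d)                 # hue is circular (0..179 in OpenCV)
    mask = (d > k * std_h).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
```
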
An Adaptive Vision System Toward Implicit Human Computer Interaction

In implicit human-computer interaction, computers are required to understand users’ actions and intentions so as to provide proactive services. Visual processing has to detect and understand human actions and then transform them into implicit input. In this paper an adaptive vision system is presented to solve visual processing tasks in a dynamic meeting context. Visual modules and dynamic context analysis tasks are organized in a bidirectional scheme. First, human subjects are detected and tracked to generate global features. Second, the current meeting scenario is inferred based on these global features, and in some specific scenarios face- and hand-blob-level visual processing tasks are carried out to extract visual information for the analysis of individual and interactive events, which can further be adopted as implicit input to the computer system. Experiments in our smart meeting room demonstrate the effectiveness of the proposed framework.

Peng Dai, Linmi Tao, Xiang Zhang, Ligeng Dong, Guangyou Xu
Detailed Monitoring of User’s Gaze and Interaction to Improve Future E-Learning

In this paper, we investigate how to use future interaction technologies to enhance learning technologies. We examine in detail how tracking the mouse pointer and observing the user’s gaze can help to monitor the use of web applications and in particular E-learning applications. To improve learning and teaching, it is of interest to understand in what order and to what extent users read texts, how much time they spend on individual parts of the teaching materials, and where they get stuck. Based on a standard web browser as an application platform, extended with a gaze tracking facility, we conducted studies to explore the feasibility of this novel approach. The concept includes an extension of current technologies to allow JavaScript code running in the browser to access the current gaze position of the user. Our work shows how pieces of web technology and eye gaze tracking can be put together to create a new platform for E-learning that provides additional benefits for learners and teachers.

Heiko Drewes, Richard Atterer, Albrecht Schmidt
Facial Expression Recognition Based on Color Lines Model and Region Based Processing

Facial expression involves various movements. We represent facial expressions as simple regions on the facial area and measure the recognition rate. There are two steps to obtaining the expression regions. The first is to extract the facial area from the input image with the color lines model; the second is to capture the expression regions on the extracted facial area with the active-contour-without-edges method as region-based processing. We have tested this representation method on facial expressions from the open facial expression database JAFFE (Japanese Female Facial Expressions). With this method, we can obtain a facial expression region without any manual work. In the future, we will compensate for ambiguities among expressions with a person-specific method and extend the approach to image sequences.

GeonAe Eom, Hyun-Seung Yang
A Real-Time Gesture Tracking and Recognition System Based on Particle Filtering and Ada-Boosting Techniques

A real-time gesture tracking and recognition system based on particle filtering and Ada-Boosting techniques is presented in this paper. The particle filter, a flexible simulation-based method suitable for non-linear tracking problems, is adopted to achieve robust hand tracking. In order to avoid the influence of other exposed skin parts of the body and skin-colored objects in the background, our system uses motion information as a feature of the hand in addition to skin color information. Compared with conventional particle filters, our method leads to more efficient sampling and requires fewer particles, which lowers the computational cost and saves time for the subsequent gesture recognition. The gesture recognition uses features derived from the wavelet transform and employs an Ada-Boost algorithm, which converges quickly during training and therefore makes it easy to incorporate new information and expand the gesture archive. The experimental results reveal that our system is fast, accurate, and robust in hand tracking. Moreover, it performs well in gesture recognition under complicated environments.

Chin-Shyurng Fahn, Chih-Wei Huang, Hung-Kuang Chen
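
For readers unfamiliar with the technique, a generic single step of particle-filter tracking can be sketched as follows; this is not the authors' code, the random-walk motion model and resampling threshold are simplifying assumptions, and the skin-colour/motion likelihood is left as a caller-supplied function.

```python
# Generic particle-filter tracking step for a 2D hand-position hypothesis.
import numpy as np

def particle_filter_step(particles, weights, likelihood, motion_std=5.0):
    """particles: (N, 2) positions; likelihood: maps (N, 2) -> (N,) observation scores."""
    n = len(particles)
    # 1. Predict with a random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # 2. Update weights with the observation likelihood (e.g. skin colour + motion cues).
    weights = weights * likelihood(particles)
    weights /= weights.sum() + 1e-12
    # 3. Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    estimate = np.average(particles, axis=0, weights=weights)  # tracked hand position
    return particles, weights, estimate
```
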
Enhancing Human-Computer Interaction with Embodied Conversational Agents

We survey recent research in which the impact of an embodied conversational agent on human-computer interaction has been assessed through a human evaluation. In some cases, the evaluation involved comparing different versions of the agent against itself in the context of a full interactive system; in others, it measured the effect on user perception of spoken output of specific aspects of the embodied agent’s behaviour. In almost all of the studies, an embodied agent that displays appropriate non-verbal behaviour was found to enhance the interaction.

Mary Ellen Foster
Comparison Between Event Related Potentials Obtained by Syllable Recall Tasks and by Associative Recall Tasks

The final goal of this research is to establish useful verbal communication systems between computers and persons, or between handicapped and non-handicapped persons. As a step toward this goal, we investigate Event-Related Potentials (ERPs) derived from Electroencephalograms (EEGs). By observing ERPs, we estimate recalled words, phrases, or sentences that may contain homonyms or related words. In particular, we pay attention to the difference between ERPs caused by recalling a single syllable on its own and ERPs caused by recalling a syllable together with a word containing that syllable. From our observation of this difference, we believe it may be useful to discuss the possibility of estimating a recalled word by combining the ERPs caused by its syllables.

Mariko F. Funada, Miki Shibukawa, Tadashi Funada, Satoki P. Ninomija, Yoshihide Igarashi
Gaze as a Supplementary Modality for Interacting with Ambient Intelligence Environments

We present our current research on the implementation of gaze as an efficient and usable pointing modality supplementary to speech, for interacting with augmented objects in our daily environment or large displays, especially immersive virtual reality environments, such as reality centres and caves. We are also addressing issues relating to the use of gaze as the main interaction input modality. We have designed and developed two operational user interfaces: one for providing motor-disabled users with easy gaze-based access to map applications and graphical software; the other for iteratively testing and improving the usability of gaze-contingent displays.

Daniel Gepner, Jérôme Simonin, Noëlle Carbonell
Integrating Multimodal Cues Using Grammar Based Models

Multimodal systems must process several input streams efficiently and represent the input in a way that allows the establishment of connections between modalities. This paper describes a multimodal system that uses Combinatory Categorial Grammars to parse several input streams and translate them into logical formulas. These logical formulas are expressed in Hybrid Logic, which is very suitable for multimodal integration because it can represent temporal relationships between modes in an abstract way. This level of abstraction makes it possible to define rules for multimodal processing in a straightforward way.

Manuel Giuliani, Alois Knoll
New Type of Auditory Progress Bar: Exploration, Design and Evaluation

In this paper, we focus on a method to explore a different type of auditory progress bar by analyzing the characteristics of the visual progress bar and the contexts of auditory applications. A bearing scenario in the forward/reverse modes of a digital compass is selected to implement the auditory progress bar. The auditory cues play an interactive role in the bearing task, in that they are altered according to the user’s operating behavior. Composed of sound signals and silent pauses, the auditory cues are generated from a formula based on the warm/cold metaphor. A method incorporating foreground/background sounds is also designed to provide different ranges of progress information and progress updates expressed through auditory cues. In this report, four versions of auditory cues are presented as the solution to the interactive auditory progress bar and a pilot evaluation study is reported.

Shuo Hsiu Hsu, Cécile Le Prado, Stéphane Natkin, Claude Liard
Factors Influencing the Usability of Icons in the LCD Touch Screens

The purpose of this study was to investigate factors influencing the usability of icons on LCD touch screens. Subjects filled in a questionnaire and rated questions on 7-point Likert scales. Twenty evaluation items were collected from relevant interface design guidelines. A total of 30 subjects participated in the investigation: 10 users with no experience, 10 click-experience users (PDA users), and 10 touch-experience users (LCD touch screen users). As the main statistical test, a principal component analysis (PCA) was performed with SPSS/PC. The results of the principal component analysis showed that the usability of touch icons was affected by seven factors: touch field, semantic quality, dynamics, hit quality, tactility, color quality and shape quality. Among these, touch field was the most important. Finally, the results of correlation analyses indicated that experience was related to importance ratings for usability. In particular, subjects showed a significant difference for the size element (p<0.05). Further, users with pen-click experience, such as PDA users, still showed better performance on the touch screen even when using smaller icons.

Hsinfu Huang, Wang-Chin Tsai, Hsin-His Lai
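
The factor extraction can be illustrated with a small sketch analogous to running PCA in SPSS; the ratings below are synthetic, and a real analysis would typically also apply a rotation such as varimax.

```python
# Illustrative PCA over synthetic 7-point Likert ratings (30 subjects x 20 items).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
ratings = rng.integers(1, 8, size=(30, 20)).astype(float)   # 1..7 Likert scores

X = StandardScaler().fit_transform(ratings)   # standardise items before PCA
pca = PCA(n_components=7)                     # seven usability factors, as in the study
scores = pca.fit_transform(X)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
print("item loadings on factor 1:", np.round(pca.components_[0], 2))
```
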
Natural Demonstration of Manipulation Skills for Multimodal Interactive Robots

This paper presents a novel approach to the natural demonstration of manipulation skills for multimodal interactive robots. The main focus is on the natural demonstration of manipulation skills, especially grasping skills. In order to teach grasping skills to a multimodal interactive robot, a human instructor makes use of natural spoken language and of grasping actions demonstrated to the robot. The proposed approach emphasizes four different aspects of learning by demonstration. First, the dialog system for processing natural speech is considered. Second, an object detection and classification scheme for the robot is shown. Third, the correspondence problem is addressed by an algorithm for visual tracking of the demonstrator’s hands in real time and the transformation of the tracking results into an approach trajectory for a robotic arm. The fourth aspect addresses the fine-tuning of the robot’s hand configuration for each grasp; it introduces a criterion to evaluate a grasp for stability and possible reuse of the grasped object. The approach produces stable grasps and is applied and evaluated on a multimodal service robot.

Markus Hüser, Tim Baier-Löwenstein, Marina Svagusa, Jianwei Zhang
Smart SoftPhone Device for the Network Quality Parameters Discovery and Measurement

Due to the shared nature of current network structures, guaranteeing the end-to-end quality of service (QoS) of Internet applications is sometimes difficult, and there is a demand for smart devices with multi-modal functionality for ubiquitous network and computing environments. In this paper, we design a smart SoftPhone device for guaranteeing QoS, which can discover and measure various network parameters during a real-time phone-call service over an IP network. The smart SoftPhone for discovering and measuring QoS factors in real time consists of four main blocks that control and measure various parameters independently, based on the UDP/SIP/RTP protocols, during the end-to-end voice service. We also provide critical message report procedures and management schemes to guarantee QoS using the smart SoftPhone device. To report quality parameters optimally while establishing VoIP call sessions, we design management module blocks for call sessions and for quality reporting. For the performance evaluation of the smart SoftPhone with respect to quality factors, we examine the proposed technique on a real-time phone-call service over a heterogeneous network. The experimental results confirm that the developed smart SoftPhone is very useful for quality measurement in quality-guaranteed real-time VoIP services, and it could also be applied as a packet-compensation device to improve speech quality.

Jinsul Kim, Minsoo Hahn, Hyun-Woo Lee
BloNo: A New Mobile Text-Entry Interface for the Visually Impaired

We present a new mobile text-entry method that relies on alphabet navigation and dismisses memorizing, offering visually impaired individuals an easy writing mechanism. Current mobile text-entry interfaces are not suitable for blind users, and special braille devices are too heavy, large and cumbersome to be used in a mobile context. With the enormous growth of mobile communications and applications, it is urgent to offer visually impaired individuals the ability to operate this kind of device. Evaluation studies were carried out and validated the navigation method as a new mobile text-entry interface for the target population.

Paulo Lagoá, Pedro Santana, Tiago Guerreiro, Daniel Gonçalves, Joaquim Jorge
Low-Cost Portable Text Recognition and Speech Synthesis with Generic Laptop Computer, Digital Camera and Software

Blind persons and people with reduced eyesight could benefit from a portable system that can interpret textual information in the surrounding environment and speak it directly to the user. The need for such a system was surveyed with a questionnaire, and a prototype system was built using generic, inexpensive, readily available components. The system architecture is component-based, so that every module can be replaced with another generic module. Even though the system partly misrecognizes text in varied environments, the evaluation of the system with five actual users suggested that it can provide genuine additional value in coping with everyday issues outdoors.

Lauri Lahti, Jaakko Kurhila
Human Interface for the Robot Control in Networked and Multi-sensored Environment

In this paper, we propose a human-robot interface for networked and multi-sensored environments. The human-robot interface is an essential part of an intelligent robotic system: through it, human beings can interact with the robot. In a multi-sensored environment in particular, the human-robot interface can be developed with remarkably extended functionality. Generally, a handheld device such as a PDA is suitable as a human-robot interface because of its mobility and networking capability. In this paper, we select a PDA as the human-robot interface device. In the implemented framework, the robot user can monitor what happens in the multi-sensored environment and control the mobile robot easily and intuitively.

Hyun-Gu Lee, Yong-Guk Kim, Ho-Dong Lee, Joo-Hyung Kim, Gwi-Tae Park
Gesture-Based Interactions on Multiple Large Displays with a Tabletop Interface

We like large displays. We also like to work with multiple displays to carry out several tasks in parallel, and it is not unusual to have multiple large displays in our offices. As a result, many widgets appear across multiple large displays, and we would like to select and manipulate them in more convenient and faster ways. Because the widgets are physically spread across multiple large displays, it is not easy for users to reach them, so new interaction techniques must be provided [1]. New interaction techniques for accessing distant widgets on multiple large displays using a tabletop interface called ‘u-Table’ [2] are proposed in this paper. Hand gestures are mainly used on tabletop interfaces because of their intuitive, non-invasive and easy operation. We incorporate the advantages of existing approaches, such as the intuitiveness of tabletop interfaces and the speed and simultaneity of interaction techniques such as Drag-and-pick [10] and the Vacuum [11]. The proposed interaction techniques include fetching, sending, and manipulating distant widgets on multiple large displays. We expect our techniques can be applied to various interfaces using hand gestures and heterogeneous displays.

Jangho Lee, Jun Lee, HyungSeok Kim, Jee-In Kim
3D Model Based Face Recognition by Face Representation Using PVM and Pose Approximation

Since a generative 3D face model consists of a large number of vertices and polygons, a 3D model based face recognition system is generally expensive in computation time. In this paper, we present a novel 3D face representation method to reduce the number of vertices and optimize the computation time, and we generate a 3D Korean face model based on this representation method. A pose approximation method is also described for obtaining the initial fitting parameters. Finally, we evaluate the performance of the proposed method on face databases collected using a stereo-camera-based 3D face capturing device and a web camera.

Yang-Bok Lee, Taehwa Hong, Hyeon-Joon Moon, Yong-Guk Kim
The Use of Interactive Visual Metaphors to Enhance Group Discussions Using Mobile Devices

In this paper, we consider the problems of group discussions and collaborative decision-making where one or more of the participants are using restrictive interfaces such as mobile phones or PDAs. We suggest possible solutions to some of these problems and present MAVis (Mobile Argumentation Visualizer), a web-based interface built upon a balance-beam visual metaphor. We report on user experiences of interacting with the visual metaphor, and on the challenges of transferring this to a multi-user environment supporting mobile devices.

John McGinn, Rich Picking, Liz Picking, Vic Grout
An Accessible and Usable Soft Keyboard

AUK is a 3x3 multi-tier onscreen keyboard. It supports various entry modes, including 1- to 10-key and joystick modes, allowing text entry with a remarkable range of devices. This paper presents the menu structure of AUK, the alternative entry modes, and several layouts for novice, moderate and expert users. The potential of AUK as a text entry solution for both disabled and able-bodied users is discussed. Overall, the work presented here is considered a contribution to Universal Access and towards ambient text entry.

Alexandros Mourouzis, Evangelos Boutsakis, Stavroula Ntoa, Margherita Antona, Constantine Stephanidis
Ambient Documents: Intelligent Prediction for Ubiquitous Content Access

Ubiquitous service delivery expects that content will be available where, when and how the user needs it. Consumers are becoming ever more demanding, and the consumers of ubiquitous services are no different in this regard. Their expectations escalate in terms of relevance, ease of access, recency, accuracy and latency of content supply. In addition, they expect content to be supplied proactively in anticipation of their needs, together with delivery when they require it. This presupposes that content can be delivered relative to both the consumer’s location and their technological context. Within this paper we explore how traditional document access can be transformed and introduce Ambient Documents, a new metaphor for document content access.

Gregory M. P. O’Hare, Michael J. O’Grady, Conor Muldoon, Caroline A. Byrne
Combining Pointing Gestures with Video Avatars for Remote Collaboration

We present a simple and intuitive method of user interaction, based on pointing gestures, which can be used with video avatars in a remote collaboration. By connecting the head and fingertip of a user in 3D space we can identify the direction in which they are pointing. Stereo infrared cameras in front of the user, together with an overhead camera, are used to find the user’s head and fingertip in a CAVE™-like system. The position of the head is taken to be the top of the user’s silhouette, while the location of the user’s fingertip is found directly in 3D space by searching the images from the stereo cameras, in real time, for a match with its location in the overhead camera image. The user can interact with the first object which collides with the pointing ray. In an experimental result, the interaction is shown together with the video avatar visible to a remote collaborator.

Seon-Min Rhee, Myoung-Hee Kim
Integrating Language, Vision and Action for Human Robot Dialog Systems

Developing a robot system that can interact directly with a human instructor in a natural way requires not only highly-skilled sensorimotor coordination and action planning on the part of the robot, but also the ability to understand and communicate with a human being in many modalities. A typical application of such a system is interactive assembly for construction tasks. A human communicator sharing a common view of the work area with the robot system instructs the latter by speaking to it in the same way that he would communicate with a human partner.

Markus Rickert, Mary Ellen Foster, Manuel Giuliani, Tomas By, Giorgio Panin, Alois Knoll
A New Gaze-Based Interface for Environmental Control

This paper describes a new control system interface which utilises the user’s eye gaze to enable severely disabled individuals to control electronic devices easily. The system is based upon a novel human-computer interface, which facilitates simple control of electronic devices by predicting and responding to the user’s possible intentions, based intuitively upon their point of gaze. The interface responds by automatically pre-selecting and offering only those controls appropriate to the specific device that the user looks at, in a simple and accessible manner. It therefore gives the user a conscious choice of the appropriate range of control actions required, which may be executed by simple means and without the need to navigate manually through potentially complex control menus. Two systems, using a head-mounted and a remote eye tracker respectively, are introduced, compared and evaluated in this paper.

Fangmin Shi, Alastair Gale, Kevin Purdy
Geometry Issues of a Gaze Tracking System

One of the most confusing aspects encountered when entering the field of gaze tracking is the variety, in terms of hardware, of available systems addressing the same problem, i.e. determining the subject’s gaze. Calibration permits adjusting trackers based on different hardware and image features to the subject. The negative aspect of calibration is that it allows the system to work properly but at the expense of a lack of control over the intrinsic behavior of the tracker. The objective of this work is to overcome this obstacle and explore the elements of a tracker more deeply from a purely geometrical point of view. Alternative models based on image features are evaluated. As a result of this study, a model based on minimal calibration, using one camera and multiple light sources, has been constructed with an acceptable level of accuracy.

Arantxa Villanueva, Juan J. Cerrolaza, Rafael Cabeza
Adaptive Context Aware Attentive Interaction in Large Tiled Display

We propose a conceptual model for context-based attentive interaction. Our focus is to improve the interaction between the user and applications on a large tiled display by introducing user context and user attention. Interaction in our proposed system adapts the user’s visual attention region to the user’s changing context, based on head movement, to achieve immersive interaction with the tiled display. Our experiment uses computer vision to track the user’s presence and renders the most attentive regions of the tiled display in high resolution. The user is able to see other regions in higher resolution according to head movement. At the same time, user attention is captured and modeled to learn the attentive regions to be displayed for other users. This paper shows experimental results on the effectiveness of perceptual interaction in a large tiled display environment.

Chee-Onn Wong, Dongwuk Kyoung, Keechul Jung
Improvements of Chord Input Devices for Mobile Computer Users

This study uses a tablet computer as an example mobile product and compares the keyboard and mouse with an input device consisting of a new form of touch pen combined with chord input. The goal is to find the best combination of input devices, minimize harm caused by the input device, and provide a reference for further input device designs. The new chord keyboard and touch pen conform to the needs of a new mobile product and emerged as the best combination in this experiment, which can be considered as a reference for future product designs.

Fong-Gong Wu, Chun-Yu Chen, Chien-Hsu Chen
A Study of Control Performance in Low Frequency Motion Workstation

Many studies have found that the performance of using non-keyboard input devices (NKID) is affected by the motion environment, but few have considered the interaction between motion direction and approach angle when manipulating NKID. In this study, an experiment was conducted to investigate the effect of different approach angles (0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°, 360°) and motion directions (roll and pitch) on the performance (movement time and error rate) of using a trackball. The results showed that the main effect of approach angle on movement time was significant, whereas there was neither a significant interaction nor a significant main effect of motion direction. The effects of approach angle and motion direction on the error rate were not significant. Some suggestions for control console and interface design are proposed based on the results of the experiment.

Yi-Jan Yau, Chin-Jung Chao, Sheue-Ling Hwang, Jhih-Tsong Lin
An Ambient Display for the Elderly

The demand for systems to assist in the care of the elderly is continually increasing. We propose an ambient display that allows casual and implicit interaction with an elderly user. The display system recognizes the user and measures the distance between the user and the display using information from an RFID reader and an ultrasonic sensor. It uses this information to adjust the level of detail of the displayed information. If the user is far from the display, a black-and-white image is displayed that does not attract attention. But when the user’s approach is recognized, the display provides three-dimensional spatial navigation through the image space. When the user is very close to the display, they can interact directly using the touch screen. In the event of an emergency, LEDs attached around the display call the user’s attention by flashing the light.

Yeo-Jin Yoon, Han-Sol Ryu, Ji-Man Lee, Soo-Jun Park, Seong-Joon Yoo, Soo-Mi Choi
Personal Companion: Personalized User Interface for U-Service Discovery, Selection and Interaction

In this paper, we propose a mobile user interface named personal companion, which enables the selection of and interaction with u-services based on the user’s context. The personal companion selects u-services from a list of discovered services, supports camera-based selection with embedded markers and personalizes the UI of the selected service in a ubiquitous computing environment. In order to verify its usefulness, we implemented the personal companion on PDA and UMPC platforms and deployed it in a smart home testbed for selecting and interacting with u-services. The proposed personal companion is expected to play a vital role in ubiquitous computing environments by bridging users and u-services.

Hyoseok Yoon, Hyejin Kim, Woontack Woo
Backmatter
Metadata
Title
Universal Access in Human-Computer Interaction. Ambient Interaction
Edited by
Constantine Stephanidis
Copyright Year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-73281-5
Print ISBN
978-3-540-73280-8
DOI
https://doi.org/10.1007/978-3-540-73281-5