
2015 | Book

Human-Computer Interaction: Interaction Technologies

17th International Conference, HCI International 2015, Los Angeles, CA, USA, August 2-7, 2015, Proceedings, Part II


About this Book

The 3-volume set LNCS 9169, 9170, 9171 constitutes the refereed proceedings of the 17th International Conference on Human-Computer Interaction, HCII 2015, held in Los Angeles, CA, USA, in August 2015. The total of 1462 papers and 246 posters presented at the HCII 2015 conferences was carefully reviewed and selected from 4843 submissions. These papers address the latest research and development efforts and highlight the human aspects of design and use of computing systems. The papers in LNCS 9170 are organized in topical sections on gesture and eye-gaze based interaction; touch-based and haptic interaction; natural user interfaces; adaptive and personalized interfaces; distributed, migratory and multi-screen user interfaces; games and gamification; HCI in smart and intelligent environments.

Table of Contents

Frontmatter

Gesture and Eye-gaze Based Interaction

Frontmatter
Using Gesture-Based Interfaces to Control Robots

This paper analyzes human-robot interaction (HRI) to evaluate whether a gesture-based interface is viable for robot control. An experiment was conducted with 19 volunteers. Using a body tracking device, they performed gestural commands to remotely control a mobile robot and complete a path marked on the floor. After the experiment, volunteers answered a questionnaire assessing aspects such as the system’s responsiveness, its precision, and triggers of possible physical and psychological discomfort.

The results partially validated the research aim: this control method was determined to be viable, but only for short-term operations, pointing to the need for a control scheme less prone to causing user fatigue during long-term use. The system was designed not only for the analysis of HRI factors, but also for applications in remote operation contexts, such as industrial maintenance and the exploration of inhospitable environments.

Gabriel M. Bandeira, Michaela Carmo, Bianca Ximenes, Judith Kelner
Improvement of Accuracy in Remote Gaze Detection for User Wearing Eyeglasses Using Relative Position Between Centers of Pupil and Corneal Sphere

A general problem of pupil-corneal reflection-based gaze detection systems is that the frames and lenses of eyeglasses produce reflection images of the light sources in the camera image when a user wears eyeglasses. These glass reflections tend to be misdetected as the pupil and corneal reflections. In the present paper, we propose a novel geometrical method, based on the optical structure of the eyeball, to detect the true pair of pupil and corneal reflections. The experimental results show that the proposed method improved the precision of gaze detection when subjects wore glasses or when disturbance light sources were present.

Kiyotaka Fukumoto, Takumi Tsuzuki, Yoshinobu Ebisawa
Designing Touchless Gestural Interactions for Public Displays In-the-Wild

Public displays, typically equipped with touchscreens, are used for interactions in public spaces such as streets or fairs. Low-cost visual sensing technologies, such as Kinect-like devices and high-quality cameras, now make touchless interfaces easy to implement. Nevertheless, the resulting interactions have not yet been fully investigated for public displays in-the-wild (i.e., in the social contexts where public displays are typically deployed). Different audiences, cultures, and social settings strongly affect users and their interactions. Moreover, gestures for public displays must be guessable to be easy to use for a wide audience. Issues like these could be addressed with user-centered design: gestures should be chosen by users in different social settings, then selected to be resilient to cultural bias and to provide a good level of guessability. The main challenge is therefore to define touchless gestures in-the-wild by applying novel UCD methods outside controlled environments and evaluating their effectiveness.

Vito Gentile, Alessio Malizia, Salvatore Sorce, Antonio Gentile
To Write not Select, a New Text Entry Method Using Joystick

Existing joystick text entry methods for game consoles and TV boxes are cursor-based selections on virtual keyboards. In this paper we present a new text entry method that uses joysticks as tangible devices to capture users’ freehand writing gestures. The method achieves considerable accuracy for English text entry. For the prediction model, we introduced a hidden Markov model (HMM) algorithm so that users can enter text with automatic correction. We conducted a pairwise usability test of the keyboard selection method and the writing-with-joystick method. The results show that both are easy to learn and that writing-with-joystick is faster than keyboard selection both with and without the prediction model. Subjects also reported that entering text with the keyboard selection method can be tedious, whereas handwriting feels more natural. These results indicate that writing with a joystick may be a viable text entry option for game console and smart TV users.
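
The abstract does not detail the HMM; as a purely illustrative sketch, the snippet below shows how an HMM-style Viterbi decoder could turn noisy stroke observations into the most likely letter sequence. The letters, stroke alphabet, and all probabilities are invented for the example.

```python
# Hypothetical sketch: Viterbi decoding of joystick stroke observations into
# letters, in the spirit of the HMM-based correction the paper describes.
# The stroke alphabet, probabilities, and letter bigrams are all invented.
import math

letters = ["t", "o", "a"]                       # toy hidden states (letters)
start_p = {"t": 0.4, "o": 0.3, "a": 0.3}        # toy letter priors
trans_p = {                                      # toy letter-bigram transitions
    "t": {"t": 0.1, "o": 0.6, "a": 0.3},
    "o": {"t": 0.5, "o": 0.1, "a": 0.4},
    "a": {"t": 0.4, "o": 0.4, "a": 0.2},
}
emit_p = {                                       # P(observed stroke | letter)
    "t": {"down": 0.7, "loop": 0.2, "hook": 0.1},
    "o": {"down": 0.1, "loop": 0.8, "hook": 0.1},
    "a": {"down": 0.2, "loop": 0.4, "hook": 0.4},
}

def viterbi(obs):
    """Return the most likely letter sequence for a stroke sequence."""
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][obs[0]]), [s])
          for s in letters}]
    for o in obs[1:]:
        V.append({})
        for s in letters:
            lp, path = max(
                (V[-2][p][0] + math.log(trans_p[p][s]), V[-2][p][1])
                for p in letters)
            V[-1][s] = (lp + math.log(emit_p[s][o]), path + [s])
    return max(V[-1].values())[1]

print(viterbi(["down", "loop"]))  # e.g. ['t', 'o']
```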

Zhenyu Gu, Xinya Xu, Chen Chu, Yuchen Zhang
AirFlip: A Double Crossing In-Air Gesture Using Boundary Surfaces of Hover Zone for Mobile Devices

Hover sensing provides richer interactions on mobile devices. As one such exploration, we present a quick double crossing in-air gesture for mobile devices, called AirFlip. In this gesture, users move their thumb into the hover zone from the side, and then move it out of the hover zone. Since this gesture does not conflict with any touch gestures that can be performed on mobile devices, it can serve as an additional gesture on touchscreen mobile devices, where only a limited input vocabulary is available. We implemented two applications based on AirFlip. In this paper, we present the results of a comparative user study conducted to identify the performance of AirFlip, and discuss its characteristics on the basis of those results.
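
The paper’s implementation is not reproduced here; as a minimal sketch under that caveat, a double crossing can be detected as an enter-then-exit pair of hover-boundary events completed within a short time window (the 300 ms threshold is an assumption):

```python
# Hypothetical sketch: detecting a quick enter-then-exit ("double crossing")
# of a hover zone from a stream of (timestamp, in_hover_zone) samples.
# The 300 ms window is an invented threshold, not from the paper.
def detect_airflip(samples, max_duration=0.3):
    """samples: iterable of (t_seconds, bool) hover-presence readings."""
    enter_time = None
    was_in = False
    for t, in_zone in samples:
        if in_zone and not was_in:        # boundary crossed inward
            enter_time = t
        elif was_in and not in_zone:      # boundary crossed outward
            if enter_time is not None and t - enter_time <= max_duration:
                return True               # quick in-out: AirFlip detected
            enter_time = None
        was_in = in_zone
    return False

stream = [(0.00, False), (0.05, True), (0.20, True), (0.25, False)]
print(detect_airflip(stream))  # True: thumb entered and left within 300 ms
```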

Hiroyuki Hakoda, Takuro Kuribara, Keigo Shima, Buntarou Shizuki, Jiro Tanaka
Design and Evaluation of Freehand Gesture Interaction for Light Field Display

The paper reports on a user study of freehand gesture interaction with a prototype autostereoscopic 3D light field display. The interaction was based on direct touch selection of simple objects rendered at different positions in space. The main goal of our experiment was to evaluate the overall user experience and perceived cognitive workload of such freehand interaction in a 3D environment and to compare it to simplified touch-based interaction in a 2D environment. The results confirmed the hypothesis that significantly more time is required for interaction in 3D than in 2D. Surprisingly, no significant difference was found in the assessment of cognitive workload between 3D and 2D. We believe the interaction scenario proposed and evaluated in this study could become an efficient and intuitive interaction technique for selecting and manipulating content rendered on autostereoscopic 3D displays.

Vamsi Kiran Adhikarla, Grega Jakus, Jaka Sodnik
Beyond Direct Gaze Typing: A Predictive Graphic User Interface for Writing and Communicating by Gaze

This paper introduces a new gaze-based Graphical User Interface (GUI) for Augmentative and Alternative Communication (AAC). In the state of the art, prediction methods that accelerate the production of textual, iconic, and pictorial communication by gaze control alone are still needed. The proposed GUI translates gaze inputs into words, phrases, or symbols by the following methods and techniques: (i) a gaze-based information visualization technique, (ii) a prediction technique combining concurrent and retrospective methods, and (iii) an alternative prediction method based on either the recognition or the morphing of spatial features. The system is designed to extend the communication capabilities of individuals with severe motor disabilities, with the aim of allowing end-users to hold a conversation independently, without needing a human interpreter.

Maria Laura Mele, Damon Millar, Christiaan Erik Rijnders
Nonlinear Dynamical Analysis of Eye Movement Characteristics Using Attractor Plot and First Lyapunov Exponent

The purpose of this study was to clarify eye movement characteristics during a visual search using nonlinear dynamics (chaos analysis). More concretely, the first Lyapunov exponent and the attractor plot were obtained for the time series data of x- and y-directional eye-gaze locations. An attempt was made to compare the first Lyapunov exponent and the attractor plot during a visual search task as a function of the layout complexity of the display, to verify whether chaotic properties existed in the fluctuation of eye-gaze locations, and to examine how the scaling properties change as a function of layout complexity. The first Lyapunov exponent of the time series of eye-gaze locations took positive values and tended to increase with search task difficulty (layout complexity). The attractor plot drew an ellipse-like trajectory, and the variation in attractor plots tended to become more complicated as task difficulty increased.
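
For reference, the first (largest) Lyapunov exponent quantifies how fast initially close trajectories diverge; the standard textbook estimate (not specific to this paper) is:

```latex
% d(t): distance between two trajectories that start d_0 apart at t = 0.
\lambda_1 = \lim_{t \to \infty} \; \lim_{d_0 \to 0} \; \frac{1}{t} \ln \frac{d(t)}{d_0}
```

A positive lambda_1, as reported here for the eye-gaze series, indicates sensitive dependence on initial conditions, i.e., chaotic fluctuation.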

Atsuo Murata, Tomoya Matsuura
Optimal Scroll Method for Eye-Gaze Input System
Comparison of R-E and S-R Compatibility

It is not clear which of the R-E (response-effect) and S-R (stimulus-response) compatibility principles is proper for eye-gaze input. This issue should be addressed to develop more usable eye-gaze input systems. The aim of this study was to explore which of the two compatibility principles is proper for an eye-gaze input system. For all scroll methods, the task completion time did not differ between the R-E and S-R compatibility conditions (see Fig. 4). In other words, the scroll speed did not differ between the two compatibility conditions for any of the three scroll methods. The number of errors per 90 trials differed significantly among scroll conditions and between the R-E and S-R compatibility conditions. Judging from scroll accuracy, errors were fewer when S-R compatibility (as in non-touch-screen Microsoft Windows) was applied than when R-E compatibility (as on an iPod or iPad) was applied. Within the range of this study, S-R compatibility appears dominant from the viewpoint of scroll accuracy for all three scroll methods. Subjective ratings of both usability and fatigue also supported the superiority of the S-R compatibility condition over the R-E compatibility condition. In conclusion, S-R compatibility was found to be superior for the eye-gaze input system.

Atsuo Murata, Makoto Moriwaka, Yusuke Takagishi
Effects of Target Shape and Display Location on Pointing Performance by Eye-Gaze Input System
Modeling of Pointing Time by Extended Fitts’ Law

This study investigated the effects of target shape, movement distance, target size, and direction of target presentation on pointing performance with an eye-gaze input system. All four factors were within-subject experimental variables. The target shapes were diamond, circle, rectangle, and square. The direction of target presentation included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. As a result, the pointing time for the rectangle tended to be longer, and upward movements also tended to prolong the pointing time. Such results should be useful for designing eye-gaze-input HCI (Human-Computer Interaction). Moreover, modeling the pointing time with Fitts’ law suggested that the index of difficulty for the rectangle should be defined separately from those for the circle, the diamond, and the square.
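
For context, the Shannon formulation of Fitts’ law commonly used in such modeling (a standard formula, not specific to this paper) relates pointing time MT to movement distance D and target width W:

```latex
% MT grows linearly with the index of difficulty ID; a and b are fitted
% empirically from pointing data.
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right) = a + b \cdot \mathrm{ID}
```

The paper’s suggestion amounts to defining the index of difficulty for rectangular targets separately from that for circles, diamonds, and squares.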

Atsuo Murata, Makoto Moriwaka, Daichi Fukunaga
Analysis of Eye Hand Interaction in Drawing Figure and Letter
For the Development of Handwrite-Training Device

We investigated eye-hand interaction by focusing on the position of fixation, in order to clarify the role of the eyes in drawing. In the experiment, participants were asked to draw a simple circle and common Chinese characters under three different conditions: drawing with a pen that had run out of ink, tracing, and drawing. The results showed three modes of eye-hand behaviour. We suggest that these three modes should be considered when developing a training device for more effective handwriting training.

Yumiko Muto, Takeshi Muto
Swift Gestures: Seamless Bend Gestures Using Graphics Framework Capabilities

With the advent of bendable devices, Lahey et al. [1] explored bend gestures for mobile phone applications. Considering the millions of applications present on app stores [2], it would be a challenge to modify source code to handle bend gestures. We propose a novel approach that assigns bend gestures using graphics framework capabilities and does not require changes to application source code. Because of its ease of use, the proposed approach may help bend gestures gain acceptance from the research community and industry.

Samudrala Nagaraju
Phases of Technical Gesture Recognition

Serious problems still stand in the way of realizing hands-free control by recognition of mid-air gestures. It is not fully clarified how gesture commands should be interpreted, because the stroke phases can be understood as either static or dynamic, and depending on which meaning is used, the gesture itself has to be executed in a different manner. This question was examined with video sequences of different interpretations and an online questionnaire. The results, together with still-pending problems, led to a first solution for a mobile, hands-free controlled transportation system (for picking, lifting, and transporting small boxes) in the logistics domain.

Tobias Nowack, Nuha Suzaly, Stefan Lutherdt, Kirsten Schürger, Stefan Jehring, Hartmut Witte, Peter Kurtz
Automatic Classification Between Involuntary and Two Types of Voluntary Blinks Based on an Image Analysis

Several input systems using eye blinking for communication with the severely disabled have been proposed. Eye blinking is either voluntary or involuntary. Previously, we developed an image analysis method yielding the open-eye area as a measurement value, from which a blinking wave pattern can be extracted using statistical parameters. Based on this method, we also proposed an automatic classification method for involuntary blinking and one type of voluntary blinking. In this paper, we aim to classify a new type of voluntary blinking in addition to the two previously known types. To classify these three blinking types, a new feature parameter is proposed. In addition, we propose a new classification method based on the measurement results. Our experimental results indicate a successful classification rate of approximately 95 % for a sample of seven subjects using our new method to distinguish involuntary blinking from the two types of voluntary blinking.
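
The paper’s feature parameter is not reproduced here; the sketch below is only a hypothetical illustration of how duration and depth features from an open-eye-area waveform might separate blink types, with all thresholds invented:

```python
# Hypothetical sketch: classifying blinks from an open-eye-area waveform.
# Feature choices and thresholds are invented; they are not the paper's.
import numpy as np

def blink_features(area, fs=100):
    """Duration (s) of eye closure and relative depth of the dip."""
    baseline = np.median(area)
    duration = (area < 0.5 * baseline).sum() / fs
    depth = 1.0 - area.min() / baseline
    return duration, depth

def classify(duration, depth):
    # Invented rule: involuntary blinks are brief; the two voluntary
    # types are assumed to differ in how long the eye stays shut.
    if duration < 0.15:
        return "involuntary"
    return "voluntary type 2" if duration > 0.5 else "voluntary type 1"

t = np.linspace(0, 1, 100)
area = 1.0 - 0.9 * np.exp(-((t - 0.5) / 0.15) ** 2)  # synthetic blink dip
print(classify(*blink_features(area)))               # voluntary type 1
```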

Hironobu Sato, Kiyohiko Abe, Shoichi Ohi, Minoru Ohyama

Touch-based and Haptic Interaction

Frontmatter
GUIs with Haptic Interfaces

While there are many studies on using haptic feedback to enhance desktop GUIs and on haptic devices as additional interfaces to improve performance of current interaction techniques, few studies use a haptic device as the primary input device. In this study, we present an experiment conducted with 30 students, comparing the performance of a haptic device with that of a mouse on GUI elements commonly operated with mouse gestures. The study was inspired by a system that utilizes both a mouse and a haptic device, and therefore also takes task switching into consideration. We conclude that acceptable performance is possible with a haptic device in a desktop-like GUI, but further study and experimentation are necessary.

M. Arda Aydin, Nergiz Ercil Cagiltay, Erol Ozcelik, Emre Tuner, Hilal Sahin, Gul Tokdemir
Effect of Button Size and Location When Pointing with Index Finger on Smartwatch

Users control smartwatches through touch screen interfaces, as on smartphones. However, because smartwatches are very small and users’ postures differ depending on the device, touch-screen control needs to be adapted for smartwatches. Users tap buttons on the touch screen to control the smartwatch, so fast and accurate button input is required. Button input speed and accuracy are affected by the displayed button size and location. In this study, we investigated the effects of button size and location when pointing with the index finger on a smartwatch. The results suggest that the pointing error rate is significantly affected by button size and location: error rates decreased as buttons became larger and when buttons were located near the center of the screen.

Kiyotaka Hara, Takeshi Umezawa, Noritaka Osawa
Preliminary Study to Determine a “User-Friendly” Bending Method: Comparison Between Bending and Touch Interaction

We suggest “User-Friendly” bending methods, applied in suitable contexts on flexible displays, through a comparative analysis with touch interaction (TI). To determine appropriate methods, we selected video and eBook applications for our experimental tasks. In the experiment, participants executed application commands through TI or bending interaction (BI) so that the more appropriate of the two interaction styles, flexibility-based and touch-based, could be determined. From the experiment, we found that BI does not suit all commands on a flexible display. In both applications, users preferred BI for infrequently or continuously used commands: bookmarking, fast-forwarding, and rewinding. When executing these commands, users intuitively applied an “analog metaphor” to the BI. However, in both applications, users did not prefer BI for commands that required detailed and quantitative control. Based on the results of this study, we expect to discover new interactions for flexible displays and to suggest directions for utilizing bending interaction.

BoKyung Huh, HaeYoun Joung, SeungHyeon Im, Hee Sun Kim, GyuHyun Kwon, JiHyung Park
Musician Fantasies of Dialectical Interaction: Mixed-Initiative Interaction and the Open Work

We compare some recent trends in mixed-initiative HCI and interactive electronic music, and consider what useful knowledge can be shared between them. We then present two novel principles for understanding the nature of this common trend: spaces of co-agency and dialectical interaction; and discuss some of the philosophical and technical challenges they present in relation to musical interaction. A technically advanced prototype, the Mephistophone, is discussed as a case study for understanding these design principles, concluding with some more general points for creative mixed-initiative interaction.

Leonardo Impett, Isak Herman, Patrick K. A. Wollner, Alan F. Blackwell
RICHIE: A Step-by-step Navigation Widget to Enhance Broad Hierarchy Exploration on Handheld Tactile Devices

Exploring large hierarchies is still a challenging task, especially on handheld tactile devices, due to the lack of visualization space and occlusion by fingers. In this paper, we propose RICHIE (Radial InCremental HIerarchy Exploration), a new radial widget that allows step-by-step navigation through large hierarchies. We designed it to meet handheld tactile requirements such as target reachability and space optimization. Depth exploration is performed by shifting two levels of the hierarchy at a time, reducing screen occupation. The widget was implemented to adapt a Command and Control (C2) system to mobile tactile devices, as these systems require the on-screen presence of a large hierarchy of units (the ORder of BATtle). Nevertheless, we are convinced that RICHIE could be used in other systems that require hierarchical data exploration, such as phylogenetic trees or file browsers.

Alexandre Kabil, Sébastien Kubicki
Information Select and Transfer Between Touch Panel and Wearable Devices Using Human Body Communication

This paper proposes a technique to enable the simple transfer of information between a computer with a large touch-panel display, such as a tabletop PC, and another computer, typically one worn by the user. With our technique, the user touches an intended item displayed on the panel to select and transfer it to his or her device. We describe some illustrative usage scenarios and outline a prototype system that can communicate image data between a tabletop PC and a wearable device. We conducted preliminary experiments to evaluate this system’s user interface and performed interviews with test subjects regarding the prototype.

Yuto Kondo, Shin Takahashi, Jiro Tanaka
Mouse Augmentation Using a Malleable Mouse Pad

We present a mouse augmentation that uses a malleable mouse pad named “Sinkpad”. Sinkpad augments mouse functionality by allowing a user to sink the mouse into the pad and tilt the mouse on the pad. In addition, the user is provided with haptic feedback via the mouse on the pad. Sinkpad allows the user to perform sink, tilt, and sink+move actions. This paper describes Sinkpad, its applications, and its evaluation.

Takuro Kuribara, Buntarou Shizuki, Jiro Tanaka
Spatial Arrangement of Data and Commands at Bezels of Mobile Touchscreen Devices

We present a design for arranging data and commands at the bezels of mobile touchscreen devices. In this design, a user can arrange any data, such as text and Web pages, at the bezel of the touchscreen by using a simple crossing gesture across the bezel. Our design has three main merits: data can be arranged while keeping the small display area of the mobile device open; the user can execute multiple commands continuously with minimal visual attention; and memorizing the locations of the data is made easier by exploiting the user’s spatial memory.

Toshifumi Kurosawa, Buntarou Shizuki, Jiro Tanaka
Fitts’ Throughput and the Remarkable Case of Touch-Based Target Selection

The method of calculating Fitts’ throughput is detailed, considering task characteristics, the speed-accuracy trade-off, data collection, and data aggregation. The goal is to bring consistency to the method of calculation and thereby strengthen between-study comparisons where throughput is used as a dependent measure. In addition, the distinction between indirect and direct pointing devices is elaborated using the examples of a mouse as an indirect pointing device and a finger as a direct pointing device. An experiment with 16 participants using a smart phone was conducted as an empirical test of direct touch-based target selection. Overall, the throughput was 6.95 bps. This is a remarkable figure – about 50 % higher than accepted values for a mouse. The experiment included task type (1D vs. 2D) and device position (supported vs. mobile) as independent variables. Throughput for the 1D task was 15 % higher than for the 2D task. No difference in throughput was observed between the supported and mobile conditions.
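
For reference, the standard effective-measures formulation of throughput, which combines speed and accuracy in a single measure, is:

```latex
% TP in bits/s. A_e: mean movement amplitude; MT: mean movement time;
% W_e: effective width from the spread (SD) of selection endpoints.
\mathrm{TP} = \frac{\mathrm{ID}_e}{\mathrm{MT}}, \qquad
\mathrm{ID}_e = \log_2\!\left(\frac{A_e}{W_e} + 1\right), \qquad
W_e = 4.133\,\sigma
```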

I. Scott MacKenzie
Investigation of Transferring Touch Events for Controlling a Mobile Device with a Large Touchscreen

When users hold a large mobile device with a large touchscreen in one hand, the region far from the thumb is difficult to reach. This forces users to change their hand posture so that the thumb can reach the top half. To address this problem, we explore a technique that transfers touch events on the bottom half of a touchscreen to its top half. This technique may allow users to control all regions of a large touchscreen using only the bottom half, so that they can control the device without changing hand posture. We conducted a user study to investigate the feasibility of our technique. The results show that our technique is marginally faster than direct touch and thus might be feasible.
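
A minimal sketch of the core mapping, assuming a simple half-screen translation (the paper’s actual trigger and mapping details are not given here):

```python
# Minimal sketch: transferring a touch event from the bottom half of the
# screen to the corresponding point in the top half. Screen size and the
# choice to shift only the y-coordinate are illustrative assumptions.
SCREEN_W, SCREEN_H = 1080, 1920

def transfer_touch(x, y):
    """Map a bottom-half touch (x, y) to the top half; pass others through."""
    if y >= SCREEN_H / 2:
        return x, y - SCREEN_H / 2
    return x, y

print(transfer_touch(540, 1800))  # (540, 840.0): a reachable thumb position
                                  # now controls the distant top region
```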

Kazusa Onishi, Buntarou Shizuki, Jiro Tanaka
GyroTouch: Wrist Gyroscope with a Multi-Touch Display

We present GyroTouch, a multi-modal approach that combines a digital gyroscope in a watch form factor with a multi-touch desktop display, with the aim of finding properties that yield better navigation in 3D virtual environments. GyroTouch was created to augment multi-touch gestures with other devices. Our approach addresses the 3D rotations and 3D translations used in navigating virtual environments. This work also includes an algorithm for estimating angular velocity about any given axis using only one previous sample.
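
The phrase “only one previous sample” suggests a backward-difference estimator; as an assumption rather than the paper’s stated algorithm, that would read:

```latex
% Angular velocity about a given axis from the current and previous
% orientation samples, \Delta t apart (backward difference).
\hat{\omega}_t = \frac{\theta_t - \theta_{t-1}}{\Delta t}
```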

Francisco R. Ortega, Armando Barreto, Naphtali Rishe, Nonnarit O-larnnithipong, Malek Adjouadi, Fatemeh Abyarjoo

Natural User Interfaces

Frontmatter
Giving Voices to Multimodal Applications

Speech interaction is important and useful in a wide range of applications. It is a natural way of interacting and is easy for people in general to use. Developing speech-enabled applications is a big challenge, one that grows when several languages are required, a common scenario in Europe, for example. Tackling this challenge requires methods and tools that foster easier deployment of speech features, providing developers with versatile means to include speech interaction in their applications. Besides, only a reduced variety of voices is available (sometimes only one per language), which raises problems in fulfilling user preferences and hinders deeper exploration of how well voices suit specific applications and users.

In this article, we present some of our contributions to these issues: (a) a generic modality that encapsulates the technical details of using speech synthesis; (b) the process followed to create four new voices, including two young-adult and two elderly voices; and (c) initial results exploring user preferences regarding the created voices.

The preliminary studies targeted groups of both young and older adults and addressed: (a) evaluation of the intrinsic properties of each voice; (b) observation of users while using speech-enabled interfaces, with elicitation of qualitative impressions regarding the chosen voice and the impact of speech interaction on user satisfaction; and (c) ranking of the voices according to preference.

The collected results, albeit preliminary, yield some evidence of the positive impact, at different levels, that speech interaction has on users. Additionally, the results show interesting differences in voice preferences between the two age groups and between genders.

Nuno Almeida, António Teixeira, Ana Filipa Rosa, Daniela Braga, João Freitas, Miguel Sales Dias, Samuel Silva, Jairo Avelar, Cristiano Chesi, Nuno Saldanha
It’s not What It Speaks, but It’s How It Speaks: A Study into Smartphone Voice-User Interfaces (VUI)

Since voice-user interfaces (VUIs) are becoming an attractive tool for more intuitive user interaction, this study proposes a between-subjects experiment in which variations in the voice characteristics (i.e., voice gender and manner) of a VUI are examined as key determinants of user perceptions. The study predicts that voice gender (male vs. female) and manner (calm vs. exuberant) have significant effects on psychological and behavioral outcomes, including the credibility and trustworthiness of information delivered via the VUI.

Jaeyeol Jeong, Dong-Hee Shin
StringWeaver: Research on a Framework with an Alterable Physical Interface for Generative Art

To improve the input interface for generative art, the author drew inspiration from a traditional game called string figure to design StringWeaver, a framework with an alterable physical input interface. The input system of StringWeaver consists of strings (made of black conductive rubber tube) that can be physically customized by rearrangement, together with a blob-tracking system that tracks the audience’s fingers. The visual output is projected directly onto the input interface, accompanied by generated music and sound. StringWeaver proved useful through three prototypes developed under the framework. Limitations and future work are discussed at the end of the paper.

Yunshui Jin, Zhejun Liu
Synchronization Between Utterance Rhythm and Body Movement in a Two-Person Greeting

In this study, designed to clarify the relationship between utterance rhythm and body movement in greetings, two experiments were conducted to examine greetings between persons of equal social standing and greetings between persons of different social standing. In both experiments, high synchronization between speakers was observed. In the analysis of the relationship between durations within subjects, synchronization between utterance rhythm and body movement was found for greetings between social equals. However, for greetings between persons of different social standing, there was little synchronization between utterance rhythm and body movement within subjects. These results are used to discuss the mechanism of the greeting, the starting point for communication.

Kenta Kinemuchi, Hiroyuki Kobayashi, Tomohito Yamamoto
Heuristics for NUI Revisited and Put into Practice

Natural User Interfaces (NUIs) represent a strong tendency in interaction with new computational technologies. They also represent a big challenge for designers, since delivering the promised feeling of naturalness is not trivial. In this paper, we revisit a set of 23 heuristics for NUI applications in the context of three experiments evaluating the design of two scenarios in which NUIs serve as assistive technology. While the initial set of heuristics was being used, it was also being evaluated. The results of the experiments led to a leaner set of 13 NUI heuristics, with a compliance scale ranging from −4 to 4. The heuristics in the revisited set are defined, described, and illustrated in the context of the experiments, so that they can be useful to designers and evaluators.

Vanessa Regina Margareth Lima Maike, Laurindo de Sousa Britto Neto, Siome Klein Goldenstein, Maria Cecília Calani Baranauskas
Using Neural Networks for Data-Driven Backchannel Prediction: A Survey on Input Features and Training Techniques

To make human-computer interaction more social, supporting backchannel cues can be beneficial. Such cues can be delivered through different channels such as vision, speech, or gestures. In this work, we focus on the prediction of acoustic backchannels in speech. Previously, this prediction has been accomplished with rule-based approaches, but like every rule-based implementation, these depend on a fixed set of handwritten rules that must be changed whenever the mechanism is adjusted or different data is used. In this paper we overcome these limitations by making use of recent advances in the field of machine learning. We show that backchannel predictions can be generated by a neural-network-based approach, which has the advantage of depending only on the training data, without the need for handwritten rules.
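
As a rough illustration of the data-driven setup (the features, labels, and network size here are hypothetical, not the paper’s), a small feedforward network can map per-window acoustic features to a backchannel/no-backchannel decision:

```python
# Hypothetical sketch: a small neural network predicting whether a listener
# backchannel ("uh-huh") should occur, from windowed acoustic features of
# the speaker (e.g., pitch and energy statistics). Data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                 # 8 acoustic features/window
y = (X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)  # synthetic backchannel label

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X[:800], y[:800])                      # train on the first 800 windows
print("held-out accuracy:", clf.score(X[800:], y[800:]))
```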

Markus Mueller, David Leuschner, Lars Briem, Maria Schmidt, Kevin Kilgour, Sebastian Stueker, Alex Waibel
Towards Creation of Implicit HCI Model for Prediction and Prevention of Operators’ Error

This paper describes the development of a new generation of interactive industrial workplace through the introduction of a novel implicit Human-Computer Interaction (HCI) model. The proposed framework aims to be the foundation of a computer-based system that increases workers’ safety and well-being in industrial environments. A further aim is to increase production levels while improving the ergonomics of the workplace. The specifically targeted environments are industrial workplaces that involve repetitive tasks, which are in most cases monotonic in nature. The implicit HCI model could enable the development of a specific technical solution, meant to be an integral and inseparable part of a future workplace, that predicts human errors and communicates warnings to workers. As such, the system is meant to increase workers’ situational awareness and prevent operating errors that would otherwise lead to work-related injuries (including casualties).

Pavle Mijović, Miloš Milovanović, Miroslav Minović, Ivan Mačužić, Vanja Ković, Ivan Gligorijević
Development of Chat System Added with Visualized Unconscious Non-verbal Information

Face-to-face communication involves sending and receiving verbal and non-verbal information, and non-verbal information is sent and received both consciously and unconsciously. In face-to-face communication, this non-verbal information plays an important role in smooth communication. In text chat, we can send some kinds of non-verbal information, for example face marks, smileys, and stamps, to let the partner know our emotions and the true meaning of the verbal information. However, it is difficult to convey unconscious non-verbal information in text chat, and because of this we sometimes misunderstand textual information. Therefore, we propose a text chat system that visualizes the user’s unconscious non-verbal information. In the proposed system, changes in the user’s pulse wave are reflected in the background color of the text chat. In this paper, the details of the proposed system and the results of its evaluation by sensory assessment are described.

Masashi Okubo, Haruna Tsujii
Implications for Design of Personal Mobility Devices with Balance-Based Natural User Interfaces

In this paper, we present a set of guidelines for designing personal mobility devices (PMDs) with body balance as the exclusive input modality. Using an online survey, a focus group, and a design workshop, we designed several PMD prototypes that used a natural user interface (NUI) with balance as the only form of user input. Based on these designs we constructed a physical, functional PMD prototype, which was evaluated in a usability test to explore how the balance interface should be designed. In conclusion, we discuss whether guidelines from the literature apply when designing PMDs and present a set of implications for the design of PMDs with balance-based NUIs, based on both the guidelines and our own findings.

Aleksander Rem, Suhas Govind Joshi
Stage of Subconscious Interaction for Forming Communication Relationship

We assume that subconscious interaction takes place to make possible the formation of a communication relationship with an object. To model this stage of interaction, two experiments were carried out. We created an experimental environment to observe the interaction between a human and a robot whose behavior was actually mapped from another human. In Experiment 1, interaction with an unknown robot was compared with interaction with a known robot. The interaction properties confirmed for each condition suggest that a stage of subconscious interaction does exist for recognizing artifacts as interaction partners. In Experiment 2, we explored the relation between physical interaction and cognitive states using the think-aloud method. Behavioral data were analyzed with a Bayesian network (BN). The results show that the BN structure relates to the speaking data, which indicates that the process of subconscious interaction can likely be modeled.

Takafumi Sakamoto, Yugo Takeuchi
Interactive Sonification Markup Language (ISML) for Efficient Motion-Sound Mappings

Despite rapid growth of research on auditory display and sonification mapping per se, there has been little effort on the efficiency or accessibility of the mapping process. In order to expedite variations in sonification research configurations, we have developed the Interactive Sonification Markup Language (ISML). ISML was designed within the context of the Immersive Interactive Sonification Platform (iISoP) at Michigan Technological University. We present an overview of the system, the motivation for developing ISML, and the time savings realized through its development. We then discuss the features of ISML and its accompanying graphical editor, and conclude by summarizing the system’s feature development and future plans for further enhancement. ISML is expected to decrease repetitive development tasks across multiple research studies and to increase accessibility for sonification researchers who do not have programming experience.

James Walker, Michael T. Smith, Myounghoon Jeon

Adaptive and Personalized Interfaces

Frontmatter
Defining and Optimizing User Interfaces Information Complexity for AI Methods Application in HCI

HCI has understandably become user-centric, but if we are to consider the human operator and the computer device as equal components of a human-computer system and seek to maximize its overall efficacy with AI methods, we need to optimize the information flows between the two. In this paper, we call for discussion on defining and measuring the information complexity of modern two-dimensional graphical user interfaces. By analogy with Kolmogorov complexity (algorithmic entropy) for computational resources, interface information complexity could allow estimating the amount of human-processor resources required to deal with an interaction task. Analysis of the current results suggests that the interface “processing” time of humans is indeed affected by the interface message “length” parameter and, presumably, by vocabulary size. We hope these results help lay the groundwork for broader application of AI methods in HCI in the coming era of ubiquitous Big Interaction.
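
Kolmogorov complexity itself is uncomputable, but a common practical proxy, used here purely as an illustration rather than as the authors’ metric, is the compressed size of a textual interface description:

```python
# Illustrative proxy for interface information complexity: approximate the
# (uncomputable) Kolmogorov complexity by the compressed size of a textual
# UI description. The sample UI strings are invented.
import zlib

def complexity(ui_description: str) -> int:
    """Compressed byte length as a rough upper bound on algorithmic entropy."""
    return len(zlib.compress(ui_description.encode("utf-8"), level=9))

simple_ui = "button:OK " * 20                           # repetitive, low entropy
dense_ui = "menu:File Edit View; grid:9x9; labels:..."  # varied vocabulary
print(complexity(simple_ui), complexity(dense_ui))
```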

Maxim Bakaev, Tatiana Avdeenko
A Systematic Review of Dementia Focused Assistive Technology

This paper presents a systematic review exploring the nature of assistive technologies (AT) currently being designed, developed, and evaluated for dementia sufferers and their carers. A search of four large databases, followed by filtering by relevance, led to the identification and subsequent review of the papers. Our review revealed that the majority of research in this area focuses on supporting day-to-day living activities, safety monitoring, memory aids, and the prevention of social isolation. We conclude that the majority of AT currently available supports day-to-day living activities, safety monitoring, and healthcare assistance. However, these devices merely address ‘ease of living’ rather than focusing on ‘quality of life’. Although some devices address the social symptoms of dementia, few address behavioural issues such as aggression, and virtually none are available to support recreational activities. After discussing the implications of these findings, we reflect on general design issues for assistive technologies in this domain that became apparent during the review.

Joanna Evans, Michael Brown, Tim Coughlan, Glyn Lawson, Michael P. Craven
Trust-Based Individualization for Persuasive Presentation Builder

For most people, decision-making involves collecting opinions and advice from others who can be trusted. Personalizing a presentation’s content with trustworthy opinions can be very effective for the persuasiveness of the content. While persuasiveness is an important factor in face-to-face presentations, it becomes even more important in an online course or other educational material, where the “presenter” cannot interact with the audience to attract and influence them. In this paper, as the final layer of our personalization model, the Pyramid of Individualization, we present a conceptual model for collecting opinionative information as trustworthy support for presentation content. We explore selecting a credible publisher (expert) for the supporting opinion as well as the right opinion aligned with the intended personalized content.

Amirsam Khataei, Ali Arya
Context Elicitation for User-Centered Context-Aware Systems in Public Transport

In public transport, context-aware systems have great relevance for overcoming barriers. The services of these systems can be adapted to the individual situation in order to support users in carrying out their tasks during the journey. The adaptation is based on the user’s context, which is mainly influenced by the user’s goals and the associated tasks. In context-aware system development for public transport, the early stages of requirements engineering require more detailed investigation. The research in this paper focuses on initial context elicitation, a precondition for the analysis and modelling of context. The first part discusses the development team’s lack of knowledge about the context at the beginning of development and presents a task-oriented context taxonomy for public transport to overcome this problem. The second part addresses the concerns of designing a concept for user data acquisition and provides a framework for the selection and combination of elicitation methods.

Heidi Krömker, Tobias Wienken
Personalization Through Personification
Factors that Influence Personification of Handheld Devices

In the near future, flexible bending displays will emerge, bringing a greater degree of freedom for users to personalize their devices. According to theoretical research, these newly introduced technologies will be personified, since people tend to be attracted to things that are similar to them and treat them as if they were real people. Thus, this paper investigates which variables influence the personification of flexible devices. To find these variables, interviews were conducted with 10 individuals, asking how they would personify the device based on Paul Ekman’s six basic emotions, and what kinds of variables influence their emotional response to the devices. The results show that the degree of angle, the speed and continuity of the movement, and the positioning of the device are the major factors that influence personification of flexible handheld devices.

Jung Min Lee, Da Young Ju
Enterprise Systems for Florida Schools

The purpose of this paper is to show the impact of decision-making about technology in school districts. During our research we found a great deal of important information about decision-making and forecasting, as well as information on e-business, along with business strategy, structure, and impact. These factors come together to help us understand where Charlotte County’s district went wrong in the decisions about its upgraded system. We explore our findings and present results showing how better planning can help other districts.

Mandy Lichtenstein, Kathleen Clark
Toward Usable Intelligent User Interface

Context-awareness in interaction with intelligent user interfaces has been considered a potentially important factor in their usability. A fair amount of research has been conducted to identify and help develop advanced adaptations in order to streamline interaction with systems. However, adaptations can have an adverse impact when they do not meet users’ expectations. ‘Context-awareness’ as well as ‘user-centeredness’ thereby become more crucial for improving the quality of interaction as well as of UIs. Intertwined with intelligent techniques, HCI has proved able to be more intuitive; nevertheless, a significant lack of transparency, controllability, and predictability has been detected. This work aims to improve the quality of interaction to match intelligent user interface performance. We focus on interaction as a key factor for improving user satisfaction and interface usability during use. This paper considers major issues and challenges in improving interaction with user interfaces during their use with reference to ISO 9241. It presents a methodological proposal for guiding UI developers to design, predict, and evaluate interaction quality with regard to well-defined dialogue principles.

Nesrine Mezhoudi, Iyad Khaddam, Jean Vanderdonckt
Suturing Space: Tabletop Portals for Collaboration

Most video-conferencing technologies focus on one-to-one, person-to-person links, typically showing the heads and shoulders of the conversants seated facing their cameras. This limits their movement and expects foveal attention. Adding people to the conversation multiplies the complexity and competes for visual real estate and video bandwidth. Most coronal meaning-making activity is excised by this frontal framing of the participants, and the method does not scale well as the number of participants rises. This research presents a different approach to augmenting collaboration and learning: instead of projecting people into remote spaces, furniture is digitally augmented to effectively exist in two (or more) locations at once. An autoethnographic analysis of the social protocols around this technology is presented. We ask: how can such shared objects provide a common site for ad hoc activity in concurrent conversations among people who are not co-located but are co-present via audio?

Evan Montpellier, Garrett Laroy Johnson, Omar Al Faleh, Joshua Gigantino, Assegid Kidane, Nikolaos Chandolias, Connor Rawls, Todd Ingalls, Xin Wei Sha
Violin Fingering Estimation According to the Performer’s Skill Level Based on Conditional Random Field

In this paper, we propose a method that estimates appropriate violin fingering according to the performer’s skill level, based on a conditional random field (CRF). The violin can produce the same pitch with different fingering patterns, and these patterns depend on skill level. We previously proposed a statistical method for violin fingering estimation, but it required a certain amount of training data in the form of fingering annotations for each note in the music score. This was a major issue with our previous method, because producing the annotations takes time and effort. To solve this problem, we propose a method that automatically generates training data for the fingering model from existing violin textbooks. Our experimental results confirmed the effectiveness of the proposed method.
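
The sketch below is a toy stand-in for such a fingering model: a Viterbi decode over per-note (string, finger) candidates, with emission scores for pitch plausibility and transition scores penalizing hand movement. All states and scores are invented; a real linear-chain CRF would learn feature weights from annotated (or textbook-derived) data:

```python
# Toy stand-in for CRF-style fingering estimation: choose, per note, a
# (string, finger) state maximizing emission + transition scores via Viterbi.
# States, scores, and penalties are invented for illustration only.
notes = ["E5", "F5", "G5"]                     # input melody
states = [("A", 4), ("E", 1), ("E", 2)]        # (string, finger) candidates

def emission(note, state):
    # Pretend score: how naturally this state produces the note.
    table = {("E5", ("E", 1)): 2.0, ("E5", ("A", 4)): 1.0,
             ("F5", ("E", 1)): 1.5, ("F5", ("E", 2)): 1.0,
             ("G5", ("E", 2)): 2.0}
    return table.get((note, state), -1.0)

def transition(prev, cur):
    # Penalize string changes and large finger jumps (hand-movement cost).
    return -1.0 * (prev[0] != cur[0]) - 0.5 * abs(prev[1] - cur[1])

def viterbi(seq):
    score = {s: emission(seq[0], s) for s in states}
    path = {s: [s] for s in states}
    for note in seq[1:]:
        new_score, new_path = {}, {}
        for s in states:
            best = max(states, key=lambda p: score[p] + transition(p, s))
            new_score[s] = score[best] + transition(best, s) + emission(note, s)
            new_path[s] = path[best] + [s]
        score, path = new_score, new_path
    return path[max(states, key=lambda s: score[s])]

print(viterbi(notes))  # [('E', 1), ('E', 1), ('E', 2)]
```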

Shinji Sako, Wakana Nagata, Tadashi Kitamura
Interactive Motor Learning with the Autonomous Training Assistant: A Case Study

At-home exercise programs have met with limited success in rehabilitation and training. A primary cause is the lack of a trainer’s presence for feedback and guidance in the home. To create such an environment, we have developed a model for representing motor learning tasks and training protocols. We designed a toolkit based on this model, the Autonomous Training Assistant, which uses avatar interaction and real-time multimodal feedback to guide at-home exercise. As an initial case study, we evaluate a component of our system with a child with cerebral palsy and his martial arts trainer through three simple motion activities, demonstrating the effectiveness of the model in representing the trainer’s exercise program.

Ramin Tadayon, Troy McDaniel, Morris Goldberg, Pamela M. Robles-Franco, Jonathan Zia, Miles Laff, Mengjiao Geng, Sethuraman Panchanathan

Distributed, Migratory and Multi-screen User Interfaces

Frontmatter
Living Among Screens in the City

Screens have become the apparatuses through which we encounter the world. However, this does not simply mean that our use of screens has increased, but rather that our relationship with them has changed the way in which we see and live. Through screens we gain knowledge and communicate with other people as well as with everything around us, particularly the urban environment. Individuals and screens have become inseparable elements of a single communicational and social system, raising fundamental questions of its comprehension and governance. The proliferation of screens and new information and communication technologies (ICT) is accomplishing a perceptual revolution. Our goal is to study the use of screens in the city and propose a new ecosystem contributing to their better use and mastery.

Bertrand David, René Chalon
Delegation Theory in the Design of Cross-Platform User Interfaces

The amalgamation of various technologies to support the needs of new computing models has become prevalent in computing environments such as ubiquitous computing. Amalgamation here means heterogeneity caused not only by the coexistence of various devices in the same computing environment, but also by the diversity of software, users, and interaction modalities. Platform heterogeneity, together with additional needs for interaction modalities and the proliferation of new technologies, poses unique challenges for user interface (UI) designers and developers. We consider the problem of heterogeneity as a demand for collaboration between platforms (device and system) that are owned or controlled by a human user. Hence, we apply the concept of delegation, implemented in a peer-to-peer model, where one peer (the delegator) delegates another peer (the delegatee) to run a UI (or a single interaction modality) on its behalf. Thus, the delegatee uses its own capabilities to present the required UI or interaction modality.

Dagmawi L. Gobena, Gonçalo N. P. Amador, Abel J. P. Gomes, Dejene Ejigu
Current Challenges in Compositing Heterogeneous User Interfaces for Automotive Purposes

Composition (i.e., merging distinct parts to form a new whole) of user interfaces from different providers or devices is popular in many areas. Current trends in the automotive area show that there is high interest in compositing interfaces from mobile devices into automotive user interfaces; “Apple CarPlay” and “Android Auto” are concrete examples of such compositions. However, composition comes with challenges, especially if the parts were originally designed for different purposes.

This paper presents the problem statement for compositing heterogeneous devices. Furthermore, it presents a layer model of the architectural levels at which composition can take place; for each of these layers, challenges have been identified.

Tobias Holstein, Markus Wallmyr, Joachim Wietzke, Rikard Land
A Framework for Distributing and Migrating the User Interface in Web Apps

Nowadays, the advent of mobile technologies with increasing functionality and computing power is changing the way people interact with their applications in more and more contexts of use. Many traditional user interfaces are thus evolving towards “distributed” ones, in which interaction elements can be distributed among heterogeneous devices on different platforms. In this paper we present an HTTP-based framework for generating and distributing the UIs (user interfaces) of custom applications, allowing device changes with state preservation. We use a schema-based definition of DUIs (distributed user interfaces) that allows specifying which elements are to be distributed. The framework is based on open standards and supports any markup-based web language. We provide a graphical use case implemented in HTML5.

Antonio Peñalver, David Nieves, Federico Botella
UniWatch - Some Approaches Derived from UniGlyph to Allow Text Input on Tiny Devices Such as Connected Watches

Smartwatches are a fast-expanding type of interactive device that allows users to directly access many smartphone applications. At the moment, smartwatches lack a usable means of text entry. In this paper, we propose a new approach to text entry on smartwatches called UniWatch. First, we give a state of the art of text entry on small devices. Then, we recall our past approaches to text entry, in particular UniGlyph, a text input method for handheld devices that used a 4-button keyboard. Second, we describe and compare different adaptations of UniGlyph for tiny connected devices such as smartwatches. All the proposed adaptations require only three buttons or three simple finger strokes on the screen. Third, we examine the role of word completion and word prediction for such devices.

Franck Poirier, Mohammed Belatar
A Model-Based Framework for Multi-Adaptive Migratory User Interfaces

Nowadays users are surrounded by a broad range of networked interaction devices for carrying out their everyday activities. Flexible and natural interaction with such devices in a seamless manner remains a challenging problem, as many different contexts of use (platform, user, and environment) have to be supported. In this regard, enabling task continuity by preserving the user interface’s state and adapting it to the changing context of use can help improve the user experience despite device changes. The development of such multi-adaptive migratory user interfaces (MAMUIs) involves several challenges for developers that are only partially addressed by frameworks like CAMELEON-RT. Supporting the development of user interfaces with adaptation and migration capabilities thus remains a challenging task. In this paper, we present an integrated model-based framework for supporting the development of MAMUIs.

Enes Yigitbas, Stefan Sauer, Gregor Engels

Games and Gamification

Frontmatter
A Dome-Shaped Interface Embedded with Low-Cost Infrared Sensors for Car-Game Control by Gesture Recognition

This paper proposes a steering-wheel-like interface using infrared sensors, suitable for in-car control, car-game control, or any interface with spin or turn hand gestures. Most interfaces introduced to date use touch, or position/depth sensing with cameras or proximity sensors positioned in a 2-D configuration. The electronic screen used for a touch interface requires the user to maintain contact with specific positions on the screen. In contactless interfaces, the sensors or cameras are placed in a planar configuration, and recognizing complex gestures like turns or twists requires intensive signal analysis. In the proposed preliminary model, we introduce a contactless gesture recognition design shaped as a dome to allow natural hand movement for turns, tested by controlling a virtual object that mimics the movement of a car wheel. The system recognizes hand movements such as forward (translated as acceleration), backward (deceleration/slowing), steady hold (cruise), lateral movement (braking), clockwise turns (right turn of the wheel), and anti-clockwise turns (left turn of the wheel), using 9 low-cost IR sensors embedded in a dome-shaped structure. The convex shape reduces interference from adjacent sensors to a significant extent and allows distinct gestures to be captured. Acceleration and braking were assigned to hand movements in order to test and reduce the difference between leg and hand reflexes in the human visuo-motor feedback system. A hidden Markov model was used for the 5 basic gestures deduced from the IR signal analysis. The first version of the system was tested on a 3D virtual wheel-like object simulating a car tire. Real-time user gesture data tested against this model gave an overall average accuracy of 88.01 % for the five gestures. User gestures were timed and fell in the range of 140-300 ms depending on the gesture sequence. Among the limitations of the first version being addressed are noisy signals, which cause errors in gesture recognition. We also need to test the design in a comprehensive driving simulation to collect empirical data on the adaptation of hand movements for controlling braking and acceleration.

Jasmine Bhanushali, Sai Parthasarathy Miduthuri, Kavita Vemuri
Evaluating a Public Display Installation with Game and Video to Raise Awareness of Attention Deficit Hyperactivity Disorder

Networked urban screens offer new possibilities for public health education and awareness. An information video about Attention Deficit Hyperactivity Disorder (ADHD) was combined with a custom browser-based video game and successfully deployed on an existing research platform, Screens in the Wild (SitW). The SitW platform consists of 46-inch touchscreen or interactive displays, a camera, a microphone, and a speaker, deployed at four urban locations in England. Details of the platform and the software implementation of the multimedia content are presented. The game was based on a psychometric continuous performance test. In the gamified version of the test, players receive a score for correctly selected target stimuli, with points awarded in proportion to reaction time and penalties for missed or incorrect selections. High scores are shared between locations. Questions were embedded to probe self-awareness of ‘attention span’ in relation to playing the game, awareness of ADHD and adult ADHD, and the increase in knowledge from the video. Results are presented on the level of public engagement with the game and video, deduced from play statistics, answers to the questions, and scores obtained across the screen locations. Awareness of adult ADHD specifically was similar to that of ADHD in general, and knowledge increased overall for 93 % of video viewers. Furthermore, ratings of knowledge of adult ADHD correlated positively with those of ADHD in general and with knowledge gain. Average scores varied among the sites, but there was no significant correlation of question ratings with score. The challenge of interpreting user results from unsupervised platforms is discussed.

Michael P. Craven, Lucy Simons, Alinda Gillott, Steve North, Holger Schnädelbach, Zoe Young
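
The scoring rule described above (points in proportion to reaction time, penalties for missed or incorrect selections) might look roughly like the following sketch; the constants are invented for illustration, not taken from the deployed game:

```python
# Minimal sketch of a gamified continuous performance test score:
# faster correct hits earn more points; misses and wrong selections
# are penalized. All constants are illustrative guesses.
MAX_POINTS = 100      # points for an (idealized) instant reaction
MAX_RT_MS = 1500      # reactions slower than this earn the minimum
MISS_PENALTY = 20
WRONG_PENALTY = 30

def score_event(kind, reaction_time_ms=None):
    if kind == "hit":      # correct target stimulus selected
        rt = min(max(reaction_time_ms, 0), MAX_RT_MS)
        return round(MAX_POINTS * (1 - rt / MAX_RT_MS))
    if kind == "miss":     # target not selected in time
        return -MISS_PENALTY
    if kind == "wrong":    # non-target selected
        return -WRONG_PENALTY
    raise ValueError(kind)

total = sum([score_event("hit", 350), score_event("miss"),
             score_event("hit", 900), score_event("wrong")])
print(total)  # a running score that could feed a shared high-score board
```
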
An Investigation of Reward Systems in Human Computation Games

Human Computation Games (HCGs) harness human intelligence to tackle computational problems. As in any game, a fundamental mechanism in a HCG is its reward system. In this paper, we investigate how virtual reward systems evoke perceptions of enjoyment in HCGs. Three mobile applications for location-based content sharing (named Track, Badge and Share) were developed for an experimental study. The Track version offered a points-based reward system for actions such as contribution of content. The Badge version offered different badges for collection while the Share version served as a control which did not have any virtual reward system. The experiment had a counterbalanced, within-subjects design. For each application, participants performed a series of tasks after which a questionnaire survey was administered. Results showed the Track and Badge applications were perceived to have more accurate and complete content than the control (Share) application. Further, participants reported being more engaged when using the former two applications.

Dion Hoe-Lian Goh, Ei Pa Pa Pe-Than, Chei Sian Lee
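
As a rough illustration of the two reward mechanics compared in the study — Track's points per action and Badge's collectible badges — here is a minimal sketch; the point values and badge thresholds are invented:

```python
# Minimal sketch contrasting a points-based reward system (Track) with a
# badge-based one (Badge). Values and thresholds are illustrative only.
POINTS = {"share_content": 10, "comment": 5, "view": 1}
BADGES = {"Contributor": 5, "Regular": 20, "Expert": 50}  # contributions needed

class User:
    def __init__(self):
        self.points, self.contributions, self.badges = 0, 0, set()

    def act(self, action):
        self.points += POINTS.get(action, 0)           # Track-style reward
        if action == "share_content":
            self.contributions += 1
        for badge, needed in BADGES.items():           # Badge-style reward
            if self.contributions >= needed:
                self.badges.add(badge)

u = User()
for _ in range(5):
    u.act("share_content")
print(u.points, u.badges)  # 50 {'Contributor'}
```
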
Is Gamification Effective in Motivating Exercise?

Despite the benefits of exercise, many individuals lack the motivation to integrate it into their daily lives. Recently, there has been a growing interest in the use of game principles in non-game contexts to make an activity that is perceived to be challenging, tedious or boring more enjoyable. With increased enjoyment through the infusion of game elements, it is expected that individuals will be more motivated to partake in the activity. Given this backdrop, the present study seeks to ascertain the utility of gamification for promoting exercise among individuals. We used Fitocracy as the gamification platform. Our results suggest that gamification improves not only attitudes towards and enjoyment of exercise but also shapes behavior in terms of increase in exercise activity. These findings augur well for gamification platforms and their usefulness in motivating exercise among individuals. Finally, our work suggests design implications for applications that aim to gamify exercise.

Dion Hoe-Lian Goh, Khasfariyati Razikin
‘Blind Faith’. An Experiment with Narrative Agency in Game Design

This paper reports on the current field of narrative-based game design through case-study analysis, with a particular focus on balancing high narrative agency against low production resources.

Deb Polson, Vidhi Shah
Play to Remember: The Rhetoric of Time in Memorial Video Games

This paper examines video games that commemorate historical events, identifying ‘family resemblance’ features and specific rhetorical resources. We argue that the commemorative character of a game derives, typically, from four interrelated qualities: invoking a specific historical event, claiming a truthful representation, inviting empathic understanding, and offering players opportunities for reflection. Starting from the observation that time has an important role in achieving commemorative gameplay, we discuss several games in terms of narrative and procedural rhetoric, with focus on time-related mechanics. We propose a repertoire of design resources to assist the creation of meaningful games for remembrance.

Răzvan Rughiniș, Ștefania Matei
‘Sketchy Wives’ and ‘Funny Heroines’
Doing and Undoing Gender in Art Games

Gender analysis of video games has gained public visibility through the Gamergate controversy. We examine several casual art games in order to explore the diversity of both conventional and counter-stereotypical gender representations. We find significant reliance on stereotypical presentations, especially in ‘sketchy wife’ characters. Such tropes may offer rhetorical resources to communicate, in brief stretches of gameplay, messages about life, death and the human condition. We also find creative ways of tackling gender displays through character description and game mechanics. Art games may thus serve as a laboratory for experimenting with doing, and possibly un-doing, gender.

Cosima Rughiniș, Elisabeta Toma
Gamification Effect of Collection System for Digital Photographs with Geographic Information which Utilizes Land Acquisition Game

As digital photos with geographic information are valuable as a new tourism resource, in this study we developed “Photopolie”, a system for collecting digital photos that include geographic information. Through a GWAP (game with a purpose) built on a land acquisition game, Photopolie defines photography targets that are useful as tourism resources and promotes the submission of digital photos with accurate position information. Evaluation experiment results showed the following three points: (1) by clarifying photography targets that are useful as tourism resources, and by choosing compatible gamification elements, it may be possible to collect more data; (2) user interaction can motivate the work; and (3) it may be possible to sustain motivation for data submission among active users who enjoy land acquisition games.

Rie Yamamoto, Takashi Yoshino, Noboru Sonehara
A Conceptual Model of Online Game Continuance Playing

Today’s online gaming customers are very demanding, so game vendors and developers need to understand and keep pace with customers’ demands. The purpose of this paper is to survey the current literature and summarize the reasons why users tend to play certain online games longer. We propose a research model to predict continued play of online games. We believe this framework will help both researchers and practitioners in game research, design and development.

Fan Zhao, Qingju Huang
A Lexical Analysis of Nouns and Adjectives from Online Game Reviews

The objective of this study is to develop playability heuristics through a lexical analysis of the nouns and adjectives used in online game reviews. A revised lexical approach is adopted to analyze nouns and adjectives from 821,122 online reviews. Ninety-seven (97) factors are extracted from the analysis. Based on the nouns and adjectives loading highly on these factors, a new heuristic development process is introduced and 116 playability heuristics are developed. This study significantly expands the pool of playability heuristics available to game developers for computer game design. The lexical method demonstrates its effectiveness in developing interface design guidelines whenever a large number of online reviews is available for a system or product, and it can be extended to other fields as well.

Miaoqi Zhu, Xiaowen Fang
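
A generic version of such a pipeline — extracting nouns and adjectives, then factoring a term matrix — might be sketched as follows. This is an illustration under assumptions, not the authors' revised lexical approach; the corpus and factor count are toys:

```python
# Minimal sketch: keep nouns/adjectives from reviews, build a term matrix,
# extract latent factors, then read off high-loading words per factor.
# Requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import FactorAnalysis

reviews = [
    "The graphics are gorgeous but the controls feel clumsy.",
    "Great story, responsive controls, ugly menus.",
    "Clumsy interface, dull story, nice soundtrack.",
    "Responsive gameplay and a gorgeous world.",
]

def nouns_and_adjectives(text):
    """Keep only tokens tagged as nouns (NN*) or adjectives (JJ*)."""
    tags = nltk.pos_tag(nltk.word_tokenize(text.lower()))
    return " ".join(word for word, tag in tags if tag.startswith(("NN", "JJ")))

docs = [nouns_and_adjectives(r) for r in reviews]
X = CountVectorizer().fit_transform(docs).toarray()
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
print(fa.components_.shape)  # (factors, terms): inspect high-loading words
```
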

HCI in Smart and Intelligent Environments

Frontmatter
A Mashup-Based Application for the Smart City Problematic

A mashup is an application that combines data and functionality from more than one source. It groups disparate data in ways that enable users to do new things or accomplish common tasks with newfound efficiency. The introduction of mashup applications and their increasing use in fields such as e-Learning and e-commerce raise new issues in the context of the “smart city”. Indeed, transportation based on private cars, public transportation services and shared bicycles needs appropriate user interfaces, which can be “mashed up” to allow an integrated approach to transportation that accounts for weather conditions, real-time traffic situations and personal preferences. These needs for composition and combination (orchestration) of existing web services and their underlying user interfaces are good examples of mashuping. In this paper we first explain two kinds of orchestration: service orchestration and HCI (Human-Computer Interface) orchestration. We then apply this global approach to the context of “smart cities”.

Abdelghani Atrouche, Djilali Idoughi, Bertrand David
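
As an illustration of service orchestration in this setting, the sketch below combines two web services into a single transport recommendation. The endpoint URLs and response fields are hypothetical placeholders, not real APIs:

```python
# Minimal sketch of a transportation mashup: query a weather service and a
# transit service, then orchestrate the results into one recommendation.
# Both endpoints and their JSON fields are invented for illustration.
import requests

def recommend_transport(city):
    weather = requests.get(f"https://api.example-weather.test/{city}").json()
    traffic = requests.get(f"https://api.example-transit.test/{city}").json()
    if weather["rain_mm"] > 1:
        return "public transport"     # skip shared bicycles in the rain
    if traffic["congestion"] > 0.7:
        return "shared bicycle"       # beat the traffic jam
    return "private car"
```
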
Design of a Bullying Detection/Alert System for School-Wide Intervention

In this paper we propose a bullying detection/alert system for school-wide intervention that combines wearables with heart rate (HR) monitors, surveillance cameras, multimodal machine learning, cloud computing, and mobile devices. The system alerts school personnel when potential bullying is detected and identifies potential bullying in three ways: (i) by tracking and assessing the proximity of known bullies to known students at risk for bullying; (ii) by monitoring stress levels of students via HR analysis; and (iii) by recognizing actions, emotions, and crowd formations associated with bullying. We describe each of these components and their integration, noting that it is possible for the system to use only a network of surveillance cameras. Alerts produced by the system can be logged. Reviews of these logs and tagged videos of detected bullying would allow school personnel to review incidents and their methods for handling bullying by providing more information about the locations, causes, and actors involved in bullying as well as teacher/staff response rates. In addition, false positives could be marked and fed back to the system for relearning and continuous improvement of the system.

Sheryl Brahnam, Jenifer J. Roberts, Loris Nanni, Cathy L. Starr, Sandra L. Bailey
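
Component (i), proximity tracking, can be illustrated with a minimal sketch; the distance threshold, dwell time, and data layout below are assumptions for illustration, not the authors' implementation:

```python
# Minimal sketch: alert when a known bully stays within a threshold distance
# of an at-risk student for a sustained number of camera frames. Positions
# would come from the surveillance-camera tracker in a real system.
from math import hypot

PROXIMITY_M = 3.0      # alert threshold in metres (assumed)
DWELL_FRAMES = 30      # sustained close frames before alerting (assumed)

counts = {}  # (bully_id, student_id) -> consecutive close frames

def check_frame(bullies, at_risk):
    """bullies / at_risk: dicts of id -> (x, y) floor coordinates."""
    alerts = []
    for b, (bx, by) in bullies.items():
        for s, (sx, sy) in at_risk.items():
            key = (b, s)
            if hypot(bx - sx, by - sy) <= PROXIMITY_M:
                counts[key] = counts.get(key, 0) + 1
                if counts[key] == DWELL_FRAMES:
                    alerts.append(key)   # notify school personnel, log event
            else:
                counts[key] = 0
    return alerts
```
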
Improving User Performance in a Smart Surveillance Scenario through Different Levels of Automation

Artificial intelligence can help users better accomplish certain tasks, especially those that are critical or prone to human error. However, automating tasks can introduce other problems that affect the user’s final performance. In this paper we investigate, from a Human Factors point of view, how different levels of automation (LOAs) may change users’ behaviour and performance in smart surveillance systems. The objective is to find the right balance between automating tasks and asking the user to intervene in the process. We performed tests (using qualitative and quantitative measures) to observe changes in performance, Situation Awareness and workload in relation to different LOAs.

Massimiliano Dibitonto, Carlo Maria Medaglia
Controlling the Home
A User Participatory Approach to Designing a Simple Interface for a Complex Home Automation System

This paper presents our experience with a Participatory Design approach designing an interface for controlling a home automation system. In a Future Workshop, users imagined that a home could be visualized as a graph, with nodes representing the devices in a household, and edges representing the interconnectivity between the devices. Participants later gave feedback on a refined mock-up of the interface, confirming that the idea of using a graph would be suitable for presenting the devices in a household. In the third iteration, users assessed a high-fidelity prototype. This evaluation focused on the graph interface’s ability to control a home automation system, and its ability to create an overview of the devices. Based on the feedback from the participants, we concluded that the prototype was able to convey an overview of the devices, and that a graph based interface would be suitable for controlling a home automation system.

Martin Eskerud, Anders Skaalsveen, Caroline Sofie Olsen, Harald Holone
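
The graph idea from the workshop — devices as nodes, interconnectivity as edges — is straightforward to sketch; the device names and links below are invented examples, not the study's prototype:

```python
# Minimal sketch of a home modelled as a graph: nodes are household devices,
# edges mean one device can trigger or inform another.
import networkx as nx

home = nx.Graph()
home.add_nodes_from([
    ("motion_sensor", {"room": "hall"}),
    ("ceiling_light", {"room": "hall", "state": "off"}),
    ("thermostat",    {"room": "living room", "target_c": 21}),
])
home.add_edge("motion_sensor", "ceiling_light", rule="on motion -> light on")

# The interface's overview is essentially a rendering of this structure:
for device, attrs in home.nodes(data=True):
    print(device, attrs, "->", list(home.neighbors(device)))
```
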
Enhancing Human Robot Interaction Through Social Network Interfaces: A Case Study

Recently we have witnessed the rise of various Social Networks and the growth of robots for home applications, which represent the second big market opportunity. The use and integration of robotic services in our daily life is closely tied to their usability and acceptability. In particular, their ease of use is linked, among other issues, to the simplicity of the interface the user has to interact with. In this sense, social networks could enrich and simplify communication between the user and technology, avoiding a proliferation of custom interfaces. In this work the authors propose a system to enHancE human RobOt Interaction through common Social networks (HeROIS). The HeROIS system combines cloud resources, a service robot and smart environments, offering three different services to help citizens in daily life. To assess acceptability and usability, the HeROIS system and services were tested with 13 real users (24–37 years old) in the DomoCasa Lab (Italy). Regarding usability, the results show that the proposed system is usable for 4 participants (30.77 %; M = 79.69, SD = 3.13) and excellent for 9 participants (69.23 %; M = 90.05, SD = 3.72). Concerning acceptability, the results show that the proposed system is acceptable for 8 volunteers (61.54 %; M = 77.02, SD = 4.23) and excellent for 5 participants (38.46 %; M = 89.71, SD = 6.06).

Laura Fiorini, Raffaele Limosani, Raffaele Esposito, Alessandro Manzi, Alessandra Moschetti, Manuele Bonaccorsi, Filippo Cavallo, Paolo Dario
aHead: Considering the Head Position in a Multi-sensory Setup of Wearables to Recognize Everyday Activities with Intelligent Sensor Fusions

In this paper we examine the feasibility of Human Activity Recognition (HAR) based on head-mounted sensors, both as stand-alone sensors and as part of a wearable multi-sensory network. To prove the feasibility of such a setting, an interactive online HAR system has been implemented that enables multi-sensory activity recognition using hierarchical sensor fusion. Our system incorporates 3 sensor positions distributed over the body: head (smart glasses), wrist (smartwatch), and hip (smartphone). We are able to reliably distinguish 7 daily activities: resting, being active, walking, running, jumping, cycling and office work. The results of our field study with 14 participants clearly indicate that the head position is applicable for HAR. Moreover, we demonstrate an intelligent multi-sensory fusion concept that increases recognition performance up to 86.13 % (recall). Furthermore, we found that the head exhibits very distinctive movement patterns for activities of daily living.

Marian Haescher, John Trimpop, Denys J. C. Matthies, Gerald Bieber, Bodo Urban, Thomas Kirste
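
One plausible reading of such a fusion step — combining per-position predictions into a single activity label — is confidence-weighted voting, sketched below; the weighting scheme is an assumption for illustration, not necessarily the authors' hierarchical method:

```python
# Minimal sketch: fuse activity predictions from the head-, wrist- and
# hip-worn classifiers into one label via confidence-weighted voting.
from collections import defaultdict

def fuse(predictions):
    """predictions: list of (position, label, confidence) tuples."""
    votes = defaultdict(float)
    for position, label, confidence in predictions:
        votes[label] += confidence       # each sensor votes with its confidence
    return max(votes, key=votes.get)

print(fuse([("head", "walking", 0.7),
            ("wrist", "walking", 0.6),
            ("hip", "cycling", 0.9)]))   # -> 'walking'
```
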
Synchronization of Peripheral Vision and Wearable Sensors for Animal-to-Animal Interaction

Animal-to-animal communication can serve as one method for overcoming the difficulties of field surveys. The Carrier Pigeon-like Sensing System (CPSS) is one system that realizes effective inter-animal communication using wearable devices, but its data-sharing component has not yet been evaluated comprehensively.

In this study, we evaluated the data-sharing system by synchronizing the devices with peripheral vision using video data, and we offer guidance on how it should be improved.

Ko Makiyama, Keijiro Nakagawa, Maki Katayama, Miho Nagasawa, Kaoru Sezaki, Hiroki Kobayashi
On the Usability of Smartphone Apps in Emergencies
An HCI Analysis of GDACSmobile and SmartRescue Apps

It is critical that users of disaster management smartphone apps be able to interact efficiently and effectively with the app during an emergency. This paper presents an overview of the challenges faced in designing mobile HCI for emergency management tools. Two recently developed emergency management tools, GDACSmobile and SmartRescue, are then studied from the point of view of usability and HCI challenges. These two tools rely primarily on a mobile app and on smartphone sensors, respectively. Both have a smartphone app and a web-based app with different UIs for their different user groups. Furthermore, the functionality of these apps is discussed in the context of a designed scenario: a fire onboard a passenger ship.

Parvaneh Sarshar, Vimala Nunavath, Jaziar Radianti
An Exploration of Shape in Crowd Computer Interactions

In this paper we explore crowd-computer interactions using a crowd shape generated from participating crowd members, both simulated and non-simulated, in three main shape forms: blobby, precise, and a combination of the two. We investigate whether such an interactive form, and which of the three forms, can be a viable and interesting way for many people to collaboratively interact with large public displays in public spaces.

Anthony Scavarelli, Ali Arya
COLUMN: Discovering the User Invented Behaviors Through the Interpersonal Coordination

We developed COLUMN, a soccer-ball-shaped interactive artifact consisting of eight modules connected by twelve servomotors. Our motivation is to explore the variety of body configurations for rolling behaviors that are invented through the coordination of three users. In the interaction, COLUMN becomes a social mediator that prompts connectivity among the users. We explore what effects arise when a robot becomes a social mediator, and we investigate the users’ conflict rates and interpersonal coordination. Finally, we discover different body-configuration patterns (sequences) arising from the users’ connectivity in each group; each sequence of body configurations is analyzed to extract the parameters essential to the rolling behaviors.

Yasutaka Takeda, Shotaro Baba, P. Ravindra S. De Silva, Michio Okada
Multimodal Interaction Flow Representation for Ubiquitous Environments - MIF: A Case Study in Surgical Navigation Interface Design

New interaction modalities have become available that augment how users interact with systems. Even though there is a vast number of applications for ubiquitous devices such as mobile agents, smart glasses and wearable technologies, many of them are rarely preferred by users. The success of these systems depends heavily on the quality of their interaction design. Moreover, domain-specific applications developed for ubiquitous devices require detailed domain knowledge that IT professionals normally lack, which can substantially reduce the quality of the services provided. Hence, effective, high-quality domain-specific applications for ubiquitous devices require close collaboration between domain experts and IT professionals during development, and tools that provide a common communication medium between them are needed. In this study, a new modelling tool for the interaction design of ubiquitous devices such as mobile agents and wearable devices is proposed, covering different interaction modalities. To better understand the effectiveness of the proposed design tool, an experimental study was conducted with 11 undergraduate students (novices) and 15 graduate students (experienced) from a Computer Engineering Department, evaluating their performance in detecting defects seeded into the interface design of a neuronavigation device. Results show that the novices found the defects harder to detect and performed worse than the experienced participants. Among the defect types, wrong-information and wrong-button defects were perceived as more difficult. The results of this study are intended to give system designers insights into better representing interaction design details and to improve communication between IT professionals and domain experts.

Gul Tokdemir, Gamze Altun, Nergiz E. Cagiltay, H. Hakan Maras, Alp Ozgun Borcek
Backmatter
Metadata
Title
Human-Computer Interaction: Interaction Technologies
Edited by
Masaaki Kurosu
Copyright Year
2015
Electronic ISBN
978-3-319-20916-6
Print ISBN
978-3-319-20915-9
DOI
https://doi.org/10.1007/978-3-319-20916-6
