
HCI International 2020 – Late Breaking Posters

22nd International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings, Part I

  • 2020
  • Book

About this book

This book constitutes the poster papers presented during the 22nd International Conference on Human-Computer Interaction, HCII 2020, which was held in July 2020. The conference was planned to take place in Copenhagen, Denmark, but had to change to a virtual conference mode due to the COVID-19 pandemic.

From a total of 6326 submissions, 1439 papers and 238 posters were accepted for publication in the HCII 2020 proceedings before the conference took place. In addition, 333 papers and 144 posters are included in the volumes of the proceedings published after the conference as “Late Breaking Work” (papers and posters). These contributions address the latest research and development efforts in the field and highlight the human aspects of the design and use of computing systems.

The 62 papers presented in this volume are organized in topical sections as follows: HCI theory, methods and tools; mobile and multimodal interaction; interacting with data, information and knowledge; interaction and intelligence; user experience, emotions and psychophysiological computing.

Table of Contents

  1. Mobile and Multimodal Interaction

    1. Frontmatter

    2. Littlebits Versus Makey Makey with Scratch: A User Perception for Tangible User Interfaces

      Lucas Barreiro Agostini, Tatiana Aires Tavares
      Abstract
      The main goal of this paper is to compare how different commercial systems with tangible user interfaces (TUIs) [11, 12] impact the user’s experience when targeting the same application. Two commercial systems were compared: the first uses the littleBits Synth Kit [3], and the second uses Scratch [18] together with Makey Makey [5, 14], connected to a computer. The tested application was a MIDI keyboard implemented on both systems. The applications were designed with the same functions, even though the two systems provide different interfaces. Usability was tested to ensure that both systems offered similar functionality, so that the comparison could focus primarily on the UX itself. Usability was measured with the System Usability Scale [2], while UX was assessed with AttrakDiff [8]. The case study showed that, from the user’s point of view, the littleBits Synth Kit outperforms the combination of Makey Makey and Scratch: since the usability scores were similar, the UX results could be compared directly, and they favored littleBits over the Makey Makey system.
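      The System Usability Scale used in the study above has a standard, well-documented scoring rule: ten Likert items rated 1–5, where odd-numbered (positively worded) items contribute their rating minus one, even-numbered (negatively worded) items contribute five minus their rating, and the sum is scaled by 2.5 to a 0–100 score. A minimal sketch of that computation (the function name is illustrative, not from the paper):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100).

    responses: list of 10 Likert ratings (1-5), in questionnaire order.
    Odd-numbered items are positively worded, even-numbered negatively.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs 10 ratings, each between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Positive items score (r - 1); negative items score (5 - r).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

      For example, a participant who answers 3 ("neutral") to every item scores exactly 50, the scale's midpoint.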
    3. Sequence Based Two-Factor Authentication (2FA) Method

      Devansh Amin, Yusuf Albayram
      Abstract
      Two-factor authentication (2FA) provides an extra layer of security by combining two unrelated authentication factors (e.g., a password and an authentication code sent to the user’s phone). Although its prevalence is growing across many online services to improve individual account security, its adoption rate remains low for various reasons (e.g., usability, privacy concerns). While there are many 2FA methods, such as SMS codes, one-time passwords (OTP), time-based one-time passwords (TOTP), U2F (Universal 2nd Factor), and push notifications, all of them require the user to have a phone or another device. However, users may be unwilling to share their phone numbers with companies due to privacy concerns, or they may simply not have a device for 2FA to work. We present a new type of 2FA method in which users derive an OTP from a pre-generated sequence of characters. This method eliminates the need for a physical device and works both online and offline (e.g., when no cell phone service is available). It can also be integrated well into existing password-based authentication systems.
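      The abstract does not specify how the OTP is derived from the pre-generated sequence. One plausible scheme in this family (this sketch is an illustration of the general idea, not the authors’ actual method) is a position-challenge design: the server and user share a pre-generated random character sequence, and at login the server challenges the user to read off the characters at a few random positions:

```python
import secrets
import string


def generate_sequence(length=64):
    """Pre-generate the shared character sequence (printed or saved by the user)."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))


def make_challenge(seq_len, k=4):
    """Server side: pick k distinct random positions into the sequence."""
    return secrets.SystemRandom().sample(range(seq_len), k)


def derive_otp(sequence, positions):
    """User side: read off the characters at the challenged positions."""
    return "".join(sequence[p] for p in positions)


def verify(sequence, positions, otp):
    """Server side: recompute the expected OTP and compare in constant time."""
    return secrets.compare_digest(derive_otp(sequence, positions), otp)
```

      Because the sequence can be carried on paper, such a scheme needs no phone or network access on the user’s side, matching the offline property the abstract claims.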
    4. Pilot Study on the Development of a New Wearable Tactile Feedback Device for Welding Skills Training

      Manabu Chikai, Junji Ohyama, Seiichi Takamatsu, Shuichi Ino
      Abstract
      This study aimed to develop a new wearable tactile feedback device for use in a welder training system. The system consists of a head-mounted display with its controller, a data measurement system, an open-source microcontroller board, a motion-sensing input device, and the proposed wearable tactile feedback device, which is realized using a reel-to-reel microchip mounting system on thick-knitted textiles. The device consists of a vibration motor on a flexible circuit, and its effect on trainees’ welding work was evaluated. It provides two types of vibration stimuli to the user’s forearm, based on supervised data derived from the hand motions of an expert welder during a welding task. We performed welding training trials to determine the efficacy of the tactile feedback device and evaluated its effect on welding speed. Three beginners, randomly allocated into training and control groups, evaluated the welding speed achieved with the tactile feedback device. The training group was asked to report subjective data (ease of motion) while welding with the device. The results suggest that the tactile feedback device made the operating welding velocity easy to understand. In conclusion, the device influenced the beginners’ learning process by conveying general information from welding operation manuals.
    5. User-Specific Interfaces of Teaching Devices for Manipulation of Collaborative Robot

      Jeyoun Dong, Seong Hyeon Jo, Wookyong Kwon, Dongyeop Kang, Yunsu Chung
      Abstract
      As the use of robotics in manufacturing and industrial settings continues to advance and expand at a rapid pace, efficient collaboration between robots and workers becomes increasingly important. Collaborative robots, known as cobots, are designed to perform tasks continuously and accurately alongside human workers in a safe manner. However, it is not easy for either beginners or skilled engineers to control cobots proficiently and directly, and human-robot interaction (HRI) therefore remains a research challenge. In particular, users who work with collaborative robots need robust and simple methodologies for human-robot collaboration in order to enhance the user experience (UX) and make direct teaching affordable. To date, teaching devices with more intuitive graphical user interfaces (GUIs) have been intensively explored, eliminating the need for safety barriers. Nevertheless, the safety issues that may arise when users work with cobots must still be considered, so it is important to assign well-adapted functions to each user. Developing an efficient way to authorize suitable functions for different users remains a major challenge.
      This paper presents a user-specific interface that takes the user’s role into account to improve usability and convenience. It provides multiple teaching-device functions that can be used not only by robot engineers but also by developers and operators. We applied user experience methods, such as user journey maps and personas, to design teaching-device interfaces that let users carry out appropriate tasks with ease. In the present study, we introduce the user-specific interfaces for developers, operators, and robot engineers, including both beginners and experts.
    6. Bridging the Gap Between Desktop and Mobile Devices

      Tyler Kass, John Coffey, Steven Kass
      Abstract
      As cell phones have evolved into personal computers, the functionality of the two device classes has converged. Yet the ways people interact with the two platforms remain vastly different. With much less space, different controls, and lower specifications, mobile interface design looks and feels significantly different from that of desktop devices. This paper analyzes what makes UI design for desktop applications different from that for mobile devices and how developers can ease the transition when porting from one to the other. The results of a survey found that computing power was a large factor in which device users chose for a particular task, while screen size was not as important. By identifying differences in these applications and suggesting solutions, the awkward transition from bulky, powerful devices to the small screens of everyday mobile devices can be alleviated.
    7. Palm-Controlled Pointing Interface Using a Dynamic Photometric Stereo Camera

      Yoshio Matsuda, Takashi Komuro, Takuya Yoda, Hajime Nagahara, Shoji Kawahito, Keiichiro Kagawa
      Abstract
      In this paper, we propose a user interface that allows pointing operations controlled by the orientation of the user’s palm, measured via the normal directions in the palm region using a dynamic photometric stereo camera based on a multi-tap CMOS image sensor. Our system allows users to control a pointer smoothly with small hand movements. We implemented two types of user interface designs and an interactive game to show the applicability of hand gesture operation using the proposed system.
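      The core mapping such a system needs is from a palm-normal vector to a pointer position. A minimal sketch of one plausible mapping (the function, parameters, and gain value are illustrative assumptions, not taken from the paper): decompose the normal into tilt angles about the two screen axes and map tilt linearly to displacement from the screen centre.

```python
import math


def normal_to_pointer(nx, ny, nz, screen_w=1920, screen_h=1080, gain=3.0):
    """Map a unit palm-normal vector to a screen position.

    Tilting the palm left/right (nx) or up/down (ny) moves the pointer
    away from the screen centre; nz points toward the camera when the
    palm is flat. `gain` amplifies small tilts into larger pointer motion.
    """
    # Tilt angles of the normal about the two screen axes.
    ax = math.atan2(nx, nz)
    ay = math.atan2(ny, nz)
    # Linear mapping from tilt to offset from screen centre, clamped to the screen.
    x = screen_w / 2 + gain * ax / (math.pi / 2) * (screen_w / 2)
    y = screen_h / 2 - gain * ay / (math.pi / 2) * (screen_h / 2)
    return (min(max(x, 0), screen_w), min(max(y, 0), screen_h))
```

      A palm held flat to the camera (normal pointing straight at it) keeps the pointer at the screen centre, and the gain factor is what lets small hand movements drive smooth, large pointer motion, as the abstract describes.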
    8. Analysis of Multimodal Information for Multi-robot System

      Artem Ryndin, Ekaterina Pakulova, Gennady Veselov
      Abstract
      In this paper, we consider a possible set of modalities for UAV systems used to find people in emergencies. These modalities allow the system to identify a found person and monitor their physiological state. We consider modalities from the acoustic and visual communication channels, such as speech, facial structure, and skin temperature. We also outline the possible signals of the input modalities and describe an interchange format for them.
    9. FAmINE4Android: Empowering Mobile Devices in Distributed Service-Oriented Environments

      Ioanna Zidianaki, Emmanouil Zidianakis, Eirini Kontaki, Constantine Stephanidis
      Abstract
      The domain of distributed services is constantly growing, enabling interoperability between applications that run on different operating systems. The increasing availability and use of wireless mobile devices has given rise to opportunities for new types of mobile distributed applications. This poster presents the FAmINE4Android middleware, which facilitates the development of service-oriented mobile applications by providing all the necessary mechanisms and tools through a seamless and intuitive Application Programming Interface (API). In addition, it supports the creation of distributed services by exposing software and hardware resources to the service-oriented environment. To demonstrate the features of the middleware and its suitability for including mobile devices in distributed computing platforms, a mobile museum guide application was developed as a case study, communicating with a Windows-based person tracking service. A preliminary evaluation of the mobile guide application verified the effectiveness and efficiency of the proposed middleware.
Title
HCI International 2020 – Late Breaking Posters
Editors
Prof. Constantine Stephanidis
Dr. Margherita Antona
Stavroula Ntoa
Copyright Year
2020
Electronic ISBN
978-3-030-60700-5
Print ISBN
978-3-030-60699-2
DOI
https://doi.org/10.1007/978-3-030-60700-5
