
2018 | Book

Human Interface and the Management of Information. Interaction, Visualization, and Analytics

20th International Conference, HIMI 2018, Held as Part of HCI International 2018, Las Vegas, NV, USA, July 15-20, 2018, Proceedings, Part I


About this book

This two-volume set LNCS 10904 and 10905 constitutes the refereed proceedings of the 20th International Conference on Human Interface and the Management of Information, HIMI 2018, held as part of HCI International 2018 in Las Vegas, NV, USA, in July 2018. In total, 1170 papers and 195 posters included in the 30 HCII 2018 proceedings volumes were carefully reviewed and selected from 4373 submissions.
The 56 papers presented in this volume were organized in topical sections named: information visualization; multimodal interaction; information in virtual and augmented reality; information and vision; and text and data mining and analytics.

Table of Contents

Frontmatter

Information Visualization

Frontmatter
VisUML: A Live UML Visualization to Help Developers in Their Programming Task

Developers produce a lot of code, and most of them have to merge it with what already exists. The time required to perform this programming task therefore depends on how quickly information about the existing code can be accessed. Classic IDEs display textual representations of information through features such as navigation, word search, or code completion. This kind of representation is not effective at conveying links between code fragments. Current graphical code representation modules in IDEs are suited to apprehending the system from a global point of view; however, the cognitive integration cost of those diagrams is disproportionate relative to the elementary coding task. Our approach also uses graphical representation, but only with the code elements that are part of the developer’s mental model during the programming task. The cognitive integration cost of our graphical representation is therefore lower. We use UML for this representation because it is a widespread and well-known formalism. We want to show that dynamic diagrams, whose content is modified and adapted in real time by monitoring the developer’s actions, can be of great benefit, as their content is perfectly suited to the developer’s current task. With our live diagrams, we provide developers with an efficient way to navigate between textual and graphical representations.

Mickaël Duruisseau, Jean-Claude Tarby, Xavier Le Pallec, Sébastien Gérard
Web-Based Visualization Component for Geo-Information

Three-dimensional visualization of maps is becoming an increasingly important issue on the Internet. The growing computing power of consumer devices and the establishment of new technologies like HTML5 and WebGL allow a plug-in-free display of 3D geo applications directly in the browser. Existing software solutions like Google Earth or Cesium either lack the necessary customizability or fail to deliver a realistic representation of the world. In this work a browser-based visualization component for geo-information is designed and a prototype is implemented in the gaming engine Unity3D. Unity3D allows translating the implementation to JavaScript and embedding it in the browser with WebGL. A comparison of the prototype with the open-source geo-visualization framework Cesium shows that, while maintaining acceptable performance, an improvement in visual quality is achieved. Another reason to use a gaming engine as the platform for our streaming algorithm is that such engines usually feature physics, audio, and traffic-simulation engines and more, which we want to use in our future work.

Ralf Gutbell, Lars Pandikow, Arjan Kuijper
A System to Visualize Location Information and Relationship Integratedly for Resident-centered Community Design

This research aims to realize resident-centered community design by utilizing information and communication technology (ICT), and to create an opportunity to regain relationships within the community by visualizing media spots, defined as places in an area where communication is active. In order to visualize and analyze location information and relationships in an integrated manner, we developed a system using a Web interface. As a result of visualization using the system, it became easy to guess the type of meeting and the attendees, which can help analysis. On the other hand, it turned out that there is room for improvement in drawing speed and analysis efficiency.

Koya Kimura, Yurika Shiozu, Kosuke Ogita, Ivan Tanev, Katsunori Shimohara
Reversible Data Visualization to Support Machine Learning

An important challenge for Machine Learning (ML) methods such as the Support Vector Machine (SVM), and others, is the selection of the structure of ML models for given data. This paper shows that the ability of purely analytical ML methods to address this challenge is limited. This is due to the fundamental nature of ML methods, which rely on the available training data and can therefore produce overgeneralized or overfitted models. In the proposed visual analytics approach, domain experts are put into the “driving seat” of ML model development to control model overgeneralization and overfitting. In this approach, domain experts work interactively with multidimensional data and the ML data classification models, presented in lossless, reversible visualizations. This paper shows that this enhances the ML classification models and decreases the use of external, domain-irrelevant assumptions in the ML models.

Boris Kovalerchuk, Vladimir Grishin
Segmented Time-Series Plot: A New Design Technique for Visualization of Industrial Data

Time-series plots have been widely used in the fields of data analysis and data mining because of their good visual characteristics. However, when researching and analyzing the massive data generated in industrial settings, some shortcomings of the traditional time-series plot make the visualization of big data ineffective, which is not conducive to data analysis and mining. In this paper, the traditional time-series plot is improved and a segmented time-series plot that can be used for massive industrial data analysis is proposed. In addition, this paper describes in detail the steps of making the segmented time-series plot. The method can reduce information overload and interface issues by limiting the amount of information presented.

Tian Lei, Nan Ni, Ken Chen, Xin He
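To make the segmentation idea above concrete, here is a minimal illustrative sketch (not the authors' implementation): a long series is split into fixed-width segments and each segment is summarized by its minimum, mean, and maximum, so that far fewer points need to be drawn. The segment size and plotting style are assumptions.

```python
# Minimal sketch of a segmented time-series plot: summarize each fixed-width
# segment as (min, mean, max) to limit the amount of information presented.
import numpy as np
import matplotlib.pyplot as plt

def segment_series(values, segment_size):
    """Summarize a long series as per-segment (min, mean, max) triples."""
    n_segments = len(values) // segment_size
    trimmed = np.asarray(values[:n_segments * segment_size])
    segments = trimmed.reshape(n_segments, segment_size)
    return segments.min(axis=1), segments.mean(axis=1), segments.max(axis=1)

if __name__ == "__main__":
    raw = np.sin(np.linspace(0, 60, 600_000)) + 0.1 * np.random.randn(600_000)
    lo, mid, hi = segment_series(raw, segment_size=1000)
    x = np.arange(len(mid))
    plt.fill_between(x, lo, hi, alpha=0.3, label="segment range")
    plt.plot(x, mid, label="segment mean")
    plt.legend()
    plt.show()
```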
Research on the Fuzziness in the Design of Big Data Visualization

In order to use and process information immediately, the relationships among huge amounts of information must be read and understood. Information visualization is an effective method of optimizing this process, using charts to help people comprehend and process information intuitively and quickly. The accuracy of the information in a visualization chart rests on the readability and integrity of the information transfer; if a chart does not meet this requirement, the accuracy of the information is greatly reduced, and the information may even be misunderstood or not obtained at all. This paper analyzes the causes of ambiguity in information visualization from the perspectives of the definition of ambiguity and experimental research on fuzziness. To address this problem, the investigation collects 30 samples based on five complex information visualization charts. Using infographics as the research object, we explore the impact of fuzziness on the user in the visualization process and investigate the causes and mechanisms of this effect through quantitative experiments.

Tian Lei, Qiumeng Zhu, Nan Ni, Xin He
Interactive Point System Supporting Point Classification and Spatial Visualization

A point system is a structured marketing strategy offered by retailers to motivate customers to keep buying goods or paying for services. However, current point systems do not adequately reflect where points come from. In this paper, the concept of point classification is put forward: points are divided into different categories based on their source. We also introduce missions into the point system; mission content is designed to guide consumption. In our system, points, missions, and a virtual pet are spatially visualized using AR techniques. The state of the virtual pet depends on the evaluation of the user, and users need to adjust their behavior to keep their pets in a good state. Users can manipulate the GUI or use gestures to interact with the system.

Boyang Liu, Soh Masuko, Jiro Tanaka
A Topological Approach to Representational Data Models

As data accumulate faster and grow bigger, building representational models has turned into an art form. Despite sharing common data types, each scientific discipline often takes a different approach. In this work, we propose representational models grounded in the mathematics of algebraic topology to understand foundational data types. We present hypergraphs for multi-relational data, point clouds for vector data, and sheaf models when both data types are present and interrelated. These three models use similar principles from algebraic topology and provide a domain-agnostic framework. We will discuss each method, provide references to their foundational mathematical papers, and give examples of their use.

Emilie Purvine, Sinan Aksoy, Cliff Joslyn, Kathleen Nowak, Brenda Praggastis, Michael Robinson
Trade-Off Between Mental Map and Aesthetic Criteria in Simulated Annealing Based Graph Layout Algorithms

Dynamic graph visualization is a key component of interactive graph visualization systems. Whenever a user applies filters or a graph is modified for other reasons, a new visualization of the modified graph should support the user’s Mental Map of the previous visualization to facilitate fast reorientation in the new drawing. Specialized graph layout algorithms exist which adopt the concept of Mental Map preservation to create recognizable layouts for similar graphs. In this work we used Simulated Annealing algorithms to calculate layouts which fulfill aesthetic and Mental Map requirements simultaneously. We investigated criteria of both types and conducted an experiment to examine the competition and trade-off between aesthetics and Mental Map preservation. Our findings show that even without explicitly optimizing Mental Map criteria, recognition can be supported by simply using the previous layout as a starting point rather than a new layout with randomly placed vertices. This results in better aesthetic quality as well as lower algorithm runtime. Another finding is that a simple weighted sum of aesthetic and Mental Map criteria may not be as effective as one might expect, especially if the weight assigned to the Mental Map is higher than the weight for aesthetics. Finally, we propose approaches for changing other aspects of the Simulated Annealing algorithm to obtain better graph layouts.

Armin Jörg Slopek, Carsten Winkelholz, Margaret Varga
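To illustrate the trade-off discussed above, the following is a minimal sketch, not the authors' algorithm, of a simulated-annealing layout step whose energy is a weighted sum of an aesthetic term and a Mental Map term. The specific energy terms, the weight w_mm, and the cooling schedule are illustrative assumptions.

```python
# Sketch: combine an aesthetic criterion (edge-length deviation) with a
# Mental Map criterion (displacement from the previous layout) in one
# simulated-annealing energy function.
import math
import random

def aesthetic_energy(pos, edges, ideal=1.0):
    # Penalize edge lengths that deviate from an ideal length.
    return sum((math.dist(pos[u], pos[v]) - ideal) ** 2 for u, v in edges)

def mental_map_energy(pos, prev_pos):
    # Penalize displacement of vertices relative to the previous layout.
    return sum(math.dist(pos[v], prev_pos[v]) ** 2 for v in pos if v in prev_pos)

def anneal(pos, edges, prev_pos, w_mm=0.5, steps=10_000, t0=1.0):
    def energy(p):
        return aesthetic_energy(p, edges) + w_mm * mental_map_energy(p, prev_pos)
    current = energy(pos)
    for i in range(steps):
        t = t0 * (1 - i / steps)                      # linear cooling schedule
        v = random.choice(list(pos))
        old = pos[v]
        pos[v] = (old[0] + random.uniform(-0.1, 0.1),
                  old[1] + random.uniform(-0.1, 0.1))
        new = energy(pos)
        if new > current and random.random() >= math.exp((current - new) / max(t, 1e-9)):
            pos[v] = old                              # reject the move
        else:
            current = new
    return pos
```

Starting `pos` from `prev_pos` rather than from random positions corresponds to the finding above that recognition can be supported without explicitly optimizing Mental Map criteria.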
Analysis of Factor of Scoring of Japanese Professional Football League

In the Japan Professional Football League (J League), the number of spectators has been increasing every year since 2011, and the Japanese national team participated in the 2018 World Cup in Russia; the J League market is therefore expected to become more active. In this research, we analyze the scoring trends of the league with the aim of proposing tactics and training for J League teams. Each piece of data includes position information obtained by dividing the field along an X-axis and a Y-axis, so this research pays attention to the plays involved in scoring and the positions where those plays started. First, cluster analysis is performed to classify the start positions of plays involved in scoring. After that, factor analysis and covariance structure analysis are carried out, and the plays highly relevant to scoring are identified. Before analysis, data cleaning is carried out so that similar variables do not exhibit a strong correlation. The start positions of plays involved in scoring are classified by cluster analysis: plays related to scoring are extracted from the score data and classified. Good results were obtained with clusters showing mainly attacks starting from a team’s own half, so it can be predicted that, in Japanese professional football, there is some tendency to score from the team’s own half. Next, factor analysis and covariance structure analysis are performed on each cluster to discover tactics related to scoring. Factor analysis extracted latent variables related to scoring; we define these latent variables as score-related tactics and analyze the relationships between different tactics using covariance structure analysis, treating those with low relevance as independent tactics. From the analysis results, we found that in the J League “side attacks” and “passes to empty space” are strongly related to scoring. Also, on the left side of the field, the tendency to score using “dribbling” was weak; this may be related to the fact that few Japanese players favor their left foot over their right. Therefore, it is possible to propose “training of side attackers with excellent physical strength and speed” and “strengthening counterattacks and side attacks”. Furthermore, we found that cultivating left-footed players remains a challenge. This analysis focused on attacks from a team’s own half; a future task is to analyze attack patterns from the opponent’s half and to judge whether an attack is hurried based on the length of the attack time.

Taiju Suda, Yumi Asahi
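As a concrete stand-in for the first step described above, the following sketch clusters the (x, y) start positions of scoring plays. The data, number of clusters, and choice of k-means are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch: cluster the field positions where scoring plays started.
import numpy as np
from sklearn.cluster import KMeans

# start_positions: one (x, y) field coordinate per scoring play (dummy data)
rng = np.random.default_rng(0)
start_positions = rng.uniform(low=[0, 0], high=[105, 68], size=(300, 2))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(start_positions)
for label, center in enumerate(kmeans.cluster_centers_):
    count = int(np.sum(kmeans.labels_ == label))
    print(f"cluster {label}: center=({center[0]:.1f}, {center[1]:.1f}), plays={count}")
```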
Analysis of Trends of Purchasers of Motorcycles in Latin America

The Latin American economy experienced a currency crisis and the associated confusion from the early 1990s through the early 2000s; since 2003, rapid economic growth has been achieved. As a result, in Latin American country “A”, external demand led to the expansion of the consumer finance market. Furthermore, financial services expanded due to the income-disparity correction policy implemented from 2003 to 2010. Consequently, purchases of motorcycles and automobiles on loans increased, but the rate of loans outstanding also increased. In this research, we look for factors behind loans outstanding using customer data. The data used in this study are anonymized customer data for motorcycle purchases in Latin American country “A” from September 2010 to June 2012. The usage data show that the proportion of loans outstanding is high. It is therefore necessary to extract the variables that are factors behind loans outstanding and, from there, to grasp the characteristics of loans outstanding. The analysis flow is data cleaning, basic aggregation, grouping of data, variable extraction, and binomial logistic regression analysis. The data are organized by data cleaning and grouped by income amount; basic aggregation allows the characteristics of the data to be determined. Next, we extract the variables that are factors behind loans outstanding using AUC. Finally, binomial logistic regression analysis shows how the variables extracted by AUC affect loans outstanding. In addition, the analysis results and previous studies suggest that specific variables greatly affect loans outstanding, so this study digs deeper into those variables. Based on the results of the analysis, we explore the tendencies of loans outstanding.

Rintaro Tanabe, Yumi Asahi
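The analysis flow described above (AUC-based variable extraction followed by binomial logistic regression) can be sketched as follows. The column names, threshold, and dummy data are assumptions for illustration only, not the authors' data or code.

```python
# Sketch: rank candidate variables by univariate AUC against the loans-
# outstanding flag, then fit a binomial logistic regression on those selected.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression

def select_by_auc(df, target, candidates, threshold=0.6):
    """Keep variables whose univariate AUC against the target exceeds a threshold."""
    selected = []
    for col in candidates:
        auc = roc_auc_score(df[target], df[col])
        auc = max(auc, 1 - auc)  # direction-agnostic
        if auc >= threshold:
            selected.append(col)
    return selected

# df: one row per customer; 'delinquent' = 1 if the loan is outstanding (dummy data)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "income": rng.normal(1000, 300, 500),
    "down_payment_ratio": rng.uniform(0, 1, 500),
    "age": rng.integers(18, 70, 500),
})
df["delinquent"] = (rng.uniform(0, 1, 500) < 0.3).astype(int)

features = select_by_auc(df, "delinquent",
                         ["income", "down_payment_ratio", "age"], threshold=0.5)
model = LogisticRegression().fit(df[features], df["delinquent"])
print(dict(zip(features, model.coef_[0])))
```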
Factor Analysis of the Batting Average

This study is a factor analysis of batting average in Japanese professional baseball. We analyze the factors influencing batting average using Japanese professional baseball data, as there is no established method to ensure a good hit in Japanese baseball. Based on the results, we clarify the factors that allow pitchers to prevent hits and the factors that allow batters to increase hits, and we aim to establish baseball teaching methods based on these factors. Ultimately, we aim to improve the level of professional baseball in Japan. The data used are the one-ball data from the regular seasons of Japanese professional baseball in 2015 and 2016; one-ball data are recorded every time a pitcher throws one ball to a batter. This time, we used only data from matchups between right-handed pitchers and right-handed batters, because we judged that it is easier to extract the characteristics of the factors when the conditions are narrowed down. In this research, factor analysis is performed first, and covariance structure analysis is performed based on the extracted factors. The factor analysis extracts how pitchers and batters approach the ball, and the covariance structure analysis shows how the extracted factors affect the variables. The factor analysis extracted four pitcher factors and two batter factors. We named the pitcher factors “throw down low”, “throw falling balls”, “throw balls to escape outside”, and “attack in-course”, and the batter factors “upper swing” and “down swing”. When covariance structure analysis was performed using the results of the factor analysis, three models could be created, showing how each factor influences hits, outs, and batting average. From these models, the upper swing had a positive influence on hits and a negative influence on outs; it also had a positive effect on the latent variable composed of hits and outs. In summary, an upper swing has a good influence on increasing the batting average. A future task is to analyze matchups other than right-handed pitcher versus right-handed batter, which could not be done this time. In addition, we will clarify which explanatory variable within the latent variable “upper swing” has the greatest influence on improving batting average.

Hiroki Yamato, Yumi Asahi

Multimodal Interaction

Frontmatter
Classification Method of Rubbing Haptic Information Using Convolutional Neural Network

In previous research, we proposed a method to collect accelerations from daily haptic behaviors using a ZigBee-based microcomputer. However, a method for classifying the collected data was not sufficiently implemented. We therefore propose applying the collected data to classify rubbing haptic information. In this paper, we implemented a classification approach for haptic information collected by our method, using a convolutional neural network (CNN). We performed a classification experiment in which the CNN classified 18 types of information with 93.2% accuracy on average. We also performed an experiment to classify rubbed objects in real time; the CNN was able to classify five types of objects with about 67.7% accuracy on average.

Shotaro Agatsuma, Shinji Nakagawa, Tomoyoshi Ono, Satoshi Saga, Simona Vasilache, Shin Takahashi
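The kind of model described above can be sketched with a small 1D CNN over windows of acceleration data. The window length, layer sizes, and dummy data below are assumptions for illustration; the authors' network architecture is not given in the abstract.

```python
# Sketch: a minimal 1D CNN that classifies 3-axis acceleration windows into
# 18 rubbing-information classes.
import numpy as np
import tensorflow as tf

WINDOW = 256   # samples per acceleration window (assumed)
N_CLASSES = 18

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 3)),          # 3-axis acceleration
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for the collected rubbing-acceleration windows.
x = np.random.randn(100, WINDOW, 3).astype("float32")
y = np.random.randint(0, N_CLASSES, size=100)
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```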
Haptic Interface Technologies Using Perceptual Illusions

With virtual reality now accessible to anyone through high-end consumer headsets and input devices, researchers are seeking cost-effective designs based on human perceptual properties for virtual reality interfaces. The author has been studying a sensory-illusion-based approach to designing human-computer interface technologies. This paper overviews how we are using this approach to develop force displays that elicit illusory continuous force sensations by presenting asymmetric vibrations and kinesthetic displays based on a cross-modal effect among visual, auditory, and tactile cues of self-motion.

Tomohiro Amemiya
Assessing Multimodal Interactions with Mixed-Initiative Teams

The state-of-the-art in robotics is advancing to support the warfighters’ ability to project force and increase their reach across a variety of future missions. Seamless integration of robots with the warfighter will require advancing interfaces from teleoperation to collaboration. The current approach to meeting this requirement is to include human-to-human communication capabilities in tomorrow’s robots using multimodal communication. Though advanced, today’s robots do not yet come close to supporting teaming in dismounted military operations, and therefore simulation is required for developers to assess multimodal interfaces in complex multi-tasking scenarios. This paper describes existing and future simulations to support assessment of multimodal human-robot interaction in dismounted soldier-robot teams.

Daniel Barber
Animacy Perception Based on One-Dimensional Movement of a Single Dot

How humans perceive animacy based on movement is not well understood. In the present study, we conducted an experiment to investigate how humans perceive animacy based on the one-dimensional movement of a single dot. Ten participants were asked to generate 60 s of one-dimensional movement under three assumptions: randomness, inanimacy, and animacy. The time-series analysis revealed that the movements generated under the assumption of randomness were similar to white noise, the movements generated under the assumption of inanimacy were periodic, and the power spectra of the movements generated under the assumption of animacy lay between pink and brown noise, with trajectories showing autocorrelation but no clear periodicity.

Hidekazu Fukai, Kazunori Terada, Manabu Hamaguchi
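The comparison with pink and brown noise above can be made concrete by estimating the slope of a movement trace's power spectrum on a log-log scale (pink noise has slope around -1, brown noise around -2). This is an illustrative sketch only, with an assumed sampling rate, not the authors' analysis pipeline.

```python
# Sketch: estimate the spectral slope of a 1-D movement trace.
import numpy as np
from scipy.signal import welch

def spectral_slope(signal, fs=60.0):
    """Fit log10(power) against log10(frequency) and return the slope."""
    freqs, power = welch(signal, fs=fs, nperseg=min(1024, len(signal)))
    mask = freqs > 0
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(power[mask]), deg=1)
    return slope

# Example: a random-walk (brown-noise-like) trace sampled at 60 Hz for 60 s
trace = np.cumsum(np.random.randn(60 * 60))
print(f"estimated slope: {spectral_slope(trace):.2f}  (pink ~ -1, brown ~ -2)")
```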
Experimental Observation of Nodding Motion in Remote Communication Using ARM-COMS

Considering the critical issues of remote communication, this study proposes an idea for virtually connecting remote individuals through an augmented tele-presence system called ARM-COMS (ARm-supported eMbodied COmmunication Monitor System). Several robot-based remote communication systems have been proposed to address the telepresence of remote participants; however, they do not cover the issue of relationship. Robotic arm-type systems and anthropomorphization have drawn researchers’ attention as ways to address the lack of relationship with remote participants. However, using the body movement of a remote person as a non-verbal message, or as cyber-physical media in remote communication, is still an open issue. Under these circumstances, this paper describes the system configuration of ARM-COMS based on the proposed idea and discusses its feasibility using experimental observations.

Teruaki Ito, Hiroki Kimachi, Tomio Watanabe
Hands-Free Interface Using Breath Residual Heat

Most user interfaces have been studied based on hand gestures or finger touches, but interfaces that rely on the user’s hands do not cover all of the user’s situations. In this paper, we propose a hands-free user interaction system using a thermal camera. The hands-free interface proposed in this paper exploits the user’s breath heat and a thermal camera, so it is very useful for users who have difficulty using their hands. In addition, the thermal camera is not affected by background color or the lighting environment, so it can be used in various complex situations. For hands-free interaction, the user creates residual heat on the surface of the object to interact with, and the thermal camera senses the residual heat. We observed that the residual heat from breath is the most suitable for this interaction design. For this observation, several methods of generating strong residual heat on various materials were tested. According to the tests, the residual heat generated from breath through a hollow rod (straw) is the most stable for sensing and interaction. This paper demonstrates the usefulness of the approach by implementing an interaction system using a camera-projection system as an application example.

Kanghoon Lee, Sang Hwa Lee, Jong-Il Park
A Study of Perception Using Mobile Device for Multi-haptic Feedback

As mobile devices develop, vibration feedback technology also advances to help visually impaired and elderly users. At present, mobile devices still use motor technology to provide vibration feedback. Therefore, in order to explore possible applications of motor vibration feedback, two experiments were carried out in this study, each using four motors. In the first experiment, four motors were installed in the corners of two prototype devices of different sizes (5.5 inches and 9.7 inches). These devices were placed on top of a desk and the motors were activated randomly. The subjects then touched the center of the prototypes with their index finger and had to identify which motor was vibrating. The results showed that age had a significant effect on the perception of the vibration position, but the difference between the two sizes was not significant. The second experiment compared the perception of the vibration position in hand-held use of the 5.5-inch prototype. The results showed only a minor difference between age groups in how the prototype was used; however, the different ways of using the prototype had a significant effect on the identification of the vibration position.

Shuo-Fang Liu, Hsiang-Sheng Cheng, Ching-Fen Chang, Po-Yen Lin
Realizing Multi-Touch-Like Gestures in 3D Space

In this paper, our purpose is to extend 2D multi-touch interaction to 3D space and to present universal multi-touch-like gestures for 3D space. We describe a system that allows people to use their familiar multi-touch gestures in 3D space without touching a surface; we call these midair gestures 3D multi-touch-like gestures. Since there is no object or surface for the user to touch in 3D space, we use a depth camera to detect the fingers’ state and estimate whether a finger is in the “click down” or “click up” state, which shows the user’s intention to interact with the system. We use machine learning to recognize hand shapes. Since we do not need to run recognition all the time, we only recognize the hand shape between “click down” and “click up”.

Chunmeng Lu, Li Zhou, Jiro Tanaka
Effects of Background Noise and Visual Training on 3D Audio

Spatial audio, or 3D audio, as an information channel is increasingly used in various domains. Compared to the multitude of synthetic visual systems and 3D representations, audio interfaces are underrepresented in modern aircraft cockpits: civil commercial aircraft rarely use spatial audio as a supplementary directional information source, although various research approaches deal with its benefits. In 3D audio simulator trials, pilots express concern over distractions from background noise and possibly mandatory training requirements. To resolve this, the author developed and tested a 3D audio system to support pilots in future cockpits, called Spatial Pilot Audio Assistance (SPAACE). The experiment took place at the German Aerospace Center’s Apron and Tower Simulator. The developed system creates a three-dimensional audio environment from normal non-spatial audio. The 27 participants heard the sound through an off-the-shelf aviation-like stereo headset. The main subject of investigation was whether air traffic control background noise affects spatial perception. The non-normally distributed location error with background noise (Mdn = 6.70°) turned out to be lower than the location error without air traffic control background noise (Mdn = 7.48°). The second part of the experiment evaluated the effect of visual feedback-based training: the location error with training (Mdn = 6.51°) was only moderately lower than the location error without training (Mdn = 7.96°). The results show that humans can perceive the SPAACE audio with high precision, even with distracting background noise as in a busy cockpit environment. The effect of training was not as large as expected, primarily because the localization baseline without training was already precise.

Christian A. Niermann
Development of an End Effector Capable of Intuitive Grasp Operation for SPIDAR-W

This paper proposes a new grasp-operation end effector for the wearable 6-DoF haptic device SPIDAR-W. With the new end effector, users can intuitively perform grasp operations in a virtual environment. Traditional end effectors with a button-type interface can only be held in the hand, and a button must be pushed to lift a virtual object. With the new end effector, the hand can be opened and closed naturally, and a virtual object can be lifted by grasping it with sufficient force. The experiment was conducted with a pressure sensor monitoring the gripping force and a Velcro belt fixing the end effector to the hand, and performance measurements were made using the new end effector. As a result, users were able to perform grasp operations at will to some extent. However, there were some unintended operational errors, and several points for improvement were noted.

Kanata Nozawa, Ryuki Tsukikawa, Takehiko Yamaguchi, Makoto Sato, Tetsuya Harada
Proposal of Interaction Using Breath on Tablet Device

We propose an interaction that is operated by blowing a breath onto the screen of an information terminal, and we propose and evaluate a device for detecting the breath and an algorithm for identifying it. Breath-operated input devices have been studied before, but in those systems users do not blow on the touch panel as in an ordinary manual touch operation; they are required to blow toward a dedicated input sensor. In our proposed system, a user can perform operations such as selecting and confirming objects displayed on a screen, because the system detects a breath blown toward the screen of the information terminal. In this study, a breath interaction is proposed by allocating various kinds of breath to various operations of a tablet terminal.

Makoto Oka, Hirohiko Mori
Effectiveness of Visual Non-verbal Information on Feeling and Degree of Transmission in Face-to-Face Communication

Recently, the importance of non-verbal information has been getting attention. Generally, it is believed that the more non-verbal information is exchanged, the better a partner’s message can be understood, and much research on the effectiveness of non-verbal information in communication has been performed. However, some of this research raises doubts about this effect. Prof. Sugiya investigated the quality of information transmission from two points of view, degree of transmission and feeling of transmission, and suggests that non-verbal information sometimes does not help us understand a partner’s message. We try to verify the effect of non-verbal information and the type of communication on feeling and degree of transmission from these viewpoints. For this purpose, two experiments were conducted. The experimental results for the three communication modes (text chat, voice chat, and face-to-face communication) showed that the degree of transmission was lowest in face-to-face communication, as evaluated with the listeners’ test accuracy rates and the consistency of character impressions. Conversely, according to the questionnaire results, feeling of transmission was ranked highest for face-to-face communication, followed by voice chat, and lastly text chat. These results suggest that the communicability of information should be considered using feeling of transmission and degree of transmission as two separate factors.

Masashi Okubo, Akeo Terada
Investigation of Sign Language Recognition Performance by Integration of Multiple Feature Elements and Classifiers

Sign languages are used by people with hearing or speech impediments and by healthy individuals communicating with them. It is quite difficult to acquire sign language skills, since there is a vast number of sign language words and some signing motions are very complex. Several attempts at machine translation have been investigated for a limited number of sign language motions by using KINECT and a data glove, which is equipped with strain gauges to monitor the angles at which fingers are bent, to detect hand motions and hand shapes. One of the key features of our proposed method is the use of an optical camera and colored gloves to detect sign language motion. The optical camera is implemented in a smartphone, which removes limitations on where and when the method can be used as a machine translation tool. The authors propose two new schemes. One is to add two feature elements, hand direction obtained from the angle between the wrist and fingertips, and hand rotation calculated from the visible size of the palm and wrist, to the four conventional elements comprising motion trajectory, motion velocity, hand position, and hand shape. The other is to integrate the results obtained by each classifier to enhance recognition performance. Six kinds of classifiers were applied to 35 sign language motions. A total of 3150 pieces of motion data, that is, 2100 pieces of training data and 1050 pieces of evaluation data, were used to evaluate the proposed method. The recognition results were examined by integrating the feature elements and classifiers. The success rate for the 35 words was 76.2% when selecting only the first-ranked answer and 94.2% when selecting among the first, second, or third ranked answers. These values suggest that the proposed method could be used as a review tool for assessing how well learners have mastered sign language motions.

Tatsunori Ozawa, Yuna Okayasu, Maitai Dahlan, Hiromitsu Nishimura, Hiroshi Tanaka
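The classifier-integration and top-1/top-3 evaluation described above can be sketched as follows. The score-averaging fusion, class counts, and dummy data are assumptions for illustration; the abstract does not specify how the authors combine their six classifiers.

```python
# Sketch: fuse per-class scores from several classifiers, then measure
# top-1 and top-3 accuracy over 35 sign-language words.
import numpy as np

def integrate_scores(score_matrices):
    """Average per-class scores from several classifiers (samples x classes)."""
    return np.mean(np.stack(score_matrices, axis=0), axis=0)

def top_k_accuracy(scores, labels, k):
    top_k = np.argsort(scores, axis=1)[:, -k:]          # indices of the k best classes
    hits = [label in row for row, label in zip(top_k, labels)]
    return float(np.mean(hits))

# Dummy scores from three classifiers over 1050 samples and 35 sign words
rng = np.random.default_rng(0)
labels = rng.integers(0, 35, size=1050)
classifiers = [rng.random((1050, 35)) for _ in range(3)]

fused = integrate_scores(classifiers)
print("top-1:", top_k_accuracy(fused, labels, k=1))
print("top-3:", top_k_accuracy(fused, labels, k=3))
```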
Smart Interaction Device for Advanced Human Robotic Interface (SID)

Robotic assets used by dismounted Soldiers have usually been controlled through continuous and effortful tele-operation; however, more autonomous capabilities have been developed that reduce the need for continuous control of movements. While greater autonomy can make robotic systems more useful, users still need to interact with them, and operator control units (OCUs) for deployed robots still primarily rely on manual controllers such as joysticks. This report describes an evaluation of a multi-modal interface that leverages speech and gesture through a wrist-worn device to enable an operator to direct a robotic vehicle using ground-guide-inspired or infantry-inspired commands through voice or gesture. A smart watch is the primary interaction device, allowing for spoken input (through the microphone) and gesture input using single-arm gestures.

Rodger Pettitt, Glenn Taylor, Linda R. Elliott
Gestural Transmission of Tasking Information to an Airborne UAV

A system is presented that enables an authorized person on the ground to transmit mission information to an airborne UAV within line of sight by using gestural expressions of both arms, without the need for additional devices on the ground. A miniaturized processing board with a discrete GPU is used to detect the body movements via a high-resolution onboard camera and to translate them into relevant tasking information. Individual task elements are transmitted consecutively, including numerical and non-numerical information. A context-aware gesture recognition approach is implemented to enable the reuse of gestures in different contexts in order to maintain a small gesture set. The system further features bidirectional communication, which allows it to dispatch visual feedback and to query missing information visually via an LED matrix. Two experiments with different briefing contents in static and dynamic setups were conducted to prove the feasibility under real-life conditions.

Alexander Schelle, Peter Stütz
A Video Communication System with a Virtual Pupil CG Superimposed on the Partner’s Pupil

Pupil response plays an important role in the expression of a talker’s affect. Focusing on the pupil response in human voice communication, we analyzed the pupil response in embodied interaction and demonstrated that the speaker’s pupil clearly dilates during burst-pauses of utterance. In addition, we confirmed that the pupil response is effective for enhancing affective conveyance, using a previously developed system in which an interactive CG character generates a pupil response based on the burst-pause of utterance. In this study, we develop a video communication system with a virtual pupil CG superimposed on the partner’s pupil for enhancing affective conveyance. This system generates a virtual pupil response in synchronization with the talker’s utterance. The effectiveness of the system is demonstrated by means of sensory evaluations of 12 pairs of subjects in video communication.

Yoshihiro Sejima, Ryosuke Maeda, Daichi Hasegawa, Yoichiro Sato, Tomio Watanabe
bRIGHT – Workstations of the Future and Leveraging Contextual Models

Experimenting with futuristic computer workstation designs and specifically tailored application models can yield useful insights and result in exciting ways to increase efficiency, effectiveness, and satisfaction for computer users. Designing and building a computer workstation that can track a user’s gaze, sense proximity to the touch surface, and support multi-touch, face recognition, etc., meant overcoming some unique technological challenges. Coupled with extensions to commonly used applications that report user interactions in a meaningful way, the workstation will allow the development of a rich contextual user model that is accurate enough to enable benefits such as contextual filtering, task automation, contextual auto-fill, and improved understanding of team collaborations. SRI’s bRIGHT workstation was designed and built to explore these research avenues, investigate how such a context model can be built, identify the key implications for designing an application model that best serves these goals, and discover other related factors. This paper conjectures on future research that would support the development of a collaborative context model that could provide similar benefits for groups of users.

Rukman Senanayake, Grit Denker, Patrick Lincoln
Development of Frame for SPIDAR Tablet on Windows and Evaluation of System-Presented Geographical Information

When viewing a map, we understand the terrain by the symbols marking the buildings, roads, and landmarks. However, these pieces of information are in planar form, while the actual road has a slope and an irregular shape. In the event of a disaster, the ways in which people can safely evacuate must be carefully considered, so the terrain characteristics must be well understood. In this study, we use not only visual information but also information from other senses. To present information to the other senses, we used a force-sense presentation device designed for tablet PCs, known as the SPIDAR tablet. We developed an application that can display maps on the tablet screen and present sensory information about the slope when the user traces a road on the map with a finger. We then evaluated how much road information could be understood and which sensory presentation was most effective. The participants in the evaluation were adults and children, who completed a questionnaire regarding their degree of comprehension; the child participants were third graders at Aijitsu elementary school. The questionnaire results reveal noticeable differences between adults and children in the comprehension of the sensory information presented. Based on this result, we plan to present more helpful information in future work. Moreover, we identified the need to thoroughly consider the modality of the sensory information presented.

Yuki Tasaka, Kazukiyo Yamada, Yasuna Kubo, Masanobu Saeki, Sakae Yamamoto, Takehiko Yamaguchi, Makoto Sato, Tetsuya Harada

Information in Virtual and Augmented Reality

Frontmatter
The Lessons of Google Glass: Aligning Key Benefits and Sociability

This article presents a case study of the user experience of Google Glass when it was initially introduced in 2013. By applying the combined methods of on-line data research, semantic network analysis and field research, it is argued that awkwardness of form factor and use, and failures of Google Glass’s user interface explain the low acceptability of the device. From a methodological perspective that combines big data analysis and qualitative research, this article discusses the user needs and preferences that should inform development of new tech.

Leo Kim
Study of Virtual Reality Performance Based on Sense of Agency

In recent years, virtual reality (VR) technology has been applied to various needs and problems. However, there are as yet few guidelines for designing VR based on users’ characteristics. Therefore, we aimed to create a design guideline for VR from an ergonomics viewpoint by observing the characteristics of users’ performance during a task in a virtual environment. The task was designed to be performed both in reality and in a virtual environment, and we used it to observe undesirable user performance that could be affected by aspects of the virtual environment. First, 15 participants completed the task in reality, and their performance was measured based on the movement of their hand and surface electromyograms of their fifth finger and lower arm; from this we obtained the characteristics of the task. Second, seven participants performed the task in a virtual environment and their performance was observed. When analyzing the results, we found undesirable performance by the participants, and we interpret these unusual phenomena, related to aspects of our VR, in terms of the concept of a sense of agency (SoA). Consequently, we estimated that knowledge or predefined significance is of limited use in a virtual environment and is not sufficient for performing tasks in VR while keeping SoA to some extent. In this context, we confirmed that introducing the concept of SoA is useful for explaining performance in VR. However, our conceptual considerations should be confirmed in further research.

Daiji Kobayashi, Yusuke Shinya
Airflow for Body Motion Virtual Reality

The present study investigates the characteristics of the cutaneous sensation evoked by airflow to the face of seated and standing users during real and virtual walking motion. The effect of airflow on enhancing a virtual reality walk was demonstrated. The stimulus conditions provided in the evaluation involved airflow, visual, and vestibular presentations, and treadmill and walk-in-place real motions. The results suggest that the cutaneous sensation of airflow was suppressed while the movement was performed actively with visual information provided. The equivalent airflow speed for the participants was 5-29% lower than the airflow speed in the real walk.

Masato Kurosawa, Yasushi Ikei, Yujin Suzuki, Tomohiro Amemiya, Koichi Hirota, Michiteru Kitazaki
Designing Augmented Sports: Merging Physical Sports and Virtual World Game Concept

It is important to encourage people to play sports and engage in physical activity to maintain their health and well-being. However, not many people keep playing sports regularly, and people today tend to become physically inactive. Thus, a novel way to motivate people to become physically active by playing sports is desired. Augmented sports are novel sports that integrate concepts from computer games into existing physical sports. Physical sports are played in our physical, real world, so physical law limits their methods; the corresponding methods in computer games are limitless. Augmented sports integrate various such methods to fill or reduce the unwanted gaps between players, making sports enjoyable regardless of physical skill or condition. This should help every player feel more fun and enjoyment, which in turn should motivate people to play sports more. In this paper, the detailed concept of augmented sports is described. We then developed augmented dodgeball, a proof of concept of augmented sports; the details of the system are also described.

Takuya Nojima, Kadri Rebane, Ryota Shijo, Tim Schewe, Shota Azuma, Yo Inoue, Takahiro Kai, Naoki Endo, Yohei Yanase
Comparison of Electromyogram During Ball Catching Task in Haptic VR and Real Environment

The objective of this study was to construct a haptic virtual reality (VR) environment and to conduct an experiment comparing muscular activity during ball-catching tasks in real and VR environments, where the level of presence was evaluated. A ball-catching task was performed in two environments; a head-mounted display and SPIDAR-HS, a haptic presentation device using the tensile force of wires, were applied to construct the VR environment. As an index of dynamic muscular activity, forearm EMG signals were measured over the time course of the ball-catching task. The average peak RMS value of the forearm EMG in the VR environment was 45.2% smaller than that in the real environment. This difference is apparently because the force generated by SPIDAR-HS was lower than that produced by the gravitational force of the ball. On the other hand, the trends in dynamic muscular activity were similar in both environments, indicating that the tasks were performed consistently regardless of the type of environment. It was concluded that the presence of VR is observable through dynamic muscular changes during VR tasks, with further adjustment of the force levels required for the task in the VR environment.

Issei Ohashi, Kentaro Kotani, Satoshi Suzuki, Takafumi Asao, Tetsuya Harada
A Virtual Kitchen for Cognitive Rehabilitation of Alzheimer Patients

This article presents an innovative interactive tool that has been designed and developed in the context of the preventive treatment of Alzheimer’s disease. This tool allows simulating different cooking tasks that the patient has to perform with the computer mouse. The virtual environment is visualized on a simple computer screen. Gradual assistance is provided to the patient so that he/she trains and learns to perform the tasks requested. In order for the training to be relevant and effective, no errors are allowed by the system.

Paul Richard, Déborah Foloppe, Philippe Allain
Emotion Hacking VR: Amplifying Scary VR Experience by Accelerating Actual Heart Rate

An emotion hacking virtual reality (EH-VR) system is an interactive system that hacks one’s heartbeat and controls it to accelerate a scary VR experience. The EH-VR system provides vibrotactile biofeedback, which resembles a heartbeat, from the footrest. The system determines a false heartbeat frequency by detecting the user’s heart rate in real time; the calculated false heart rate is higher than the user’s actual heart rate, and the calculation is based on a quadratic equation model that we created. We demonstrated the system at Emerging Technologies at SIGGRAPH Asia 2016, where approximately 100 people experienced it, and we observed that for all participants the heart rate became more elevated than at the beginning. Additional experiments supported the interpretation that this effect was possibly caused by the presentation of a false heartbeat by the EH-VR.

Ryoko Ueoka, Ali AlMutawa
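The core idea above (mapping the measured heart rate to a higher false feedback rate) can be sketched as follows. The quadratic coefficients, the ceiling, and the function names are purely illustrative assumptions; the authors' actual model is not given in the abstract.

```python
# Hypothetical sketch: derive a false (accelerated) heartbeat frequency from
# the measured heart rate with a quadratic mapping, then convert it to the
# interval between vibrotactile "beats" from the footrest.
def false_heart_rate(actual_bpm, a=0.002, b=1.1, c=5.0, ceiling=180.0):
    """Return a feedback rate that is always higher than the actual rate (capped)."""
    fake = a * actual_bpm ** 2 + b * actual_bpm + c   # assumed coefficients
    return min(max(fake, actual_bpm + 1.0), ceiling)

def beat_interval_seconds(bpm):
    """Interval between vibrotactile beats presented to the user."""
    return 60.0 / bpm

if __name__ == "__main__":
    for hr in (60, 75, 90, 110):
        fake = false_heart_rate(hr)
        print(f"actual {hr:3d} bpm -> feedback {fake:5.1f} bpm "
              f"(interval {beat_interval_seconds(fake):.2f} s)")
```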
The Nature of Difference in User Behavior Between Real and Virtual Environment: A Preliminary Study

In this study, we examined the effect of different types of behavioral strategy on performance and behavior in three different information representation methods, a real task environment, a VR-based task environment, and an MR-based task environment, in order to identify features that can be applied to performance-based and behavior-based measurement for characterizing the SoE and its sub-components. As a result, we found significant differences in task performance, such as task completion time and the parameters of the time-to-collision distribution, as well as in user behavior, such as decomposed motion data.

Takehiko Yamaguchi, Hiroki Iwadare, Kazuya Kamijo, Daiji Kobayashi, Tetsuya Harada, Makoto Sato, Sakae Yamamoto
A Fingertip Glove with Motor Rotational Acceleration Enables Stiffness Perception When Grasping a Virtual Object

We developed a 3D virtual reality system comprising two fingertip gloves and a finger-motion capture device to deliver a force feedback sensation when grasping a virtual object. Each glove provides a pseudo-force sensation to a fingertip via asymmetric vibration of a DC motor. In this paper, we describe our algorithms for providing this illusionary force feedback, as well as visual feedback, which involved deforming the shape of a virtual object. We also conducted an experiment to investigate whether presenting pseudo-force sensation to the tip of the thumb and index finger during grasping enabled participants to interpret the material stiffness of a virtual object. We changed the initial vibration amplitude, which represents the reaction force when the thumb and the index finger initially contact the surface of an object, and asked participants to match each haptic feedback condition with a visual feedback condition. We found that most participants chose the rubber or wood material (task 1) and highly deformable material (task 2) when the initial vibration was weak, and chose the wood or aluminum (task 1) and non-deformable material (task 2) when the initial vibration was strong.

Vibol Yem, Hiroyuki Kajimoto

Information and Vision

Frontmatter
A Study for Correlation Identification in Human-Computer Interface Based on HSB Color Model

In recent years, visual perception has received more attention from researchers in the field of data visualization, and its study has become another hot spot in visualization research. This paper focuses on scatterplots, an important tool for visualization. Series of scatterplots were generated programmatically and then used in the experiment. The results of the experiments indicate that, against a white background, the influence of the color, number, and correlation of the interference points on reaction time is significant, and a suitable combination is found, which is important for designing scatterplots.

Yikang Dai, Chengqi Xue, Qi Guo
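The programmatic generation of stimuli mentioned above might look roughly like the following sketch: target points with a chosen correlation plus interference points in a chosen HSB color on a white background. All parameters and names are assumptions for illustration, not the study's stimulus code.

```python
# Sketch: generate a scatterplot stimulus with correlated target points and
# uniformly scattered interference points in a given HSB colour.
import numpy as np
import matplotlib.pyplot as plt
from colorsys import hsv_to_rgb

def make_stimulus(n_target=100, r=0.7, n_interference=50, hsb=(0.6, 0.8, 0.9)):
    rng = np.random.default_rng(0)
    # Target points with the requested correlation r
    x = rng.standard_normal(n_target)
    y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n_target)
    # Interference points scattered uniformly, drawn in the given HSB colour
    xi = rng.uniform(x.min(), x.max(), n_interference)
    yi = rng.uniform(y.min(), y.max(), n_interference)
    fig, ax = plt.subplots(facecolor="white")
    ax.scatter(x, y, color="black", s=12, label="target")
    ax.scatter(xi, yi, color=hsv_to_rgb(*hsb), s=12, label="interference")
    ax.legend()
    return fig

make_stimulus().savefig("stimulus.png", dpi=150)
```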
Investigating Effects of Users’ Background in Analyzing Long-Term Images from a Stationary Camera

Images recorded over a long term using a stationary camera have the potential to reveal various facts about the recorded target. We have been developing an analysis system with a heatmap-based interface designed for visual analytics of long-term images from a stationary camera. In our previous study, we experimented with participants who were recorded in the images (recorded participants). In this study, we conducted a further experiment with participants who were not recorded in the images (unrecorded participants) to reveal the discoveries that such participants make. By comparing the results of participants with different backgrounds, we investigated the differences in discoveries, functions used, and analysis processes. The comparison suggests that unrecorded participants discovered many facts about the environment, whereas recorded participants discovered many facts about people; moreover, unrecorded participants made a number of discoveries comparable to that of recorded participants.

Koshi Ikegawa, Akira Ishii, Kazunori Okamura, Buntarou Shizuki, Shin Takahashi
Decreasing Occlusion and Increasing Explanation in Interactive Visual Knowledge Discovery

Explanation and occlusion are major problems for interactive visual knowledge discovery, machine learning, and data mining in multidimensional data. This paper proposes a hybrid method that combines visual and analytical means to deal with these problems. This method, denoted FSP, uses visualization of n-D data in 2-D in a set of Shifted Paired Coordinates (SPC). SPCs for n-D data consist of n/2 pairs of Cartesian coordinates, which are shifted relative to each other to avoid their overlap. Each n-D point is represented as a directed graph in SPC. It is shown that the FSP method simplifies pattern discovery in n-D data, providing explainable rules in a visual form with a significant decrease in the cognitive load of analyzing n-D data. Computational experiments on real data have shown its efficiency on both training and validation data.

Boris Kovalerchuk, Abdulrahman Gharawi
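The Shifted Paired Coordinates representation described above can be sketched as follows: an n-D point is split into n/2 (x, y) pairs, each pair is drawn in its own shifted Cartesian frame, and the pairs are connected into a directed polyline. The shift amount and plotting details are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: draw an even-dimensional point in Shifted Paired Coordinates.
import matplotlib.pyplot as plt

def plot_spc(point, shift=1.5, ax=None):
    """Plot one even-dimensional point as a directed polyline across shifted frames."""
    assert len(point) % 2 == 0, "SPC needs an even number of dimensions"
    ax = ax or plt.gca()
    xs, ys = [], []
    for i in range(0, len(point), 2):
        offset = (i // 2) * shift              # shift each coordinate pair
        xs.append(point[i] + offset)
        ys.append(point[i + 1])
        ax.plot([offset], [0], marker="+", color="gray")  # frame origin marker
    ax.plot(xs, ys, marker="o")                # the polyline for this n-D point
    return ax

# Example: two 6-D points
plot_spc([0.2, 0.5, 0.4, 0.8, 0.9, 0.3])
plot_spc([0.6, 0.1, 0.7, 0.4, 0.2, 0.9])
plt.show()
```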
Visual Guidance to Find the Right Spot in Parameter Space

The last few decades brought a technological revolution in which users generate data with an ever-increasing variety of digital devices, resulting in such an incredible volume of data that we can no longer make sense of it. One way to decrease the required execution time of data analysis algorithms is to preprocess the data by sampling it before starting the exploration process. That indeed helps, but one issue remains when using the available Machine Learning and Data Mining algorithms: they all have parameters. That is a big problem for most users, because many of these parameters require expert knowledge to tune, and even for expert users many parameter configurations depend highly on the data. In this work we present a system that tackles the data exploration process from the angle of parameter space exploration. We use an active learning approach and iteratively query the user for their opinion of an algorithm execution; the end-user only has to express a preference among algorithm results presented to them in the form of visualisations. In that way the system iteratively learns the interests of the end-user, which results in good parameters at the end of the process. A good parametrisation is obviously very subjective here and only reflects the interest of a user. This solution has the nice ancillary property of removing the requirement for expert knowledge when exploring a data set with Data Mining or Machine Learning algorithms; ideally, the end-user does not even need to know what kind of parameters the algorithms require.

Alexander Brakowski, Sebastian Maier, Arjan Kuijper
Analyzing Reading Pattern of Simple C Source Code Consisting of Only Assignment and Arithmetic Operations Based on Data Dependency Relationship by Using Eye Movement

Some programming learners in the lowest achievement group do not have even the minimum skill needed to read a simple program correctly. Reading programs is an essential part of learning to program. To efficiently support learners in the lowest group, we should first conduct a fundamental analysis of program reading to unveil its features. The authors therefore focused on eye tracking as a method to carry out this idea, thinking that eye movement can help clarify what makes learning to program difficult. The purpose of this study is thus to investigate learners’ program comprehension processes based on the pattern of eye movement, not just the gaze distribution, while reading source code. In this paper, we first measure eye movement data while participants read several source codes and propose a modeling method to represent the features of eye movement. We then design an experimental protocol for analyzing eye movement based on program structure. The experiment in this paper focuses on source codes covering four types of data dependency relationship that can be generated with only three lines of assignment statements. As the analysis result, we confirmed that the data dependency of each pattern appeared as distinctive eye behavior during program reading.

Shimpei Matsumoto, Ryo Hanafusa, Yusuke Hayashi, Tsukasa Hirashima
Development of a Pair Ski Jump System Focusing on Improvement of Experience of Video Content

“Ski Jumping Pairs” is video content about an imaginary sport in which two players jump using a single pair of skis, its highlight being the incredible aerial styles of the players. We previously developed a VR ski jump system using an HMD. In this study, we developed a system in which users can experience Ski Jumping Pairs. First, we propose a design method that introduces the idea of composing make-believe play to enhance the VR experience of the world of video content. We then developed a prototype of the system, and an experimental evaluation was used to demonstrate the effectiveness of the method.

Ken Minamide, Satoshi Fukumori, Saizo Aoyagi, Michiya Yamamoto
Risk Reduction in Texting While Walking with an Umbrella-Typed Device for Smartphone

It is widely known that texting while walking is dangerous behavior. To reduce the risks, we proposed an umbrella-type device, called ii-kasa, for manipulating smartphones. In this paper, we investigated whether ii-kasa reduces the risks of texting while walking. In the experiment, we recorded gaze patterns using an eye-mark recorder. As the results show, the average duration and variance of eye fixations with ii-kasa were smaller than those with the smartphone alone. The results also showed that participants in the ii-kasa condition watched broader areas of their surroundings and paid more attention to them than in the smartphone condition. These results indicate that ii-kasa reduces both the risks of texting while walking and the user's cognitive load.

Sohichiro Mori, Makoto Oka
Evaluation of Discomfort Degree Estimation System with Pupil Variation in Partial 3D Images

The purpose of this paper was to examine whether changes in pupil diameter can reflect the degree of discomfort caused by various levels of partial 3D images, as well as other validated characteristics. Moreover, we discuss the effectiveness of such a system while guiding visual attention with partial 3D images. Images chosen from the IAPS (International Affective Picture System) were used to make the 3D images. The power spectrum ratio of the pupil variation generated by stimulus images to that generated by control images, called the S/C value, was calculated. The relationship between VAS scores for the impression of the projected images and the S/C values was the main concern of this study. As a result, the average S/C values for 2D neutral images ranged from 0.634 to 1.318, whereas the average S/C values for partial 3D neutral images ranged from 0.412 to 1.552. VAS scores for 2D neutral images ranged from 3.6 to 8.5, and those for partial 3D neutral images ranged from 1.2 to 7.4. Moreover, the correlation coefficient between VAS scores and S/C values was 0.116 for 2D neutral images and −0.114 for partial 3D neutral images. For partial 3D images, this negative correlation was consistent with the previous study, although the correlation coefficients for both image types were relatively low. It was suggested that the S/C value could serve as a candidate measure of discomfort, with a modification of the technique for collecting VAS scores during the experiment.
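As a rough illustration of how a power-spectrum ratio of this kind could be computed, here is a small sketch. The sampling rate, frequency band, Welch parameters, and the synthetic pupil traces are assumptions for the sake of a runnable example, not the authors' exact procedure.

```python
# Hypothetical sketch of an S/C-style measure: ratio of pupil-variation power under a
# stimulus image to that under a control image (band limits and sampling rate assumed).
import numpy as np
from scipy.signal import welch

fs = 60.0                                  # assumed pupil-tracker sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# placeholder traces; in practice these are measured pupil-diameter time series
pupil_stimulus = 3.0 + 0.05 * np.sin(2 * np.pi * 0.3 * t) + 0.01 * np.random.randn(t.size)
pupil_control  = 3.0 + 0.02 * np.sin(2 * np.pi * 0.3 * t) + 0.01 * np.random.randn(t.size)

def band_power(x, fs, lo=0.1, hi=1.0):
    # Welch power spectral density, summed over an assumed low-frequency band
    f, pxx = welch(x - x.mean(), fs=fs, nperseg=256)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].sum()

sc_value = band_power(pupil_stimulus, fs) / band_power(pupil_control, fs)
print(f"S/C value: {sc_value:.3f}")
```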

Shoya Murakami, Kentaro Kotani, Satoshi Suzuki, Takafumi Asao
Can I Talk to a Squid? The Origin of Visual Communication Through the Behavioral Ecology of Cephalopod

The quest of modernity has come to its final phase in the form of postmodernism. Many past attempts to define “individualism” and “self” encountered the wall of linguistic structure and categorization, the governing principles of human consciousness. Postmodernism tends to recycle the façade of preexisting methods and theories, thereby creating fragmentation and dislocation. Simultaneously, computer technology is rapidly reshaping our visual culture by offering more streamlined production and distribution possibilities. Considering this environment, it is essential to investigate its effects and implications on visual culture by asking existential questions such as: Why do we make images? Where do they come from and what is their primary function? In order to pursue these rather difficult questions, my work focuses on the adaptive coloration of cephalopods (squid, octopus, and cuttlefish) as comparative models that can code and re-map visual information such as paintings, photographs, and videos. The genetically and evolutionarily pure empirical data of the squid and cuttlefish not only uncover certain key information needed to understand the origin of visual communication, but also function as a catalyst that can redirect our culture away from over-stimulated hyperreality. This, in turn, can create a valuable interdisciplinary platform to discuss current trends in both art and science.

Ryuta Nakajima

Text and Data Mining and Analytics

Frontmatter
Discovering Significant Co-Occurrences to Characterize Network Behaviors

A key aspect of computer network defense and operations is the characterization of network behaviors. Several of these behaviors are a result of indirect interactions between various networked entities and are temporal in nature. Modeling them requires non-trivial and scalable approaches. We introduce a novel approach for characterizing network behaviors using significant co-occurrence discovery. A significant co-occurrence is a robust concurrence or coincidence of events or activities observed over a period of time. We formulate a network problem in the context of co-occurrence detection and propose an approach to detect co-occurrences in network flow information. The problem is a generalization of problems that are encountered in the areas of dependency discovery and related activity identification. Moreover, we define a set of metrics to determine robust characteristics of these co-occurrences. We demonstrate the approach, exercising it first on a simulated network trace, and second on a publicly-available anonymized network trace from CAIDA. We show that co-occurrences can identify interesting relationships and that the proposed algorithm can be an effective tool in network flow analysis.
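As a concrete, if much simplified, illustration of detecting co-occurrences in flow data, here is a count-based sketch over time-binned flow records. The window size, thresholds, and lift-style score are my own assumptions and are not the paper's algorithm or metrics.

```python
# Illustrative sketch (not the paper's algorithm): count how often pairs of
# (src, dst) flow endpoints appear in the same time window, as a crude proxy
# for significant co-occurrence between network activities.
from collections import Counter
from itertools import combinations

# flow records: (timestamp_seconds, "src->dst")
flows = [
    (1, "10.0.0.1->10.0.0.9"), (2, "10.0.0.2->10.0.0.9"),
    (61, "10.0.0.1->10.0.0.9"), (62, "10.0.0.2->10.0.0.9"),
    (121, "10.0.0.1->10.0.0.9"), (125, "10.0.0.3->10.0.0.7"),
]

WINDOW = 60                                 # assumed time-bin width in seconds
bins = {}
for ts, flow in flows:
    bins.setdefault(ts // WINDOW, set()).add(flow)

single_counts, pair_counts = Counter(), Counter()
for window_flows in bins.values():
    single_counts.update(window_flows)
    pair_counts.update(frozenset(p) for p in combinations(window_flows, 2))

n_windows = len(bins)
for pair, c in pair_counts.items():
    a, b = sorted(pair)
    # lift-style score: observed joint rate vs. rate expected under independence
    lift = (c / n_windows) / ((single_counts[a] / n_windows) * (single_counts[b] / n_windows))
    print(f"{a}  &  {b}: co-occurred in {c}/{n_windows} windows, lift={lift:.2f}")
```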

Kristine Arthur-Durett, Thomas E. Carroll, Satish Chikkagoudar
Exploring the Cognitive, Affective, and Behavioral Responses of Korean Consumers Toward Mobile Payment Services: A Text Mining Approach

The purpose of this study was to examine the cognitive, affective, and behavioral responses of Korean consumers toward mobile payment services, based on the tri-component model, by using a text-mining technique. Samsung Pay was chosen because it is used in both online and offline transactions. We targeted social media data posted between 1 July 2016 and 31 December 2016, about one year after Samsung Pay was launched. We conducted word frequency analysis, clustering analysis, and association rule analysis using R. The results were the following. First, the 50 most frequently used words referenced the brand names of the mobile devices, payment methods, and the procedures and unique functions of Samsung Pay compared to other types of payment methods. Second, we classified the terms into 24 categories (11 categories of cognitive responses, 10 categories of affective responses, and 3 categories of behavioral responses) based on the tri-component model. The results of the clustering analysis based on the 24 categories showed a clear split between positive and negative responses at the macro level. The positive responses were further clustered into four groups, while the negative responses were fused into two groups at the micro level. Third, the association rules produced 65 rules, and we found that economic benefits played a great role in the positive feelings about and continuous use of mobile payment services. This study offers valuable implications that may help mobile payment marketers deliver services that correspond to consumer values and expectations, thus increasing consumer utility and satisfaction.
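The study itself was carried out in R; purely to illustrate the shape of such a pipeline, the sketch below re-expresses word frequency counting and simple one-to-one association rules in Python. The tokenised posts, the term names, and the support/confidence thresholds are invented examples.

```python
# Hypothetical re-expression of the kind of analysis described above (the study used R).
from collections import Counter
from itertools import combinations, permutations

posts = [
    {"discount", "convenient", "keep_using"},
    {"discount", "keep_using"},
    {"fingerprint", "error", "frustrating"},
    {"discount", "convenient"},
]
n = len(posts)

term_freq = Counter(t for p in posts for t in p)                              # word frequency analysis
pair_freq = Counter(frozenset(c) for p in posts for c in combinations(p, 2))  # co-mentions per post

print("most frequent terms:", term_freq.most_common(3))

# association rules A -> B with support, confidence and lift
for a, b in permutations(term_freq, 2):
    joint = pair_freq[frozenset((a, b))]
    if joint == 0:
        continue
    support, confidence = joint / n, joint / term_freq[a]
    lift = confidence / (term_freq[b] / n)
    if support >= 0.5 and confidence >= 0.7:
        print(f"{a} -> {b}: support={support:.2f} confidence={confidence:.2f} lift={lift:.2f}")
```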

Minji Jung, Yu Lim Lee, Chae Min Yoo, Ji Won Kim, Jae-Eun Chung
An Exploration of Crowdwork, Machine Learning and Experts for Extracting Information from Data

The growing use of data to derive insights and information presents many challenges and opportunities. Furthermore, increased awareness of the potential of crowdworking and machine learning technologies has created a need to understand the benefits and caveats of these approaches. By reviewing current research and then comparing a novice-based crowdworking approach against expert and machine learning benchmarks, we seek to assess the trade-offs. The task specifically requires users to interpret satellite imagery and determine the location of residences or businesses. We demonstrate that a novice approach can provide value where the collected data meets an accuracy tolerance that closely matches that of the expert users. Furthermore, equivalent results are shown to be achievable given potential improvements to the system and greater user familiarity with the task.

Fabion Kauker, Kayan Hau, John Iannello
Correcting Wrongly Determined Opinions of Agents in Opinion Sharing Model

This paper aims at achieving stable, high accuracy of opinion sharing in a distributed network of agents that have initial opinions. Specifically, the network is composed of multiple agents; most agents form their opinions according to their neighbors' opinions, which may be incorrect, while only a few agents can receive outside information, which is expected to be correct but may be corrupted by noise. In order for the agents to form the correct opinions, we employ the Autonomous Adaptive Tuning algorithm (AAT), which can improve the rate of correct opinions shared among the agents by filtering out incorrect opinions during the opinion sharing process. However, AAT struggles to lead agents to the correct opinions when all agents already hold initial opinions. To tackle this problem, we propose Autonomous Adaptive Tuning Dynamic (AATD) for networks in which the initial opinions of all agents are unknown. Intensive experiments revealed the following implications: (1) the accuracy rate of agents with AATD is stably 70%–80% regardless of the initial opinion state in a small network, while the accuracy rate with AAT varies from 0% to 100% depending on the state of the initial opinions; and (2) AATD is robust to different complex network topologies in comparison with AAT.
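To make the opinion-sharing setting itself concrete (this is explicitly not AAT or AATD, just the baseline scenario the abstract describes), here is a toy simulation in which a few sensor agents receive a noisy external signal and all other agents adopt the majority opinion of their neighbours. The network model, parameters, and update rule are illustrative assumptions.

```python
# Toy illustration of the opinion-sharing setting (NOT the paper's AAT/AATD algorithms).
import random
import networkx as nx

random.seed(0)
TRUE_OPINION, NOISE, N, SENSORS, ROUNDS = 1, 0.2, 50, 5, 30

g = nx.erdos_renyi_graph(N, 0.1, seed=0)
opinion = {n: random.choice([0, 1]) for n in g}        # every agent starts with an initial opinion
sensors = set(random.sample(list(g.nodes), SENSORS))   # the few agents that see outside information

for _ in range(ROUNDS):
    new = dict(opinion)
    for n in g:
        if n in sensors:
            # sensor agents receive the outside signal, flipped with probability NOISE
            new[n] = TRUE_OPINION if random.random() > NOISE else 1 - TRUE_OPINION
        elif g.degree(n) > 0:
            votes = [opinion[m] for m in g.neighbors(n)]
            ones = sum(votes)
            if ones * 2 > len(votes):
                new[n] = 1
            elif ones * 2 < len(votes):
                new[n] = 0
            # on a tie, keep the current opinion
    opinion = new

accuracy = sum(o == TRUE_OPINION for o in opinion.values()) / N
print(f"share of agents holding the correct opinion: {accuracy:.0%}")
```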

Eiki Kitajima, Caili Zhang, Haruyuki Ishii, Fumito Uwano, Keiki Takadama
Prediction of Standing Ovation of TED Technology Talks

This research aims to predict whether a TED talk will be followed by a standing ovation from the audience at the end of the talk. The standing ovation seen in TED talks is objective evidence of the effect that a speech has on its audience. We gathered TED talk data to use in the prediction experiments. The methods of this research consist of quantitative analysis of speech content and a machine learning technique using a convolutional neural network. As a result, we achieved 77.11% accuracy and an F-measure of 0.63 when predicting on TED talks about the Technology topic. The method used in this study is useful for predicting occurrences of standing ovations, although improvement is still necessary. Compared to other studies, our contribution is, on the one hand, that we focused on speech content in relation to standing ovations; on the other hand, we incorporated quantitative analysis, especially of which features are effective for standing ovations, and eventually applied those features to the machine learning technique.
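For orientation, here is a minimal sketch of a one-dimensional convolutional text classifier of the general kind described. The vocabulary size, transcript encoding, layer sizes, and the placeholder data are assumptions, not the authors' configuration.

```python
# Minimal 1-D CNN binary classifier sketch (hyperparameters and data are assumed).
import numpy as np
import tensorflow as tf

VOCAB, MAXLEN = 5000, 400          # assumed vocabulary size and transcript length
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),
    tf.keras.layers.Conv1D(128, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # P(standing ovation)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# placeholder data: integer-encoded transcripts and ovation labels
x = np.random.randint(0, VOCAB, size=(32, MAXLEN))
y = np.random.randint(0, 2, size=(32, 1))
model.fit(x, y, epochs=1, verbose=0)
```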

Shohei Maeno, Tetsuya Maeshiro
Interacting with Data to Create Journalistic Stories: A Systematic Review

With the increasing amount of data available in digital media, new professional practices have emerged in journalism to gather, analyze, and compute quantitative data that may yield pieces of information relevant to news reporting. The constant evolution of the field motivated us to perform a systematic review of the literature on data-driven journalism to investigate the state of the art of the field with respect to the process expressed by the “inverted pyramid of data journalism”. We aim to understand which techniques and tools are currently being used to collect, clean, analyze, and visualize data. We also want to know which data sources are presently being used in data journalism projects. We searched databases that include publications from both the computing and communication fields, and the results are presented and discussed through data visualizations. We identified the years with the highest numbers of publications, the publications' authors, and the fields of study. We then classified these works according to the changes in quantitative practices in journalism and to their contributions in different categories. Finally, we address the challenges and potential research topics in the data journalism field. We believe the information gathered can be helpful to researchers, developers, and designers interested in data journalism.

Daniele R. de Souza, Lorenzo P. Leuck, Caroline Q. Santos, Milene S. Silveira, Isabel H. Manssour, Roberto Tietzmann
Data Mining for Prevention of Crimes

Preemptive measures are of utmost importance for crime prevention. Law enforcement agencies need an agile approach to address ever-changing crime. Data analytics has proven to be effective in the field of crime data analysis, and various countries, such as the United States of America, have benefited from this approach. The Government of India has also taken an initiative to implement data analytics to facilitate crime prevention measures. In this research paper, we used RStudio, an open-source tool for data analysis in R, to perform data analysis on the crime dataset shared by the Gujarat Police Department. To develop predictive models and study crime patterns, we used various supervised and unsupervised data mining techniques, namely Multiple Linear Regression, K-Means Clustering, and Association Rule Analysis. The scope of this paper is to showcase the effectiveness of data mining in the domain of crime prevention. In addition, an effort has been made to help the Gujarat Police Department analyze their crime records and provide meaningful insights for decision making to solve the recorded cases.
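As a compact illustration of two of the techniques named above, here is a Python sketch (the paper itself used R). The district table, column names, and parameter choices are invented purely to show the shape of the analysis, not the paper's data or results.

```python
# Hypothetical sketch: K-Means clustering and multiple linear regression on invented crime data.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

crimes = pd.DataFrame({
    "district_lat":  [23.03, 23.05, 22.31, 22.30, 21.17],
    "district_lon":  [72.58, 72.60, 73.18, 73.20, 72.83],
    "population_k":  [550, 610, 420, 400, 380],
    "unemployment":  [6.1, 5.8, 7.2, 7.5, 8.0],
    "thefts":        [120, 140, 80, 75, 60],
})

# K-Means: group districts into spatial hot-spot clusters
crimes["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    crimes[["district_lat", "district_lon"]])

# Multiple linear regression: relate theft counts to two district-level factors
reg = LinearRegression().fit(crimes[["population_k", "unemployment"]], crimes["thefts"])
print(crimes)
print("regression coefficients (population_k, unemployment):", reg.coef_.round(2))
```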

Neetu Singh, Chengappa Bellathanda Kaverappa, Jehan D. Joshi
An Entity Based LDA for Generating Sentiment Enhanced Business and Customer Profiles from Online Reviews

The accelerated growth of Web 2.0 has led to an abundance of accessible information that has been successfully harnessed by many researchers for personalizing products and services. Many personalization algorithms focus on analyzing only explicitly provided information, which limits the scope for a deeper understanding of individuals' preferences. Analyzing the reviews posted by users, however, provides a better understanding of users' personal preferences and also aids in uncovering a business' strengths and weaknesses as perceived by its users. Topic modeling, a popular machine learning technique, addresses this issue by extracting the underlying abstract topics in textual data. In this study, we present entity-LDA (eLDA), a variation of Latent Dirichlet Allocation for topic modeling, along with a dependency-tree-based aspect-level sentiment analysis methodology for constructing user and business profiles. We conduct several experiments to evaluate the quantitative and qualitative performance of our proposed model compared to state-of-the-art methods. Experimental results demonstrate the efficacy of the proposed method both in terms of topic quality and interpretability. Finally, we develop a framework for constructing user and business profiles from the topic probabilities, and we further enhance the business profiles by extracting syntactic aspect-level sentiments to indicate the sentiment polarity for each aspect.
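For readers unfamiliar with the underlying model, the sketch below runs plain LDA on a handful of toy reviews and reads off the per-document topic probabilities that such profiles would be built from. This is the standard gensim LDA, not the paper's entity-LDA variant or its dependency-tree sentiment step, and the reviews are invented.

```python
# Sketch of plain LDA topic extraction from reviews using gensim (standard model only).
from gensim import corpora, models

reviews = [
    "great pizza friendly staff quick service".split(),
    "pizza crust soggy slow service".split(),
    "friendly staff cozy atmosphere great coffee".split(),
    "coffee cold slow service rude staff".split(),
]

dictionary = corpora.Dictionary(reviews)
corpus = [dictionary.doc2bow(doc) for doc in reviews]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      random_state=0, passes=10)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)

# per-review topic probabilities are what user/business profiles would be built from
print(lda.get_document_topics(corpus[0]))
```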

Aniruddha Tamhane, Divyaa L. R., Nargis Pervin
Backmatter
Metadata
Title
Human Interface and the Management of Information. Interaction, Visualization, and Analytics
Editors
Sakae Yamamoto
Hirohiko Mori
Copyright Year
2018
Electronic ISBN
978-3-319-92043-6
Print ISBN
978-3-319-92042-9
DOI
https://doi.org/10.1007/978-3-319-92043-6