About This Book

This book constitutes the refereed proceedings of the 12th International Conference on Intelligent Technologies for Interactive Entertainment, INTETAIN 2020. Due to the COVID-19 pandemic, the conference was held virtually.

The 19 full papers were selected from 49 submissions and present novel and innovative work in areas including art, science, design, and engineering, regarding computer-based systems or devices that provide intelligent human interaction or entertainment experiences.
The papers are grouped into thematic sessions on Big Ideas and Ethics; Haptics, Audio, and Internet of Things (IoT); Industry and Government; Machine Learning (ML); and Extended Reality (XR) and Human Computer Interaction (HCI).



Big Ideas and Ethics


Designing Serious Games for the Mitigation of the Impact of Massive Shootings in a Mexican Environment

Serious games have proven to be effective methods of communication and learning, qualities that have been leveraged in disaster risk management (DRM) by developing serious games that mitigate the impact of natural disasters through preparedness. Design Against Crime and its derived methodologies and tools have proven effective in reducing the fear of crime in Mexican communities. By combining the approach previously applied in DRM by game creators and researchers with Design Against Crime, the current research project proposes using the resulting methodology to design serious games as interventions for crime-related incidents that share characteristics with natural disasters, such as massive shootings in public spaces.
Juan Chacon Quintero, Hisa Martinez Nimi

An Ethical Code for Commercial VR/AR Applications

The commercial VR/AR marketplace is gaining ground and is becoming an ever larger and more significant component of the global economy. While much attention has been paid to the commercial promise of VR/AR, comparatively little attention has been given to the ethical issues that VR/AR technologies introduce. We here examine existing codes of ethics proposed by the ACM and IEEE and apply them to the unique ethical facets that VR/AR introduces. We propose a VR/AR code of ethics for developers and apply this code to several commercial applications.
Erick Jose Ramirez, Jocelyn Tan, Miles Elliott, Mohit Gandhi, Lia Petronio

On Trusting a Cyber Librarian: How Rethinking Underlying Data Storage Infrastructure Can Mitigate Risks of Automation

The increased ability of Artificial Intelligence (AI) technologies to generate and parse texts will inevitably lead to more proposals for AI’s use in the semantic sentiment analysis (SSA) of textual sources. We argue that instead of focusing solely on debating the merits of automated versus manual processing and analysis of texts, it is critical to also rethink our underlying storage and representation formats. Further, we argue that accommodating multivariate metadata exemplifies how underlying data storage infrastructure can reshape the ethical debate surrounding the use of such algorithms. In other words, a system that employs automated analysis typically requires manual intervention to assess the quality of its output, and thus demands that we select between multiple competing natural language processing (NLP) algorithms. Settling on an algorithm or ensemble is not a decision that has to be made a priori, but when made, it involves implicit ethical considerations. An underlying storage and representation system that allows for the existence and evaluation of multiple variants of the same source data, while maintaining attribution to the individual sources of each variant, would be a much-needed enhancement to existing storage technologies and would facilitate the interpretation of proliferating AI semantic analysis technologies. To this end, we take the view that AI functions as (or acts as an implicate meta-ordering of) the SSA sociotechnical system in a manner that allows for novel solutions for safer cyber curation. This can be done by holding the attribution of source data in a symmetrical relationship to its further multiple differing annotations, which coexist as data points within a single publishing ecosystem. In this way, the AI program allows for the annotation of individual and aggregate data by means of competing algorithmic models, or varying degrees of human intervention.
We discuss the feasibility of such a scheme, using our own infrastructure model (MultiVerse) as an illustration of such a system, and analyse its ethical implications.
Maria Joseph Israel, Mark Graves, Ahmed Amer

ToDI: A Taxonomy of Derived Indices

Advancements in digital technology have eased the process of gathering, generating, and altering digital data at large scale. The sheer scale of the data necessitates the development and use of smaller secondary data structured as ‘indices,’ which are typically used to locate desired subsets of the original data, thereby speeding up data referencing and retrieval operations. Many variants of such indices exist in today’s database systems, and the subject of their design is well investigated by computer scientists. However, indices are examples of data derived from existing data; and the implications of such derived indices, as well as indices derived from other indices, pose problems that require careful ethical analysis. But before being able to thoroughly discuss the full nature of such problems, let alone analyze their ethical implications, an appropriate and complete vocabulary in the form of a robust taxonomy for defining and describing the myriad variations of derived indices and their nuances is needed. This paper therefore introduces a novel taxonomy of derived indices that can be used to identify, characterise, and differentiate derived indices.
Maria Joseph Israel, Navid Shaghaghi, Ahmed Amer

Haptics, IoT, and Audio


Plug-and-Play Haptic Interaction for Tactile Internet Based on WebRTC

The Tactile Internet promises widespread adoption of haptic communication over the Internet. However, as haptic technologies become more diversified and available than ever, the need has arisen for plug-and-play (PnP) haptic communication over a computer network. This paper presents a system for enabling PnP communication between heterogeneous haptic interfaces. The system is based on three key features: (i) haptic metadata that makes haptic interfaces self-descriptive, (ii) a handshake protocol to automatically exchange haptic metadata between two communicating devices, and (iii) a multimodal (haptic-audio-visual) media communication protocol. Implemented using WebRTC, the PnP communication is evaluated using a Tele-Writing application with two heterogeneous haptic interfaces, namely the Geomagic Touch and the Novint Falcon. Our findings demonstrate the potential of the system to be employed in any Tactile Internet scenario.
Ken Iiyoshi, Ruth Gebremedhin, Vineet Gokhale, Mohamad Eid
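The metadata exchange described in the abstract above can be illustrated with a minimal sketch. The field names, negotiation rule, and device figures below are illustrative assumptions, not the paper's actual schema or protocol:

```python
import json

# Hypothetical haptic metadata describing a device's capabilities.
# Field names and values are illustrative, not the paper's schema.
def make_metadata(name, dof, max_force_n, update_hz):
    return {"device": name, "dof": dof,
            "max_force_n": max_force_n, "update_hz": update_hz}

def negotiate(local, remote):
    """After the handshake exchanges metadata, settle on parameters
    both endpoints can satisfy by taking the minimum of each
    capability (a conservative common ground)."""
    return {
        "dof": min(local["dof"], remote["dof"]),
        "max_force_n": min(local["max_force_n"], remote["max_force_n"]),
        "update_hz": min(local["update_hz"], remote["update_hz"]),
    }

touch = make_metadata("Geomagic Touch", dof=6, max_force_n=3.3, update_hz=1000)
falcon = make_metadata("Novint Falcon", dof=3, max_force_n=8.9, update_hz=1000)

session = negotiate(touch, falcon)
print(json.dumps(session))  # {"dof": 3, "max_force_n": 3.3, "update_hz": 1000}
```

In a WebRTC deployment, the serialized metadata would travel over a data channel during session setup; the minimum-of-capabilities rule is one plausible negotiation policy among several.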

SwingBeats: An IoT Haptic Feedback Ankle Bracelet (HFAB) for Dance Education

Dance choreography is often synchronized with music. Thus, a major challenge in learning choreography is moving the correct body part to the signified rhythm in the music surrounding the beat. However, the rhythm is often more complex than a metronome. SwingBeats is a real-time haptic feedback system under research and development with the goal of helping learners of any dance style to a) focus on learning the various dance moves, steps, patterns, and dynamics without needing to keep constant track of the music’s beat pattern, and b) condition any choreography for the dance through custom-built Internet of Things (IoT) wearables.
This paper reports on the development and preliminary success of custom Haptic Feedback Ankle Bracelets (HFABs) for the SwingBeats system. HFABs enable learning the footwork for any dance by conditioning the learner to move their feet in accordance with the choreography, which follows the beat of the music. Thus, HFABs condition muscle memory in the same way that learning to play the piano conditions a musician’s finger muscles to anticipate each move ahead of time and play the notes in perfect harmony. Thus far, the custom HFABs have been tested with tap dancing, because this style of dancing is predominantly focused on footwork and involves relatively few degrees of freedom in the directions each foot can travel while dancing. The results are thus easily generalizable to any footwork with the addition of more haptic actuators as needed per degree of freedom.
Navid Shaghaghi, Yu Yang Chee, Jesse Mayer, Alissa LaFerriere

Mona Prisa: A Tool for Behaviour Change in Renewable Energy Communities

Innovative construction projects, such as Energy Communities, are crucial to meeting challenging climate objectives. However, residents of shared energy projects currently receive no feedback about real-time consumption in the building and cannot adjust their behaviour according to the needs of the community. In this paper we introduce the “Mona Prisa”, an interactive prototype dashboard that looks like a painting, placed at the entrance of a building that is part of an Energy Community. The design is based on the results of 51 interviews with 37 experts living in or involved with an energy community and 14 non-experts. We examine how open participants are to energy behaviour change and how this information should be visualized for a community rather than for an individual. We present the translation of these insights into a prototype showing real-time energy, water, and heat flows. The content is based on three important features of energy consumption feedback: awareness, action-based feedback, and gamification. Interaction with the prototype is possible through infrared sensors and a camera for face detection. In this paper we focus on the design process and the components of the product. We conclude with ideas for future development.
Olivia De Ruyck, Peter Conradie, Lieven De Marez, Jelle Saldien

GrainSynth: A Generative Synthesis Tool Based on Spatial Interpretations of Sound Samples

This paper proposes a generative design approach for the creative exploration of dynamic soundscapes that can be used to generate compelling and immersive sound environments. A granular synthesis tool is considered, based on the perceptual self-organization of sound samples, utilizing the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm for the spatial mapping of sonic grains into a 2D space. The proposed system was able to relate the visual stimuli to the sonic responses in the context of the generic gestalt principles of visual perception. According to a user evaluation, the application operated intuitively and also revealed potential for creative expressiveness, both from the user’s perspective and as a standalone, generative synthesizer.
Archelaos Vasileiou, João André Mafra Tenera, Emmanouil Papageorgiou, George Palamas
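Before any t-SNE embedding of grains can happen, a sample must be cut into (typically overlapping) grains. The following is a minimal sketch of that first step under assumed parameters; the function name and windowing-free slicing are illustrative simplifications, not the paper's implementation:

```python
def slice_grains(samples, grain_len, hop):
    """Split a mono sample buffer into overlapping grains.
    In a full granular pipeline, each grain would then get a
    feature vector (e.g. spectral features) and be embedded
    into 2D with t-SNE for spatial browsing."""
    grains = []
    for start in range(0, len(samples) - grain_len + 1, hop):
        grains.append(samples[start:start + grain_len])
    return grains

# Toy signal: 10 samples, grains of 4 with 50% overlap (hop = 2).
signal = list(range(10))
print(slice_grains(signal, grain_len=4, hop=2))
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Real systems would additionally apply an amplitude window (e.g. Hann) to each grain to avoid clicks at grain boundaries.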

Modeling Audio Distortion Effects with Autoencoder Neural Networks

Most music production nowadays is carried out using software tools: for this reason, the market demands faithful audio effect simulations. Traditional methods for modeling nonlinear systems are effect-specific or labor-intensive; however, recent works have yielded promising results through black-box simulation of these effects using neural networks. This work explores two models of distortion effects based on autoencoders: one uses fully-connected layers only, while the other employs convolutional layers. Both models were trained using clean sounds as input and distorted sounds as targets; thus, the learning method was not self-supervised, as is mostly the case when dealing with autoencoders. The networks were then tested with visual inspection of the output spectrograms, as well as with an informal listening test. They performed well in reconstructing the distorted signal spectra; however, a fair amount of noise was also introduced.
Riccardo Russo, Francesco Bigoni, George Palamas
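The clean-input/distorted-target setup described above can be sketched by generating supervised training pairs from an analytic waveshaper. The tanh shaper and drive value below are a standard textbook stand-in for a distortion stage, not the effect the paper actually modeled:

```python
import math

def distort(x, drive=4.0):
    """Static tanh waveshaper: a classic analytic stand-in for an
    analog distortion stage (illustrative, not the paper's target)."""
    return math.tanh(drive * x)

# Build (clean, distorted) training pairs, mirroring the supervised
# (non-self-supervised) autoencoder setup the abstract describes:
# the network input is the clean sample, the target is the distorted one.
clean = [math.sin(2 * math.pi * t / 64) for t in range(256)]
pairs = [(x, distort(x)) for x in clean]

# At the sine's peak (t = 16), the distorted target is squashed
# toward 1.0, illustrating the soft clipping the model must learn.
print(pairs[16])
```

With pairs like these, an autoencoder trained to map the first element to the second learns the nonlinearity in a black-box fashion, which is the core idea behind the two architectures compared in the paper.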

Industry and Government


Investors Embrace Gender Diversity, Not Female CEOs: The Role of Gender in Startup Fundraising

The allocation of venture capital is one of the primary factors determining who takes products to market, which startups succeed or fail, and, as such, who gets to participate in shaping our collective economy. While gender diversity contributes to startup success, most funding is allocated to male-only entrepreneurial teams. In the wake of COVID-19, 2020 is seeing a notable decline in funding to female and mixed-gender teams, giving rise to an urgent need to study and correct the longstanding gender bias in startup funding allocation.
We conduct an in-depth data analysis of over 48,000 companies on Crunchbase, comparing funding allocation based on the gender composition of founding teams. Detailed findings across diverse industries and geographies are presented. Further, we construct machine learning models to predict whether startups will reach an equity round, revealing the surprising finding that the CEO’s gender is the primary determining factor for attaining funding. Policy implications for this pressing issue are discussed.
Christopher Cassion, Yuhang Qian, Constant Bossou, Margareta Ackerman

A Tool for Narrowing the Second Chance Gap

The United States has the largest prison population in the world, with more than 650,000 ex-offenders released from prison every year, according to the United States Department of Justice. But even after time has been served, criminal records persist, limiting their bearers’ ability to qualify for job, rental, loan, volunteering, and other opportunities available to citizens. It is thus not surprising that the US Department of Justice also reports that approximately two-thirds of those released are rearrested within three years of release. In recent years, many laws have been passed to shield past criminal records from future background checks. The Second Chance Gap Initiative at Santa Clara University’s Law School uses empirical research and analysis to draw attention to the millions of Americans that remain stuck in “the second chance gap” of being eligible for but not receiving their second chance in the realms of expungement, re-enfranchisement, and resentencing. In the case of criminal records, it finds that tens of millions of people who have completed their formal sentences are stuck in a “paper prison,” held back not by steel bars but by bureaucratic and related hurdles that prevent them from accessing a cleaned record. In support of this initiative, the SCU Ethical, Pragmatic, and Intelligent Computing (EPIC) laboratory has developed a flexible tool for ascertaining expungement eligibility. The project hopes to assist those seeking to determine whether they qualify via a user-friendly web application containing a rule engine for expungement qualification determination.
Navid Shaghaghi, Zuyan Huang, Hithesh Sekhar Bathala, Connor Azzarello, Anthony Chen, Colleen V. Chien
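A rule engine for eligibility determination, as mentioned in the abstract above, can be sketched as a list of named predicates evaluated against a case record. The rules below are hypothetical placeholders for illustration only; they are NOT the tool's actual rules and do not reflect any real expungement statute:

```python
# Minimal declarative rule-engine sketch. Each rule pairs a
# human-readable name with a predicate over a case record.
# These rules are invented placeholders, not actual law.
RULES = [
    ("sentence completed", lambda c: c["sentence_completed"]),
    ("no pending charges", lambda c: not c["pending_charges"]),
    ("offense is eligible", lambda c: c["offense"] not in {"murder"}),
]

def check_eligibility(case):
    """Return (eligible, failed_rule_names) so a web front end can
    explain exactly which requirements were not met."""
    failed = [name for name, test in RULES if not test(case)]
    return (len(failed) == 0, failed)

case = {"sentence_completed": True, "pending_charges": False,
        "offense": "petty theft"}
print(check_eligibility(case))  # (True, [])
```

Keeping the rules as data (rather than hard-coded branches) is what makes such a tool flexible: jurisdictions with different statutes can swap in a different rule list without touching the engine.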

Machine Learning, Education and Training


Is Learning by Teaching an Effective Approach in Mixed-Reality Robotic Training Systems?

In recent years, there has been increasing interest in extended reality training systems (XRTSs), including an expanding integration of such systems into the actual training programs of industry and educational institutions. Although pedagogues have developed multiple didactic models aimed at improving the effectiveness of knowledge transfer, the vast majority of XRTSs stick to adapting the traditional instructional approach. Other approaches, such as Learning by Teaching (LBT), have begun to be considered, but mostly for other kinds of intelligent training systems, such as those involving service robots. In this work, a mixed-reality robotic training system (MRRTS) devised with the capability of supporting LBT is presented. A study involving electronic engineering students was performed to evaluate the effectiveness of the LBT pedagogical model when applied to an MRRTS, comparing it with a consolidated approach. Results indicate that while both approaches granted good knowledge transfer, LBT was far superior in terms of long-term retention of the information, at the cost of more time spent in training.
Filippo Gabriele Pratticò, Francisco Navarro Merino, Fabrizio Lamberti

Neuroevolution vs Reinforcement Learning for Training Non Player Characters in Games: The Case of a Self Driving Car

The aim of this project is to compare two popular machine learning methods, a non-gradient-based algorithm (neuroevolution) and gradient-based reinforcement learning, on the nontrivial task of training a car to drive itself around 3D circuits of varying complexity. A series of 3D circuits with a physics-based car model were built using the Unity game engine. The data collected during evaluation show that neuroevolution converges to a solution faster than the reinforcement learning approach. However, when the reinforcement learning approach is allowed to train for long enough, it outperforms neuroevolution in terms of the car speed and lap times achieved by the trained model of the car.
Kristián Kovalský, George Palamas
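The gradient-free character of neuroevolution, contrasted with gradient-based RL in the abstract above, can be sketched with a toy evolutionary loop. The fitness function, population size, and mutation scale below are invented for illustration; a real system would evaluate each genome by driving the car in simulation:

```python
import random

random.seed(0)

def fitness(weights):
    """Toy stand-in for a driving score (e.g. negative lap time):
    peaks when every weight equals a hidden target of 0.5."""
    return -sum((w - 0.5) ** 2 for w in weights)

def neuroevolve(n_weights=4, pop_size=20, generations=50, sigma=0.1):
    """Gradient-free search: each generation mutates the current best
    genome with Gaussian noise and keeps the champion (elitism)."""
    best = [random.uniform(-1, 1) for _ in range(n_weights)]
    for _ in range(generations):
        pop = [[w + random.gauss(0, sigma) for w in best]
               for _ in range(pop_size)]
        pop.append(best)  # elitism: fitness never decreases
        best = max(pop, key=fitness)
    return best

best = neuroevolve()
print(round(fitness(best), 3))  # approaches the optimum of 0.0
```

Because only fitness values are needed (no backpropagation through the environment), this family of methods often makes quick early progress, which is consistent with the faster convergence the paper reports; gradient-based RL instead estimates policy gradients from rewards and can keep improving with more training.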

Training Medical Communication Skills with Virtual Patients: Literature Review and Directions for Future Research

Effective communication is a crucial skill for healthcare providers, since it leads to better patient health and satisfaction and avoids malpractice claims. In standard medical education, students’ communication skills are trained through role-playing and Standardized Patients (SPs), i.e., actors. However, SPs are difficult to standardize and are very resource-intensive. Virtual Patients (VPs) are interactive computer-based systems that represent a valuable alternative to SPs. VPs are capable of portraying patients in realistic clinical scenarios and engaging learners in realistic conversations. Approaching medical communication skill training with VPs has been an active research area over the last ten years. As a result, the number of works in this field has grown significantly. The objective of this work is to survey the recent literature, assessing the state of the art of this technology with a specific focus on the instructional and technical design of VP simulations. After classifying and analysing the VPs selected for our research, we identified several areas that require further investigation, and we drafted practical recommendations for VP developers on design aspects that, based on our findings, are pivotal to creating novel and effective VP simulations or improving existing ones.
Edoardo Battegazzorre, Andrea Bottino, Fabrizio Lamberti

XR and HCI


Handheld vs. Head-Mounted AR Interaction Patterns for Museums or Guided Tours

In recent years, Augmented Reality (AR) technology has been adopted in various fields. The development of handheld devices (HHDs) such as smartphones and tablets gives people more opportunities to use AR technology in their daily lives. However, AR applications using head-mounted devices (HMDs) such as the Microsoft HoloLens or Magic Leap provide a stronger sense of presence than HHDs, so users can immerse themselves better in AR scenarios. While prototypical examples of HMD use in museum contexts already exist, widely used interaction patterns are not yet well established, although they would play an important role in accessibility for large user groups. This paper explores existing and potential interaction patterns for guided tours in museums, guided by the question of how to reconcile AR interaction patterns on HHDs and HMDs. We use an existing museum showcase for handheld AR from the project “Spirit” to transfer its interaction patterns to an HMD, the MS HoloLens. Technical constraints and usability criteria regarding the potential overlaps and applicability are analyzed in this paper.
Yu Liu, Ulrike Spierling, Linda Rau, Ralf Dörner

Design and Analysis of a Virtual Reality Game to Address Issues in Introductory Programming Learning

The field of computer science has not shied away from employing game-based learning and virtual reality techniques for computer programming education. While a plethora of game-based, virtual reality, or combined solutions exist, most are developed as alternatives to traditional lessons in which students focus on learning programming concepts or languages. However, these solutions do not cater to the problems students face when learning programming, which are mainly caused by the abstract nature of programming, misconceptions of programming concepts, and a lack of learning motivation. Hence, in this paper, a framework to address the abstract nature of programming, common programming misconceptions, and motivational issues is developed. The framework consists of three modules, one per issue, powered by a simulation engine. To address the abstract nature of programming, programming concepts are represented by concrete objects in the virtual environment. Furthermore, to address common programming misconceptions, simulation techniques such as interactions and player perspective are utilised. Lastly, motivational game elements are employed in the simulation to engage students when learning through the system. Results gathered from questionnaires indicated that users were generally satisfied with the virtual experience developed from the framework.
Chyanna Wee, Kian Meng Yap

Low-Complexity Workflow for Digitizing Real-World Structures for Use in VR-Based Personnel Training

Since the advent of virtual reality (VR), there has been a need for digital assets to populate virtual environments. Virtual training scenarios have risen in popularity in recent years, increasing the need for digital environments resembling real-world structures. However, established techniques for digitizing real-world structures as VR-ready 3D assets are often expensive, complicated to implement, and offer little to no customization. To address these problems, a “low-complexity” digitization workflow adapted from existing research and based on procedural modeling is proposed. Procedural modeling allows for non-destructive customization and control over the digital asset throughout the front end of the digitization workflow. A real-world VR training project using this workflow is outlined, demonstrating its advantages over other established digitization techniques.
Mason Smith, Andre Thomas, Kerrigan Gibbs, Christopher Morrison

Acceleration of Therapeutic Use of Brain Computer Interfaces by Development for Gaming

Brain computer interfaces (BCIs) are the foundation of numerous therapeutic applications that use brain signals to control programs or translate them into feedback. While the technical creation of these systems may be done in the lab with limited design expertise, the translation into a therapeutic calls for the engagement of game designers. This is ever more true for BCI in virtual reality (VR). VR has the potential to elevate BCI in embodiment and immersiveness. These traits are key for neurofeedback therapies for neurobehavioral conditions such as anxiety. Cooperation between game designers and scientists overcomes the hurdle of transforming an experiment into a tool. More often than not, BCIs on the road to therapeutic or other practical applications are launched as original or adapted games to demonstrate the usability of the platform. In the absence of such partnerships, progress on the scientific translation slows or stalls. We demonstrate this principle through a range of examples and in depth with Mandala Flow State, a VR neurofeedback system that first served as an interactive installation in an art museum.
Julia A. Scott, Max Sims

KeyLight: VR System for Stage Lighting

Training in lighting design for theater is increasingly grounded in new technologies. A growing momentum towards the incorporation of new digital tools including computer-based “magic sheets” and digital lighting consoles simplifies the work of lighting designers while also supporting diverse talent through accessibility offerings. As the industry also moves away from traditional classroom education, there is a need for alternative options that will allow future lighting designers to practice their trade.
KeyLight leverages Alexa voice control, Unity Engine visualization, and virtual reality (VR) technologies to train designers to create lighting looks using industry standard terminology and commands. KeyLight’s voice user interface (VUI) bypasses the issue of learnability prevalent in other VUIs by enforcing use of theatrical commands that already require specific verbiage in industry contexts.
Through the medium of virtual reality, design students can practice their craft without the constraints of lighting equipment, space, or personnel availability. With this tool, junior lighting designers develop their fundamental technical and communicative skills. Testimonials from industry professionals suggest that KeyLight can supplement the education of aspiring lighting designers by enabling them to practice their communication through digital design work. Through KeyLight, junior lighting designers can learn the fundamental skills of additive color mixing, the efficacy of different lighting angles, and the division of lighting fixtures into channels and groups, and learn to communicate their designs to a board operator. Results also indicate that there are applications of this voice technology in the workflow of professional lighting designers.
Madeline Golliver, Brian Beams, Navid Shaghaghi

