
2019 | Book

Advances in Artificial Intelligence, Software and Systems Engineering

Joint Proceedings of the AHFE 2018 International Conference on Human Factors in Artificial Intelligence and Social Computing, Software and Systems Engineering, The Human Side of Service Engineering and Human Factors in Energy, July 21–25, 2018, Loews Sapphire Falls Resort at Universal Studios, Orlando, Florida, USA


About this Book

This book focuses on emerging issues arising from the integration of artificial intelligence systems into our daily lives. It addresses the cognitive, visual, social and analytical aspects of computing and intelligent technologies, highlighting ways to improve technology acceptance, effectiveness, and efficiency. Topics such as responsibility, integration and training are discussed throughout. The book also reports on the latest advances in systems engineering, with a focus on societal challenges and the next-generation systems and applications for meeting them, and discusses applications in smart grids and infrastructures, systems engineering education, and defense and aerospace. The book is based on the AHFE 2018 International Conference on Human Factors in Artificial Intelligence and Social Computing, Software and Systems Engineering, The Human Side of Service Engineering and Human Factors in Energy, held July 21–25, 2018, at the Loews Sapphire Falls Resort at Universal Studios, Orlando, Florida, USA.

Table of Contents

Frontmatter

Software and Systems Engineering Applications

Frontmatter
Development of a Web Based Framework to Objectively Compare and Evaluate Software Solutions

We created a web-based framework to objectively evaluate different software solutions that take different approaches to the same problem. Solutions can be single algorithms, software modules or entire programs. The evaluation strongly relies on the human component. In this paper we describe the conceptual structure of the framework as well as its first field test, which used the implementation of two distinct georouting algorithms and their validation with test users. No modification of the tested software is needed, as the state of each software component is saved and synchronized by the framework. The aim of investigating our implementation is to gather usage data, from which we derive proposals for improvements in usability and user experience, identify properties that increase user productivity, and find bottlenecks. The framework is validated by assessing the collected metrics and the user feedback produced by the test, as distinctions in usage, user experience and usability between the two test algorithms were identified.

Maximilian Barta, Sigmund Schimanski, Julian Buchhorn, Adalbert Nawrot
A Case Study of User Adherence and Software Project Performance Barriers from a Sociotechnical Viewpoint

A marine propeller company and a technical university collaborated to optimize the company’s existing propeller design software. This paper reviews the project from a sociotechnical perspective on organizational change, examining (a) how the university-company project and user involvement were organized, and (b) what the main management barriers were and why they may have occurred. Fieldwork included interviews and observations with university and company stakeholders over thirteen months. The data was analyzed and sorted into themes describing the barriers, such as the lack of a planned strategy for deliverables or resource use in the project: the users exhibited low adherence to the optimized software, and limited time and training were allocated for them to test it. Lessons learned suggest clarifying stakeholder roles and contributions, and engaging the users earlier and beyond testing the software for malfunctions, in order to enhance knowledge mobilization, involve them in the change and increase acceptance.

Nicole A. Costa, Florian Vesting, Joakim Dahlman, Scott N. MacKinnon
Objectification of Assembly Planning for the Implementation of Human-Robot Cooperation

This article shows how translating the assembly planning problem into computer-readable domains contributes to the objectification of planning. This is especially necessary in the planning of human-robot cooperation, due to the complexity of the relationships among the planning contents. For this purpose, the planning problem is transformed into an ontology domain from which an appropriate reasoner can draw logical conclusions.

Rainer Müller, Richard Peifer, Ortwin Mailahn
DevOps for Containerized Applications

The term DevOps is a combination of the words “development” and “operations”, referring to a model of software development and delivery that is cross-functional, spanning from development to running software in production. Containerized applications require a unique toolchain and pipeline processes for deployment that differ from those of conventional applications. This paper provides an overview of the tools, processes and anti-patterns for DevOps with containerized applications.

Adam S. Biener, Andrea C. Crawford
Modelling of Polymorphic User Interfaces at the Appropriate Level of Abstraction

Polymorphic user interfaces (UIs) can offer different modes of display and interaction for different devices, situations and user needs. This increased variety adds complexity to UI development, which is often addressed by model-based UI development approaches. However, existing approaches do not offer an attractive balance of required abstraction and a graphical and vivid representation for developers. In this paper, we present the Model-with-Example approach that combines abstract interaction modelling with a wireflow-like concrete visualization. The results of a user study with industrial front-end developers show that this concrete visualization can improve development efficiency.

Daniel Ziegler, Matthias Peissner
Guided Terrain Synthesis Through Distance Transforms

We present a novel approach for terrain synthesis in which the terrain is created solely through procedural techniques and an initial input sketch. We employ a heightmap function that creates a distance map of the source image. The sketch is semantically annotated so that sections of the image are interpreted as terrain types. Once a distance map of the source image is created, the algorithm defines the heights inside the heightmap based on the distance value of each underlying pixel, that pixel’s terrain type, and the nearest terrain type different from its own. The results we obtain are promising, as the terrain creation is fast and the input system is simple.
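
The height-assignment step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-terrain height profiles and the 4-connected BFS distance transform are our own assumptions.

```python
from collections import deque

# Hypothetical per-terrain height profiles: water stays flat at 0,
# plains stay flat at 1, mountains rise with distance from the boundary.
PROFILES = {
    "water": lambda d: 0.0,
    "plain": lambda d: 1.0,
    "mountain": lambda d: 1.0 + 2.0 * d,
}

def distance_to_boundary(labels):
    """Multi-source BFS distance transform: for each cell, the 4-connected
    distance to the nearest cell carrying a *different* terrain label."""
    h, w = len(labels), len(labels[0])
    dist = [[None] * w for _ in range(h)]
    queue = deque()
    for y in range(h):
        for x in range(w):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != labels[y][x]:
                    dist[y][x] = 1  # cells on a region boundary sit at distance 1
                    queue.append((y, x))
                    break
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    return dist

def heightmap(labels):
    """Assign each pixel a height from its distance value and terrain type."""
    dist = distance_to_boundary(labels)
    return [[PROFILES[labels[y][x]](dist[y][x] or 0) for x in range(len(labels[0]))]
            for y in range(len(labels))]
```

On a one-row sketch annotated water, water, mountain, mountain, the mountain cells rise as they move away from the shoreline while the water stays flat.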

Caleb Holloway, Ebru Celikel Cankaya
Interactive Mining for Learning Analytics by Automated Generation of Pivot Table

This paper describes a method to reproduce and visualize student course material page views chronologically as a basis for improving lessons and supporting learning analysis. Interactive mining was conducted on Moodle course logs downloaded in an Excel format. The method uses a time-series cross-section (TSCS) analysis framework; in the resulting TSCS table, the page view status of students can be represented numerically across multiple time intervals. The TSCS table, generated by an Excel macro that the author calls TSCS Monitor, makes it possible to switch from an overall, class-wide viewpoint to more narrowly-focused partial viewpoints. Using numerical values and graphs, the approach enables a teacher to capture the course material page view status of students and observe student responses to the teacher’s instructions to open various teaching materials. It allows the teacher to identify students who fail to open particular materials during the lesson or who are late opening them.
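
As an illustration of the TSCS idea (not the author's Excel macro), the following sketch buckets hypothetical Moodle-style log rows into fixed time slices per student; the log tuple format and the interval length are assumptions.

```python
from datetime import datetime

def tscs_table(log, interval_min=10):
    """Build a time-series cross-section table: rows are students, columns
    are lesson time slices, and each cell counts that student's page views
    in that slice. Log rows are (ISO timestamp, student, material) tuples."""
    start = min(datetime.fromisoformat(ts) for ts, _, _ in log)
    table = {}
    for ts, student, _material in log:
        elapsed = (datetime.fromisoformat(ts) - start).total_seconds()
        slot = int(elapsed // (interval_min * 60))
        row = table.setdefault(student, {})
        row[slot] = row.get(slot, 0) + 1
    return table

def non_openers(table, slot, students):
    """Students with no page views in the given time slice, i.e. those who
    failed to open (or were late opening) the material."""
    return {s for s in students if table.get(s, {}).get(slot, 0) == 0}
```

Switching between the class-wide view (the whole table) and a partial view (one slot or one student) then amounts to slicing this nested dictionary.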

Konomu Dobashi
Slot-Ultra-Wideband Patch Antenna for Wireless Body Area Networks Applications

Patch antennas in wireless body area networks are widely used in the health sector for monitoring a person and for the diagnosis and control of diseases. Their applicability is due to their small size and low power, but designing this type of antenna has some disadvantages, since contact with the human body makes correct data transmission very difficult. Therefore, a slot-ultra-wideband (S-UWB) patch antenna is analyzed and designed for biomedical applications in the UWB wireless body area network (WBAN), and its reliability is tested. Its operating frequencies are 4.9 and 7.1 GHz. The dimensions of the S-UWB patch antenna are 27 mm × 27 mm × 1.1 mm. This antenna was designed and simulated to verify the measurement results. The S-UWB was simulated with mathematical software to obtain the expected results.

Javier Procel-Feijóo, Edgar Chuva-Gómez, Paúl Chasi-Pesántez
Multi-view Model Contour Matching Based Food Volume Estimation

In this paper, an automatic food volume estimation method based on outer contour matching is proposed, which avoids complicated calculations. We pre-defined a simple 3D model library and stored its projections. Users take three images containing their hands from three views. The contour of each segmented image is compared with the stored projections to find the best match. Meanwhile, we use the user’s hand as the scale and calculate the volume. As the method is easy to operate and space-efficient, it is well suited for integration into a mobile app.
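
A minimal sketch of the matching-and-scaling idea, assuming silhouettes stored as sets of pixels and a Jaccard-overlap matcher; the paper's actual matcher, model library and hand measurement are not specified here, so all names below are hypothetical.

```python
def iou(a, b):
    """Jaccard overlap of two binary silhouettes given as sets of pixels."""
    return len(a & b) / len(a | b)

def best_match(segmented, library):
    """Pick the library model whose stored projection best overlaps the
    segmented food contour (a stand-in for the paper's contour matcher)."""
    return max(library, key=lambda name: iou(segmented, library[name]))

def estimate_volume(model_volume_px3, hand_len_px, hand_len_cm):
    """Convert the matched model's volume from pixels^3 to cm^3, using the
    user's hand length in the image as the known real-world scale."""
    cm_per_px = hand_len_cm / hand_len_px
    return model_volume_px3 * cm_per_px ** 3
```

The same matching step would be run once per view, with the three per-view matches combined to select a single 3D model.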

Xin Zheng, Yifei Gong, Qinyi Lei, Run Yao, Qian Yin
Visual Analysis on Macro Quality Data

Quality inspection and quarantine processing is an important process with extensive business impacts. Existing data visualization methods fail to fully combine standards with certification or accreditation data, and inspection, quarantine and other quality-related measures are hard to analyze due to their multi-factor nature, making it difficult to reflect the macro quality status. Thus, in order to integrate quality data from quality supervision, inspection and quarantine departments, this paper proposes three visualization methods for analyzing quality data, such as tree-model and histogram visualization and map and histogram visualization, and implements a quality visualization system based on these methods to realize comprehensive analysis of macro quality data.

Gang Wu, Chao Zhao, Wenxing Ding, Fan Zhang, Jing Zhao, Haitao Wang
Gamified Approach in the Context of Situational Assessment: A Comparison of Human Factors Methods

Decision support tools are increasingly common in daily tasks, with a core component of enhancing the user Situational Awareness required for an informed decision. With the goal of informing the design of automated reasoners or data fusion algorithms to be included in those tools, the authors developed the Reliability Game. This paper compares the Reliability Game to other Human Factors methods available in the context of Situational Awareness assessment. Although the Reliability Game shares many common elements with HF methods, it also presents some unique features. Unlike those methods, it focuses on the Situational Assessment process and on how source factors might influence human beliefs, which are considered the basic constructs building up Situational Awareness. Moreover, the gamified approach introduces an engaging component in the setup, and the specific design of the method allows the collection of data expressing second-order uncertainty.

Francesca de Rosa, Anne-Laure Jousselme, Alessandro De Gloria
Estimation of Risks in Scrum Using Agile Software Development

In recent years, a number of lightweight software development process methods have been developed; agile software development is one of them. Agile is a time-boxed, iterative approach to software delivery. Scrum is an agile software development methodology that is extensively used today in many software companies; it is an agile technique for handling a project, usually in agile software development. Agile software development by way of Scrum is frequently perceived as a methodology; however, rather than viewing Scrum as a methodology, consider it a framework for handling a process. This paper begins with the background, covers the characteristics and definition of agile software development, and highlights the major agile software techniques, which are also elaborated in this paper. The core aim of this paper is to identify risks in agile development and improve the quality of the software by using agile methodologies.

Muhammad Ahmed, Babur Hayat Malik, Rana M. Tahir, Sidra Perveen, Rabia Imtiaz Alvi, Azra Rehmat, Qura Tul Ain, Mehrina Asghar
Software Development Practices in Costa Rica: A Survey

In recent years, many studies have focused on software development practices around the world. The HELENA study is an international effort to gather quantitative data on software development practices and frameworks. In this paper, we present the Costa Rican results of the HELENA survey. We provide evidence of the practices and frameworks used in 51 different projects in Costa Rica. Participants in this survey represent companies ranging from 50 or fewer employees to more than 2500 employees. Furthermore, the industries represented in the survey include software development, system development, IT consulting, research and development of IT services, and software development for financial institutions. Results show that Scrum, Iterative Development, Kanban and Waterfall are the most used software development frameworks in Costa Rica; Scrum, however, is used twice as much as Waterfall and the other methods.

Brenda Aymerich, Ignacio Díaz-Oreiro, Julio C. Guzmán, Gustavo López, Diana Garbanzo
Convincing Systems Engineers to Use Human Factors During Process Design

A controlled between-groups experiment was conducted to demonstrate the value of human factors for process design. Twenty-four Sandia National Laboratories employees completed a simple visual inspection task simulating receipt inspection. The experimental group process was designed to conform to human factors and visual inspection principles, whereas the control group process was designed without consideration of such principles. Results indicated the experimental group exhibited superior performance accuracy, lower workload, and more favorable usability ratings as compared to the control group. The study provides evidence to help human factors experts revitalize the critical message regarding the benefits of human factors involvement for a new generation of systems engineers.

Judi E. See
Knowledge Management Model Based on the Enterprise Ontology for the KB DSS System of Enterprise Situation Assessment in the SME Sector

In the paper, a knowledge management model based on an original idea of the enterprise ontology is presented. This model is the basis for constructing a Knowledge Based Decision Support System (KB DSS) for evaluating the situation of enterprises in the SME sector. The model applies the SECI model of knowledge creation proposed by I. Nonaka and H. Takeuchi, and consists of a cycle of creating evaluations of the situation of enterprises in the potential-risk space of activity. To design the enterprise ontology, ideas of the Polish philosophers J. Bochenski and R. Ingarden are applied. Taxonomies of classes of enterprise potential and risk are presented in the OWL language (using the Protégé editor). The KB DSS architecture is consistent with the Case Based Reasoning (CBR) methodology.
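
The CBR retrieval step can be illustrated with a toy nearest-case lookup in a two-dimensional potential-risk space; the case attributes and Euclidean distance below are assumptions for illustration, not the paper's ontology-based representation.

```python
def retrieve(query, case_base):
    """Case-Based Reasoning retrieval: return the stored case (with its
    past assessment) nearest to the query enterprise in the
    potential-risk space."""
    def distance(a, b):
        return ((a["potential"] - b["potential"]) ** 2
                + (a["risk"] - b["risk"]) ** 2) ** 0.5
    return min(case_base, key=lambda case: distance(query, case))
```

In a full CBR cycle, the retrieved case's assessment would then be reused, revised for the new enterprise, and retained in the case base.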

Jan Andreasik

The Human Side of Service Engineering: Advancing Smart Service Systems and the Contributions of AI and T-Shape Paradigm

Frontmatter
Re-defining the Role of Artificial Intelligence (AI) in Wiser Service Systems

Advances in Artificial Intelligence (AI) are raising important questions for companies, employees, consumers and policy makers. Researchers predict that intelligent machines will outperform humans in a wide range of tasks in the coming decade. Our purpose is to re-define the role of AI technologies and their relationship with people by re-thinking the concept of Intelligence Augmentation (IA): an interaction between AI technologies and people that, more than amplifying human capacities, produces a cognitive transformation. This transformation modifies the structure of human thought, changing people’s cognitive processes and providing new tools to optimize the interpretative schemas used to analyze the real world. In line with the need to define new directions in Service Science, new rules should be formulated to create synergistic and collaborative processes between humans (people) and machines. In order to design a wiser service system, this paper proposes T-shaped professionals as especially well adapted for augmented and collaborative intelligence.

Sergio Barile, Paolo Piciocchi, Clara Bassano, Jim Spohrer, Maria Cristina Pietronudo
An Approach for a Quality-Based Test of Industrial Smart Service Concepts

Using machine data to improve the service business offers new potential for differentiation to manufacturing firms. However, the development of industrial Smart Service concepts is a rather complex task: different elements (e.g. technologies, digital services and personal services) have to be considered together, and customer requirements for these new offers are often unknown. In this context, testing newly developed Smart Service concepts can help manufacturing companies avoid development failures and ensure their value. The focus of our paper is to introduce a new approach for testing industrial Smart Service concepts with respect to their perceived quality for potential customers. The approach includes a process model, an evaluation framework comprising criteria derived from existing approaches, and specific Smart Service items.

Jens Neuhüttler, Inka Woyke, Walter Ganz, Dieter Spath
Smart Services Conditions and Preferences – An Analysis of Chinese and German Manufacturing Markets

Smart Services are individually configurable bundles of physically delivered services and digital services, based on data collected in the Internet of Things. As products are increasingly equipped with sensing technologies and communication modules, Smart Services become more and more important to manufacturing companies. Since German and Chinese manufacturing firms have a strong trading relationship, it is important to understand the market conditions and customer requirements of the two country markets in order to develop and provide Smart Services successfully. In this context, our paper provides a first overview of these aspects, based on a literature analysis and a small qualitative survey among four Chinese and four German experts.

Wenjuan Zhang, Jens Neuhüttler, Ming Chen, Walter Ganz
Using Digital Trace Analytics to Understand and Enhance Scientific Collaboration

Social interaction and idea flow have been shown to be important factors in the collaboration work of scientific and technical teams. This paper describes a study to investigate scientific team collaboration and activity through digital trace data. Using a 27-month electronic mail data corpus from a scientific research project, we analyze team member participation and topics of discussion as a proxy for interaction and idea flow. Our results illustrate the progression of participation and conversational themes over the project lifecycle. We identify temporal evolution of work activities, influential roles and formation of communities throughout the project, and conversational aspects in the project lifecycle. This work is the first step of a larger research program analyzing multiple sources of digital trace data to understand team activity through organic products and byproducts of work.

Laura C. Anderson, Cheryl A. Kieliszewski
Smart University for Sustainable Governance in Smart Local Service Systems

From the Service Science perspective, our work aims to provide a conceptual and methodological contribution affirming the role of the university as the responsible agent for the growth and development of a local area. In this sense, the purpose is to promote a harmonious growth of the whole local service system, focused on academic quality and accountability through a Social Responsibility approach that also involves government, business and society. The University, as a place for higher education and research, at the same time represents a privileged space of convergence for the different growth perspectives of local actors. This convergence should be seen as a “place” to share and develop a common sense of value that is smart and ethically, socially, and economically sustainable, i.e. a Smart Local Service System (S-LSS).

Clara Bassano, Alberto Carotenuto, Marco Ferretti, Maria Cristina Pietronudo, Huseyin Emre Coskun

The Human Side of Service Engineering: Innovations in Service Delivery and Assessment

Frontmatter
Using Augmented Reality and Gamification to Empower Rehabilitation Activities and Elderly Persons. A Study Applying Design Thinking

We present the design of a system combining augmented reality (AR) and gamification to support elderly persons’ rehabilitation activities. The system is attached to the waist; it collects detailed movement data and at the same time augments the user’s path by projections. The projected AR-elements can provide location-based information or incite movement games. The collected data can be observed by therapists. Based on this data, the challenge level can be more frequently adapted, keeping up the patient’s motivation. The exercises can involve cognitive elements (for mild cognitive impairments), physiological elements (rehabilitation), or both. The overall vision is an individualized and gamified therapy. Thus, the system also offers application scenarios beyond rehabilitation in sports. In accordance with the methodology of design thinking, we present a first specification and a design vision based on inputs from business experts, gerontologists, physiologists, psychologists, game designers, cognitive scientists and computer scientists.

Oliver Korn, Lea Buchweitz, Adrian Rees, Gerald Bieber, Christian Werner, Klaus Hauer
Method Cards – A New Concept for Teaching in Academia and to Innovate in SMEs

The world of academic teaching is currently characterized by learning material in the form of books and lecture notes. Constantly renewing learning content, new concepts, and innovations require a new and more flexible knowledge base. Numerous initiatives in the e-learning area approach the issue through renewable digital content. Nevertheless, students in the fast-changing VUCA (volatile, uncertain, complex, ambiguous) world demand innovative approaches that focus on imparting competencies in addition to traditional knowledge. This paper presents the concept and prototype of a new blended-learning approach to foster creativity and innovation: the “Method Cards”. We use the well-known format of traditional playing cards to create learning modules, e.g. representing trends, technologies, or methods. Additional content is linked through integrated NFC tags and QR codes. We additionally present the first results of two user studies conducted among Master’s students as well as in a business environment.

Christian Zagel, Lena Grimm, Xun Luo
Correlations Between Computer-Related Causal Attributions and User Persistence

This study used data collected from 2270 participants to investigate the impact of computer-related causal attributions on users’ persistence. Attribution theory deals with subjectively perceived causes of events and is commonly used for explaining and predicting human behavior, emotion, and motivation. Individual attributions may either positively or negatively influence one’s learning behavior, confidence levels, effort, or motivation. Results indicate that attributions indeed influence users’ persistence in computer situations. Users with favorable attribution styles exhibit greater levels of persistence than users with unfavorable attribution styles. The findings can be used in HCI research and practice to understand better why users think, feel, or behave in a certain way. It is argued that an understanding of users’ attributional characteristics is valuable for developing and improving existing computer learning training strategies and methods, as well as support and assistance mechanisms.

Adelka Niels, Sophie Jent, Monique Janneck, Christian Zagel
Emotionalizing e-Commerce Pages: Empirical Evaluation of Design Strategies for Increasing the Affective Customer Response

The interdisciplinary research of neuromarketing shows that the conscious and rational consumer is only an illusion, whereas emotions have a significant influence on consumer behavior. Therefore, this study examines the effect of emotionalized e-commerce pages on visitors’ emotions as well as on their behavioral intention in hedonic situations. Three landing pages are conceptualized using diverse techniques of emotional boosting along with different procedures for triggering distinct levels of neuronal activity. The impact of these landing pages is examined in an online survey, generating a sample of 391 participants. The resulting dataset is analyzed using structural equation modeling to test the proposed hypotheses. The results confirm that emotions can be triggered merely by seeing the landing page of an e-commerce store and that these emotions influence behavioral intentions. Additionally, the study shows a moderating effect of long-term involvement and mood, and provides recommendations for appropriate and well-designed websites.

Alexander Piazza, Corinna Lutz, Daniela Schuckay, Christian Zagel, Freimut Bodendorf
Patient-Centered Design of an e-Mental Health App

Clinically diagnosed patients who suffer psychological illnesses are usually well supported during ambulatory treatments. Once back in their personal environment, the likelihood increases for them to relapse into past conscious or subconscious habitual patterns. The key to a successful and long-lasting medical attendance is regarded as an individually tailored and constantly available support. In this respect, an e-Mental Health app, acting as a constant companion, is envisaged to support an ongoing personal treatment of a patient during or after an ambulatory treatment. As part of the “mindtastic” project, the app “mindtastic Phoenix” is being created in a cooperation between the University of Erlangen-Nürnberg’s Chair of Clinical Psychology and Psychotherapy, the related information technology department and the service design company LINC Interactionarchitects. This paper describes the design process of this e-Mental Health app and highlights the deviations from the design process of conventional apps.

Leonhard Glomann, Viktoria Hager, Christian A. Lukas, Matthias Berking

Artificial Intelligence and Social Computing

Frontmatter
New « Intelligence » Coming to the Cockpit…Again?

Adaptive automation/agents have been “a good idea” for 40 years now, yet they are hardly used so far. Automations that change according to internal rules are widely deployed and eventually fail to be understandable whenever their inner changes can’t be grasped by the operator supposedly “trained & in charge”. All that could change, because for technological reasons AI is back, with the assumption that it will fix it all. But can we build cooperative agents capable of helping humans if we build them with a techno-centered view? The epic fail of AI in the ’90s will just repeat itself. We need to analyze the root of our need when envisioning cooperation with agents. So, what is it that we want from adaptive agents? If you take the example of an assistant surgeon, you have your answer: we want that kind of adaptation. They facilitate the surgeon’s work without any (verbal) exchanges (not resource-demanding to control). They know what to do and when to help. They can interpret any sign from the surgeon as a directive for help. They completely share the same references. They know the implicit so well that collective work seems like the work of a single entity. Alas, that is the description of a human being. So what we seek in a cooperative agent are definitely qualities reserved for the living, like the ability to adapt. We have placed misguided assumptions of humanity on AI without giving it the potential for them: socializing for cooperation through proper communication. It’s called articulation work, and it has been around for at least 30 years. It’s the key to enabling effective cooperation between agents. DARPA has just realized this and launched a massive research project in 2017 to use AI to digitize the interaction level between an agent and an operator. Modeling with AI what makes for proper cooperation between agents and humans could be the answer.

Sylvain Hourlier
Beyond the Chatbot: Enhancing Search with Cognitive Capabilities

Conversational chatbot interfaces face pragmatic challenges that must be overcome in order for the user to obtain a positive end user experience. For example, users must understand the rules associated with the chatbot (domain, context, understanding of natural language, etc.). A null result will often end with, “Let me search the web for you”, which can lead to dissatisfaction with the technology or system. This paper discusses an enterprise solution, which turns the expectation around for the user. Starting with a traditional search, the cognitive agent can work with the search keywords in the background to determine if intervention will provide value. If the cognitive evaluation has sufficient confidence in a solution, it is provided in addition to the traditional search results, which will likely delight the user. If the cognitive evaluation has insufficient confidence, the user receives traditional search results with no loss in user expectation.
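
The confidence-gated behavior described above can be sketched as follows; the function names and threshold value are hypothetical, not the enterprise product's API.

```python
def respond(query, search, cognitive_agent, threshold=0.8):
    """Always run the traditional search; surface the cognitive answer in
    addition to (never instead of) the results when confidence is high."""
    results = search(query)
    answer, confidence = cognitive_agent(query)
    if confidence >= threshold:
        return [answer] + results  # intervention judged to add value
    return results  # fall back silently: no loss in user expectation
```

Because the gate only ever adds to the result list, a low-confidence evaluation degrades gracefully to an ordinary search rather than to a dead-end chatbot reply.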

Jon G. Temple, Claude J. Elie
A Comparative Analysis of Similarity Metrics on Sparse Data for Clustering in Recommender Systems

This work examines the behavior of similarity metrics on sparse data for recommender systems (RS). Clustering in RS is an important technique for grouping users or items in order to personalize and optimize recommendations. The majority of clustering techniques try to minimize the Euclidean distance between the samples and their centroid, but this approach has a drawback on sparse data because it treats missing values as zero. We propose a comparative analysis of similarity metrics such as Pearson Correlation, Jaccard, Mean Squared Difference, Jaccard Mean Squared Difference and Mean Jaccard Difference as alternatives to Euclidean distance. Our work shows results for the FilmTrust and MovieLens 100K datasets, both free, public and highly sparse. We show that using similarity measures yields better accuracy in terms of Mean Absolute Error and within-cluster error on sparse data.
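
The contrast the authors draw can be illustrated with sparse ratings stored as dictionaries. These are textbook formulations of the metrics, assuming 1–5 star ratings, and not necessarily the exact variants used in the paper.

```python
import math

def jaccard(u, v):
    """Jaccard similarity over the sets of *rated* items only."""
    rated_u, rated_v = set(u), set(v)
    return len(rated_u & rated_v) / len(rated_u | rated_v)

def msd_similarity(u, v, rating_range=4.0):
    """Mean Squared Difference over co-rated items, turned into a
    similarity in [0, 1]; missing ratings are simply ignored."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    msd = sum((u[i] - v[i]) ** 2 for i in common) / len(common)
    return 1.0 - msd / rating_range ** 2

def euclidean_missing_as_zero(u, v):
    """The problematic baseline: Euclidean distance that treats every
    missing rating as a 0, penalizing users with disjoint histories."""
    items = set(u) | set(v)
    return math.sqrt(sum((u.get(i, 0) - v.get(i, 0)) ** 2 for i in items))
```

Two users who agree perfectly on their one shared item get maximal MSD similarity, while the zero-filled Euclidean distance still pushes them far apart because of the items only one of them rated.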

Rodolfo Bojorque, Remigio Hurtado, Andrés Inga
Development of an Integrated AI Platform and an Ecosystem for Daily Life, Business and Social Problems

Artificial intelligence (AI) has been making extraordinary progress. To keep developing AI, reciprocal feedback and communication between AI and people across many use cases is important. Hence, this study aims to effectively build an AI platform, from data collection to data analysis, together with the ecosystems in the field. The platform includes multiple applications for data collection, a cloud database to store data, and probabilistic latent semantic analysis and Bayesian networks as the AI, through which people can understand, in a white-box manner, why predictions and recommendations are provided. The platform can be easily customized and comfortably deployed for each use case depending on user needs. In the test phase, as part of this study, the system has been deployed in several fields, such as museum events, vending machines, and local Child Guidance Centers that respond to child maltreatment. In future studies, the systems should continue to be tested and developed more openly.

Kota Takaoka, Keisuke Yamazaki, Eiichi Sakurai, Kazuya Yamashita, Yoichi Motomura
Entropy and Algorithm of the Decision Tree for Approximated Natural Intelligence

A pressing task is the classification of knowledge in a given subject area when that knowledge is represented not as information coded in a certain manner, but in a way close to natural intelligence, which structures acquired knowledge according to a different principle. Known answers to questions must be classified so that the current task can be solved. A new method of decision tree formation approximated to natural intelligence is therefore suitable for knowledge understanding. The article describes how entropy is connected to the appearance of knowledge, to the classification of prior knowledge, and to the definitions used in decision trees. The latter is necessary for comparing traditional methods with the algorithm for obtaining a decision tree approximated to natural intelligence. The dependency of entropy on the properties of element subsets of a set is also derived.

Olga Popova, Yury Shevtsov, Boris Popov, Vladimir Karandey, Vladimir Klyuchko, Alexander Gerashchenko
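
The entropy quantities the abstract refers to are the standard ones from decision tree induction. A minimal sketch, with an invented toy dataset, of Shannon entropy and the information gain used to pick a split attribute:

```python
from math import log2
from collections import Counter

def entropy(labels):
    # Shannon entropy of a label multiset: H = -sum(p * log2(p))
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    # Expected reduction in entropy from partitioning on attribute `attr`
    n = len(labels)
    parts = {}
    for row, lab in zip(rows, labels):
        parts.setdefault(row[attr], []).append(lab)
    return entropy(labels) - sum(len(p) / n * entropy(p) for p in parts.values())

rows = [{"wind": "weak"}, {"wind": "strong"}, {"wind": "weak"}, {"wind": "strong"}]
labels = ["yes", "no", "yes", "no"]
```

Here splitting on `wind` yields perfectly pure subsets, so the gain equals the full initial entropy of one bit; how the authors' nature-approximated tree departs from this classical criterion is the subject of the paper.
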
Business Intelligence Analysis, the New Role of Enhancing and Complementing the Internship of Students from Information Technology Program

Internship programs are a vital course for most study programs in terms of integrating and demonstrating the body of knowledge of the real professional work domain. Most schools adopt such programs and give them very high priority due to their significance and core value. However, internship program management consists of several components, among which the work performance of interns is one of the most crucial factors. Business Intelligence (BI) can address this concern well with its analytical capability to provide multi-dimensional, multi-pivotal reports and highly responsive interaction for any query regarding an intern's work performance. Moreover, it features adaptive visualization and provides many insightful strategies for interns, advisors and stakeholders, enabling them to manage the work of interns in the organization more effectively. The current study employs the Business Intelligence software Microsoft Power BI. This desktop platform handles five years of cumulative data (2013 to 2017) covering 470 students in Information Technology programs, and it illustrates graphical outputs in a dashboard format, transforming raw data into meaningful information and insightful strategies.

Shutchapol Chopvitayakun
Clustering-Based Recommender System: Bundle Recommendation Using Matrix Factorization to Single User and User Communities

This paper shows the results of a Recommender System (RS) that suggests bundles of items to a user or a community of users. Nowadays, several RS suggest a single item based on the preferences of a user. However, these RS are not scalable, and the suggestions they make are sometimes far from a user's preferences. We propose an RS that suggests bundles of items to one user or to a community of users with similar affinities, using an algorithm based on Matrix Factorization (MF). For the experiments, we use publicly released, highly sparse databases. The results are evaluated using the metrics Accuracy, Precision, Recall and F-measure, and they demonstrate that the proposed method significantly improves the quality of the suggestions.

Remigio Hurtado Ortiz, Rodolfo Bojorque Chasi, César Inga Chalco
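
A bare-bones version of the matrix factorization at the core of such an RS can be sketched as follows; this is an editorial illustration on toy data, not the paper's implementation, and the hyperparameters are arbitrary:

```python
import random

def factorize(ratings, n_users, n_items, k=2, steps=5000, lr=0.01, reg=0.02):
    # Plain SGD matrix factorization: approximate R ~ P Q^T on observed entries only
    random.seed(0)
    P = [[random.uniform(0.0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.uniform(0.0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        for u, i, r in ratings:
            err = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)  # regularized gradient step
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, u, i):
    return sum(pf * qf for pf, qf in zip(P[u], Q[i]))

# (user, item, rating) triples from a toy, highly sparse matrix
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 5.0)]
P, Q = factorize(ratings, n_users=3, n_items=3)
```

Bundle recommendation would then rank sets of items by their predicted scores for a user or for the centroid of a user community.
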
User Input-Based Construction of Personal Knowledge Graphs

Personal knowledge plays a key role in the development of more intelligent applications. Applying knowledge representation techniques like knowledge graphs to the representation of personal knowledge is under active research. However, current knowledge graph construction methods are hindered by problems like absence of knowledge, ambiguity, conflicts and erroneous knowledge when applied to personal knowledge. This is largely due to its unique properties, such as its user-specific, volatile nature and limited data availability. We present in this paper a novel method supporting user input-based construction of personal knowledge graphs. We develop a new knowledge graph structure specifically to counter the said problems, and present a method that uses an iteration-specific subgraph as the intermediate layer between the user and the actual personal knowledge graph for better integration of user input. We also propose a deprecation mechanism to address the volatile nature of personal knowledge.

Xiaohua Sun, Shengchen Zhang
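
The deprecation mechanism for volatile personal knowledge can be illustrated with a minimal triple store; the class, the time-to-live policy, and the conflict rule (newest input wins) are illustrative assumptions, not the paper's design:

```python
class PersonalKG:
    # Minimal personal knowledge graph: (subject, predicate) -> (object, timestamp).
    # Facts older than `ttl` are deprecated and no longer returned.
    def __init__(self, ttl=3600.0):
        self.ttl = ttl
        self.triples = {}

    def assert_fact(self, subj, pred, obj, now):
        # Later user input overwrites an earlier, conflicting fact
        self.triples[(subj, pred)] = (obj, now)

    def query(self, subj, pred, now):
        entry = self.triples.get((subj, pred))
        if entry is None:
            return None
        obj, stamp = entry
        return obj if now - stamp <= self.ttl else None  # hide deprecated facts

kg = PersonalKG(ttl=100.0)
kg.assert_fact("me", "currentCity", "Boston", now=0.0)
kg.assert_fact("me", "currentCity", "Orlando", now=50.0)  # conflict: newest wins
```

A fuller system might instead route new input through an intermediate subgraph, as the paper proposes, before committing it to the actual personal knowledge graph.
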
Hierarchical Clustering for Collaborative Filtering Recommender Systems

Nowadays, Recommender Systems (RS) that use Collaborative Filtering (CF) are objects of interest and development. CF gives RS scalable filtering, varied metrics for determining the similarity between users, and very precise recommendations even on sparse data. This paper proposes an RS based on Agglomerative Hierarchical Clustering (HAC) for CF. The databases used for the experiments are publicly released and highly sparse. We used five HAC methods in order to identify which provides the best results, and we also analyzed similarity metrics such as Pearson Correlation (PC) and Jaccard Mean Square Difference (JMSD) versus Euclidean distance. Finally, we evaluated the results of the proposed algorithm through precision, recall and accuracy.

César Inga Chalco, Rodolfo Bojorque Chasi, Remigio Hurtado Ortiz
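
For intuition, a naive average-linkage agglomerative pass (one of the HAC linkage choices one might compare) can be written directly; the data and distance function here are illustrative:

```python
def hac(points, k, dist):
    # Agglomerative clustering with average linkage: start from singletons and
    # repeatedly merge the two closest clusters until only k remain
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(dist(a, b) for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])  # average linkage
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
groups = hac(pts, 2, euclid)
```

Swapping `euclid` for a similarity-derived distance (e.g. one minus PC or JMSD) is exactly the comparison the abstract describes for sparse rating data.
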
Decision Rules Mining with Rough Set

Decision rules have been widely used for their brevity, effectiveness and understandability, and many methods for mining them have been developed, rough set theory among them. Unfortunately, in many cases data is split between multiple parties, and privacy concerns may prevent these parties from directly sharing the data. This paper addresses the problem of how to securely mine decision rules over horizontally partitioned data with a rough set approach. Rather than a very specific solution, it presents a general framework for mining decision rules with rough sets when privacy is a concern and data is horizontally partitioned.

Haitao Wang, Jing Zhao, Gang Wu, Zhao Chao, Zhang Fan, Xinyu Cao
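
The rough-set core of such rule mining is the indiscernibility relation and the lower approximation, from which certain decision rules are read off. A small sketch on an invented table (not the paper's secure, multi-party protocol, which would compute these quantities without pooling the rows):

```python
def indiscernibility(rows, attrs):
    # Partition object indices into equivalence classes of objects that are
    # indistinguishable on the chosen condition attributes
    classes = {}
    for idx, row in enumerate(rows):
        classes.setdefault(tuple(row[a] for a in attrs), []).append(idx)
    return list(classes.values())

def lower_approximation(rows, attrs, target):
    # Objects whose whole equivalence class lies inside the target decision set;
    # these support certain (deterministic) decision rules
    target = set(target)
    return sorted(i for cls in indiscernibility(rows, attrs)
                  if set(cls) <= target for i in cls)

rows = [
    {"temp": "high", "wind": "weak",   "play": "no"},
    {"temp": "high", "wind": "strong", "play": "no"},
    {"temp": "low",  "wind": "weak",   "play": "yes"},
    {"temp": "low",  "wind": "weak",   "play": "no"},
]
play_yes = [i for i, r in enumerate(rows) if r["play"] == "yes"]
play_no = [i for i, r in enumerate(rows) if r["play"] == "no"]
```

Rows 2 and 3 agree on every condition attribute but disagree on the decision, so no certain rule covers the "yes" class, while the first two rows support the certain rule "temp = high implies play = no".
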

Social Network Modeling

Frontmatter
Using Information Processing Strategies to Predict Contagion of Social Media Behavior: A Theoretical Model

This study presents the Social Media Cognitive Processing model, which explains and predicts the depth of processing on social media based on three classic concepts from the offline literature about cognitive processing: self-generation, psychological distance, and self-reference. Together, these three dimensions have tremendous explanatory power in predicting the depth of processing a receiver will have in response to a sender’s message. Moreover, the model can be used to explain and predict the direction and degree of information proliferation. This model can be used in a variety of contexts (e.g., isolating influencers to persuade others about the merits of vaccination, to dispel fake news, or to spread political messages). We developed the model in the context of Brexit tweets.

Sara M. Levens, Omar Eltayeby, Bradley Aleshire, Sagar Nandu, Ryan Wesslen, Tiffany Gallicano, Samira Shaikh
GitHub as a Social Network

GitHub is a popular source code hosting and development service that supports distributed teams working on large and small software projects, particularly open-source projects. According to Wikipedia, as of April 2017 GitHub supports more than 20 million users and more than 57 million repositories. In addition to version control and code update functionalities, GitHub supports a wide range of communication options between users, including messaging, commenting, and wikis. GitHub thus has all the markings of an online social network, but how does it compare to other social media such as Twitter or Facebook? Since GitHub supports messaging between users as well as "following," the answer may seem straightforward. And yet, messaging and following do not account for the bulk of activity in GitHub, which consists largely of user-initiated repository "events" related to adding, editing, and fixing the code (as well as other artifacts, such as documentation and manuals). In this sense GitHub is quite unlike Twitter, where information flows rapidly between users by being passed along to others. What information flows in GitHub, besides the actual messaging? In this paper, we discuss preliminary findings that the GitHub community displays many of the characteristics of a social network.

Tomek Strzalkowski, Teresa Harrison, Ning Sa, Gregorios Katsios, Ellisa Khoja
Applications of Fuzzy Cognitive Maps in Human Systems Integration

This paper reviews applications of fuzzy cognitive maps (FCMs) in the field of human factors and ergonomics, with special consideration of human systems integration efforts.

Nabin Sapkota, Waldemar Karwowski
Social Sensors Early Detection of Contagious Outbreaks in Social Media

Cascades of information in social media (like Twitter, Facebook, Reddit, etc.) have become well-established precursors to important societal events such as epidemic outbreaks, flux in stock patterns, political revolutions, and civil unrest activity. Early detection of such events is important so that the contagion can either be leveraged for applications like viral marketing and the spread of ideas [4] or be contained so as to quell negative campaigns [2] and minimize the spread of rumors. In this work, we algorithmically design social sensors, a small subset of the entire network, who can effectively foretell cascading behavior and thus detect contagious outbreaks. While several techniques to design sensors exist (for example, the friendship paradox [3]), most of them exploit the social network topology and do not effectively capture the bursty dynamics of a social network like Twitter, since they ignore two key observations: (1) several viral phenomena have already cascaded in the network, and (2) most contagious outbreaks are a combination of network flow and external influence. In light of these two observations, we present an alternate formalism in which we describe information diffusion as a forest (a collection of trees). Intuitively, the forest model is a more natural metaphor because most social media phenomena that go truly viral have multiple origins and are thus a combination of several trees. We show that our model serves as a solid foundation for foretelling the emergence of viral information cascades. We then use the forest model, in conjunction with past information cascades, to view the problem under the algorithmic lens of a hitting set, and select a subset of nodes of the social network by prioritizing their activation time and their occurrence in the cascades.

Arunkumar Bagavathi, Siddharth Krishnan
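
The hitting-set step can be approximated greedily. An editorial sketch on invented data, where each past cascade is reduced to the set of users it reached:

```python
def select_sensors(cascades, budget):
    # Greedy hitting set: repeatedly pick the node that appears in the most
    # cascades not yet hit by an already-chosen sensor
    chosen, unhit = [], [set(c) for c in cascades]
    for _ in range(budget):
        counts = {}
        for cascade in unhit:
            for node in cascade:
                counts[node] = counts.get(node, 0) + 1
        if not counts:
            break  # every cascade is already hit
        best = max(sorted(counts), key=counts.get)  # sorting makes ties deterministic
        chosen.append(best)
        unhit = [c for c in unhit if best not in c]
    return chosen

# Trees of the "forest": the user sets reached by four past cascades
cascades = [{"a", "b"}, {"b", "c"}, {"c", "d"}, {"e"}]
sensors = select_sensors(cascades, budget=2)
```

The paper additionally prioritizes nodes by activation time, so that sensors tend to sit early in the trees and can raise an alarm before an outbreak peaks.
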
Analyzing the Single-Use Plastic Bags Ban Policy in California with Social Network Model and Diffusion Model

This study seeks to identify key social network attributes of micro-level network actors and to examine how the socio-economic attributes of those actors, cities, contributed to the diffusion of the environmental policy banning single-use plastic bags in California grocers. By incorporating social network models (the dynamic network model and the small-world phenomenon) and diffusion theory (the leader and laggard model), this study seeks to answer how this environmental regulatory policy has been diffused.

Sekwen Kim
The Moderating Roles of Network Density and Redundancy in Lurking Behavior on User-Generated-Content Online Communities

Sharing content is one of the important channels of information diffusion in online UGC (User-Generated Content) communities. Most previous research on sharing behavior focused on predicting it from the inherent characteristics of the posts. This study addresses the important role of social network characteristics, including network structure and information density, in users' sharing behavior. Based on a social network from a large UGC platform in China, this study analyzed panel data on the daily activities of 10,000 users. The results showed that network density and redundancy jointly influence users' sharing behavior. This study contributes to social network theory by providing new empirical evidence on user-generated content diffusion in a UGC community; in particular, it explains how network density moderates users' effect on UGC diffusion. The study also has important management implications, helping platform managers design effective product strategies to increase UGC diffusion.

Xingyu Chen, Yitong Wang, Xianqi Hu, Zhan Zhou
Online Social Media Addictive Behavior: Case Study of Thai Military Officers

This research examines Thai military officers' objectives in using social media and their social media usage behavior. Data were gathered by questionnaire from 95 commissioned officers and 190 non-commissioned officers working at the 21st Military Circle. Percentages, means, standard deviations and t-tests were applied for data analysis. The findings showed that most respondents access social media at home. The platforms most used were Facebook, YouTube, Line, and Instagram. The most common frequency of use was 3–4 days per week, mostly between 08.01 A.M. and 11.00 A.M. while at work, and spending more than 3 hours a day was most common. The devices used to access social media were smartphones, PCs/notebooks, tablets, and iPads. The main purposes of social media use were knowledge and self-improvement. The difference in social media usage between commissioned officers and non-commissioned officers was not statistically significant at the .05 level.

Siriporn Poolsuwan

Human Factors in Energy Systems: Nuclear Industry

Frontmatter
A Guide for Selecting Appropriate Human Factors Methods and Measures in Control Room Modernization Efforts in Nuclear Power Plants

Many U.S. nuclear power plants are approaching the end of their 60-year licensing period. The U.S. Department of Energy Light Water Reactor Sustainability Program is conducting targeted research to extend the lives and ensure the long-term reliability, productivity, safety, and security of these plants, for example by integrating advanced digital instrumentation and control technologies in the main control room. There are many challenges to this, one being the integration of human factors engineering in the design and evaluation of these upgrades. This paper builds upon recent efforts in developing utility-specific guidance for integrating human factors engineering into the control room modernization process by describing commonly used data collection methods applicable at various phases of the upgrade process. Advantages and disadvantages of each method are provided to support an optimal human factors evaluation plan to be used throughout the lifespan of the upgrade process.

Casey Kovesdi, Jeffrey Joe, Ronald Boring
Quantifying the Contribution of Individual Display Features on Fixation Duration to Support Human-System Interface Design in Nuclear Power Plants

The integration of new digital instrumentation and control (I&C) technologies like advanced human-system interfaces in U.S. nuclear power plant main control rooms is important for addressing the long-term aging and obsolescence of existing I&C. Nonetheless, care should be taken to ensure these technologies reflect state-of-the-art human factors engineering (HFE) principles. Often, guidance conflicts from one guideline to another, requiring the analyst to make a judgment call to address these 'tradeoffs.' The objective of this research was to inform the analyst of these tradeoffs through an empirical investigation of how certain display features, which characterize common HFE guidelines concerning visual clutter and saliency, influence information processing in a naturalistic context. By understanding the unique contribution of each display feature using a multilevel model, the HFE analyst gains an understanding of the interrelations of each feature and its impact on cognitive processes. Results and implications are discussed in this paper.

Casey Kovesdi, Katya Le Blanc, Zachary Spielman, Rachael Hill, Johanna Oxstrand
Autonomous Algorithm for Start-Up Operation of Nuclear Power Plants by Using LSTM

Autonomous operation is one of the technologies of the fourth industrial revolution attracting worldwide attention thanks to new computer algorithms and improved hardware performance. Its core technology is artificial intelligence (AI). Autonomous control, a high level of automation, gives the overall system the power of self-governance without human intervention. This study aims to develop an autonomous algorithm to control nuclear power plants (NPPs) during start-up operation using Long Short-Term Memory (LSTM), one of the recurrent neural network (RNN) methods. RNN, a type of AI method, is suitable for application to NPP systems because it can capture the interaction of non-linear parameters as well as patterns in time-series parameters. This study suggests a conceptual design for autonomous start-up operation from 2% to 100% power in nuclear power plants.

Deail Lee, Jonghyun Kim
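
For readers unfamiliar with LSTM, the recurrence behind such a controller can be shown at scalar size; the weights and input sequence below are placeholders, not a trained model:

```python
from math import exp, tanh

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def lstm_step(x, h, c, W):
    # One LSTM step with input (i), forget (f) and output (o) gates and
    # candidate g; W maps each gate to (input weight, recurrent weight, bias)
    i = sigmoid(W["i"][0] * x + W["i"][1] * h + W["i"][2])
    f = sigmoid(W["f"][0] * x + W["f"][1] * h + W["f"][2])
    o = sigmoid(W["o"][0] * x + W["o"][1] * h + W["o"][2])
    g = tanh(W["g"][0] * x + W["g"][1] * h + W["g"][2])
    c_new = f * c + i * g      # cell state carries long-term memory
    h_new = o * tanh(c_new)    # hidden state is the step's output
    return h_new, c_new

W = {gate: (0.5, 0.5, 0.0) for gate in ("i", "f", "o", "g")}
h = c = 0.0
for x in (0.02, 0.05, 0.10):   # e.g. a rising, normalized power reading
    h, c = lstm_step(x, h, c, W)
```

It is this carried cell state that lets the network relate a current reading to the earlier trajectory of non-linear plant parameters during start-up.
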
An Investigation into the Feasibility of Monitoring a Worker’s Psychological Distress

The objective of this study is to investigate the feasibility of developing a worker psychological distress monitoring system using electroencephalography (EEG). Psychological impairment has emerged as a key security (insider threat) and safety (human error) issue at Nuclear Power Plants (NPPs) as well as in other industries. Although the U.S. Nuclear Regulatory Commission (NRC) highlighted the importance of NPP workers' Fitness-For-Duty (FFD) to ensure personnel reliability, current FFD programs only cover drug and alcohol testing and fatigue management. However, today's bio-signal technology makes it possible to monitor the physical and mental state of workers. Thus, this study examines the feasibility of using EEG indicators to identify potentially at-risk workers, especially those with acute psychological distress. We reviewed historical cases of insider threat and human error at nuclear facilities and analyzed these cases from the perspective of the suspect's mental health. Based on the bio-signal literature, we identified EEG indicators of workers at risk of psychological impairment and selected the following: (1) frontal EEG asymmetry; (2) EEG coherence; and (3) variations of frequency-domain EEG indicators (Theta, Alpha, Beta and Gamma) in certain brain areas. To verify the appropriateness of these EEG indicators in realistic situations, this study performed a pilot experiment. Resting-state EEG (eyes closed and eyes open) was recorded from 56 student subjects (36 healthy and 20 with high scores for depression and anxiety symptoms). The resting-state EEG results showed a statistically significant difference between at-risk students and healthy students, which means specific EEG indicators can be used to classify workers' mental status.
These results can be applied directly to mental health monitoring systems at nuclear power plants as well as in other industries requiring high reliability (aerospace, military and transportation).

Young A Suh, Jung Hwan Kim, Man-Sung Yim
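
Of the listed indicators, frontal EEG asymmetry is the simplest to state. Below is a sketch of the conventional index (log alpha power at a right frontal site minus a left frontal site, e.g. F4 and F3); the band powers are placeholder numbers, and the clinical interpretation used in the study should be taken from the paper itself:

```python
from math import log

def frontal_alpha_asymmetry(alpha_f4, alpha_f3):
    # ln(right alpha power) - ln(left alpha power); since alpha power varies
    # inversely with cortical activation, lower values suggest relatively
    # greater right-frontal activity, often linked to withdrawal-related affect
    return log(alpha_f4) - log(alpha_f3)

def mean_asymmetry(f4_epochs, f3_epochs):
    # Average the index over resting-state epochs to stabilize the estimate
    scores = [frontal_alpha_asymmetry(r, l) for r, l in zip(f4_epochs, f3_epochs)]
    return sum(scores) / len(scores)

balanced = frontal_alpha_asymmetry(2.0, 2.0)       # symmetric power -> index 0
leftward = mean_asymmetry([1.0, 1.2], [2.0, 2.4])  # relatively low right alpha
```

A screening system would compute such indices per subject and compare them against group norms, which is essentially the group comparison the pilot experiment reports.
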
Accident Diagnosis and Autonomous Control of Safety Functions During the Startup Operation of Nuclear Power Plants Using LSTM

Accident diagnosis is regarded as one of the most complex tasks for nuclear power plant (NPP) operators. In addition, if an accident occurs during start-up operation, it is hard to cope with the situation appropriately because the initial conditions differ from those of the normal operation mode. Although operating procedures are provided to operators, accident diagnosis and recovery control are difficult tasks under extremely stressful conditions. In order to achieve safe start-up operation, this study proposes algorithms not only for accident diagnosis but also for protection control, using long short-term memory (LSTM), an advanced variant of recurrent neural networks, together with functional requirement analysis (FRA). The network structures of the algorithms are built using LSTM, and FRA is performed to define the goal, functions, processes, systems, and components for protection control. The approach was trained and validated on a compact nuclear simulator for several accidents to demonstrate the feasibility of diagnosis and correct response during start-up operation.

Jaemin Yang, Daeil Lee, Jonghyun Kim

Applications in Energy Systems

Frontmatter
Exploring the Potential of Home Energy Monitors for Transactive Energy Supply Arrangements

There has been considerable investment in micro energy generation from both domestic consumers and small-scale providers. However, current metering arrangements and home energy monitoring products are too basic to enable real-time billing and remuneration, limiting the effectiveness of this investment. This paper describes the exploration of home energy monitors as a technical enabler to unlock the local trading potential of the investment in micro energy generation, and the human factors involved in interacting with these products that might pose obstacles to successful uptake. First, a human factors analysis of eight home energy monitors was conducted, which identified a number of usability issues. Next, a range of design concepts were developed to address the key usability problems identified, incorporate the forward-looking facility for alternative energy supply models, and stimulate further investment in energy prosumption. This study contributes an understanding of the potential of home energy monitors for transactive energy supply arrangements.

Andrea Taylor, Bruce Stephen, Craig Whittet, Stuart Galloway
The Operation of Crude Oil Pipeline: Examination of Wax Thickness

Solid and wax precipitation during production, transportation, and storage of petroleum fluids is a common problem faced by the oil industry throughout the world. Continuous wax deposition is a critical issue for offshore transport pipelines and for the oil and gas industry. A significant amount of effort and capital is spent annually by oil and gas companies on the prevention and removal of wax in production and transportation lines. This study examines a new technology to monitor wax deposition thickness in sub-sea pipelines. Wax thickness predictions from the thermal sensing method are expected to compare favorably with conventional measurement techniques. This sensing method is expected to help operators of oil and gas companies predict wax deposition accurately, enabling them to adjust operations quickly to prevent complete clogging of pipelines.

Fadi Alnaimat, Bobby Mathew, Mohammed Ziauddin
A Generalized Ergonomic Trade-off Model for Modularized Battery Systems Particularly for ICT Equipment

The author and a co-inventor earlier patented a method and a subsystem to be incorporated into battery management systems for practically optimizing the ergonomics of battery system charging and discharging in a bid to tackle batteries’ long-standing conundrums of slow charging and costly charging infrastructures. The key idea is to modularize battery systems and prioritize charging and discharging of their battery modules so as to minimize periodic human effort to unload, load, and/or (re)charge the battery modules. The author also proposed earlier a mathematical trade-off model to address the ensuing issue of optimizing the extent of modularization, assuming Poisson probability distribution of depleted battery modules in each discharge cycle and zero incremental overhead mass of additional packaging materials and electronics associated with further modularization of a battery system. This article attempts to generalize the model by relaxing these two restrictive assumptions.

Victor K. Y. Chan
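
The flavor of the Poisson component can be conveyed with a toy expected-effort computation; this is an editorial illustration of the stated distributional assumption, not the patented method or the article's generalized model:

```python
from math import exp

def expected_swap_effort(n_modules, lam, effort_per_module, k_max=200):
    # Expected per-cycle handling effort when the number of depleted modules X
    # is Poisson(lam), truncated at the n_modules physically present
    p = exp(-lam)            # P(X = 0), which contributes no effort
    total = 0.0
    for k in range(1, k_max):
        p *= lam / k         # Poisson recurrence: P(k) = P(k-1) * lam / k
        total += min(k, n_modules) * p
    return effort_per_module * total
```

With many modules the truncation rarely binds, so expected effort approaches `lam * effort_per_module`; the trade-off the article generalizes arises because finer modularization lowers the effort per swap but, once overhead mass is non-zero, adds packaging and electronics cost per module.
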
Analysis of the Dangers of Professional Situations for Oil and Gas Workers of Various Professional Groups in the Arctic

The reported study was funded by RFBR under research project № 18-013-00623. Oil and gas specialists work on a shift rotation basis, and a number of dangerous situations arise due to extreme climatic, geographical, industrial and social factors. Employees assess these situations differently: to some of them workers are adapted and know how to act, which lowers their subjective assessment of the danger, while others still cause certain difficulties and require more attention from enterprise management. Using a questionnaire, employees gave subjective assessments of the danger of 18 occupational situations that might occur during a shift. The situations rated most dangerous were those in which relatives have problems at home and the worker cannot help, and those in which work hazardous to health must be performed. The study revealed differences in how shift workers of various occupational groups assess the danger of occupational situations.

Yana Korneeva, Natalia Simonova, Tamara Tyulyubaeva
Safety Management Principles in Electric Power Industry Based on Human Factors

Safety is the most crucial factor in the energy industry, yet safety has not yet taken shape as an integrated discipline. This paper explains why the safety discipline is incomplete and presents 15 safety principles covering human factors, device factors and safety-system aspects. Safety is an interdisciplinary subject, and researchers can rarely be proved correct on particular questions; they can, however, put forward principles that make the industry safe enough. The paper presents the kinds of principles actually applied in the power industry.

Yang Song
A Theoretical Assessment of the Challenges Facing Power Infrastructure Development in Low-Income Countries in Sub-Sahara Africa

Power infrastructure development is a pillar of every nation's economic development. The constraints facing energy development in the Low-Income Countries (LICs) of Sub-Saharan Africa (SSA) have greatly complicated their economic growth. The purpose of this paper is to identify the challenges facing power development in LICs. A confirmatory literature review on power infrastructure development was undertaken to identify these challenges. Lack of funding, an unfavorable policy framework, lack of technological knowledge, low electrification tariffs due to low incomes, and lack of government preparedness were identified as the major causes of the underdevelopment of the power sector in LICs. To improve power infrastructure development in LICs, favorable policies must be adopted to bring about active participation of private investment in the energy sector. Likewise, Green-House-Gas (GHG) emission charters should be implemented to attract finance through the Clean Development Mechanism (CDM) for developing power infrastructure in these regions. The study contributes to improving power sustainability in the LICs, which will directly improve economic development, eradicate poverty, and contribute to power development in Africa.

Emmanuel Ayorinde, Clinton Aigbavboa, Ngcobo Ntebo
Backmatter
Metadata
Title
Advances in Artificial Intelligence, Software and Systems Engineering
edited by
Tareq Z. Ahram
Copyright Year
2019
Electronic ISBN
978-3-319-94229-2
Print ISBN
978-3-319-94228-5
DOI
https://doi.org/10.1007/978-3-319-94229-2