
2020 | Book

Advances in Artificial Intelligence, Software and Systems Engineering

Proceedings of the AHFE 2019 International Conference on Human Factors in Artificial Intelligence and Social Computing, the AHFE International Conference on Human Factors, Software, Service and Systems Engineering, and the AHFE International Conference of Human Factors in Energy, July 24-28, 2019, Washington D.C., USA


About this book

This book addresses emerging issues resulting from the integration of artificial intelligence systems in our daily lives. It focuses on the cognitive, visual, social and analytical aspects of computing and intelligent technologies, highlighting ways to improve the acceptance, effectiveness, and efficiency of said technologies. Topics such as responsibility, integration and training are discussed throughout. The book also reports on the latest advances in systems engineering, with a focus on societal challenges and next-generation systems and applications for meeting them. The book is based on two AHFE 2019 Affiliated Conferences – on Artificial Intelligence and Social Computing, and on Service, Software, and Systems Engineering – which were jointly held on July 24–28, 2019, in Washington, DC, USA.

Table of Contents

Frontmatter

Artificial Intelligence and Social Computing

Frontmatter
Using Information Processing Strategies to Predict Message Level Contagion in Social Media

Social media content can have extensive online influence [1], but assessing offline influence using online behavior is challenging. Cognitive information processing strategies offer a potential way to code online behavior that may be more predictive of offline preferences, beliefs, and behavior than counting retweets or likes. In this study, we employ information processing strategies, particularly depth of processing, to assess message-level influence. Tweets from the Charlottesville protest in August 2017 were extracted with favorite count, retweet count, quote count, and reply count for each tweet. We present methods and formulae that incorporate favorite counts, retweet counts, quote counts, and reply counts in accordance with depth of processing theory to assess message-level contagion. Tests assessing the association between our message-level depth of processing estimates and user-level influence indicate that our formulae are significantly associated with user-level influence, while traditional methods using likes and retweet counts are less so.
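As a rough illustration of how favorite, retweet, quote, and reply counts could be weighted by the cognitive effort each implies, the sketch below computes a message-level depth-of-processing score; the weights and function name are hypothetical and do not reproduce the paper's formulae.

# Hypothetical sketch of a message-level depth-of-processing score. The weights
# are illustrative only and are not the authors' formulae.
def depth_of_processing(favorites, retweets, quotes, replies,
                        weights=(1.0, 2.0, 3.0, 4.0)):
    """Weight engagement counts by the assumed cognitive effort each requires:
    liking < retweeting < quoting < replying."""
    w_fav, w_rt, w_qt, w_rp = weights
    return (w_fav * favorites + w_rt * retweets +
            w_qt * quotes + w_rp * replies)

# Example: a tweet with 120 likes, 40 retweets, 10 quotes and 5 replies
score = depth_of_processing(120, 40, 10, 5)
print(score)  # 250.0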

Sara Levens, Omar ElTayeby, Tiffany Gallicano, Michael Brunswick, Samira Shaikh
EmoVis – An Interactive Visualization Tool to Track Emotional Trends During Crisis Events

The goal of this research is to develop a novel tool that can aid social science researchers in inferring emotional trends over large-scale cultural stressors. We demonstrate the usefulness of the tool in describing the emotional timeline of a major crisis event – the 2017 Charlottesville protests. The tool facilitates understanding of how large-scale cultural stressors yield changes in emotional responses. The timeline tool describes the modulation of emotional intensity with respect to how the Charlottesville event unfolded on Twitter. We have developed multiple features associated with the tool that tailor the presentation of the data, including the ability to focus on single or multiple emotions (e.g., anger and anxiety) and also delineate the timeline based on events that precede crisis events, in this case, the Charlottesville protests. By doing so, we can begin to identify potential antecedents to various protest phenomena and their accompanying emotional responses.

Samira Shaikh, Karthik Ravi, Tiffany Gallicano, Michael Brunswick, Bradley Aleshire, Omar El-Tayeby, Sara Levens
Social Convos: A New Approach to Modeling Information Diffusion in Social Media

A common approach, adopted by most current research, represents users of a social media platform as nodes in a network, connected by various types of links indicating the different kinds of inter-user relationships and interactions. However, social media dynamics and the observed behavioral phenomena do not conform to this user-node-centric view, partly because it ignores the behavioral impact of connected user collectives. GitHub is unique in the social media setting in this respect: it is organized into “repositories”, which, along with the users who contribute to them, form highly interactive, task-oriented “social collectives”. In this paper, we recast our understanding of all social media as a landscape of collectives, or “convos”: sets of users connected by a common interest in a (possibly evolving) information artifact, such as a repository in GitHub, a subreddit in Reddit or a group of hashtags in Twitter. We describe a computational approach to classifying convos at different stages of their “lifespan” into distinct collective behavioral classes. We then train a Multi-layer Perceptron (MLP) to learn transition probabilities between behavioral classes to predict, with a high degree of accuracy, future behavior and activity levels of these convos.
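A minimal sketch of the classification step described above, using synthetic convo features and hypothetical behavioral class labels (not the authors' GitHub dataset or architecture):

# Sketch only: an MLP that predicts a convo's next behavioral class from
# features of its current lifespan stage. Feature names and class labels
# below are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 4))          # e.g. activity rate, user count, burstiness, age
y = rng.integers(0, 3, size=500)  # behavioral classes, e.g. 0=dormant, 1=steady, 2=bursty

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)

# predict_proba gives the probability of moving into each behavioral class
print(clf.predict_proba(X[:1]))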

Gregorios Katsios, Ning Sa, Tomek Strzalkowski
Manipulated Information Dissemination and Risk-Adjusted Momentum Return in the Chinese Stock Market

We study manipulated information dissemination and risk-adjusted momentum returns in the Chinese stock market. In this paper, we employ excess media coverage as a proxy for manipulated information dissemination. The raw momentum returns are negative across all degrees of manipulated information dissemination, but turn significantly positive after controlling for risks. These outcomes hint that the manipulation of information dissemination contributes to price instabilities, so raw momentum returns are negative but turn positive owing to risk adjustments. Moreover, we also discover that stocks with highly manipulated information dissemination exhibit a large-size characteristic and resist market risk well.

Hung-Wen Lin, Jing-Bo Huang, Kun-Ben Lin, Shu-Heng Chen
Actionable Pattern Discovery for Tweet Emotions

The most popular form of communication over the internet is text. There is a wide range of services that allow users to communicate in natural language using text messages. Twitter is one such popular microblogging platform where users post their thoughts, feelings, or opinions on a day-to-day basis. These text messages contain not only information about events, products and other topics but also the writer’s attitude. This kind of text data is useful for developing systems that detect user emotions. Emotion detection has a wide variety of applications including customer service, public policy making, education, future technology, and psychotherapy. In this work, we use a Support Vector Machine classifier model to automatically classify user emotions, achieving an accuracy of approximately 88%. The emotional information mined from such data is vast, and these findings can be more useful if the system is able to provide actionable recommendations that help users achieve their goals and gain benefits. Recommendations or patterns are actionable if the user can act on them to their advantage. Action Rules help discover ways to reclassify objects with respect to a specific target attribute that the user intends to change for their benefit. In this work, we focus on extracting Action Rules with respect to the Emotion class from user tweets. We discover actionable recommendations that suggest ways to shift a user’s emotion to a better or more positive state.
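For illustration only, a TF-IDF plus linear SVM pipeline of the kind described could look like the following; the example tweets and emotion labels are made up.

# Illustrative sketch (assumed preprocessing, not the authors' exact setup):
# a TF-IDF + linear SVM pipeline for classifying tweet emotions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = ["I love this new phone!", "This traffic is driving me crazy",
          "Feeling so calm after yoga", "Terrified about the exam tomorrow"]
emotions = ["joy", "anger", "calm", "fear"]   # hypothetical labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(tweets, emotions)

print(model.predict(["So angry at the airline right now"]))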

Angelina Tzacheva, Jaishree Ranganathan, Sai Yesawy Mylavarapu
Fourth Industrial Revolution: An Impact on Health Care Industry

The World Economic Forum annual meeting, held in Davos, Switzerland, emphasized the Fourth Industrial Revolution as one of the most cutting-edge, innovative developments of the forthcoming era. It has a major impact on the future of production and on the role of government, business and academia in all developing technologies and innovation where industries, communication and technologies meet. The Fourth Industrial Revolution combines the physical, digital, and biological spaces and is changing the healthcare industry. In this work, FCN-32 semantic segmentation was performed on brain tumor images and produced good results in identifying tumors when the predicted images were compared against the ground truth. A best loss of 0.0108 and an accuracy of 0.9964 were achieved for the given tumor images. Earlier detection and analysis of disease through artificial intelligence techniques can improve diagnosis and treatment. The healthcare industry can thus provide faster, higher-quality services to remote, rural and otherwise unreachable areas, thereby reducing the cost of hospitalization.

Prisilla Jayanthi, Muralikrishna Iyyanki, Aruna Mothkuri, Prakruthi Vadakattu
Human–AI Collaboration Development: Interim Communication Rivalry of Generation

The technical revolution has been opening unprecedented opportunities, but at the same time it has forced people to face unheard-of ethical problems. The author believes that the totality of problems will include, on the one hand, ethical aspects of people’s attitudes to AI and, on the other, problems associated with AI itself, which can lead to systemic risks that exceed the danger of using nuclear technologies that humanity was unable to keep under control. In this regard, it is obvious that the potential risks and possible losses and damages predicted by scientific analysis, which are fraught with the reckless use of AI, are to be taken with great attention, although the probability of their occurrence may seem quite low today. Based on a systemic approach, the Strauss–Howe generational theory and survey methodology, the author has conducted a pilot study engaging two sample populations – one of Russian university academics belonging to Generation X and the other of Russian university students belonging to Generation Y. The overall number of pilot survey participants exceeded 100 respondents. The interim research results have revealed a kind of current communication rivalry between AI app users belonging to Generation Y and the AI apps’ ‘virtual hostesses’, and collaboration trends in the attitudes of AI app users belonging to Generation X. These differences are attributed to the profound differences in the two generations’ value systems and mentalities, and might highlight a deeper problem of a possible partial loss of the national value system as the core of the national mentality, as well as a possible mainstream of Human–AI collaboration development based on a kind of communication rivalry and potential intellectual slavery.

Olga Burukina
Detection of Cutaneous Tumors in Dogs Using Deep Learning Techniques

Cytological diagnosis is useful in practice compared with histopathology, since it can classify pathologies among cutaneous masses and the samples can be collected easily, without anesthetizing the patient and at very low cost. However, an experienced veterinarian performs a cytological diagnosis in approximately 25 min. Artificial intelligence is being used for the diagnosis of many pathologies in human medicine; the experience gained over years of work in the field allows correct diagnoses to be issued, and this experience can be used to train an intelligent system. In this work, we collected a total of 1500 original cytologic images, performed some preliminary tests, and propose a deep learning based approach for image analysis and classification using convolutional neural networks (CNN). To adjust the parameters of the classification model, a random and grid search will be applied, modifying the batch size of images for training, the number of layers, the learning rate and the selection of three optimizers: Adadelta, RMSProp and SGD. The performance of the classifiers will be evaluated by measuring the accuracy and two loss functions: categorical cross-entropy and mean square error. These metrics will be evaluated on a set of images different from those with which the model was trained (test set). By applying this model, an image classifier can be generated that efficiently identifies a cytology diagnosis in a short time and with an optimal detection rate. This is the first approach towards the development of a more complex model for detecting skin masses of all types.
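A sketch of such a hyperparameter search over optimizers, network depth, and batch size, using Keras and random stand-in data; the image size, class count, and toy inputs are assumptions, not the cytology dataset or the authors' architecture.

# Sketch of the kind of hyperparameter search described above.
import itertools
import numpy as np
import tensorflow as tf

def build_cnn(n_layers, optimizer, n_classes=6, input_shape=(64, 64, 3)):
    model = tf.keras.Sequential([tf.keras.Input(shape=input_shape)])
    for _ in range(n_layers):
        model.add(tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"))
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(n_classes, activation="softmax"))
    # the paper compares cross-entropy and MSE losses; cross-entropy shown here
    model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy data standing in for the cytology images
X = np.random.rand(32, 64, 64, 3).astype("float32")
y = np.random.randint(0, 6, size=32)

# Grid over optimizer, depth and batch size (a random search would sample this grid)
for opt, depth, batch in itertools.product(["adadelta", "rmsprop", "sgd"], [1, 2], [8, 16]):
    model = build_cnn(depth, opt)
    hist = model.fit(X, y, epochs=1, batch_size=batch, verbose=0)
    print(opt, depth, batch, hist.history["accuracy"][-1])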

Lorena Zapata, Lorena Chalco, Lenin Aguilar, Esmeralda Pimbosa, Iván Ramírez-Morales, Jairo Hidalgo, Marco Yandún, Hugo Arias-Flores, Cesar Guevara
Future Research Method of Landscape Design Based on Big Data

Big data has become a very hot topic in the field of urban research and planning, which can contribute to the full scale, refinement, humanization and experience quantification of urban planning, but it is still rarely applied in the field of landscape architecture. Big data is dynamic and objective, so it is suitable for landscape research. This paper constructs a new approach to landscape research based on big data with reference to the PERSONA approach in Internet products. Then, through the literature review, it is found that Volunteered Geographic Information (VGI) is more suitable for small scale site analysis.

Jingcen Li
Multicriteria Analysis in the Proposed Environmental Management Regulations for Construction in Aurora, Guayas, Ecuador

Studies have shown that the construction industry is one of the biggest polluters on the planet, so environmental standards must be implemented to minimize its impact. The daily increase in housing is shrinking the forests and areas once protected. Due to the increase in demand for housing in Guayaquil, the population has migrated to neighboring cantons, acquiring villas in urbanizations. This growth in the real estate sector is not ecologically sustainable because materials harmful to the environment and health are used, in addition to machinery that runs on fossil fuel. The objective of this research is to determine the new environmental management regulations for the building systems of “La Aurora” by reviewing the pollutants released by the materials most used in construction, in order to establish the allowed contamination margins and the means to compensate for the damage created. To determine the new regulations, a scenario analysis combined with neutrosophic cognitive maps and the TOPSIS multicriteria method is used. This combination allows analyzing the main scenarios and ordering them according to the multiple dimensions of the problem. Among the proposals to mitigate environmental pollution are the use of less polluting materials in construction, clean energy for the illumination of homes and public spaces, and the increase of green areas and reforestation programs per m2 of developed area.

Christian Zambrano Murillo, Jesús Rafael Hechavarría Hernández, Maikel Leyva Vázquez
Random Samplings Using Metropolis Hastings Algorithm

Random walk sampling is an important method for analyzing any kind of network; it allows knowing the network’s state at any time, independently of the node from which the random walk starts. In this work, we have implemented a random walk of this type on a Markov chain network through the Metropolis-Hastings random walk algorithm. This algorithm is an efficient sampling method because it ensures that all nodes can be sampled with uniform probability. We have determined the number of rounds of a random walk required to ensure the steady state of the network system. We concluded that, to determine the correct number of rounds with which the system will reach the steady state, it is necessary to start the random walk from different nodes, selected analytically, especially looking for nodes that may yield critical random walks.
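A minimal sketch of a Metropolis-Hastings random walk over an undirected graph, using the standard acceptance ratio that makes the stationary distribution over nodes uniform; the toy adjacency list and round count are illustrative only.

# Metropolis-Hastings random walk: accept a move to a neighbor with probability
# min(1, deg(current)/deg(candidate)), which yields a uniform stationary distribution.
import random

graph = {                     # adjacency list of a small example network
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c"],
}

def mh_random_walk(graph, start, rounds, seed=0):
    random.seed(seed)
    node, visits = start, {v: 0 for v in graph}
    for _ in range(rounds):
        candidate = random.choice(graph[node])
        accept = min(1.0, len(graph[node]) / len(graph[candidate]))
        if random.random() < accept:
            node = candidate
        visits[node] += 1
    return visits

# With enough rounds the visit counts approach uniformity regardless of the start node
print(mh_random_walk(graph, "a", 20000))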

Miguel Arcos-Argudo, Rodolfo Bojorque-Chasi, Andrea Plaza-Cordero
The Research on Automatic Acquirement of the Domain Terms

Domain terms have different features in different domains. In this paper, we take TCM clinical symptom terms as an example to discuss the acquirement of domain terms, owing to the particularity and complexity of clinical symptom terms. We analyze the features of TCM clinical symptom terms and define a formal representation of their word-formation. Then we use the terms in the TCM Clinical Terminology as seed terms and generate a word-formation rule base. We recognize new TCM clinical symptom terms in medical records based on the word-formation rule base. We then verify the recognized terms with a statistical method to implement the automatic recognition of TCM clinical symptom terms, as the basis for data analysis and data application in the future.

Liangliang Liu, Haitao Wang, Jing Zhao, Fan Zhang, Chao Zhao, Gang Wu, Xinyu Cao
Behavioral Analysis of Human-Machine Interaction in the Context of Demand Planning Decisions

The trend of digitalization has led to disruptive changes in production and supply chain planning, where autonomous machines and artificial intelligence gain competitive advantages. Besides, the satisfaction of customers’ wishes has reached top priority for demand-driven companies. Consequently, companies implement digital applications, for instance neural networks for accurate demand forecasting and optimized decision-making tools, to cope with nervous operational planning activities. Since planning tasks require human-machine interaction to increase the performance and efficiency of planning decisions, this analysis focuses on forms of interaction to determine the right level of collaboration. The paper outlines various levels of interaction and analyses the impact of human reactions in the context of an industrial demand planning algorithm use case at Infineon Technologies AG by conducting a behavioral experiment. The results show that variance in the levels of human-machine interaction influences human acceptance of algorithms, but further experiments need to be conducted to outline an overall framework.

Tim Lauer, Rebecca Welsch, S. Ramlah Abbas, Michael Henke
Deep-Learned Artificial Intelligence and System-Informational Culture Ergonomics

System-informational culture (SIC) phenomenology impels humans to work in a sophisticated scientific space of computer models. Applying computer instrumental systems, one has to investigate and compare different fields of knowledge while suffering constant cognitive, educational, and intellectual problems. Inter-disciplinary activity in SIC leans on the understanding of meanings presented in the utmost mathematical abstractions (UMA). Work in the SIC era unites cognition, education, and scientific research. SIC entelechies are to evolve the rational part of consciousness. The objective is achievable by means of purposeful labor assisted by deep-learned artificial intelligence (DL AI). A technology is contributed that allows auto-moulding of the consciousness double helix in order to solve the universalities problem. DL AI is to unwind intellectual processes and develop a person’s scope of life. The system axiomatic method is applied to the coordinatization method and to the investigation of the continuity property.

Nicolay Vasilyev, Vladimir Gromyko, Stanislav Anosov
Association Matrix Method and Its Applications in Mining DNA Sequences

Many mining algorithms have been presented for business big data such as market baskets, but they are not effective or efficient for mining DNA sequences, each of which typically has a small alphabet but a very long length. This paper designs a compact data structure called the Association Matrix and gives an algorithm to specifically mine long DNA sequences. The Association Matrix is a novel in-memory data structure which can be so compact that it can deal with super-long DNA sequences in a limited memory space. Thus, based on the Association Matrix structure, we can design algorithms for efficiently mining key segments from DNA sequences. Additionally, we show our related experiments and results in this paper.
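The paper's Association Matrix is not specified here, so the following simplified sketch only conveys the general idea: a fixed 4×4 matrix over the DNA alphabet that accumulates adjacent-symbol counts in constant memory while streaming an arbitrarily long sequence.

# Simplified illustration only; this is not the authors' Association Matrix structure.
import numpy as np

ALPHABET = "ACGT"
IDX = {base: i for i, base in enumerate(ALPHABET)}

def association_matrix(sequence):
    m = np.zeros((4, 4), dtype=np.int64)   # constant memory regardless of sequence length
    for a, b in zip(sequence, sequence[1:]):
        m[IDX[a], IDX[b]] += 1
    return m

seq = "ACGTACGGTTACG" * 1000               # stands in for a much longer sequence
print(association_matrix(seq))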

Guojun Mao
Deep Learning-Based Real-Time Failure Detection of Storage Devices

With the rapid development of cloud technologies, evaluating cloud-based services has emerged as a critical consideration for data center storage system reliability, and ensuring such reliability is the primary priority for such centers. Therefore, a mechanism by which data centers can automatically monitor and perform predictive maintenance to prevent hard disk failures can effectively improve the reliability of cloud services. This study develops an alarm system for self-monitoring hard drives that provides fault prediction for hard disk failure. Combined with big data analysis and deep learning technologies, machine fault pre-diagnosis technology is used as the starting point for fault warning. Finally, a predictive model is constructed using Long Short-Term Memory (LSTM) recurrent neural networks (RNN). The resulting monitoring process provides condition monitoring and fault diagnosis for equipment, which can diagnose abnormalities before failure, thus ensuring optimal equipment operation.
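A hedged sketch of an LSTM failure predictor over windows of drive-attribute readings; the window length, feature count, and random data are assumptions, not the authors' model.

# Sketch: an LSTM classifier over windows of SMART-attribute readings that flags
# drives likely to fail.
import numpy as np
import tensorflow as tf

timesteps, n_features = 24, 5     # e.g. 24 hourly readings of 5 SMART attributes
X = np.random.rand(64, timesteps, n_features).astype("float32")
y = np.random.randint(0, 2, size=64)   # 1 = failed within the prediction horizon

model = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=16, verbose=0)

print(model.predict(X[:3]))   # failure probabilities for three drives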

Chuan-Jun Su, Lien-Chung Tsai, Shi-Feng Huang, Yi Li
Identifying Touristic Interest Using Big Data Techniques

The objective of this paper is to identify the most visited places through a sentiment analysis of the tweets posted by people who visited a specific region of a city. The analyzed data were related to preferences and opinions about tourist places. This paper outlines an architectural framework and a methodology to collect and analyze big data from the Twitter platform.

Maritzol Tenemaza, Loza-Aguirre Edison, Myriam Peñafiel, Zaldumbide Juan, Angelica de Antonio, Jaíme Ramirez
The Development of the Theory of Quality Assessment of the Information, Taking into Account Its Structural Component

The automation of the production part of the search strategy is a topical issue today. It requires studying the internal organization of information, presented not in a semantic way but from the position of structuring information. Mathematical formulas reflecting the internal organization of information, which affects the effectiveness of the choice of the method for solving a problem, become relevant. Modern quality indicators do not use parameters of information structuring. But structured information can form stable links and relationships between the procedural knowledge of the subject area used in modern computer systems to support scientific research and to organize an effective selection of the most appropriate method for solving the current applied problem. The following scientific idea is proposed: a structured subject area has an optimal scope for making any right decision. Structured information cannot be superfluous or incomplete, since it takes into account all ideal cases.

Olga Popova, Yury Shevtsov, Boris Popov, Vladimir Karandey, Vladimir Klyuchko
A Meta-Language Approach for Machine Learning

In the last decade, machine learning has increasingly been utilized for solving various types of problems in different domains, such as manufacturing, finance, and healthcare. However, designing and fine-tuning algorithms require extensive expertise in artificial intelligence. Although many software packages wrap the complexity of machine learning and simplify its use, programming skills are still needed for operating algorithms and interpreting their results. Additionally, as machine learning experts and non-technical users have different backgrounds and skills, they experience issues in exchanging information about requirements, features, and the structure of input and output data. This paper introduces a meta-language based on the Goal-Question-Metric paradigm to facilitate the design of machine learning algorithms and promote end-user development. The proposed methodology was initially developed to formalize the relationship between conceptual goals, operational questions, and quantitative metrics, so that measurable items can help quantify qualitative goals. Conversely, in our work, we apply it to machine learning with a two-fold objective: (1) empower non-technical users to operate artificial intelligence systems, and (2) provide all the stakeholders, such as programmers and domain experts, with a modeling language.

Nicholas Caporusso, Trent Helms, Peng Zhang
Cascading Convolutional Neural Network for Steel Surface Defect Detection

Steel is the most important material in the world of engineering and construction. Modern steelmaking relies on computer vision technologies, like optical cameras to monitor the production and manufacturing processes, which helps companies improve product quality. In this paper, we propose a deep learning method to automatically detect defects on the steel surface. The architecture of our proposed system is separated into two parts. The first part uses a revised version of single shot multibox detector (SSD) model to learn possible defects. Then, deep residual network (ResNet) is used to classify three types of defects: Rust, Scar, and Sponge. The combination of these two models is investigated and discussed thoroughly in this paper. This work additionally employs a real industry dataset to confirm the feasibility of the proposed method and make sure it is applicable to real-world scenarios. The experimental results show that the proposed method can achieve higher precision and recall scores in steel surface defect detection.
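A conceptual sketch of the two-stage cascade (a detector proposes defect regions, a ResNet classifies each crop); the region proposals below are placeholders rather than real SSD outputs, and the class count is taken from the three defect types named above.

# Conceptual sketch of the detect-then-classify cascade; not the authors' trained models.
import torch
import torchvision

classifier = torchvision.models.resnet18(num_classes=3)  # Rust, Scar, Sponge
classifier.eval()

image = torch.rand(3, 512, 512)                          # stands in for a steel-surface image
proposals = [(50, 50, 200, 200), (300, 100, 450, 260)]   # (x1, y1, x2, y2) from stage one

with torch.no_grad():
    for x1, y1, x2, y2 in proposals:
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(224, 224))
        logits = classifier(crop)
        print(logits.argmax(dim=1).item())               # predicted defect class for this region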

Chih-Yang Lin, Cheng-Hsun Chen, Ching-Yuan Yang, Fityanul Akhyar, Chao-Yung Hsu, Hui-Fuang Ng
Machine Self-confidence in Autonomous Systems via Meta-analysis of Decision Processes

Algorithmic assurances assist human users in trusting advanced autonomous systems appropriately. This work explores one approach to creating assurances in which systems self-assess their decision-making capabilities, resulting in a ‘self-confidence’ measure. We present a framework for self-confidence assessment and reporting using meta-analysis factors, and then develop a new factor pertaining to ‘solver quality’ in the context of solving Markov decision processes (MDPs), which are widely used in autonomous systems. A novel method for computing solver quality self-confidence is derived, drawing inspiration from empirical hardness models. Numerical examples show our approach has desirable properties for enabling an MDP-based agent to self-assess its performance for a given task under different conditions. Experimental results for a simulated autonomous vehicle navigation problem show significantly improved delegated task performance outcomes in conditions where self-confidence reports are provided to users.

Brett Israelsen, Nisar Ahmed, Eric Frew, Dale Lawrence, Brian Argrow
Research on Accuracy of Flower Recognition Application Based on Convolutional Neural Network

Compared with traditional flower recognition methods, the existing flower recognition applications on the market use advanced deep learning technology to improve the accuracy of plant recognition and solve the problem of plant identification. This article studies five applications that users commonly use, comparing and analyzing their recognition accuracy, and finally puts forward feasible advice for further improvement of flower recognition applications. A sampling survey method was adopted: garden flowers and wild flowers were divided into different levels according to how common they are. Each type of flower was shot from 5 different angles and scenes and recognized by the five applications separately. The results showed that the rankings of the five applications evaluated were Hua Bangzhu, Hua Banlv, Xing Se, Microsoft’s Flower Recognition, and Find Flower Recognition. At present, continuous improvement is needed in terms of technology, products and plant libraries.

Jing-Hua Han, Chen Jin, Li-Sha Wu
Mapping Digital Media Content

The paper suggests that a variety of newly emerged/emerging journalistic forms have enriched online media content. The authors define these as innovative textual, audio, pictorial, and/or video techniques for presenting journalistic production. By examining digital content across both desktop and mobile-based platforms of media organisations, the authors identify 15 ‘hybrid’ and 9 ‘genuine’ new content forms and suggest a draft table-format map for their classification.

Stella Angova, Svetla Tsankova, Martin Ossikovski, Maria Nikolova, Ivan Valchanov
Social Media Competition for User Satisfaction: A Niche Analysis of Facebook, Instagram, YouTube, Pinterest, and Twitter

This paper explores the media-ecological positions of five social media services among users through a niche analysis of five key SNS services (Facebook, Instagram, YouTube, Pinterest, and Twitter). Based on the results of a questionnaire of 224 SNS users, a factor analysis was carried out to extract five common factors: relationship, sociality, convenience, routine, and entertainment. The results of the niche analysis showed that Facebook had the widest niche in sociality (.627) and convenience (.636), and YouTube showed the widest niche in routine (.670) and entertainment (.615). For relationship (.520), Instagram had the widest niche. In terms of the five factors, Facebook and YouTube have the greatest overlap in relationship (1.826) and sociality (2.696), while Pinterest and Twitter had the biggest overlap in routine (1.937), entertainment (2.263) and convenience (2.583). Overall, YouTube and Twitter had the most overlap. Facebook, Instagram, and YouTube had a competitive advantage over Pinterest in terms of all factors.
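For readers unfamiliar with niche analysis, the following sketch computes Dimmick-style niche breadth and overlap from gratification ratings; the rating matrices, scale bounds, and item counts are made up and are not the study's data.

# Illustrative niche breadth and overlap calculations on synthetic ratings.
import numpy as np

def niche_breadth(ratings, lo=1, hi=5):
    """ratings: respondents x items matrix of gratification scores on one platform."""
    k = ratings.shape[1]
    per_person = (ratings.sum(axis=1) - k * lo) / (k * (hi - lo))
    return per_person.mean()

def niche_overlap(ratings_a, ratings_b):
    """Mean squared per-item difference between two platforms; lower = more overlap."""
    return ((ratings_a - ratings_b) ** 2).mean(axis=1).mean()

rng = np.random.default_rng(1)
facebook = rng.integers(1, 6, size=(224, 4))   # 224 respondents, 4 items per factor
youtube = rng.integers(1, 6, size=(224, 4))

print(niche_breadth(facebook), niche_overlap(facebook, youtube))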

Sang-Hee Kweon, Bo Young Kang, Liyao Ma, Wei Guo, Zaimin Tian, Se-jin Kim, Heaji Kweon
The Diffusion of News Applying Sentiment Analysis and Impact on Human Behavior Through Social Media

The Web is the largest source of information today. One group of these data is the news disseminated through social networks, information that needs to be processed in order to understand its main use and thus the impact of these media on the dissemination of news. To address this problem, we propose the use of data mining techniques such as sentiment analysis to validate information that comes from social media. The objective of this research is to propose a systematic mapping method that allows determining the state of the art of research on the diffusion of news applying sentiment analysis and its impact on human behavior through social media. This initial research presents, as a case study, a time range up to 2017 for research related to news that uses data mining techniques such as sentiment analysis for social media in major search engines.

Myriam Peñafiel, Rosa Navarrete, Maritzol Tenemaza, Maria Vásquez, Diego Vásquez, Sergio Luján-Mora
Ensemble-Based Machine Learning Algorithms for Classifying Breast Tissue Based on Electrical Impedance Spectroscopy

The initial identification of breast cancer and the prediction of its category have become a requirement in cancer research because they can simplify the subsequent clinical management of patients. The application of artificial intelligence techniques (e.g., machine learning and deep learning) in medical science is becoming increasingly important for intelligently transforming all available information into valuable knowledge. Therefore, we aimed to classify six classes of freshly excised tissues from a set of electrical impedance measurement variables using five ensemble-based machine learning (ML) algorithms, namely, the random forest (RF), extremely randomized trees (ERT), decision tree (DT), gradient boosting tree (GBT) and AdaBoost (Adaptive Boosting) (ADB) algorithms, which can be subcategorized as bagging and boosting methods. In addition, the ranked order of the variables based on their importance differed across the ML algorithms. The results demonstrated that the three bagging ensemble ML algorithms, namely RF, ERT and DT, yielded better classification accuracies (78–86%) compared with the two boosting algorithms, GBT and ADB (60–75%). We hope that our results will help improve the classification of breast tissue to allow the early prediction of cancer susceptibility.
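An illustrative comparison of the five named classifiers on synthetic six-class data with scikit-learn; the electrical impedance dataset itself is not reproduced here.

# Compare RF, ERT, DT, GBT and AdaBoost by cross-validated accuracy on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=9, n_informative=6,
                           n_classes=6, random_state=0)

models = {
    "RF":  RandomForestClassifier(random_state=0),
    "ERT": ExtraTreesClassifier(random_state=0),
    "DT":  DecisionTreeClassifier(random_state=0),
    "GBT": GradientBoostingClassifier(random_state=0),
    "ADB": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} accuracy")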

Sam Matiur Rahman, Md Asraf Ali, Omar Altwijri, Mahdi Alqahtani, Nasim Ahmed, Nizam U. Ahamed
Cognitive Solutions in the Enterprise: A Case Study of UX Benefits and Challenges

The consumer market has witnessed a proliferation of cognitive solutions. This increase in consumer expectations for AI technology has led enterprise IT leaders to develop cognitive solutions to improve employee productivity, enhance marketing and sales insights, and make better data-driven decisions. As UX designers supporting the enterprise, we have been gaining experience working with cognitive solutions in multiple contexts, from sentiment analysis of people and news for sellers, cognitively-enhanced conflict resolution of conference calls, capability analysis of team performance, to various chatbots. We will discuss several different cognitive solutions that have been created for the enterprise and provide some recommendations and best practices.

Jon G. Temple, Brenda J. Burkhart, Ed T. McFadden, Claude J. Elie, Felix Portnoy
Academic Quality Management System Audit Using Artificial Intelligence Techniques

Quality management systems are a challenge for higher education centers. Nowadays, there are different management systems, for instance for quality, the environment, and information security, that can be applied to education centers, but implementing all of them is not a guarantee of education quality because the educational process is very complex. However, in recent years quality management systems for higher education centers have been gaining importance, especially in Europe and North America, although in Latin America this is still an unexplored field. Quality in higher education centers is a very complex problem because it is difficult to measure: there are many academic processes such as enrollment, matriculation and teaching-learning, with many stakeholders such as students, teachers, authorities and even society, in many locations such as campuses, buildings and laboratories with different resources. Each process generates a large amount of records and documentation. This information has a varied nature and is present in structured and unstructured form. In this context, artificial intelligence techniques can help us to analyze and manage knowledge. Our work presents a new approach to auditing academic information with machine learning and information retrieval. In our experiments, we used information about syllabi, grades, assessments and online content from a Latin American university. We conclude that using artificial intelligence techniques minimizes decision support time, allows full data analysis instead of a data sample, and finds patterns never seen before in the case study university.

Rodolfo Bojorque, Fernando Pesántez-Avilés
Axonal Delay Controller for Spiking Neural Networks Based on FPGA

In this paper, the implementation of a programmable Axonal Delay Controller (ADyC) mapped on an FPGA-based hardware Neural Processor (NP) is reported. It is possible to define axonal delays of between 1 and 31 emulation cycles for global and local pre-synaptic spikes generated by the NP, extending the temporal characteristics supported by this architecture. The prototype presented in this work contributes to the realism of the network, which mimics the temporal biological characteristics of spike propagation through the cortex. The contribution of temporal information is strongly related to the learning process. ADyC operation is transparent to the rest of the system and neither affects the remaining tasks executed by the NP nor the emulation time period. In addition, a hardware example of a neural oscillator with programmable delays configured for a set of neurons is presented in order to demonstrate full platform functionality and operability.

Mireya Zapata, Jordi Madrenas, Miroslava Zapata, Jorge Alvarez
Ubiquitous Fitting: Ontology-Based Dynamic Exercise Program Generation

Exercising regularly is one of the most important factors in reducing the incidence of disease and decreasing the proportion of people in “sub-health”. Regular exercise has many positive effects on the body’s systems, while inappropriate forms of exercise can cause problems or even have adverse consequences for health. Therefore, this research aims to develop an ontology-driven knowledge-based system to dynamically generate personalized exercise programs. The generated plans are exposed as REST-style web services, which can be accessed from any Internet-enabled device and deployed in cloud computing environments. To ensure the practicality of the generated exercise plans, the encapsulated knowledge used as a basis for inference in the system is acquired from domain experts. We also integrate the system with wearable devices so that we can collect real-time data, for example, heart rate. In the future, by overcoming the limitations of the equipment, accuracy and reliability can be further improved.

Chuan-Jun Su, Yi-Tzy Tang, Shi-Feng Huang, Yi Li
Innovation and Artificial Intelligence

In this article, we examine advanced work in which digital innovation remains at the center of the creative process – as an example of creativity – to outline how creativity and AI can gain mutual advantage from one another. Moreover, we discuss the distinct effects of cognitive shifting and intelligence on creativity. As AIs grow in knowledge, they are likely to be involved in a considerable proportion of our endeavors, and we will extend ourselves with abilities to make use of such additional knowledge, moving towards a cyborg situation in which the mental and physical spaces become increasingly blurred with our advanced understanding.

Muhammad Sohaib Shakir, Farhat Mehmood, Zarina Bibi, Maliha Anjum

Software, Service and Systems Engineering

Frontmatter
Service Model Based on Information Technology Outsourcing for the Reduction of Unfulfilled Orders in an SME of the Peruvian IT Sector

In the current market, small- and medium-sized companies (SMEs) face losses due to poor process control. The core activities of information technology (IT) outsourcing service companies are to provide outsourcing services related to technology and information control, which is why it is crucial to work with standardized, efficient processes, to not affect the main process and resources involved. In this document, a case study of an SME is evaluated, related to a deficient billing process, which is not able to fulfill all of its orders. To solve the problem, we propose an IT outsourcing service model, based on the management of processes, knowledge, and change. After the model was validated, it was evidenced that it allowed the integration and finalization of the services provided by the company, increasing the monthly income by 80%.

Renato Bobadilla, Alejandra Mendez, Gino Viacava, Carlos Raymundo, Javier M. Moguerza
A Back-End Method Realizing the Ergonomic Advantage of Modularized Battery Systems: ICT Equipment, Electric Vehicles and Beyond

Whereas the author’s charging and discharging mechanism for modularized battery systems and the associated mathematical trade-off models proposed earlier minimize the human efforts in unloading/loading/(re)charging discharged modules, they per se cannot fully realize the potential ergonomic advantage of modularized battery systems in that considerable time is still necessitated for recharging the unloaded, discharged modules. Such time can apparently be obviated if unloaded, discharged modules are traded and swapped for some already fully charged modules at some battery swapping and charging stations. Battery module trading must be supported by a method to estimate the modules’ different remnant energy storage capacities (hereinafter, capacities) so that such disparity can be financially offset by the trading parties. This article delineates such an estimation method, which comprises Process 1 to calculate an indicator of a module’s capacity and Process 2 to estimate the module’s capacity based on the indicator and some other parameters of the module.

Victor K. Y. Chan
Conceptualizing a Sharing Economy Service Through an Information Design Approach

With the rapid change in technology and the popularity of online-offline activities, applications of the Internet of Things allow various objects, things and services to connect to each other anytime, anywhere. The efficiency and effectiveness of information enhance both production and management. Better information design motivates users and enriches their experience. This study aims to build a sharing economy service network through a visual information approach. Through the processes of inventory, thinking, planning and building, the research team searches for the touch points, gaps and opportunities throughout the user’s journey in order to understand the current situation and design a new service. The relationship between stakeholders and visitors/users is visualized during the analytical phase. Visual information viewpoints are applied for design development and for preparing service guide kits.

Tingyi S. Lin
Community Slow Transportation Service System Based on Driverless Vehicle

Currently most urban transportation systems are planned to be car-centered. Yet very little research addresses slow transportation (also called non-motorized transportation), even though it accounts for more than 50% of the whole urban transportation volume and creates more interactions with passengers than urban fast transportation. The residential community is a typical representative of slow transportation environments, and its mobility should also be valued. As one of the biggest future trends, the driverless vehicle will have a great impact on the field of transportation, and it was also found to be more likely to perform well in slow transportation environments. So how to build a community slow transportation service system based on driverless vehicles is the key purpose of this research. In this paper, we introduce the general design process of the service system and present some initial design results. We also conclude with a design summary and design approach: (1) three main design types of community slow transportation service, (2) a Kano demand model-based service optimization tool, and (3) the slow transportation service design architecture.

Jintian Shi, Rensi Zheng
Online Service Quality Measurement Utilizing Psychophysiological Responses

This study aims to measure online service quality in real time utilizing psychophysiological responses of customer experience. Instead of using questionnaires, psychophysiological responses can reflect service quality in real time. In the experiment, we designed a search task on the “Xiaomi” website. We measured the objective experience of the search task, including the mental workload and emotional experience of customers, by measuring their EEG and EDA, respectively. During the experiment, we used a think-aloud protocol to obtain the subjective experience of customers. An eye tracker was used to determine the position of the specific points they were concerned about. A consistency analysis of the psychophysiological data and the think-aloud data showed high agreement. The result showed that psychophysiological responses can be used to measure the user experience of using the service and thus reflect the service quality.

Peixian Lu, Lisha Li, Liang Ma
Cubo: Communication System for Children with Autism Spectrum Disorders

The current research, focusing on social innovation and inclusive design, seeks to understand and explore the interpersonal communication of children with autism spectrum disorders. The main objective is to contribute to the development of the cognitive and social skills of these children, improving and easing their difficulties in three domains: verbal language and communication; interpersonal relationships; and thought and behavior. In order to help solve these problems, Cubo is being developed: an innovative system of universal and inclusive communication, composed of a new universal and alternative/augmentative alphabet and a digital object that promotes autonomy, social integration, personal development and interpersonal relationships. This first paper aims to inform and promote discussion of the process and results of the ongoing research, describing the first phases of the design process itinerary and the methods applied, as well as the following phases, which involve multidisciplinarity and co-creation.

Sofia Espadinha Martins, Paulo Maldonado
Improve Your Task Analysis! Efficiency, Quality and Effectiveness Increase in Task Analysis Using a Software Tool

As the field of Task Analysis (TA) is still fragmented and poorly understood by many, a software tool, built by HFC Human-Factors-Consult GmbH, has been developed for easier and better TA. The purpose of the study is to evaluate the efficiency, quality and effectiveness of TA performed with the support of this software tool. In the experiment, 36 participants conducted a total of two hierarchical TAs (HTA) on two given tasks, once using the new software tool and once using fundamental methods, i.e., paper and pencil. The results indicated that the software tool aided participants in producing good-quality analyses, provided support and guidance during the HTA process and helped in maintaining a consistent performance level in terms of both quality and effectiveness. The findings resulted in identifying the strengths and acknowledging the shortcomings of the new software tool, thus providing a concrete direction for further improvements. Moreover, the study adds to the literature by developing checklists for making this assessment, which in turn propose components that characterize a good TA.

Vaishnavi Upadrasta, Harald Kolrep, Astrid Oehme
Applying the Ecological Interface Design Framework to a Proactively Controlled Network Routing System

The focus system of this study is the proactively controlled signal routing system used by the BBC News Division to broadcast content around its global network. The Work Domain Analysis and Decision Ladder Analysis, both associated with the Ecological Interface Design (EID) approach, are used to analyze the system. [7] states that the EID approach can improve performance in a variety of domains based on the novel information requirements it uncovers; however, we propose that certain alterations to how EID approaches proactively controlled systems would be beneficial. Specifically, when we conducted a work domain analysis, we focused on the object level of the model, which specifies that system objects are normally arranged spatially on a display to reflect the actual layout of a system. We feel that explicit reconceptualization of how the objects are arranged is necessary when approaching proactively controlled displays. Specifically, we propose an addition in the form of a temporal arrangement of objects to the current guidelines for the spatial arrangement of objects, so that the trend-based element can be stated more explicitly in the work domain model. We also conducted a Decision Ladder analysis in which the system state node of the framework included the technical specification as well as time-oriented usage profiles of resources. The analyses inform the subsequent design of a time tunnel visualization to support proactive control of the BBC Broadcast Signal Routing System.

Alexandros Eftychiou
An Instrumented Software Framework for the Rapid Development of Experimental User Interfaces

One of the more demanding aspects of formal evaluations in user-centered system design is the underlying requirement for an experimental platform that accurately reflects the functionality being tested and captures relevant user performance data. While we are unaware of any truly turn-key solutions for avoiding many of the inherent costs associated with the iterative cycle of user-centered system development and formal testing, in our mediated multitasking research, we have found that a multi-application host environment, instrumented for user tracking and a playback function for reviewing what individuals attended to, experienced, and did during the task-performance segments of interaction studies, could be implemented as a reusable infrastructure of services and control functions. In this short paper, we introduce the Testbed Framework System, a flexible and extensible software platform for use in the development and conduct of formal usability testing, and outline a number of its core functions and capabilities.

Hesham Fouad, Derek Brock
Using Virtual Reality and Motion Capture as Tools for Human Factors Engineering at NASA Marshall Space Flight Center

NASA Marshall Space Flight Center (MSFC) Human Factors Engineering (HFE) Team is implementing virtual reality (VR) and motion capture (MoCap) into HFE analyses of various projects through its Virtual Environments Lab (VEL). VR allows for multiple analyses early in the design process and more opportunities to give design feedback. This tool can be used by engineers in most disciplines to compare design alternatives and is particularly valuable to HFE to give early input during these evaluations. These techniques are being implemented for concept development of Deep Space Habitats (DSH), and work is being done to implement VR for design aspects of the Space Launch System (SLS). VR utilization in the VEL will push the design to be better formulated before mockups are constructed, saving budget and time. The MSFC VEL will continue forward-leaning implementation with VR technologies in these and other projects for better models earlier in the design process.

Tanya Andrews, Brittani Searcy, Brianna Wallace
Towards a Coherent Assessment of Situational Awareness to Support System Design in the Maritime Context

Information systems in support of Situational Awareness (SAW), such as maritime surveillance systems, are an important family of tools that have introduced automation with respect to human cognitive activities. An integral part of the system design process is the testing and evaluation phase. The formal assessment of the mental state of SAW is a complex task. It appears that SAW assessment in testing and evaluation is often either overlooked by adopting a technology-focused approach or only partially addressed through the use of specific human factors methods. In this paper the authors discuss how the testing and evaluation of maritime surveillance systems could account both for the system components enabling Situational Awareness and for the human element. Furthermore, for such systems a simple and coherent list of key performance indicators and measures of performance is provided.

Francesca de Rosa, Anne-Laure Jousselme
An IT Project Management Methodology Generator Based on an Agile Project Management Process Framework

Information Technology Project Management, and Software Project Management in particular, depends heavily on the project’s type and constraints. Quality, financial, technical, schedule, complexity and other constraints significantly affect the management process. Over the last two decades, project management methodologies have been developed to support the project management effort. Many methodologies cover generic approaches emphasizing planning or estimation activities, others tracking, others quality, and others very specific management practices that could support the delivery of very specific projects. This paper introduces an adjustable (agile) project management framework for managing information technology projects of any type. The framework divides the management activities into systems engineering management and systems acquisition management phases and operates as a methodology generator fed by the project constraints. The resulting project management methodology is a combination of management and engineering phases based on the needs and constraints of each project per case.

Evangelos Markopoulos
Analysis on Visual Information Structure in Intelligent Control System Based on Order Degree and Information Characterization

Efficient intelligent control system helps enterprises to create greater economic value and social value, so the rationalization of information structure plays a decisive role in operators’ cognition and operational efficiency. Based on the information characterization method, this paper presents a model of applying the order degree algorithm to the intelligent control system and applies it to the MES production line control system of an enterprise. It not only quantifies the advantages and disadvantages of the information structure by the order degree algorithm, but also provides the direction for the design of the information structure at the beginning of the design process from the microscopic perspective.

Linlin Wang, Xiaoli Wu, Weiwei Zhang, Yiyao Zou, Xiaoshan Jiang
An Analysis of Mobile Questionnaire Layouts

This study examines the effects of mobile questionnaire layouts. The goal is to shed more light on mobile questionnaire usage. Furthermore, we aim to give guidance for researchers implementing online surveys. Researchers in a variety of fields use online questionnaires for their flexibility and efficiency. Poor usability on mobile devices may be associated with underrepresentation of certain target groups or lower data quality. In contrast, the use of well-designed smartphone-based surveys can open up new possibilities for researchers. We developed three different layout variants for comparison with an international sample of N = 204 smartphone users. The results show that grouping questions on separate pages works best with regard to missing values, dropouts, and completion time. However, results also suggest a possible distortion of answering patterns in this layout.

Helge Nissen, Yi Zhang, Monique Janneck
Human Interaction with the Output of Information Extraction Systems

Information Extraction (IE) research has made remarkable progress in Natural Language Processing using intrinsic measures, but little attention has been paid to human analysts as downstream processors. In one experiment, when participants were presented text with or without markup from an IE pipeline, they showed better text comprehension without markup. In a second experiment, the markup was hand-generated to be as relevant and accurate as possible to find conditions under which markup improves performance. This experiment showed no significant difference between performance with and without markup, but a significant majority of participants preferred working with markup to without. Further, preference for markup showed a fairly strong correlation with participants’ ratings of their own trust in automation. These results emphasize the importance of testing IE systems with actual users and the importance of trust in automation.

Erin Zaroukian, Justine Caylor, Michelle Vanni, Sue Kase
Antenna Technology in Wireless Biometric Systems

The article presents basic medical research, as well as the measuring devices and methods used in these studies. The human life parameters that can be obtained from the examination are also characterized. The concept of a wireless biometric system consisting of a research module, a central unit and an antenna is presented. The first two elements are described theoretically, while the antenna was developed in the CST Microwave Studio program. It is a microstrip antenna working in the frequency range from 2.3 GHz to 2.8 GHz. The energy gain of the designed antenna is from 3 dBi to 3.8 dBi. The physical model of the antenna meets the assumptions for its use in a wireless biometric system.

Rafal Przesmycki, Marek Bugaj, Marian Wnuk
Parametric Study of Pin-Fins Microchannel Heat Exchanger

MEMS heat exchangers use microchannels to increase the surface area of contact between the moving fluids. MEMS heat exchangers are typically utilized for processor chip cooling applications. This study investigates the influence of geometric and operating parameters on the performance of a pin-fin microchannel heat exchanger. ANSYS Workbench is used for modelling. The mathematical model consists of the continuity equation, the Navier-Stokes equations, and the energy equation. Water is taken as the fluid in this study. The boundary conditions of the model consist of the inlet temperature of the fluids, the inlet flow rate of the fluids, the fluid velocity on the walls (set to v = 0), the outlet pressures (set to Pout = 0), and the temperature gradient at the outlet (set equal to zero). The influence of hydraulic diameter, pin-fin diameter, structural material, and Reynolds number on the effectiveness of the heat exchanger is studied. Studies are carried out for Reynolds numbers varying between 100 and 2000. The materials considered for this study are stainless steel, silicon, and copper. The effectiveness is determined using the inlet and outlet temperatures of the fluids. The thermal performance of the MEMS heat exchanger is influenced by the hydraulic diameter and pin-fin diameter, and increases with a reduction in Reynolds number.
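As a worked illustration of the final step, effectiveness can be computed from the inlet and outlet temperatures using the standard epsilon definition (actual heat transfer divided by the maximum possible); the temperatures and capacity rates below are made-up values, not results from this study.

# Effectiveness from inlet/outlet temperatures: epsilon = Q_actual / Q_max.
def effectiveness(t_hot_in, t_hot_out, t_cold_in, c_hot, c_min):
    q_actual = c_hot * (t_hot_in - t_hot_out)
    q_max = c_min * (t_hot_in - t_cold_in)
    return q_actual / q_max

# e.g. hot water cooled from 80 C to 50 C against 20 C coolant, balanced capacity rates
print(effectiveness(80.0, 50.0, 20.0, c_hot=1.0, c_min=1.0))  # 0.5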

Mohammed Ziauddin, Fadi Alnaimat, Bobby Mathew
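A minimal sketch of the two quantities the abstract relies on, the channel Reynolds number and the effectiveness computed from terminal temperatures, is given below; the geometry, velocity, and temperatures are placeholders rather than the paper's values.

```python
# Illustrative calculation of Reynolds number and heat-exchanger effectiveness
# (placeholder values; the paper's geometry and temperatures are not reproduced here).

# Approximate water properties near 300 K
rho = 997.0        # density, kg/m^3
mu = 8.9e-4        # dynamic viscosity, Pa*s

d_h = 200e-6       # assumed hydraulic diameter, m
v = 0.5            # assumed mean channel velocity, m/s

Re = rho * v * d_h / mu
print(f"Re = {Re:.0f}")

# Effectiveness from terminal temperatures, assuming the hot stream is the
# minimum-capacity stream: eps = (T_h_in - T_h_out) / (T_h_in - T_c_in)
T_h_in, T_h_out, T_c_in = 350.0, 320.0, 300.0   # K, placeholders
eps = (T_h_in - T_h_out) / (T_h_in - T_c_in)
print(f"effectiveness = {eps:.2f}")
```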
Evaluation of the Saudi Construction Industry for Adoption of Building Information Modelling

Building Information Modelling (BIM) has shown great benefits in the building industry at many levels: it helps enhance collaboration, increase efficiency, and reduce waste. However, to achieve good results, BIM requires a customized work plan, a set of regulations, and adequate infrastructure. The aim of this study is to evaluate the capability of the Saudi construction industry to adopt BIM. The authors apply the results of a recently completed study to identify and examine the success factors of BIM implementation in Saudi Arabia. Three main categories are examined: infrastructure, process, and existing building policies and regulations. Each category is further divided into subcategories and evaluated individually. For that purpose, a questionnaire was designed and administered to 150 individuals closely involved with the construction market of Saudi Arabia. Preliminary results revealed that many participants agreed that the lack of a written procedure is the major reason BIM is not widely adopted. The study further showed that stakeholders are not embracing BIM because it requires many changes to function properly, and that lack of knowledge and awareness is another reason for low adoption in the field. The output of this study will be used in a future study to help develop a BIM framework customized to the Saudi construction industry.

Obaid Aljobaly, Abdulaziz Banawi
An Experimental Trial of a Novel Ticketing System Using Biometrics

To prevent ticket scalping, we implemented fast identity online (FIDO) technology to authenticate ticket holders and verify their identities at a basketball game held in the city of Tsukuba, Ibaraki, Japan. This paper provides an overview of our system and describes the advantages of our method compared with other ticketing systems. The system was implemented using an open-source electronic commerce platform; adding a FIDO module to the platform enables the biometric authentication function. Moreover, we carried out a simple evaluation of the service by asking participants to respond to a questionnaire, and we present the results of the evaluation analysis.

Jun Iio, Tadashi Okada
The Role of Change Readiness in Determining Existing Relationship Between TQM Practices and Employee Performance

The significance of Total Quality Management (TQM) in fast-changing industries such as the Information Technology (IT) sector cannot be overemphasized. This study was designed to determine the relationship between employee performance and TQM practices within a technology-driven sector. In addition, employee willingness to change and Individual Change Readiness (ICR) were explored in relation to employee performance. A quantitative study of software firms in Nigeria was carried out using descriptive statistics, and the results were subjected to correlation analysis and structural equation modeling. The results show that TQM practices, as well as ICR, serve as a considerable benchmark for measuring employee performance, and the findings favor the facilitative function of ICR. These observations are critical insofar as the degree of employees' readiness to change contributes immensely to organizational transformation. This study contributes to the body of knowledge by revealing the significance of ICR, from a TQM perspective, as it influences employee performance.

Timothy Laseinde, Ifetayo Oluwafemi, Jan-Harm Pretorius, Ajayi Makinde O, Jesusetemi Oluwafemi
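As a hedged illustration of the correlation step described above, the snippet below computes Pearson correlations between aggregated scale scores; the respondent values are invented, and the paper's structural equation model is not reproduced.

```python
# Hypothetical sketch: Pearson correlations between aggregated survey scales
# (TQM practices, individual change readiness, employee performance).
import pandas as pd

# Placeholder respondent-level scale means (Likert aggregates, not the study's data)
data = pd.DataFrame({
    "tqm_practices": [3.8, 4.1, 2.9, 4.5, 3.2, 4.0],
    "change_readiness": [3.5, 4.3, 2.7, 4.6, 3.0, 3.9],
    "employee_performance": [3.6, 4.2, 3.0, 4.4, 3.1, 4.1],
})

print(data.corr(method="pearson").round(2))
```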
Modeling Decision-Making with Intelligent Agents to Aid Rural Commuters in Developing Nations

More than a billion rural merchants in the developing world depend on hiring on-demand transportation services to move people or goods to markets. Selecting the optimal fare involves decision-making characterized by multiple alternatives and competing criteria. Decision support systems are used to address this; however, those systems are based on object-based approaches that lack the high-level abstractions needed to effectively model and scale human-machine communication. This paper introduces AopifyJS, a novel agent-based decision-support tool. We developed a two-agent simulation: one agent issues a request, and the other takes a dataset of criteria preferences from a stratified sample of 104 Ethiopian commuters together with a dataset of fare alternatives, then applies the AHP and TOPSIS algorithms to weight, score, and rank those alternatives. When the simulation runs, it returns an interpretable prescription to the first agent, storing all interactions in an architecture that allows developers to program further customization as interactions scale.

Patricio Julián Gerpe, Evangelos Markopoulos
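The abstract names TOPSIS (with AHP-style criterion weights) as the ranking step; a minimal numpy TOPSIS sketch is shown below, with hypothetical fare alternatives, criteria, and weights standing in for the Ethiopian commuter dataset.

```python
# Minimal TOPSIS sketch for ranking fare alternatives (hypothetical data).
import numpy as np

# Rows: fare alternatives; columns: criteria (price, travel time, capacity) -- assumed
X = np.array([
    [120.0, 45.0, 8.0],
    [ 90.0, 60.0, 6.0],
    [150.0, 30.0, 10.0],
])
weights = np.array([0.5, 0.3, 0.2])        # e.g. derived from AHP pairwise comparisons
benefit = np.array([False, False, True])    # price and time are costs, capacity is a benefit

# 1. Vector-normalize and weight the decision matrix
V = weights * X / np.linalg.norm(X, axis=0)

# 2. Ideal and anti-ideal solutions per criterion
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3. Closeness coefficient: distance to anti-ideal / (sum of both distances)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

print("ranking (best first):", np.argsort(-closeness))
```

Alternatives with a higher closeness coefficient rank better; in a tool like AopifyJS such a ranking would correspond to the interpretable prescription returned to the requesting agent.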
Self-similarity Based Multi-layer DEM Image Up-Sampling

As one of the basic data types of GIS, DEM data, which expresses surface elevation, is widely used in many fields. Obtaining high-precision elevation data over a wide area is a major challenge, and the simple interpolation algorithms currently used are not very accurate. Because of the fractal characteristics of terrain, DEM data shows strong self-similarity. Based on this feature, this paper proposes a multi-layer DEM image up-sampling method: image up-sampling is performed multiple times, in layers, on the low-resolution DEM image, yielding high-precision DEM information with smaller error. In this paper, 30 m elevation data is progressively expanded to 10 m elevation data using this method. Experimental results show that the algorithm achieves good results and deviates only slightly from the real 10 m elevation data.

Xin Zheng, Ziyi Chen, Qinzhe Han, Xiaodan Deng, Xiaoxuan Sun, Qian Yin
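The sketch below shows a single up-sampling pass with plain bicubic interpolation as a stand-in for the paper's self-similarity-based layer; it only illustrates the 30 m to 10 m resolution change, not the proposed algorithm itself.

```python
# Minimal sketch of one DEM up-sampling pass using bicubic interpolation.
# The paper's self-similarity search is not reproduced here; this only shows
# the layered resolution increase (e.g. a 30 m grid expanded to a 10 m grid).
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
dem_30m = rng.normal(500.0, 50.0, size=(100, 100))   # placeholder 30 m elevation grid

# Up-sample by a factor of 3 (30 m grid spacing -> 10 m), cubic spline (order 3)
dem_10m = zoom(dem_30m, zoom=3, order=3)

print(dem_30m.shape, "->", dem_10m.shape)
```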
Automated Clash Free Rebar Design in Precast Concrete Exterior Wall via Generative Adversarial Network and Multi-agent Reinforcement Learning

The adoption of precast concrete elements (PCEs) is becoming popular in civil infrastructure. Since the quality of connections determines the structural properties, the design of rebar in PCEs is a mandatory stage of construction. Due to the large number of rebars, the complicated shapes of PCEs, and the complicated arrangement rules, it is labor-intensive and error-prone for designers to avoid all clashes, even using computer software. With the aid of BIM, it is desirable to have an automated, clash-free rebar design. Taking this cue, we introduce a framework with a generative adversarial network (GAN) and multi-agent reinforcement learning (MARL) for generating rebar designs and automatically avoiding clashes in PCEs. We use a GAN to generate 2D rebar designs, which are then transformed into digital environments for MARL, in which rebar layout is modelled as agent path planning. An illustrative example is presented to test the proposed framework.

Pengkun Liu, Jiepeng Liu, Liang Feng, Wenbo Wu, Hao Lan
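As a drastically simplified illustration of the reinforcement-learning half of the framework, the sketch below trains a single Q-learning agent to find a path on a small grid while avoiding blocked cells that stand in for rebar clashes; the GAN-generated designs and the multi-agent coordination of the paper are not reproduced.

```python
# Drastically simplified sketch: Q-learning path planning on a grid where
# occupied cells stand in for rebar clashes (assumed toy geometry).
import numpy as np

H, W = 6, 6
blocked = {(2, 2), (2, 3), (3, 3)}            # assumed "clash" cells
start, goal = (0, 0), (5, 5)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

Q = np.zeros((H, W, len(actions)))
rng = np.random.default_rng(0)

for _ in range(2000):                         # training episodes
    s = start
    for _ in range(100):
        a = rng.integers(4) if rng.random() < 0.2 else int(np.argmax(Q[s]))
        nxt = (s[0] + actions[a][0], s[1] + actions[a][1])
        if not (0 <= nxt[0] < H and 0 <= nxt[1] < W) or nxt in blocked:
            r, nxt = -1.0, s                  # penalize clashes and leaving the grid
        elif nxt == goal:
            r = 10.0
        else:
            r = -0.1
        Q[s][a] += 0.1 * (r + 0.9 * np.max(Q[nxt]) - Q[s][a])
        s = nxt
        if s == goal:
            break

print("learned first move from start:", actions[int(np.argmax(Q[start]))])
```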
Cognitive Ergonomic Evaluation Metrics and Methodology for Interactive Information System

For complex interactive information systems, it is essential to evaluate whether products achieve system performance within users' cognitive capacity. Most research on ergonomic evaluation focuses on macro-ergonomic methods, which do not address concrete design problems at the micro level. This paper focuses on how to identify and predict cognitive ergonomic problems based on user action and cognitive models, and it establishes a mapping between cognitive ergonomic problems and real-time, continuously measured data so that evaluation results can play a direct role in design. The methodology was applied to evaluate the ergonomic quality of the IETM used by astronauts in the space station, whose tasks include making flight plans, performing experiments, in-orbit maintenance, and so on. A series of standardized evaluation procedures was designed to explore the possibility of remote ergonomic measurement for long-term orbital operation.

Yu Zhang, Jianhua Sun, Ting Jiang, Zengyao Yang
A System for User Centered Classification and Ranking of Points of Interest Using Data Mining in Geographical Data Sets

In this paper we propose a system to automatically extract and categorize points of interest (POIs) from any given geographical data set. The system is then adapted for a user-centered approach in a web environment by customizing the order, amount, and contents of the previously created categories on a per-user basis in real time. The aim of this system is to provide users with a more flexible and less error-prone approach to POI data that can be used in geographical routing and navigation applications, replacing conventional solutions that need to be manually administered. The generated results are validated against preexisting, manually created points of interest and their corresponding categories.

Maximilian Barta, Dena Farooghi, Dietmar Tutsch
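A toy sketch of the categorization step is shown below: raw geographic features are bucketed into POI categories by their tags, and the categories are ordered by size as a naive default ranking. The records and the tag-to-category rule are invented for illustration; the paper's mining and per-user ranking are not shown.

```python
# Hypothetical sketch: bucketing raw geographic features into POI categories
# by tag, then ordering categories by size as a naive default ranking.
from collections import defaultdict

features = [
    {"name": "Cafe Krone", "tags": {"amenity": "cafe"}},
    {"name": "St. Anna", "tags": {"amenity": "place_of_worship"}},
    {"name": "City Museum", "tags": {"tourism": "museum"}},
    {"name": "Cafe Blau", "tags": {"amenity": "cafe"}},
]

categories = defaultdict(list)
for f in features:
    # crude category rule: first tag value found (a real system would use a mapping)
    category = next(iter(f["tags"].values()))
    categories[category].append(f["name"])

for cat, pois in sorted(categories.items(), key=lambda kv: -len(kv[1])):
    print(cat, pois)
```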
NASA Marshall Space Flight Center Human Factors Engineering Analysis of Various Hatch Sizes

The NASA Docking System (NDS) is a 31.4961-inch (800 mm) diameter circular hatch that astronauts pass through when docked to other pressurized elements in space or for entrance and egress in surface environments. The NDS is utilized on the Orion spacecraft and has been implemented as the International Docking System Standard (IDSS). The EV74 Human Factors Engineering (HFE) Team at NASA's Marshall Space Flight Center (MSFC) conducted human factors analyses with various hatch shapes and sizes to accommodate all astronaut anthropometries and daily task comfort. It is believed that the approximately 32-inch hatch is too small and that a larger hatch would better accommodate most astronauts. For the analyses, four participants were selected based on anthropometry percentiles: 1st-percentile female, 5th-percentile female, 95th-percentile male, and 99th-percentile male.

Tanya Andrews, Rebecca Stewart, Walter Deitzler
Ontological Description of the Basic Skills of Surgery

Artificial intelligence (AI) has recently been receiving increasing attention in the medical field. For AI to be successfully applied to surgical training and certification, ontological and semantic descriptions of clinical training procedures will be necessary. This study proposes a method for the ontological analysis of basic endoscopic surgery skills. Surgical procedures learned during basic training with a training box, for exercises such as pegboard, pattern cutting, and suturing, were analyzed and described as ontological procedures. Surgical maneuvers for endoscopic surgery performed in the training box were successfully classified and described by referencing ontological concepts. Such ontological descriptions can be applied in many ways, for example in the computerized evaluation of medical training and certification, automated robotic surgical systems, and medical safety check and alert systems. The study also revealed the need for simple and practical methods for the ontological description of clinical medicine.

Kazuhiko Shinohara
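Such descriptions can be expressed with standard semantic-web tooling; the rdflib sketch below records one training-box maneuver as an individual of an invented class hierarchy, with all class and property names chosen purely for illustration rather than taken from the paper.

```python
# Minimal rdflib sketch of an ontological description of a training-box task.
# All class and property names are invented for illustration.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/surgery#")
g = Graph()
g.bind("ex", EX)

# Class hierarchy: PegTransfer is a kind of BasicEndoscopicSkill
g.add((EX.BasicEndoscopicSkill, RDF.type, RDFS.Class))
g.add((EX.PegTransfer, RDFS.subClassOf, EX.BasicEndoscopicSkill))

# An individual maneuver with its instrument and target object
g.add((EX.maneuver01, RDF.type, EX.PegTransfer))
g.add((EX.maneuver01, EX.usesInstrument, EX.Grasper))
g.add((EX.maneuver01, EX.actsOn, EX.Peg))
g.add((EX.maneuver01, RDFS.label, Literal("Transfer peg from left board to right board")))

print(g.serialize(format="turtle"))
```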
An Alternative Graphic and Mathematical Method of Dimensional Analysis: Its Application on 71 Constructive Models of Social Housing in Guayaquil

In this study, an alternative graphic and mathematical method of analysis is presented to determine key information such as the ratio between the length and width of the analyzed orthogonal spaces and its relation to the usable floor area, in order to identify the proportions most suitable for the user. The method also allows an analysis of the spaces that make up the architectural program across the studied models, with the aim of establishing the constructive tendencies and dimensional characteristics of specific periods of time. The methodology has been applied to 71 single-family low-income housing solutions, representing the buildings inhabited by 41.3% of the population of Guayaquil, of which 50 models belong to public and NGO promotions and 21 are models self-built by users. The study records the materiality and the techniques used, classifying the analyzed models according to the type of promotion, the number of levels, the adopted grouping typology, and the architectural program that predominates in most cases.

Byron Sebastian Almeida Chicaiza, Jesús Anaya Díaz, Pamela Bermeo Rodríguez, Jesús Hechavarria Hernández, Boris Forero
Improving the Blockchain User Experience - An Approach to Address Blockchain Mass Adoption Issues from a Human-Centred Perspective

Blockchain is seen as one of the most promising technologies to have appeared in recent years. However, blockchain-based decentralised apps (DApps) have so far failed to gain mainstream traction. The objective of this paper is to outline potential solutions to the most severe adoption problems from a human-centred perspective. Existing adoption issues are analysed via unstructured quantitative and qualitative user research in order to discover their causes and consequences. The four major issues identified for mass adoption are the motivation to change, the onboarding challenge, the usability problem, and the feature problem. The insights gained from this research form the basis for solution approaches, which can be summarized in the following statements: DApps need to offer a distinct user benefit, focus on learnability, and be conveniently accessible, and their constraints need to be turned into assets.

Leonhard Glomann, Maximilian Schmid, Nika Kitajewa

Human Factors in Energy

Frontmatter
Beyond COSS: Human Factors for Whole Plant Management

A Computerized Operator Support System (COSS) has been envisioned and prototyped for nuclear power plant (NPP) operations. The COSS supports operator decision making during abnormal events by making use of an online fault-diagnostic system known as PRO-AID. PRO-AID uses existing sensors and confluence equations to detect system faults before plant threshold alarms would alert operators. In a full-scope, full-scale nuclear power plant simulator we demonstrated that early diagnosis, in conjunction with computer-based procedures, can assist operators in mitigating faults in circumstances that would normally lead to shutting down the plant. Here we conceive of a computerized plant management system (CPMS) that would complement COSS. COSS is intended to aid short-time-scale plant operations, whereas the CPMS is aimed at supporting longer-time-scale activities involving maintenance, calibration, and early-warning fault detection. Digitization allows staff to manage and monitor assets remotely, without being physically present at the equipment under surveillance, which allows for specialization by equipment and systems rather than by location. Cyber-physical systems and the industrial internet of things have no shortage of possibilities and promises, but the benefits often come with conflicting considerations: availability may trade off against increased cyber risk, increased redundancy may come at the cost of increased complexity, and outsourced engineering requires more communication across organizations and carries an administrative burden. In too many instances, humans are left to fill the gaps where automation cannot reliably or cost-effectively do the job. Human-centered design using human factors considers the role of the human as part of a human-machine system. Our COSS concept examined how a control system, a diagnostic system, an expert guidance system, and a computer-based procedure system could work in concert to aid operator actions. Here we examine the potential of human factors and human-centered design for whole-plant management using a CPMS. A CPMS could serve several valuable functions, from providing computerized procedures for field operations to using prognostics for early failure detection and managing information flows from sensors to business networks, system engineers, cyber-security personnel, and third-party vendors.

Roger Lew, Thomas A. Ulrich, Ronald L. Boring
Imagine 2025: Prosumer and Consumer Requirements for Distributed Energy Resource Systems Business Models

User-centered design and the acceptance of smart grid technologies are key factors for their success. To identify user requirements, barriers, and the variables underlying acceptance of future business models (DSO controlled, Voltage-Tariff, Peer-to-Peer), a partly standardized interview study with N = 21 prosumers and consumers was conducted. The quantitative and qualitative results demonstrate that the acceptance of each future energy business model is relatively high, and the overall usefulness of the future business models was rated higher than that of the current business model. Prosumers had a more positive attitude towards the Peer-to-Peer model, whereas consumers preferred models in which the effort is low (DSO controlled) or an incentive is offered (Voltage-Tariff). The DSO controlled model is not attractive for prosumers, who criticize the increased dependency and external control. From the results it can be concluded that tariffs should be adapted to the user type.

Susen Döbelt, Maria Kreußlein
Nuclear Power Plant Accident Diagnosis Algorithm Including Novelty Detection Function Using LSTM

Diagnosis of an accident or transient at a nuclear power plant is performed under the judgment of operators based on procedures. Although procedures are given to operators, numerous and rapidly changing parameters are generated by a variety of indicators and alarms, so interpreting a situation can be difficult or delayed. To deal with this problem, many approaches based on computerized algorithms or networks have been suggested. Although those studies proposed methods to diagnose accidents, if an unknown (or untrained) accident occurs, they cannot respond because they have no knowledge of it. In this light, this study aims at developing an algorithm to diagnose accidents that includes a "don't know" response. A long short-term memory (LSTM) recurrent neural network and an autoencoder are applied to implement the algorithm, including the novelty detection function. The algorithm is validated with various examples involving untrained cases to demonstrate its feasibility.

Jaemin Yang, Subong Lee, Jonghyun Kim
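A minimal PyTorch sketch of the novelty-detection idea follows: an LSTM autoencoder is trained to reconstruct sequences of plant parameters from known cases, and an input whose reconstruction error exceeds a threshold is answered with "don't know". The architecture, data shapes, and threshold below are illustrative assumptions, not the paper's configuration.

```python
# Minimal PyTorch sketch of LSTM-autoencoder novelty detection ("don't know" response).
# Layer sizes, sequence shapes, and the threshold are illustrative only.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_params=10, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_params, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, n_params, batch_first=True)

    def forward(self, x):                      # x: (batch, time, n_params)
        z, _ = self.encoder(x)                 # latent sequence
        recon, _ = self.decoder(z)             # reconstructed parameter sequence
        return recon

model = LSTMAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

train = torch.randn(64, 50, 10)                # placeholder "known case" sequences
for _ in range(5):                             # a few illustrative epochs
    opt.zero_grad()
    loss = loss_fn(model(train), train)
    loss.backward()
    opt.step()

# Novelty check: flag sequences the autoencoder cannot reconstruct well
threshold = 1.5                                # assumed; tuned on validation data in practice
new_seq = torch.randn(1, 50, 10)
err = loss_fn(model(new_seq), new_seq).item()
print("don't know" if err > threshold else "known accident class")
```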
Application of Economic and Legal Instruments at the Stage of Transition to Bioeconomy

The article presents the results of an analysis of official data on the energy produced from biomass in Ukraine, based on a linear regression model. A comparative analysis of general bioenergy trends was conducted for Ukraine and Poland. The analysis showed that, if existing trends persist, there is a high probability that the strategic goals of a country with significant potential for sustainable biomass will not be achieved. The necessity of applying economic and legal instruments that can stimulate the production and consumption of biofuels made from agricultural-enterprise waste and wood waste is substantiated for the stage of transition to a bioeconomy. The bioeconomic approach to the development of bioenergy can only be realized if its producers, the state, and the end users of biofuels all have an interest in the rational use of biomass resources.

Valentyna Yakubiv, Olena Panukhnyk, Svitlana Shults, Yuliia Maksymiv, Iryna Hryhoruk, Nazariy Popadynets, Rostyslav Bilyk, Yana Fedotova, Iryna Bilyk
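As a hedged illustration of the linear-trend analysis, the snippet below fits annual biomass-derived energy output with a first-degree polynomial and extrapolates it; the figures are placeholders, not the official statistics analysed in the article.

```python
# Illustrative linear-trend fit for annual energy produced from biomass.
# The values are placeholders, not the official statistics analysed in the paper.
import numpy as np

years = np.array([2012, 2013, 2014, 2015, 2016, 2017, 2018])
energy_ktoe = np.array([1565, 1610, 1934, 2102, 2832, 3046, 3208])  # placeholder values

slope, intercept = np.polyfit(years, energy_ktoe, deg=1)
print(f"trend: {slope:.0f} ktoe per year")
print(f"extrapolated 2025 output: {slope * 2025 + intercept:.0f} ktoe")
```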
Backmatter
Metadata
Title
Advances in Artificial Intelligence, Software and Systems Engineering
Editor
Prof. Tareq Ahram
Copyright Year
2020
Electronic ISBN
978-3-030-20454-9
Print ISBN
978-3-030-20453-2
DOI
https://doi.org/10.1007/978-3-030-20454-9
