
2018 | Book | 1st Edition

Trends and Advances in Information Systems and Technologies

Volume 2


About This Book

This book includes a selection of papers from the 2018 World Conference on Information Systems and Technologies (WorldCIST'18), held in Naples, Italy, on March 27–29, 2018. WorldCIST is a global forum for researchers and practitioners to present and discuss recent results and innovations, current trends, professional experiences, and the challenges of modern information systems and technologies research, together with their technological development and applications. The main topics covered are: A) Information and Knowledge Management; B) Organizational Models and Information Systems; C) Software and Systems Modeling; D) Software Systems, Architectures, Applications and Tools; E) Multimedia Systems and Applications; F) Computer Networks, Mobility and Pervasive Systems; G) Intelligent and Decision Support Systems; H) Big Data Analytics and Applications; I) Human–Computer Interaction; J) Ethics, Computers & Security; K) Health Informatics; L) Information Technologies in Education; M) Information Technologies in Radiocommunications; N) Technologies for Biomedical Applications.

Table of Contents

Frontmatter

Software Systems, Architectures, Applications and Tools

Frontmatter
Interpersonal Relationships, Leadership and Other Soft Skills in Software Development Projects: A Systematic Review

Today, software development project teams require that the professional profile of their members include not only technical competencies but also non-technical skills such as interpersonal relationships, leadership, and other soft skills. A systematic literature review (SLR) addressing the management of software development projects was performed in order to find out how the management of software development projects correlates with interpersonal relationships and other non-technical constructs. In our review, we selected twenty-three relevant articles, analyzed them systematically, and identified significant correlations between the psychological variables and the successful management of software development projects.

Rafael Elizalde, Sussy Bayona
Digital Signature Solution for Document Management Systems - The University of Trás-os-Montes and Alto Douro

The University of Trás-os-Montes e Alto Douro (UTAD), in an effort to streamline processes and reduce bureaucracy, decided to develop and use an in-house document management system to handle processes. However, this practice created additional needs, such as the actual digital signing of documents associated with the institution's business and administrative processes. This paper explores a solution proposal for this problem, documenting its functionalities and how it works. An initial application of the developed solution is also described and analyzed in order to demonstrate the overall adequacy of the proposed artefact and its impact on the institution's administrative operations.

Cláudio Pereira, Luís Barbosa, José Martins, Jorge Borges
Accuracy Comparison of Empirical Studies on Software Product Maintainability Prediction

Software maintenance is a very broad activity that ensures that the software product fulfills its changing requirements and enhancement capabilities once on the client side. Predicting software product maintainability contributes to the reduction of software product maintenance costs. In this perspective, many software product maintainability prediction (SPMP) techniques have been proposed in the literature. Some studies have empirically validated their proposed techniques, while others have compared the accuracy of the SPMP techniques. This paper reviews a set of 29 studies, identified from eight digital libraries and published between 2000 and 2017. It presents the various SPMP techniques used and details the experimental design of these studies.

Sara Elmidaoui, Laila Cheikhi, Ali Idri
Measurement Based E-government Portals’ Benchmarking Framework: Architectural and Procedural Views

E-government benchmarking can be defined as the process of classifying e-government portals according to agreed best practices. It can be used to benchmark portals, evaluate achievements, and identify missing best practices for stakeholders. The purpose of this paper is to propose and build a new benchmarking framework for e-government portals based on the measurement of best practices using a best-practice model. To achieve this purpose, we identified useful guidelines for building a new benchmarking framework, based on an analysis and discussion of the five best-known e-government benchmarking frameworks in the literature. As a result, a new framework, referred to as the Measurement Based e-Government Benchmarking Framework (MBeGBF), is proposed, which moves beyond the actual benefits of these ordinary frameworks by providing guidelines and best practices for agencies to improve their portals' quality.

Laila Cheikhi, Abdoullah Fath-Allah, Ali Idri, Rafa E. Al-Qutaish
Exploring Factors Affecting Mobile Services Adoption by Young Consumers in Cameroon

With the advancement of mobile devices and sophisticated mobile data transmission technologies nurtured by telecommunication providers of 4G services, m-commerce has become an important platform for easier consumer interactions. It is in this light that researchers have been paying much attention to how businesses can reach specific consumer segments such as teens and young adults. This research aims to investigate the factors predicting consumers' intention to adopt m-commerce in Cameroon, as well as the moderating effects of demographic variables on such prediction. Data were collected from 262 Cameroonian respondents aged under 45, the category of unconditional IT users in Cameroon. A quantitative approach based on the PLS-SEM algorithm was used. Results showed no significant moderating effect of age or gender on the hypothesis that behavioural intention positively influences consumer intention to adopt m-commerce. The findings are expected to help companies dealing with m-commerce better formulate marketing strategies to attract more users.

Frank Wilson Ntsafack, Jean Robert Kala Kamdjoug, Samuel Fosso Wamba
A Systematic Map of Mobile Software Usability Evaluation

Usability evaluation is currently considered critical to the success of mobile interactive applications. This paper presents a Systematic Mapping Study (SMS) conducted to investigate the literature on Mobile Usability Evaluation (MUE) techniques. The mapping study builds on the following classification criteria for the selection of studies: research approaches, research types, research domains, usability evaluation methods, data collection tools, types of questionnaires used in the empirical evaluations of these studies, and software quality (SQ) models. Publication channels and trends were also identified, and 81 papers on MUE were selected.

Karima Moumane, Ali Idri
Sorting Fused Images for Multi-time Analysis of the Area Surrounding the Headwaters of the Meta River

This paper focuses on the process of theme-sorting Landsat images that have been enhanced by means of multispectral-panchromatic fusion. In addition to the assessment of the fusion methodologies, the paper also highlights the changes that have occurred during the last 16 years in the area surrounding the headwaters of the Meta River, near the municipality of Puerto López (Meta, Colombia). To carry out the fusion process, the Wavelet transform was used. The transform captures suitable information about the spatial details of a panchromatic image and integrates the resulting image into the multispectral bands. Decision trees were used to classify the set of fused images. Classification with decision trees was based on the differential discrimination of the spectral ranges for each of the coverage areas associated with the multispectral bands. The study delivers five theme maps showing the changes that have occurred in the related areas, reaching a classification precision of 93.01% for the fused image, compared with 74.08% for the image without fusion.
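The decision-tree classification the abstract describes rests on discriminating threshold ranges of the spectral bands per cover class. As a hedged illustration of that idea (the bands, threshold values, and class names below are invented for this sketch, not taken from the paper):

```python
# Illustrative only: per-pixel cover classification by thresholding spectral
# ranges, the kind of rule a trained decision tree encodes. The bands (red,
# near-infrared reflectance) and thresholds here are hypothetical.

def classify_pixel(red, nir):
    """Return a cover class for one pixel from its band reflectances."""
    if red < 0.10 and nir < 0.10:
        return "water"            # low reflectance in both bands
    ndvi = (nir - red) / (nir + red)
    if ndvi > 0.3:
        return "vegetation"       # strong NIR response relative to red
    return "bare_soil"

# Classifying every pixel of a fused image this way yields a theme map.
sample = [(0.05, 0.04), (0.10, 0.45), (0.30, 0.20)]
labels = [classify_pixel(r, n) for r, n in sample]
```

Applied per pixel over the fused bands, such rules produce the theme maps whose precision the paper compares against the unfused image.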

Diego Soler, Harold De La Cruz, Javier Medina
The Gamification Systems Application Elements in the Marketing Perspective

Gamification can be a useful marketing technique that involves the use of game elements and design in real-life contexts. This study seeks to understand the gamification elements used to develop meaningful experiences and relationships between organizations and their target market, increasing engagement. The results obtained from the analysis of studies on gamification applications in the business, education, and health care sectors indicate that the different gamification elements can be applied in any sector of B2C market activity, creating value and competitive advantages. The various elements of gamification are tailored to the sectors of activity that require greater involvement with the target audience, as well as greater participation of consumers in the purchasing decision process, even in areas of activity whose products are commonly purchased, such as coffee.

Nuno Teotónio, José Luís Reis
Suspended Solids in the Gulf of Urabá Colombia – Annual Average Estimation Using MODIS MYD09Q1 Images

The paper presents the construction of an empirical model, applied to MODIS MYD09Q1 images, based on in-situ samples of the Total Suspended Solids (TSS) found in the Gulf of Urabá (Colombia) in the period 2011–2015. The study highlights the usefulness of digital image processing when retrieving ocean-color data, such as sedimentation. The resulting data prove to be relevant for analyses of environmental care and ecological preservation in such coastal areas, which are known to be important for their biodiversity but are less well known with respect to their associated sediment dynamics and concentration. Both the spatial and temporal variability of sediments are analyzed on yearly scales. The results show significant season-driven differences in the concentration and direction of sediment plumes. It was found that annual average values exceed 100 mg/L at El Rotico Bay due to the contributions from the Atrato River at Boca del Roto within the Gulf of Urabá.

Ivan Carrillo, Javier Medina
HUEQUITAS: Web Application for Location of Low Cost Restaurants

Over the years, Internet accessibility has increased remarkably, and the use of devices such as smartphones, tablets, and laptops now allows people to access a large variety of applications, among them those that enable users to locate places such as restaurants, hotels, and parks, providing quick access to that information. This document presents the development and implementation of a web application for the geolocation of low-cost restaurants, commonly called "huequitas". Our application uses HTML5, CSS, PHP, and JavaScript for the web pages and implements the Google Maps API to improve the search experience for locating all the "huequitas" in the area. The application locates the user's current position and, based on it, shows the nearby "huequitas" along with the route to follow to reach each place. The application is characterized by its use of MySQL, an open-source database, and a three-layer architecture at the network level. It is designed specifically for the web platform, but, being responsive, it can also be viewed from a smartphone or tablet.

Liliana Enciso, David Nodine, Enrique Cueva, Pablo Alejandro Quezada-Sarmiento, Elmer Zelaya-Policarpo
Software Tool for Evaluation of Road Pavement Energy Harvesting Devices

This paper deals with the development of a software tool to evaluate road pavement energy harvesting systems technically and economically, as well as to perform a cost-benefit analysis of applying this type of energy generation solution as the energy source for different applications. The software also allows the user to perform a sensitivity analysis by selecting a key variable and defining its different values. Some case studies are also presented to demonstrate the software application and how it can be used to evaluate this technology.

Francisco Duarte, Adelino Ferreira, Paulo Fael
Sentiment Analysis of Social Network Data for Cold-Start Relief in Recommender Systems

Recommender systems have been used in e-commerce to increase conversion by matching product offers to consumer preferences. Cold-start is the situation of a new user about whom there is no information with which to make suitable recommendations. Texts published by the user in social networks are a good source of information to reduce the cold-start issue. However, the valence of the emotion in a text must be considered in the recommendation, so that no product is recommended based on a negative opinion. This paper proposes a recommendation process that applies sentiment analysis to textual data extracted from Facebook and Twitter, and presents the results of an experiment in which this algorithm is used to reduce the cold-start issue.
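The abstract's key constraint, that items mentioned with negative valence must not seed the cold-start profile, can be sketched as follows (illustrative only: the tiny lexicon scorer stands in for the paper's sentiment-analysis step, and all names and data are hypothetical):

```python
# Hedged sketch: filter social-network mentions by sentiment valence before
# using them as a cold-start preference profile. The word lists are a toy
# stand-in for a real sentiment model.
POSITIVE = {"love", "great", "awesome", "enjoyed"}
NEGATIVE = {"hate", "terrible", "awful", "broke"}

def valence(text):
    """Positive minus negative word count: >0 means positive opinion."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def cold_start_profile(posts):
    """posts: list of (text, mentioned_item); keep only positively mentioned items."""
    return [item for text, item in posts if valence(text) > 0]

posts = [
    ("I love my new running shoes", "running shoes"),
    ("this blender is terrible and broke twice", "blender"),
]
profile = cold_start_profile(posts)
```

Only "running shoes" survives the filter here, so the blender, mentioned negatively, never becomes a basis for recommendations.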

Felipe G. Contratres, Solange N. Alves-Souza, Lucia Vilela Leite Filgueiras, Luiz S. DeSouza
Virtualization-Based Techniques for the Design, Management and Implementation of Future 5G Systems with Network Slicing

Emerging 5G communications aim to simplify the current inefficient and heterogeneous collection of wireless solutions for future systems. However, contrary to traditional mobile networks, 5G networks must consider many different application scenarios (Internet-of-Things, wearable devices, etc.). In this context, the concept of network slicing is defined: a technique where network resources are packaged and assigned in an isolated manner to sets of users according to their specific requirements. The use of Virtual Network Functions and other similar technologies is a first step toward this challenge, but deeper changes are required. Therefore, in this paper we present a virtualization-based technique for the design, management, and implementation of future 5G systems with network slicing. The proposed technique makes extensive use of current virtualization technologies such as Docker and Kubernetes in order to create, coordinate, and manage slices, services, and functional components in future 5G networks. A simulation scenario describing these future mobile networks is also provided, in order to obtain first evidence of their predicted performance.

Borja Bordel, Diego Sánchez de Rivera, Ramón Alcarria
An Intra-slice Chaotic-Based Security Solution for Privacy Preservation in Future 5G Systems

The great heterogeneity of applications supported by future 5G mobile systems makes it very difficult to imagine how a uniform network solution could efficiently satisfy all user requirements. Thus, several authors have proposed the idea of network slicing, a technique where network resources are packaged and assigned in an isolated manner to sets of users according to their specific requirements. In this context, different slices for IoT systems, eHealth applications, and standard mobile communications have been defined. For each slice, specific intra-slice solutions for device management, security provision, and other important pending challenges must be investigated and proposed. Therefore, in this paper an intra-slice chaotic-based security solution for privacy preservation is described. The presented solution employs various mathematical procedures to transform the three chaotic signals of the Lorenz dynamics into three binary flows, which are employed to cipher and mask the private information using a reduced-resource microcontroller. A first implementation of the proposed system is also described in order to validate the solution.
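As a toy illustration of chaotic masking with Lorenz signals (the paper's actual binarization and cipher construction are not given in the abstract; the Euler step, the thresholds, and the way the three flows combine into one keystream are all assumptions of this sketch):

```python
# Hedged sketch: derive a keystream from Lorenz dynamics and XOR-mask data.
# Integration scheme, thresholds, and bit-combination rule are illustrative.

def lorenz_bits(n, x=1.0, y=1.0, z=1.0, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
    """Euler-integrate the Lorenz system, emitting one keystream bit per step."""
    bits = []
    for _ in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        # Combine the three chaotic flows into one bit (illustrative choice).
        bits.append((x > 0) ^ (y > 0) ^ (z > 25))
    return bits

def xor_cipher(data: bytes, bits):
    """XOR each byte with 8 keystream bits; applying twice recovers the input."""
    out = bytearray()
    for i, b in enumerate(data):
        key = 0
        for j in range(8):
            key = (key << 1) | bits[i * 8 + j]
        out.append(b ^ key)
    return bytes(out)

msg = b"5G slice"
ks = lorenz_bits(len(msg) * 8)
ct = xor_cipher(msg, ks)        # masked data
pt = xor_cipher(ct, ks)         # same keystream unmasks it
```

Because both sides regenerate the same chaotic trajectory from shared initial conditions and parameters, the receiver rebuilds the keystream and recovers the plaintext, which is what makes such schemes attractive on reduced-resource microcontrollers.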

Pilar Mareca, Borja Bordel
Big Data Meets the Food Supply: A Network of Cattle Monitoring Systems

The beef cattle industry generates $78.2 billion of revenue from nearly 100 million head each year in the U.S. alone. Cattle feed efficiency is a measure of animal growth: animals with better efficiency may grow at the same rate as animals with lower efficiency, but will eat less to do so. This paper introduces a network of sensors in a cattle production operation designed to measure and report feed efficiency to the farmer. The sensors provide data that are used to monitor and control the feed rations given to the animals, and help the farmer make informed decisions regarding animal grouping, control, and genetic line building to improve beef stock quality over time. While cattle feed control and monitoring is not itself a new concept, the system described here adds automated components, not previously available, to enhance and better control the operation.

Michael A. Chilton
A Mobile Application to Provide Personalized Information for Mobility Impaired Tourists

Mobile applications developed to assist disabled people in their tourist activities should go much further than simply providing information about points of interest or recommending places or routes based on the user's location. Due to the specific needs of each person in this group, such applications should provide information about, and recommendations of, the most suitable points of interest and places to visit, contextualized according to each user's specific needs and interests. This work describes a proposal for a personalized system to assist people with mobility impairments in their tourist activities. The system considers the accessibility features of each tourism service or product, together with the users' preferences and disabilities, and explores information about their interactions with and opinions of each place, to recommend the points of interest, routes, and services that best suit their specific needs and interests.

Fernando Ribeiro, José Metrôlho, João Leal, Hugo Martins, Pedro Bastos
Using Online Artificial Vision Services to Assist the Blind - an Assessment of Microsoft Cognitive Services and Google Cloud Vision

The visually impaired face several well-known difficulties in their daily lives. The use of technology in assistive systems can greatly improve their lives by helping with navigation and orientation, for which several approaches and technologies have been proposed. Lately, powerful online image-processing services, based on machine learning and deep learning, have been introduced, promising truly cognitive assessment capabilities. Google and Microsoft are two of the main players. In this work, we built a device to be used by the blind in order to test the use of the Google and Microsoft services for assisting them. The online services were tested by researchers in a laboratory environment and by blind users in a large meeting room familiar to them. This work reports our findings regarding the online services' effectiveness, the user interface, and system latency.

Arsénio Reis, Dennis Paulino, Vitor Filipe, João Barroso
Single Sign-On Demystified: Security Considerations for Developers and Users

A website of an entity (organization or enterprise) usually provides multiple services to its members. Once a user of the entity signs on for a service, she can access all services available to her. This is known as single sign-on (SSO). To implement SSO, user authentication is separated, at least logically, from the services: an identity provider (IDP) authenticates a user, and a service provider (SP) delivers each service. Thus, a user has an active IDP session and one active service session for each SP she is accessing. While SSO eases the life of users and system administrators, if SSO is not implemented carefully, a user may sign out from all services yet still have an active IDP session, and users might not be aware of the existence of these active IDP sessions. In this work, we use state-transition diagrams to trace the steps during an SSO activity and then show the states that a user's browser may maintain. We show that even after a user signs out or is timed out from all service sessions or the IDP server session, active sessions may exist that the user may be unaware of. This situation can happen because the implementer never considered the possibility, because the user is unaware of it, or both. We propose some possible remedies to mitigate the undesirable information-security situations we have exposed.
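The lingering-session problem the abstract exposes can be reduced to a minimal state sketch (illustrative only, not the paper's state-transition diagrams; class and method names are invented):

```python
# Minimal state sketch of the SSO hazard: signing out of every service
# provider (SP) does not necessarily end the identity-provider (IDP) session.

class SSOSession:
    def __init__(self):
        self.idp_active = False   # the browser's session with the IDP
        self.sp_active = set()    # one entry per active service session

    def sign_on(self, sp):
        self.idp_active = True    # IDP authenticates (or reuses its session)
        self.sp_active.add(sp)    # SP session is established on top of it

    def sign_out_all_services(self):
        self.sp_active.clear()    # ends every SP session, touches nothing else

user = SSOSession()
user.sign_on("mail")
user.sign_on("calendar")
user.sign_out_all_services()
# All SP sessions are gone, yet idp_active is still True: visiting any SP
# again would silently re-establish a service session without credentials.
```

The remedy the paper argues for amounts to coupling the last service sign-out to explicit IDP session termination, so the state above cannot persist unnoticed.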

Lokesh Ramamoorthi, Dilip Sarkar
XSS Attack Detection Approach Based on Scripts Features Analysis

Cross-Site Scripting (XSS) attacks are a type of injection problem in modern web applications that can be exploited by injecting JavaScript code. A variety of defensive techniques have by now been proposed to protect web applications against XSS injection attacks, but attacks that inject seemingly benign JavaScript code, such as calls to existing methods or the overriding of an existing method definition, still cannot be fully detected. Moreover, present server-side XSS detection systems are based on modifying the source code of the supervised application. In this project, we developed a server-side XSS detection approach based on the analysis of script features, which permits the detection of a wide range of injected scripts, whether malicious or legitimate-looking scripts similar to benign ones, without any modification of the application source code. Our approach is evaluated on three web applications. The experimental results show that it detects a wide range of XSS attacks.

Saoudi Lalia, Ammiche Sarah
Recognizing Dynamic Fields in Network Traffic with a Manually Assisted Solution

Payloads of packets transmitted over a network contain dynamic fields that represent many kinds of real-world objects. In many different applications, there is a need to recognize, and sometimes replace, these fields. In this paper, we present a manually assisted solution for searching and annotating dynamic fields in message payloads, focusing specifically on the web environment. Our tool provides a simple and intuitive graphical user interface for annotating dynamic fields.

Jarko Papalitsas, Jani Tammi, Sampsa Rauti, Ville Leppänen
Online Social Networks Analysis Visualization Using Socii

Nowadays we face an age of massive Internet usage. With Online Social Networks (OSNs) we practically live in a parallel reality where everything we do and everyone we meet is exposed and shared through these online "worlds". Today, being able to study and understand how information flows and how relationships are built within these online networks is of paramount importance for various reasons: social, educational, political, or economic. That is why social network analysis has become an important scientific and technological challenge. In this paper, we propose the Socii system for social network analysis and visualization. Socii aims at helping OSN users exploit and understand their own networks through a user-friendly interface. The system relies on four main principles: simplicity, accessibility, OSN integration, and contextual analysis. Socii's architecture and the technological choices of its implementation are presented, together with its main functionalities.

Jorge Daniel Caldas, Alda Lopes Gancarski, Pedro Rangel Henriques
API Documentation
A Conceptual Evaluation Model

An Application Programming Interface (API) is packaged functionality for solving specific tasks. In order for developers to learn to use this functionality, APIs include some kind of documentation. Documentation is an important part of the API itself, but providing high-quality documentation is not a straightforward task; nowadays, much documentation does not include the information expected by users. Another problem is the lack of comprehensive evaluation methods that can help creators identify missing or incomplete elements in their documentation. This paper presents a set of basic (minimum) elements that any API documentation should include in order to provide target users with the information they need to learn and make proper use of the provided functionality. Through a survey, we then collect the importance that software developers give to each basic element. Using the importance values collected, a conceptual API documentation evaluation method is proposed that can be used by documentation creators to identify the weaknesses in their documentation. Finally, the model's applicability is tested by using it to evaluate some popular online API documentation from various domains.
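An evaluation method of this kind reduces to a weighted checklist. A minimal sketch in that spirit follows; the element names and importance weights below are invented placeholders, not the survey values collected in the paper:

```python
# Hedged sketch of a weighted-checklist documentation score. Elements and
# weights are hypothetical stand-ins for the paper's surveyed values.

WEIGHTS = {
    "overview": 3,               # what the API is for
    "code_examples": 5,          # runnable usage samples
    "parameter_description": 4,  # meaning and types of inputs
    "error_handling": 2,         # failure modes and exceptions
}

def doc_score(present):
    """Score documentation as the weighted fraction of basic elements present."""
    total = sum(WEIGHTS.values())
    covered = sum(w for name, w in WEIGHTS.items() if name in present)
    return covered / total

# A page offering only an overview and examples covers 8 of 14 weight points.
score = doc_score({"overview", "code_examples"})
```

Ranking documentation pages by such a score is what lets creators see which high-importance elements their pages are missing.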

Sergio Inzunza, Reyes Juárez-Ramírez, Samantha Jiménez
Using Emotion Recognition in Intelligent Interface Design for Elderly Care

In the later stages of the aging process, an elderly person might need the help of a family member or a caregiver. Technology can be used to help take care of elderly people. Autonomous systems, using special interfaces, can collect information from elderly people, which might be useful to predict and recognize health-related problems or physical security problems in real time. The emerging technology of image processing, in particular emotion recognition, can be a good option for elderly care support systems. In this article, we used the Microsoft Azure Emotion SDK to build a system that detects faces and recognizes the emotions of elderly people in real time, for use in elderly care support. The analysis is performed on an online video stream, whose facial expressions are analyzed so that, in case of a critical emotion, e.g., if an elderly person is very sad or crying, a caregiver or related entity is informed. From the experiment, we concluded that emotion recognition is a reliable technology to implement in real-time elderly care.

Salik Khanal, Arsénio Reis, João Barroso, Vitor Filipe
Design of an HMI in Web Server of PLC’s S7-1200/1500 for the Control of a Multivariable Process of a Didactic Module

The present project consists of the design of web pages for the monitoring and control of a Profinet network of a multivariable process and the proportional hydraulic position control of two didactic modules, one controlled by an S7-1200 PLC and the second by an S7-1500 PLC. These programmable PLCs were used as web servers and to store the different web pages created by the user; any HTML text editor can be used to create the pages. In this case, we used the AWP language and JavaScript, a language similar to C, which allows changing images, making animations, etc. In addition, this control alternative minimizes development costs by using computing tools based on free software, and allows better control and supervision of each of the industrial process stations through an efficient and secure web server.

Sánchez Ocaña Wilson, Almache Barahona Verónica, Salazar Jácome Elizabeth, Freire Llerena Washington, Silva Monteros Marcelo
Manage Software Requirements Specification Using Web Analytics Data

In the context of SaaS (Software as a Service), where software has to be up and running 7 days a week and 24 hours a day, keeping the requirements specification up to date can be difficult. Managing requirements in this context has additional challenges that need to be taken into account, for instance, continuously re-prioritizing requirements and identifying/updating new dependencies among them. We claim that extracting and analyzing the usage of the SaaS can help keep requirements updated and contribute to improving the overall quality of the services provided. This paper presents REQAnalytics, a recommendation system that collects information about the usage of a SaaS, analyses it, and generates recommendations that are more readable than the reports generated by web analytics tools. The overall approach has been applied in several case studies with promising results.

Jorge Esparteiro Garcia, Ana C. R. Paiva
Defining a Collaborative Platform to Report Machine State

Nowadays, we are witnessing the evolution of industry and, with it, the development of technological solutions that can assure sustainability and competitiveness in the manufacturing environment. Along this evolution, Cyber-Physical Systems were developed with the goal of merging physical and computational processes and allowing the predictive, proactive, and collaborative maintenance of industrial machines. The work presented here has been integrated into the Cyber Physical System based Proactive Collaborative Maintenance (MANTIS) project and has the main goal of proposing a collaborative platform for reporting the current machine state. In this way, it will be possible to facilitate and support the interaction between all stakeholders participating in the collaborative decision-making process. With this approach, we believe it is possible to reduce machine downtime and the unnecessary waste of machine components and labor while attempting to solve different machine problems.

Diogo Martinho, João Carneiro, Asif Mohammed, Ana Vieira, Isabel Praça, Goreti Marreiros
Analysis of Heat Transfer Between a Coolant Fluid and a Plastic Blowing Matrix Using the ANSYS CFD Tool

This publication deals with the analysis of heat transfer between a coolant (water) and a plastic blowing matrix in order to improve the finishing process for the production of plastic containers. The inlet temperature and pressure values from the cooling line to the blowing matrix are optimal values for the cooling of the matrix. In a first stage, data were taken on the working temperatures of the mold: the temperatures inside and outside the mold, and the temperature of the plastic sleeve when it leaves the extruder. These temperatures were measured with the Ecom Ex MP4a equipment, a heat gun. Subsequently, the pressures and flow rates at which the cooling line to the mold operated were recorded; in this case, the pressure is 4 bar and the flow rate 10 m/s. These values are very important for carrying out the simulation, since they are the boundary conditions of the system. Finally, the matrix was modelled in CAD software, in this case SOLIDWORKS, and the simulation was then carried out in the ANSYS software with the CFX module, which allows the simulation of heat transfer between a liquid and a solid.

Sánchez Ocaña Wilson, Robayo Bryan, Rodriguez Pablo, Pazmiño Intriago Monserrate, Salazar Jácome Elizabeth
Fostering Students-Driven Learning of Computer Programming with an Ensemble of E-Learning Tools

Learning through practice is crucial to acquiring a complex skill. Nevertheless, learning is only effective if students have at their disposal a wide range of exercises that cover the whole course syllabus, and if their solutions are promptly evaluated and given appropriate feedback. Currently, the teaching-learning process in complex domains, such as computer programming, is characterized by an extensive curriculum and high student enrolment. This poses a great workload for faculty and teaching assistants responsible for the creation, delivery, and assessment of student exercises. In order to address these issues, we created an e-learning framework, called Ensemble, as a conceptual tool to organize and facilitate technical interoperability among systems and services in domains that use complex evaluation. These domains need a diversity of tools, from the environments where exercises are solved, to automatic evaluators providing feedback on students' attempts, not forgetting the authoring, management, and sequencing of exercises. This paper presents and analyzes the use of Ensemble for managing the teaching-learning process in an introductory programming course at ESEIG, a school of the Polytechnic of Porto. An experiment was conducted to validate a set of hypotheses regarding the expected gains: an increase in the number of solved exercises, increased class attendance, and improved final grades. The results support the conclusion that the use of this e-learning framework for practice-based learning has a positive impact on the acquisition of complex skills, such as computer programming.

Ricardo Queirós, José Paulo Leal
Implementing Resource-Aware Multicast Forwarding in Software Defined Networks

Using multicast data transmissions, data can be efficiently distributed to a high number of network users. However, in order to efficiently stream multimedia using multicast communication, multicast routing protocols must have knowledge of all network links and their available bandwidth. In Software Defined Networks (SDN), all this information is available in a centralized entity, the SDN controller. This work proposes to utilize the SDN paradigm to perform network-resource-aware multicast data routing in the SDN controller. In a prototype implementation, multicast data is routed using a modified Edmonds-Karp algorithm that takes into account the network topology and link load information. This paper presents the algorithm, implementation details, and an analysis of the testing results.
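The abstract names a modified Edmonds-Karp algorithm but does not reproduce it. As a rough illustration of the underlying technique only, here is a textbook Edmonds-Karp max-flow sketch in Python; the authors' variant additionally weighs link load and multicast topology, which is not shown here:

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Max flow via repeated BFS augmenting paths (textbook Edmonds-Karp)."""
    n = len(capacity)
    # residual capacities, copied so the input graph is left untouched
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for the shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            return flow  # no augmenting path left
        # bottleneck capacity along the found path
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # update residual capacities along the path
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
```

In an SDN setting, the `capacity` matrix would be populated from the controller's view of link bandwidths rather than a static configuration.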

Justas Poderys, Anjusha Sunny, Jose Soler
Immersive Edition of Multisensory 360 Videos

The current technological proliferation has originated new paradigms for the production and consumption of multimedia content. This paper proposes a multisensory 360 video editor that allows producers to edit such contents with high levels of customization. This authoring tool allows the editing and visualization of 360 video, with the novelty of complementing the 360 video with multiple stimuli such as audio, haptics, and olfaction. In addition to this multisensory feature, the authoring tool allows each of the stimuli to be customized individually to provide an optimal multisensory user experience. A usability evaluation revealed the pertinence of the editor: an effectiveness rate of 100% was verified, with only one help request out of 10 participants, and positive efficiency. Satisfaction-wise, results equally revealed a high level of satisfaction, with an average score of 8.3 out of 10.

Hugo Coelho, Miguel Melo, Luís Barbosa, José Martins, Mário Sérgio, Maximino Bessa
Accessibility and Usability Assessment of a Web Platform: DADS (Doctors And Dyslexic System)

Online tools for dyslexia diagnosis and training are mostly free of charge and children-oriented, consisting of game-based interfaces, since a positive correlation between video games and dyslexia has been shown [1]; they provide self-evaluation questions that are automatically analyzed, giving results regarding the need to seek medical attention, since these tools are not a medical exam or diagnostic. Other platforms consist of exercises/tests for people with dyslexia or with speech and language impairments, helping them train word pronunciation. However, there is a lack of solutions that allow doctors to register the evolution of their patients. This article presents an accessibility and usability assessment of a Web platform that allows children to do exercises and also lets their doctors keep track of their evolution through graphs and detailed statistics. This gives doctors the information in digital format, removing the need for tests normally done on paper [2–4] and for manually registering variables.

Tânia Rocha, Rui Carvalho, André Timóteo, Marco Vale, Arsénio Reis, João Barroso
An Implementation on Matlab Software for Non-linear Controller Design Based on Linear Algebra for Quadruple Tank Process

This paper is focused on the design of a control algorithm based on linear algebra for multivariable systems and its application to the control of a quadruple tank system. In order to design the controller, the system model is approximated by numerical methods and then a system of linear equations is solved by least squares to obtain the optimal control actions. The strategy presented in this paper has the advantage of using discrete equations, and therefore a direct implementation in most computer-driven systems is feasible. The simulation results, developed in Matlab, show the effectiveness of the proposed method.
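The controller's core numerical step - solving an overdetermined linear system by least squares - can be sketched in a few lines. This is a generic, hypothetical illustration (normal equations plus Gaussian elimination), not the authors' Matlab implementation, and the example data below is invented:

```python
def least_squares(A, b):
    """Solve min ||Ax - b|| via the normal equations A^T A x = A^T b.

    Gaussian elimination with partial pivoting; adequate for the small,
    well-conditioned systems that arise in this kind of controller.
    """
    m, n = len(A), len(A[0])
    # build the normal equations A^T A and A^T b
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    M = [AtA[i] + [Atb[i]] for i in range(n)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x
```

For example, fitting the line y = 2t + 1 to the points (1, 3), (2, 5), (3, 7) recovers the coefficients [2.0, 1.0].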

Edison R. Sásig, César Naranjo, Edwin Pruna, William D. Chicaiza, Fernando A. Chicaiza, Christian P. Carvajal, Victor H. Andaluz
Prototyping Use as a Software Requirements Elicitation Technique: A Case Study

Prototyping is an agile software development methodology. It has also been proposed as a technique to obtain the software requirements from the stakeholders. However, there are few publications that propose a prescriptive guide and show its use in practice. This article reports a case in which the prototyping technique was used to elicit the requirements of a software system in the university academic context. To this end, the authors propose an application procedure and carry out elicitation sessions with two stakeholders who have different familiarity with the domain. The results show that the technique is effective in achieving a high coverage of the requirements and that it seems to perform better with stakeholders who have more familiarity with the domain. Although the results do not have statistical power, the case yields trends that can help development teams adopt this technique to produce the requirements in certain cases.

Dante Carrizo, Iván Quintanilla
Enhancing the Assessment of (Polish) Translation in PROMIS Using Statistical, Semantic, and Neural Network Metrics

Differences in culture and language create the need for translators to convert text from one language into another. In order to preserve meaning, context must be analyzed in detail in translation. This study aims to develop accurate evaluation metrics for translations within the PROMIS (Patient-Reported Outcomes Measurement Information System) process, particularly in the reconciliation step, by providing reviews by experts as additional information following backward translation. The result is a semi-automatic semantic evaluation metric for Polish based on the concept of the human-aided translation evaluation metric (HMEANT). We assessed the proposed metrics using a statistics-based support vector machine classifier and applied deep neural networks to replicate the operation of the human brain. We compared the results of the proposed metrics with human judgment and well-known machine translation metrics, such as BLEU (Bilingual Evaluation Understudy), NIST, TER (Translation Error Rate), and METEOR (Metric for Evaluation of Translation with Explicit Ordering). We found that a few of the proposed metrics were highly correlated with human judgment and provided additional semantic information independent of human experience. This showed that the proposed metrics can help assess translations in PROMIS.

Krzysztof Wołk, Wojciech Glinkowski, Agnieszka Żukowska
Mixing Textual Data Selection Methods for Improved In-Domain Data Adaptation

The efficient use of machine translation (MT) training data is being revolutionized by the application of advanced data selection techniques. These techniques involve sentence extraction from broad domains and adaptation to in-domain MT data. In this research, we attempt to improve in-domain data adaptation methodologies. We focus on three techniques to select sentences for analysis. The first technique is term frequency–inverse document frequency, which originated in information retrieval (IR). The second method, cited in the language modeling literature, is a perplexity-based approach. The third method is a unique concept, the Levenshtein distance, which we discuss herein. We propose an effective combination of the three data selection techniques applied at the corpus level. The results of this study revealed that the individual techniques are not particularly successful in practical applications. However, multilingual resources and a combination-based IR methodology were found to be an effective approach.
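Of the three selection criteria, the Levenshtein distance is the most self-contained, so a standard dynamic-programming sketch can illustrate it (this is the classic edit distance, not the authors' corpus-level scoring pipeline):

```python
def levenshtein(a, b):
    """Classic edit distance: minimum number of single-character
    insertions, deletions and substitutions turning a into b.
    Uses a rolling row, so memory is O(len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]
```

In a data selection setting, a candidate sentence could be scored by its minimum distance to the in-domain seed corpus and kept if that distance falls below a threshold.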

Krzysztof Wołk
Association Rules Mining for Culture Modeling

The difficulty of predicting human behavior has created the need to learn the cultural differences between peoples. Although culture is one of those concepts that are difficult to define, how well these differences are learned depends on the quality of the culture model. In this paper, a new culture model is proposed to facilitate the learning process. This model allows generating the frequent cultural characteristics in each region and extracting cultural association rules. We create culture benchmarks based on a survey accessible from the web. The cultural datasets are analyzed with the Apriori algorithm to extract frequent attribute values. The obtained results show similarities and differences between cultures. Based on these results, we proceed to the construction of association rules and their confidence.
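The Apriori step named above can be sketched generically. This is the textbook level-wise algorithm on toy, invented transactions, not the authors' cultural dataset or implementation:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Frequent itemsets by level-wise candidate generation (Apriori).

    Returns a dict mapping each frequent itemset (a frozenset) to its
    support, i.e. the fraction of transactions containing it.
    """
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    items = sorted({i for t in transactions for i in t})
    frequent = {}
    level = [frozenset([i]) for i in items]
    k = 1
    while level:
        # prune: keep only candidates meeting the support threshold
        level = [c for c in level if support(c) >= min_support]
        for c in level:
            frequent[c] = support(c)
        k += 1
        # join: build size-k candidates from frequent (k-1)-itemsets
        level = list({a | b for a, b in combinations(level, 2)
                      if len(a | b) == k})
    return frequent
```

Rule confidence then follows directly: for a rule X → Y, confidence = support(X ∪ Y) / support(X), computed over the frequent itemsets returned here.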

Amine Kechid, Habiba Drias
An Information System to Remotely Monitor Oncological Palliative Care Patients

For oncological patients, the introduction of palliative care in the early stages of the disease's progression can have great benefits. The Portuguese government recently introduced a program to provide home palliative care support by creating specialized mobile teams, able to track, visit and address the patients' problems. These teams must be available for the patient, when and if necessary. The teams must also have updated knowledge about the daily evolution of the patients' health. The Douro Sul Healthcare Centers, together with the University of Trás-os-Montes e Alto Douro, developed and implemented an ICT system to track the status of each and every one of the patients. The system has several components, including: a mobile app for the patients or their caregivers to report daily how the patient's symptoms have evolved over the last 24 h; and a web app for the teams to browse their patients' status.

Arsénio Reis, Eliza Bento da Guia, André Sousa, André Silva, Tânia Rocha, João Barroso
Automatic Directions for Object Localization in Virtual Environments

In order to assist users in the process of locating objects in Virtual Environments (VE), we automate the process of giving directions through a computational model. This model generates directions in natural language by using spatial and perceptual aspects. It involves three main components: (1) a computational model of perceptual saliency for 3D objects; (2) a user model and an explicit representation of the virtual world semantics; and (3) an algorithm for the automatic generation of natural-language directions to locate objects. Reference frames and reference objects support the model. Three criteria are considered for the selection of the best reference 3D object: the perceptual saliency of the objects, the probability of the user remembering the object location, and the user's prior knowledge about the object. This paper presents the structure and the processes of the proposed model.

Graciela Lara, Angélica De Antonio, Adriana Peña, Mirna Muñoz
A Survey on the Impact of Risk Factors and Mitigation Strategies in Global Software Development

Global software projects face numerous challenges generated by the geographical, temporal and socio-cultural distribution of the actors involved. Project managers' primary task is to ensure the project's success, and they must therefore use risk management techniques and tools to identify and mitigate risks. The aim of this paper is to evaluate and improve a risk management framework previously presented by the authors of this article, based on a survey of industry practitioners. The framework contains 39 risk factors and 58 mitigation strategies classified using Leavitt's model of organizational change. An online questionnaire was used to gather data from 10 managers and 7 developers, and Spearman's rank correlation was used to compare the results of the two groups. Results indicate an agreement between managers and developers on risk factors related to communication and technology and on mitigation strategies associated with communication and project management.

Saad Yasser Chadli, Ali Idri
The Application of Change Indicators in Mining Software Repositories

This paper presents a framework to identify a problematic or uncontrollable rise in the number of software change requests and to take the right actions to fix it. With this work, we propose the use of an acceptable limit on the number of change requests as an indicator to track the evolution of software change requests. The change indicators are used to identify a periodic sharp rise in change request demand fast enough to provide the right fix on time. Not only do these indicators track the evolution of change requests, but they also help to identify the right solution to address the triggers of these change requests.

Nico Hillah, Thibault Estier
Application of a Risk Management Tool Focused on Helping to Small and Medium Enterprises Implementing the Best Practices in Software Development Projects

Today, SMEs (small and medium enterprises) face difficulties implementing the best practices contained in internationally accepted standards and models such as CMMI, ISO and PMBOK, because they lack economic resources, human resources, skills, and the tools and techniques focused on the implementation of best practices. In addition, to apply the best practices provided by these models and standards, SMEs need to adapt them to their size and type of business. This research work presents the results of a case study on the application of a tool that performs basic risk management in 2 companies. The results show that both companies perceived the tool as useful for implementing software engineering best practices in an easy way.

Yolanda-Meredith García, Mirna Muñoz, Jezreel Mejía, Gloria-Piedad Gasca, Antonia Mireles
On the Fly Model-Checking of TPN:

Temporal logic provides a fundamental framework for formally specifying systems and reasoning about them. Furthermore, model checking techniques lead to the automatic verification of temporal logic specifications that a finite state model of a system should satisfy. In this paper, we adapt the extended logic $$TCTL^{\varDelta }_{h}$$, presented in our previous work [9, 10], to deal with GMECs (Generalized Mutual Exclusion Constraints: specifications of Petri net markings) instead of atomic properties. This leads to an extension of TCTL (Timed Computational Tree Logic) on time Petri nets (TPN) called $$TPN-TCTL^{\varDelta }_{h}$$. Then, we prove the PSPACE-completeness of this new logic on bounded TPNs. Finally, we propose symbolic verification algorithms based on the concept of on-the-fly verification and implement them using the Romeo tool.

Naima Jbeli, Zohra Sbai, Rahma Ben Ayed

Multimedia Systems and Applications

Frontmatter
Polynomiography via the Hybrids of Gradient Descent and Newton Methods with Mann and Ishikawa Iterations

In this paper two hybrids of two algorithms, the Newton method and the Gradient Descent method, are presented in order to create polynomiographs. The first idea combines the two methods as a convex combination, the second as a two-step process. A polynomiograph is an image obtained as the result of a root-finding method applied to a polynomial on the complex plane. In this paper the polynomiographs are also modified by using Mann and Ishikawa iterations instead of the standard Picard iteration. Coloring images by iterations reveals the dynamic behavior of the used root-finding process and its speed of convergence. The paper joins, combines and modifies earlier results obtained by Kalantari, Zhang et al. and the authors. We believe that the results of the paper can inspire those who are interested in aesthetic patterns created automatically. They can also be used to increase the functionality of existing polynomiography software.
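As a minimal sketch of one of the building blocks mentioned above, the following Python function applies a Mann iteration to the Newton step for a complex polynomial; with alpha = 1 it reduces to the standard Picard/Newton iteration. This is a generic illustration (polynomial, starting point and parameters are invented), not the authors' hybrid scheme, which also involves the Gradient Descent method and Ishikawa iterations:

```python
def newton_mann(z, coeffs, alpha=0.5, max_iter=64, eps=1e-6):
    """Root finding for a complex polynomial with a Mann iteration:
    z_{n+1} = (1 - alpha) * z_n + alpha * N(z_n), N being a Newton step.

    coeffs lists polynomial coefficients from highest to lowest degree.
    Returns (approximate root, number of iterations used).
    """
    def p(z):   # polynomial value by Horner's scheme
        v = 0j
        for c in coeffs:
            v = v * z + c
        return v

    def dp(z):  # derivative, also by Horner's scheme
        v = 0j
        n = len(coeffs) - 1
        for i, c in enumerate(coeffs[:-1]):
            v = v * z + (n - i) * c
        return v

    for n in range(max_iter):
        d = dp(z)
        if d == 0:
            return z, n                      # critical point: stop
        nz = z - p(z) / d                    # Newton step
        z = (1 - alpha) * z + alpha * nz     # Mann averaging
        if abs(p(z)) < eps:
            return z, n + 1
    return z, max_iter
```

A polynomiograph colors each pixel of the complex plane by the iteration count (and/or the root reached) that this function returns for the corresponding starting point.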

Wiesław Kotarski, Agnieszka Lisowska
Accuracy and Performance Analysis of Time Coherent 3D Animation Reconstruction from RGB-D Video

We present an accuracy and performance analysis of Time Coherent 3D Animation Reconstruction methods from RGB-D video. We analyze the existing methods that can reconstruct a time coherent 3D animation using RGB-D video. We also present a modified algorithm using only the RGB data that extends the analysis of existing methods. We show that using all the methods it is possible to reconstruct a time-coherent 3D animation using either only the color data, color and depth data, or only the depth data. We compare all the methods using a number of error measures and analyze the strength and weaknesses of each method in terms of their accuracy and runtime performance. Our analysis demonstrates that given RGB-D video data, it is possible to select the best algorithm for time coherent 3D animation reconstruction under a number of constraints in terms of the required accuracy and runtime performance.

Naveed Ahmed
The Method Proposal of Image Retrieval Based on K-Means Algorithm

In this paper, we propose a content-based image retrieval system using an improved K-means algorithm with binary indexes of images. The created index, known as the binary signature of an image, is based on image features including shape, location, and color. First, we present the method of creating the binary signature based on the CIE-L*a*b* color space and Discrete Wavelet Frames. After that, the similarity measure between two binary signatures is presented. On the basis of the k-means algorithm, we propose several improvements for clustering the binary signatures used later to assess similarities between images. From that, the clustering algorithm for binary signatures of images is proposed. Next, we give the image retrieval algorithm based on the partitioned signature clusters. To illustrate our theoretical proposal, experiments are conducted on common image sets including COREL, CBIRimages, and WANG.
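A k-means-style loop over binary signatures can be sketched as follows. This is a hypothetical illustration only: it uses Hamming distance for assignment and a bitwise majority vote as the cluster "centroid", whereas the paper's own similarity measure and improvements are not reproduced here:

```python
def hamming(a, b):
    """Number of differing bit positions between two binary signatures."""
    return sum(x != y for x, y in zip(a, b))

def kmeans_binary(signatures, k, iters=10):
    """k-means-style clustering of equal-length binary signatures.

    Assignment uses Hamming distance; the 'centroid' of a cluster is the
    bitwise majority vote of its members (ties resolved to 1).
    """
    # naive deterministic seeding: evenly spaced input signatures
    centroids = [list(signatures[i * len(signatures) // k]) for i in range(k)]
    assign = [0] * len(signatures)
    for _ in range(iters):
        # assignment step: nearest centroid by Hamming distance
        assign = [min(range(k), key=lambda c: hamming(s, centroids[c]))
                  for s in signatures]
        # update step: majority vote per bit position
        for c in range(k):
            members = [s for s, a in zip(signatures, assign) if a == c]
            if members:
                centroids[c] = [int(sum(bits) * 2 >= len(members))
                                for bits in zip(*members)]
    return centroids, assign
```

At query time, a retrieval system would first match the query signature to the closest centroid, then rank only that cluster's images, which is what makes the partitioned index faster than a linear scan.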

Thanh The Van, Nguyen Van Thinh, Thanh Manh Le
Performance Analysis of Vehicle Detection Techniques: A Concise Survey

Attention towards Intelligent Transportation Systems (ITS) has increased manifold, especially due to the prevailing security situation of the past decade. An integral part of ITS is video-based surveillance systems extracting real-time traffic parameters such as vehicle counts, vehicle classification and vehicle velocity using stationary cameras installed on road sides. In all these systems, robust and reliable detection of vehicles is a critical step. Since several vehicle detection techniques exist, evaluating them with respect to different environmental conditions and application scenarios enables a better choice for actual deployment. The paper presents a concise survey of vehicle detection techniques used in diverse applications of video-based surveillance systems. Moreover, three main detection algorithms are implemented and evaluated for performance under varying illumination, traffic density and occlusion conditions: Gaussian Mixture Model (GMM), Histogram of Gradients (HoG), and adaptive motion histogram based vehicle detection. The survey provides a ready reference for the preferred vehicle detection technique in different applications.

Adnan Hanif, Atif Bin Mansoor, Ali Shariq Imran
Personalised Dynamic Viewer Profiling for Streamed Data

Nowadays, not only the number of multimedia resources available is increasing exponentially, but also the crowd-sourced feedback volunteered by viewers generates huge volumes of ratings, likes, shares and posts/reviews. Since the data size involved surpasses human filtering and searching capabilities, there is the need to create and maintain the profiles of viewers and resources to develop recommendation systems to match viewers with resources. In this paper, we propose a personalised viewer profiling technique which creates individual viewer models dynamically. This technique is based on a novel incremental learning algorithm designed for stream data. The results show that our approach outperforms previous approaches, reducing substantially the prediction errors and, thus, increasing the accuracy of the recommendations.

Bruno Veloso, Benedita Malheiro, Juan Carlos Burguillo, Jeremy Foss, João Gama
Extending AES with DH Key-Exchange to Enhance VoIP Encryption in Mobile Networks

Due to the huge developments in mobile and smartphone technologies in recent years, more attention is given to voice data transmission such as VoIP (Voice over IP) technologies, e.g., WhatsApp, Skype, and Facebook Messenger. When using VoIP services over smartphones, there are always security and privacy concerns, such as the eavesdropping of calls between the communicating parties. Therefore, there is a pressing need to address these risks by enhancing the security level and encryption methods. In this work, we suggest a new scheme to encrypt VoIP channels using enhanced (128-, 192- and 256-bit) encryption based on the Advanced Encryption Standard (AES) algorithm, extending it with the well-known Diffie-Hellman (DH) key exchange method. We have performed a series of real tests on the enhanced (AES/DH) algorithm and compared its performance with the generic AES algorithm. The results show a significant increase in encryption strength at a very small overhead, between 4% and 7% of execution time.
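The DH half of such a scheme can be sketched in a few lines: both parties exchange public values, compute the same shared secret, and hash it down to an AES key. This is a toy illustration only; the prime below is deliberately tiny and NOT secure (real deployments use standardized 2048-bit+ groups), and the key derivation shown is a generic hash, not the paper's scheme:

```python
import hashlib
import secrets

# Toy parameters for illustration only; far too small for real use.
P = 0xFFFFFFFB  # largest prime below 2**32
G = 5           # generator

def dh_keypair():
    """Generate a private exponent and the matching public value g^x mod p."""
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

def dh_shared_aes_key(my_priv, their_pub):
    """Derive a 128-bit AES key by hashing the DH shared secret.

    Both sides compute the same value: (g^a)^b = (g^b)^a mod p.
    """
    shared = pow(their_pub, my_priv, P)
    return hashlib.sha256(str(shared).encode()).digest()[:16]
```

The resulting 16-byte key would then seed AES-128 for the voice stream; the 192- and 256-bit variants mentioned in the abstract would simply take 24 or 32 bytes of the digest.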

Raid Zaghal, Saeed Salah, Noor Jabali
Weaving the Non-pharmacological Alzheimer’s Disease Therapy into Mobile Personalized Digital Memory Book Application

Advances in mobile technology have led to more interesting and inspiring applications that can help people with health problems. Together with rapid progress in mobile devices, new applications to support treatments have been developed tremendously. This technology has also been identified as able to assist patients with Alzheimer's disease. This paper describes an application developed to improve the quality of life of a person suffering from Alzheimer's disease. It was designed specifically for a patient, with personalized contents. It was tested on an Android-based mobile device, where the patient could easily access the application anytime and anywhere. Experts were interviewed to evaluate the usability and functionality of the system. The results of this study showed that the application is suitable as a non-pharmacological therapy to assist Alzheimer's patients. It can be used to enhance reminiscence and stimulate the patient's cognitive function. The application is also used as a support to improve the quality of life of the patient and encourage communication with the caretakers.

Anis Hasliza Abu Hashim-de Vries, Marina Ismail, Azlinah Mohamed, Ponnusamy Subramaniam

Computer Networks, Mobility and Pervasive Systems

Frontmatter
Integrated Inventory Management System for Outdoors Stocks Based on Small UAV and Beacon

Inventory tracking/auditing is a task which must be carried out continuously to ensure the efficiency of the supply chain and to meet regulatory requirements. Even though automated methods such as bar codes and RFID are used in most firms, the task is time-consuming and labor-intensive, especially when stocks are bulky, big-sized and stored in multiple tiers in outdoor environments. As a remedy to this situation, the aim of this paper is to find a solution for rapid inventory tracking and auditing using a human-controlled UAV (Unmanned Aerial Vehicle) equipped with an iBeacon-based ID reader. A system architecture consisting of an inventory management server, a UAV-based mobile data scanner and an iBeacon-based signal transmitter is proposed in this paper. Based on this architecture, a prototype system was developed.

Kwan Hee Han, Sung Moon Bae, Whayong Lee
Cyclostationary Spectrum Sensing Based on FFT Accumulation Method in Cognitive Radio Technology

In cognitive radio, detecting an unused band in order to exploit it opportunistically is the most difficult problem. Cognitive radios must have the ability to detect primary users efficiently at any signal-to-noise ratio (SNR). Energy-based spectrum sensing has shown its efficiency only at high signal-to-noise ratios [1], whereas cognitive radio must also be able to detect primary users effectively at a low SNR. This difficulty can be overcome by exploiting the cyclostationary signatures exhibited by communication signals. The spectrum sensing method considered in this work is cyclostationary spectral analysis for the detection of unused bands using the FFT accumulation method. A simulation is performed in the Matlab environment to carry out the different steps of the cyclic spectrum estimation technique.

Oualid Khatbi, Zakaria Hachkar, Ahmed Mouhsen
Geomultihoming: A New Connectivity Term Between WiFi Networks and Mobile Networks

This research project presents an alternative to the traditional Internet connectivity concept implemented by the Android operating system: multihoming is used to shift between multiple network services, an approach we term geomultihoming. A mobile application was developed with a minimalist and friendly user interface and a single goal: to provide the user with uninterrupted connectivity while attenuating the impact on the device's battery and WiFi/mobile radios, and optimizing resources by lowering the amount of bandwidth used, thereby minimizing the monetary cost of mobile data while running periodic tests; in this way it is possible to counter a lack of Internet connectivity in real time if needed. The user is also given a variety of options to use the application in the best-fitting way; among these features is the possibility to work on a defined daily time range and to choose the action the app should take when this period ends. The network's maximum speed is used as the base parameter for choosing the wireless network: a network speed test is run, so the user always knows the quality of the connection to the access point. The hypothesis was supported by the satisfaction of the users who tested the application's accuracy and its main concept.

Víctor Bauz, Jorge Latacunga, Graciela Guerrero
Indoor Location Using Bluetooth Low Energy Beacons

Location data plays an important role in several applications embedded in our digital living. These applications usually take advantage of the Global Positioning System (GPS). However, GPS is not targeted at indoor location; therefore, this paper presents an alternative system based on Bluetooth Low Energy (BLE) beacons that, together with bluetooth-enabled smartphones, allows the development of low-cost and accurate location-aware applications for indoor scenarios. The paper describes the challenges associated with the system deployment and presents algorithms to improve the distance estimation process as the user moves around the smart space. The evaluation performed shows that this approach achieves good results in noise reduction and movement adaptation, allowing close tracking of the indoor user position.
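BLE distance estimation is commonly based on the log-distance path-loss model, with some smoothing applied to the noisy RSSI samples first. The following is a generic sketch of that idea, not the paper's algorithms; the calibration constant (RSSI at 1 m) and path-loss exponent below are typical illustrative values, not measured ones:

```python
def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: rssi = tx_power - 10*n*log10(d).

    tx_power is the calibrated RSSI at 1 m; n is the path-loss exponent
    (about 2 in free space, typically higher indoors).
    """
    return 10 ** ((tx_power - rssi) / (10 * n))

def smooth(samples, alpha=0.3):
    """Exponential moving average to damp RSSI noise before estimating."""
    est = samples[0]
    for s in samples[1:]:
        est = alpha * s + (1 - alpha) * est
    return est
```

For example, with these parameters an RSSI of -59 dBm maps to 1 m and -79 dBm to 10 m; in practice the smoothed RSSI, not a single raw sample, would be fed to the distance model.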

Ana Gomes, André Pinto, Christophe Soares, José M. Torres, Pedro Sobral, Rui S. Moreira
Use Case Scenarios of Dynamically Integrated Devices for Improving Human Experience in Collective Computing

The smart city concept emerged as a technology-supported response to the challenges posed by growing cities. To provide ambient intelligence, smart cities rely on ubiquitous and context-aware computing. Given the ubiquity of computing devices, the ability to connect objects and people into a smart context-aware system is a contemporary challenge. Our early research proposed a novel approach for the dynamic integration of devices into a system with context-aware behavior, inspired by concepts used in role theory. The idea behind our model is to embed the predefined internal structure of a system, given the context, into a mobile device to allow it to own a certain role in that system. The objective of the present paper is to prepare the ground for further prototyping of the model. We present ontology-based use-case scenarios utilizing the model to demonstrate its capabilities.

Rustam Kamberov, Carlos Granell, Vitor Santos
I Am in Here: Implicit Assumptions About Proximate Selection of Nearby Places

A mobile application whose functionality is somehow based on the ability to connect to locative digital environments, needs to be able to bridge between its current location and a digital counterpart of its current place. This is normally envisioned as a simple proximate selection process, where the application searches the surrounding environment and selects, or assists the user in the selection, of the appropriate place. However, with place-based services becoming truly ubiquitous, for any particular location, there will always be multiple potential target places for selection. In this work, we are concerned with these real-world challenges and their implications for wide-scale place selection. The objective is to investigate the main elements that may affect the reliability of proximate selection of nearby virtual places. Our research design is organized around 3 independent variables that may affect place selection: the characteristics of the place environment, the position of the query in relation to the ground zero position of the target place and the error introduced by positioning systems. The results show that, even for small distances, the ability of the system to identify the target place as the most relevant place or even to return the target place as a possible place for selection, can be severely affected, especially in environments with high place density. The key implication is that virtual discovery, by itself, is not a suitable method for place selection, and should be combined with other techniques, such as detection of physical proximity or explicit user indications.

Ana Inês Xavier, Rui José
Trusting Sensors Measurements in a WSN: An Approach Based on True and Group Deviation Estimation

Quality-of-service (QoS) and accuracy are of prime importance in WSN-based monitoring applications, as they may need to report real-time measurements leading to efficient decision making. The tiny sensors are often subject to measurement errors, i.e., noise, and prone to failures and attacks, as their physical characteristics change easily due to environmental abnormalities and mechanical shock. Faulty information may induce erroneous decisions, which may significantly impact the performance of the network and its service quality. Thus, the sensors need to be calibrated periodically and their data have to be trustworthy to support good decisions. In this paper, we propose a trust management framework based on true and group deviation metrics to analyze the accuracy and trustworthiness of the sensors' data. We derive an analytical model to calibrate the sensors periodically and to examine their trustworthiness. Our simulation results, testing a real-time fire monitoring system, show that the proposed trust framework efficiently produces 95% accurate and trusted measurements while limiting the frequency of sensor calibrations to a very low value and setting a lower boundary of 5% deviation from the true and group value metrics.
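The group-deviation idea can be illustrated with a small sketch: score each sensor by how far its reading lies from the group consensus (and, when available, from a known true value). The scoring function, the use of the median as the group value, and the tolerance/decay shape below are all hypothetical choices for illustration, not the paper's analytical model:

```python
def trust_scores(readings, true_value=None, tolerance=2.0):
    """Score each sensor by its deviation from the group median.

    When a reference true_value is known, the larger of the two
    deviations is used. Deviations within the tolerance map to a score
    of 1.0; larger deviations decay toward 0.
    """
    ordered = sorted(readings)
    m = len(ordered)
    median = (ordered[m // 2] if m % 2 else
              (ordered[m // 2 - 1] + ordered[m // 2]) / 2)
    scores = []
    for r in readings:
        dev = abs(r - median)                    # group deviation
        if true_value is not None:
            dev = max(dev, abs(r - true_value))  # true deviation
        scores.append(1.0 / (1.0 + max(0.0, dev - tolerance)))
    return scores
```

In a fire-monitoring deployment, a sensor whose score stays low across rounds would be flagged for recalibration rather than trusted in the fused measurement.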

Noureddine Boudriga, Paulvanna N. Marimuthu, Sami J. Habib
How Insufficient Send Socket Buffer Affects MPTCP Performance over Paths with Different Delay

Recently, multipath transport protocols such as Multipath TCP have become increasingly important. Multipath TCP allows more than one TCP connection via different paths to compose one Multipath TCP communication. However, it has some problems when those paths have different delays. In particular, limited buffer space at either the sender or the receiver may degrade throughput due to head-of-line blocking. Our previous paper pointed out that insufficient send socket buffers and insufficient receive socket buffers produce different patterns of performance degradation, and that an insufficient send socket buffer gives poorer throughput. This paper extends the performance analysis of our previous paper to conditions with various combinations of send socket buffer size and transmission delay. It gives a more detailed analysis using the Multipath TCP level sequence number and congestion window size, and suggests reasons for the performance degradation.

Toshihiko Kato, Adhikari Diwakar, Ryo Yamamoto, Satoshi Ohzahata, Nobuo Suzuki
Modeling and Defining the Pervasive Games and Its Components from a Perspective of the Player Experience

The rapid growth of the Pervasive Games (PG) field has allowed people to enjoy these games daily in different ways. However, developers and researchers still do not have a complete understanding of their components and features. These games have evolved in many ways depending on their field of application, which is why the term PG has a different definition in each application field depending on its targets, the devices used and the research context. That is understandable from a technical point of view, since every research field needs to implement its own characteristics. A previous literature review was carried out [1] aiming to find definitions, methodologies for building PG, and software metrics for them. Since no definition based on the player experience was found, we decided to delve into this topic. In this paper, we present a summary of the definitions of PG given by several researchers in order to check and collect their particularities. In addition, we present a conceptual diagram of the components of PG, which has been the starting point for building an ontology to represent the meaning of PG and its components.

Jeferson Arango-López, Francisco Luis Gutiérrez Vela, Cesar Alberto Collazos, Fernando Moreira
Ubiquitous Systems as a Learning Context to Promote Innovation Skills in ICT Students

The ubiquitous presence of ICT in our lives calls for professionals who have a deep understanding of technology, but who are also able to understand and frame the relevant contributions that are needed from many other disciplines. In this work, we report on our experience of teaching Ubiquitous Systems to ICT students. We argue that a course on Ubiquitous Systems provides an excellent learning context for introducing ICT students to this broader view of ICT innovation, allowing them to explore the valuable and systemic use of information technology in the real world. Our study addresses 4 main themes that recurrently emerge as fundamental issues in the course design: technical scope, selection of project topics, multidisciplinary work, and project organization. We gathered quantitative and qualitative data over 3 editions of this course to characterise the key design challenges associated with each of these themes. The result is a structured set of insights that may inform others in the design of similar courses, providing a framework to reason about the key design decisions and their effect on the ability to promote advanced competences in ICT students.

Rui José, Helena Rodrigues

Intelligent and Decision Support Systems

Frontmatter
Extended Association Rules in Semantic Vector Spaces for Sentiment Classification

Sentiment analysis is a field that has experienced considerable growth over the last decade. This area of research attempts to determine the opinions of people on something or someone. This article introduces a novel technique for association rule extraction in text called Extended Association Rules in Semantic Vector Spaces (AR-SVS). This new method is based on the construction of association rules, which are extended through a similarity criterion for terms represented in a semantic vector space. The method was evaluated on a sentiment analysis data set consisting of scientific paper reviews. A quantitative and qualitative analysis is carried out with respect to the classification performance and the generated rules. The results show that the method is competitive with respect to the baselines provided by Naive Bayes (NB) and Support Vector Machines (SVM).

Brian Keith Norambuena, Claudio Meneses Villegas
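For readers unfamiliar with the underlying machinery, a minimal sketch of the support/confidence computation that classical association rules (and thus AR-SVS) build on is shown below. The toy "documents" and the rule are invented for illustration; the semantic-vector-space extension described in the abstract is not shown.

```python
# Toy association-rule metrics over term sets. The data is invented;
# this only illustrates the classical support/confidence machinery.

docs = [  # each "transaction" is the set of terms in one review
    {"good", "clear", "accept"},
    {"good", "novel", "accept"},
    {"unclear", "reject"},
    {"good", "clear", "novel"},
]

def support(itemset):
    """Fraction of documents containing every term of the itemset."""
    return sum(itemset <= d for d in docs) / len(docs)

def confidence(lhs, rhs):
    """Conditional frequency of rhs given lhs."""
    return support(lhs | rhs) / support(lhs)

# Rule {"good"} -> {"accept"}: held in 2 of 4 docs, out of 3 docs with "good"
lhs, rhs = {"good"}, {"accept"}
s, c = support(lhs | rhs), confidence(lhs, rhs)
```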
Technologies Sustainability Modeling

Nowadays, practically every real system that reflects objective reality is of a sociotechnical nature. Such a system is the result of the collaboration of technical and social subsystems, as well as their symbiosis. The existence of a sociotechnical system is conditioned by the existence of a digital society of technology and knowledge. Technology is one of the pillars of today’s society, so it is important to predict which technologies will be accepted in society, which will be sustainable, and which of them are worth investing in. There are still few validated methodologies that allow a credible sustainability forecast for a new or existing technology. The article deals with the Integrated Acceptance and Sustainability Assessment Methodology (IASAM), based on system dynamics simulation, which is one possible solution that eases decision making for potential investors and policy planners.

Egils Ginters, Dace Aizstrauta
Blood Cell Classification Using the Hough Transform and Convolutional Neural Networks

The detection of red blood cells in blood samples can be crucial for detecting diseases in their early stages. The use of image processing techniques can accelerate and improve the effectiveness and efficiency of this detection. In this work, the use of the Circle Hough transform for cell detection and of artificial neural networks for their identification as red blood cells is proposed. Specifically, the application of multilayer perceptrons (MLPs) as a standard classification technique is compared with newer proposals from deep learning, namely convolutional neural networks (CNNs). The different experiments carried out reveal a high classification rate and show promising results after the application of the CNNs.

Miguel A. Molina-Cabello, Ezequiel López-Rubio, Rafael M. Luque-Baena, María Jesús Rodríguez-Espinosa, Karl Thurnhofer-Hemsi
The Informational Platform of the São Paulo State Innovation Ecosystem: A Semantic Computing Model in Support of Innovation

In the state of São Paulo, Brazil, the innovation scenario is based on the regulations of the Sistema Paulista de Ambientes de Inovação (System of Innovation Environments of the State of São Paulo), which apply to all Formal Environments for Innovation that encourage innovation initiatives in organizations. Aiming to expand articulation and to encourage collective and collaborative construction processes among the main innovation actors in the state of São Paulo, this research presents an Information Platform of the Innovation Ecosystem of the State of São Paulo that, drawing on information sources provided by innovation actors and using information, computing and semantic technologies, aggregates information and makes information services available to those actors. Through automatic data extraction from digital information environments and the provision of information services generated by semantic and data technologies, the platform allows organizations to access the data inputs needed to foster innovation projects.

Elvis Fusco, Marcos Luiz Mucheroni
Design and Ex ante Evaluation of an Architecture for Self-adaptive Model-Based DSS

The quality of the decision support provided by a model-based Decision Support System (DSS) depends fundamentally on valid and up-to-date models. A changing business environment can affect the validity of model components, which could cause incorrect model output. This paper addresses this problem by focusing on the self-adaptive property as a potential approach. To keep the decision-support model as close as possible to a dynamic business environment, the principles of self-adaptive systems are embodied in an interconnected Model-/System-Controller (MoSyCo) architecture designed around DSS models. The design of the artifact is driven by a deduction of the problem characteristics to specify the components of the intended architecture. The ex ante design evaluation is conducted in accordance with the stepwise evaluation of Sonnenberg and vom Brocke and considers a survey of 50 practitioners from the DSS domain.

Marcel-Philippe Breuer
Generic POLCA: An Assessment of the Pool Sequencing Decision for Job Release

We present a simulation study assessing the impact of pool sequencing rules for job release in a make-to-order general flow shop under the Generic POLCA order release and materials flow control system. Four pool sequencing rules are tested when the workload released to the shop floor is measured: (1) in jobs; and (2) in processing time units. Performance results, based on both the ability to deliver jobs on time and the ability to provide short delivery times, show that a capacity slack rule based on the corrected aggregate workload performs best.

Silvio Carmo-Silva, Nuno O. Fernandes
Iterative Optimization-Based Simulation: A Decision Support Tool for Job Release

Job release is an essential scheduling function and a core part of every production planning and control system. Essentially, job release determines when, and which, jobs to release onto the shop floor, in such a way that a balanced and restricted workload is achieved. In this paper, an Iterative Optimization-based Simulation (IOS) decision support tool is proposed for job release. This is in line with the Industry 4.0 paradigm, allowing the autonomous selection of jobs based on the current shop floor situation. The decision support tool is implemented using SIMIO as the simulation manager, MATLAB as the optimization manager and MySQL as the database manager.

Nuno O. Fernandes, Mohammad Dehghanimohammadabadi, S. Carmo Silva
A Dive into the Specific Electric Energy Consumption in Steelworks

The paper describes an application of optimization techniques to the minimization of the specific electrical energy consumption related to the production of steel in a steelworks situated in Italy. The major electrical consumption derives from two internal plants: the Electric Arc Furnace and the Ladle Furnace. This work addresses the problem of finding the best settings (based on predefined models) to produce a specific steel, mainly characterized by its steel grade and quality, with the minimum energy consumption.

C. Mocci, A. Maddaloni, M. Vannucci, S. Cateni, V. Colla
Short-Term Simulation in Healthcare Management with Support of the Process Mining

Traditionally, simulation models serve long-term decision making and are built from manually collected statistical data, equipment specifications, and so on. This lengthens model construction, which makes simulation hard to justify for short-term decision making. However, hospital environments equipped with data collection and storage software allow process mining techniques to reliably capture how processes are actually being executed, and this facilitates the rapid construction of simulation models that can support short-term decision making. Due to the characteristics of the processes and the high variability of demand in emergency care, operational decisions come to the fore. Accordingly, this study presents a short-term simulation framework, supported by process mining, to meet the demand of patients in emergency care.

Fábio Pegoraro, Eduardo Alves Portela Santos, Eduardo de Freitas Rocha Loures, Gabriela da Silva Dias, Lucas Matheus dos Santos, Renata Oliveira Coelho
Improving Productive Processes Using a Process Mining Approach

Today’s companies face great challenges when attempting to conquer business markets, with their demands on product quality and price. However, when a company maintains high efficiency levels in its productive processes, this challenge becomes much simpler. The great availability of data currently found in industrial plants provides very interesting support for facing this challenge, when combined with new technologies such as process mining. This paper presents a case study in which recent process mining techniques were applied to a very particular productive process characterized by its low frequency and heterogeneity. To do this, we made some changes to the “L* life-cycle model” methodology, applying process mining to identify tasks with unsatisfactory performance levels and analyzing the most relevant and critical aspects that influence them.

Ricardo Ribeiro, César Analide, Orlando Belo
Data Prediction of a Wind Turbine Using ANNs

The great growth of wind power plants has drawn much attention to operation and maintenance problems. Maintenance of a wind turbine is of paramount importance to minimize potential problems. This work presents a methodology for predicting the behavior of wind turbines based on measured data. The prediction was made using Artificial Neural Networks (ANNs), with the coefficient of determination (R²) used to select the best fit to the data. The database used comes from a turbine model by the wind turbine manufacturer ENERCON, and the research is of the exploratory type. The predicted information is essential for supporting preventive maintenance whenever the technique indicates an anomaly. The models used generated good results.

Darielson A. Souza, Alanio F. Lima, Adriano P. Maranhão, Thiago P. Maranhão, Luís O. N. Teles, Flavio R. S. Nunes, Jefferson J. I. Souza, Antonio T. S. Brito
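The R² goodness-of-fit measure mentioned in the abstract above can be sketched as follows; the observed and predicted series are invented for illustration and do not come from the paper's data.

```python
# Coefficient of determination (R^2): 1 - SS_res / SS_tot.
# The sample series below are invented, not turbine measurements.

def r_squared(observed, predicted):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

observed  = [3.0, 4.5, 6.0, 7.5]   # e.g. measured output
predicted = [3.1, 4.4, 6.2, 7.4]   # model output
score = r_squared(observed, predicted)
```

A score close to 1 indicates the model explains nearly all the variance in the observations, which is the selection criterion the abstract alludes to.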
Case Study on Modeling the Silver and Nasdaq Financial Time Series with Simulated Annealing

This paper reports a case study on modeling the SPDR Silver Trust (SLV) and Nasdaq Composite Index time series using an agent-based financial system tuned with simulated annealing. We show how adding financial information to the modeling system can significantly improve the modeling results. The learning system LFABS, previously developed by the author, is used as a testbed for the empirical evaluation of the proposed methodology on the two case studies.

Filippo Neri
Partitioning and Bucketing in Hive-Based Big Data Warehouses

Hive is a tool that allows the implementation of Data Warehouses in Big Data contexts, organizing data into tables, partitions and buckets. Some studies have been conducted to understand ways of optimizing the performance of data storage and processing techniques/technologies for Big Data Warehouses. However, few of these studies explore whether the way data is structured has any influence on how Hive responds to queries. Thus, this work investigates the impact of creating partitions and buckets on the processing times of Hive-based Big Data Warehouses. The results obtained with the application of different modelling and organization strategies in Hive reinforce the advantages associated with implementing Big Data Warehouses based on denormalized models and, also, the potential benefit of adequate partitioning, which, once aligned with the filters frequently applied to the data, can significantly decrease processing times. In contrast, the use of bucketing techniques showed no evidence of significant advantages.

Eduarda Costa, Carlos Costa, Maribel Yasmina Santos
Big Data Analytics in IOT: Challenges, Open Research Issues and Tools

Terabytes of data are generated day-to-day by modern information systems, cloud computing and digital technologies, as the number of Internet-connected devices grows. However, the analysis of these massive data requires many efforts at multiple levels for knowledge extraction and decision making. Therefore, Big Data Analytics is a current area of research and development that has become increasingly important. This article investigates cutting-edge research efforts aimed at analyzing Internet of Things (IoT) data. The basic objective of this article is to explore the potential impact of Big Data challenges, the research efforts directed towards the analysis of IoT data, and the various tools associated with its analysis. As a result, this article suggests the use of platforms to explore big data in its numerous stages and to better understand the knowledge we can draw from the data, which opens a new horizon for researchers to develop solutions based on open research challenges and topics.

Fabián Constante Nicolalde, Fernando Silva, Boris Herrera, António Pereira
The Role of Perceived Control, Enjoyment, Cost, Sustainability and Trust on Intention to Use Smart Meters: An Empirical Study Using SEM-PLS

Smart Meters are capable of collecting, storing, and analyzing electricity consumption data in real-time and of electronically transmitting data between the electricity provider and the electricity end user. Despite its potential, smart meter technology is in its early adoption stage in many developing countries, and little is known about residents’ acceptance and usage of smart meters in those countries. Thus, this research aimed to fill this gap by studying the important factors that influence residents’ intentions to use smart meters in Jordan. A quantitative approach was followed by obtaining 242 survey responses and statistically testing the associated hypotheses using SEM-PLS. Results revealed that perceived control, perceived enjoyment, sustainability and trust can significantly and positively influence residents’ intentions to use smart meters. However, perceived cost was not found to have a significant negative influence on intention to use. Theoretical and practical implications are indicated, and directions of future research are specified afterwards.

Ahmed Shuhaiber
Evaluating Decision Support Systems’ Effect on User Learning: An Exploratory Study

The main purpose of this paper is to assess the effect of DSS use on decision makers’ learning, with the aim of proposing user learning as a measure of DSS success. This study extends previous work on both the behavioral/cognitive effects of DSS on users and DSS evaluation. The data collected during the usage of the GASFIN DSS enabled us to assess the learning acquisition process by monitoring changes in the decision-making process and its outcomes. The results confirm the improvement of users’ learning capabilities over regular system usage. Implications for future research on DSS evaluation are proposed.

Khaoula Boukhayma, Abdellah Elmanouar
A Unified System for Clinical Guideline Management and Execution

There are several approaches to the representation and execution of Computer-Interpretable Guidelines (CIGs) that offer the possibility of acquiring, executing and editing them. Many CIG approaches aim to represent Clinical Practice Guidelines (CPGs) by computationally formalising the knowledge they enclose, so that it is suitable for integration into Clinical Decision Support Systems (CDSS). However, the current approaches lack a unified workflow for the management and execution of CIGs. Besides characterising these limitations and identifying improvements to include in future tools, this work describes a unified architecture for CIG management and execution, an approach that supports CIG acquisition, editing and execution.

António Silva, Tiago Oliveira, Filipe Gonçalves, José Neves, Ken Satoh, Paulo Novais
Computer Aided Wound Area Detection System for Dermatological Images

Research shows that in the last decade the focus on computer-assisted diagnosis of skin disorders has increased significantly as a result of improvements in skin imaging technology and the development of compatible image processing techniques. More accurate treatments provided by means of computer-assisted diagnostic systems increase patients’ chances of recovery and survival. Image processing techniques used in these systems facilitate the detection of wound areas. In this study, a wound detection system is proposed for dermatological images, applying, in sequence, an adaptive weighted median filter (AWMF), Otsu’s thresholding, and an implementation of the Canny edge detection algorithm using the Sobel kernel. The effectiveness of the system is tested on different dermatological datasets. The obtained values are analyzed with the Peak Signal to Noise Ratio (PSNR) and Correlation Coefficient (CC) metrics, confirming that the system works accurately on various datasets.

Sümeyya İlkin, Fidan Kaya Gülağız, Fatma Selin Hangişi, Suhap Şahin
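As an illustration of one stage of the pipeline described above, a pure-Python sketch of Otsu's thresholding is shown below (the AWMF and Canny stages are omitted, and the toy bimodal histogram is invented; a real system would operate on image arrays via an image library).

```python
# Otsu's method: pick the gray level that maximizes the between-class
# variance of background vs. foreground pixels. Toy data, not the paper's.

def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]              # background pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg          # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy image: dark background around 40-45, bright region near 200
pixels = [40] * 50 + [45] * 30 + [200] * 15 + [210] * 5
t = otsu_threshold(pixels)
```

The threshold lands at the upper edge of the dark mode, cleanly separating the two intensity populations.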
A Novel Curvature Feature Embedded Level Set Method for Image Segmentation of Coronary Angiograms

Segmentation methods in medical image processing are usually hampered by low contrast and intensity inhomogeneity. Several image segmentation methods rely on region-based segmentation, but these algorithms mostly depend on the quality of the image. This paper presents an improved level set method for image segmentation that reduces the effect of noise. To achieve this, a curvature feature energy term is embedded in the standard level set energy function. The proposed method is applied to coronary angiograms provided by the Cardiac Department of ISRA University Hospital, Pakistan. Extensive evaluation of these images demonstrates the robustness and efficiency of the proposed method over previous work. Moreover, the method gives a better trade-off between accuracy and implementation time than related work.

Mehboob Khokhar, Shahnawaz Talpur, Sunder Ali Khowaja, Rizwan Ali Shah
StratVision - A Framework for Strategic Vision Formalization

New concepts and software tools for supporting the acquisition of strategic skills in heuristic games are described. The formal authoring of Chess patterns is presented as a key element underlying a human-like style of playing and as a meta-cognitive task for learners and grandmasters to reflect on the patterns stored in their long-term memory. Multiple-external-representation concepts are used in the design of the environment. Few past works in the scientific literature apply knowledge at the level of strategic vision to game engines. Such concepts can also be applied to the administrative sciences and to medical disease diagnosis. Finally, future research perspectives are discussed.

Luis Carlos Ferreira Bueno, Bruno Muller, Alexandre Ibrahim Direne
AdaptHAs: Adapting Theme and Activity Selections for a Co-creation Process for High Ability Students

High ability or gifted students with learning difficulties? This is a statement that many people consider incoherent given the characteristics of these students, but it is more common than we think. One way of helping high ability students to develop their skills and become more motivated about their learning process is to encourage them to be more active in creating learning activities through a co-creation process. However, a co-creation process in itself is not enough; it is important to ensure that the students are the real protagonists by adapting the process to their characteristics, interests, needs, goals, personalities, multiple intelligences and cognitive styles. This paper presents a theoretical proposal for such an adaptation.

Mery Yolima Uribe-Rios, Teodor Jové, Ramon Fabregat, Juan Pablo Meneses-Ortegón
Brain Waves Processing, Analysis and Acquisition to Diagnose Stress Level in the Work Environment

In recent years, computers and brainwave acquisition have become a major means of improving human-computer interaction, allowing us to understand the emotions of an individual. In this research, the analysis of a worker’s stress level is proposed using a non-invasive head-mounted device called Emotiv Insight, which can connect via a Brain-Computer Interface (BCI), represent different facial gestures and interpret brain signals. Once acquired, these electroencephalographic (EEG) signals were analyzed, and the results allowed us to identify the responses generated by an individual during a test with high levels of stress.

Christian Ubilluz, Ramiro Delgado, Diego Marcillo, Tatiana Noboa

Big Data Analytics and Applications

Frontmatter
Improvement of Implemented Infrastructure for Streaming Outlier Detection in Big Data with ELK Stack

Nowadays, the use of the Internet is constantly increasing the amount of data generated. As a result, the need to analyze this data has emerged, and we face a new phenomenon known as Big Data. This research focuses on finding an appropriate architecture for real-time big data analytics, whose main task is to detect anomalies in real-time data. We used and analyzed several tools in order to find the best one; in this paper we use Timelion and compare it with Fluentd, the tool we used in previous research [12], and show the reasons why Timelion is better than Fluentd. Anomaly detection in real-time big data is a problem faced by many organizations and a challenge for researchers as well. Our research develops an infrastructure for monitoring the application server of e-dnevnik (the national education system of Macedonia) and for detecting errors in order to scale up performance. To enable this infrastructure to detect anomalies in streaming data, we implement different anomaly detection algorithms in Timelion. Another important aspect is how to visualize the results. In this paper, we show the visualization of an e-dnevnik log using Logstash, Elasticsearch and Kibana, and how Timelion helps us identify anomalies in real time.

Zirije Hasani, Jakup Fondaj
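As a toy illustration of the kind of streaming anomaly detection the abstract refers to, the sketch below flags a point when it deviates from a rolling mean by more than k standard deviations. The rolling z-score rule, the window size and the threshold are assumptions for illustration, not the algorithms the authors implemented in Timelion.

```python
# Hypothetical rolling z-score anomaly detector over a data stream.
# Window size and k are invented defaults, not the authors' settings.

from collections import deque
import math

def anomalies(stream, window=5, k=3.0):
    buf = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(stream):
        if len(buf) == buf.maxlen:
            mean = sum(buf) / len(buf)
            var = sum((v - mean) ** 2 for v in buf) / len(buf)
            std = math.sqrt(var)
            # flag the point if it falls far outside the recent window
            if std > 0 and abs(x - mean) > k * std:
                flagged.append(i)
        buf.append(x)
    return flagged

idx = anomalies([10, 11, 10, 12, 11, 10, 11, 90, 10, 11])
```

The single spike in the stream is the only index flagged; steadier variation stays within the k-sigma band.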
Setting up a Mechanism for Predicting Automobile Customer Defection at SAHAM Insurance (Cameroon)

As markets become more competitive, companies have realized the need to manage the loss of customers (churn), especially in terms of its prediction. In a data mining framework, the main challenge is the selection of the variables and of the technique adapted to the studied context. This article examines the case of SAHAM insurance and uses ANOVA, the chi-square test and Pearson correlation tables for variable selection. To make an objective choice of technique among the candidates, the multi-criteria decision aid method PROMETHEE-GAIA was used. With the aim of improving the initial model, whose results were mixed, the data set was separated into two groups: individual customers and corporations. After computing the new models, we observe that performance is generally better on the group of individual customers than with the previous global model and than on corporations.

Rhode Ghislaine Nguewo Ngassam, Jean Robert Kala Kamdjoug, Samuel Fosso Wamba
Quality Analysis of Urban Transit System in St. Petersburg

A constant problem in big cities is the need to develop and enhance urban transit systems at a good pace. The main task is to improve the comfort of passengers using public transport. There are a number of indicators by which to assess the convenience of the transport system of a metropolis. To compute these indicators, we analyzed data obtained from municipal information systems: the public transport payment system and the transport tracking system. We evaluated the following indicators: rush hours during the day, the average number of trips made by passengers of each group per month, a transportation comfort index, the most and least comfortable districts of the city, the interchange coefficient for regular trips and ordinary multimodal trips (the average number of single trips within a multimodal trip), average regular trip time consumption, etc.

Natalia Grafeeva, Innokenty Tretyakov, Elena Mikhailova
Density Based Clustering for Satisfiability Solving

In this paper, we explore data mining techniques for preprocessing Satisfiability (SAT) problem instances, reducing their complexity and allowing an easier resolution. Our study started with an exploration of the distribution of variables over clauses, where we defined two kinds of distribution. The first represents a space where variables are divided into dense regions and sparse or empty regions. The second defines a space where the variables are distributed randomly, filling almost the entire space. This exploration led us to consider a different, appropriate data mining technique for each of the two distributions: density-based clustering for the first, where high-density regions are considered clusters and sparse regions noise; and grid clustering for the second, where the space is treated as a grid and each cell represents a cluster. The presented work is a modelling of density-based clustering for SAT, which tends to reduce the problem’s complexity by clustering the problem instance into subproblems that can be solved separately. Experiments were conducted on some well-known benchmarks. The results show the impact of the data mining preprocessing and, especially, of using the appropriate technique according to the kind of data distribution.

Celia Hireche, Habiba Drias
System Evaluation of Construction Methods for Multi-class Problems Using Binary Classifiers

Construction methods for multi-valued classification (multi-class) systems using binary classifiers are discussed and evaluated by a trade-off model for system evaluation based on rate-distortion theory. Suppose the multi-class systems consist of M (≥ 3) categories and N (≥ M − 1) binary classifiers; then they can be represented by a matrix W, given by a table of M code words of length N, called a code word table. For a document classification task, the relationship between the probability of classification error P_e and the number of binary classifiers N for a given M is investigated, and we show that our constructed systems satisfy desirable properties such as being “Flexible” and “Elastic”. In particular, modified Reed-Muller codes perform well: they are shown to be “Effective elastic”. As a second application we consider a hand-written character recognition task, and we show that the desirable properties are also satisfied there.

Shigeichi Hirasawa, Gendo Kumoi, Manabu Kobayashi, Masayuki Goto, Hiroshige Inazumi
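The code word table formulation can be illustrated with a small, invented example: each of M classes gets a length-N binary code word (one bit per binary classifier), and a sample is assigned to the class whose code word is nearest in Hamming distance to the N classifier outputs. The 3-class table below is hypothetical, not one of the paper's codes.

```python
# Error-correcting-output-code style decoding over an invented
# code word table W (M = 3 classes, N = 4 binary classifiers).

W = {
    "sports":   [0, 0, 1, 1],
    "politics": [0, 1, 0, 1],
    "science":  [1, 0, 0, 0],
}

def hamming(a, b):
    """Number of positions where two code words differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(outputs):
    """Map the N binary-classifier outputs to the nearest code word."""
    return min(W, key=lambda c: hamming(W[c], outputs))

label = decode([0, 0, 1, 0])   # one classifier flipped a bit of "sports"
```

Because the code words are spread apart, a single flipped classifier output still decodes to the correct class; this redundancy is what the trade-off between N and P_e in the abstract quantifies.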
Exploring Data Analytics of Data Variety

The Internet gives organizations’ managers access to large amounts of data, and these data are presented in different formats, i.e., with data variety: structured, semi-structured and unstructured. This variety derives partly from social networks, but not only: machines are also capable of sharing information among themselves, or even with people. The objective of this paper is to understand how to retrieve information from the analysis of data with variety. An experiment was carried out based on a dataset with two distinct data types: images of and comments on cars. Data analysis techniques were used, namely Natural Language Processing to identify patterns, and Sentiment and Emotion Analysis. An image recognition technique was used to associate a car model with a category. Next, OLAP cubes and their visualization through dashboards were created. This paper concludes that it is possible to extract a set of relevant information, namely identifying which cars people like more or less, among other information.

Tiago Cruz, Jorge Oliveira e Sá, José Luís Pereira
From KDD to KUBD: Big Data Characteristics Within the KDD Process Steps

Big Data is the current challenge for the computing field, not only because of the volume of data involved but also because of its promise: analyzing and interpreting massive data to generate useful, strategic knowledge in various fields such as security, sales and education. However, the massive volume of data, in addition to other Big Data characteristics such as variety, velocity and variability, requires a whole new set of techniques and technologies, not yet fully available, to effectively extract the desired knowledge. The KDD (Knowledge Discovery in Databases) process has achieved excellent results in the classical database context, which is why we examine the possibility of adapting it to the Big Data context to take advantage of its strong and effective data processing techniques. We therefore introduce a new process, KUBD (Knowledge Unveiling in Big Data), inspired by the KDD process and adapted to the Big Data context.

Naima Lounes, Houria Oudghiri, Rachid Chalal, Walid-Khaled Hidouci
NoSQL2: SQL to NoSQL Databases

New database models, called NoSQL (Not Only SQL), are considered appropriate alternatives for managing and storing Big Data due to their efficiency, high scalability, availability and performance. Database administration involves tasks such as creating databases and objects, granting privileges and performing backup activities. Executing these tasks in NoSQL databases requires that database administrators (DBAs) have particular expertise in each of these databases, which exposes the problems that arise from DBAs' unfamiliarity with NoSQL environments. To contribute to this field, this paper presents NoSQL2, a middleware that performs management tasks expressed in SQL on different NoSQL databases. NoSQL2 allows DBAs to disassociate themselves from the particularities of accessing each NoSQL database, since it provides resources for converting SQL commands into each NoSQL database's proprietary syntax.

Jane Adriana, Maristela Holanda
Using Pinterest to Improve the Big Data User Experience - A Comparative Analysis in Healthcare

Technology has improved rapidly since cloud computing and Big Data were first introduced. However, research has found the user experience in the information retrieval process to be lacking. To this end, we carried out a comparative study of a single-portal design versus existing data search tools to determine whether it could increase the knowledge gathered from Big Data, using healthcare to illustrate our example. Comparisons of users' experiences of search results against other Big Data sources such as Google, WebMD and the CDC showed that customizing Pinterest to provide a single portal for search results led to improvements in users' knowledge of, and experience concerning, their healthcare issues.

Nancy Shipley, Joyram Chakraborty
Uncovering Aspects of Places for Fitness Activities Through Social Media

Nowadays, a growing number of people publicly share information about their fitness activities on social media platforms like Twitter or Facebook. These social networks can furnish people with useful information to get an overview of different geographic areas where they can practice different sport-related activities. In this study, we analyze 14 million tweets to identify places to perform fitness activities and to uncover their aspects from twitterers' opinions. To this end, we apply clustering analysis to uncover places where twitterers perform fitness activities, and then train a text classifier that achieves an F1 score of 76% in discriminating the aspects of fitness places. Using this information, recommender systems can provide useful information to local people or tourists who are looking for places to exercise.
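The place-discovery step can be sketched as a simple spatial grouping of geotagged tweets. The abstract does not specify the clustering algorithm, so this grid-based grouping is only a stand-in (density-based methods such as DBSCAN are a common choice for geotagged data); the coordinates below are made up.

```python
from collections import defaultdict

def grid_cluster(points, cell=0.01):
    """Group (lat, lon) points into grid cells of `cell` degrees.

    A simplified stand-in for the clustering analysis mentioned in the
    abstract; points that fall in the same cell form one candidate place.
    """
    clusters = defaultdict(list)
    for lat, lon in points:
        key = (round(lat / cell), round(lon / cell))
        clusters[key].append((lat, lon))
    return list(clusters.values())

# Two tight groups of tweet coordinates around two hypothetical fitness spots.
tweets = [(40.4168, -3.7038), (40.4169, -3.7037),
          (41.3874, 2.1686), (41.3875, 2.1687)]
groups = grid_cluster(tweets)
```

One limitation of the grid approach, and a reason density-based methods are often preferred, is that a dense group straddling a cell boundary gets split in two.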

Johnny Torres, Kevin Ortiz, Juan García, Carmen Vaca

Human-Computer Interaction

Frontmatter
How Different Versions of Layout and Complexity of Web Forms Affect Users After They Start It? A Pilot Experience

This paper presents a research work that analyzes the effect of redirecting users between two different versions of a web form after they have started the questionnaire. In this case, we used a web form proposed by the Spanish Observatory for Employability and Employment (OEEU), designed to gather information from Spanish graduates. The two versions differ as follows: one is very simple, while the other includes several changes drawn from the literature on users' trust, usability/user experience and layout design. To test the effect of redirecting users between the two versions of the web form, we took a group of users who had already started the questionnaire and redirected them to the other version; that is, we changed the web form version they were using and measured how this change affected them. The experiment has shown some promising results, which encourage us to enhance the study and extend it to larger populations and other kinds of changes in the user interfaces.

Juan Cruz-Benito, José Carlos Sánchez-Prieto, Andrea Vázquez-Ingelmo, Roberto Therón, Francisco J. García-Peñalvo, Martín Martín-González
Generation of Test Samples for Construction of Dashboard Design Guidelines: Impact of Color on Layout Balance

The metric-based evaluation of user interfaces is a promising way to quickly evaluate their usability and various other design aspects. However, the development of such metrics usually requires a sufficiently large training set of realistic-looking user interface samples, which might not always be easy to find. This paper describes a workflow for the preparation of such samples. It presents a configurable generator based on the composition of simple widgets into a screen according to a predefined model. It also describes a reusable library for the simple creation of widgets using the capabilities of the JavaScript framework Vue.js. The application of the implemented generator is then demonstrated on the generation of dashboard samples, which are used to show the significance of color in measuring layout balance.

Olena Pastushenko, Jiří Hynek, Tomáš Hruška
When You Write How People Want There Is No Guarantee of Success

The practice of sharing questions online has been called social query. One important factor that might increase the chance of receiving answers is attracting others' attention. To investigate this issue, we asked programmers what they expect to find in “good” programming questions and ended up with a list of sixteen suggestions on how to improve programming questions. However, we found that the presence of more “good” characteristics correlates with none of the questions' performance attributes (views, time to first response and number of answers). This means that, after a question is shared, features not necessarily related to its form have more influence on what happens to it than the presence of these so-called “good” characteristics. The experimental results are presented, providing useful insight for taking this investigation of social query further.

Franck Aragão, Patrick Silva, José Remígio, Cleyton Souza, Evandro Costa, Joseana Fechine
Human-Computer Interaction Communicability Evaluation Method Applied to Bioinformatics

Quality measures have different focuses and importance for distinct roles, such as developers, maintainers, purchasers and end users. The focus of a quality evaluation may be internal quality, external quality or quality in use; this last one is the quality of the software product from the user's point of view. Computer systems must satisfy determined quality criteria to ensure interface characteristics of interest to their users, and several evaluation methods can be applied to different objectives of quality in use. In bioinformatics, creating visual tools that can adequately represent the complexity and volume of biological data while preserving a high level of usability is a challenge. It is difficult to combine interface usability and data expressiveness, particularly with heterogeneous and highly connected data. 2Path is a case in point: it is a database of metabolic networks for terpenoid biosynthesis. Using evaluation by observation through the Communicability Evaluation Method (CEM), we measured the following quality criteria for an interface designed for 2Path: usability, user experience, accessibility, and communicability. The users were monitored during use of the system, and some of their utterances while using it were identified and mapped. We present here the results obtained and the perspectives for a new interface that aims to fully meet the post-evaluation requirements.

Gabriella Esteves, Waldeyr Silva, Maria Emilia Walter, Marcelo Brigido, Fernanda Lima
3D Virtual System Based on Cycling Tracks for Increasing Leg Strength in Children

A 3D virtual system based on cycling tracks is presented. The virtual environment is developed in Unity 3D, and two games are created with different levels of difficulty. The system is created to improve strength, resistance and muscle activation in children. Operation and usability tests were performed on four children between 5 and 9 years old. In the SEQ usability surveys, the outcomes (54.5 ± 0.34) show that users feel immersed and enjoy the game. The system motivates the user to continue using it.

Edwin Pruna, Ivón Escobar, Washington Quevedo, Andrés Acurio, Marco Pilatásig, Luis Mena, José Bucheli
3D Virtual System for Strengthening Lower and Upper Limbs in Children

A virtual system for child limb strengthening using the Kinect device is presented. An interactive environment was created using Unity 3D in order to perform movements with the upper and lower limbs. The system was tested by one user, who performed every exercise correctly, and this helped to strengthen the affected area. Additionally, graphs were generated to compare therapist movements, as an input signal, against user movements, as an output signal.

Edwin Pruna, Jenny Tigse, Alexandra Chuquitarco, Marco Pilatásig, Ivón Escobar, Eddie D. Galarza
Fine Motor Rehabilitation of Children Using the Leap Motion Device – Preliminary Usability Tests

A 3D virtual system is presented using the Leap Motion device and Unity 3D software. The system is based on three playful tasks that allow children to recover their fine motor skills in an entertaining way. It is designed to support movements such as flexion-extension of the fingers, supination-pronation, and ulnar and radial deviation. The games stimulate visual-motor coordination and concentration, which are important factors in the recovery of fine motor skills. The system was used by ten children aged between 6 and 10 years, with the support of a rehabilitation specialist. After they completed all the tasks, the SEQ (Single Ease Question) usability test was applied, with results of (60.8 ± 0.26). The test indicates that the system is well accepted for use in rehabilitation.

Ivón Escobar, Andrés Acurio, Edwin Pruna, Luis Mena, Marco Pilatásig, José Bucheli, Franklin Silva, Ricardo Robalino
Decision Making on Human Resource Management Systems

The objective of this study is to describe the interactions between individual-level responses (employees' trust in, and adhesion to, Human Resource (HR) Management practices) and organizational-level processes (managers' implementation of new practices, for instance, technologies). The need to understand employees' perceptions from an interactional perspective, correlating these variables with the perceptions of HR managers, constitutes an important field of research, integrating both perspectives in multi-level studies. In this paper we illustrate our initial research in a multi-level research project, presenting the results of a qualitative study of HR managers' perceptions of the social processes involved in HR and of their employees' acceptance of HR practices. We also analyzed perceptions that might lead to modifications of the HR system. Our results suggest that these perceptions influence the implementation or suspension of HR practices.

Ana Teresa Ferreira-Oliveira, José Keating, Isabel Silva
An Assessment Tool for Measuring Human Centered Design Adoption in Software Development Process

Human Centered Software Engineering (HCSE) is an approach that integrates Human Centered Design (HCD) and software engineering (SE), emphasizing users' needs and goals during the software engineering process. The obstacle faced by organizations is the lack of a systematic approach to HCD adoption that can guide software organizations in strategizing HCD adoption in their business operations. The aim of this study is to propose a Human Centered Design Adoption Model (HCDAM) that can provide a guideline for measuring an organization's capability to adopt HCD. The HCDAM was developed by identifying the HCD adoption levels and the HCD adoption processes through literature analysis. The adoption levels describe the stage at which the organization has adopted HCD, while the adoption processes describe the processes the organization has to go through in order to achieve a certain adoption level. The HCDAM is proposed as a measurement tool that enables a systematic approach to guide SE organizations in strategizing HCD adoption in their business operations in the pursuit of developing quality software.

Rogayah Abdul Majid, Nor Laila Md. Noor, Wan Adilah Wan Adnan
Possible Risks in Social Networks: Awareness of Future Law-Enforcement Officers

The main purpose of this article is to investigate the awareness of students, the future law-enforcement officers who will be expected to provide security for other people, of the risks to their own personal safety in digital space. The paper presents both theoretical considerations and empirical data from studies (completed in 2016 and 2017) aimed at investigating whether future law-enforcement students recognize the main safety risks in digital space. The study is important given how much of contemporary social, personal and professional life is carried out in digital spaces. If future law-enforcement officers are unable to recognize safety risks, they will, as a consequence, not be professional and prepared enough to advise and support citizens on issues that are, in some cases, starting to dominate the functioning of a person in the contemporary world.

Edita Butrime, Vaiva Zuzeviciute
Mobile Applications for Active Aging

Many countries, including several European states, are aging. This demographic change opens a variety of opportunities for innovation in products and services tailored to the needs of an aging population. This paper focuses on how ICT-based mobile applications can be used to promote active aging. The role that mobile computing can play in supporting everyday activities is increasingly recognized. It is therefore of major importance to develop solutions that extend the time that the elderly can live in their preferred environment by increasing their autonomy, comfort and mobility, while limiting the associated costs and the effects of a possible lack of caregiver human resources. This paper describes the state of the art of solutions for active aging of the elderly population through mobile applications, as well as the opportunities that mobile applications offer to improve the quality of life of the elderly and to support a cohesive and inclusive society.

Cláudia Ferreira, J. C. Silva, José Luís Silva
Affective Evaluation of Educational Lexicon in Spanish for Learning Systems

In recent years, several educational systems have integrated text-based affective feedback. However, analysis of the educational lexicon from an affective perspective is limited, and previous work classified words into only three categories: positive, negative or neutral. For that reason, this work proposes the construction of an educational lexicon in Spanish and the evaluation of its affectivity on the arousal-valence scale. The educational lexicon was built from the suggestions of 166 undergraduate students. Then another group of 185 undergraduate students evaluated each word/phrase on valence and arousal scales. An analysis by student gender and personality was conducted, and a clustering analysis was performed to categorize the words/phrases.

Samantha Jiménez, Reyes Juárez-Ramírez, Víctor H. Castillo, Alan Ramírez-Noriega, Sergio Inzunza

Ethics, Computers and Security

Frontmatter
Determinants of Cyber Security Use and Behavioral Intention: Case of the Cameroonian Public Administration

The development of information and communication technologies has brought in its wake an upsurge of cybercrime and has raised the need to take cyber security measures at all levels. One of them consists in placing the human being at the center of computer security, notably by studying the individual perceptions behind the willingness to perform acts of security, including cyber security. This research work uses a mixed method to determine the rationale behind the intention of Cameroonian authorities to adopt and implement cyber security measures. The theoretical underpinnings of this research are provided by the Unified Theory of Acceptance and Use of Technology and the Health Belief Model.

Doriane Micaela Andeme Bikoro, Samuel Fosso Wamba, Jean Robert Kala Kamdjoug
Scramble the Password Before You Type It

Passwords are widely used for digital authentication, but they raise a few conflicting issues: a strong password must be a long string without obvious patterns; the safe and convenient way to store a password is to memorize it; and people cannot easily remember strong passwords. To remember their passwords, people usually create weak passwords and reuse them across sites. Password servers are not as secure as we hope: millions of hashed passwords have been leaked and cracked in the last few years. It is therefore important to increase password strength on the client side. In this work, we propose a mechanism that does exactly this: a password and a few simple facts remembered by the user are used as input to create a strong password. The proposed mechanism also allows users to easily create strong, site-unique passwords.
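The general idea of client-side strengthening can be sketched with a standard key-derivation function. This is only a hedged illustration of the concept, not the authors' actual mechanism: a memorable password plus a remembered fact are stretched with PBKDF2, and the site name acts as a salt to make the output site-unique.

```python
import base64
import hashlib

def site_password(master: str, fact: str, site: str, length: int = 16) -> str:
    """Derive a strong, site-unique password on the client side.

    Illustrative sketch: PBKDF2 stretches the memorable secret so the
    derived string is long, patternless, and different for every site.
    """
    secret = (master + fact).encode()
    digest = hashlib.pbkdf2_hmac("sha256", secret, site.encode(), 100_000)
    return base64.b85encode(digest).decode()[:length]

pw = site_password("correct horse", "first pet: rex", "example.com")
```

Because the derivation is deterministic, the user re-creates the same strong password on demand instead of storing it anywhere.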

Jikai Li, Logan Stecker, Ethan Zeigler, Thomas Holland, Daan Liang
A Secure, Green and Optimized Authentication and Key Agreement Protocol for IMS Network

IP Multimedia Subsystem (IMS) is a prominent architectural framework for multimedia service delivery in 4G/5G networks. Authentication is a critical security mechanism that grants authorized users access to these services. As defined by the 3rd Generation Partnership Project (3GPP), the IMS Authentication and Key Agreement protocol (IMS-AKA) is the official authentication procedure in IMS. However, the procedure is prone to weaknesses in both security (disclosure of identities) and performance (complexity). This paper proposes an IMS-AKA+ protocol that effectively addresses the protection of users' identities by using key-less cryptography. Furthermore, the proposed algorithm significantly reduces the complexity of the authentication process through the use of Elliptic Curve Cryptography (ECC). Simulations were carried out for the IMS-AKA+ protocol and the original IMS-AKA protocol. The results showed that IMS-AKA+ reduces authentication time by up to 28% and saves 53% of storage space, with increased security and lower energy consumption.

Saliha Mallem, Chafia Yahiaoui

Health Informatics

Frontmatter
Obesity Cohorts Based on Comorbidities Extracted from Clinical Notes

Clinical notes provide a comprehensive and overall impression of the patient's health. However, the automatic extraction of information from these notes is challenging due to their narrative style. In this context, our goal was two-fold: first, to extract fourteen comorbidities related to obesity automatically from the i2b2 Obesity Challenge data using the MetaMap tool; and second, to identify patient cohorts by applying sparse K-means algorithms to the extracted data. The results showed averages of 0.86 for recall, 0.94 for precision, and 0.89 for F-score, indicating that MetaMap can be a good strategy for automatically extracting medical entities such as diseases or syndromes. Moreover, three types of cohorts could be identified based on the number of comorbidities and the percentage of patients suffering from them. The results also show that hypertension, diabetes, CAD, CHF, HCL, OSA, asthma, and GERD were the most prevalent diseases.
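As a quick sanity check on the reported extraction metrics, the F-score is the harmonic mean of precision and recall. Because the abstract reports averages over fourteen comorbidities, the F-score of the averaged precision and recall only approximates the reported average F-score of 0.89.

```python
def f_score(precision: float, recall: float) -> float:
    """F1: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Averages reported in the abstract: precision 0.94, recall 0.86.
f = f_score(0.94, 0.86)
print(round(f, 2))  # close to the reported average F-score of 0.89
```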

Ruth Reátegui, Sylvie Ratté, Estefanía Bautista-Valarezo, Victor Duque
Business Intelligence for Cancer Prevention and Control: A Case Study at the Brazilian National Cancer Institute

The number of healthcare organizations that successfully use data science in their clinical and operational decision-making processes is increasing substantially. Studies show that there is great potential in using analytical tools to support epidemiologic cancer research. The use of data mining to support the assessment of early detection programs is one of the main strategies of Brazil's cancer control program, and it motivated the Brazilian National Cancer Institute (INCA) to develop applications to improve decision-making processes. This article presents the development of Business Intelligence (BI) systems employed in the management, processing and analysis of large-scale data for cancer prevention and control.

Antônio Augusto Gonçalves, Cezar Cheng, Carlos Henrique Fernandes Martins, José Geraldo Pereira Barbosa, Sandro Luís Freire de Castro Silva
Data Mining Techniques in Diabetes Self-management: A Systematic Map

Data mining techniques (DMT) provide powerful tools to extract knowledge from data, helping in decision making. Medicine, like many other fields, uses DMT in diabetes, cardiology, cancer and other diseases. In this paper, we investigate the use of DMT in diabetes, in particular in diabetes self-management (DSM). The purpose is to conduct a systematic mapping study reviewing primary studies that investigate DMT in DSM. This mapping study aims to summarize and analyze knowledge on: (1) the years and sources of DSM publications, (2) the type of diabetes that received most attention, (3) the DM tasks and techniques most frequently used, and (4) the functionalities considered. A total of 57 papers published between 2000 and April 2017 were selected and analyzed with regard to four research questions. The study shows that prediction was by far the most used DM task and that Neural Networks were the most frequently used technique. Moreover, T1DM is by far the type of diabetes most addressed by the studies, as is the prediction of blood glucose.

Touria El Idrissi, Ali Idri, Zohra Bakkoury
Comparative Analysis Between Different Facial Authentication Tools for Assessing Their Integration in m-Health Mobile Applications

Security and privacy in the access to mobile health applications are a challenge that any company or health organization has to take into consideration when developing reliable and robust applications. In this line, facial authentication becomes a key piece in improving users' access while ensuring that they do not lose the privacy of their data to cyber-attacks or fraudulent users. The purpose of the current work is to compare two relevant facial-based mechanisms in order to select the most appropriate one for authentication security in our under-development Framework for developing M-health APps (FAMAP).

Francisco D. Guillén-Gámez, Iván García-Magariño, Guillermo Palacios-Navarro
SOCIAL Platform

This paper presents the project Social Cooperation for Integrated Assisted Living (SOCIAL), which aims at the development of a platform of eHealth applications to support the care networks of community-dwelling older adults. First, the requirements of the SOCIAL platform are discussed, and then the high-level system architecture is presented.

Manuel Sousa, Luísa Arieira, Alexandra Queirós, Ana Isabel Martins, Nelson Pacheco Rocha, Filipe Augusto, Filipa Duarte, Telmo Neves, António Damasceno
IAQ Evaluation Using an IoT CO2 Monitoring System for Enhanced Living Environments

Indoor air quality (IAQ) parameters are not only directly related to occupational health but also have a huge impact on quality of life. In particular, carbon dioxide (CO2) can be used as an important index of IAQ: besides its strong influence on public health, it may cause a great variety of health effects such as headaches, dizziness, restlessness, difficulty breathing, increased heart rate, elevated blood pressure, coma and asphyxia. In fact, since people spend about 90% of their lives indoors, it is extremely important to monitor the CO2 concentration in real time so that IAQ problems can be detected and interventions in the building taken quickly to improve the IAQ. The variation of CO2 in indoor living environments is in most situations related to low air renewal inside buildings; CO2 levels over 1000 ppm indicate a potential problem with indoor air. This paper presents iAirC, a solution for real-time CO2 monitoring based on an Internet of Things (IoT) architecture. The solution is composed of a hardware prototype for ambient data collection and web and smartphone interfaces for data consultation. The system performs real-time data collection, which is stored in the ThingSpeak platform, and its smartphone compatibility allows easier access to data in real time. The user can also check the latest data collected by the system and access the history of CO2 levels in a graphical representation. iAirC uses an open-source ESP8266 with 2.4 GHz Wi-Fi as its processing and communication unit and incorporates a CO2 sensor as its sensing unit.
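The data path to ThingSpeak can be illustrated by constructing the channel-update request that such a node would issue. This is a generic sketch based on ThingSpeak's public HTTP update API, not the iAirC firmware: the write key below is a placeholder, and the field number depends on the channel configuration. Only the request is built here, so no network access is needed.

```python
from urllib.parse import urlencode

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def build_update_url(api_key: str, co2_ppm: float) -> str:
    """Build a ThingSpeak channel-update request carrying a CO2 reading.

    On the deployed device the microcontroller would send this request
    over Wi-Fi; field1 is assumed to be the channel's CO2 field.
    """
    query = urlencode({"api_key": api_key, "field1": co2_ppm})
    return f"{THINGSPEAK_UPDATE}?{query}"

url = build_update_url("YOUR_WRITE_KEY", 1250.0)  # 1250 ppm: above the 1000 ppm alert level
```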

Gonçalo Marques, Rui Pitarma
Relationship Between Upper Arm Muscle Index and Upper Arm Dimensions in Blood Pressure Measurement in Symmetrical Upper Arms: Statistical and Classification and Regression Tree Analysis

Objective: to identify the influence of the upper arm muscle index (AMI) and upper arm dimensions on the measurement of blood pressure (BP). Methodology: 489 university students were evaluated in Divinópolis, Brazil; anthropometric and BP measurements were collected, and multiple linear regression and regression tree models were built using data mining techniques. Results: arm length (AL) and arm circumference (AC) showed a positive correlation with systolic blood pressure (SBP) and diastolic blood pressure (DBP). The AMI presented a positive correlation with SBP and a negative correlation with DBP. The regression tree showed interactions between BP and AL, AC and AMI. Conclusion: BP values in the right upper arm were higher than in the left upper arm in this population of healthy young adults. AL and AC were predictors of overestimation in the indirect measurement of SBP and DBP. AMI overestimates SBP and underestimates DBP. There were interactions between arm dimensions and BP.

Letícia Helena Januário, Alexandre Carlos Brandão Ramos, Paôla de Oliveira Souza, Rafael Duarte Coelho Santos, Helen Cristiny T. Couto Ribeiro, José Maria Parente de Oliveira, Hevilla Nobre Cezar
Towards a Conceptual Framework for the Evaluation of Telemedicine Satisfaction

Satisfaction is a key factor in evaluating the success of telemedicine systems. The objective of this research is to develop a conceptual framework to aid research in telemedicine satisfaction. This research performs a developmental review of the telemedicine satisfaction literature obtained from PubMed and Google Scholar. Results are synthesized by reviewers using a concept matrix. Findings support a conceptual framework for telemedicine satisfaction that includes: satisfaction dimensions, stakeholders, type of care, type of system, context and methodologies. The framework can be used by future studies for examining and reporting on telemedicine satisfaction.

Robert Garcia, Olayele Adelakun
Miscoding Alerts Within Hospital Datasets: An Unsupervised Machine Learning Approach

The appropriate funding of hospital services may depend upon grouping hospital episodes into Diagnosis Related Groups (DRGs). DRGs rely on the quality of the clinical data held in administrative healthcare databases, mainly proper diagnosis and procedure codes. This work proposes a methodology based on unsupervised machine learning and statistical methods to generate alerts for suspected cases of upcoding and undercoding in healthcare administrative databases. The administrative database, with a DRG assigned to each hospital episode, was split into homogeneous patient subgroups by applying decision-tree-based algorithms. The proportions of specific diagnosis and procedure codes were then compared within targeted subgroups to identify hospitals with abnormal distributions. Preliminary results indicate that the proposed methodology has the potential to automatically identify suspected upcoding and undercoding cases, as well as other relevant types of discrepancies in coding practices. Nevertheless, additional evaluation from the medical perspective needs to be incorporated into the methodology.
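The proportion-comparison step can be sketched as a simple outlier test within one patient subgroup. This is an illustrative simplification of the described methodology (which combines decision-tree subgrouping with statistical comparison); the hospital names and proportions below are made up.

```python
from statistics import mean, stdev

def flag_outlier_hospitals(proportions, z=2.0):
    """Flag hospitals whose code proportion deviates more than `z`
    standard deviations from the subgroup mean.

    `proportions` maps a hospital id to the share of its episodes, within
    one homogeneous patient subgroup, carrying a specific diagnosis code.
    """
    values = list(proportions.values())
    mu, sigma = mean(values), stdev(values)
    return [h for h, p in proportions.items() if abs(p - mu) > z * sigma]

# Illustrative proportions for one subgroup; H4 codes the diagnosis far
# more often than its peers, a pattern suggestive of upcoding.
props = {"H1": 0.10, "H2": 0.12, "H3": 0.11, "H4": 0.45,
         "H5": 0.09, "H6": 0.10, "H7": 0.11}
print(flag_outlier_hospitals(props))  # flags H4
```

A flagged hospital is only a suspect case: as the abstract notes, a medical review of such alerts is still required.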

Julio Souza, João Vasco Santos, Fernando Lopes, João Viana, Alberto Freitas
Data Preprocessing for Decision Making in Medical Informatics: Potential and Analysis

Clinical databases often comprise noisy, inconsistent, missing, imbalanced and high-dimensional data. These challenges may reduce the performance of DM techniques. Data preprocessing is therefore an essential step in applying DM algorithms to these medical datasets, making the data appropriate and suitable for mining. The objective is to carry out a systematic mapping study reviewing the use of preprocessing techniques on clinical datasets. As a result, 110 papers published between January 2000 and March 2017 were selected, analyzed and classified according to publication years and channels, research type and the preprocessing tasks used. This study shows that researchers have paid a considerable amount of attention to preprocessing in medical DM in the last decade and that a significant number of the selected studies used data reduction and cleaning preprocessing tasks.

H. Benhar, A. Idri, J. L. Fernández-Alemán
Assessing the Target’ Size and Drag Distance in Mobile Applications for Users with Autism

Users with Autism Spectrum Disorders (ASD) show great interest in, and operate with facility, technological devices like smartphones and tablets. As a result, the number of applications specially developed for these kinds of users keeps growing. Nevertheless, the creation of an application that adapts to user abilities is not a straightforward process, because users with these conditions have significant sensory-motor problems. This article focuses on identifying the optimal target size and drag distance that developers and designers can use when creating applications for users with ASD, to allow for easier interaction with screen elements. In the experiment performed, different target sizes and drag distances were compared. Based on the results, we suggest that 57 pixels is the minimum target size to support the interaction of users diagnosed with level 1 and level 2 Autism Spectrum Disorders. These results can be used as guidelines for interaction designers of mobile applications for autism.

Angeles Quezada, Reyes Juárez-Ramírez, Samantha Jiménez, Alan Ramírez-Noriega, Sergio Inzunza, Roberto Munoz
Perioperative Data Science: A Research Approach for Building Hospital Knowledge

Perioperative care is changing through advances in technology, with the aim of maximizing quality and value. Future transformations in care will be enabled by data and, consequently, by knowledge. This paper describes a knowledge management and data science research project and its results, based on a study applied to the perioperative department at Hospital Dr. Nélio Mendonça between 2013 and 2015. Conservative practices, such as manual registries, are limited in their scope for preoperative, intraoperative and postoperative decision making, in the discovery, extent and complexity of data, in analytical techniques, and in the translation or integration of knowledge into patient care. This study contributed to improving the perioperative decision-making process by integrating data science tools into the perioperative electronic system (PES) assembled. Before the PES implementation, only 1.2% of the nurses registered the preoperative visit; afterwards, 87.6% registered it. Regarding patient features, it was possible to assess anxiety and pain levels. A future conceptual model for perioperative decision support systems grounded in data science should be considered as a knowledge management tool.

Márcia Baptista, José Braga Vasconcelos, Álvaro Rocha, Rita Lemos, João Vidal Carvalho, Helena Gonçalves Jardim, António Quintal
A Novel Approach in Virtual Rehabilitation for Children with Cerebral Palsy: Evaluation of an Emotion Detection System

Emotional and behavioral alterations are common symptoms in children with Cerebral Palsy (CP). The main mental health disorders in this pathology are stress, anxiety, and depression. Since there is a strong relationship between behavioral and emotional symptoms, children with CP show difficulties communicating in terms of family, peer, and attention signs. Given the high prevalence of emotional and behavioral disorders and the few studies in Virtual Rehabilitation focused on the improvement of these alterations, it is necessary to analyze the use of new groundbreaking technological systems. The purpose of this paper is to analyze the behavioral and emotional symptoms of a child with CP. To do so, we use a novel system that recognizes emotions such as neutrality, happiness, and annoyance, together with the Strengths and Difficulties Questionnaire (SDQ). The preliminary outcomes encourage us to continue analyzing improvements in terms of emotional and behavioral problems.

Sergio Albiol-Pérez, Sandra Cano, Marlene Goncalves Da Silva, Erika Giselle Gutierrez, Cesar A. Collazos, Javier López Lombano, Elena Estellés, Mónica Alberich Ruiz
A Groundbreaking Technology in Virtual Rehabilitation to Improve Falls in Older People

Falls are common among older community dwellers and represent a serious problem for this population. Given their high incidence and the high medical costs of treating them, traditional therapy sessions need to be enriched with low-cost alternatives. Virtual Rehabilitation is a novel research area that provides playful, motivating and customizable environments that are very useful for therapeutic sessions. The sit-to-stand movement is a typical exercise that can reduce fear of falling and lessen fall risk; accordingly, we have created a groundbreaking technological system that follows the rules of this exercise. The Neuro Balance (NBAL) system is an accessible tool to support the rehabilitation of older people with fall histories. In this study, we tested NBAL with a user satisfaction questionnaire to analyze the acceptance of our tool. The results showed that NBAL is a satisfactory tool to be tested on neurological patients in the near future.

Marlene Goncalves Da Silva, Sergio Albiol-Pérez, Javier López Lombano, Sonsoles Valdivia Salas, Sandra Cano, Erika Giselle Gutierrez, Nancy Jacho-Guanoluisa, Cesar A. Collazos
Towards to a System for Predicting an Insufficient Wake State in Professional Drivers

Drowsiness is one of the major causes of road accidents, and its early detection can prevent many of them. Current sleep detection systems are based mainly on evaluating the driver's behaviour, expressed through steering wheel movements, facial expressions or eye movements monitored by a camera. However, these systems only detect a sleep situation when it already exists, which has proved insufficient. It is fundamental to anticipate this state rather than merely recognize that the driver is already drowsy, alerting the driver while he is still awake of the necessity to stop and rest. Nowadays, technology focused on health and well-being has been making considerable progress. Mobile and wearable devices, among other equipment with multiple sensors, are being used more frequently in areas directly or indirectly related to health. The increased quality and accuracy of these devices improves their reliability and credibility, allowing them to be used in more sensitive contexts, particularly in the health area. Current wearable devices have a set of sensors that allow the evaluation of biometric parameters, as well as information about body movement, that can help to predict the sleep state. This technology can be used to prevent accidents.

Joaquim Gonçalves, Ablulay Abreu, Sandro Carvalho

Information Technologies in Education

Frontmatter
Proposal of a Knowledge Management Model and Virtual Educational Environment in the Degree of Law-Business

Technological evolution involves important changes in the processes of knowledge generation and transmission. In this context, the development of virtual learning environments permits the establishment of learning-based instruction in which student-centered learning is emphasized. The objective of this paper is to analyze the creation of explicit and tacit knowledge entailed by the virtual learning environment established in a subject of Economics. Results of the questionnaires indicate that an online platform is a tool that fosters explicit and tacit knowledge in our students, with an emphasis on the improvement of students' capacities. Besides, information and communication technologies allow students to attain better comprehension and retention of knowledge, which is reflected in the students' grades. Hence the necessity of developing educational policies that promote the use of these technologies.

María Teresa García-Álvarez, Gustavo Pineiro-Villaverde, Laura Varela-Candamio
Designing Documentary Videos in Online Courses

The present study describes a new educational methodology conducted in educational programs through the elaboration of documentary videos by students. This project links the field of academic research with the acquisition of skills for the labor market. The monitoring of the guided project is based on the use of information and communication technology (ICT) and e-learning. Participants were undergraduate students (n = 37) from the University of A Coruna (Spain). The final presentation of these videos is uploaded to the Moodle platform, the most widely used Learning Management System (LMS) in the world, which allows all students to discuss each partner's results through virtual chats and reach a global perspective of the group work in the form of a social network. An achievement questionnaire was used to assess their learning performance. Results show the benefits students gained from conducting research activities in an online environment in terms of self-learning, cooperative experience and other labor market skills.

Laura Varela-Candamio, Fernando Rubiera Morollón, María Teresa García-Álvarez
Are University Professors of the South American Countries Preparing Students for Digital Transformation?

The higher education provided by universities plays a fundamental role in transforming society in all its dimensions, namely teaching and educational management. The combination of higher education and the rise of mobile technologies (one of the pillars of digital transformation) leads to an increase in training opportunities and the development of new teaching methods. As a result, several concepts appeared, from e-learning and m-learning to u-learning. Therefore, before deciding to implement teaching-learning methods based on mobile technology, it is critical to understand whether users (students and professors) are receptive and ready to adapt to this new paradigm. In this context, the aim of this paper is to investigate whether South American university professors are using Mobile Learning with gamification and augmented reality apps, and how these can be used to promote students' engagement inside and outside the classroom, in particular to prepare students for digital transformation.

Fernando Moreira, Natércia Durão, Carla Santos Pereira, Maria João Ferreira
Learning State Estimation Method by Browsing History and Brain Waves During Programming Language Learning

Various factors affect missteps in learning, such as the quality and difficulty of the learning contents and the learner's proficiency level. A system for acquiring the browsing history at the time of learning has been proposed. However, it may be insufficient to refer only to the learner's browsing time. For example, when the browsing time is short, it cannot be determined whether the learning contents were too easy for the learner or whether learning was abandoned because the contents were too difficult. Therefore, in this paper, we propose a method of determining the learning state of learners by analyzing learning history information and brain wave information simultaneously, rather than individually. We show that the learning state of each learner can be successfully estimated by our proposed method.

Katsuyuki Umezawa, Tomohiko Saito, Takashi Ishida, Makoto Nakazawa, Shigeichi Hirasawa
Designing Collaborative Strategies Supporting Literacy Skills in Children with Cochlear Implants Using Serious Games

Literacy is a process in which children with auditory impairments face serious difficulties in acquiring oral language, because they must learn to listen to each of the phonetic sounds that form a word in order to learn to pronounce and write it. Collaborative Learning (CL) applied to literacy teaching may be very favorable for children's learning. Therefore, a model is proposed to design collaborative strategies using serious games. The model is applied to a case study carried out with a group of 14 children with cochlear implants, in the pre-kinder and transition stages at the Institute of Blind and Deaf Children of Cauca Valley (INCSVC) in Cali, Colombia, where a collaborative serious game called FonoMagic (Fono-Mágica in Spanish) was designed using a learning method called the invariant method, which teachers use in the process of teaching literacy to children with cochlear implants. FonoMagic is a serious game composed of a physical board and a digital application, allowing children to interact with the application through activities such as completing words, recognizing sounds or identifying vowels and consonants.

Sandra Cano, César A. Collazos, Leandro Flórez Aristizabal, Fernando Moreira, Victor M. Peñeñory, Vanessa Agredo
The Use of Technology in Portuguese Higher Education: Building Bridges Between Teachers and Students

The development of technologies, together with the new formats, pedagogies and methodologies that they make possible and the characteristics of today's youngsters, is forcing higher education institutions to change. In order to fill the gap between school and the world outside school, some teachers are adopting and using these technologies. However, there is not enough information available about how they are doing that, whether they really use them and, when they do, how. In this quantitative study, teachers and students were questioned about the use of technologies in the classroom. Results show that although teachers know how to use technology, and claim they use it, it is still not clear for what purpose they use it. The reasons advanced for this situation relate to the technology itself and, in some cases, to the curriculum. Future work should focus on analyzing the way teachers really use the technology, so that it becomes possible to provide the training and support to help them fully take advantage of technologies for pedagogical purposes.

Anabela Mesquita, Paula Peres, Fernando Moreira
Let the Learning Management System Grow on You – The Effect of Experience on Satisfaction

This research examines the effect of users' experience with an Information System, i.e. the time elapsed since a new Information System was introduced to its users, on their satisfaction with it. Students' satisfaction with the Learning Management System (LMS) in two higher education institutes was examined via a short questionnaire. The questionnaire was distributed at three points in time: after several years of use of an LMS, after one year of use of a new LMS, and after three years of use of the new LMS. Results show that satisfaction with the two LMSs was similar after a few years of experience but lower after a shorter experience period, with no statistical differences between the two institutes and higher satisfaction among women. These results suggest that long-term use may be a determinant of user satisfaction. Some implications and limitations of the study are presented in the concluding section.

Gali Naveh, Amit Shelef
Virtual Reality in the Learning Process

Knowledge is essential and experience teaches. Therefore, new learning trends in schools are based on experience, making use of technologies such as Virtual Reality (VR) that allow students to experience situations very close to reality. However, unlike computers, this technology does not present the same level of integration in schools. This article presents a systematic literature review of research conducted on virtual reality and the learning process, in order to identify the important characteristics of virtual reality technology and its effect on the learning process. A total of 30 articles published between 1999 and 2017 were selected. The results show that interaction and immersion are the most important characteristics to consider in a virtual reality technology, and that its implementation would support the learning process.

Bayron Chavez, Sussy Bayona
GAMIFY-SN: A Meta-model for Planning and Deploying Gamification Concepts Within Social Networks - A Case Study

Gamification strategies aligned with social network features increase students' engagement and motivation throughout their interactions. However, there is a lack of models or processes that aid instructors in deploying gamification and social network features within their educational environments in an effective way, since most of these instructors have no knowledge of gamification design, nor the resources and time to deploy it. The literature also lacks studies that consider those instructors as variables in the implementation process, even though their participation is essential to promote the beneficial effects of gamification. Based on this premise, we propose a meta-process to aid gamification planning and implementation with social network concepts, called GAMIFY-SN. We conducted a case study to validate our approach within a programming course with 40 students. We collected the students' opinions through qualitative and quantitative approaches, and interviewed the instructor to get feedback on the meta-process. Our results demonstrated that gamification within the social network was well accepted by the students and the instructor, although there are still some obstacles to overcome.

Armando M. Toda, Ricardo M. C. do Carmo, Alan Pedro da Silva, Seiji Isotani
Opportunities and Challenges for Efficient and Effective STEM Teachers’ Competence Development

In this paper, the results from a study performed in several European countries are presented, aimed at identifying the main challenges that teachers face when trying to implement innovative teaching methods, with emphasis on the situation in Bulgaria. The overall design of the study is presented and the research method used is described. The main research activities performed during the study are outlined and compared with similar research efforts and initiatives. Finally, a systematic analysis of the results achieved is performed and proposals for further improvement of teachers' competence development are made.

Nikolina Nikolova, Eliza Stefanova, Pencho Mihnev, Krassen Stefanov
Let’s Reinvent Note Taking

We discuss the problem of note taking versus having handouts provided by the teacher. We present a system we developed, comprising both a hardware and a software solution. The system requires students to be active in class taking notes, yet helps them by relieving them of the need to transcribe what is being presented. Moreover, it synchronizes the notes taken by students with a video recording of the lecture, so that students can use their notes as anchors for reviewing portions of the lecture.

Marco Ronchetti, Tiziano Lattisi, Andrea Zorzi
COCO: Semantic-Enriched Collection of Online Courses at Scale with Experimental Use Cases

With the proliferation in number and scale of online courses, several challenges have emerged in supporting stakeholders during their delivery and fruition. Machine Learning and Semantic Analysis can add value to the underlying online environments in order to overcome a subset of such challenges (e.g. classification, retrieval, and recommendation). However, conducting reproducible experiments in these applications is still an open problem due to the lack of available datasets in Technology-Enhanced Learning (TEL), which are mostly small and local. In this paper, we propose COCO, a novel semantic-enriched collection including over 43 K online courses at scale, 16 K instructors and 2.5 M learners who provided 4.5 M ratings and 1.2 M comments in total. This surpasses existing TEL datasets in terms of scale, completeness, and comprehensiveness. Besides describing the collection procedure and the dataset structure, we depict and analyze two potential use cases as meaningful examples of the large variety of multi-disciplinary studies made possible by COCO.

Danilo Dessì, Gianni Fenu, Mirko Marras, Diego Reforgiato Recupero
Didactic System for Process Control Learning: Case Study Flow Control

A didactic system for process control learning is presented, with flow control considered as a practical case; the implementation comprises four stages. The first stage allows obtaining the mathematical model of the plant under test, the second allows the design of the PID controller and the selection of the tuning method, and the third and fourth stages allow implementing the designed control so as to verify experimentally the response of the developed controllers.

Edwin Pruna, Mauricio Rosero, Rai Pogo, Ivón Escobar, Julio Acosta
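The PID-based control loop at the core of the didactic system above can be illustrated with a minimal discrete simulation. The plant model (first-order flow dynamics) and all gains below are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch: discrete PID control of a first-order flow plant
# G(s) = K / (tau*s + 1), stepped with the Euler method.

def simulate_pid(kp, ki, kd, setpoint=1.0, K=2.0, tau=5.0, dt=0.1, steps=600):
    y = 0.0                      # measured flow
    integral = 0.0
    prev_err = setpoint - y
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # controller output
        prev_err = err
        # Euler step of the plant: tau * dy/dt = K*u - y
        y += dt * (K * u - y) / tau
    return y

# With moderate gains the integral action drives the flow to the setpoint.
final = simulate_pid(kp=1.2, ki=0.4, kd=0.05)
print(round(final, 3))
```

In a real didactic setup the gains would come from the tuning method selected in stage two, rather than being fixed by hand as here.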
Perceptions of the Educational Benefits of Mobile Devices in Language Teaching and Learning

This study investigates perceptions of the educational benefits of mobile devices in language teaching and learning. Within the context of Initial Teacher Education, MA students answered a semi-structured questionnaire containing questions about (a) their level of comfort with their mobile devices, (b) their own learning experiences with mobile devices and (c) their future teaching perspectives as English language teachers. The results reveal that a significant discrepancy exists between the students’ level of comfort, on the one hand, and their familiarity with the educational uses of mobile devices, on the other. Explicit training in the use of mobile devices as learning and teaching tools, with a focus on specific subject areas, is therefore needed in Initial Teacher Education programs to ensure that future teachers are capable of using mobile devices effectively.

Ana R. Luís
Towards the Measuring Criteria of IT Project Success in University Context

Commercial projects are carried out according to the rules of a certain software development approach, but academic projects do not always adhere to any formal process, and so far little attention has been paid to measuring their success in the academic context. By investigating assessment criteria used in the commercial context, a set of metrics and measures was determined and adapted to provide a structured evaluation approach for projects developed in an academic setting. Professionalizing the teaching and assessment process is an attempt to close the gap between the workforce's expectations of new graduates and the outcomes of their university education.

Rafał Włodarski, Aneta Poniszewska-Marańda
User-Generated Content: Perceived Affordances in Students’ Usage of the Web for Tertiary Learning Activities

This paper investigates which affordances students perceive when using the Internet for academic learning purposes, with a focus on user-generated content (UGC). Based on an explorative interview study, Wikipedia, Facebook, and YouTube were found to play an important role in students' preparation and working settings. The findings indicate that the usage of Wikipedia and YouTube is mainly based on perceived content-related and physical affordances. In the case of Facebook, social/relational and transactional affordances also play a major role by enabling users to participate in peer groups and to share materials of interest. The results are to be validated in further research.

Corinna Raith
Autism and Web-Based Learning: Review and Evaluation of Web Apps

Challenges with social skills are the main barrier preventing people with autism from becoming independent adults. However, information and communication technology can help individuals with autism develop and practice essential social skills. The objective of this research was to review and evaluate web apps for people with autism using the Mobile App Rating Scale. The review used a set of free apps from Doctor TEA and Pictoaplicaciones. Based on these websites, a total of 65 apps were evaluated with the Mobile App Rating Scale. The results showed that twenty-five apps had an acceptable quality, with scores over four. Therefore, the use of these apps can be recommended for therapists, parents, and people with autism. The app list generated by this research can be used as a complementary resource in the educational field, targeting specific skills that people with autism need to overcome different social situations.

Andrés Larco, Esteban Diaz, Cesar Yanez, Sergio Luján-Mora
Student Play: A Didactic Tool to Educate in Values

This article describes the creation, development and practical application of a prototype version of a web tool called Student Play, an educational module of Agent SocialMetric, whose main objective is to establish interactive games with students to educate in values, through different conversational interface software agents.

Antoniet Kuz, Roxana Giandini
Computer-Assisted Reading and Spelling Intervention with Graphogame Fluent Portuguese

Learning to master reading and spelling can be assisted by communication technologies, which are ever more common among children. The aim of this study was to investigate whether a new science-based computer-assisted intervention could improve reading and spelling in 2nd graders at risk of failing literacy acquisition. Participants were 7-year-old monolingual children identified as having reading difficulties in their native language, Portuguese. Following neurocognitive assessment, the children were divided into two matched groups (N = 15 × 2), one trained with the Graphogame Fluent Portuguese that we developed (target intervention) and the other with an analogous Graphogame in mathematics (control intervention), as a supplement to regular classroom instruction. After 7 h of training, the target group made greater progress in reading and spelling than the control group. These results show that Graphogame Fluent Portuguese allowed struggling beginning readers to improve reading and spelling, and it is thus a promising tool for early intervention in reading difficulties in Portuguese.

Lénia Carvalhais, Ulla Richardson, São Luís Castro

Information Technologies in Radiocommunications

Frontmatter
Generation of Land-Clutter Maps for Cognitive Radar Technology

The concept of cognitive radar is based on intelligent signal processing obtained through the interaction of the radar with the surrounding environment, in order to optimize radar operation. Within the vast literature on the measurement and modelling of echoes from the environment such as land, vegetation and man-made infrastructures (called land clutter, or fixed clutter, in radar jargon), this paper contributes a practical methodology based on a moveable, lightweight, small and cheap marine radar and its calibration. Live results, related to Radar Cross Section measurements in a typical suburban area, are also shown and analysed. Buildings, streets, highways, car parking areas with different coatings, large and medium size lampposts, soil with grass, and trees, all within a radius of a very few kilometres, make this area interesting as far as land clutter characterization is concerned.

Gaspare Galati, Gabriele Pavan
Quadrature Receiver Benefits in CW Doppler Radar Sensors for Vibrations Detection

Techniques for the contactless detection of vibrations find a wide range of applications in both civil and biomedical contexts. In this work, two Doppler radar signal processing solutions, based respectively on single-channel and quadrature receivers, are implemented and compared as an alternative to standard sensors for the contactless identification of vibrations. The detection ability at different target ranges is tested through numerical simulations.

A. Raffo, S. Costanzo, V. Cioffi
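The classic motivation for quadrature receivers in this setting is the "null point" problem of single-channel CW Doppler sensors: at unfavorable nominal phases the cosine output barely responds to small displacements, while arctangent demodulation of the I/Q pair recovers the motion regardless. The sketch below illustrates this numerically; the carrier wavelength, vibration amplitude and frequency are illustrative assumptions, not values from the paper.

```python
import math

# Single-channel vs. quadrature CW Doppler detection of a small vibration.
lam = 0.0125      # wavelength in m (roughly a 24 GHz carrier)
amp = 1e-4        # vibration amplitude: 0.1 mm
f_vib = 10.0      # vibration frequency in Hz

ts = [i / 1000.0 for i in range(200)]   # 0.2 s at 1 kHz sampling

i_ch, q_ch = [], []
for t in ts:
    x = amp * math.sin(2 * math.pi * f_vib * t)   # target displacement
    phase = 4 * math.pi * x / lam                 # residual phase; nominal phase 0 = null point
    i_ch.append(math.cos(phase))                  # in-phase baseband output
    q_ch.append(math.sin(phase))                  # quadrature baseband output

# Single channel: near the null point the cosine response is tiny and distorted.
pp_single = max(i_ch) - min(i_ch)

# Quadrature: arctangent demodulation recovers the displacement directly.
recovered = [math.atan2(q, i) * lam / (4 * math.pi) for i, q in zip(i_ch, q_ch)]
pp_quad = max(recovered) - min(recovered)

print(pp_single < 0.01)                 # near-null single-channel swing
print(abs(pp_quad - 2 * amp) < 1e-9)    # full 0.2 mm peak-to-peak recovered
```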
Adaptive Dual Color Visible Light Communication (VLC) System

In this work, we propose a Visible Light Communication (VLC) system that dynamically adapts the selection of the transmitting color to the external environmental conditions, through a decision-making process based on simple fuzzy logic. Transmitted signals are opportunistically treated through softwarization approaches using basic hardware (i.e. Arduino boards and inexpensive LEDs in the transmitting stage) in order to implement an effective, end-to-end, adaptive communication system. In particular, we show that, when low environmental noise is present, the system remains well-performing in terms of Bit Error Rate (BER) even at higher distances (up to 8–9 m) using a warm white front end, while, when strong external interfering lights are present in the environment, a low-power red front end is dynamically fed to maintain good-quality communication (with low Bit Error Rate).

Antonio Costanzo, Valeria Loscri’, Sandra Costanzo
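The adaptive front-end selection described above can be sketched as a minimal fuzzy-style rule that picks the transmitting color from a measured ambient light level. The thresholds and membership shape below are assumptions for illustration, not the paper's actual fuzzy controller.

```python
# Fuzzy-style front-end selection for an adaptive VLC transmitter.
# Ambient noise is normalized to [0, 1]; thresholds are illustrative.

def membership_low(noise, lo=0.2, hi=0.6):
    """Degree to which the ambient noise level is 'low' (linear ramp)."""
    if noise <= lo:
        return 1.0
    if noise >= hi:
        return 0.0
    return (hi - noise) / (hi - lo)

def select_front_end(noise):
    low = membership_low(noise)
    high = 1.0 - low
    # Max rule: warm white when ambient noise is low, low-power red
    # when interfering light dominates.
    return "warm_white" if low >= high else "red"

print(select_front_end(0.1))   # quiet environment
print(select_front_end(0.9))   # strong interfering light
```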
Multi-band Fractal Microwave Absorbers

Multi-band fractal microwave absorbers are proposed to operate within the UHF-RFID band and/or the GSM frequency bands. The miniaturization capabilities offered by the adopted fractal geometry are exploited to design multi-band absorber unit cells having small size (less than half-wavelength) and a very thin substrate thickness (≤λ0/100 at the operating frequency). Thanks to its compactness and effectiveness in achieving perfect absorption, the proposed configuration is appealing for multipath reduction in confined indoor environments or for multi-band electromagnetic interferences suppression in low-frequency communication systems.

Francesca Venneri, Sandra Costanzo, Giuseppe Di Massa
Fast Diagnostics of Conformal Arrays

Near-field data are used as diagnostics tool to reconstruct the aperture field distribution of conformal array antennas and identify any defective element. The problem of inversion and regularization of the resulting matrix is addressed. A sampling method is proposed and successfully compared, in terms of computation time, with other existing approaches.

Giuseppe Di Massa, Sandra Costanzo
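The inversion-and-regularization step mentioned above is commonly handled with Tikhonov regularization. The sketch below works a 2x2 example by hand so the algebra stays explicit; the matrix values are an illustrative assumption, not the paper's near-field operator.

```python
# Tikhonov-regularized solution of an ill-conditioned 2x2 system.

def tikhonov_2x2(A, b, lam):
    """Solve (A^T A + lam*I) x = A^T b for a 2x2 system, in closed form."""
    # Normal-equations matrix M = A^T A + lam*I (symmetric)
    m00 = A[0][0]**2 + A[1][0]**2 + lam
    m01 = A[0][0]*A[0][1] + A[1][0]*A[1][1]
    m11 = A[0][1]**2 + A[1][1]**2 + lam
    # Right-hand side c = A^T b
    c0 = A[0][0]*b[0] + A[1][0]*b[1]
    c1 = A[0][1]*b[0] + A[1][1]*b[1]
    det = m00*m11 - m01*m01
    return ((m11*c0 - m01*c1) / det, (m00*c1 - m01*c0) / det)

# Nearly rank-deficient matrix: a direct inverse amplifies the small
# measurement error wildly, while regularization keeps the solution bounded.
A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0011]               # A applied to [1, 1], plus a 1e-3 error
x = tikhonov_2x2(A, b, lam=1e-3)
print(x)                         # close to the true solution (1, 1)
```

The same closed form extends to larger systems as x = (AᵀA + λI)⁻¹Aᵀb, with λ chosen by a criterion such as the L-curve.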

Technologies for Biomedical Applications

Frontmatter
Complex Permittivity Effect on the Performances of Non-invasive Microwave Blood Glucose Sensing: Enhanced Model and Preliminary Results

The importance of an accurate dielectric model for blood permittivity, in terms of both its real and imaginary parts, when the glucose level changes is properly highlighted in the present contribution, with the aim of improving the performance of non-invasive microwave glucose sensors. An enhanced Cole-Cole based dielectric model, properly correcting the imaginary part of blood permittivity to include the dependency on glucose changes, is proposed. Preliminary numerical validations of the presented approach, and comparisons with existing models, are discussed.

Sandra Costanzo, Vincenzo Cioffi, Antonio Raffo
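For context, the single-dispersion Cole-Cole model on which such blood-permittivity descriptions are commonly built has the general form (a standard relation, not the specific enhanced model proposed in the paper):

```latex
\hat{\varepsilon}(\omega) = \varepsilon_\infty
  + \frac{\varepsilon_s - \varepsilon_\infty}{1 + (j\omega\tau)^{1-\alpha}}
  + \frac{\sigma_s}{j\omega\varepsilon_0}
```

where ε_s and ε_∞ are the static and high-frequency permittivities, τ the relaxation time, α the broadening parameter, and σ_s the static conductivity. The paper's enhancement corrects the imaginary part of the resulting permittivity as the glucose level varies.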
Backmatter
Metadata
Title
Trends and Advances in Information Systems and Technologies
edited by
Álvaro Rocha
Prof. Hojjat Adeli
Luís Paulo Reis
Sandra Costanzo
Copyright year
2018
Electronic ISBN
978-3-319-77712-2
Print ISBN
978-3-319-77711-5
DOI
https://doi.org/10.1007/978-3-319-77712-2