
2019 | Book

Applied Informatics

Second International Conference, ICAI 2019, Madrid, Spain, November 7–9, 2019, Proceedings


About this book

This book constitutes the thoroughly refereed proceedings of the Second International Conference on Applied Informatics, ICAI 2019, held in Madrid, Spain, in November 2019.

The 37 full papers and one short paper were carefully reviewed and selected from 98 submissions. The papers are organized in topical sections on bioinformatics; data analysis; decision systems; health care information systems; IT architectures; learning management systems; robotic autonomy; security services; socio-technical systems; and software design engineering.

Table of Contents

Frontmatter
Correction to: Applied Informatics

In the originally published version of the paper on p. 158, the name of the author was incorrect. The name of the author has been corrected to “Pramote Kuacharoen”. In the originally published version of the paper on p. 357, the affiliation of the author was incorrect. The affiliation has been corrected to “Universidad Distrital Francisco Jose de Caldas, Bogota, Colombia”. In the originally published version of the paper on p. 373, the affiliation of the author was incorrect. The affiliation has been corrected to “Universidad Distrital Francisco Jose de Caldas, Bogota, Colombia”.

Hector Florez, Marcelo Leon, Jose Maria Diaz-Nafria, Simone Belli

Bioinformatics

Frontmatter
Bioinformatics Methods to Discover Antivirals Against Zika Virus

Zika virus is a member of the Flaviviridae family, similar to other viruses that affect humans, such as hepatitis C and dengue virus. After its first appearance in 1947, Zika virus reappeared in 2016, causing an international public health emergency. Zika virus was long considered a non-dangerous human pathogen; however, it is currently regarded as a pathogen with serious consequences for human health, showing association with neurological complications such as Guillain-Barre syndrome and microcephaly. It is therefore necessary to find antivirals able to inhibit the replication of the Zika virus, since vaccines for this virus are not yet available. The structure of Zika virus is similar to that of hepatitis C virus, which suggests that anti-hepatitis C agents could be used as alternatives in treatments against Zika virus. This work aims to determine non-nucleoside analogue antivirals that can be considered possible antivirals against Zika virus. In this study, we used computational methods for the docking and modeling of the Zika virus NS5 polymerase with these antivirals.

Karina Salvatierra, Marcos Vera, Hector Florez

Data Analysis

Frontmatter
Academic Behavior Analysis in Virtual Courses Using a Data Mining Approach

Virtual education is one of the educational trends of the 21st century; however, understanding the perception of students is a new challenge. This article presents a proposal to define the essential components for building a model to analyze the records generated by students enrolled in courses on a virtual learning environment (VLE). After reviewing the use of data analytics in VLEs, the article presents a strategy to characterize the data generated by students according to the frequency and the time of day and week at which they access the material. With these metrics, clustering analysis is performed and visualized through a self-organizing map neural network. The results presented correspond to five courses of a postgraduate program, where it was found that students participate more in the forums during the daytime than at night, and more during the week than on weekends. These results open the possibility of identifying early behaviors, which allows implementing tools to prevent future dropouts or low academic performance.

Dario Delgado-Quintero, Olmer Garcia-Bedoya, Diego Aranda-Lozano, Pablo Munevar-Garcia, Cesar O. Diaz
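To illustrate the clustering step described in the abstract above, here is a minimal sketch that maps students onto a self-organizing map according to when they access course material. The MiniSom library, the feature layout, and the sample counts are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: cluster students by when they access course material,
# using a self-organizing map (MiniSom is an assumed implementation choice).
import numpy as np
from minisom import MiniSom

# Hypothetical feature matrix: one row per student,
# columns = access counts per (day-part, weekday/weekend) slot.
features = np.array([
    [12, 3, 8, 1],   # daytime-week, night-week, daytime-weekend, night-weekend
    [2, 10, 1, 6],
    [15, 4, 9, 2],
    [3, 9, 2, 7],
], dtype=float)

# Normalize so the raw frequency scale does not dominate the map.
features = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-9)

som = MiniSom(x=3, y=3, input_len=features.shape[1], sigma=1.0, learning_rate=0.5)
som.random_weights_init(features)
som.train_random(features, num_iteration=500)

# Each student is assigned to its best-matching unit (a cell of the map).
for i, row in enumerate(features):
    print(f"student {i} -> map cell {som.winner(row)}")
```
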
Analysis of Usability of Various Geosocial Network POI in Tourism

The paper deals with an analysis of the usability of Point of Interest information across different geosocial networks in tourism. The analysis compares data retrieved from the Facebook API, Foursquare API, and Google Places API. The data was obtained for tourist areas ranging from the smallest towns up to metropolitan cities. The article verifies the hypothesis of whether geosocial networks provide relevant local information to participants in tourism, at least at a level equivalent to that currently available from traditional information resources used in tourism; if so, geosocial networks have the potential to be used as primary information resources in the commercial sector, specifically in local tourism.

Jiří Kysela
Application of the Requirements Elicitation Process for the Construction of Intelligent System-Based Predictive Models in the Education Area

Decision-making is an essential process in the lives of organizations. While every member of an organization makes decisions, this process is particularly important for managerial positions in charge of decisions on resource allocation. These decisions must be based on predictions about the time, effort and/or risks involved in their tasks. Currently, this situation is exacerbated by the complex environment surrounding organizations, which forces them to act beyond their traditional management systems by incorporating new mechanisms such as those provided by Artificial Intelligence, leading to the development of Intelligent Predictive Models. In this context, this work proposes the implementation of a process to assist the Information Systems Engineer in the difficult work of collecting, understanding, identifying and registering the information necessary to implement an Intelligent System-based Predictive Model.

Cinthia Vegega, Pablo Pytel, María Florencia Pollo-Cattaneo
Evaluating Student Learning Effect Based on Process Mining

As education takes an increasingly significant role in society today, efficient and precise evaluation of student learning effect calls for more attention. With recent advances in information technology, learning effect can now be evaluated by mining students' learning processes. This paper proposes an interactive student learning effect evaluation framework which focuses on in-process learning effect evaluation. In particular, our proposal analyzes students' modeling assignments based on their operation records, using techniques of frequent sequential pattern mining, user behavior analysis, and feature engineering. In order to enable effective student learning evaluation and deliver practical value, we have developed a comprehensive online modeling platform to collect operation data of modelers and to support the corresponding analysis. We have carried out a case study in which we applied our approach to a real dataset consisting of online modeling behavior data collected from 24 students majoring in computer science. The results of our analysis show that our approach can effectively and practically mine student modeling patterns and interpret their behaviors, contributing to the assessment of their learning effect.

Yu Wang, Tong Li, Congkai Geng, Yihan Wang
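As a minimal, self-contained illustration of the frequent-sequential-pattern step mentioned in the abstract above, the sketch below counts frequent contiguous operation subsequences in modeling logs. The sliding-window counter and the operation names are illustrative stand-ins, not the paper's algorithm or dataset.

```python
# Count frequent contiguous operation subsequences (n-grams) in student
# modeling logs -- an illustrative stand-in for the sequential pattern
# mining step described in the abstract.
from collections import Counter

def frequent_patterns(sessions, length=2, min_support=2):
    counts = Counter()
    for ops in sessions:
        for i in range(len(ops) - length + 1):
            counts[tuple(ops[i:i + length])] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

# Hypothetical operation logs, one list per student session.
sessions = [
    ["create_class", "rename", "add_attribute", "connect"],
    ["create_class", "rename", "connect", "delete"],
    ["create_class", "add_attribute", "rename", "connect"],
]

print(frequent_patterns(sessions, length=2, min_support=2))
```
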
Evalu@: An Agnostic Web-Based Tool for Consistent and Constant Evaluation Used as a Data Gatherer for Artificial Intelligence Implementations

Evalu@ is a software application built under the model-view-controller pattern and meant to be executed in a client-server architecture. It benefits from the worldwide coverage of the Internet and acts as an evaluation gadget and a data centralizer. Evalu@ was initially conceived as a solution to the lack of assistant tools for running quality programs in industrial environments. Later, due to its high degree of generalization in the setup of evaluation schemes, the software was successfully adapted to suit the needs of entrepreneurs in other fields. Recently, some Machine Learning features have been added and are being tested to close the monitoring cycle by not only keeping chronological track of the evaluated items, but also classifying and predicting outcomes based on previously gathered data.

Fernando Yepes-Calderon, Juan F. Yepes Zuluaga, Gonzalo E. Yepes Calderon
Model for Resource Allocation in Decentralized Networks Using Interaction Nets

This article presents the description of a model for allocating resources using Interaction Nets and a public goods game strategy. The description of the model first shows the behavior of the allocation of resources toward the nodes depending on the utility of the network and the satisfaction of the agents. Then the generalization of the model with Interaction Nets is described, and a simulation of this behavior is performed. It is found that there is an emergent behavior condition in the dynamics of the interaction when assigning resources. To test the model, the interaction of sharing Internet access in an ad hoc network is simulated, and the interaction is shown in the general model obtained.

Joaquín F. Sánchez, Juan P. Ospina, Carlos Collazos-Morales, Henry Avendaño, Paola Ariza-Colpas, N. Vanesa Landero
RefDataCleaner: A Usable Data Cleaning Tool

While the democratization of data science may still be some way off, several vendors of tools for data wrangling and analytics have recently emphasized the usability of their products with the aim of attracting an ever broader range of users. In this paper, we carry out an experiment to compare user performance when cleaning data using two contrasting tools: RefDataCleaner, a bespoke web-based tool that we created specifically for detecting and fixing errors in structured and semi-structured data files, and Microsoft Excel, a spreadsheet application in widespread use in organizations throughout the world, which is used for diverse types of tasks, including data cleaning. With RefDataCleaner, a user specifies rules to detect and fix data errors, using hard-coded values or by retrieving values from a reference data file. In contrast, with Microsoft Excel, a non-expert user may clean data by specifying formulae and applying find/replace functions. The results of this initial study, carried out using a focus group of volunteers, show that users were able to clean dirty datasets more accurately using RefDataCleaner, and moreover, that this tool was generally preferred for this purpose.

Juan Carlos Leon-Medina, Ixent Galpin
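The two rule styles named in the abstract above (hard-coded replacements and lookups against a reference data file) can be sketched with pandas as follows. The column names, values, and reference table are hypothetical; this is not the RefDataCleaner implementation.

```python
# Sketch of the two rule styles described above: fix a value with a
# hard-coded replacement, or fix it by looking it up in reference data.
import pandas as pd

# Hypothetical dirty records and a reference table (code -> canonical country).
data = pd.DataFrame({
    "status": ["active", "N/A", ""],
    "country": ["CO", "Colombia", "XX"],
})
ref = pd.DataFrame({"code": ["CO", "ES"], "country": ["Colombia", "Spain"]})

# Rule 1: hard-coded replacement for known bad values.
data["status"] = data["status"].replace({"N/A": "unknown", "": "unknown"})

# Rule 2: look values up in the reference data, keeping the original
# value when no match is found.
lookup = dict(zip(ref["code"], ref["country"]))
data["country"] = data["country"].map(lookup).fillna(data["country"])

print(data)
```
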
Study of Crime Status in Colombia and Development of a Citizen Security App

Indices of citizen insecurity have been increasing in Colombia in recent years after the signing of the peace agreement. Many demobilized guerrilla members have gone to the streets of the country to seek a new direction or occupation and, unfortunately, have fallen into crime, increasing the levels of offenses such as robbery, extortion, rape, micro-trafficking and personal injury. Added to this, the increase of the migrant population from neighboring Venezuela, in conditions of displacement, has forced part of this population with limited employment opportunities to take refuge in crime as their only form of survival. Given this problem, this investigation analyzes the behavior of these crimes in recent years and proposes a technological solution that allows detecting the geographical sectors and crime types that contribute the most to the social problem, and thus offers alternatives for the protection of citizens through a mobile application that allows them to face the situation by making them part of the solution.

Raquel E. Canon-Clavijo, Cesar O. Diaz, Olmer Garcia-Bedoya, Holman Bolivar
University Quality Measurement Model Based on Balanced Scorecard

A Higher Education Institution (HEI) has the responsibility to track its processes through indicators that guarantee the measurement of results in almost real time. This article presents the design of a management and quality model for the processes of a university, through the integration of a Balanced Scorecard (BSC) and the implementation of an information system. This required a review of existing tracking and monitoring systems in the academic sector, the definition of the requirements of the proposed technological tool, a diagnosis of the current measurement system of the HEI analyzed, the identification of measurement indicators, and the development of a technological tool. The designed model provides a precise and clear methodological guide that can be replicated in any HEI to monitor its processes.

Thalia Obredor-Baldovino, Harold Combita-Niño, Tito J. Crissien-Borrero, Emiro De-la-Hoz-Franco, Diego Beltrán, Iván Ruiz, Joaquin F. Sanchez, Carlos Collazos-Morales

Decision Systems

Frontmatter
Algorithmic Discrimination and Responsibility: Selected Examples from the United States of America and South America

This paper discusses examples and activities that promote consumer protection through the adoption of non-discriminatory algorithms. The casual observer of data, from smartphones to artificial intelligence, believes in technological determinism: to them, data reveal real trends with neutral decision-makers that are not prejudiced. However, machine learning technologies are created by people. Therefore, creator biases can appear in decisions based on algorithms used for surveillance, social profiling, and business intelligence. This paper adapts Lawrence Lessig's framework (laws, markets, codes, and social norms). It highlights cases in the USA and South America where algorithms discriminated and how statutes tried to mitigate the negative consequences. Global companies such as Facebook and Amazon are among those discussed in the case studies. In the case of Ecuador, algorithms and the protection of citizens' personal data are not regulated in the treatment of information that arises in social networks used by public and private institutions. Consequently, individual rights are not strictly shielded by national and international laws or through regulations of telecommunications and digital networks. In the USA, a proposed bill, the “Algorithmic Accountability Act”, would require large companies to audit their machine-learning-powered automated systems, such as facial recognition or ad targeting algorithms, for bias. The Federal Trade Commission (FTC) would create rules for evaluating automated systems, while companies would evaluate the algorithms powering these tools for bias or discrimination, including threats to consumer privacy or security.

Musonda Kapatamoyo, Yalitza Therly Ramos-Gil, Carmelo Márquez Dominiguez
Continuous Variable Binning Algorithm to Maximize Information Value Using Genetic Algorithm

Binning (bucketing or discretization) is a commonly used data pre-processing technique for continuous predictive variables in machine learning. There are guidelines for good binning which can be treated as constraints. However, there are also statistics which should be optimized. Therefore, we view the binning problem as a constrained optimization problem. This paper presents a novel supervised binning algorithm for binary classification problems using a genetic algorithm, named GAbin, and demonstrates its usage on a well-known dataset. It is inspired by the way that humans bin continuous variables. To bin a variable, first, we choose output shapes (e.g., monotonic or best bins in the middle). Second, we define constraints (e.g., minimum samples in each bin). Finally, we try to maximize key statistics to assess the quality of the output bins. The algorithm automates these steps. Results from the algorithm are in the user-desired shapes and satisfy the constraints. The experimental results reveal that the proposed GAbin provides competitive results when compared to other binning algorithms. Moreover, GAbin maximizes information value and can satisfy user-desired constraints such as monotonicity or output shape controls.

Nattawut Vejkanchana, Pramote Kuacharoen
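The key statistic maximized by GAbin, information value (IV), can be computed for a candidate binning as sketched below; this uses the standard weight-of-evidence formulation. The data, bin edges, and smoothing are illustrative assumptions, and the genetic-algorithm wrapper is only indicated in a comment.

```python
# Information value (IV) of a candidate binning for a binary target --
# the statistic GAbin maximizes. Data and bin edges are illustrative.
import numpy as np

def information_value(x, y, bin_edges):
    x, y = np.asarray(x, dtype=float), np.asarray(y)
    bins = np.digitize(x, bin_edges)           # assign each sample to a bin
    iv = 0.0
    for b in np.unique(bins):
        mask = bins == b
        good = max((y[mask] == 0).sum(), 0.5)  # smooth empty cells
        bad = max((y[mask] == 1).sum(), 0.5)
        pct_good = good / max((y == 0).sum(), 1)
        pct_bad = bad / max((y == 1).sum(), 1)
        iv += (pct_good - pct_bad) * np.log(pct_good / pct_bad)
    return iv

# A genetic algorithm would encode candidate bin edges as chromosomes and
# use information_value (plus constraint penalties) as the fitness.
x = np.random.uniform(0, 100, 1000)
y = (x + np.random.normal(0, 30, 1000) > 60).astype(int)
print(information_value(x, y, bin_edges=[25, 50, 75]))
```
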
CVRPTW Model for Cargo Collection with Heterogeneous Capacity-Fleet

This work shows the application of the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW) to collect different cargo demands at several locations with limited time availability to attend any vehicle. The objective of the model is to reduce the routing time in a problem with a mixed vehicle fleet. The initial step is the creation of a distance matrix using the Google Maps API; then, cargo capacities for every vehicle and time windows for every demand point are included in the model. The problem is solved with Google OR-Tools, using an approximation algorithm for the first solution and a metaheuristic algorithm for the local search.

Jorge Ivan Romero-Gelvez, William Camilo Gonzales-Cogua, Jorge Aurelio Herrera-Cuartas
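A condensed sketch of the solver setup outlined in the abstract above, using Google OR-Tools with a constructive first-solution strategy and a local-search metaheuristic. The distance matrix, demands, and capacities are toy values, and time windows are omitted for brevity (they would be added with a travel-time dimension); this is not the paper's exact model.

```python
# Minimal capacitated VRP with a heterogeneous fleet in Google OR-Tools.
# Toy data only; time windows omitted for brevity.
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

distance_matrix = [         # e.g. built from the Google Maps API
    [0, 9, 7, 5],
    [9, 0, 4, 6],
    [7, 4, 0, 3],
    [5, 6, 3, 0],
]
demands = [0, 2, 3, 4]
vehicle_capacities = [5, 6]  # mixed fleet: different capacities

manager = pywrapcp.RoutingIndexManager(len(distance_matrix), len(vehicle_capacities), 0)
routing = pywrapcp.RoutingModel(manager)

def distance_cb(from_index, to_index):
    return distance_matrix[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

transit_idx = routing.RegisterTransitCallback(distance_cb)
routing.SetArcCostEvaluatorOfAllVehicles(transit_idx)

def demand_cb(from_index):
    return demands[manager.IndexToNode(from_index)]

demand_idx = routing.RegisterUnaryTransitCallback(demand_cb)
routing.AddDimensionWithVehicleCapacity(demand_idx, 0, vehicle_capacities, True, "Capacity")

# First solution via a constructive heuristic, then a local-search metaheuristic.
params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC
params.local_search_metaheuristic = routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH
params.time_limit.FromSeconds(5)

solution = routing.SolveWithParameters(params)
if solution:
    print("total distance:", solution.ObjectiveValue())
```
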
Evaluation of Transfer Learning Techniques with Convolutional Neural Networks (CNNs) to Detect the Existence of Roads in High-Resolution Aerial Imagery

Infrastructure detection and monitoring traditionally required manual identification of geospatial objects in aerial imagery, but advances in deep learning and computer vision have enabled researchers in the field of remote sensing to successfully apply transfer learning from models pretrained on large-scale datasets to the task of geospatial object detection. However, they have mostly focused on objects with clearly defined boundaries that are independent of the background (e.g. airports, airplanes, buildings, ships). What happens when we have to deal with more complicated, continuous objects like roads? In this paper we review four of the best-known CNN architectures (VGGNet, Inception-V3, Xception, Inception-ResNet) and apply feature extraction and fine-tuning techniques to detect the existence of roads in aerial orthoimages divided into tiles of 256 × 256 pixels. We evaluate each model's performance on unseen test data using the accuracy metric and compare the results with those obtained by a CNN built specifically for this purpose.

Calimanut-Ionut Cira, Ramon Alcarria, Miguel-Ángel Manso-Callejo, Francisco Serradilla
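A minimal Keras sketch of the feature-extraction variant mentioned in the abstract above: a frozen VGG16 backbone with a new binary "road / no road" head for 256×256 tiles. The head architecture, hyperparameters, and data pipeline are placeholder assumptions, not the paper's configuration.

```python
# Feature extraction with a pretrained VGG16 backbone for binary
# road / no-road classification of 256x256 aerial tiles.
# Fine-tuning would additionally unfreeze the top convolutional blocks.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(256, 256, 3))
base.trainable = False                      # freeze pretrained weights

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # probability a tile contains road
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_dataset, validation_data=val_dataset, epochs=10)
```
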
Predicting Stock Prices Using Dynamic LSTM Models

Predicting stock prices accurately is a key goal of investors in the stock market. Unfortunately, stock prices are constantly changing and affected by many factors, making the process of predicting them a challenging task. This paper describes a method to build models for predicting stock prices using a long short-term memory (LSTM) network. The LSTM-based model, which we call dynamic LSTM, is initially built and continuously retrained using newly augmented data to predict future stock prices. We evaluate the proposed method using datasets of four stocks. The results show that the proposed method outperforms others in predicting stock prices based on different performance metrics.

Duc Huu Dat Nguyen, Loc Phuoc Tran, Vu Nguyen
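A compact sketch of the "dynamic" idea described in the abstract above: an LSTM fitted on a sliding window of prices and briefly retrained as each new observation arrives. The window size, architecture, retraining schedule, and synthetic data are assumptions for illustration, not the paper's settings.

```python
# Dynamic LSTM sketch: fit on past prices, then keep retraining as new
# prices arrive. Window size, layers, and data are illustrative only.
import numpy as np
from tensorflow.keras import layers, models

def make_windows(series, window=20):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y

prices = np.cumsum(np.random.normal(0, 1, 500)) + 100   # synthetic prices

model = models.Sequential([
    layers.LSTM(32, input_shape=(20, 1)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

X, y = make_windows(prices)
model.fit(X, y, epochs=5, verbose=0)                     # initial fit

# As each new closing price arrives, append it and retrain briefly.
for new_price in [101.2, 100.7, 102.3]:
    prices = np.append(prices, new_price)
    X, y = make_windows(prices[-120:])                   # recent history only
    model.fit(X, y, epochs=1, verbose=0)
    next_input = prices[-20:].reshape(1, 20, 1)
    print("predicted next price:", float(model.predict(next_input, verbose=0)[0][0]))
```
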

Health Care Information Systems

Frontmatter
Hyperthermia Study in Breast Cancer Treatment Using a New Applicator

A study of the effects obtained by implementing an electromagnetic (EM) hyperthermia treatment model is presented. The focus of the study is breast cancer treatment, and it is performed using an electromagnetic simulation model. A breast was modeled using the conductivity and permittivity of tissues such as fat, skin, lobules and muscle. The distribution of the power density was analyzed for two cases: first, the applicator is not aligned with the tumor; second, the applicator is aligned with the tumor. The distribution of the power density was analyzed inside the breast model when it was irradiated with two applicators at 2.45 GHz and 5 GHz. The second applicator proposed is a new prototype developed in Groove Gap Waveguide (GGW) technology. The power density obtained in lobules, tumor and fat is compared, and it was observed that overheating of tissues close to the tumor can be avoided by optimizing the applicator location. The preliminary results indicate that with the new GGW applicator prototype it is possible to focus the EM energy; moreover, the tissues close to the tumor receive a lower concentration of power density.

H. F. Guarnizo Mendez, M. A. Polochè Arango, J. F. Coronel Rico, T. A. Rubiano Suazo
Manual Segmentation Errors in Medical Imaging. Proposing a Reliable Gold Standard

Manual segmentation is ubiquitous in modern medical imaging. It is a tedious and time-consuming process that is also operator-dependent and, due to its low reproducibility, presents specialists with a challenge in reaching consensus when diagnosing from an image. In the diagnosis of several abnormalities, geometrical features such as distances, curvatures, volumes, areas, and shapes are used to derive verdicts. These features are only quantifiable if the structures to be measured can be separated from the other elements in the image. The process of manual segmentation answers the question of where those limits lie, and those limits are not easy to identify. Despite all the mentioned drawbacks, manual segmentation is still used in medical imaging analysis or employed to validate automatic or semi-automatic methods. Intending to quantify the operator variability of the process, we created a controlled environment and ran segmentations on known volumes scanned with Magnetic Resonance. The strategy proposed here suggests a mechanism to establish gold standards for geometrical readings in medical imaging, so that measuring instruments can be analyzed and certified for the task.

Fernando Yepes-Calderon, J. Gordon McComb

IT Architectures

Frontmatter
A Proposal Model Based on Blockchain Technology to Support Traceability of Colombian Scholar Feeding Program (PAE)

Given the development of Blockchain technology and its advantages of immutability and traceability for the control and management of transactions, the present work presents a model based on Blockchain technology to support traceability and control of the resources of the Colombian Scholar Feeding Program (in Spanish, Programa de Alimentación Escolar – PAE). This proposal promotes a technological approach to face corruption linked to this kind of program, providing PAE with tools that could reduce the risk of losing resources and facilitate the identification of those responsible in corruption cases. The steps of the proposed model are described, along with their advantages and limitations.

Carol Cortés, Alejandro Guzmán, Camilo Andrés Rincón-González, Catherine Torres-Casas, Camilo Mejía-Moncayo
Capacity of Desktop Clouds for Running HPC Applications: A Revisited Analysis

Desktop Clouds, such as UnaCloud and CernVM, run scientific applications on desktop computers installed in computer laboratories and offices. These applications run on virtual machines using the idle capacities of those desktops and networks. While some universities use desktop clouds to run bag-of-tasks (BoT) applications, we have used these platforms to run High Performance Computing (HPC) applications that require coordination among the nodes and are sensitive to communication delays. There, although a virtual machine with 4 virtual cores on computers released in 2012 may achieve more than 40 GFLOPS, the capacity of clusters with tens or hundreds of virtual machines cannot be determined by multiplying this value. In a previous work, we studied the capacity of desktop clouds in our computer labs for running applications that are not communication-intensive. This paper presents a revisited analysis, focused on the capacity of desktop clouds for running HPC applications. The resulting information can be used by researchers deciding whether to invest in HPC clusters or use existing computer labs to run their applications, and by those interested in designing desktop clusters that achieve the maximum possible capacity.

Jaime Chavarriaga, Carlos E. Gómez, David C. Bonilla, Harold E. Castro
Migration to Microservices: Barriers and Solutions

Microservices architecture (MSA) has emerged over the past few years. Despite standardization efforts, migrating large-scale legacy applications to microservices architecture remains challenging. This study presents the results of seventeen interviews about obstacles and suggested solutions in migration to microservices. We analyzed the results of the interviews using the Jobs-to-be-Done framework from the literature and classified the barriers into three categories: inertia, anxiety and context. This work provides a categorization and a framework to overcome the barriers based on the advice of experts in this field. The results can serve as a reference for future research directions and advanced migration solutions.

Javad Ghofrani, Arezoo Bozorgmehr
Towards a Maturity Model for Cloud Service Customizing

In recent years, more and more cloud providers have been making great efforts to offer personalized services that fully match customers' needs. However, this is not always easy because of the maturity level that their customization capabilities require in order to design, implement, operate and improve personalized services. Furthermore, there is a lack of tools to help cloud providers understand the current maturity level of their customization processes and the path to improve them. In this context, this work presents our progress toward building a maturity model for the customization of cloud services. The model is derived from a literature review in the area and two interviews with industry experts. The model, which includes two dimensions (customization capabilities and maturity levels), aims at helping researchers to develop new contributions in this research area and practitioners to develop their customization capabilities.

Oscar Avila, Cristian Paez, Dario Correal
Towards an Architecture Framework for the Implementation of Omnichannel Strategies

New technological trends and disruptive technologies are allowing companies from multiple sectors to define and implement omnichannel strategies to provide a better customer experience. The implementation of such strategies consists of supporting marketing and sales activities through interrelated and coherent channels in order to reach target markets. Although this approach has allowed companies to exploit digital technologies to gain competitive advantages, its implementation involves harmonizing marketing, sales, delivery and service processes, as well as the underlying information and technology infrastructures supporting them, for the correct operation of the company. However, as far as we know, there is no approach dealing with the alignment of these different aspects, which sit at different organisational levels. To address this gap, we present in this paper our advances toward an architecture framework to align business and information and technology aspects related to omnichannel development. The framework is applied to a case study in the education sector.

Nestor Suarez, Oscar Avila

Learning Management Systems

Frontmatter
Connecting CS1 with Student’s Careers Through Multidisciplinary Projects. Case of Study: Material Selection Following the Ashby Methodology

This paper describes the implementation of open-source software developed in Python, which facilitates the materials selection process commonly used in engineering. This software has been developed by non-CS students (Materials Engineering, Food Engineering and Chemical Engineering) as a course project of their first-year cross-curricular CS1 course (“Programming Fundamentals”), in order to connect their CS1 learning process with core subjects related to their careers, aiming to motivate both the use of computer programming in their personal development and their interest in their professional career. The program developed allows choosing between different types of materials based on specific characteristics required by the user; furthermore, it enables the visualization of the Michael Ashby methodology for materials selection, which allows non-CS students to solve a problem related to their career while giving upper-level students a new tool to learn in class. The dataset used covers approximately 10,000 distinct materials, classified by their features as ceramics, metals, polymers, wood/natural materials, pure elements and other advanced engineering materials. As part of the outcome of this project, a public-access repository has been created containing the implemented algorithms and the dataset used. The code developed can be modified and reused under the GNU General Public License. Finally, a report on the perception of non-CS students taking CS1 and of upper-level students taking the “Material Selection” course is described and analyzed.

Bruno Paucar, Giovanny Chunga, Natalia Lopez, Clotario Tapia, Miguel Realpe
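The screening-and-ranking flow of Ashby-style material selection mentioned in the abstract above can be sketched as a simple pandas query over a property table. The column names, property values, and performance index below are hypothetical stand-ins for the project's dataset, not its actual code.

```python
# Ashby-style screening and ranking over a tiny stand-in for the
# project's ~10,000-material dataset. Column names and values are hypothetical.
import pandas as pd

materials = pd.DataFrame({
    "name": ["Al 6061", "Alumina", "PEEK", "Steel 1045"],
    "category": ["metal", "ceramic", "polymer", "metal"],
    "density_g_cm3": [2.70, 3.95, 1.32, 7.85],
    "youngs_modulus_GPa": [69, 370, 3.6, 200],
})

# Screening stage: keep only materials that meet the user's requirements.
candidates = materials[
    (materials["density_g_cm3"] < 4.0) &
    (materials["youngs_modulus_GPa"] > 50)
]

# Ranking stage: sort by a performance index, e.g. stiffness per unit weight.
candidates = candidates.assign(
    E_over_rho=candidates["youngs_modulus_GPa"] / candidates["density_g_cm3"]
).sort_values("E_over_rho", ascending=False)

print(candidates[["name", "category", "E_over_rho"]])
```
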
Development of Online Clearance System for an Educational Institution

It is mandatory for graduating students of an educational institution to exit the system in an orderly manner. Students usually do this through the mandatory clearance process. The manual process is time-consuming and stressful, as students have to move from place to place to get their clearance document endorsed. It has also been found to be vulnerable to fraud and other vices. The few automated systems also exhibit limitations in their functionalities, such as non-user-friendly interfaces, lack of adequate information for users, non-prioritization of processes, and so on. This study therefore proposes a system that overcomes the issues with manual processing while improving on the identified automated ones. The study adopts a case-study approach based on the fully manual system of a leading institution of learning in Southwest Nigeria, with a view to evolving a working prototype. First, a thorough understanding of the existing procedure is carried out. A new web-based system is then built using Hypertext Markup Language (HTML), with PHP for the business logic layer, CSS for proper rendering of the front-end display pages, and MySQL for the data layer. The new system will reduce the amount of time and effort wasted on students' clearance as well as reduce the cost incurred on paper by the institution. Another advantage is that students can initiate and monitor their clearance status from any location, thereby eliminating the need to travel or be physically present.

Oluranti Jonathan, Sanjay Misra, Funmilayo Makinde, Robertas Damasevicius, Rytis Maskeliunas, Marcelo Leon
Enhanced Sketchnoting Through Semantic Integration of Learning Material

Within the transition from pen-and-paper to digital solutions, many tablet-based note-taking apps have been developed. In addition, sketchnoting, a highly visual form of note-taking, has gained popularity, especially in teaching and learning. While current solutions already reduce the burden of organizing and searching through stacks of paper, they provide little support in finding additional information related to the notes, and search capabilities are mostly limited to textual queries and content. In this paper, we present a novel solution for digital sketchnoting aimed at enhancing the learning experience of students. In addition to common capabilities such as handwriting recognition, our note-taking app integrates semantic annotation and drawing recognition. Handwritten notes are recognized and passed through concept recognition and entity linking tools to enable a knowledge graph-based integration of contextually relevant learning resources. We extend traditional search capabilities by including the semantic metadata from the related material as well as enabling visual queries to find recognized sketches. Finally, resembling the exchange of paper notes among students, our app allows sharing the semantically enhanced notes with other devices.

Aryobarzan Atashpendar, Christian Grévisse, Steffen Rothkugel

Robotic Autonomy

Frontmatter
Comparative Analysis of Three Obstacle Detection and Avoidance Algorithms for a Compact Differential Drive Robot in V-REP

The aim of this research is to build a compact differential drive robot using the Virtual Robotics Experimentation Platform (V-REP). Sensors are embedded in the Pioneer 3-DX mobile robot to provide the necessary data from the real world to the robot. The main purpose of the mobile robot is its ability to arrive at a given destination (goal) precisely and astutely, hence significantly reducing the risk of human mistakes. Several existing algorithms, such as obstacle detection and lane detection, are combined to provide the essential and basic control functionalities to the robot. The mobile robot controller model runs on a series of benchmark tasks, and its performance is compared to conventional controllers. During the scope of this project, comparisons between different algorithms, hardware and tools were made to choose the best fit for the project. The results are obstacle detection algorithms and a terrain handling feature that work well in simulations and real-life situations. The major setbacks during the development of this project were limitations caused by low hardware computational power; stronger processors would substantially increase the throughput and consequently improve the accuracy of the scene objects and the obstacle detection algorithms.

Chika Yinka-Banjo, Obawole Daniel, Sanjay Misra, Oluranti Jonathan, Hector Florez

Security Services

Frontmatter
A Secured Private-Cloud Computing System

The advent of the Internet and the rapid growth of digital media have increased concerns about cloud computing as a source of data and information storage more than ever before. It allows users to set up their own private clouds, since the available public clouds cannot surpass their expectations at affordable costs. With private cloud computing, users do not need to be in a physical location when accessing personalized cloud resources, leading to cost reduction, improved user experience (UX), staff effectiveness, and file sharing with organizational collaborators at remote sites. Despite the many benefits offered, there are constant challenges associated with cloud computing, from data loss and breaches, service vulnerabilities, insufficient due diligence, and identity, access and credential management, to poor footprint tracking for threats and malicious insider attacks, all of which threaten network security. The proposed application sets out to provide an immediate solution to help fight cloud insecurity, particularly in private cloud domains, by restricting access to cloud resources and requiring trusted members to prove their identities, while providing footprint notifications, log encryption, and monitoring of the activities of authenticated users, as it is essential to monitor information flow on the cloud for assessment purposes. The implementation of a secured private-cloud computing system is the central focus of this project; the frontend interface is designed using advanced features of HTML, CSS, JavaScript and jQuery, while PHP and MySQL were used to design the backend of the application and its log encryption.

Modebola Olowu, Chika Yinka-Banjo, Sanjay Misra, Hector Florez
Comparative Evaluation of Techniques for Detection of Phishing URLs

One of the most popular cyberattacks today is phishing. It combines social engineering and online identity theft to delude Internet users into submitting their personal information to cybercriminals. Reports have shown a continuous increase in the number and sophistication of this attack worldwide. A phishing Uniform Resource Locator (URL) is a malicious web address often created to look like a legitimate URL in order to deceive unsuspecting users. Many algorithms have been proposed to detect phishing URLs and classify them as benign or phishing. Most of these detection algorithms are based on machine learning and detect using inherent characteristics of the URLs. In this study, we examine the performance of a number of such techniques. The algorithms were tested using three publicly available datasets. Our results revealed the Random Forest algorithm as the overall best performing algorithm, achieving an accuracy of 97.3%.

Oluwafemi Osho, Ayanfeoluwa Oluyomi, Sanjay Misra, Ravin Ahuja, Robertas Damasevicius, Rytis Maskeliunas
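A small sketch of the kind of lexical-feature pipeline such studies typically use, with a Random Forest classifier (the best performer reported in the abstract above). The feature set, sample URLs, and labels are illustrative assumptions, not the paper's exact features or data.

```python
# Classify URLs as phishing or benign from simple lexical features,
# using a Random Forest. Feature set and sample data are illustrative only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def url_features(url):
    return {
        "length": len(url),
        "num_dots": url.count("."),
        "num_hyphens": url.count("-"),
        "has_at": int("@" in url),
        "has_https": int(url.startswith("https://")),
        "num_digits": sum(c.isdigit() for c in url),
    }

urls = [
    "https://example.com/login",
    "https://university.edu/portal/grades",
    "http://paypa1-secure.verify-account.xyz/@id",
    "http://free-gift.win/confirm?user=123456",
]
labels = [0, 0, 1, 1]                       # 0 = benign, 1 = phishing

X = pd.DataFrame([url_features(u) for u in urls])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Score a previously unseen URL.
new = pd.DataFrame([url_features("http://secure-login.bank-update.top/verify")])
print("phishing probability:", clf.predict_proba(new)[0][1])
```
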

Socio-technical Systems

Frontmatter
Cultural Archaeology of Video Games: Between Nostalgic Discourse, Gamer Experience, and Technological Innovation

This paper presents a history of video games as a form of innovation beyond entertainment, offering reasons to establish why it is important to know and study their history with regard to their social and cultural contexts, emphasizing the importance that users have in creating video games through experience. The social and cultural context in which those video games were born is fundamental to understanding the diffusion and popularity that video games had throughout the '80s and especially the '90s. The objective of this study is to identify the communication and information strategies of video games prior to the arrival of the Internet, especially the way in which this information was shared in the Spanish context. In the first part of the paper, we introduce the theoretical and methodological framework on which this research is based, through the concept of cultural archaeology. In the second part, we present stories created by users to analyze the gaming experience and how it is shared, using the concepts of playformance and play-world, and conclude by questioning the gamer's identity as a white, young, middle-class male subject. Finally, we want to point out the importance of sharing knowledge and strategies as a fundamental part of the social interaction of the gamer's experience. We observe video games as a tool to identify something beyond themselves: the society and the practices that revolve around a cultural product.

Simone Belli, Cristian López Raventós
Effects of Digital Transformation in Scientific Collaboration. A Bibliographic Review

In this paper, we present a bibliographic review that contains the most important aspects of the digital transformation in scientific collaboration. We reviewed 162 scientific papers in which authors have identified and analysed the digital changes in scientific research and the impact they have generated in the scientific community. The main research question for this bibliographic review is the following: What are the key dimensions of digital transformation in scientific collaboration? The objective is to explain the changes in design practices, in the science assessment criteria, and in ways of sharing knowledge through new techniques and software arising from the stabilization of new tools. We observe the most important aspects of the digital transformation in science, with the main contributions from the most representative authors in this area. We show how open access and open science can solve the digital divide in science and create new modes of scientific communication.

Simone Belli
glossaLAB: Co-creating Interdisciplinary Knowledge

The paper describes the glossaLAB international project as a contribution to confronting the urgent need for knowledge integration frameworks, required to face global challenges that overwhelm disciplinary knowledge capacity. Under this scope, glossaLAB is devised to contribute to three main aspects of this endeavor: (i) development of a sound theoretical framework for the unification of knowledge, (ii) establishment of broadly accepted methodologies and tools to facilitate the integration of knowledge, and (iii) development of assessment criteria for the qualification of interdisciplinary undertakings. The paper discusses the main components of the project and the solutions adopted to achieve the intended objectives at three different levels. At the technical level, glossaLAB aims at developing a platform for knowledge integration based on the elucidation of concepts, metaphors, theories and problems, including a semantically-operative recompilation of valuable scattered encyclopedic contents devoted to two entangled transdisciplinary fields: the sciences of systems and information. At the theoretical level, the goal is to reduce the redundancy of the conceptual system (defined in terms of the “intensional performance” of the recompiled contents) and to elucidate new concepts. Finally, at the meta-theoretical level, the project aims at assessing the knowledge integration achieved through the co-creation process based on (a) the diversity of the disciplines involved and (b) the integration properties of the conceptual network established through the elucidation process.

José María Díaz-Nafría, Teresa Guarda, Mark Burgin, Wolfgang Hofkirchner, Rainer Zimmermann, Gerhard Chroust, Simone Belli
ICT and Science: Some Characteristics of Scientific Networks and How They Apply Information Technology

In every discipline, we observe the so-called “digital turn”, or digital transformation, in scientific collaboration. A survey was designed and applied to researchers within the framework of the H2020 project EULAC Focus. The researchers consulted come from different disciplines and are located at different universities and research institutes. There are 305 different interviews and 159 variables or observed responses. The research explores how information and communication technologies in this specific professional setting have affected the way in which research teams relate. In particular, this investigation explores how scientific networks are composed and how they collaborate thanks to applied informatics, observing the digital tools used by researchers to communicate and collaborate.

Simone Belli, Ernesto Ponsot
ICTs Connecting Global Citizens, Global Dialogue and Global Governance. A Call for Needful Designs

Humankind is in transition to a supra-system of humanity, in which the social relationships that organise the common good are re-organised such that global challenges are kept below the threshold of a self-inflicted breakdown. In order to succeed, three conditions are imperative: (1) Global governance needs a global conscience that orients towards the protection of the common good. (2) Such global governance needs a global dialogue on the state of the common good and the ways to proceed. (3) Such a global dialogue needs global citizens able to reflect upon the current state of the common good and the ways to proceed to desired states. Each of these imperatives is about a space of possibilities. Each space nests the following one such that they altogether form the scaffolding along which institutions can emerge that realise the imperatives when proper nuclei are introduced in those spaces. Such nuclei would already support each other. However, the key is to further their integration through Information and Communication Technologies. An information platform shall be launched that could cover any task on any of the three levels, entangled with the articulation of cooperative action from the local to the global, based on the cyber-subsidiarity model. This model is devised to ensure the percolation of meaningful information throughout the different organisational levels.

Wolfgang Hofkirchner, José María Díaz-Nafría, Peter Crowley, Wilfried Graf, Gudrun Kramer, Hans-Jörg Kreowski, Werner Wintersteiner
Introduction to the Mathematical Theory of Knowledge Conceptualization: Conceptual Systems and Structures

The paper departs from the general problem of knowledge integration and the basic strategies that can be adopted to confront this challenge. With the purpose of providing a sound meta-theoretical framework to facilitate knowledge conceptualization and integration, as well as assessment criteria to evaluate achievements regarding knowledge integration, the paper first reviews previous work in the field of conceptual spaces. It subsequently gives an overview of structural tools and mechanisms for knowledge representation, recapped in the modal stratified bond model of global knowledge. On these grounds, a novel formalized representation of conceptual systems, structures, spaces and algebras is developed through a set of definitions which goes beyond the exploration of mental knowledge representation and the semantics of natural languages. These two components provide a sound framework for the development of the glossaLAB international project with respect to its two basic objectives, namely (i) facilitating knowledge integration in general and particularly in the context of the general study of information and systems, and (ii) facilitating the assessment of the achievements as regards knowledge integration in interdisciplinary settings. An additional article tackles the solutions adopted to integrate these results in the elucidation of the conceptual network of the general study of information and systems.

Mark Burgin, José María Díaz-Nafría

Software Design Engineering

Frontmatter
Dynamic Interface and Access Model by Dead Token for IoT Systems

Communication between users and intelligent devices is normally done through a graphical user interface. In addition, devices that communicate using Bluetooth also implement a control interface. Thus, most of the devices in an enclosure such as a home or workplace can be remotely controlled. This implies that each device can have its own interface and IP assignment for its control, so users must learn and manage several communication interfaces. In this paper, we present a model of a general graphical user interface to control different smart devices that can consume HTTP requests or that are controlled by Bluetooth. In addition, we present an authentication approach for the Internet of Things that uses the proposed model.

Jorge Hernandez, Karen Daza, Hector Florez, Sanjay Misra
Evaluation of the Performance of Message Routing Protocols in Delay Tolerant Networks (DTN) in Colombian Scenario

Certain vehicles need to constantly send information to their monitoring stations; this information is usually sent by the vehicles through the cellular network. The use of these wireless networks depends on coverage, which is not available in all geographic areas. This is the case on road segments where the data-service coverage of cellular networks is partial or absent, making transmission impossible. A particular case is the road between the municipality of Juan de Acosta and the city of Barranquilla in the Atlántico department (Colombia). As a solution, Delay-Tolerant Networks (DTN) emerge, which allow the transmission of data to the monitoring stations when there is no cellular network coverage. In this work, a simulated evaluation of the performance of several message routing protocols for DTN is performed for the Juan de Acosta – Barranquilla scenario, using The Opportunistic Networking Environment simulator. The results show that the First Contact routing protocol presents the highest delivery rate and the lowest delivery latency, while the Spray and Wait protocol presents better results in system message overhead than the former.

Nazhir Amaya-Tejera, Farid Meléndez-Pertuz, Rubén Sánchez-Dams, José Simancas-García, Iván Ruiz, Hermes Castellanos, Fredy A. Sanz, César A. Cárdenas R, Carlos Collazos-Morales
Recovering Fine Grained Traceability Links Between Software Mandatory Constraints and Source Code

Software traceability is a necessary process to carry out source code maintenance, testing and feature location tasks. Despite its importance, it is not a process that is strictly conducted from the creation of every software project. Over the last few years, information retrieval techniques have been proposed to recover traceability links between software artifacts at a coarse-grained and middle-grained level. In contexts where it is fundamental to ensure the correct implementation of regulations and constraints at the source code level, as in the case of HIPAA, the proposed techniques are not enough to find traceability links in a fine-grained way. In this research, we propose a fine-grained traceability algorithm to find traces between high-level requirements written in natural language and the source code lines and structures where they are implemented.

Alejandro Velasco, Jairo Hernan Aponte Melo
Using Graph Embedding to Improve Requirements Traceability Recovery

Information retrieval (IR) is widely used in automatic requirements traceability recovery. Corresponding approaches are built on textual similarity, that is, the higher the similarity, the higher the possibility that artifacts are related. A common task of many IR-based techniques is to remove false positive links from the candidate links to achieve higher accuracy. In fact, traceability links can be recovered from different kinds of information, not only textual information. In our study, we propose to recover more traceability links by exploring both textual features and structural information. Specifically, we use combined IR techniques to process the textual information of the software artifacts and extract the structural information from the source code, establishing corresponding code relationship graphs. We then incorporate this structural information into the traceability recovery analysis by using graph embedding. The results show that combining IR techniques with graph embedding of the structural information can improve traceability recovery.

Shiheng Wang, Tong Li, Zhen Yang
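The textual-similarity half of the approach described in the abstract above can be sketched with TF-IDF and cosine similarity, as below. TF-IDF is only an assumed stand-in for the paper's combined IR techniques, and the structural graph-embedding step is indicated only in a comment; the requirements and code artifacts are hypothetical.

```python
# IR baseline for traceability recovery: rank source-code artifacts by
# textual similarity to each requirement. The structural step described in
# the abstract would additionally combine graph-embedding vectors of the
# code (learned from its relationship graphs) before ranking; omitted here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "The system shall encrypt patient records before storage",
    "Users must authenticate with a one-time password",
]
code_artifacts = {
    "RecordEncryptor.java": "class RecordEncryptor encrypt patient record aes storage",
    "OtpAuthenticator.java": "class OtpAuthenticator verify one time password login user",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(requirements + list(code_artifacts.values()))
req_vecs, code_vecs = matrix[: len(requirements)], matrix[len(requirements):]

similarities = cosine_similarity(req_vecs, code_vecs)
names = list(code_artifacts)
for i, req in enumerate(requirements):
    ranked = sorted(zip(names, similarities[i]), key=lambda t: -t[1])
    print(req, "->", ranked[0])
```
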
Backmatter
Metadata
Title
Applied Informatics
edited by
Hector Florez
Marcelo Leon
Jose Maria Diaz-Nafria
Simone Belli
Copyright Year
2019
Electronic ISBN
978-3-030-32475-9
Print ISBN
978-3-030-32474-2
DOI
https://doi.org/10.1007/978-3-030-32475-9