2018 | Book

Computational Science and Its Applications – ICCSA 2018

18th International Conference, Melbourne, VIC, Australia, July 2–5, 2018, Proceedings, Part IV

Edited by: Prof. Dr. Osvaldo Gervasi, Beniamino Murgante, Sanjay Misra, Elena Stankova, Prof. Dr. Carmelo M. Torre, Ana Maria A.C. Rocha, Prof. David Taniar, Bernady O. Apduhan, Prof. Eufemia Tarantino, Prof. Yeonseung Ryu

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science

About this Book

The five-volume set LNCS 10960–10964 constitutes the refereed proceedings of the 18th International Conference on Computational Science and Its Applications, ICCSA 2018, held in Melbourne, Australia, in July 2018.

Apart from the general tracks, ICCSA 2018 also includes 34 international workshops in various areas of computational science, ranging from computational science technologies to specific areas such as computer graphics and virtual reality.

Table of Contents

Frontmatter

Workshop Scientific Computing Infrastructure (SCI 2018)

Frontmatter
Virtual Laboratories: Prospects for the Development of Techniques and Methods of Work

The possibilities of using virtual laboratories in teaching physics at a university are discussed. Various scenarios for conducting classes in a virtual laboratory for both undergraduate and master's students are offered. Ways of expanding the subject matter and technical capabilities of the virtual laboratory are considered, and methodical recommendations together with possible technical solutions are suggested.

E. N. Stankova, N. V. Dyachenko, G. S. Tibilova
CUDA Support in GNA Data Analysis Framework

Usage of GPUs as co-processors is a well-established approach to accelerating costly algorithms operating on matrices and vectors. We aim to further improve the performance of the Global Neutrino Analysis framework (GNA) by adding GPU support in a way that is transparent to the end user. To achieve our goal we use CUDA, a state-of-the-art technology providing GPGPU programming methods. In this paper we describe new features of GNA related to CUDA support. Some specific framework features that influence GPGPU integration are also explained. The paper investigates the feasibility of applying GPU technology and shows an example of the acceleration achieved for an algorithm implemented within the framework. Benchmarks show a significant performance increase when using GPU transformations. The project is currently in the development phase. Our plans include implementation of the set of transformations necessary for data analysis in the GNA framework and tests of GPU expediency in the complete analysis chain.

Anna Fatkina, Maxim Gonchar, Liudmila Kolupaeva, Dmitry Naumov, Konstantin Treskov
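
As an editorial illustration of the transparency idea described above (GNA itself is a C++/CUDA framework, so this is only a sketch): in Python, CuPy's NumPy-compatible API lets the same transformation code run on CPU or GPU without changes visible to the caller. The function name `smearing_transformation` is hypothetical.

```python
# Illustrative sketch only: CuPy's NumPy-compatible API stands in for the
# idea of dispatching the same "transformation" to CPU or GPU transparently.
import numpy as np

try:
    import cupy as cp          # GPU backend, if CUDA is available
    xp_gpu = cp
except ImportError:
    xp_gpu = None

def smearing_transformation(xp, matrix, spectrum):
    """A typical transformation shape: a matrix acting on a spectrum."""
    return xp.matmul(matrix, spectrum)

n = 2048
m = np.random.rand(n, n)
v = np.random.rand(n)

out_cpu = smearing_transformation(np, m, v)                  # CPU path
if xp_gpu is not None:
    out_gpu = smearing_transformation(xp_gpu, xp_gpu.asarray(m),
                                      xp_gpu.asarray(v))     # GPU path
    assert np.allclose(out_cpu, xp_gpu.asnumpy(out_gpu), atol=1e-8)
```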
Application Porting Optimization on Heterogeneous Systems

Modern heterogeneous computer systems offer exceptional computational potential, but require specific knowledge and experience on the part of the programmer to fully realize it. In this paper we explore different approaches to the task of adapting an application to a heterogeneous computer system. We provide a performance evaluation of a test application ported using those approaches. We also evaluate the difficulty and time investment required to implement those approaches in relation to the performance improvements they offer.

Nikita Storublevtcev, Vladimir Korkhov, Alexey Beloshapko, Alexander Bogdanov
Creating Artificial Intelligence Solutions in E-Health Infrastructure to Support Disabled People

Recently, the creation of a barrier-free environment for disabled people has become more and more important, so that people do not face difficulties in fulfilling their ordinary needs, including communication. For this purpose, a communicator application was developed that allows people with speech and writing disorders, particularly people with ASD, to communicate using card pictograms. According to the US National Center for Health Statistics and the Health Resources and Services Administration, in 2011–2012 autism was detected in 2% of schoolchildren worldwide, so this problem is very relevant. This article discusses several approaches to using Artificial Intelligence to simplify text typing with pictogram-based cards through predictive input, which allows users to compose messages faster and simplifies the communication process. Word2Vec, a feed-forward neural network tool for analyzing text semantics, was used. Two approaches are considered: Continuous Bag of Words and Skip-gram. Quality measures of recommender systems were also applied, and the approach giving the best results was identified. Besides that, quality measurements were carried out to identify optimal solutions for sentiment analysis to automatically detect suspicious messages sent by users with such disabilities, which will help doctors enhance their capabilities for monitoring and behavioral control and take appropriate actions if undesirable patient behavior is detected by the system.

David Grigoryan, Avetik Muradov, Serob Balyan, Suren Abrahamyan, Armine Katvalyan, Vladimir Korkhov, Oleg Iakushkin, Natalia Kulabukhova, Nadezhda Shchegoleva
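
A minimal sketch (not the authors' code) of the Word2Vec step: training in both modes discussed above, CBOW (`sg=0`) and Skip-gram (`sg=1`), on a toy corpus of pictogram-card sequences, then suggesting semantically similar cards for predictive input. The corpus is invented for illustration.

```python
from gensim.models import Word2Vec

# Each "sentence" is a sequence of pictogram-card identifiers (hypothetical data).
messages = [
    ["i", "want", "drink", "water"],
    ["i", "want", "eat", "apple"],
    ["i", "want", "drink", "juice"],
    ["mom", "give", "water"],
]

cbow = Word2Vec(messages, vector_size=32, window=2, min_count=1, sg=0)
skipgram = Word2Vec(messages, vector_size=32, window=2, min_count=1, sg=1)

# Suggest cards semantically close to what the user has typed so far.
print(skipgram.wv.most_similar("drink", topn=3))
```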
GPGPU for Problem-Solving Environment in Accelerator Physics

The paper surveys the benefits of using graphics processors for general-purpose computations as part of a problem-solving environment in beam physics studies. A comparison is made between test numerical element-to-element modelling on the CPU and long-turn symbolic simulation on general-purpose GPUs in the working prototype. With the help of graphics processors on both sides (general-purpose computations and the graphics units themselves), the analysis of beam behaviour under the influence of the space charge is performed.

Nataliia Kulabukhova
The Construction of the Parallel Algorithm Execution Schedule Taking into Account the Interprocessor Data Transfer

A method of constructing a schedule for parallel algorithm execution is considered in the article. The method takes into account the execution time of each operation of the algorithm and the data dependencies between operations. It is based on an information graph in which the nodes are the operations of the algorithm and the edges are the directions of data transfer. By interchanging operations between computing nodes, it is possible to reduce the execution time of the algorithm, by reducing both the time spent on data transfer between computing nodes and the downtime of computing nodes. The algorithm can be applied both in parallel programming and in adjacent areas, for example, when scheduling tasks in distributed systems.

Yulia Shichkina, Al-Mardi Mohammed Haidar Awadh, Nikita Storublevtcev, Alexander Degtyarev
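
A minimal sketch of the general idea, not the authors' algorithm: list scheduling over an information graph whose edges carry data-transfer times that are paid only when producer and consumer are placed on different computing nodes. Task names, durations and transfer times below are illustrative.

```python
from collections import defaultdict

def schedule(durations, edges, n_procs):
    # durations: {task: time}; edges: {(u, v): transfer_time}, u before v.
    preds = defaultdict(list)
    for (u, v), t in edges.items():
        preds[v].append((u, t))
    finish, proc = {}, {}
    free = [0.0] * n_procs                  # time each processor becomes free
    for task in topo_order(durations, edges):
        best = None
        for p in range(n_procs):
            # Transfer cost applies only if the producer ran elsewhere.
            ready = max([finish[u] + (0 if proc[u] == p else t)
                         for u, t in preds[task]] or [0.0])
            start = max(free[p], ready)
            if best is None or start < best[0]:
                best = (start, p)
        start, p = best
        finish[task], proc[task] = start + durations[task], p
        free[p] = finish[task]
    return finish, proc

def topo_order(durations, edges):
    # Kahn's algorithm over the information graph.
    indeg = {t: 0 for t in durations}
    for (_, v) in edges:
        indeg[v] += 1
    order = [t for t, d in indeg.items() if d == 0]
    for t in order:
        for (u, v) in edges:
            if u == t:
                indeg[v] -= 1
                if indeg[v] == 0:
                    order.append(v)
    return order

tasks = {"a": 2, "b": 3, "c": 1, "d": 2}
deps = {("a", "c"): 4, ("b", "c"): 1, ("c", "d"): 2}
print(schedule(tasks, deps, n_procs=2))
```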
Data Storage, Processing and Analysis System to Support Brain Research

Complex human research, in particular research in the field of brain pathologies, requires strong informational support for the consolidation of clinical and biological data from various sources to enable data processing and analysis. In this paper we present the design and implementation of an information system for patient data collection, consolidation and analysis. We show and discuss results of applying cluster analysis methods to the automated processing of magnetic resonance voxel-based morphometry data to facilitate the early diagnosis of Alzheimer’s disease. Our results indicate that detailed investigation of the properties of cluster analysis data can significantly help neurophysiologists in the study of Alzheimer’s disease, especially with the means of automated data handling provided by the developed information system.

Vladimir Korkhov, Vladislav Volosnikov, Andrey Vorontsov, Kirill Gribkov, Natalia Zalutskaya, Alexander Degtyarev, Alexander Bogdanov
Staccato: Cache-Aware Work-Stealing Task Scheduler for Shared-Memory Systems

Work-stealing task schedulers yield near-optimal task distribution (i.e. all CPU cores are loaded equally) and have low time, memory and inter-thread synchronization overheads. The key idea of the work-stealing strategy is that when a scheduler worker runs out of tasks for execution, it starts stealing tasks from the queues of other workers. It has been shown that double-ended queues based on circular arrays are effective in this scenario. They are designed under the assumption that task pointers are stored in these data structures, while task objects reside in heap memory. By modifying the task queues so that they hold task objects instead of pointers, we managed to increase performance by more than 2.5 times on CPU-bound applications and to decrease last-level cache misses by 30% compared to the Intel TBB and Intel/MIT Cilk work-stealing schedulers.

Ruslan Kuchumov, Andrey Sokolov, Vladimir Korkhov
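
A sketch of the work-stealing strategy itself; the paper's actual contribution (storing task objects inline in a circular-array deque instead of heap pointers, for cache locality) is a C++-level optimization that Python cannot express. Each worker pops from the bottom of its own deque and steals from the top of a victim's.

```python
import random
import threading
from collections import deque

class Worker:
    """A scheduler worker owning a double-ended task queue."""
    def __init__(self, workers):
        self.workers = workers        # shared list of all workers
        self.tasks = deque()          # own deque: bottom = right end
        self.lock = threading.Lock()

    def push(self, task):
        with self.lock:
            self.tasks.append(task)

    def run(self):
        while True:
            task = self._pop_own() or self._steal()
            if task is None:          # nothing left anywhere: stop
                return
            task()

    def _pop_own(self):
        # Pop from the bottom of the worker's own deque.
        with self.lock:
            return self.tasks.pop() if self.tasks else None

    def _steal(self):
        # Steal from the top of a randomly chosen victim's deque.
        for victim in random.sample(self.workers, len(self.workers)):
            if victim is not self:
                with victim.lock:
                    if victim.tasks:
                        return victim.tasks.popleft()
        return None

workers = []
workers.extend(Worker(workers) for _ in range(4))
for i in range(100):                   # put all tasks on one worker;
    workers[0].push(lambda i=i: None)  # the others must steal to stay busy
threads = [threading.Thread(target=w.run) for w in workers]
for t in threads:
    t.start()
for t in threads:
    t.join()
```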
Design and Implementation of a Service for Cloud HPC Computations

Cloud computing has become a routine tool for scientists in many domains. To speed up the achievement of scientific results, a cloud service for the execution of distributed applications was developed. It relieves users of manually creating a virtual cluster environment or using a batch scheduler, allowing them to simply specify input parameters to perform their computations. This service, in turn, deploys a virtual cluster, executes the supplied job and uploads its results to the user’s cloud storage. It consists of several components and implements a flexible and modular architecture, which allows adding more applications on one side and various types of resources as computational backends on the other, as well as increasing the utilization of idle cloud resources.

Ruslan Kuchumov, Vadim Petrunin, Vladimir Korkhov, Nikita Balashov, Nikolay Kutovskiy, Ivan Sokolov
Porting the Algorithm for Calculating an Asian Option to a New Processing Architecture

This article describes some numerical approaches to solving the problem of pricing derivatives. These approaches are based on the Monte Carlo and finite difference methods. A number of techniques are given that make it possible to optimize the computational algorithms for use on graphics processors. A software and hardware complex is also described that increases the efficiency of the calculations.

Eduard Stepanov, Dmitry Khmel, Vladimir Mareev, Nikita Storublevtcev, Alexander Bogdanov
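
A standard Monte Carlo pricer for an arithmetic-average Asian call under geometric Brownian motion, as a minimal CPU sketch of the class of algorithm the paper ports to graphics processors; all parameters are illustrative.

```python
import numpy as np

def asian_call_mc(s0, strike, r, sigma, maturity, n_steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    # Simulate all paths at once: log-Euler scheme, one column per time step.
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    prices = s0 * np.exp(log_paths)
    payoff = np.maximum(prices.mean(axis=1) - strike, 0.0)  # arithmetic average
    return np.exp(-r * maturity) * payoff.mean()

print(asian_call_mc(s0=100, strike=100, r=0.05, sigma=0.2,
                    maturity=1.0, n_steps=252, n_paths=100_000))
```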
Influence of External Source on KPI Equation

The analysis of the influence of external sources on the evolution of 2D waves is carried out, with special attention to the possibility of exponential growth. We propose a master equation, a generalization of the Kadomtsev-Petviashvili-I (KPI) equation, that exhibits a major part of the problems in ocean wave evolution and is at the same time the most difficult from the point of view of numerical algorithm stability. Some indications for choosing correct numerical procedures are given. This analysis is especially relevant in connection with the emergence of new hybrid computing architectures, since porting applications to them strongly depends on the chosen algorithm.

Alexander V. Bogdanov, Vladimir V. Mareev, Nataliia V. Kulabukhova, Alexander B. Degtyarev, Nadezhda L. Shchegoleva
Reconstruction of Stone Walls in Form of Polygonal Meshes from Archaeological Studies

Visualization of archaeological monuments plays an important role in the reconstruction of historic and cultural context. The fragmented nature of many artefacts and archival documents stresses the need to use specialized software to model the objects being studied. The paper describes computer algorithms for monument reconstruction that generate three-dimensional models from very limited input data. We have developed a software product that generates 3D models of stones from their contours and enables a user to reconstruct a wall based on the available polygonal objects. This software product has a number of distinguishing features: reliable results even with very limited input data; no need for specialized equipment; and flexibility and support for recurrent use of the reconstructed model’s components.

Oleg Iakushkin, Anna Fatkina, Vadim Plaksin, Olga Sedova, Alexander Degtyarev, Alexei Uteshev
Algorithm for Processing the Results of Cloud Convection Simulation Using the Methods of Machine Learning

Data preprocessing is an important stage in machine learning. The use of well-prepared data increases the accuracy of predictions, even with simple models. An algorithm has been developed and implemented in program code for converting the output data of a numerical model into a format suitable for subsequent processing. A detailed algorithm is presented for data preprocessing to select the most representative cloud parameters (features). As a result, six optimal parameters (vertical component of speed; temperature deviation from ambient temperature; relative humidity above the water surface; the mixing ratio of water vapour; total droplet mixing ratio; and vertical height of the cloud) have been chosen as indicators for forecasting dangerous convective phenomena (thunderstorm, heavy rain, hail). Feature selection was performed using a recursive feature elimination algorithm with automatic tuning of the number of selected features via cross-validation. Cloud parameters were fixed at the mature stage of cloud development. Future work will address how the evolution of the cloud parameters from the initial stage to the dissipation stage influences the probability of a dangerous phenomenon.

E. N. Stankova, E. T. Ismailova, I. A. Grechko
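
A sketch of the described feature-selection step using scikit-learn's RFECV (recursive feature elimination with cross-validated tuning of the number of features); the cloud-parameter matrix here is random stand-in data and the two extra feature names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

feature_names = ["w_vertical", "temp_deviation", "rel_humidity",
                 "vapour_mixing", "droplet_mixing", "cloud_height",
                 "extra_param_1", "extra_param_2"]
X = np.random.rand(200, len(feature_names))   # cloud parameters at mature stage
y = np.random.randint(0, 2, 200)              # dangerous phenomenon observed?

selector = RFECV(RandomForestClassifier(n_estimators=100, random_state=0),
                 step=1, cv=5)                # CV picks how many features to keep
selector.fit(X, y)
print([n for n, keep in zip(feature_names, selector.support_) if keep])
```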
3D Reconstruction of Landscape Models and Archaeological Objects Based on Photo and Video Materials

Computer technology is used to reconstruct the main parts of archaeological monuments by creating their 3D models. There are a number of software products that can solve this important task of historical and cultural studies. However, the existing solutions either require expensive specialized equipment or may only be used by specially trained personnel. This makes it relevant to create software that can reconstruct 3D models automatically. This paper describes the algorithm and development stages of a new application that comprises components with the following functionality: video decomposition, user movement tracking, point cloud creation, polygon mesh creation, and application of texture to a polygon mesh. The software we have developed runs an automatic 3D reconstruction of landscape models and archaeological objects based on photo and video materials. It significantly reduces labour costs and processing time compared to the existing solutions. The software has a friendly interface and may be operated by users without special expertise.

Oleg Iakushkin, Dmitrii Selivanov, Liliia Tazieva, Anna Fatkina, Valery Grishkin, Alexei Uteshev

10th International Symposium on Software Engineering Processes and Applications (SEPA 2018)

Frontmatter
A Software Reference Architecture for IoT-Based Healthcare Applications

With the Internet of Things (IoT), a myriad of connected things and the data captured by them are making possible the development of applications in various markets, such as transportation, buildings, energy, homes, industry and healthcare. Concerning healthcare, the development of these applications is expected as part of the future, since IoT can be the main enabler for distributed healthcare applications and thus has significant potential to contribute to an overall decrease of healthcare costs while increasing health outcomes. However, there are many challenges in the development and deployment of this kind of application, such as interoperability, availability, usability and security. The complex and heterogeneous nature of IoT-based healthcare applications makes their design, development and deployment difficult. It also increases development cost and causes interoperability problems with existing systems. To contribute to solving the aforementioned challenges, this paper aims to improve the understanding and systematization of the architectural design of IoT-based healthcare applications. It proposes a software reference architecture, named Reference Architecture for Healthcare (RAH), to systematically organize the main elements of IoT-based healthcare applications, their responsibilities and interactions, promoting a common understanding of these applications’ architecture to minimize the challenges related to it.

Itamir de Morais Barroca Filho, Gibeon Soares de Aquino Junior
Machine Learning Based Predictive Model for Risk Assessment of Employee Attrition

Every organization today is challenged by the issue of employee attrition. Attrition is the reduction in the employee base of an organization. This could be because of voluntary resignation or expulsion by higher management. It is important for the company to be prepared for the loss of human power in whom the company has invested and from whose help it has earned revenue. Thus, it is a profitable idea to predict the risk involved with uneven attrition so that management can take preventive measures and wise decisions for the benefit of the organization. In this paper, a model based on Machine Learning techniques that predicts employee attrition has been designed. The model is implemented and thoroughly analyzed for the full profile of companies. It has been shown that the model can be effectively used to maximize employee retention.

Goldie Gabrani, Anshul Kwatra
A Way of Design Thinking as an Inference Rule of Substantially Evolutionary Theorizing in Software Projects

The principal direction of innovations in software engineering is the search for useful ways of theorizing in the design process and its results. In this search, the nature of design and the specificity of software essences should be taken into account. The paper describes a way of theorizing that focuses on features of the organizational and behavioral activity of designing and on Grounded Theories used in such conditions. The main feature of the suggested theorization is building a project theory on the basis of facts of the designers' interaction with accessible experience when they use the design thinking approach to evolve the project and its theory in parallel. This way leads to new benefits in architectural and cause-and-effect forms of understanding.

P. Sosnin
A Critical Review of the Politics of Artificial Intelligent Machines, Alienation and the Existential Risk Threat to America’s Labour Force

While an increasing number of scholars are growing wary of the troubling predictions about when Artificial Intelligent Machines (AIMs) will fully acquire the capacity of intentionality (the ability of AIMs to possess the similitude of human-like knowledge for processing data and the knowledge of what is right and wrong in their own eyes, to the detriment of mankind), other scholars argue that politicians and the powers that be in the American government have blatantly disregarded the existential threats magnified in the works of scholars like Katja Grace and Kevin Drum, who frankly portrayed with some degree of certainty an era of job apocalypse, among other dangers mankind would be exposed to when AIMs eventually take over. Drawing from the Marxian Alienation Theory, the authors examine the degrees of extinction and existential threat imminent on humanity, and the justification and implications of politicizing the predictions made about when AIMs will take over man’s job. The ex-post facto research methodology and Derrida’s reconstructive and deconstructive analytical method were adopted for evaluating the degree of politicking at play among American politicians. The paper identifies the impending era of mass joblessness as one of the greatest tasks progressive governments and thinkers must grapple with in order to curb this threat. Policy makers and scholars of AIM research must quickly identify pathways for distributing the gains of robot labour, such that its operations cease to be a threat to mankind.

Ikedinachi Ayodele Wogu, Sanjay Misra, Patrick Assibong, Adewole Adewumi, Robertas Damasevicius, Rytis Maskeliunas
Mapping Dynamic Behavior Between Different Object Models in AOM

Adaptive Object Model (AOM) is an architectural pattern aimed at increasing flexibility regarding domain classes. Domain entity types are represented in AOM as instances that can be changed at runtime. Because entities have a distinct structure, they are not compatible with the majority of existing frameworks, especially those that use reflection and code annotations. In the proposed model, AOM entities can be mapped and adapted to the format expected by the frameworks. A reference implementation, called Esfinge AOM Role Mapper, was developed to evaluate the viability of the proposed model. When the development was concluded, it was realized that, although the AOM architecture provides this flexibility in software development, it does not implement dynamic behavior based on adding new methods to adapted classes. The main objective of this work is to introduce dynamic behavior into the AOM architecture, using the Esfinge AOM Role Mapper reference framework to validate this study.

Antônio de Oliveira Dias, Eduardo Martins Guerra, Fábio Fagundes Silveira, Tiago Silva da Silva
Transformation of the Teacher into “Produser”: An Emergency Case from the Appropriation of Social Web-Based Technologies

The teacher “produser”, an emerging figure that explores the subject’s ability to participate in or generate new experiences and content from information and communication technologies, provides a new overview of the impact of technological developments in the social, cultural and even educational fields. This overview is the main focus of this manuscript. For this reason, we investigate both the emerging role of this figure and its relation to the work of teachers in higher education. A quantitative methodological approach with an exploratory design is adopted, which allows determining the teachers’ favorability towards taking on this role through the technique of semantic differential. As a result of this approach, the produser is characterized by the will, cultural work and empathy shown to promote innovation and learning with autonomous and collaborative purposes, in order to achieve relevance and flexibility in teaching. The conclusion is that transitioning to produser depends on its dissociation from the economic components closer to the action of prosumers, thereby generating processes of collective consciousness from the social web to address many different issues, both inside and outside the classroom.

Karolina González Guerrero, José Eduardo Padilla Beltrán, Leonardo E. Contreras Bravo
Investigation of Obstructions and Range Limit on Bluetooth Low Energy RSSI for the Healthcare Environment

Indoor Real-Time Location Systems (RTLS) research identifies Bluetooth Low Energy as one of the technologies that promise an acceptable response to the requirements of the healthcare environment. In this context, we investigate the latest improvements in Bluetooth 5.0, especially with regard to its range when the signal penetrates through different types of multiple partitions. The improvements in Bluetooth technology, especially with regard to form factor, low energy consumption, and higher speeds, make it a viable technology for use in indoor RTLS. Several different venues at the University were used to mimic the healthcare environment for the experiment. The results indicated an acceptable range through obstacles such as glass, drywall partitions and solid brick walls. Future research will investigate methods to determine the position of Bluetooth Low Energy devices for the possible location of patients and assets.

Jay Pancham, Richard Millham, Simon James Fong
A Business Intelligent Framework to Evaluate Prediction Accuracy for E-Commerce Recommenders

It is important for online retailers to better understand users' interests in order to create personalized recommendations and survive in the competitive market. Implicit details of the user extracted from clickstream data play a vital role in making recommendations. These indicators reflect users’ items of interest. Browsing behavior, frequency of item visits, and the time taken to read details of an item are a few measures that predict a user's interest in a particular item. After identifying these strong attributes, users are clustered on the basis of context clicks, such as promotional and discounted offers, and the interest of the individual user is predicted for the particular context in a user-context preference matrix. After clustering analysis is performed, a neighborhood formation process is conducted using collaborative filtering on the basis of item category, such as regular or branded items, which depicts users’ interest in that particular category. Using these matrices, the computational burden and processing time to generate recommendations are greatly reduced. To determine the effectiveness of the proposed work, an experimental evaluation has been carried out which clearly shows the better performance of the system compared to conventional approaches.

Shalini Gupta, Veer Sain Dixit
Recommendations with Sparsity Based Weighted Context Framework

Context-Aware Recommender Systems (CARS) are a sort of information filtering tool that has become crucial for services in this era of big data. Owing to their characteristic of including contextual information, they achieve better results in terms of prediction accuracy. Collaborative filtering has proved to be an efficient technique for recommending items among all existing techniques in this area. Moreover, incorporating other evolutionary techniques into it, for contextualization and to alleviate the sparsity problem, can give an additive advantage. In this paper, we propose to find a vector of weights using particle swarm optimization to control the contribution of each context feature. The aim is to strike a balance between data sparsity and maximization of contextual effects. Further, the weighting vector is used in different components of user- and item-neighborhood-based algorithms. Moreover, we present a novel method to find an aggregated similarity from local and global similarity based on a sparsity measure. Local similarity gives importance to co-rated items, while global similarity utilizes all the ratings assigned by a pair of users. The proposed algorithms are evaluated for individual and group recommendations. The experimental results on two contextually rich datasets show that the proposed algorithms outperform other techniques in this domain. The sparsity measure best suited to the aggregation is dataset dependent. Finally, the algorithms show their efficacy for group recommendations too.

Veer Sain Dixit, Parul Jain
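
One plausible reading of the aggregation idea, as a sketch only: a local similarity over co-rated items, a global similarity over all ratings, and a combination weight driven by how sparse the co-rated overlap is. The concrete sparsity measure and similarity functions in the paper may differ.

```python
import numpy as np

def aggregated_similarity(u, v):
    """u, v: rating vectors with np.nan for unrated items."""
    co = ~np.isnan(u) & ~np.isnan(v)
    rated_u, rated_v = ~np.isnan(u), ~np.isnan(v)
    if co.sum() == 0:
        return 0.0
    local = cosine(u[co], v[co])                       # co-rated items only
    glob = cosine(np.nan_to_num(u), np.nan_to_num(v))  # all assigned ratings
    # Sparsity measure: how small the overlap is relative to either profile.
    sparsity = 1.0 - co.sum() / min(rated_u.sum(), rated_v.sum())
    # The sparser the overlap, the more weight the global view gets.
    return (1 - sparsity) * local + sparsity * glob

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

u = np.array([5, 3, np.nan, 1, np.nan])
v = np.array([4, np.nan, 2, 1, 5])
print(aggregated_similarity(u, v))
```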
Teaching Training Using Learning Collaborative Technologies for Knowledge Generation

Changes in learning and collaborative technologies due to the social web have a great impact on education, in particular regarding the generation of knowledge. In this learning environment, two actors emerge in higher education: the prosumer (who produces and consumes technologies) and the produser (who produces and reuses technologies and is empowered by them). This article characterizes, describes and compares these two figures in order to determine how appropriate the produser’s action is in higher education. For research purposes, we used an exploratory study with a quantitative approach of non-experimental design, in which a Likert scale was used. Several teachers from the Universidad Militar Nueva Granada (Colombia), who use learning and collaborative technologies as well as b-learning, participated in the study. Although the prosumer is considered a preponderant figure in the economic context, this does not necessarily imply that it should be transferred to education. In fact, the produser action should be deemed more appropriate for higher education, since the implementation and usage of learning and collaborative technologies develop competences of empowerment and creativity, which transcend mere technologies for production and consumption.

Karolina González Guerrero, José Eduardo Padilla Beltrán, Andrés Felipe Matallana Borda
A Scalable Bluetooth Low Energy Design Model for Sensor Detection for an Indoor Real Time Location System

Indoor Real-Time Location Systems (RTLS) research identifies Bluetooth Low Energy as one of the technologies that promise an acceptable response to the requirements of the healthcare environment. A scalable dynamic model for sensor detection, which uses the latest developments in Bluetooth Low Energy, is designed to extend its range coverage. This design extends our previous papers, which tested the range and signal strength through multiple types of obstructions. The model is based on the scenarios and use cases identified for future use in RTLS within the healthcare sector. The Unified Modelling Language (UML) is used to present the models, and inspections and walkthroughs are used to validate and verify them. This model will be implemented using Bluetooth Low Energy devices for patients and assets within the healthcare sector.

Jay Pancham, Richard Millham, Simon James Fong
A Survey About the Impact of Requirements Engineering Practice in Small-Sized Software Factories in Sinaloa, Mexico

Scientific literature has over time highlighted the relevance of requirements engineering to the software development process for desktop, web or mobile applications. Nevertheless, not much contemporary information regarding current practices in small-sized software factories is available. This is especially true in the region of Sinaloa, Mexico; for that reason, this work presents an exploratory study which provides insight into industrial practices in Sinaloa. A combination of qualitative and quantitative data was collected from sixteen software factories, using semi-structured interviews and a detailed questionnaire. A Pearson (r) correlation analysis was performed independently between the variables Company location (EU), Scope of coverage (AC), Number of workers (NT), Time to live in the market (TV), Projects completed (PY), Time dedicated to activities related to the project (TA), and Outdated projects completed (PC), in order to determine the degree of relationship between each of the variables mentioned and all the others. A correlation analysis and an analysis of variance (ANOVA) were performed. The quantitative results offer opportunities for further interpretation and comparison.

José Alfonso Aguilar, Aníbal Zaldívar-Colado, Carolina Tripp-Barba, Roberto Espinosa, Sanjay Misra, Carlos Eduardo Zurita
An Approach of a Framework to Create Web Applications

Currently, there are many frameworks for building web applications based on the architectural pattern MVC (Model View Controller). One interesting approach is based on using 3-layer models, which allow identifying and separating the final application into different layers, facilitating its construction and maintenance. The purpose of this paper is to present our approach to a framework for developing PHP web applications using a 3-layer model. This approach integrates different technologies and design patterns in order to provide one tool that supports the community in the creation of PHP web applications by providing built-in tools and applying good practices focused on the pursuit of proper development times. In addition, the approach aims to handle common issues in the industry, such as efficiency, maintainability, and security.

Daniel Sanchez, Oscar Mendez, Hector Florez
Model Driven Engineering Approach to Manage Peripherals in Mobile Devices

In the last years, Model Driven Engineering (MDE) has demonstrated several benefits for software development. It has gained great popularity in both the academic and industry communities. The application of its guidelines is suitable for several domains, including model transformations. In addition, mobile applications are a domain with a lot of relevance. However, these applications increase their value when they properly use mobile peripherals. Thus, the purpose of this paper is to show the creation of a domain metamodel to manage peripherals in mobile devices. Said metamodel will serve to build a Model Transformation Chain that will be able to generate native code for the Android platform.

Daniel Sanchez, Hector Florez
Automated Analysis of Variability Models: The SeVaTax Process

Variability management includes a set of techniques and methods for defining, modeling, implementing and testing variabilities within the development of a Software Product Line (SPL). Within the testing activity, several approaches have proposed novel techniques for the automatic analysis of variability models. However, although the research community has reached some consensus about the base scenarios that should be evaluated, the large number of modeling approaches means that the way of evaluating those scenarios is still being extensively researched. In this work we propose the SeVaTax process, which takes variability models based on orthogonal variability model (OVM) primitives as inputs and generates a formal model representation. It then uses a SAT-based solver to analyze a wide set of validation scenarios and provides different levels of responses, even proposing specific actions for correcting the models. Finally, we compare our proposal to others in the literature, based on the supported validations.

Matias Pol’la, Agustina Buccella, Alejandra Cechich
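
A sketch of the SAT-encoding idea, not the SeVaTax implementation: variability primitives become CNF clauses, and a solver answers validation scenarios such as whether the model is void or a feature is dead. The toy feature model and the `pycosat` dependency are assumptions.

```python
import pycosat

ROOT, GUI, DB, MYSQL, SQLITE = 1, 2, 3, 4, 5   # one SAT variable per feature

cnf = [
    [ROOT],                       # the root is always selected
    [-ROOT, GUI], [-GUI, ROOT],   # GUI is mandatory: ROOT <-> GUI
    [-DB, ROOT],                  # DB is optional: DB -> ROOT
    [-MYSQL, DB], [-SQLITE, DB],  # variants require their parent
    [-DB, MYSQL, SQLITE],         # DB -> at least one variant
    [-MYSQL, -SQLITE],            # ... and at most one (exclusion)
]

# A model is "void" if no valid configuration exists at all.
print("model is void:", pycosat.solve(cnf) == "UNSAT")
# A feature is "dead" if no valid configuration selects it.
print("MYSQL is dead:", pycosat.solve(cnf + [[MYSQL]]) == "UNSAT")
```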
Study on Process of Data Processing and Analysis Based on Geographic Information

Today, Big Data is the biggest buzzword. However, for data transactions to take off, although it is easy to combine and analyze stored data and sales data, additional infrastructure and manpower are required for processing and analyzing the data after purchase. In this paper, we develop data products through geographic-information-based data processing and analysis. In addition, a system is proposed that allows the data buyer to utilize data without any separate infrastructure, by providing the user with the entire data processing workflow for product development. Finally, the process and system are verified through a mediation platform.

Jae-Young Choi, Young-Hwa Cho, Chin-Chol Kim, Yeong-Il Kwon, JeongAh Kim, Suntae Kim, EunSeok Kim
Impact Factors on Using of E-learning System and Learning Achievement of Students at Several Universities in Vietnam

The industrial revolution 4.0 opens many opportunities for online learning and leads to the need to study, be entertained, and work anywhere and anytime. Recently, e-learning systems have become vital for any university to increase educational quality and to provide students with useful, high-quality learning resources. However, how to encourage e-learning usage and improve the learning achievement of students through an e-learning system is still a challenging task. Based on previous research, a research model is proposed and evaluated by Cronbach's alpha analysis, EFA, CFA, and Structural Equation Modeling (SEM) techniques using SPSS and AMOS software. Based on quantitative analysis of 356 valid samples, the results showed that 5 factors positively impact e-learning usage: University support (0.367), Computer competency of students (0.274), Infrastructure (0.195), Content and design of courses (0.145), and Collaboration of students (0.118). Besides, learning achievement is influenced by 2 factors: E-learning usage (0.446) and Collaboration of students (0.129). Finally, some managerial suggestions are made to improve the efficiency of e-learning usage and to increase the learning achievement of university students in Vietnam.

Quoc Trung Pham, Thanh Phong Tran
Measuring the Extent of Source Code Readability Using Regression Analysis

Software maintenance accounts for a large portion of the software life cycle cost. In the software maintenance phase, comprehending the legacy source code is inevitable, and it takes most of the time. Source code readability is a metric of the extent of source code comprehension: the more readable the code, the easier it is for code readers to comprehend the system from the source code. This paper proposes an enhanced source code readability metric to quantitatively measure the extent of code readability, a more refined measurement method than previous research, which dichotomously judged whether the source code was readable or not. For evaluation, we carried out a survey and analyzed the results with two-way linear regression analysis to measure the extent of source code readability.

Sangchul Choi, Suntae Kim, Jeong-Hyu Lee, JeongAh Kim, Jae-Young Choi
Optimization of Scaling Factors for Image Watermarking Using Harmony Search Algorithm

We propose a novel watermarking scheme for images which optimizes the watermarking strength using the Harmony Search Algorithm (HSA). The optimized watermarking scheme is based on the discrete wavelet transform (DWT) and singular value decomposition (SVD). The amount of modification made to the coefficients of the LL3 sub-band of the host image depends on the values obtained by the Harmony Search Algorithm. For optimization of the scaling factors, HSA uses an objective function which is a linear combination of imperceptibility and robustness. The PSNR and SSIM values show that the visual quality of the signed and attacked images is good. The proposed scheme is robust against common image processing operations. It is concluded that the embedding and extraction of the proposed algorithm are well optimized, robust, and show an improvement over other similar reported methods.

Anurag Mishra, Charu Agarwal, Girija Chetty
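
A minimal sketch of DWT-SVD embedding in the LL3 sub-band with a scaling factor `alpha`; in the paper, HSA searches over such scaling factors against a PSNR/robustness objective. The wavelet choice and data are illustrative; requires `numpy` and `pywt`.

```python
import numpy as np
import pywt

def embed(host, watermark, alpha):
    # Three-level DWT; coeffs[0] is the LL3 approximation sub-band.
    coeffs = pywt.wavedec2(host, "haar", level=3)
    ll3 = coeffs[0]
    u, s, vt = np.linalg.svd(ll3, full_matrices=False)
    _, sw, _ = np.linalg.svd(watermark, full_matrices=False)
    s_marked = s + alpha * sw[: len(s)]          # modify singular values only
    coeffs[0] = (u * s_marked) @ vt              # rebuild LL3, then inverse DWT
    return pywt.waverec2(coeffs, "haar")

host = np.random.rand(256, 256)       # stand-in for the host image
wm = np.random.rand(32, 32)           # stand-in for the watermark
signed = embed(host, wm, alpha=0.05)  # HSA would search over alpha here
print(float(np.abs(signed - host).max()))
```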
Combining Automatic Variability Analysis Tools: An SPL Approach for Building a Framework for Composition

The automatic analysis of variability models is an important research field within variability management activities. In the context of software product lines, it includes a set of methods and techniques aimed at verifying the design of variability models in order to avoid inconsistencies during variability definition, implementation, and derivation activities. Several tools and proposals exist that implement the basic activities involved in this analysis process. However, their large number makes it difficult to find and select the most suitable tool or set of tools to apply in a particular SPL development. Taking this problem into account, our work aims at developing a framework, built as a software product line, that allows developers to compose/build automatic analysis tools according to their specific needs. We illustrate the proposal through possible instantiations of the framework.

Agustina Buccella, Matias Pol’la, Esteban Ruiz de Galarreta, Alejandra Cechich
SW Architecture of Clinical Decision Support Service in Prevention of Falls

A clinical decision support (CDS) service reduces errors in healthcare services and improves the quality and efficiency of healthcare by providing appropriate recommendations or alerts when needed. Owing to these advantages, attempts to build a CDS service for each hospital, and for each ward in the hospital, are increasing. In order to efficiently build multiple CDS systems, it is necessary to develop them as a CDS product line rather than as single CDS services. That is, an architecture is needed that can accommodate the variability of a CDS service so as to easily reflect the different requirements that arise. In this study, we designed an architecture that can support the building of a product line addressing falls, which is a main management subject of each hospital, by applying an architecture-based design (ABD) technique. The applicability of the product line architecture was verified by additionally constructing CDS services to prevent falls in other hospitals based on the proposed architecture.

SeungYoung Choi, Jeong Ah Kim, InSook Cho
Learning User Preferences for Recommender System Using YouTube Videos Tags

Recommender systems have become essential in several domains to deal with the problem of information overload. Collaborative filtering has been one of the most popular paradigms of recommender systems for over a decade. Personalized recommender systems use the past preference history of users to make future recommendations for them. The cold start problem of recommender systems concerns personalized recommendation for users having little or no past history. In this work we propose an approach to learn implicit user preferences by making use of YouTube video tags. The profile of a new user is created from his/her preferences in watching YouTube videos. This profile is generic and may be used for a wide variety of recommender system domains. In this work we have used it for a biography recommender system; however, it may be used for several other types of recommender systems.

Sunita Tiwari, Abhishek Jain, Prakhar Kothari, Rahul Upadhyay, Kanishth Singh
A Novel Methodology for Effective Requirements Elicitation and Modeling

Undoubtedly, requirements engineering is one of the most crucial steps in the development of any software, upon which its success depends. In light of the increasing flow of sensitive information, attack attempts and numerous interactions with a variety of users, developing a correct software specification is a challenge. Software implementation needs to be an exact translation of this specification. A correct specification ensures customer satisfaction, and efficient modeling of requirements helps to obtain a precise specification. The methodology envisaged here follows an exhaustive approach: eliciting requirements from all possible types of stakeholders, obtaining the different constituent entities of the software, finding associations among them, and finally modeling the requirements through novel diagrams. This modeling scheme scores over prior methodologies in the amount of information it represents. The methodology incorporates several good practices suggested by different researchers and can cater to any domain.

Rajat Goel, Mahesh Chandra Govil, Girdhari Singh
IoT Powered Vehicle Tracking System (VTS)

With its new avenues, the Internet of Things is bringing immense value and potential to the lifestyle of the masses. One important application is managing and tracking large fleets, which falls under the domain of effective transportation and logistics. In this paper a framework is proposed for a real-time fleet tracking system, composed of GPS, GSM and microcontroller technologies. The key features of the system are real-time location tracking, an open-source GIS platform, flexibility and a web-based user interface provided at the base station. A prototype of the proposed system was implemented and experimentally tested on many trips in the Delhi NCR region of North India. The system has been found to be stable and robust. The targeted fleet has been accurately tracked and its location transmitted to the server in real time. Further, with its geofencing capabilities, the functionality of the Traccar server, and its user-friendly interface, the system serves as a ubiquitous fleet tracking system providing maximum accessibility for users anytime and anywhere.

Priti Jagwani, Manoj Kumar
Grammar-Algebraic Approach to Analyze Workflows

Improving the lifecycle of automated systems and reducing their development time are important production problems in a large enterprise. We have created a new approach to the analysis and transformation of their processes on the basis of the authors' principles, a grammar, and a design process method for narrowing the semantic gap between business process analysis and business process execution. This approach allows designers to improve the quality of, and reduce the time spent on, the lifecycle of automated systems.

Alexander Afanasyev, Nikolay Voit
Specifying and Incorporating Compliance Requirements into Software Development Using UML and OCL

Nowadays, industries, agencies and institutions demand a high degree of compliance at different levels of the commercial enterprise to meet various laws, regulations, standards, etc. Compliance checks on the processes of different firms have shown that this is a daunting task, with high monetary implications in resolving the issues of changing requirements. Here, compliance requirements were incorporated into an industrial domain in Nigeria in order to develop an advanced and effective system. The Unified Modeling Language was used to design the software for the case study. Classical UML diagrams, such as the use case diagram, activity diagram, class diagram and sequence diagram, were designed for the system. Compliance requirements embedded in the UML were formalized and validated using the Object Constraint Language. Facts gathered from different organizations and customers in this domain were used to incorporate compliance requirements into the design. This will aid system developers in implementing compliant systems for business enterprises.

Oluwasefunmi Tale Arogundade, Temitope Elizabeth Abioye, Abiodun Muyideen Mustapha, Adeola Mary Adeniji, Abiodun Motunrayo Ikotun, Franklin O. Asahiah
Software R&D Process Framework for Process Tailoring with EPF Cases

Process tailoring is the making, altering, or adapting of a process description for a particular end. Process tailoring is not simple work, for the following reasons: first, it should generate a project-specific software process each time it is executed; second, it can be considered a reuse activity over the standard software process; third, it requires varied experience and an intimate knowledge of several aspects of software engineering. To resolve these difficulties, we propose a software research and development (R&D) process framework that can efficiently make, alter, or adapt the software processes to be applied to particular software projects by reusing constructed software process assets. We expect that R&D project tailors can efficiently establish software processes, applying reusable legacy software process assets to specific projects through the proposed process framework for process tailoring. If they make, alter, or adapt their own software processes founded on the proposed framework, they can reduce the effort of reapplying software processes.

SeungYong Choi, JeongAh Kim, SunTae Kim
Performance Evaluation of Visual Descriptors for Image Indexing in Content Based Image Retrieval Systems

In practice, appropriate computer vision and image processing techniques are usually employed to obtain image visual features. Central to a functional Content-Based Image Retrieval (CBIR) system is effective indexing and fast searching of images based on visual features. Effective indexing is also essential to make a CBIR system scalable to large image databases and to incorporate advanced techniques such as machine-learning-based relevance feedback (RF). However, it is extremely difficult to know which particular feature model(s) will uniquely identify certain groups of images, while including many feature models can incur dimensionality problems. In this paper, Colour Moment (CM), Gabor Wavelet (GW), and Wavelet Moment (WM) are used to encode the low-level information at global and sub-global image levels. A query-by-feature-example retrieval (QVER) was implemented to test the retrieval performance of each feature descriptor by computing the average mean precision value for the L1 and L2 distance measures. Taking the average of the recalls, average mean precision values of 0.6501, 0.6330 and 0.6380 were obtained for 54-dimensional CM (CM54), 48-dimensional GW (GW48) and 40-dimensional WM (WM40) respectively. The results reveal that the colour descriptor computed using only the first two statistical moments at the sub-global image level gave better retrieval performance than that computed at the global image level, while the converse is true for the texture descriptors. Hence, CM54, GW48, and WM40 are recommended for the CM, GW, and WM feature models respectively.

Oluwole A. Adegbola, David O. Aborisade, Segun I. Popoola, Aderemi A. Atayero
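
A sketch of the colour-moment descriptor described above: the first two statistical moments (mean and standard deviation) per channel, computed at a sub-global level by splitting the image into a 3x3 grid, giving 9 x 3 x 2 = 54 values, matching the CM54 dimensionality; the grid layout is an assumption.

```python
import numpy as np

def colour_moments(image, grid=3):
    """image: H x W x 3 array; returns a 54-dim descriptor for grid=3."""
    h, w, _ = image.shape
    features = []
    for i in range(grid):
        for j in range(grid):
            block = image[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            for c in range(3):
                features.append(block[..., c].mean())   # first moment
                features.append(block[..., c].std())    # second moment
    return np.array(features)

img = np.random.rand(120, 120, 3)     # stand-in for a database image
print(colour_moments(img).shape)      # (54,)
```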
Model Checking of TTCAN Protocol Using UPPAAL

In recent years, vehicles have become more and more intelligent and automated. Some experts estimate that more than 80% of all current innovations in vehicles are based on distributed electronic systems. The critical parts of such systems are the services provided by the underlying distributed control networks. TTCAN is an extension of the standard Controller Area Network (CAN), which is the most widely adopted in-vehicle network. Given the complexity of the TTCAN protocol, formal verification is the best choice for verifying the correctness of its specification. Previous research was only able to verify models of the TTCAN protocol with no more than three nodes. With four nodes in the model, two problems arise: state space explosion and excessive verification time. This paper proposes a novel method with which a model of the TTCAN protocol with 4 nodes can be verified.

Liu Shuxin, Noriaki Yoshiura
Processing of Design and Manufacturing Workflows in a Large Enterprise

The paper deals with the problem of design and manufacturing workflows in a large enterprise. As an example of a workflow, we present the authors' model of the coordination of design documentation (DD) based on a Petri net. The model was analyzed for possible errors in system design.

Alexander Afanasyev, Maria Ukhanova, Irina Ionova, Nikolay Voit
Graph Database Indexing Layer for Logic-Based Tree Pattern Matching Over Intensional XML Document Databases

Most XML query evaluation approaches are based on the technique of tree pattern query (TPQ) matching to find similar occurrences of the query’s path and conditions. Mainly, two types of constraints are matched to evaluate a given query: hierarchical structure constraints and value-based constraints. However, the TPQ technique falls short when it comes to matching logic-based constraints and non-hierarchical relationships between nodes and entities in the XML document and database. In this paper, we overcome this shortcoming by providing an abstract graph database layer with a logic-based graph relational model to inspect and resolve the logic of the query and choose the most relevant nodes in the XML document. Only the subtrees of the relevant nodes are traversed in the document; the other subtrees are skipped. We propose the application of a graph database as an indexing layer that defines conceptual links between database entities, along with logic-based assertions and constraints, so that XML queries are evaluated over this layer to find the most related entities and traverse only their related nodes in the XML document. In addition, we propose mapping criteria and an algorithm between XQuery and Cypher, the query language for the Neo4j graph database.

Abdullah Alrefae, Jinli Cao, Eric Pardede
Development of Interactive Tools for Intelligent Engineering Education System

Increasing students’ motivation for learning is related to the effective management of the student’s development process, which requires an educator to have the skills of conducting an active dialogue, organizing communication, and jointly searching for solutions. The main methodological innovations in this direction are connected with the use of interactive learning methods, which help an educator and a student to interact.

Alexander Afanasyev, Nikolay Voit
Performance Evaluation of MQTT Broker Servers

The Internet of Things (IoT) is a rapidly growing research field with enormous potential to enrich our lives for a smarter and better world. Significant improvements in telemetry technology make it possible to quickly connect things (i.e. different smart devices) situated at different geographical locations. Telemetry technology helps to monitor and measure devices from remote locations, making them even more useful and productive at a low cost of management. MQTT (MQ Telemetry Transport) is a lightweight messaging protocol that meets today’s smarter communication needs. The protocol is used for machine-to-machine communication and plays a pivotal role in IoT. When network bandwidth is low or latency is high, and for devices with limited processing capabilities and memory, MQTT can distribute telemetry information using a publish/subscribe communication pattern. It enables IoT devices to publish information on a topic to a server (the MQTT broker), which then sends the information out to those clients that have previously subscribed to that topic. This paper puts several publicly available brokers and locally deployed brokers through an experiment and compares their performance by subscription throughput, i.e., how long a broker takes to push a data packet to the client (the subscriber), or how long a data packet takes to reach the client from the broker. MQTT brokers based on the latest MQTT v3.1.1 version were evaluated. The paper also includes mqtt-stresser and mqtt-bench stress test results for both locally and publicly deployed brokers.

Biswajeeban Mishra
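
A minimal sketch of the kind of latency measurement described above, using the paho-mqtt client (1.x API): publish a timestamped payload and time its delivery to a subscriber. The broker host and topic are placeholders.

```python
import time
import paho.mqtt.client as mqtt

TOPIC = "bench/latency"

def on_message(client, userdata, msg):
    # Payload carries the send timestamp; delivery time is the difference.
    sent = float(msg.payload.decode())
    print(f"publish-to-delivery latency: {time.time() - sent:.4f} s")

client = mqtt.Client()
client.on_message = on_message
client.connect("test.mosquitto.org", 1883)   # placeholder public test broker
client.subscribe(TOPIC, qos=1)
client.loop_start()

for _ in range(5):
    client.publish(TOPIC, payload=str(time.time()), qos=1)
    time.sleep(1)

client.loop_stop()
client.disconnect()
```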
Water Treatment Monitoring System at San Jose de Chaltura, Imbabura - Ecuador

Water reuse is a necessary process because water is excessively used and is becoming increasingly scarce. After being used by industry and households, and once treated, water can be applied to recharge aquifers, discharged into receiving bodies without contaminating them, or reused, mainly in green areas and/or agriculture. Its reuse becomes possible after a treatment that aims to eliminate as many pollutants as possible that may otherwise be harmful. This research describes the current situation of a wastewater treatment plant at the San José de Chaltura parish of the Antonio Ante canton, located in the province of Imbabura, Ecuador, and a process formulated for the treatment and decontamination of water through a pervasive monitoring system. Experimentation was carried out by treating the wastewater in triplicate for 15 days, using biological reactors and combinations of aquatic plants. In general, the results show that the best aquatic species for raising the quality of treated wastewater for agricultural use as irrigation water was duckweed, since it managed to decrease the initial values of total coliforms and fecal coliforms.

Marcelo León, Maritza Ruíz, Lídice Haz, Robert Montalvan, Viviana Pinos Medrano, Silvia Medina Anchundia
Non-linear Behavior of the Distribution of Deformities in the Periodontal Ligament by Varying the Size of the Root: Finite Element Analysis

The objective of this study was to simulate, through finite elements, the deformation of the periodontal ligament of an upper right central incisor under the action of a 1 N load applied at two specific positions: the center of the clinical crown (CCC) and the center of the anatomical crown (CAC). For the periodontal ligament, non-linear behavior was used to better recreate the deformations within the area, assuming it to be a material with hyperelastic behavior. Additionally, two crown-to-root ratios were used (1:1 and 1:1.5) to analyze the effect of the induced load on the tooth root and its impact on the deformations of the periodontal ligament. In the cases where the load was applied at the CAC, smaller deformations were obtained than in the cases where the load was applied at the CCC. In the scenarios with 1:1.5 ratios, the deformations were smaller than in the scenarios with 1:1 crown-to-root ratios.

Luis Fernando Vargas Tamayo, Leonardo Emiro Contreras Bravo, Ricardo Augusto Ríos Linares
Formal Modeling of the Key Determinants of Hepatitis C Virus (HCV) Induced Adaptive Immune Response Network: An Integrative Approach to Map the Cellular and Cytokine-Mediated Host Immune Regulations

HCV is a major causative agent of liver infection and the leading cause of hepatocellular carcinoma (HCC). To understand the complexity of interactions within the HCV-induced immune signaling networks, a logic-based diagram is generated based on multiple reported interactions. A simple conceptual framework is presented to explore the key determinants of the immune system and their functions during HCV infection. Furthermore, an abstracted sub-network consisting of both the key cellular and cytokine components of the HCV-induced immune system is modeled qualitatively. In the presence of the NS5A protein of HCV, the behaviors of, and interplay amongst, the natural killer (NK) and T regulatory (Treg) cells, along with cytokines such as IFN-γ, IL-10 and IL-12, are predicted. The overall modelling approach followed in this study comprises a prior-knowledge-based logical interaction network, network abstraction, parameter estimation, and regulatory network construction and analysis through a state graph, enabling the prediction of paths leading to the disease state as well as a homeostatic path/cycle predicted on the basis of maximum betweenness centrality. To study the continuous dynamics of the network, a Petri net (PN) model was generated. The analysis implicates the critical role of IFN-γ-producing NK cells in recovery, and the role of IL-10 and IL-12 in pathogenesis. The predictive ability of the model indicates that IL-12 has a dual role under varying circumstances and leads to varying disease outcomes. This model attempts to reduce noisy biological data and captures a holistic view of the key determinants of the HCV-induced immune response.

Ayesha Obaid, Anam Naz, Shifa Tariq Ashraf, Faryal Mehwish Awan, Aqsa Ikram, Muhammad Tariq Saeed, Abida Raza, Jamil Ahmad, Amjad Ali
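
The homeostatic path/cycle mentioned in the abstract is selected via maximum betweenness centrality. As a minimal sketch of that selection step only, the following code ranks the nodes of a small toy graph by betweenness; the node set and edges are hypothetical placeholders loosely named after the components in the abstract, not the paper's curated network.

```python
import networkx as nx

# Hypothetical toy interaction graph (not the paper's network).
G = nx.DiGraph()
G.add_edges_from([
    ("NS5A", "NK"), ("NK", "IFN-g"), ("IFN-g", "Treg"),
    ("Treg", "IL-10"), ("IL-10", "NK"),
    ("NS5A", "IL-12"), ("IL-12", "IFN-g"),
])

# Rank nodes by betweenness centrality; the study singles out the
# path/cycle through the maximum-betweenness node.
bc = nx.betweenness_centrality(G)
for node, score in sorted(bc.items(), key=lambda kv: -kv[1]):
    print(f"{node:6s} betweenness = {score:.3f}")
```
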
Location Aware Personalized News Recommender System Based on Twitter Popularity

Mobile and handheld devices have become an indispensable part of life in this era of technological advancement. Further, the ubiquity of location-acquisition technologies such as the Global Positioning System (GPS) has opened new avenues for location-aware applications on mobile devices. Reading online news is becoming an increasingly popular way to gather information from news sources around the globe: users can search for and read the news of their preference wherever they want. The news preferences of individuals are influenced by several factors, including geographical context and recent trends on social media. In this work we propose an approach to recommend personalized news to users based on their individual preferences. Models of user preferences are learned implicitly for individual users. In addition, the popularity of articles trending on Twitter is exploited to provide interesting news recommendations. We believe that the interest of the user, the popularity of an article, and other news attributes are implicitly fuzzy in nature, and we therefore propose to exploit this fuzziness when generating the recommendation score for candidate articles. A prototype was developed for testing and evaluating the proposed approach, and the results of the evaluation are encouraging.

Sunita Tiwari, Manjeet Singh Pangtey, Sushil Kumar
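
The abstract treats user interest, article popularity, and other attributes as fuzzy quantities that are combined into a recommendation score. The paper's actual fuzzy inference rules are not given here, so the sketch below uses a plain weighted average of membership degrees as an illustrative stand-in; the weights and argument names are assumptions.

```python
def recommendation_score(interest, popularity, location_match,
                         w_interest=0.5, w_popularity=0.3, w_location=0.2):
    """Combine fuzzy membership degrees (each in [0, 1]) into one
    recommendation score via a weighted average."""
    for x in (interest, popularity, location_match):
        assert 0.0 <= x <= 1.0, "membership degrees must lie in [0, 1]"
    return (w_interest * interest
            + w_popularity * popularity
            + w_location * location_match)

# An article trending strongly on Twitter that only partially matches
# the user's learned interests and current location.
print(recommendation_score(interest=0.6, popularity=0.9, location_match=0.4))
```
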
An Improved Generalized Regression Neural Network for Type II Diabetes Classification

This paper proposes an improved generalized regression neural network (KGRNN) for the diagnosis of type II diabetes. Diabetes, a widespread chronic disease, is a metabolic disorder that develops when the body does not make enough insulin or is unable to use insulin effectively. Type II diabetes is the most common type and accounts for an estimated 90% of cases. The novel KGRNN technique reported in this study uses an enhanced K-Means clustering technique (CVE-K-Means) to produce cluster centers (centroids) that are used to train the network. The technique was applied to the Pima Indian diabetes dataset, a widely used benchmark dataset for diabetes diagnosis. The technique outperforms the best-known GRNN techniques for type II diabetes diagnosis in terms of classification accuracy and computational time, obtaining a classification accuracy of 86% with 83% sensitivity and 87% specificity. An area under the receiver operating characteristic curve (AUC) of 87% was obtained.

Moeketsi Ndaba, Anban W. Pillay, Absalom E. Ezugwu
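
The core idea of KGRNN, training a GRNN on cluster centroids rather than on all samples, can be sketched compactly. In the sketch below, plain scikit-learn K-Means stands in for the paper's CVE-K-Means variant, the data are synthetic stand-ins for the Pima features, and the kernel width is arbitrary; only the overall structure (centroids as the GRNN pattern layer) follows the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """Standard GRNN (Nadaraya-Watson) output: a Gaussian-kernel
    weighted average of the training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))               # synthetic "Pima-like" features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # synthetic binary labels

# Cluster, then use the 20 centroids (not all 200 samples) as the
# GRNN pattern layer, labelling each centroid by its cluster's mean.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)
centroid_y = np.array([y[km.labels_ == k].mean() for k in range(20)])

probs = grnn_predict(km.cluster_centers_, centroid_y, X[:5])
print((probs > 0.5).astype(int))  # predicted classes for 5 samples
```
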
Global Software Development: Key Performance Measures of Team in a SCRUM Based Agile Environment

This paper studies the key performance indicators of team members working in an Agile project environment within a Global Software Development setup. Practitioners from nine different projects were chosen to respond to a survey measuring escaped defects, team members' velocity, deliverables, and effort-based performance indicators. These indicators are vital to any Agile project in a Global Software Development setup. The observed performance indicators were compared against gold-standard industry benchmarks to enable academics and practitioners to make the course corrections necessary to stay within best-case scenarios.

Chamundeswari Arumugam, Srinivasan Vaidayanthan, Harini Karuppuchamy
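
Since the abstract centres on concrete indicators (escaped defects, velocity, effort), a small worked example may help fix the arithmetic. The sprint records, field names, and derived rates below are hypothetical illustrations, not the paper's survey data or its benchmark values.

```python
# Hypothetical sprint records for one distributed Scrum team.
sprints = [
    {"completed_points": 36, "escaped_defects": 2},
    {"completed_points": 41, "escaped_defects": 1},
    {"completed_points": 30, "escaped_defects": 4},
]

velocity = sum(s["completed_points"] for s in sprints) / len(sprints)
escaped_per_100 = (100 * sum(s["escaped_defects"] for s in sprints)
                   / sum(s["completed_points"] for s in sprints))

print(f"mean velocity: {velocity:.1f} story points/sprint")
print(f"escaped defects per 100 story points: {escaped_per_100:.2f}")
```
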
Cloud Applications Management – Issues and Developments

Cloud computing is a platform that dictates the mode of operations within most data centers. Cloud computing relieves its consumers from investing in IT infrastructure: cloud consumers are provided with on-demand services at affordable cost. Cloud service providers offer custom-made applications that can be used by a variety of users to handle routine tasks, as well as programming interfaces that enable developers to design and deploy applications efficiently. In addition, it is very important for cloud service providers to regulate computational and storage resources based on application workloads, and to adjust those resources in response to workload changes or failures in the system. Despite the benefits of cloud computing, it is difficult for cloud users to port applications from one platform to another; this difficulty is largely unavoidable because of the cost and complexity of porting such applications. This paper discusses key concepts of cloud application management and its issues and developments, and also reviews recent related literature on cloud application management. It examines present trends in the area and provides a guide for future research. The main objective is to answer the following question: what are the current developments and trends in cloud application management? Papers published in journals and conference proceedings, as well as white papers, were analyzed. The results of this review show that there is insufficient discussion of trust management and security as they relate to cloud application management. This review should be beneficial to prospective cloud users, researchers, and cloud service providers alike.

I. Odun-Ayo, B. Odede, R. Ahuja
IoT-Enabled Alcohol Detection System for Road Transportation Safety in Smart City

In this paper, an alcohol detection system is developed for road transportation safety in the smart city using Internet of Things (IoT) technology. Two blood alcohol content (BAC) thresholds are set and monitored with a microcontroller. When the first threshold is reached, the developed system transmits the BAC level of the driver and the position coordinates of the vehicle to the central monitoring unit. When the second BAC threshold is reached, the IoT-enabled alcohol detection system shuts down the vehicle's engine, triggers an alarm, and turns on the warning light indicator. A prototype of this scenario was designed and implemented in which a direct current (DC) motor acted as the vehicle's engine and a push button served as its ignition system. The system was tested to ensure proper functionality. The deployment of this system will help reduce the incidence of drunk-driving-related road accidents in smart cities.

Stanley Uzairue, Joshua Ighalo, Victor O. Matthews, Frances Nwukor, Segun I. Popoola
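
The two-threshold behaviour described in the abstract maps naturally onto a small dispatch routine. The sketch below illustrates that control flow only; the BAC threshold values and all hardware interfaces (notify, engine_off, alarm, light_on) are hypothetical stand-ins, not the paper's firmware.

```python
# Illustrative two-threshold control logic; values are assumptions.
BAC_WARN = 0.05      # first threshold: report BAC + GPS position
BAC_SHUTDOWN = 0.08  # second threshold: stop engine, alarm, light

def handle_bac_reading(bac, position, notify, engine_off, alarm, light_on):
    """Dispatch the actions the abstract describes for each threshold."""
    if bac >= BAC_SHUTDOWN:
        engine_off()
        alarm()
        light_on()
        notify(bac, position)
    elif bac >= BAC_WARN:
        notify(bac, position)

# Minimal demo with print-based stand-ins for the hardware actions.
handle_bac_reading(
    bac=0.09, position=(6.67, 3.15),
    notify=lambda b, p: print(f"report BAC={b} at {p}"),
    engine_off=lambda: print("engine shut down"),
    alarm=lambda: print("alarm triggered"),
    light_on=lambda: print("warning light on"),
)
```
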

Workshop Challenges, Trends and Innovations in VGI (VGI 2018)

Frontmatter
Volunteered Geographic Information, Open Data, and Citizen Participation: A Review for Post-seismic Events Reconstruction in Mexico

This work in progress presents the initial idea of supporting the creation of an online platform that includes, and is updated with, citizen- and volunteer-generated data together with verified data from official sources. Its goal is to help society and government deal with emergencies such as the one that occurred in Mexico City on September 19, 2017, when a magnitude 7.1 earthquake hit the city, causing deaths, injuries, and damage throughout the city's infrastructure. This proposal draws inspiration from previous crowdmapping exercises and from the fact that volunteer citizens acted as first responders on the ground: they collected and published in-situ data to social networks and online repositories, verified these crowdsourced data, and put together online portals to broadcast this information so that other citizens could make the best decisions as quickly as possible and direct rescue efforts, people, resources, food, and so on. Unfortunately, most of these citizen efforts quickly faded away and are no longer operational. In the long run, a more robust platform that brings together different volunteer citizen efforts and governmental points of view should prove useful and effective for disaster emergency management in the city and the country.

Rodrigo Tapia-McClung

Workshop Virtual Reality and Applications (VRA 2018)

Frontmatter
Object Detection with Deep Learning for a Virtual Reality Based Training Simulator

Virtual Reality (VR) provides an immersive user experience, which makes it a cost-effective solution for various training purposes. However, a major shortcoming of VR systems is their limited interaction with the environment. Typically, when users wear a head-mounted display, their vision is limited to the virtual world and their external vision is blocked; they cannot see useful objects in their environment such as controllers, buttons, or even their own hands. In this paper, we describe the design of a training system for the aerospace industry in which real and virtual images are blended, creating an augmented virtuality. The real-world images are obtained from a camera mounted on the head-mounted display. Predefined objects, such as game controllers and the user's hands, are detected via deep learning algorithms and blended into the virtual reality images, providing a more comfortable and immersive user experience. Furthermore, the camera and object detection algorithms are employed to interact with the VR headset, making it a more convenient tool for training simulators.

M. Fikret Ercan, Qiankun Liu, Yasushi Amari, Takashi Miyazaki
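
The blending step described in the abstract (copying detected real objects into the rendered VR frame) can be sketched independently of any particular detector. In the sketch below, the bounding boxes are assumed to come from a deep learning detector running on the head-mounted camera stream; the frames and box coordinates are toy data, not the authors' pipeline.

```python
import numpy as np

def blend_detections(vr_frame, camera_frame, boxes):
    """Copy detected real-world regions (e.g., controllers, hands)
    from the camera image into the rendered VR frame."""
    out = vr_frame.copy()
    for x, y, w, h in boxes:
        out[y:y + h, x:x + w] = camera_frame[y:y + h, x:x + w]
    return out

# Toy frames; in the described system, 'boxes' would be produced by
# the object detector for controllers and hands.
vr = np.zeros((480, 640, 3), dtype=np.uint8)        # rendered VR image
cam = np.full((480, 640, 3), 255, dtype=np.uint8)   # camera image
composite = blend_detections(vr, cam, boxes=[(100, 200, 80, 60)])
print(composite[230, 140], composite[0, 0])  # blended vs untouched pixel
```
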
Evaluating an Accelerometer-Based System for Spine Shape Monitoring

In Western societies, a large percentage of the population suffers from some kind of back pain at least once in their life. Several approaches address back pain through postural modifications. Postural training and activity can be tracked by various wearable devices, most of which are based on accelerometers. We present research on the accuracy of accelerometer-based posture measurements. To this end, we took simultaneous recordings using an optical motion capture system and a system consisting of five accelerometers in three different settings: on a test robot, in a template, and on actual human backs. We compare the accelerometer-based spine curve reconstruction against the motion capture data. Results show that tilt values from the accelerometers are captured with high accuracy and that the spine curve reconstruction works well.

Katharina Stollenwerk, Johannes Müllers, Jonas Müller, André Hinkenjann, Björn Krüger
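
The tilt values the evaluation focuses on are the standard inclination estimates derived from the gravity vector. As a minimal sketch of that computation (not the authors' reconstruction pipeline), the following converts one static accelerometer reading into pitch and roll:

```python
import math

def tilt_angles(ax, ay, az):
    """Pitch and roll (in degrees) of a static accelerometer,
    estimated from the direction of the gravity vector."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Example reading in g units: sensor tilted slightly forward.
print(tilt_angles(0.26, 0.0, 0.97))
```

With five such sensors placed along the spine, per-segment tilts can be chained into a curve estimate, which is the kind of reconstruction the paper compares against the optical motion capture data.
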
An Approach to Developing Learning Objects with Augmented Reality Content

Augmented reality (AR) has become widely available to the general public, and diverse real-life AR applications, ranging from entertainment to learning, have been created. In this context, this paper describes a systematic approach to creating learning objects with AR content. The approach comprises seven steps to guide the developer: (i) requirements; (ii) design; (iii) implementation; (iv) evaluation; (v) packaging; (vi) distribution; and (vii) learning evaluation. To evaluate the proposed approach, a case study was carried out in which learning objects with AR content were developed and evaluated in an elementary school. We also conducted a usability test with specialists and an experiment with 40 students on the usage of a learning object with AR content. The delivered lecture was compared with the use of learning objects with multimedia content (the traditional type). Pre- and post-test evaluations were conducted to record the students' learning; these indicated that the proposed learning objects are more effective than the traditional type and can play a significant role in improving students' grades. As a result, we claim that the proposed approach efficiently guides the development of learning objects with AR content: (i) it can guide the developer in creating learning objects with AR content, and (ii) it can integrate learning objects into learning object repositories.

Marcelo de Paiva Guimarães, Bruno Carvalho Alves, Rafael Serapilha Durelli, Rita de F. R. Guimarães, Diego Colombo Dias
eStreet: Virtual Reality and Wearable Devices Applied to Rehabilitation

The use of virtual reality has grown in recent years due to the popularization of immersive and interactive devices and applications; in areas related to medical rehabilitation, this has been no different. Several lines of research have emerged in recent years that integrate virtual reality as an aid to the rehabilitation of patients with diverse pathologies, for example, people who have suffered a stroke. The purpose of this research was to create a low-cost wearable device and provide an immersive, interactive virtual environment for performing activities of daily living, aimed at medical rehabilitation. Virtual reality was used to support activities such as functional mobility (through stationary walking) and the maintenance of personal objects. The wearable device was developed using an Arduino UNO, sonars, an accelerometer, and a gyroscope. We also created a virtual reality environment that provides functionalities for user movement, such as crossing a street as a pedestrian and moving around a virtual city. The solution developed is expected to assist health professionals in the rehabilitation process.

Diego R. C. Dias, Iago C. Alvarenga, Marcelo P. Guimarães, Luis C. Trevelin, Gabriela Castellano, Alexandre F. Brandão
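
The stationary-walking interaction described in the abstract implies some form of step detection on the wearable's accelerometer signal. The sketch below shows one naive way to do this, counting threshold crossings of the acceleration magnitude; the threshold, refractory period, and synthetic signal are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def count_steps(accel_magnitude, threshold=1.15, refractory=10):
    """Naive step counter for stationary walking: count upward
    crossings of a magnitude threshold (in g), enforcing a minimum
    sample gap between consecutive steps."""
    steps, last = 0, -refractory
    for i in range(1, len(accel_magnitude)):
        if (accel_magnitude[i - 1] < threshold <= accel_magnitude[i]
                and i - last >= refractory):
            steps += 1
            last = i
    return steps

# Synthetic 50 Hz signal: 1 g baseline with ~1.5 Hz peaks mimicking steps.
t = np.arange(0, 10, 0.02)
signal = 1.0 + 0.3 * np.clip(np.sin(2 * np.pi * 1.5 * t), 0, None)
print(count_steps(signal))  # roughly 15 steps over 10 seconds
```
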
An RGB-Based Gesture Framework for Virtual Reality Environments

Virtual reality is growing as a new interface between humans and machines, with new technologies improving the development of virtual reality applications; the user's experience is extremely important for the advancement of the field. In order to define a new approach based on established and easily applied detection and tracking techniques, an interaction framework was developed. The framework is able to understand basic commands through gestures performed by the user, making use of a simple RGB camera. It can be used in a simple virtual reality application, allowing the user to interact with the virtual environment through a natural user interface; the focus is on presenting a way to interact with users who have no deep knowledge of computing, by providing an easy-to-use interface. The results are promising, and the possibilities for its use are growing.

João P. M. Ferreira, Diego R. C. Dias, Marcelo P. Guimarães, Marcos A. M. Laia
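
As a minimal sketch of the kind of established, easily applied RGB-only pipeline the abstract alludes to, the snippet below segments a hand-like region by color and maps its horizontal position to a basic command. The HSV range, the left/right command mapping, and the function name are illustrative assumptions (OpenCV 4.x API), not the paper's framework.

```python
import cv2
import numpy as np

# Illustrative HSV skin-color range; real systems calibrate this.
LOWER_SKIN = np.array([0, 40, 60], dtype=np.uint8)
UPPER_SKIN = np.array([25, 255, 255], dtype=np.uint8)

def detect_command(frame_bgr):
    """Segment the largest skin-colored blob and map its horizontal
    position to a toy 'left'/'right' command."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "none"
    hand = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(hand)
    cx = x + w // 2
    return "left" if cx < frame_bgr.shape[1] // 2 else "right"
```
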
Sharing Learning Objects Between Learning Platforms and Repositories

The wide adoption of e-learning in several contexts requires an increasing integration of learning material from different platforms, made available to users in a secure and easy-to-use way. In the present paper, the re-engineering of the Moodledata module functionalities of the GLOREP federation, designed to migrate information units between several content platforms, is discussed.

Sergio Tasso, Simonetta Pallottelli, Osvaldo Gervasi, Marina Rui, Antonio Laganà
Backmatter
Metadata
Title
Computational Science and Its Applications – ICCSA 2018
Edited by
Prof. Dr. Osvaldo Gervasi
Beniamino Murgante
Sanjay Misra
Elena Stankova
Prof. Dr. Carmelo M. Torre
Ana Maria A.C. Rocha
Prof. David Taniar
Bernady O. Apduhan
Prof. Eufemia Tarantino
Prof. Yeonseung Ryu
Copyright Year
2018
Electronic ISBN
978-3-319-95171-3
Print ISBN
978-3-319-95170-6
DOI
https://doi.org/10.1007/978-3-319-95171-3
