
2017 | Book

Computational Science and Its Applications – ICCSA 2017

17th International Conference, Trieste, Italy, July 3-6, 2017, Proceedings, Part I

Edited by: Osvaldo Gervasi, Beniamino Murgante, Sanjay Misra, Giuseppe Borruso, Carmelo M. Torre, Ana Maria A.C. Rocha, David Taniar, Bernady O. Apduhan, Elena Stankova, Alfredo Cuzzocrea

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

The six-volume set LNCS 10404-10409 constitutes the refereed proceedings of the 17th International Conference on Computational Science and Its Applications, ICCSA 2017, held in Trieste, Italy, in July 2017.

The 313 full papers and 12 short papers included in the 6-volume proceedings set were carefully reviewed and selected from 1052 submissions. Apart from the general tracks, ICCSA 2017 included 43 international workshops in various areas of computational sciences, ranging from computational science technologies to specific areas of computational sciences, such as computer graphics and virtual reality. Furthermore, this year ICCSA 2017 hosted the XIV International Workshop On Quantum Reactive Scattering. The program also featured 3 keynote speeches and 4 tutorials.

Table of Contents

Frontmatter

General Tracks

Frontmatter
An Analysis of Reordering Algorithms to Reduce the Computational Cost of the Jacobi-Preconditioned CG Solver Using High-Precision Arithmetic

Several heuristics for bandwidth and profile reduction have been proposed since the 1960s. Systematic reviews have identified 133 heuristics applied to these problems. Their reported results were analyzed, and 13 heuristics were selected on the criterion that no simulation or comparison in the publications reviewed showed them to be outperformed by any other algorithm, in terms of bandwidth or profile reduction and also considering computational cost. These 13 heuristics were therefore selected as the most promising low-cost methods for these problems. Based on this experience, this article reports that in certain cases no heuristic for bandwidth or profile reduction can reduce the computational cost of the Jacobi-preconditioned Conjugate Gradient Method when using high-precision numerical computations.

Sanderson L. Gonzaga de Oliveira, Guilherme Oliveira Chagas, Júnior Assis Barreto Bernardes
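
As a minimal sketch of the solver the abstract refers to, the Jacobi-preconditioned CG iteration can be written as below in NumPy; this is a generic textbook version, not the authors' high-precision implementation, and the symmetric positive definite matrix A and right-hand side b are assumed inputs.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate Gradient with Jacobi (diagonal) preconditioning.

    A is assumed symmetric positive definite; the preconditioner is
    M = diag(A), so applying M^{-1} is an elementwise division.
    """
    x = np.zeros_like(b)
    d = A.diagonal()            # Jacobi preconditioner
    r = b - A @ x               # initial residual
    z = r / d                   # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / d
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```
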
An Ensemble Similarity Model for Short Text Retrieval

The rapid growth of the World Wide Web has extended Information Retrieval technology, making queries for information needs more easily accessible. One such platform is online question answering (QA). Online communities can post questions and get direct responses to their specific information needs through various platforms, creating large unorganized repositories of valuable knowledge. Effective QA retrieval is required to make these repositories accessible and fulfill users' information requests quickly, since they may already contain questions and answers similar to a user's newly asked question. This paper explores similarity-based models for the QA system to rank search result candidates. We used the Damerau-Levenshtein distance and a cosine similarity model to obtain ranking scores between the question posted by a registered user and similar candidate questions in the repository. Empirical results indicate that our proposed ensemble models are very encouraging and give a significantly better similarity value, improving search ranking results.

Arifah Che Alhadi, Aziz Deraman, Masita@Masila Abdul Jalil, Wan Nural Jawahir Wan Yussof, Akashah Amin Mohamed
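
One possible reading of the ensemble in code: Damerau-Levenshtein distance (in its common optimal-string-alignment form) normalized to a similarity and averaged with bag-of-words cosine similarity. The abstract does not specify the combination scheme, so the equal weighting below is an assumption.

```python
from collections import Counter
import math

def damerau_levenshtein(a, b):
    # Edit distance with adjacent transpositions (optimal string
    # alignment variant), computed by dynamic programming.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def cosine_similarity(a, b):
    # Bag-of-words cosine similarity between two short texts.
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def ensemble_score(query, candidate):
    # Hypothetical combination: edit distance normalized to [0, 1]
    # and averaged with cosine similarity (equal weights assumed).
    dl = damerau_levenshtein(query, candidate)
    dl_sim = 1.0 - dl / max(len(query), len(candidate), 1)
    return 0.5 * dl_sim + 0.5 * cosine_similarity(query, candidate)
```
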
Automatic Clustering and Prediction of Female Breast Contours

The horizontal shape of the breast is key to the shape categorization of female subjects. In this paper, Elliptic Fourier Analysis (EFA) and two machine learning approaches (K-Means++ and Support Vector Machine) were used for the clustering and prediction of female breast shapes. Female subjects were scanned with an RGB-Depth camera (Microsoft Kinect). The breast contours and under-breast contours were extracted via an anthropometric algorithm without manual intervention. The Pearson Correlation Coefficient (PCC) was used to screen the breast candidate(s) for the subsequent shape clustering. Principal Component Analysis (PCA) was performed on the Elliptic Fourier Descriptors (EFDs) extracted during EFA, followed by K-Means++ and SVM. K-Means++ was employed to determine the number of clusters while offering a credible labeled dataset for the subsequent Support Vector Machine (SVM). Finally, a prediction model was built with the SVM. The primary motivation for this research is to offer a quick reference tool for designers of female bras. The proposed model was validated by reaching an accuracy of 90.5% for breast horizontal shape identification.

Haoyang Xie, Duan Li, Zhicai Yu, Yueqi Zhong, Tayyab Naveed
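
The clustering-then-prediction pipeline described above might be sketched with scikit-learn as follows; the EFD feature matrix, the number of PCA components, and the cluster count (which the paper determines with K-Means++) are all placeholders, not values from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Placeholder for Elliptic Fourier Descriptors extracted from the
# breast contours, shape (n_subjects, n_descriptors); random here.
efds = np.random.rand(200, 40)

features = PCA(n_components=5).fit_transform(efds)  # PCA on the EFDs

# K-Means++ provides the labeled dataset; the cluster count 4 is an
# illustrative choice, not the number reported by the paper.
labels = KMeans(n_clusters=4, init="k-means++", n_init=10).fit_predict(features)

svm = SVC(kernel="rbf").fit(features, labels)       # prediction model
predicted_class = svm.predict(features[:1])
```
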
Parallel Ray Tracing for Underwater Acoustic Predictions

Different applications of underwater acoustics frequently rely on the calculation of transmission loss (TL), which is obtained from predictions of acoustic pressure provided by an underwater acoustic model. Such predictions are computationally intensive when dealing with three-dimensional environments. Parallel processing can be used to mitigate the computational burden and improve the performance of the calculations by splitting the workload into several tasks, which can be allocated to multiple processors to run concurrently. This paper addresses an Open MPI based parallel implementation of a three-dimensional ray tracing model for predictions of acoustic pressure. Data from a tank scale experiment, providing waveguide parameters and TL measurements, are used to test the accuracy of the ray model and the performance of the proposed parallel implementation. The corresponding speedup and efficiency are also discussed. In order to provide a complete reference, runtimes and TL predictions from two additional underwater acoustic models are also considered.

Rogério M. Calazan, Orlando C. Rodríguez, Nadia Nedjah
Influences of Flow Parameters on Pressure Drop in a Patient Specific Right Coronary Artery with Two Stenoses

Blood pressure loss along the coronary arterial length and the local magnitude of the spatial wall pressure gradient (WPG) are important factors for atherosclerosis initiation and intimal hyperplasia development. The pressure drop coefficient (CDP) is defined as the ratio of the mean trans-stenotic pressure drop to the proximal dynamic pressure. It is a unique non-dimensional flow resistance parameter useful in clinical practice for evaluating the hemodynamic impact of coronary stenosis. It is expected that patients with the same stenosis severity may be at different risk levels due to their blood pressure conditions. The aim of this study is to numerically examine the dependence of CDP and WPG on flow rate and blood viscosity using a patient-specific atherosclerotic right coronary artery model with two stenoses. Our simulation results indicate that the coronary model with a lower flow rate yields a greater CDP across a stenosis, while the model with a higher flow rate yields a greater pressure drop and a greater WPG. Increased blood viscosity results in a greater CDP. Quantitatively, the CDP for each stenosis appears to be a linear function of blood viscosity and a decreasing quadratic function of flow rate. Simulations with varying size and location of the distal stenosis show that the influence of the distal stenosis on the CDP across the proximal stenosis is insignificant. In a right coronary artery segment with two moderate stenoses of the same size, the distal stenosis causes a larger drop in CDP than the proximal stenosis does. A distal stenosis located in a further downstream position causes a larger drop in the CDP.

Biyue Liu, Jie Zheng, Richard Bach, Dalin Tang
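
From the verbal definition in the abstract, the pressure drop coefficient can be written as $$\mathrm{CDP} = \overline{\Delta p} / \left(\tfrac{1}{2}\,\rho\,\bar{u}_{\mathrm{proximal}}^{2}\right)$$, where $$\overline{\Delta p}$$ is the mean trans-stenotic pressure drop and the denominator is the proximal dynamic pressure; the symbols $$\rho$$ (blood density) and $$\bar{u}_{\mathrm{proximal}}$$ (mean proximal velocity) are the standard choices, not taken from the abstract itself.
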
Adaptive Sine Cosine Algorithm Integrated with Differential Evolution for Structural Damage Detection

The sine cosine algorithm is a promising recently proposed meta-heuristic. In this work, the algorithm is extended to be self-adaptive, and its main reproduction operators are integrated with the mutation operator of differential evolution. The new algorithm, called the adaptive sine cosine algorithm integrated with differential evolution (ASCA-DE), is used to tackle test problems for structural damage detection. The results reveal that the new algorithm outperforms a number of established meta-heuristics.

Sujin Bureerat, Nantiwat Pholdee
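
For reference, the main reproduction operators of the original sine cosine algorithm move each candidate $$X_i$$ toward a destination point $$P_i$$ as $$X_i^{t+1} = X_i^{t} + r_1 \sin(r_2)\,\lvert r_3 P_i^{t} - X_i^{t}\rvert$$ or $$X_i^{t+1} = X_i^{t} + r_1 \cos(r_2)\,\lvert r_3 P_i^{t} - X_i^{t}\rvert$$, switching between the two branches with a random number $$r_4$$. This is the standard formulation of the base algorithm; the paper's self-adaptive extension and DE mutation are layered on top of it.
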
Resource Production in StarCraft Based on Local Search and Planning

This paper describes an approach to resource production in real-time strategy (RTS) games. RTS games are a research area that presents interesting challenges in the planning of concurrent actions, the satisfaction of preconditions, and resource management. Rather than working with fixed goals for resource production, we aim at achieving goals that maximize real-time resource production. The approach uses the Simulated Annealing (SA) algorithm as a search tool for deriving resource production goals. The authors have also developed a planning system that works in conjunction with SA to operate properly in a real-time environment. Analysis of performance compared to human and bot players confirms the efficiency of our approach and the claims we have made.

Thiago F. Naves, Carlos R. Lopes
Inductive Synthesis of the Models of Biological Systems According to Clinical Trials

This article suggests an approach to the problem of inductive synthesis of models of biological systems from clinical trial data. The suggested approach substantially decreases the computational complexity of this problem. Formalizing biological models as a graph of parameters makes it possible to use the well-developed mathematical apparatus of graph theory, which offers effective methods for model transformation. The suggested approach is currently in use at the Almazov Cardiological Center for automatic medical data processing.

Vasily Osipov, Mikhail Lushnov, Elena Stankova, Alexander Vodyaho, Nataly Zukova
Solving Sparse Differential Riccati Equations on Hybrid CPU-GPU Platforms

The numerical treatment of the linear-quadratic optimal control problem requires the solution of Riccati equations. In particular, the differential Riccati equation (DRE) is a key operation for the computation of the optimal control in the finite-time horizon case. In this work, we focus on large-scale problems governed by partial differential equations (PDEs) where, in order to apply a feedback control strategy, it is necessary to solve a large-scale DRE resulting from a spatial semi-discretization. To tackle this problem, we introduce an efficient implementation of the implicit Euler method and the linearly implicit Euler method on hybrid CPU-GPU platforms for solving differential Riccati equations arising in finite-time horizon linear-quadratic control problems. Numerical experiments validate our approach.

Peter Benner, Ernesto Dufrechou, Pablo Ezzatti, Hermann Mena, Enrique S. Quintana-Ortí, Alfredo Remón
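
In one common form (a standard statement, not quoted from the paper), the DRE to be integrated over the finite horizon reads $$\dot{X}(t) = A^{T}X(t) + X(t)A - X(t)BB^{T}X(t) + Q, \; X(T) = X_T$$, which the implicit and linearly implicit Euler methods discretize in time.
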
A Hybrid CPU-GPU Scatter Search for Large-Sized Generalized Assignment Problems

In the Generalized Assignment Problem, tasks must be allocated to machines with limited resources in order to minimize processing costs. This problem has several industrial applications and often appears as a substructure of other combinatorial optimization problems. By harnessing the massive computational power of Graphics Processing Units in a Scatter Search metaheuristic framework, we propose a method that efficiently generates a solution pool using a Tabu list criterion and an Ejection Chain mechanism. Common characteristics are extracted from the pool, and solutions are combined by exploring a restricted search space as a Binary Programming model. Classic instances vary from 100–1600 jobs and 5–80 agents, but due to the large number of optimal and near-optimal solutions found by our method, we propose novel large-sized instances of up to 9000 jobs and 600 agents. Results indicate that the method is competitive with state-of-the-art algorithms in the literature.

Danilo S. Souza, Haroldo G. Santos, Igor M. Coelho, Janniele A. S. Araujo
Vector Field Second Order Derivative Approximation and Geometrical Characteristics

A vector field is usually approximated linearly for the purposes of classification and description. This approximation gives only basic information about the vector field. We show how to approximate a vector field with second order derivatives, i.e. with the Jacobian and Hessian matrices. This approximation gives a much more detailed description of the vector field. Moreover, we show the similarity of this approximation to the conic section formula.

Michal Smolik, Vaclav Skala
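
The second-order approximation presumably takes the usual Taylor form, per field component $$v_k$$: $$v_k(\mathbf{x}) \approx v_k(\mathbf{x}_0) + \nabla v_k(\mathbf{x}_0)^{T}(\mathbf{x}-\mathbf{x}_0) + \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_0)^{T} H_k(\mathbf{x}_0)(\mathbf{x}-\mathbf{x}_0)$$, where the component gradients stack into the Jacobian matrix and $$H_k$$ is the Hessian of the $$k$$-th component.
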
Exhaustive Analysis for the Effects of a Feedback Regulation on the Bi-Stability in Cellular Signaling Systems

Cellular signaling systems regulate the biochemical reactions operating in cells for various functions. Their regulatory mechanisms have recently been studied intensively, since malfunction of the regulation is thought to be one of the substantial causes of cancer formation. However, it is rather difficult to develop a theoretical framework for investigating the regulatory mechanisms due to their complexity and nonlinearity. In this study, a more general approach is proposed for elucidating the emergence of bi-stability in cellular signaling systems, by constructing mathematical models for a class of cellular signaling systems and performing exhaustive simulation analysis over variations of network architectures and parameter values. The model system is formulated as a regulatory network in which every node represents an activation-inactivation cyclic reaction for the respective constituent enzyme of the network, and the regulatory interactions between the reactions are depicted by arcs between nodes. The emergence of stable equilibrium points in steady states of the network is analyzed with the Michaelis-Menten reaction scheme as the reaction mechanism in each cyclic reaction. The analysis is performed for all variations of regulatory networks comprised of two, three, and four nodes with a single feedback regulation loop. The ratios and aspects of the emergence of stable equilibrium points are analyzed over exhaustive combinations of the parameter values for each node, with a common Michaelis constant for the regulatory networks. It is revealed that a shorter feedback length is favorable for bi-stability. Furthermore, bi-stability and oscillation are more likely to develop for low values of the Michaelis constant than for high values, implying that higher saturation levels induce stronger nonlinearity. In addition to these results, an analysis of the parameter regions yielding bi-stability and oscillation is presented.

Chinasa Sueyoshi, Takashi Naka
Fault Classification on Transmission Lines Using KNN-DTW

To maintain power quality, it is necessary to know the main disturbances in the electrical power system; this research investigates signal behavior through the classification of short-circuit fault types in transmission lines. The analysis of the UFPAFaults database using the KNN algorithm, with a change in the similarity calculation, allowed the classifier to handle multivariate time series. Moreover, the DTW calculation dispenses with the preprocessing front ends adopted in several papers and presents satisfactory results in the classification of these faults. The comparison of this classifier with a Frame Based Sequence Classification architecture shows the relevance of the direct classification of faults using KNN-DTW.

Bruno G. Costa, Jean Carlos Arouche Freire, Hamilton S. Cavalcante, Marcia Homci, Adriana R. G. Castro, Raimundo Viegas, Bianchi S. Meiguins, Jefferson M. Morais
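
The distance substitution at the heart of KNN-DTW can be sketched as follows; this is a generic dynamic-programming DTW with Euclidean local cost and a plain k-NN vote, not the authors' tuned implementation.

```python
import numpy as np

def dtw_distance(s, t):
    """Dynamic Time Warping distance between two multivariate series.

    s and t are arrays of shape (length, n_features); the local cost
    is the Euclidean distance between frames.
    """
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw_classify(query, train_series, train_labels, k=1):
    # k-NN using DTW instead of Euclidean distance, which is the
    # substitution described for multivariate fault signals.
    dists = [dtw_distance(query, s) for s in train_series]
    nearest = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```
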
Intelligent Twitter Data Analysis Based on Nonnegative Matrix Factorizations

In this paper we face the problem of intelligently analyzing Twitter data. We propose a novel workflow based on Nonnegative Matrix Factorization (NMF) to collect, organize and analyze Twitter data. The proposed workflow first fetches tweets from Twitter (according to some search criteria) and processes them using text mining techniques; it then extracts latent features from the tweets by using NMF, and finally clusters the tweets and extracts human-interpretable topics. We report some preliminary experiments demonstrating the effectiveness of the proposed workflow as a tool for Intelligent Data Analysis (IDA): it is able to extract and visualize interpretable topics from some newly collected Twitter datasets, which are automatically grouped together according to these topics. Furthermore, we numerically investigate the influence of different initialization mechanisms for NMF algorithms on the factorization results when very sparse Twitter data are considered. The numerical comparisons confirm that NMF algorithms can be used as a clustering method in place of the well-known k-means.

Gabriella Casalino, Ciro Castiello, Nicoletta Del Buono, Corrado Mencar
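
A minimal sketch of the factorization step with scikit-learn, assuming a hypothetical mini-corpus in place of fetched tweets; the number of topics and the nndsvd initialization (one of the initialization mechanisms such a comparison could cover) are illustrative choices, not the paper's settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Hypothetical mini-corpus; in the described workflow these would be
# tweets fetched from Twitter according to some search criteria.
tweets = [
    "new phone battery lasts all day",
    "battery life on this phone is great",
    "the match ended with a late goal",
    "great goal in the last minute of the match",
    "stock market closed higher today",
    "markets rally as stocks close higher",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(tweets)        # nonnegative term-document matrix

# X ~ W H: rows of H are latent topics, rows of W are tweet loadings.
nmf = NMF(n_components=3, init="nndsvd")
W = nmf.fit_transform(X)
H = nmf.components_

clusters = W.argmax(axis=1)          # assign each tweet to its dominant topic
terms = vec.get_feature_names_out()
topics = [[terms[i] for i in h.argsort()[-5:]] for h in H]
```
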
Q-matrix Extraction from Real Response Data Using Nonnegative Matrix Factorizations

In this paper we illustrate the use of Nonnegative Matrix Factorization (NMF) to analyze real data derived from an e-learning context. NMF is a matrix decomposition method which extracts latent information from data in such a way that it can be easily interpreted by humans. In particular, the NMF of a score matrix can automatically generate the so-called Q-matrix. In an e-learning scenario, the Q-matrix describes the abilities students must acquire to correctly answer evaluation exams. An example on real response data illustrates the effectiveness of this factorization method as a tool for Educational Data Mining (EDM).

Gabriella Casalino, Ciro Castiello, Nicoletta Del Buono, Flavia Esposito, Corrado Mencar
Building Networks for Image Segmentation Using Particle Competition and Cooperation

Particle competition and cooperation (PCC) is a graph-based semi-supervised learning approach. When PCC is applied to interactive image segmentation tasks, pixels are converted into network nodes, and each node is connected to its k-nearest neighbors, according to the distance between sets of features extracted from the image. Building a proper network to feed PCC is crucial to achieving good segmentation results. However, some features may be more important than others for identifying the segments, depending on the characteristics of the image to be segmented. In this paper, an index to evaluate candidate networks is proposed. Building the network thus becomes a problem of optimizing feature weights based on the proposed index. Computer simulations are performed on real-world images from the Microsoft GrabCut database, and the segmentation results reported in this paper show the effectiveness of the proposed method.

Fabricio Breve
QueueWe: An IoT-Based Solution for Queue Monitoring

The Internet of Things allows people's everyday objects to be connected to the Internet and to each other, giving them the ability to communicate among themselves and with end users. This allows such objects to contribute to tasks such as environment monitoring, health monitoring, natural resources management, and many other activities. Accordingly, this work designs, implements and evaluates a solution for queue monitoring. The solution's first use scenario was the University Restaurant (UR) of the Universidade Federal do Rio Grande do Norte (Federal University of Rio Grande do Norte - UFRN). The goal was to inform the restaurant's users of the best times to go to the UR in order to avoid queues, consequently making better use of their time. To achieve this goal, a prototype of the solution was developed involving sensors, connectivity, a mobile application and an IoT platform.

Gibeon S. Aquino Jr., Cícero A. Silva, Itamir M. B. Filho, Dênis R. S. Pinheiro, Paulo H. Q. Lopes, Cephas A. S. Barreto, Anderson P. N. Silva, Renan O. Silva, Thalyson L. G. Souza, Tyrone M. Damasceno
Inter-building Routing Approach for Indoor Environment

Routing systems have been implemented for both outdoor and indoor routing. Significant differences make indoor routing more complex than outdoor routing: outdoor routing operates in two-dimensional space, while indoor routing must handle the three-dimensional spaces that represent multi-level buildings. This research concerns the prototype development of an inter-building routing system. The construction of this prototype needs to consider both outdoor and indoor routing. Shortest path algorithms can be applied once a three-dimensional spatial data structure has been constructed, in order to inform users of the shortest route between two points in indoor spaces.

Tiara Annisa Dionti, Kiki Maulana Adhinugraha, Sultan Mofareh Alamri
A CFD Study of Wind Loads on High Aspect Ratio Ground-Mounted Solar Panels

Computational fluid dynamics is used to study the wind loads on a high aspect ratio ground-mounted solar panel. Reynolds-averaged Navier-Stokes simulations are performed using a commercial finite volume-based code with two different numerical approaches. First, the entire panel is directly simulated in a three-dimensional domain. Then, a small portion of the panel is considered, by imposing periodic boundary conditions in the spanwise homogeneous direction. The comparison shows a good match between the results obtained with the two different models, in terms of pressure coefficient and aerodynamic loads. The main consequence is a considerable reduction of the computational costs when using the reduced model.

Giovanni Paolo Reina, Giuliano De Stefano
A Comparative Study of a GUI-Aided Formal Specification Construction Approach

A remaining challenge for applying formal methods in practice is how to reduce unnecessary changes during and after writing formal specifications. We previously proposed a GUI-aided approach to constructing formal specifications, but its effectiveness had not been evaluated. This paper describes an experiment we conducted to systematically evaluate the GUI-aided approach by comparing it with the existing refinement approach for constructing a formal specification based on an informal requirements specification. We chose a travel reservation system as the target system for the experiment, and developed half of its functions using the GUI-aided approach and the other half using the existing refinement approach. The paper shows how we analyzed the data collected during the development and presents the findings. The result indicates that the GUI-aided approach is superior to the existing refinement approach due to its focus on adequate requirements acquisition in the early phase of software design.

Fumiko Nagoya, Shaoying Liu
Missing Data Completion Using Diffusion Maps and Laplacian Pyramids

A challenging problem in machine learning is handling missing data, also known as imputation. Simple imputation techniques complete the missing data with the mean or median values. A more sophisticated approach is to use regression to predict the missing data from the complete input columns. In case the dimension of the input data is high, dimensionality reduction methods may be applied to compactly describe the complete input. Then, a regression from the low-dimensional space to the incomplete data column can be constructed for imputation. In this work, we propose a two-step algorithm for data completion. The first step utilizes a non-linear manifold learning technique, named diffusion maps, for reducing the dimension of the data. This method faithfully embeds complex data while preserving its geometric structure. The second step is the Laplacian pyramids multi-scale method, which is applied for regression. Laplacian pyramids construct kernels of decreasing scales to capture finer modes of the data. Experimental results demonstrate the efficiency of our approach on a publicly available dataset.

Neta Rabin, Dalia Fishelov
Classification of Cocaine Dependents from fMRI Data Using Cluster-Based Stratification and Deep Learning

Cocaine dependence continues to devastate millions of human lives. According to the 2013 National Survey on Drug Use and Health, approximately 1.5 million Americans are currently addicted to cocaine. It is important to understand how cocaine addicts and non-addicted individuals differ in the functional organization of the brain. This work advances the identification of cocaine dependence based on fMRI classification and innovates by employing deep learning methods. Deep learning has proved its utility in the machine learning community, mainly in computational vision and voice recognition. Recently, studies have successfully applied it to fMRI data for brain decoding and classification of pathologies, such as schizophrenia and Alzheimer's disease. These fMRI datasets were relatively large, and the use of deep learning on small datasets still remains a challenge. In this study, we fill this gap by (i) using Deep Belief Networks and Deep Neural Networks to classify cocaine dependents from fMRI, and (ii) presenting a novel stratification method for robust training and evaluation on a relatively small dataset. Our results show that deep learning outperforms traditional techniques in most cases and presents great potential for improvement.

Jeferson S. Santos, Ricardo M. Savii, Jaime S. Ide, Chiang-Shan R. Li, Marcos G. Quiles, Márcio P. Basgalupp
Linear Models for High-Complexity Sequences

Different binary sequence generators produce sequences whose period is a power of 2. Although these sequences exhibit good cryptographic properties, in this work it is proved that such sequences can be obtained as output sequences from simple linear structures. More precisely, every one of these sequences is a particular solution of a linear difference equation with binary coefficients. This fact allows one to analyze the structural properties of sequences with such a period from the point of view of linear difference equations. In addition, a new application of Pascal's triangle to cryptographic sequences is introduced. In fact, it is shown that all these binary sequences can be obtained by XORing a finite number of binomial sequences, which correspond to the diagonals of Pascal's triangle reduced modulo 2.

Sara D. Cardell, Amparo Fúster-Sabater
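
The binomial sequences mentioned above are easy to generate and combine; a small sketch follows, where the choice of diagonals is illustrative.

```python
from math import comb

def binomial_sequence(k, length):
    # k-th diagonal of Pascal's triangle reduced modulo 2:
    # b_k(n) = C(n, k) mod 2, with C(n, k) = 0 for n < k.
    return [comb(n, k) % 2 for n in range(length)]

def xor_of_binomials(ks, length):
    # XOR (sum modulo 2) of a finite set of binomial sequences.
    seqs = [binomial_sequence(k, length) for k in ks]
    return [sum(bits) % 2 for bits in zip(*seqs)]

# Example: XORing the diagonals k = 1 and k = 2 yields a binary
# sequence whose period is a power of 2 (here 4): 0 1 1 0 repeated.
print(xor_of_binomials([1, 2], 16))
```
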
PrescStream: A Framework for Streaming Soft Real-Time Predictive and Prescriptive Analytics

The data volumes generated by modern applications bring opportunities for knowledge extraction and value creation. In this sense, the integration of predictive and prescriptive analytics may help industry and users to be more productive and successful: it means not only estimating an outcome but also acting on it in the real world. Nonetheless, mastering these concepts and providing their integration is not an easy task. This work proposes PrescStream, a proof-of-concept framework that uses machine learning based prediction and processes the predicted outcome to perform prescriptive analytics, allowing researchers to integrate predictive and prescriptive analytics into their experiments. It has a scalable, fault-tolerant, microservices-based architecture, making it ideal for cloud deployment and IoT (Internet of Things) applications. The paper describes the general architecture of the system, as well as a validation use case with result analysis.

Marcos de Aguiar, Fabíola Greve, Genaro Costa
A Novel Approach for Supporting Approximate Representation of Linear Prediction Residuals in Pattern Recognition Tools

The goal of this paper is to describe an optimization approach for selecting a reduced number of samples of the linear prediction residual, which can be extremely useful in pattern recognition tools. Sample determination is a combinatorial problem, which our approach addresses with simulated annealing based optimization. We show that our approach obtains better results than those of a standard approximation approach, namely the multi-pulse algorithm. Multi-pulse selects pulse locations with a sequential, sub-optimal algorithm and computes the pulse amplitudes according to an optimization criterion. Our approach finds the optimal residual samples by means of an optimization algorithm without amplitude optimization. The compressed residual is fed to an all-pole model of speech, obtaining better results than standard multi-pulse modeling. We believe that this algorithm could be used as an alternative to other algorithms for medium-rate coding of speech in low-complexity embedded devices. We also discuss performance and complexity issues of the described algorithm.

Alfredo Cuzzocrea, Enzo Mumolo
Genetic Estimation of Iterated Function Systems for Accurate Fractal Modeling in Pattern Recognition Tools

In this paper, we describe an algorithm to estimate the parameters of Iterated Function System (IFS) fractal models. We use IFS to model speech and electroencephalographic signals and compare the results. The IFS parameter estimation is performed by means of a genetic optimization approach. We show that the estimation algorithm has very good convergence to the global minimum, which can be successfully exploited by pattern recognition tools. However, the set-up of the genetic algorithm should be properly tuned. In this paper, besides describing the optimal set-up, we also describe the best tradeoff between performance and computational complexity. To simplify the optimization problem, some constraints are introduced. A comparison with suboptimal algorithms is reported. The performance of IFS modeling of the considered signals is in accordance with known measures of the fractal dimension.

Alfredo Cuzzocrea, Enzo Mumolo, Giorgio Mario Grasso
Evaluation of Frequent Pattern Growth Based Fuzzy Particle Swarm Optimization Approach for Web Document Clustering

The soft and hard clustering efficiency of a novel approach, frequent pattern growth based fuzzy particle swarm optimization for clustering web documents, is studied and analyzed in this paper. The conventional approaches K-Means and Fuzzy c-means (FCM) fail with regard to random initialization and local minima hookups. To overcome these drawbacks, bio-inspired mechanisms like the genetic algorithm, ant colony optimization and particle swarm optimization (PSO) are used to optimize K-means and FCM clustering. The major contributions of the novel method are threefold. First, it automatically finds effective cluster numbers, cluster centroids and swarms for the bio-inspired fuzzy particle swarm optimization. Second, it yields fuzzy overlapping clusters using the FCM objective function, overcoming the drawbacks of existing methods. Third, the methodology discussed in this paper prunes irrelevant elements from the search space and thereby retains all relationships with the search query as semantically conditionally relatable sets. The evaluation results show that our proposed approach performs better on the Adjusted Rand Index (ARI), Normalized Mutual Information (NMI) and Adjusted Concordance Index (ACI) against various distance based similarity measures and FCMPSO.

Raja Varma Pamba, Elizabeth Sherly, Kiran Mohan
A Differential Evolution Algorithm for Computing Caloric-Restricted Diets - Island-Based Model

A rich and balanced diet, combined with physical exercise, is the most common and efficient way to achieve a healthy body. Since the classic Diet Problem proposed by Stigler, several works in the literature have proposed computing a diet that respects the nutritional needs of an individual. This work deals with a variation of the Diet Problem called the Caloric-Restricted Diet Problem (CRDP). The CRDP objective is to find a reduced-calorie diet that also respects the nutritional needs of an individual, thus enabling weight loss in a healthy way. In this paper we propose an island-based Differential Evolution algorithm, a distributed metaheuristic that evolves a set of populations semi-isolated from each other. Computational experiments showed that this island-based structure outperforms its non-distributed implementation, generating a greater variety of diets with small calorie counts.

João Gabriel Rocha Silva, Iago Augusto Carvalho, Leonardo Goliatt, Vinícus da Fonseca Vieira, Carolina Ribeiro Xavier
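
A generic island-model DE sketch (DE/rand/1/bin with ring migration) illustrating the distributed structure; the fitness function, migration interval, and all control parameters below are placeholders, not the paper's CRDP formulation.

```python
import random

def de_islands(fitness, dim, n_islands=4, pop_size=20, gens=200,
               migrate_every=25, F=0.5, CR=0.9, bounds=(0.0, 1.0)):
    """Island-model Differential Evolution (DE/rand/1/bin) sketch.

    Each island evolves its own population; every `migrate_every`
    generations the best individual of each island replaces a random
    individual of the next island (ring topology).
    """
    lo, hi = bounds
    islands = [[[random.uniform(lo, hi) for _ in range(dim)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for g in range(gens):
        for pop in islands:
            for i, x in enumerate(pop):
                a, b, c = random.sample(
                    [p for j, p in enumerate(pop) if j != i], 3)
                j_rand = random.randrange(dim)
                trial = [min(hi, max(lo, a[k] + F * (b[k] - c[k])))
                         if (random.random() < CR or k == j_rand) else x[k]
                         for k in range(dim)]
                if fitness(trial) <= fitness(x):  # greedy selection
                    pop[i] = trial
        if (g + 1) % migrate_every == 0:          # ring migration
            bests = [min(pop, key=fitness) for pop in islands]
            for idx, best in enumerate(bests):
                target = islands[(idx + 1) % n_islands]
                target[random.randrange(pop_size)] = list(best)
    return min((min(pop, key=fitness) for pop in islands), key=fitness)

# Example usage on a toy sphere function (minimization):
best = de_islands(lambda v: sum(t * t for t in v), dim=5)
```
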
An Implementation of Parallel 1-D Real FFT on Intel Xeon Phi Processors

In this paper, we propose an implementation of a parallel one-dimensional real fast Fourier transform (FFT) on Intel Xeon Phi processors. The proposed implementation of the parallel one-dimensional real FFT is based on the conjugate symmetry property for the discrete Fourier transform (DFT) and the six-step FFT algorithm. We vectorized FFT kernels using the Intel Advanced Vector Extensions 512 (AVX-512) instructions, and parallelized the six-step FFT by using OpenMP. Performance results of one-dimensional FFTs on Intel Xeon Phi processors are reported. We successfully achieved a performance of over 91 GFlops on an Intel Xeon Phi 7250 (1.4 GHz, 68 cores) for a $$2^{29}$$-point real FFT.

Daisuke Takahashi
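
The conjugate symmetry property that the real-FFT implementation exploits can be checked directly with NumPy (an illustration of the property, not the paper's AVX-512 kernels):

```python
import numpy as np

# For a real-valued input x of length N, the DFT satisfies the
# conjugate symmetry X[k] = conj(X[N - k]), so only about half the
# spectrum needs to be computed.
x = np.random.rand(8)
X = np.fft.fft(x)
assert np.allclose(X[1:], np.conj(X[1:][::-1]))  # X[k] == conj(X[N-k])

# numpy's rfft returns just the non-redundant half (N//2 + 1 bins).
assert np.allclose(np.fft.rfft(x), X[:5])
```
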
A Comprehensive Survey on Human Activity Prediction

Human activity recognition has been extensively studied and achieves promising results in the Computer Vision community. Typical activity recognition methods require observing the whole activity, then extracting features and building a model to classify the activity. However, in many applications, the ability to recognize or predict a human activity early, before it completes, is necessary. This task is challenging because of the lack of information when only a fraction of the activity has been observed. To get an accurate prediction, methods must have high discriminative power with just the beginning part of an activity. While activity recognition is very popular and has many surveys, activity prediction is still a new and relatively unexplored problem. To the best of our knowledge, there is no survey specifically focusing on human activity prediction. In this survey, we give a systematic review of current methods for activity prediction and how they overcome the above challenge. Moreover, this paper also compares the performance of various techniques on a common dataset to show the current state of research.

Nghia Pham Trong, Hung Nguyen, Kotani Kazunori, Bac Le Hoai
Using Ontology for Personalised Course Recommendation Applications

The web is increasingly becoming the primary data source about universities and courses for students, and with a vast amount of information about thousands of courses on different websites, it is quite a task to find one that matches a student's needs. That is why we propose the "Course Recommendation System", a system that suggests the course best suited to prospective students. As there has been a huge increase in course content on the Internet, finding the course you really need has become time-consuming, so we propose an ontology-based approach to semantic content recommendation. The aim is to enhance the efficiency and effectiveness of providing students with suitable recommendations. The recommender takes into consideration knowledge about the user (the student's profile) and course content, as well as knowledge about the domain being learned. Ontologies are used to both model and represent such forms of knowledge. There are four steps to this: extracting information from multiple sources, applying ontologies using Protégé tools, semantic relevance calculation, and refining the recommendation. A personalized, complete and augmented course is then suggested to the student, based on these steps.

Mohammed Essmat Ibrahim, Yanyan Yang, David Ndzi
Accelerating Docking Simulation Using Multicore and GPU Systems

Virtual screening methodologies have been used to help drug researchers discover new medicines. The main goal of these methodologies is to help in the docking phase, reducing the vast chemical space (usually estimated at $$10^{60}$$ molecules) to a small number that can be more easily processed and tested. The docking phase tests which molecules interact better with a drug target, such as an enzyme or protein receptor. This process is very time consuming, as we need to test all possible combinations. Hybrid parallel architectures comprising multicore processors and multiple GPUs are thus a suitable approach to this problem, as they reduce the execution time while allowing the exploitation of huge libraries of candidate molecules. In this paper, we present a methodology to increase docking performance through the parallelization of the AutoDock tool over multiprocessor and GPU hardware. The results show our multicore implementation achieves a maximum speedup of 8 times, while our GPU implementation reaches a speedup of 35 times and the hybrid implementation provides a maximum speedup of 80 times.

Everton Mendonça, Marcos Barreto, Vinícius Guimarães, Nelci Santos, Samuel Pita, Murilo Boratto
Separation Strategies for Chvátal-Gomory Cuts in Resource Constrained Project Scheduling Problems: A Computational Study

The Resource Constrained Project Scheduling Problem (RCPSP) is a well-known $$\mathcal {N}\mathcal {P}$$-hard combinatorial optimization problem. A solution for the RCPSP consists in allocating jobs by selecting execution modes and respecting precedence constraints and resource usage. One of the main challenges that exact linear-programming-based solution approaches currently face is that compact formulations usually provide weak lower bounds. In this paper we propose the use of general purpose Chvátal-Gomory cuts to strengthen the LP-based bounds. We observed that, by using proper cut separation strategies, the produced bounds can compete with or improve upon those obtained with problem-specific cuts.

Janniele Aparecida Soares Araujo, Haroldo Gambini Santos
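
For reference, Chvátal-Gomory cuts in their general form: given integer variables $$x \ge 0$$ satisfying $$Ax \le b$$, every multiplier vector $$u \ge 0$$ yields the valid inequality $$\lfloor u^{T}A \rfloor\, x \le \lfloor u^{T}b \rfloor$$; the separation strategies studied in the paper decide which multipliers to try.
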
Data Mining Approach to Dual Response Optimization

In manufacturing process optimization, analyzing large volumes of operational data is getting attention due to the development of data processing techniques. One of the important issues in process optimization is the simultaneous optimization of the mean and variance of a response variable, called dual response optimization (DRO). Traditional DRO methods build statistical models for the mean and variance of the response variable by fitting the models to experimental data. An optimal setting of the input variables is then obtained by analyzing the fitted models. This model-based approach assumes that the statistical model fits the data well. However, it is often difficult to satisfy this assumption when dealing with a large volume of operational data from a manufacturing line. In such a case, a data mining approach is an attractive alternative. We propose a particular data mining method by modifying the patient rule induction method for DRO. The proposed method obtains an optimal setting of the input variables, for which mean and variance are optimized, directly from the operational data. We explain the detailed procedure of the proposed method with case examples.

Dong-Hee Lee
Integrated Evaluation and Multi-methodological Approaches for the Enhancement of the Cultural Landscape

The paper presents an integrated assessment process for the identification of scenarios of sustainable valorization of the cultural landscape in a particularly significant area of southern Italy characterized by tangible and intangible resources. The decision-making process uses multi-methodological evaluations in order to support the development of scenarios and alternative policy strategies, aimed at pre-ordering a development system for the territory under study. The methodological pathway is structured to allow interaction among different techniques, which are selected in order to outline a decision support system that is dynamic, flexible, adaptive, sensitive to the specificities of the context, and oriented to the development of intervention strategies based on expert and common knowledge and on recognized and shared values. The selection of 'conscious actions' helps to reduce conflicts, turning them into synergies, recognizing that the essential components of a landscape are multidimensional and complex and that different systems of values and relationships interact within it. The strategies will therefore be feasible or practicable in proportion to how far the projects tend to achieve an idea of "scenario" for the site that is shared by the social and institutional actors of the local system.

Lucia Della Spina
Study of Parameter Sensitivity on Bat Algorithm

Heuristics and metaheuristics are known to be sensitive to input parameters. The Bat Algorithm (BA), a recent optimization metaheuristic, has a great number of input parameters that need to be adjusted in order to increase the quality of the results. Despite the growing number of works with BA in the literature, to the best of our knowledge, there is no work that aims at fine-tuning its parameters. In this work we use benchmark functions and more than 9 million tests with BA in order to find the best set of parameters. Our experiments showed a difference of almost 14000% in objective function value between the best and the worst sets of parameters. Finally, this work shows how to choose input parameters in order to make the Bat Algorithm achieve better results.

Iago Augusto Carvalho, Daniel G. da Rocha, João Gabriel Rocha Silva, Vinícus da Fonseca Vieira, Carolina Ribeiro Xavier
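
For orientation, a sketch of the standard BA update step, whose input parameters (frequency range, loudness decay, pulse rate growth) are the kind of quantities being tuned; the helper names and default values below are illustrative, not the paper's settings.

```python
import math
import random

def bat_step(x, v, best, f_min=0.0, f_max=2.0):
    # Standard Bat Algorithm move for one bat: draw a frequency,
    # update the velocity toward the global best, then move.
    f = f_min + (f_max - f_min) * random.random()
    v = [vi + (xi - bi) * f for vi, xi, bi in zip(v, x, best)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v

def update_loudness_and_rate(A, r0, t, alpha=0.9, gamma=0.9):
    # Loudness decays and pulse emission rate grows over iterations;
    # alpha and gamma are among the parameters subject to tuning.
    return alpha * A, r0 * (1 - math.exp(-gamma * t))
```
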
A Control Mechanism for Live Migration with Data Regulations Preservation

In this paper, we propose a data protection mechanism for the live migration process. The proposed mechanism verifies whether it is permitted to copy the data from the migration source application to the migration destination, based on the contents of the regulations concerning the use of the data issued by the organization and the country at the time of the live migration procedure. The mechanism performs live migration only when copying is permitted. By applying this mechanism, it is possible to use appropriate data while protecting privacy during the live migration process. A detailed explanation and an implementation evaluation of the data protection mechanism are provided. As a result, we observed that the mechanism did not exhibit adverse effects on the live migration process.

Toshihiro Uchibayashi, Yuichi Hashi, Seira Hidano, Shinsaku Kiyomoto, Bernady Apduhan, Toru Abe, Takuo Suganuma, Masahiro Hiji

Workshop on Challenges, Trends and Innovations in VGI (VGI 2017)

Frontmatter
Determining the Potential Role of VGI in Improving Land Administration Systems in Iraq

Land is undoubtedly the most important natural resource and asset in any nation, contributing half to three quarters of the national wealth in most countries; hence there is a need to manage it in a sustainable manner. The management of a land system is affected by a number of factors, including the political situation, the socio-cultural environment, and economic aspects, e.g. financial crises. In Iraq all these problems are present, and furthermore, the country has endured decades of internal and external conflict. The results of such factors include different waves of human population displacement, land ownership document forgery, seizure of public land by unauthorised groups, manipulation of urban planning by changing the use of the land, and subdivision of single parcels into multiple landholdings. By examining volunteered geographic information (VGI), we hypothesise that it has a role in enhancing the Iraqi land administration system. The paper presents a research design, including specific aims and objectives, for such an application of VGI in Iraq, and concludes with a framework for conducting challenging fieldwork in local communities which seek improvements in land administration.

Mustafa Hameed, David Fairbairn, Suzanne Speak

Workshop on Advances in Web Based Learning (AWBL 2017)

Frontmatter
Unsupervised Learning of Question Difficulty Levels Using Assessment Responses

Question Difficulty Level is an important factor in determining assessment outcomes. Accurate mapping of the difficulty levels in question banks offers a wide range of benefits apart from higher assessment quality: improved personalized learning, adaptive testing, automated question generation, and cheating detection. Adopting unsupervised machine learning techniques, we propose an efficient method derived from assessment responses to enhance consistency and accuracy in the assignment of question difficulty levels. We show that effective feature extraction is achieved by partitioning test takers based on their test scores. We validate our model using a large dataset collected from a two-thousand-student university-level proctored assessment. Preliminary results show our model is effective, achieving a mean accuracy of 84% under instructor validation. We also show the model's effectiveness in flagging mis-calibrated questions. Our approach can easily be adapted for a wide range of applications in e-learning and e-assessment.

Sankaran Narayanan, Vamsi Sai Kommuri, N. Sethu Subramanian, Kamal Bijlani, Nandu C. Nair
An Agents and Artifacts Metamodel Based E-Learning Model to Search Learning Resources

In this paper, an e-learning model based on the Agents and Artifacts (A&A) Metamodel to search learning resources from multiple sources is proposed. Multi-agent system (MAS) based e-learning models with the same functionality are available in the literature. However, they are mostly developed as standalone systems that contain a single agent responsible for searching and retrieving learning resources. Given the highly distributed nature of learning resources over multiple repositories, giving this responsibility to only one agent decreases scalability. The proposed model exploits the A&A Metamodel to overcome this issue. The A&A Metamodel focuses on environment modeling in MAS design and models entities in the environment as artifacts, which are first-class entities like agents. From the perspective of MAS based e-learning systems, learning resources are the main components in the environment that agents interact with. Thus, an efficient solution can be achieved with an e-learning model that searches learning objects by using an e-learning environment model based on the A&A Metamodel. The proposed e-learning system is developed with Jason, and the e-learning environment model is implemented with the CArtAgO framework. Finally, current limitations and future directions of the proposed approach are discussed.

Birol Ciloglugil, Mustafa Murat Inceoglu

Workshop on Virtual Reality and Applications (VRA 2017)

Frontmatter
Distributed, Immersive and Multi-platform Molecular Visualization for Chemistry Learning

This paper presents Dimmol (an acronym for Distributed Immersive Multi-platform Molecular visualization), a scientific visualization application based on UnityMol, developed with the Unity game engine, that uses the Unity Cluster Package to enable distributed and immersive visualization of molecular structures across multiple devices of different types, with support for Google VR, molecular trajectory files, and master-host-slave rendering. Its goal is to improve and facilitate the way educators and researchers visualize molecular structures with students and partners. In order to demonstrate a possible use scenario for Dimmol, better understand the contributions of each platform it can be executed on, and gather performance data, three molecular visualizations are loaded and distributed to a graphics cluster, a laptop, a tablet, and a smartphone. Other possible uses are also discussed.

Luiz Soares dos Santos Baglie, Mário Popolin Neto, Marcelo de Paiva Guimarães, José Remo Ferreira Brega
Embedding Augmented Reality Applications into Learning Management Systems

A tool is proposed to reduce the disparity between the state of the art of technologies and the maturation time required for their effective implementation, facilitating the insertion of augmented reality content into learning management systems. This tool packages didactic material based on augmented reality using the Sharable Content Object Reference Model (SCORM), a learning object standard. We tested this tool by generating a learning object based on augmented reality and deploying it to the Moodle platform. We also tested and shared this object in the SCORM Cloud repository.

Marcelo de Paiva Guimarães, Bruno Alves, Valéria Farinazzo Martins, Luiz Soares dos Santos Baglie, José Remo Brega, Diego Colombo Dias
Immersive Ground Control Station for Unmanned Aerial Vehicles

Nowadays, the use of unmanned aerial vehicles, also known as drones, is growing in several areas, such as military, civilian and entertainment applications. These vehicles can generate large volumes of data, such as three-dimensional (3D) videos and telemetry data. A challenge is to visualize this data and make flight control decisions based on it. On the other hand, Virtual Reality provides immersion and interaction of users in simulated environments, so enriching the experience of drone users with Virtual Reality becomes a promising possibility. This project aims at investigating how the features provided by virtual reality can be used in planning drone flights, allowing flight plan creation - flight routes are determined by waypoints (georeferenced points) containing altitude, latitude and longitude - and monitoring of the entire mission path by presenting telemetry information. To this end, a multiprojection system is used, which immerses the user in a 3D environment built from videos captured by a drone. The goal of this work is to develop an immersive and interactive ground control station for controlling, planning flights, and tracking.

Glesio Garcia de Paiva, Diego Roberto Colombo Dias, Marcelo de Paiva Guimarães, Luis Carlos Trevelin
SPackageCreator3D - A 3D Content Creator to the Moodle Platform to Support Human Anatomy Teaching and Learning

Understanding three-dimensional (3D) forms is very important for anatomy learning. Most methods of anatomy teaching are offered to students through two-dimensional (2D) resources, such as videos and images. In order to support anatomy teaching and learning, many software solutions have been developed to allow the interaction with and visualization of 3D virtual models using several techniques and technologies. Even solutions that offer great visual resources lack common teaching and learning environment features. This article presents SPackageCreator3D, a tool to create 3D SCORM packages for the Moodle platform, which is used by a large number of educational institutions and is one of the most popular teaching and learning management platforms, validated by educators and supported by a pedagogical methodology. In order to test the 3D SCORM packages created by SPackageCreator3D, an evaluation with 32 students was performed, indicating potential uses and needed improvements.

Fabrício Quintanilha Baptista, Mário Popolin Neto, Luiz Soares dos Santos Baglie, Silke Anna Theresa Weber, José Remo Ferreira Brega

Workshop on Industrial Computational Applications (WIKA 2017)

Frontmatter
Real-Time Vision Based System for Measurement of the Oxyacetylene Welding Parameters

This paper presents an original method for the measurement and control of oxyacetylene welding. The method is based on computer processing of images of the oxyacetylene torch flame. The flame analysis is based on adaptive thresholding, computation of statistical shape parameters, and color analysis of characteristic parts of the flame; the latter is done based on the proposed flame model. These parameters are then used as features in the classification process. Thanks to this, the proposed method can automatically determine flame parameters in real time, which can be used for automatic setup of the welding conditions.

Bogusław Cyganek, Maciej Basiura
A Data-Mining Technology for Tuning of Rolling Prediction Models: Theory and Application

The realization of physical modeling of the rolling process is proposed as a material hardness virtual sensor and represents a valid tool for data exploration. The use of unsupervised clustering technology is proposed and explored here so as to ease the material grouping process that may be strictly required for technological maintenance purposes.

Francesco A. Cuzzola, Claudio Aurora

Workshop on Web-Based Collective Evolutionary Systems: Models, Measures, Applications (IWCES 2017)

Frontmatter
Structural and Semantic Proximity in Information Networks

This research includes the investigation, design and experimentation of models and measures of semantic and structural proximity for knowledge extraction and link prediction. The aim is to measure, predict and elicit, in particular, data from social or collaborative sources of heterogeneous information. The general idea is to use the information about entities (i.e. users) and relationships in collaborative or social repositories as an information source to infer the semantic context, and relations among heterogeneous multimedia objects of any kind to extract the relevant structural knowledge. Contexts can then be used to narrow the domains and improve the performance of tasks such as disambiguation of entities, query expansion, emotion recognition and multimedia retrieval, just to mention a few. There is thus a need for techniques able to produce results, even approximate ones, with respect to a given query, for ranking a set of promising candidates. Tools to reach this rich information already exist: web search engines, whose results can be used to calculate web-based proximity measures. Semantic proximity is used to compute attributes, e.g. textual information, while non-textual (i.e. structural, topological) information in collaborative or social repositories is used to characterize the contexts where an object is located. Both web-based and structural similarity measures can profit from suboptimal results of computations. Which measure to use, and how to optimize the extraction and the utility of the extracted information, are the open issues that we address in our work.

Valentina Franzoni, Alfredo Milani
Data Protection Risk Modeling into Business Process Analysis

We present a novel way to link business process models with data protection risk management, using the established body of knowledge on risk management concepts and business processes for data protection. We address the problem that organizations today must find a suitable data protection model that can be used as a risk framework. The purpose of this document is to define a model for describing data protection in the context of risk. Our approach includes the identification of the main concepts of data protection according to the scope of the EU data protection regulation. We outline a data protection model as a continuous way of protecting valued organizational information regarding personally identifiable information. Data protection encompasses the preservation of personal data from unauthorized access, use, modification, recording or destruction. Since this kind of service is offered continuously, it is important to establish a way to measure the effectiveness of awareness of what data subjects disclose regarding personally identifiable information.

António Gonçalves, Anacleto Correia, Luis Cavique
Suitability of BPMN Correct Usage by Users with Different Profiles: An Empirical Study

A declared purpose of the BPMN standard was to provide a business process modeling language amenable to being used by modelers regardless of their technical background. This aim was intended to be achieved by extensive documentation of the syntax rules of the notation, as well as by best practices for process modeling proposed by practitioners. The wide acceptance of the BPMN standard seems to have accomplished this purpose, namely when considering its usage in business-oriented process documentation and improvement scenarios, as well as in IT implementations of process diagrams supported by software tools. However, a relevant question can be raised regarding the correctness of business process diagrams produced by modelers with different profiles. This issue is important since the conformance of the produced process diagrams to the syntax rules of the language determines the quality of the modeling process, whatever its purpose. Therefore, the main aim of this work was to gather statistical evidence that could validate the assertion that BPMN diagrams have the same level of correctness irrespective of the technical profile of the people involved in the modeling tasks. This paper reports a between-groups empirical study with modelers of business-oriented and IT-oriented profiles.

Anacleto Correia, António Gonçalves
Analysis of Tweets to Find the Basis of Popularity

Smart and intelligent recommendation systems can be designed by analyzing tweets. Our work aims at analyzing tweets to find the basis of a person's popularity. Although some works have analyzed tweets to detect popular events, little emphasis has been given to finding the reason behind a person's popularity based on tweets. In this paper, we suggest an algorithm to find the reason behind the popularity of a person. We have implemented our algorithm using 218,490 tweets from five different countries. The results are quite encouraging.
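The paper's own algorithm is not detailed in the abstract; a minimal sketch of the underlying idea, assuming tweets are available as plain strings and using simple term co-occurrence counts as a proxy for "reasons", might look like this (names and stopword list are illustrative):

from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "what"}

def cooccurring_terms(tweets, person, top_k=10):
    """Count words co-occurring with a person's name across tweets;
    the most frequent ones hint at why the person is being talked about."""
    counts = Counter()
    for tweet in tweets:
        if person.lower() not in tweet.lower():
            continue
        words = re.findall(r"[a-z']+", tweet.lower())
        counts.update(w for w in words
                      if w not in STOPWORDS and w != person.lower())
    return counts.most_common(top_k)

tweets = ["Messi scores again, what a goal!",
          "Another hat-trick for Messi tonight",
          "Messi wins the award for best goal"]
print(cooccurring_terms(tweets, "Messi"))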

Rajat Kumar Mudgal, Rajdeep Niyogi
Fitness Landscape Analysis of the Permutation Flowshop Scheduling Problem with Total Flow Time Criterion

This paper provides a fitness landscape analysis of the Permutation Flowshop Scheduling Problem considering the Total Flow Time criterion (PFSP-TFT). Three different landscapes, based on three neighborhood relations, are considered. The experimental investigations analyze aspects such as the smoothness and the local optima structure of the landscapes. To the best of our knowledge, this is the first landscape analysis for PFSP-TFT.
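For reference, the Total Flow Time objective sums the completion times of all jobs, where completion times follow the standard flowshop recurrence C(j,k) = max(C(j,k-1), C(j',k)) + p(j,k), with j' the job scheduled immediately before j. A minimal sketch with a made-up instance:

def total_flow_time(perm, p):
    """Total Flow Time of a permutation flowshop schedule.

    perm : job order (list of job indices)
    p    : p[j][k] = processing time of job j on machine k
    """
    m = len(p[0])
    prev = [0] * m               # completion times of the previous job
    tft = 0
    for j in perm:
        c = 0
        cur = []
        for k in range(m):
            c = max(c, prev[k]) + p[j][k]  # wait for machine k and the previous op
            cur.append(c)
        prev = cur
        tft += prev[-1]          # completion time of job j on the last machine
    return tft

# Hypothetical instance: 3 jobs, 2 machines.
p = [[2, 3], [4, 1], [3, 2]]
print(total_flow_time([0, 1, 2], p))  # -> 23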

Marco Baioletti, Valentino Santucci
Clustering Facebook for Biased Context Extraction

Facebook comments and shared posts often convey human biases, which play a pivotal role in information spreading and content consumption, where short pieces of information can be quickly consumed and later ruminated. Such bias is nevertheless at the basis of human-generated content, and being able to extract contexts that represent, rather than amplify, this bias is relevant to data mining and artificial intelligence, because it is what shapes the opinion of users through social media. Starting from the observation that a separation into topic clusters, i.e. sub-contexts, spontaneously occurs when evaluated by human common sense, especially in particular domains, e.g. politics or technology, this work introduces a process for automated context extraction by means of a class of path-based semantic similarity measures. Using third-party knowledge, e.g. WordNet or Wikipedia, these measures can create a bag of words relating to the relevant concepts present in Facebook comments on topic-related posts, thus reflecting the collective knowledge of a community of users. It is then easy to create human-readable views, e.g. word clouds, or structured information readable by machines for further learning or content explanation, e.g. by augmenting the information with the time stamps of posts and comments. Experimental evidence, obtained in the domain of information security and technology over a sample of 9M3k page users, where previous comments serve as a use case for forthcoming users, shows that a simple clustering on a frequency-based bag of words can identify the main context words contained in Facebook comments that are identifiable by human common sense. Group similarity measures are also of great interest for many application domains, since they evaluate the similarity of objects in terms of the similarity of the associated sets; they can then be calculated on the extracted context words to reflect the collective notion of semantic similarity, providing additional insights on which to reason, e.g. in terms of cognitive factors and behavioral patterns.
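As a sketch of the kind of path-based, third-party-knowledge measure mentioned above (the paper's exact measures may differ), NLTK's WordNet interface exposes a shortest-path similarity between synsets:

# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def path_sim(word_a, word_b):
    """Best path-based similarity over all sense pairs of two words,
    using the shortest path between synsets in the WordNet hierarchy."""
    best = 0.0
    for sa in wn.synsets(word_a):
        for sb in wn.synsets(word_b):
            s = sa.path_similarity(sb)  # None when no connecting path exists
            if s is not None and s > best:
                best = s
    return best

print(path_sim("security", "privacy"))  # semantically close
print(path_sim("security", "banana"))   # semantically distant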

Valentina Franzoni, Yuanxi Li, Paolo Mengoni, Alfredo Milani

Workshop on Future Computing Systems, Technologies, and Applications (FiSTA 2017)

Frontmatter
mruby – Rapid IoT Software Development

Embedded systems are systems in which hardware and software are closely related; in an embedded system, the software controls the hardware. We focused on Ruby, an object-oriented programming and scripting language, to improve the development efficiency of embedded systems. We developed mruby, which adapts Ruby so that it can be used in embedded systems, and this paper describes its overview. mruby provides the real-time and hardware-access features required in embedded systems development.

Kazuaki Tanaka, Hirohito Higashi
Design Approaches Emerging in Developing an Agricultural Ontology for Sri Lankan Farmers

Building an ontology is not a simple task, because ontology development is a craft rather than an engineering activity. Ontology development depends on many factors, for example the background of the domain experts and the engineering techniques used; it requires much time to investigate the domain in detail, and it is also a creative task. It is therefore very easy to make mistakes, given the many new concepts users must deal with. Ontology developers and researchers thus need best practices as well as ontology design patterns to construct good-quality ontologies. However, the absence of structured guidelines, methods, and good practices hinders the development of ontologies. We have developed a large user-centered ontology to represent agricultural information and relevant knowledge in the user context of Sri Lankan farmers. Through this development we have come across various design models and techniques. In this paper, we highlight those models and techniques that will be helpful for users in the field of ontology development. The discussion is mainly based on three different scenarios. Scenario one mainly concerns identifying the basic ontology components and their representations. The other scenarios, such as event handling for complex real-world situations and the conceptualization of overlapping concepts for the farmer's location, are discussed in detail and represented using the well-known Protégé tool.

Anusha Indika Walisadeera, Athula Ginige, Gihan Nilendra Wikramanayake

Workshop on Data-driven modelling for Sustainability Assessment (DAMOST 2017)

Frontmatter
Landslide Risk Analysis Along Strategic Touristic Roads in Basilicata (Southern Italy) Using the Modified RHRS 2.0 Method

This work applies a modified method for landslide risk assessment along a strategic touristic road. The proposed qualitative method is a modification of the original Rockfall Hazard Rating System (RHRS) developed by Pierson et al. [26] at the Oregon State Highway Division (and subsequently modified by other authors), based on exponential scoring functions. The proposed application involves a careful analysis of the environmental factors that influence the type of mass movement, such as the slope, the land use, the climatic conditions, and the lithology, as well as parameters related to the structural characteristics of the roads and the traffic, for example the road width, the number of lanes in each direction, and the Average Vehicle Risk. The use of different technical approaches, such as double-entry matrices and the implementation, for a few steps, of an Artificial Neural Network (ANN), makes it possible to assess the intensity of the analyzed landslide, as well as the probability that it will occur in a given area along a transportation corridor. The application of the method involves a first phase of data retrieval, followed by implementation and processing in GIS software. The analysis included a step aimed at obtaining the different layers essential for the classification of the landslide-predisposing factors, cataloged according to a final score that identifies five risk threshold classes. Highlighting the most dangerous areas has thus been fundamental in order to prepare appropriate protection and monitoring interventions (and, if necessary, evacuation plans).
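The exponential scoring of the original RHRS assigns points of the form 3^y, so that scores grow rapidly (3, 9, 27, 81) as a rated parameter worsens. A minimal sketch of such a scoring function; the parameter and its rated bounds below are hypothetical, not taken from the paper:

def rhrs_score(value, lo, hi):
    """RHRS-style continuous exponential score: the parameter value is
    mapped linearly onto an exponent y in [1, 4], yielding 3**y points
    (from 3 for the best case up to 81 for the worst case)."""
    value = min(max(value, lo), hi)        # clamp into the rated range
    y = 1 + 3 * (value - lo) / (hi - lo)
    return 3 ** y

# Hypothetical factor: slope angle in degrees, rated from 10 (gentle) to 50 (steep).
for angle in (10, 30, 50):
    print(angle, round(rhrs_score(angle, lo=10, hi=50), 1))  # 3.0, 15.6, 81.0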

Lucia Losasso, Carmela Rinaldi, Domenico Alberico, Francesco Sdao
Backmatter
Metadata
Title
Computational Science and Its Applications – ICCSA 2017
Edited by
Osvaldo Gervasi
Beniamino Murgante
Sanjay Misra
Giuseppe Borruso
Carmelo M. Torre
Ana Maria A.C. Rocha
David Taniar
Bernady O. Apduhan
Elena Stankova
Alfredo Cuzzocrea
Copyright Year
2017
Electronic ISBN
978-3-319-62392-4
Print ISBN
978-3-319-62391-7
DOI
https://doi.org/10.1007/978-3-319-62392-4