
2020 | Book

Computational Science and Its Applications – ICCSA 2020

20th International Conference, Cagliari, Italy, July 1–4, 2020, Proceedings, Part IV

Editors: Prof. Dr. Osvaldo Gervasi, Beniamino Murgante, Prof. Sanjay Misra, Dr. Chiara Garau, Ivan Blečić, David Taniar, Dr. Bernady O. Apduhan, Ana Maria A. C. Rocha, Prof. Eufemia Tarantino, Prof. Carmelo Maria Torre, Prof. Yeliz Karaca

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

The seven volumes LNCS 12249-12255 constitute the refereed proceedings of the 20th International Conference on Computational Science and Its Applications, ICCSA 2020, held in Cagliari, Italy, in July 2020. Due to the COVID-19 pandemic, the conference was organized as an online event.
Computational Science is the main pillar of most present research and of industrial and commercial applications, and plays a unique role in exploiting innovative ICT technologies.
The 466 full papers and 32 short papers presented were carefully reviewed and selected from 1450 submissions. Apart from the general track, ICCSA 2020 also included 52 workshops in various areas of computational science, ranging from computational science technologies to specific areas of computational science, such as software engineering, security, machine learning and artificial intelligence, blockchain technologies, and applications in many fields.

Table of Contents

Frontmatter

International Workshop on Data Stream Processing and Applications (DASPA 2020)

Frontmatter
Dealing with Data Streams: Complex Event Processing vs. Data Stream Mining

Data generation rates are higher than ever before. A multitude of sources, such as smartphones, social networking services, and the Internet of Things (IoT), continuously produce massive amounts of data. Due to limited resources, it is no longer feasible to persistently store all of that data, which leads to massive data streams. To meet the requirements of modern businesses, techniques have been developed to deal with these massive data streams. They include complex event processing (CEP) and data stream mining, which are covered in this article. Along with the development of these techniques, many terms have been coined and semantically overloaded, making it difficult to clearly distinguish the techniques for processing massive data streams. In this article, CEP and data stream mining are distinguished and compared in order to clarify the terminology.
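
A minimal sketch of the distinction (not from the paper): CEP matches an explicit pattern over raw events, while data stream mining incrementally updates a model over the same stream. The events, the pattern, and the statistic below are illustrative assumptions.

```python
# Contrast CEP and data stream mining on one toy event stream.
from collections import deque

stream = [("temp", 21), ("temp", 38), ("temp", 40), ("temp", 22), ("temp", 41)]

# CEP: match a pattern over raw events
# ("two readings above 35 within a window of three events").
window = deque(maxlen=3)
for event in stream:
    window.append(event)
    if sum(1 for _, v in window if v > 35) >= 2:
        print("CEP: complex event 'overheating' detected:", list(window))

# Data stream mining: maintain an incrementally updated model
# (here just a running mean) instead of matching explicit patterns.
n, mean = 0, 0.0
for _, v in stream:
    n += 1
    mean += (v - mean) / n          # one-pass (Welford-style) update
print(f"stream mining: running mean after {n} events = {mean:.1f}")
```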

Moritz Lange, Arne Koschel, Irina Astrova
Anomaly Detection for Data Streams Based on Isolation Forest Using Scikit-Multiflow

Detecting anomalies in streaming data is an important issue in a variety of real-world applications, as it provides critical information, e.g., for cyber-security attack detection, fraud detection, and other real-time applications. Different approaches have been designed to detect anomalies: statistics-based, isolation-based, and clustering-based. In this paper, we present a quick survey of the existing anomaly detection methods for data streams. We focus on Isolation Forest (iForest), a state-of-the-art method for anomaly detection, and we provide an implementation of IForestASD, a variant of iForest for data streams. This implementation is built on top of scikit-multiflow, an open-source machine learning framework for data streams. In fact, few anomaly detection methods are provided in well-known data stream mining frameworks such as MOA or StreamDM; hence, we extend scikit-multiflow with an additional tool. We performed experiments on three real-world data sets to evaluate the predictive performance and resource consumption (memory and time) of IForestASD and to compare it with Half-Space Trees, a well-known state-of-the-art anomaly detection algorithm for data streams.
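
A hedged sketch of the IForestASD idea (windowed iForest with anomaly-rate-triggered retraining), using scikit-learn's IsolationForest as a stand-in; the paper's actual implementation extends scikit-multiflow. The window size and the rate threshold u are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
stream = rng.normal(0, 1, size=(2000, 3))      # synthetic data stream
window_size, u = 256, 0.3                      # u: anomaly-rate threshold

model, buffer = None, []
for x in stream:
    buffer.append(x)
    if len(buffer) == window_size:
        X = np.asarray(buffer)
        if model is None:
            model = IsolationForest(n_estimators=100, random_state=0).fit(X)
        else:
            rate = np.mean(model.predict(X) == -1)  # -1 marks anomalies
            if rate > u:                            # drift suspected:
                model = IsolationForest(n_estimators=100,
                                        random_state=0).fit(X)  # retrain
        buffer = []
```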

Maurras Ulbricht Togbe, Mariam Barry, Aliou Boly, Yousra Chabchoub, Raja Chiky, Jacob Montiel, Vinh-Thuy Tran
Automatic Classification of Road Traffic with Fiber Based Sensors in Smart Cities Applications

Low-cost monitoring of road traffic can contribute significantly to safety in the smart city perspective. The possibility of sensing and classifying vehicles and driving conditions by means of simple physical sensors may support both real-time applications and studies on traffic dynamics, e.g., support and assistance for car crashes and prevention of accidents, maintenance planning, or support for trials in case of litigation. Optical fiber technology is well known for its wide adoption in data transmission as a commodity component of computer networks: its popularity has led to a large market availability of high-quality fiber at an affordable price. As a purely physical application, its optical properties may be exploited to monitor in real time the mechanical solicitations the fiber undergoes. In this paper, we present a novel approach to using optical fibers as road sensors. As is common in the literature, the fiber is used to sense the vibrations caused by vehicles on the road; in our case, the signals are processed by functional classification techniques to obtain higher quality and greater flexibility in reusing the results. Classification aims at enabling the profiling of road traffic. Moreover, our approach seeks to optimise the analysis and classification computations by splitting the process between edge nodes and cloud nodes according to the available computation capacity. Our solution has been tested in an experimental campaign to show the suitability of the approach.
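
A hedged sketch of a signal-to-label pipeline of this kind: vibration windows from the fiber are turned into spectral features and classified. The feature choice (band energies), the classifier, and the class labels are illustrative assumptions; in an edge/cloud split, the feature extraction could run on edge nodes and the classifier in the cloud.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def band_energies(window, n_bands=8):
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.sum() for b in bands])   # energy per frequency band

rng = np.random.default_rng(0)
signals = rng.normal(size=(200, 1024))           # stand-in vibration windows
labels = rng.integers(0, 2, size=200)            # e.g. 0 = car, 1 = truck

X = np.vstack([band_energies(s) for s in signals])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```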

Antonio Balzanella, Salvatore D’Angelo, Mauro Iacono, Stefania Nacchia, Rosanna Verde
A Dynamic Latent Variable Model for Monitoring the Santa Maria del Fiore Dome Behavior

A dynamic principal component analysis is proposed to monitor the stability, and detect any atypical behavior, of Brunelleschi's Dome of Santa Maria del Fiore in Florence. The first cracks in the Dome appeared at the end of the 15th century, and nowadays they are present in all the Dome's webs, although with a heterogeneous distribution. A monitoring system has been installed in the Dome since 1955 to monitor the behavior of the cracks; today, it counts more than 160 instruments, such as mechanical and electronic deformometers, thermometers, and piezometers. The analyses carried out to date show slight increases in the size of the main cracks and, at the same time, a clear relationship with some environmental variables. However, due to the extension of the monitoring system and the complexity of the collected data, to our knowledge an analysis involving all the detected variables has not yet been conducted. In this contribution, we aim at finding simplified structures (i.e., latent common factors or principal components) that summarize the measurements coming from the different instruments and explain the overall behavior of the Dome over time. We found that the overall behavior of the Dome tracked by multiple sensors may be satisfactorily summarized by a single principal component, which shows a sinusoidal time trend characterized, over a one-year period, by an expansive phase followed by a contractive phase. We also found that some webs contribute more than others to the Dome's movements.
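
A minimal sketch of the core idea, assuming many sensor channels sharing one seasonal factor: the first principal component summarizes the multichannel record. The synthetic sinusoidal data below merely imitates the reported yearly expansion/contraction cycle.

```python
import numpy as np
from sklearn.decomposition import PCA

t = np.arange(0, 365 * 5)                        # five years, daily readings
yearly = np.sin(2 * np.pi * t / 365)             # expansion/contraction cycle
rng = np.random.default_rng(1)
# 160 instruments = one shared seasonal factor + instrument noise
X = (np.outer(yearly, rng.uniform(0.5, 1.5, 160))
     + rng.normal(0, 0.1, (len(t), 160)))

pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)                        # scores[:, 0]: Dome "behavior"
print("variance explained by PC1:",
      round(pca.explained_variance_ratio_[0], 3))
```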

Bruno Bertaccini, Silvia Bacci, Federico Crescenzi

International Workshop on Frontiers in Machine Learning (FIML 2020)

Frontmatter
Comparing Statistical and Machine Learning Imputation Techniques in Breast Cancer Classification

Missing data imputation is an important task when dealing with crucial data that cannot be discarded, such as medical data. This study evaluates and compares the impact of two statistical and two machine learning imputation techniques on breast cancer classification, using several evaluation metrics. Mean, Expectation-Maximization (EM), Support Vector Regression (SVR), and K-Nearest Neighbor (KNN) imputation were applied to impute 18% of data missing completely at random in the two Wisconsin datasets. Thereafter, we empirically evaluated these four imputation techniques when using five classifiers: decision tree (C4.5), Case-Based Reasoning (CBR), Random Forest (RF), Support Vector Machine (SVM), and Multi-Layer Perceptron (MLP). In total, 1380 experiments were conducted, and the findings confirmed that classification using machine-learning-based imputation outperformed classification using statistical imputation. Moreover, our experiments showed that SVR was the best imputation method for breast cancer classification.
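
A hedged sketch of this protocol: inject 18% missing-completely-at-random values, impute with mean, KNN, and SVR-based iterative imputation, and score a downstream classifier. scikit-learn ships no EM imputer, so that arm of the study is omitted; the classifier and cross-validation setup are assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # Wisconsin diagnostic data
rng = np.random.default_rng(0)
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.18] = np.nan  # 18% MCAR missingness

imputers = {
    "mean": SimpleImputer(strategy="mean"),
    "knn": KNNImputer(n_neighbors=5),
    "svr": IterativeImputer(estimator=SVR(), max_iter=5, random_state=0),
}
for name, imp in imputers.items():
    acc = cross_val_score(RandomForestClassifier(random_state=0),
                          imp.fit_transform(X_miss), y, cv=5).mean()
    print(f"{name}: accuracy = {acc:.3f}")
```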

Imane Chlioui, Ibtissam Abnane, Ali Idri
Exploring Algorithmic Fairness in Deep Speaker Verification

To allow individuals to complete voice-based tasks (e.g., send messages or make payments), modern automated systems are required to match the speaker's voice to a unique digital identity representation for verification. Despite the increasing accuracy achieved so far, it remains under-explored how the decisions made by such systems may be influenced by the inherent characteristics of the individual under consideration. In this paper, we investigate how state-of-the-art speaker verification models are susceptible to unfairness towards legally protected classes of individuals, characterized by a common sensitive attribute (i.e., gender, age, language). To this end, we first arranged a voice dataset with the aim of including and identifying various demographic classes. Then, we conducted a performance analysis at different levels, from equal error rates to verification score distributions. Experiments show that individuals belonging to certain demographic groups systematically experience higher error rates, highlighting the need for fairer speaker recognition models and, by extension, for proper evaluation frameworks.
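
A minimal sketch of one of the measurements described: the equal error rate (EER) computed separately per demographic group. The verification scores, labels, and group names below are synthetic assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve

def eer(labels, scores):
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    i = np.nanargmin(np.abs(fpr - fnr))   # operating point where FPR ≈ FNR
    return (fpr[i] + fnr[i]) / 2

rng = np.random.default_rng(0)
for group in ["female", "male"]:
    labels = rng.integers(0, 2, 1000)                    # 1 = same speaker
    scores = (labels * rng.normal(0.7, 0.3, 1000)
              + (1 - labels) * rng.normal(0.2, 0.3, 1000))  # verif. scores
    print(group, "EER =", round(eer(labels, scores), 3))
```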

Gianni Fenu, Hicham Lafhouli, Mirko Marras
DECiSION: Data-drivEn Customer Service InnovatiON

The paper presents DECiSION, an innovative framework in the field of Information Seeking Support Systems, able to retrieve all the data involved in a decision-making process and to process, categorize, and make them available in a form useful for the ultimate purpose of the user request. The platform is equipped with natural language understanding capabilities, allowing the interpretation of user requests and the identification of information sources from which to independently retrieve the information needed for the sensemaking task. The project foresees the implementation of a chatbot, which acts as a virtual assistant, and of a conversational recommender system, able to dialogue with users to discover their preferences and orient its answers in a personalized way. The goal is therefore to create an intelligent system that autonomously and comprehensively answers questions posed in natural language about a specific reference domain, to support the decision-making process. The paper describes the general architecture of the framework and then focuses on the key component that automatically translates the natural language user query into a machine-readable query for the service repository.

Dario Esposito, Marco Polignano, Pierpaolo Basile, Marco de Gemmis, Davide Primiceri, Stefano Lisi, Mauro Casaburi, Giorgio Basile, Matteo Mennitti, Valentina Carella, Vito Manzari
A Comparative Analysis of State-of-the-Art Recommendation Techniques in the Movie Domain

Recommender systems (RSs) represent one of the manifold applications in which Machine Learning can unfold its potential. Nowadays, most major online sites selling products and services provide users with RSs that can assist them in their online experience. In recent years, therefore, we have witnessed an impressive series of proposals for novel recommendation techniques that claim to ensure significant improvements over classic techniques. In this work, we analyze some of them from a theoretical and experimental point of view and verify whether they can deliver tangible improvements in terms of performance. Among others, we have experimented with traditional model-based and memory-based collaborative filtering, up to the most recent recommendation techniques based on deep learning. We have chosen the movie domain as an application scenario, and a version of the classic MovieLens dataset for training and testing our models.
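
A minimal sketch of one of the classic baselines compared here, memory-based collaborative filtering with user-user cosine similarity; the tiny rating matrix is an illustrative assumption, not MovieLens data.

```python
import numpy as np

R = np.array([[5, 3, 0, 1],          # rows: users, cols: movies, 0 = unrated
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

norms = np.linalg.norm(R, axis=1, keepdims=True)
sim = (R @ R.T) / (norms @ norms.T)            # user-user cosine similarity
np.fill_diagonal(sim, 0)

pred = sim @ R / np.abs(sim).sum(axis=1, keepdims=True)  # weighted average
user = 1
unseen = np.where(R[user] == 0)[0]
best = unseen[np.argmax(pred[user, unseen])]
print(f"recommend movie {best} to user {user}")
```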

Dalia Valeriani, Giuseppe Sansonetti, Alessandro Micarelli
Automated Machine Learning: Prospects and Challenges

The state of the art in the young field of Automated Machine Learning (AutoML) is dominated by the connectionist approach. Several techniques of this inspiration have recently shown promising results in automatically designing neural network architectures. However, apart from back-propagation, only a few other learning techniques are applied for these purposes. The back-propagation process takes advantage of specific optimization techniques that are best suited to specific application domains (e.g., Computer Vision and Natural Language Processing). Hence the need for a more general learning approach, namely, a basic algorithm able to make inferences in different contexts with distinct properties. In this paper, we deal with the problem from a scientific and epistemological point of view. We believe that this is needed to fully understand the mechanisms and dynamics underlying human learning. To this aim, we define some elementary inference operations and show how modern architectures can be built from a combination of those elementary methods. We analyze each method in different settings and find the best-suited application context for each learning algorithm. Furthermore, we discuss experimental findings and compare them with human learning. The discrepancy is particularly evident between supervised and unsupervised learning. We then determine which elementary learning rules are best suited for unsupervised systems and, finally, propose some improvements in reinforcement learning architectures.

Lorenzo Vaccaro, Giuseppe Sansonetti, Alessandro Micarelli
Contextualized BERT Sentence Embeddings for Author Profiling: The Cost of Performances

The need to know information about the real identity of an online subject is a highly relevant issue in user profiling, especially for analyses of digital sources such as social media. The digital identity of a user does not always present explicit data about her offline life, such as age, gender, work, and more. This makes the task of user profiling complex and incomplete. For many years this issue has received a considerable amount of attention from the whole community, which has developed several solutions, also based on machine learning, to estimate user characteristics. The increasing diffusion of deep learning approaches has allowed, on the one hand, a considerable increase in predictive performance, but on the other hand, it has produced models that cannot be interpreted and that require very high computational power. Considering the validity of new language models pre-trained on extensive data for resolving many natural language processing and classification tasks, we propose a BERT-based approach (BERT-DNN) for the author profiling task as well. In a first analysis, we compare the results obtained by our model with those of more classical approaches; a critical analysis then follows. We analyze the advantages and disadvantages of these approaches, also in terms of the resources needed to run them. The results obtained by our model are encouraging in terms of reliability but very disappointing if we consider the computational power required to run it.
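
A hedged sketch of the BERT-based idea: contextualized sentence embeddings feeding a simple classifier. The model checkpoint, the mean pooling, and the logistic-regression head are assumptions, not the paper's exact BERT-DNN configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

texts = ["off to the gym, then brunch!", "quarterly earnings call at 9am"]
labels = [0, 1]                                  # e.g. two profile classes

with torch.no_grad():
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    out = bert(**enc).last_hidden_state          # (batch, tokens, 768)
    mask = enc["attention_mask"].unsqueeze(-1)
    emb = (out * mask).sum(1) / mask.sum(1)      # mean pooling over tokens

clf = LogisticRegression().fit(emb.numpy(), labels)
print(clf.predict(emb.numpy()))
```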

Marco Polignano, Marco de Gemmis, Giovanni Semeraro
Expressive Analysis of Gut Microbiota in Pre- and Post- Solid Organ Transplantation Using Bayesian Topic Models

There is growing evidence that variation in gut microbial communities has important associations with overall host health, and that the diversity and richness of such communities are helpful in distinguishing patients at high risk of life-threatening post-transplantation conditions. The aim of our paper is to provide an expressive and highly interpretable characterization of microbiome alterations, with the goal of achieving more effective transplantations characterized by a rejection rate as low as possible, and of avoiding more severe complications by treating patients at risk in a timely and effective way. For this purpose, we propose using topic models to identify those bacterial species that carry the most weight under the two different experimental conditions (healthy versus transplanted patients, or patients whose fecal microbiota has been sampled in both pre- and post-transplantation phases). Topic models are Bayesian statistical models that are not affected by data scarcity, because the conclusions we can draw borrow strength across sparse gut microbiome samples. By exploiting this property, we show that topic models are expressive methods for dimensionality reduction which can help analyze variation and diversity in gut microbial communities. With topic models, the analysis can be carried out at a level close to natural language, as the output can be easily interpreted by clinicians: the most abundant species are automatically selected, and the microbial dynamics can be tracked and followed over time.
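
A minimal sketch of the topic-model analogy, assuming species-count vectors per sample play the role of word counts per document; the counts and species names are synthetic stand-ins.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
species = ["Bacteroides", "Prevotella", "Faecalibacterium",
           "Escherichia", "Enterococcus", "Klebsiella"]
X = rng.poisson(lam=rng.uniform(1, 20, (40, len(species))))  # 40 samples

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
for k, topic in enumerate(lda.components_):
    top = np.argsort(topic)[::-1][:3]
    print(f"topic {k}:", [species[i] for i in top])  # most abundant species
```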

Luigi Santacroce, Sara Mavaddati, Javad Hamedi, Bahman Zeinali, Andrea Ballini, Massimo Bilancia
Detection of Thyroid Nodules Through Neural Networks and Processing of Echographic Images

The abnormal functioning of hormones produces malformations in the human body that must be detected early. In this manuscript, two proposals are presented for the identification of thyroid nodules in ultrasound images using convolutional neural networks. For network training, 400 images obtained from a medical center and stored in a database were used. Free software (Python and TensorFlow) was used for the algorithm development, following the stages of image preprocessing, network training, filtering, and layer construction. The results graphically present the incidence of people suffering from this health problem. In addition, based on the respective tests, the system developed in Python shows greater precision and accuracy, 90% and 81% respectively, than the TensorFlow design. Through neural networks, the recognition of thyroid nodules as small as 4 mm is demonstrated.
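
A minimal Keras sketch of a CNN for nodule/no-nodule classification of preprocessed ultrasound patches; the architecture, input size, and training call are assumptions, not the paper's exact network.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),          # grayscale patch
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # nodule probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # ~400 labelled images
```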

Alex R. Haro, Julio C. Toalombo, Eddie E. Galarza, Nancy E. Guerrón
A Novel Algorithm to Classify Hand Drawn Sketches with Respect to Content Quality

In this paper, the methodology of a novel algorithm called the Counting Key-Points (CKP) algorithm is presented. The algorithm can be used to classify hand-drawn sketches of the same type where content quality is important. In brief, the algorithm uses a set of reference pictures to form a vocabulary of key points (with descriptors) and counts how many times those key points appear in other images in order to decide on the image content quality. CKP was tested on Draw-a-Person test images drawn by primary school students and reached a classification accuracy of 65%. The results of the experiment show that the method is applicable and can be improved with further research. The classification accuracy of CKP was compared to other state-of-the-art hand-drawn image classification methods to show the superiority of the algorithm. As the dataset needs further study to improve the prediction accuracy, it will be released to the community.
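
A hedged sketch of the CKP idea using OpenCV: pool descriptors from reference drawings into a vocabulary, then count how many vocabulary key points a test sketch matches. The file names, the ORB detector, and the match-distance threshold are illustrative assumptions.

```python
import cv2
import numpy as np

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# 1) vocabulary: descriptors pooled from high-quality reference drawings
vocab = []
for path in ["ref1.png", "ref2.png"]:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = orb.detectAndCompute(img, None)
    if des is not None:
        vocab.append(des)
vocab = np.vstack(vocab)

# 2) score a new drawing by counting matched vocabulary key points
test = cv2.imread("pupil_drawing.png", cv2.IMREAD_GRAYSCALE)
_, des_t = orb.detectAndCompute(test, None)
matches = bf.match(vocab, des_t)
score = sum(1 for m in matches if m.distance < 50)  # content-quality score
print("matched key points:", score)
```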

Ochilbek Rakhmanov

International Workshop on Future Computing System Technologies and Applications (FiSTA 2020)

Frontmatter
Using Market Basket Analysis to Find Semantic Duplicates in Ontology

This paper proposes a novel approach to detecting semantic duplicates in an ontology that was integrated from many other ontologies. The proposed approach is based on using: (1) a query log that was issued against the ontology; and (2) the Apriori and FP algorithms from market basket analysis, where sales transactions are viewed as queries and items in a sales transaction are viewed as search terms in a query. To prove the viability of the proposed approach, the paper also presents the results of four experiments that were conducted on the OntoLife ontology.
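A minimal sketch of the queries-as-transactions idea using mlxtend's Apriori implementation: terms that frequently co-occur in queries become candidate semantic duplicates. The toy query log and the support threshold are assumptions.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori

query_log = [["person", "human"],          # each query = a set of terms
             ["human", "lifespan"],
             ["person", "human", "age"],
             ["person", "human"]]

te = TransactionEncoder()
df = pd.DataFrame(te.fit(query_log).transform(query_log),
                  columns=te.columns_)
frequent = apriori(df, min_support=0.5, use_colnames=True)
print(frequent)  # {person, human} frequent together -> duplicate candidates
```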

Irina Astrova, Arne Koschel, Su Ling Lee
IoT Software by Dataflow Programming in Mruby Programming Environment

In IoT software development, a method of implementing software by focusing on the flow of data, called dataflow programming, has been proposed. On the other hand, sensor devices are generally implemented on microcontrollers with limited resources, for reasons of compactness, power savings, and cost, so sensor devices are not well suited to executing dataflow programs. In this paper, an environment running the scripting language mruby is described that executes a dataflow program on a small microcontroller. The execution of the dataflow program is performed asynchronously by multiple nodes, which handle the sensor data, and the execution environment must support this style of execution. Since mruby can execute multiple programs concurrently, it is well suited to implementing dataflow programs. By generating mruby code from a program developed in Node-RED, one of the popular dataflow programming environments, the mruby program is executed on a single-chip microcontroller, which includes the mruby virtual machine.

Kazuaki Tanaka, Chihiro Tsujino, Hiroyuki Maeda
A Cloud VM Migration Control Mechanism Using Blockchain

In today's cloud, VM migration, which moves VMs between physical host machines, is indispensable. For cloud providers, before shutting down physical host machines for maintenance, migration is used to temporarily move VMs to other physical host machines. For cloud users, migration is used to move a VM to a location geographically close to the end user. These VM migrations can be performed very easily and are limited only by the scope of the VM administrator's contract. However, the problem lies in the permissions on the data in the VM. In recent years, with the widespread use of IoT, various types of data can be stored in cloud VMs through web services. The huge amount of data collected by IoT devices requires careful management because it can be very closely related to personal information. However, there is no mechanism for checking data permissions in a VM during VM migration, and there is concern that inappropriate data movement may occur, including the unintended, risky movement of inappropriate data that could be malicious. Therefore, we propose a mechanism to ensure compliance with the conditions granted by the data owner, the country's regulations, and the organization's regulations during VM migration. By building the proposed mechanism on a blockchain, we can prevent malicious tampering and thus enable robust VM migration control.

Toshihiro Uchibayashi, Bernady Apduhan, Takuo Suganuma, Masahiro Hiji
Extended RTS/CTS Control Based on Transmission Request Distribution in Wireless Ad-Hoc Networks

In a wireless ad-hoc network, where wireless nodes exchange data messages without the help of stationary base stations, collisions of control and data messages are reduced and/or avoided by the CSMA/CA and RTS/CTS mechanisms of wireless LAN protocols. Random backoff timers for the avoidance of collisions among RTS control messages provide equal opportunities to transmit data messages to neighbor wireless nodes, since the value of the backoff timer monotonically decreases. In typical wireless ad-hoc networks, however, wireless nodes are not equally distributed, and the frequency of transmission requests also differs among nodes. Thus, especially in a region with a high density of transmission and reception requests for data messages, it is not always possible to receive a CTS control message in response, even when a wireless node has an opportunity to transmit an RTS control message. Hence, equal opportunities to transmit an RTS control message are not enough to realize equal opportunities to transmit a data message. To solve this problem, this paper proposes a novel RTS/CTS control that equally provides opportunities to transmit data messages whose receiver node finds it hard to transmit a CTS control message in response to an RTS control message. Here, the transmission of a CTS control message precedes the transmission of an RTS control message in cases where transmissions of a CTS control message have repeatedly failed.

Momoka Hara, Hiroaki Higaki
Interference of Overhearing by Eavesdropper Nodes for Secure Wireless Ad-Hoc Networks

In ad-hoc networks, data messages are transmitted from a source wireless node to a destination one along a wireless multihop transmission route consisting of a sequence of intermediate wireless nodes, each of which forwards data messages to its next-hop wireless node. Here, the wireless signal carrying a data message is broadcast using an omnidirectional antenna, and it is not difficult for an eavesdropper wireless node to overhear the signal and obtain the data message. Some studies show that it is useful to transmit a noise signal that collides with the data-message signal in order to interfere with the overhearing. However, special devices such as directional antennas and/or high computation power for complicated signal processing are required. For wireless multihop networks with huge numbers of wireless nodes, small and cheap wireless nodes without such special devices are mandatory for the construction of the network. This paper proposes a novel method for interfering with overhearing by eavesdropper wireless nodes, consisting of a routing protocol and a data message transmission protocol with cooperative noise signal transmissions by the 1-hop and 2-hop neighbor wireless nodes of each intermediate wireless node. The results of simulation experiments show that the proposed intentional collision method provides sufficient coverage of noise signals, especially with the help of some of the 2-hop neighbor wireless nodes.

Hinano Amano, Hiroaki Higaki
Towards Ontology Based Data Extraction for Organizational Goals Metrics Indicator

In this paper, we propose a measurement framework to evaluate the quality of results at the level of organizational goals. We propose metrics indicators to guarantee and optimize the use of data for extracting useful information in organizations. The framework is flexible to change without side effects, because it is applicable in any domain. We review the problems, propose a solution to them, and draw conclusions.

Tengku Adil Tengku Izhar, Bernady O. Apduhan

International Workshop on Geodesign in Decision Making: Meta Planning and Collaborative Design for Sustainable and Inclusive Development (GDM 2020)

Frontmatter
Geodesign as Co-creation of Ideas to Face Challenges in Indigenous Land in the South of Brazil: Case Study Ibirama La Klano

Strong development pressures affect areas in South American countries, resulting in conflicts of interest over land use and ownership, and in problems of environmental protection and anthropization. Geodesign proved to be a robust, systematic methodological framework to guide a workshop in which the actors are real people of the place, professionals from public administration, and academia. The paper presents a case study of an indigenous land in Santa Catarina, South of Brazil, called Ibirama La Klano, the territory of the Xokleng indigenous group. In 1970 a huge dam was constructed on their land to control flooding downstream in the Itajaí valley, but it ended up flooding their own land. The goal of the Geodesign workshop was to support a meeting in which different actors could co-create ideas to face the vulnerabilities and develop the potentialities of the area. To prepare the workshop, the academic group from UFMG and UDESC worked with the Civil Defense of the State of Santa Catarina, constructing maps of the place and evaluations of areas suitable to receive proposals. The indigenous main executive chief and regional chiefs were invited to the Geodesign workshop, which resulted in ideas representing the values of different sectors of society, arriving at a negotiated design.

Ana Clara Mourão Moura, Francisco Henrique de Oliveira, Thobias Furlanetti, Regina Panceli, Elna Fatima Pires de Oliveira, Carl Steinitz
Aerial Images and Three-Dimensional Models Generated by RPA to Support Geovisualization in Geodesign Workshops

Remotely Piloted Aircraft (RPA) are geotechnological instruments with a good cost-benefit ratio, as they provide quick spatial data capture with satisfactory accuracy and resolution, affording the creation of three-dimensional products and aerial photographs from different perspectives. These data have been used to support geovisualization in geodesign workshops held in socially fragile and poor communities, in which the people of the place have a vast knowledge of their territory but difficulties in working with cartographic representations and map products. This study, therefore, presents the experience gained in three geodesign workshops held in Belo Horizonte, Brazil, in which aerial images in an oblique perspective and three-dimensional models with interactive navigation were used. The results allow us to conclude that the use of fields of view with an oblique perspective on the objects of analysis provides a link between the zenithal cartographic expression and the immersive view of the landscape, which supports the formation of mental maps. That is understood as an essential condition for the promotion of citizen participation in co-design processes, based on geovisualization and the sense of digital inclusion.

Danilo Marques de Magalhães, Ana Clara Mourão Moura
Training Decision-Makers: GEODESIGN Workshop Paving the Way for New Urban Agenda

GEODESIGN represents an effective framework promoting collaborative planning and decision-making as an incremental process based on robust methodological guidance. In this application, GEODESIGN was adopted as a tool for training decision makers in “facing planning challenges deriving from ITI Urban Agenda development” according to “sustainability” and “climate responsive principles”. The case study represents a joint activity realized by the Municipality of Potenza (member of the EU Climate Adaptation Partnership) and the LISUT Laboratory (Engineering School at UNIBAS). The results concern the comprehensive approach in terms of the participation capacity of decision makers without any background in planning disciplines, and they unveiled the weaknesses of the traditional approach, mainly based on “building agreements” without any measurement of spatial evidence or comparison of scenarios.

Francesco Scorza
Opportunities and Challenges of a Geodesign Based Platform for Waste Management in the Circular Economy Perspective

In the framework of the circular economy, the operational integration of metabolic flow management and the co-planning of waste territories, also defined as wastescapes, has been implemented in the Geodesign Decision Support Environment (GDSE) platform, a tool developed in the H2020 REPAiR project. The GDSE can be described as a Decision Support System (DSS) to manage metabolic flows in a spatial, GIS-based environment. Born as a tool for resource management in peri-urban areas, the GDSE was also configured as a repository of heterogeneous information. By comparing the main steps of the Geodesign Hub software and the GDSE platform, the paper highlights the results obtained so far from the implementation of the GDSE and the main hypotheses for future development. The primary purpose is to integrate wastescape regeneration with the management of metabolic flows by realising the evaluation maps required by the Geodesign Hub software in the GDSE, to trigger a holistic regenerative process for the circular city.

Maria Cerreta, Chiara Mazzarella, Maria Somma
Brazilian Geodesign Platform: WebGis & SDI & Geodesign as Co-creation and Geo-Collaboration

The paper presents the main motivations for the development of a Brazilian platform for Geodesign, based on adaptations to observed needs and on a review of processes and facilities to face the challenges of spatial inequality and complexity. The study is motivated by the analysis of difficulties and criticisms of the applied framework, tested in a robust number of workshops. It starts with a literature review to understand the main values and keywords constructed over time in the use of geospatial information technologies in planning and, as a result, defines the main resources and facilities that should be considered for a new format of Geodesign. The new platform itself is then presented, and the paper illustrates and discusses the proposed framework according to four steps: Reading Enrichment, Dialogues as Creation of Ideas, Voting as Selection of Ideas, and Statistics as Final Decision. It compares and justifies the new framework with respect to the main models generally used and discusses possible developments in the near future.

Ana Clara Mourão Moura, Christian Rezende Freitas

International Workshop on Geographical Analysis, Urban Modeling, Spatial Statistics (GEOG-AND-MOD 2020)

Frontmatter
A Text Mining Analysis on Big Data Extracted from Social Media

The aim of this paper is to analyze data derived from Social Media. In our time, people and devices constantly generate data, and the network produces location and other data that keep services running and ready to use at every moment. This rapid development in the availability of and access to data has created the need for better analysis techniques to understand the various phenomena. We consider Text Mining and Sentiment Analysis of data extracted from Social Networks. The application concerns a Text Mining Analysis and a Sentiment Analysis on Twitter, in particular on tweets regarding Coronavirus and SARS.
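
A minimal sketch of the sentiment step on tweets; VADER stands in for whichever lexicon or classifier the study used, and the tweets below are made up.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

tweets = ["Hospitals are coping well with the coronavirus outbreak",
          "Terrified by the new SARS-like virus spreading so fast"]
for t in tweets:
    # compound score in [-1, 1]: negative .. positive
    print(sia.polarity_scores(t)["compound"], t)
```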

Gabriella Schoier, Giuseppe Borruso, Pietro Tossut
Daily Spatial Footprint of Warsaw Metropolitan Area (Poland) Commuters in Light of Volunteered Geographic Information and Common Factors of Urban Sprawl. A Pilot Study

Urban sprawl directly affects the length of commuting. The acquisition of commuting data is usually based on theoretical (deductive) approaches, on limited individual observation samples, or on indirect proxies such as remote-sensing night-light images. Volunteered Geographic Information (VGI) makes possible a deeper insight into the daily spatial footprint of commuting, which is related to urban sprawl. Data acquired during the collection of VGI reveal new aspects of spatial phenomena, which can be additionally analyzed. VGI data concerning spatial phenomena involve both geotags and acquisition time stamps, which in turn make it possible to infer indirectly the spatial and temporal movement of people. The analysis of the available spatial and temporal VGI data, in the context of nationally surveyed resources (INSPIRE) and confronted with a modelling approach to commuting, is the subject of this pilot study of the Warsaw functional urban area. The results are promising due to, inter alia, the generalization of a huge set of real observations, unlike the theoretical modelling used formerly.

Veranika Kaleyeva, Piotr A. Werner
A Framework for Sustainable Land Planning in ICZM: Cellular Automata Simulation and Landscape Ecology Metrics

In the paper, we present a Planning Framework for Integrated Coastal Zone Management (ICZM). The strengths of the framework are the following: it is an iterative and participatory process; it is scenario-based and model-based; it uses a Spatial Decision Support System (SDSS) as the enabling infrastructure; and the SDSS is “powered” by open data and by data systematically updated by public bodies. The theoretical starting point is that ICZM requires decision support tools to cope with knowledge from multiple sources, interdisciplinarity, and multiple scales (e.g., spatial, temporal, or organizational) [1]. The 2007 Integrated Maritime Policy for the European Union [2] is a key document for understanding the relationship between coastal and marine information and policy implementation. It shows that it is necessary to develop a marine-coastal Decision Support System [3, 4] based on indicators and indices (aggregations of indicators into a synthetic representation), the use of Geographic Information Systems, models, and the multicriteria assessment of scenarios [5, 6]. The system of indices is used to describe the complexity of a coastal system: the geo-ecological level, land processes, human society, the economy, and coastal uses at multiple scales [5, 7]. Multicriteria assessment is a tool to support social and environmental decisions from the perspective of sustainability and strategic planning [8–11]. During the design phase of the SDSS components (basic data, indicators, and models), we performed a review of Land Use/Land Cover change simulation models, the output of which was the choice of the SLEUTH model [12]. The framework was tested on a study area (Veneto Region, Italy), where we coupled SLEUTH with Fragstats [13] for the analysis of landscape ecology metrics.

Andrea Fiduccia, Luisa Cattozzo, Leonardo Filesi, Leonardo Marotta, Luca Gugliermetti
SPACEA: A Custom-Made GIS Toolbox for Basic Marine Spatial Planning Analyses

Marine Spatial Planning (MSP) requires the analysis of the spatial distribution of marine uses and environmental conditions. Such analyses can be carried out with GIS, but standard GIS programs do not feature a toolbox that combines the functionalities most needed for them. The SPACEA toolbox presented here was created to bundle and adapt existing functionalities in one toolbox. SPACEA consists of several script tools that have been designed to be user-friendly and applicable to different analyses for MSP, including the processing of different input layers concerning marine uses and environmental conditions. The main functionalities of SPACEA are exemplified in a fictional case study in the Baltic Sea, where the tools are applied to find potentially suitable areas for mussel farming. The tools feature a user-friendly interface, and more experienced users may also use the provided sample code to run them from the Python window or as a stand-alone script. As such, the tools can be applied by users with different levels of GIS knowledge and experience.

Miriam von Thenen, Henning Sten Hansen, Kerstin S. Schiele
A Method of Identification of Potential Earthquake Source Zones

We propose a machine learning method for mapping potential earthquake source zones (ESZ). We use two hypotheses: (1) the recurrence of strong earthquakes, and (2) the dependence of the sources of strong earthquakes on the properties of the geological environment. The inputs to the problem are an earthquake catalog and a set of spatial fields of geological and geophysical features. We tested the method by identifying the potential ESZ with m ≥ 6.0 for the Caucasus region. The map of the potential earthquake source zones and a geological interpretation of the decision rule are presented.

K. N. Petrov, V. G. Gitis, A. B. Derendyaev
Variography and Morphometry for Classifying Building Centroids: Protocol, Data and Script

Different spatial patterns of urban growth exist, such as infill, edge-expansion, and leapfrog development. This paper presents a methodology, and a corresponding script, that classifies new residential buildings into patterns of urban growth. The script performs a combination of variography and morphometry over building centroids at two different dates. The test data consist of the building centroids of 2002 and 2017 for Centre-Var, a region located in southern France. The different bounding regions, yielded from a series of morphological closings, allow classifying the building centroids that appeared between 2002 and 2017 into different categories of spatial patterns of urban growth. The final classification is made according to the degree of clustering/scattering of new centroids and to their location with respect to existing urban areas. Preliminary results show that this protocol is able to provide useful insights regarding the degree of contribution of each new residential building to the following patterns of urban growth: clustered infill, scattered infill, clustered edge-expansion, scattered edge-expansion, clustered leapfrog, and scattered leapfrog. Open access to the script and to the test region data is provided.

Joan Perez, Alexandre Ornon, Hiroyuki Usui
Analyzing the Driving Factors of Urban Transformation in the Province of Potenza (Basilicata Region-Italy)

The main transformation dynamics in the territory of the province of Potenza (Basilicata region, in the south of Italy) correspond to those of urban sprinkling. The urban sprinkling phenomenon is typical of mainly mountainous internal areas with very low settlement density indices and artificial coverage ratios. The temporal and spatial analysis of the urban sprinkling phenomenon gives a picture of the transformation dynamics of the territory, i.e., the phenomena of fragmentation and compaction of the urban fabric. Through a logistic regression, we analyze the driving factors that affected the dynamics of urban transformation, and specifically the phenomena of fragmentation and compaction, between 1998 and 2013. The two transformation phenomena (the dependent variables Y) are analyzed separately and built on the basis of the variation of the sprinkling index over the analyzed period. In the model, eleven independent variables concerning physical characteristics, proximity analysis, socioeconomic characteristics, and urban policies or constraints have been considered. The result of the logistic regression consists of two probability maps of the change of the dependent variable Y from non-urban to fragmented or compacted. Relative operating characteristic (ROC) indices of 0.85 and 0.84, for compaction and fragmentation respectively, attest to the goodness of fit of the model.
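
A hedged sketch of the modelling step: a logistic regression on driver variables with ROC-based validation. The synthetic features below are placeholders for the paper's eleven drivers (slope, road distance, socioeconomic indicators, etc.).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 11))                # eleven driving factors
logits = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 1, 5000)
y = (logits > 0).astype(int)                   # 1 = cell became fragmented

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]         # probability-of-change map
print("ROC AUC:", round(roc_auc_score(y_te, prob), 3))
```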

Amedeo Ieluzzi, Lucia Saganeiti, Angela Pilogallo, Francesco Scorza, Beniamino Murgante
Designing a Semi-automatic Map Construction Process for the Effective Visualisation of Business Geodata

This paper proposes a process for the semi-automatic construction of thematic maps from business information data. Addressing a non-specialist user audience, the map construction process allows a correct and, at the same time, effective cartographic visualisation for further (geo)visual analysis. By utilising the frequently disregarded geospatial component of existing business mass data, quality map representations facilitate the visual exploration, detection, and analysis of relevant spatial data distributions and structures hitherto unseen in the data. Presently, neither operational procedures nor appropriate software systems, such as BIS, DDS, or GIS, are available in the industry for an effective map representation of the geocoded data. To put economic and business experts in a position to make full use of the geo-coordinates present in the data, an easy-to-handle map construction process is required that exploits the full semantic and spatial potential of business data. Exemplified for an area diagram map, the map construction process discussed here provides the relevant tools and methods for the target audience.

Marion Simon, Hartmut Asche
Constructing Geo-Referenced Virtual City Models from Point Cloud Primitives

This paper presents a novel approach to constructing spatially referenced, multidimensional virtual city models from remotely generated point clouds for areas that lack reliable geographical reference data. A multidimensional point cloud is an unstructured array of single, irregular points in a spatial 3D coordinate system plus a time stamp. If geospatial reference points are available, a point cloud is geo-referenced; geo-referenced point clouds constitute a high-precision reference dataset. Point clouds can be utilised in a variety of applications and are particularly suitable for the representation of surfaces, structures, terrain, and objects. Point clouds are used here to generate a virtual 3D city model representing the complex, granular cityscape of Jerusalem and its centre, the Old City. The generation of point clouds is based on two data acquisition methods: active data capture by laser scanning and passive data collection by photogrammetric methods. In our case, very high-resolution stereo imagery in the visible light and near-infrared bands was systematically acquired in an aerial flight campaign. The spatio-temporal data gathered necessitate further processing to extract the geographical reference and semantic features required at a specific resolution and scale. An insight is given into the processing of an unstructured point cloud to extract and classify the 3D urban fabric and reconstruct its objects. Eventually, customised, precise, and up-to-date geographical datasets can be made available for a defined region at a defined resolution and scale.

Andreas Fricke, Hartmut Asche
Seismic Assessment of Reinforced Concrete Frames: Influence of Shear-Flexure Interaction and Rebar Corrosion

The stock of existing buildings across most European earthquake-prone countries was built before the enforcement of modern seismic design codes. In order to assure uniform levels of safety and reduce the social and economic impact of medium-to-high-intensity earthquakes, costly seismic intervention plans have been proposed. Their application, however, in order to define which buildings should be retrofitted first, requires adequate vulnerability assessment methodologies, able to model the effective non-linear response and to identify the relevant failure modes of the structure. In the case of reinforced concrete (RC) buildings, due to the lack of application of capacity design principles and to aging effects caused by exposure to an aggressive environment, existing structures can exhibit premature failures with a reduction of the available strength and ductility. In the last couple of decades, some state-of-the-art simplified models aiming at capturing the complex interaction between shear and flexural damage mechanisms, as well as the behavior of corroded rebars, have been proposed in the specialized literature and, in some cases, implemented in regulatory building codes and guidelines. This paper presents how those phenomena, which have a significant impact in reducing the element capacity in terms of strength and energy dissipation, can be incorporated in the assessment of structures.

Alessandro Rasulo, Angelo Pelle, Davide Lavorato, Gabriele Fiorentino, Camillo Nuti, Bruno Briseghella
Geovisualization for Energy Planning

In 2018, across 885 urban areas of the EU-28, only 66% of EU cities had, according to the classification of Reckien et al., an A1 (autonomously produced), A2 (produced to comply with national regulations), or A3 (developed for international climate networks) mitigation plan, 26% an adaptation plan, and 17% a joint adaptation and mitigation plan, while about 33% lacked any form of stand-alone local climate plan [1]. Local climate plans are a new, emerging field of application for urban planning, and in this sector appropriate geovisualization techniques could be a useful tool in supporting decisions about actions for local energy and climate plans. Geovisualization can help decision makers and researchers transfer information to stakeholders and citizens. In this work, we discuss a geovisualization approach for energy consumption and renovation scenarios for private and public buildings, delivered in a specific case study: the Municipality of Potenza. This tool allows visualizing energy consumption at the urban scale through a geodatabase including information on individual buildings, with several functions, e.g., identifying urban areas where action should be taken with higher priority. The application to the case study of the Municipality of Potenza is a component of a wider process of developing the Sustainable Energy and Climate Action Plan (SECAP). It shows the potential of geovisualization as a tool to support decision making and the monitoring of actions to be included in the plan.

Luigi Santopietro, Giuseppe Faruolo, Francesco Scorza, Anna Rossi, Marco Tancredi, Angelo Pepe, Michele Giordano
Studying the Spatial Distribution of Volunteered Geographic Data Through a Non-parametric Approach

Nowadays, new knowledge on the immaterial characteristics of surrounding landscapes can easily be produced by relying on volunteer contributions. However, the spatial distribution of the collected data may be influenced by the contributors' locations. Using data sets derived from the administration of a map-based survey, aimed at collecting explicit spatial information on sites perceived as having positive and negative qualities in Friuli Venezia Giulia (Italy), a descriptive analysis and a non-parametric procedure are employed to study the relevance of a respondent's municipality of reference to the mapping activity. The findings indicate that the volunteered geographic data collected in the survey are not uniformly distributed across the study area and that a different spatial relationship exists between the mapped elements and a respondent's residence when the two different attributes of interest are considered. The results underline the importance of considering volunteers' characteristics when engaging local populations in participatory initiatives.

Giorgia Bressan, Gian Pietro Zaccomer, Luca Grassetti
Describing the Residential Valorisation of Urban Space at the Street Level. The French Riviera as Example

There is growing concern regarding the use of relatively coarse units for the aggregation of various kinds of spatial information. Researchers thus suggest that the street segment might be better suited than areal units for carrying out such a task. Furthermore, the street segment has recently become one of the most prominent spatial units, for example, to study street network centrality, retail density, and urban form. In this paper, we thus propose to use the street segment as the unit of analysis for calculating the residential valorisation of urban space. More specifically, we define a protocol that characterises street segments through a measure of central tendency and one of dispersion of prices. Moreover, through Bayesian clustering, it classifies street segments according to the most probable combination of house type and valuation, to provide a picture of local submarkets. We apply this methodology to the housing transactions exchanged in the French Riviera in the period 2008–2017 and observe that the outputs seem to align with the local specificities of the housing market of that region. We suggest that the proposed protocol can be useful as an explorative tool to question and interpret the housing market, in any metropolitan region, at a fine level of spatial granularity.

Alessandro Venerandi, Giovanni Fusco
A Toolset to Estimate the Effects of Human Activities in Maritime Spatial Planning

Marine space is under increasing pressure from human activities. Traditionally, the activities taking place in oceans and seas were related to fishery and to the transport of goods and people. Today, offshore energy production (oil, gas, and wind), aquaculture, and sea-based tourism are important contributors to the global economy. This creates competition and conflicts between the various uses and requires overall regulation and planning. Maritime activities generate pressures on marine ecosystems, and in many areas severe impacts can be observed. Maritime spatial planning is seen as an instrument to manage the seas and oceans in a more sustainable way, but information and tools are needed. The current paper describes a tool to assess the cumulative impacts of maritime activities on marine ecosystems, combined with a tool to assess the conflicts and synergies between these activities.

Henning Sten Hansen, Ida Maria Bonnevie
Towards a High-Fidelity Assessment of Urban Green Spaces Walking Accessibility

Urban Public Green Spaces (UPGS) available at walking distance are a vital component of urban quality of life, of citizens' health, and ultimately of the right to the city. The demand for them has suddenly become even more evident due to the measures of “social distancing” and the restrictions of movement imposed in many countries during the COVID-19 outbreak, showing the importance of public urban parks and green open spaces located near homes and accessible on foot. Hence, the idea of “green self-sufficiency” at the local, neighbourhood, and sub-neighbourhood level has emerged as a relevant objective to pursue. For this purpose, we have constructed a high-fidelity evaluation model to assess the walking accessibility of UPGS at the highly granular spatial scale of street network nodes. The evaluation procedure is based on a novel index built around the concept of distance-cumulative deficit, scoring nodes with respect to all the UPGS available within their catchment area of a slope-corrected walking distance of 2 km. To showcase the possible outputs of the evaluation procedure and their exploratory analyses, we present an application to the city of Cagliari, Italy. In doing so, we argue that the proposed evaluation approach is an advancement over the traditional (density-based) approaches to assessing green area availability, and that it provides an intuitive, flexible, and extendable tool to better evaluate and understand the current and potential accessibility of urban green space, and to support urban planning, policy making, and design.
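
A hedged sketch of scoring street nodes by the green area reachable within a 2 km slope-corrected walk; the slope penalty and the linear distance-decay weight are assumptions, not the paper's exact index.

```python
import networkx as nx

G = nx.Graph()
# edges: (node, node, horizontal length in m, slope as rise/run)
for u, v, length, slope in [(0, 1, 400, 0.00), (1, 2, 600, 0.05),
                            (2, 3, 500, 0.10), (1, 4, 900, 0.02)]:
    G.add_edge(u, v, cost=length * (1 + 5 * abs(slope)))  # slope-corrected

parks = {3: 20000, 4: 5000}      # node -> green area in m^2 (illustrative)

for node in G.nodes:
    # all nodes reachable within a 2 km slope-corrected walk
    dist = nx.single_source_dijkstra_path_length(G, node,
                                                 cutoff=2000, weight="cost")
    score = sum(area * (1 - dist[p] / 2000)   # linear distance decay
                for p, area in parks.items() if p in dist)
    print(f"node {node}: accessibility score = {score:.0f}")
```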

Ivan Blečić, Valeria Saiu, Giuseppe A. Trunfio
Count Regression and Machine Learning Approach for Zero-Inflated Over-Dispersed Count Data. Application to Micro-Retail Distribution and Urban Form

This paper investigates the relationship between urban form and the spatial distribution of micro-retail activities. In the last decades, several works have demonstrated how the configurational properties of the street network and the morphological descriptors of the urban built environment are significantly related to store distribution. However, two main challenges still need to be addressed. On the one hand, the combined effect of different urban form properties should be considered, providing a holistic study of urban form and its relationship to retail patterns. On the other hand, analytical approaches should account for the discrete, skewed, and zero-inflated nature of the micro-retail distribution. To overcome these limitations, this work compares two sophisticated modelling procedures: penalised count regression and machine learning approaches. While the former is specifically conceived to account for the retail count distribution, the latter can capture non-linear behaviours in the data. The two modelling procedures are implemented on the same large dataset of street-based measures describing the urban form of the French Riviera. The outcomes of the two approaches are compared in terms of prediction performance and of the selection frequencies of the most recurrent variables among the implemented models.
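
A hedged sketch of the two model families compared: a zero-inflated count regression versus a non-linear ML regressor, fitted on synthetic store counts per street segment (the penalisation step of the paper is omitted).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                     # urban-form descriptors
lam = np.exp(0.8 * X[:, 0] - 0.5 * X[:, 1])        # count intensity
y = rng.poisson(lam) * (rng.random(2000) > 0.4)    # 40% structural zeros

zip_model = ZeroInflatedPoisson(y, sm.add_constant(X)).fit(disp=False)
rf = RandomForestRegressor(random_state=0).fit(X, y)
print(zip_model.params)                      # interpretable coefficients
print("RF R^2:", round(rf.score(X, y), 3))   # flexible, less interpretable
```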

Alessandro Araldi, Alessandro Venerandi, Giovanni Fusco
Modeling the Determinants of Urban Fragmentation and Compaction Phenomena in the Province of Matera (Basilicata Region - Italy)

The main objective of the present study was to integrate a logistic regression (LR) model and geographic information system (GIS) techniques to analyze urban transformation patterns and investigate the relationship between urban transformation dynamics and their various determinant forces. The case study concerns the territory of the province of Matera, in the region of Basilicata (southern Italy), where the main transformation phenomenon corresponds to the dynamics of urban sprinkling. The definition of the variables corresponding to the dynamics of urban fragmentation and compaction is carried out through spatial analyses of the temporal variation of the sprinkling index. The relationships between the dependent variables (Y), fragmentation and compaction, and the independent variables (X), referring to different factors, are analyzed through two logistic regressions. The time interval considered is 1998-2013, and the determining factors (driving forces) refer to physical characteristics, proximity to roads or cities, socioeconomic factors, and land use policies. The results consist of two maps showing the probability of variation of the dependent variables, whose accuracy is evaluated using the relative operating characteristic (ROC) index.

Giorgia Dotoli, Lucia Saganeiti, Angela Pilogallo, Francesco Scorza, Beniamino Murgante
Geoprofiling in the Context of Civil Security: KDE Process Optimisation for Hotspot Analysis of Massive Emergency Location Data

In the performance of their duties, authorities and organisations with safety and security tasks face major challenges. As a result, there is a growing need to expand the knowledge and skills of security forces in a targeted manner through knowledge-based, systemic and technological solutions. Of particular importance for this inhomogeneous end-user group is the time factor, and thus in general also space, distance, and velocity. Authorities focus on people, goods, and infrastructure in the field of prevention, protection, and rescue. For purposive tactical, strategic, and operational planning, geodata and information about past and ongoing operations dispatched and archived at control centers can be used. For that reason, a rule-based process for the geovisual evaluation of massive spatio-temporal data is developed using geoinformation methods, techniques, and technologies, taking the operational emergency data of fire brigade and rescue services as an example. This contribution to the extension of Kernel Density Estimation (KDE) for hotspot analysis aims to put professional and managerial personnel in a position to create well-founded geoprofiles based on the spatio-temporal location, distribution, and typology of emergency mission hotspots. In doing so, significant data is generated for the neighborhood of the operations in abstract spatial segments and is used to calculate distance measures for the KDE process. The result is a completely derived rule-based KDE process for the geovisual analysis of massive spatio-temporal data for hotspot geoprofiling.
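As a baseline for what the rule-based process optimises, the following sketch builds a plain KDE hotspot surface over emergency locations with a fixed bandwidth; the paper's neighbourhood-derived distance measures are not reproduced, and the input array is hypothetical.

```python
# Minimal sketch of a KDE hotspot surface over emergency locations,
# assuming projected (metric) coordinates; bandwidth is fixed here,
# whereas the paper derives distance measures from the data itself.
import numpy as np
from sklearn.neighbors import KernelDensity

coords = np.load("emergency_xy.npy")      # hypothetical (n, 2) array, metres
kde = KernelDensity(kernel="gaussian", bandwidth=500.0).fit(coords)

# Evaluate the density on a regular grid to obtain a hotspot raster.
xs = np.linspace(coords[:, 0].min(), coords[:, 0].max(), 200)
ys = np.linspace(coords[:, 1].min(), coords[:, 1].max(), 200)
gx, gy = np.meshgrid(xs, ys)
grid = np.column_stack([gx.ravel(), gy.ravel()])
density = np.exp(kde.score_samples(grid)).reshape(gx.shape)
```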

Julia Gonschorek
Assessment of Post Fire Soil Erosion with ESA Sentinel-2 Data and RUSLE Method in Apulia Region (Southern Italy)

Fires are one of the main causes of environmental degradation: they have an impact on flora and fauna, can strongly influence ecological and geomorphological processes, and can permanently compromise the functionality of the ecosystems and soils they affect. The severity of the fire event influences the surface hydrological response and the consequent loss of soil. Precipitation on basins recently affected by fires produces an increase in runoff, which commonly transports and deposits large volumes of sediment, both inside and downstream of the burned area. In the years following a fire, the loss of soil is very high and soil degradation processes are much greater than in the pre-event period. The aim of this study is to evaluate the potential annual soil loss due to post-fire erosion using remote sensing techniques, the RUSLE (Revised Universal Soil Loss Equation) methodology and GIS techniques for nine different events that occurred in 2019 in the northern part of the Apulia Region (Southern Italy). Geographic Information System techniques and remote sensing data have been adopted to study the post-fire soil erosion risk. Satellite images are well suited to environmental monitoring as they provide high-resolution multispectral optical data; in fact, they are able to monitor the development of vegetation by assessing the water content and changes in chlorophyll levels. This study can be useful to spatial planning authorities as a tool for assessing and monitoring eroded soil in areas affected by fires, representing a useful instrument for land management.
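The RUSLE overlay itself is a cell-by-cell product of factor rasters, A = R · K · LS · C · P. The sketch below shows that computation with numpy; file names are hypothetical, and in the post-fire setting the C (and possibly P) factor would be adjusted from burn-severity mapping.

```python
# Minimal sketch of the RUSLE overlay, A = R * K * LS * C * P, computed
# cell-by-cell; raster file names are hypothetical stand-ins.
import numpy as np

R = np.load("rainfall_erosivity.npy")       # MJ mm ha^-1 h^-1 yr^-1
K = np.load("soil_erodibility.npy")         # t ha h ha^-1 MJ^-1 mm^-1
LS = np.load("slope_length_steepness.npy")  # dimensionless
C = np.load("cover_post_fire.npy")          # dimensionless, burn-adjusted
P = np.load("support_practice.npy")         # dimensionless

A = R * K * LS * C * P                      # potential soil loss, t ha^-1 yr^-1
print("mean annual soil loss:", np.nanmean(A))
```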

Valentina Santarsiero, Gabriele Nolè, Antonio Lanorte, Biagio Tucci, Lucia Saganeiti, Angela Pilogallo, Francesco Scorza, Beniamino Murgante

International Workshop on Geomatics for Resource Monitoring and Management (GRMM 2020)

Frontmatter
Coupled Use of Hydrologic-Hydraulic Model and Geomorphological Descriptors for Flood-Prone Areas Evaluation: A Case Study of Lama Lamasinata

The delineation of flood risk maps is a fundamental step in planning the management of urban areas. This evaluation can be carried out by hydraulic/hydrological modelling, which allows water depths and the related flooded areas to be obtained. In this way, it is possible to mitigate and contain the catastrophic effects of floods, which have become more frequent in recent decades. These events result in losses of both human lives and assets. In addition, the growing availability of high-resolution topographic data (i.e. Digital Terrain Models - DTM), due to new technologies for measuring surface elevation, has given a strong impulse to the development of new techniques capable of providing rapid and reliable identification of flood susceptibility. In this study, two methodologies for mapping flood-prone areas in karst ephemeral streams in the Puglia region (Southern Italy) are compared, highlighting how DTM-based technologies are a precious source of information in data-poor environments. Results are in perfect agreement with previous studies on similar areas, showing the marked influence of topography in defining flood-prone areas. Such studies can also be useful in investigating a wider range of hydrology-related aspects, in particular with respect to the social behavior of communities.

Beatrice Lioi, Andrea Gioia, Vincenzo Totaro, Gabriella Balacco, Vito Iacobellis, Giancarlo Chiaia
Combined Photogrammetric and Laser Scanning Survey to Support Fluvial Sediment Transport Analyses

This paper presents the contribution of digital surveying techniques to the estimation of fluvial sediment transport. The aim is to create predictive models able to minimize or reduce the hydrogeological hazard, especially before or during critical meteorological events. The case study is the Caldone stream, a watercourse located in the municipality of Lecco (Italy). Structure-from-Motion photogrammetry and terrestrial laser scanning techniques were used to collect metric data about the morphology of the riverbed. Data acquisition was carried out to create a digital model of the visible and submerged parts of the riverbed. Then, a second area with a sedimentation pool was selected to monitor the variation of depth induced by the progressive accumulation of sediments. As the pool is constantly covered by water, a low-cost bathymetric drone was used, coupling the measured depth values with total station measurements used to track the drone. Finally, the paper describes the implementation of an on-line data delivery platform that facilitates the retrieval of heterogeneous geospatial data, which are used in the developed numerical model for sediment transport. This service aims at providing simplified access to specific map layers without requiring knowledge about data formats and reference systems.

Luigi Barazzetti, Riccardo Valente, Fabio Roncoroni, Mattia Previtali, Marco Scaioni
Road Infrastructure Monitoring: An Experimental Geomatic Integrated System

Road infrastructure systems are critical in many regions of Italy, counting thousands of bridges and viaducts built over several decades. A monitoring system is therefore necessary to track the health of these bridges and to indicate whether they need maintenance. Different parameters affect the health of an infrastructure, but it would be very difficult to install a network of sensors of various kinds on each viaduct. For this purpose, we aim to employ geomatics technologies to monitor infrastructures for early-warning purposes, introducing automation in the data acquisition and processing phases. This study describes an experimental sensor network system based on long-term real-time monitoring, in which an adaptive neuro-fuzzy system is used to predict the deformations of GPS bridge-monitoring points. The proposed system integrates different data (used to describe the various behaviour scenarios on the structural model) and then reworks them through machine learning techniques, in order to train the network so that, once only the monitored parameters (displacements) have been entered as input data, it can return an alert parameter. The purpose is thus to develop a real-time predictive risk system that can replicate various scenarios and alert in case of imminent hazards. Particularly interesting is the experimentation conducted on the possibility of transmitting an alert parameter in real time (through an experimental control unit), obtained by predicting the behavior of the structure using only displacement data during monitoring.

Vincenzo Barrile, Antonino Fotia, Ernesto Bernardo, Giuliana Bilotta, Antonino Modafferi
Geospatial Tools in Support of Urban Planning: A Possible Role of Historical Maps in Programming a Sustainable Future for Cities

In urban planning, a numerical and spatially based approach is expected to lead to the “best” choice. In this work a GIS-based procedure is proposed to model territorial dynamics by comparing maps of two different periods (1830 and 2000). The study area is located in the urban fringe of Torino (NW Italy), which underwent important changes especially in the last 60 years. A workflow was defined and applied, based on a multi-criteria approach implemented in GIS. With reference to the existing maps, the strength and direction of the forces opposing urban to rural/semi-natural surroundings were mapped and described through the following scheme: a) vectorization and qualification of both impacted (rural) and impacting (urban) landscape elements; b) implementation of spatially dependent functions representing the strength and direction of urban pushes against rural areas; c) qualification of the rural areas most exposed to urban growth. Accordingly, some maps useful for reading the urban growth dynamics at play were generated and some applications proposed. Nevertheless, some limitations persist: the proposed methodology is based on simplified hypotheses, mainly related to the definition of spatial indices that somehow depend on the type of information the available maps contain. A second limitation is related to the large component of subjectivity involved in extracting the starting information from the available maps and in assigning weights.

E. Borgogno-Mondino, A. Lessio
Indoor Positioning Methods – A Short Review and First Tests Using a Robotic Platform for Tunnel Monitoring

The aim of this work is to provide a review of the main indoor positioning methodologies, in order to highlight their strengths and weaknesses, and to explore the potential of their integration into an Unmanned Ground Vehicle built for tunnel monitoring purposes. A robotic platform, named Bulldog, has been designed and assembled by Sipal S.p.a., with the support of the research group Applied Geomatic laboratory (AGlab) of the Politecnico di Bari in the definition of the data processing pipeline. Preliminary results show that the integration of indoor positioning techniques in the Bulldog platform represents an important advance towards accurate monitoring and analysis of a tunnel during the construction stage, allowing a fast and reliable survey of the indoor environment and requiring, at this prototypal stage of development, only remote supervision by the operator. Expected improvements will make it possible to carry out tunnel monitoring activities in a fully autonomous mode, benefiting both the safety of the people involved in the construction works and the accuracy of the acquired dataset.

Alberico Sonnessa, Mirko Saponaro, Vincenzo Saverio Alfio, Alessandra Capolupo, Adriano Turso, Eufemia Tarantino
Application of the Self-organizing Map (SOM) to Characterize Nutrient Urban Runoff

Urban stormwater runoff is considered worldwide as one of the most critical sources of diffuse pollution, since it transports contaminants that threaten the quality of receiving water bodies and harm the aquatic ecosystem. Therefore, a thorough analysis of nutrient build-up and wash-off from impervious surfaces is crucial for effective stormwater-treatment design. In this study, the self-organizing map (SOM) method was used to simplify a complex dataset containing precipitation, flow-rate, and water-quality data, and to identify patterns among these variables that help explain the main features affecting the processes of nutrient build-up and wash-off from urban areas. Antecedent dry weather, among the rainfall-related characteristics, and sediment transport turned out to be the most significant factors in nutrient urban runoff simulations. The outcomes of this work will contribute to facilitating informed decision making in the design of management strategies to reduce pollution impacts on receiving waters and, consequently, to protect the surrounding ecological environment.
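A minimal sketch of the SOM step, using the open-source minisom package as a stand-in for whatever implementation the study used; the event table, its scaling and the grid size are hypothetical.

```python
# Minimal sketch of SOM clustering of runoff events with the minisom
# package; the event table is a hypothetical stand-in for the
# precipitation, flow-rate and water-quality variables.
import numpy as np
from minisom import MiniSom

events = np.load("runoff_events.npy")   # (n_events, n_variables), scaled 0-1
som = MiniSom(6, 6, events.shape[1], sigma=1.0, learning_rate=0.5,
              random_seed=0)
som.train_random(events, 5000)

# Each event maps to its best-matching unit; co-located events share
# similar build-up/wash-off behaviour, revealing the driving variables.
bmus = [som.winner(e) for e in events]
```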

Angela Gorgoglione, Alberto Castro, Andrea Gioia, Vito Iacobellis
Parallel Development of Comparable Photogrammetric Workflows Based on UAV Data Inside SW Platforms

A wide range of industrial applications benefits from the accessibility of image-based techniques for three-dimensional modelling of multi-scale objects. In the last decade, along with the technological progress mainly achieved through the use of Unmanned Aerial Vehicles (UAVs), there has been an exponential growth of software platforms able to return photogrammetric products. On the other hand, the different levels of final product accuracy resulting from the adoption of different processing approaches in different software packages have not yet been fully understood. To date, there is no validation analysis in the literature focusing on the comparability of such products, not even in relation to the workflows commonly allowed inside the various software platforms. The lack of detailed information about the algorithms implemented in the licensed platforms makes the interpretation even more complex. This work therefore aims to provide a comparative evaluation of three photogrammetric software packages commonly used in the industrial field, in order to obtain coherent, if not exactly congruent, results. After structuring the overall processing workflow, the processing pipelines were accurately parameterized to make them comparable in both the licensed and the open-source packages. For the best interpretation of the results derived from the point clouds generated from the same image dataset, the root-mean-square error (RMSE) values obtained when georeferencing the models were analyzed as the number of GCPs varied. The tests carried out investigated the elements shared by the tested platforms, with the purpose of supporting future studies in defining a unique index for the accuracy of final products.
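The RMSE comparison itself is straightforward once per-point residuals have been exported from each package; the following sketch assumes hypothetical residual files, one per software run.

```python
# Minimal sketch of the RMSE comparison across software outputs, assuming
# per-point residuals at check points exported from each package.
import numpy as np

def rmse(residuals):
    return float(np.sqrt(np.mean(np.square(residuals))))

# Hypothetical residuals (metres) at check points for each run.
for name in ("package_A", "package_B", "package_C"):
    res = np.load(f"{name}_checkpoint_residuals.npy")  # (n_points, 3): X, Y, Z
    print(name, [rmse(res[:, i]) for i in range(3)])
```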

Mirko Saponaro, Adriano Turso, Eufemia Tarantino
Road Cadastre: An Innovative System to Update Information from Big Data Elaboration

The proposed research activity is based on the study and development of advanced monitoring techniques for the inspection and mapping of road infrastructures. Data detection (initial and periodic) is one of the most important phases of the process to be followed for knowing the current state of a road infrastructure, and it is fundamental for design and intervention choices. This operation can be done both with traditional tools (such as GNSS receivers, motorized total stations and 3D laser scanners) and with innovative ones (such as remote sensing, mobile mapping system vehicles or road drones and APAs). Recently, technological development has offered the possibility of using tools that allow continuous detection of the object to be investigated, with the possibility of having multiple sensors at the same time, enabling high-performance surveys. The aim of the research is the design and implementation of an innovative measurement and sensor system to be mounted on technological systems for data acquisition. The research also includes the implementation of dedicated algorithms for the management of the large amount of georeferenced data obtained and their representation on GIS (Geographic Information System) platforms as “open and updatable” thematic cartography. In this context, the establishment and update of the Road Cadastre is also included, intended (in our application) as a computer tool for storing, querying, managing and visualizing all the data that the owner/manager acquires on its road network. In it, it will be possible to represent the geometric characteristics of the roads and their appurtenances, as well as the permanent installations and services related to the needs of traffic circulation; in this way we can obtain a continuously updated database that allows rapid selective searches by topic.

Vincenzo Barrile, Antonino Fotia, Ernesto Bernardo, Giuliana Bilotta
Land-Cover Mapping of Agricultural Areas Using Machine Learning in Google Earth Engine

Land-cover mapping is critically needed in land-use planning and policy making. Compared to other techniques, Google Earth Engine (GEE) offers free cloud access to satellite imagery together with high computation capabilities. In this context, this article examines machine learning with GEE for land-cover mapping. For this purpose, a five-phase procedure is applied: (1) imagery selection and pre-processing, (2) selection of the classes and training samples, (3) classification, (4) post-classification, and (5) validation. The study region is located in the San Salvador basin (Uruguay), which is under agricultural intensification. As a result, the 1990 land-cover map of the San Salvador basin is produced. The new map shows good agreement with past agriculture censuses and reveals the transformation of grassland to cropland in the period 1990–2018.
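The classification phase can be sketched with the GEE Python API as follows; the training asset path, band choice and class property are hypothetical, and the paper's pre- and post-classification steps are omitted.

```python
# Minimal sketch of supervised classification in the GEE Python API;
# asset path, bands and class property are hypothetical stand-ins.
import ee
ee.Initialize()

image = (ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")
         .filterDate("1990-01-01", "1990-12-31")
         .median()
         .select(["SR_B1", "SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B7"]))
samples = ee.FeatureCollection("users/example/training")  # hypothetical asset

training = image.sampleRegions(collection=samples,
                               properties=["landcover"], scale=30)
classifier = (ee.Classifier.smileRandomForest(100)
              .train(features=training,
                     classProperty="landcover",
                     inputProperties=image.bandNames()))
classified = image.classify(classifier)
```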

Florencia Hastings, Ignacio Fuentes, Mario Perez-Bidegain, Rafael Navas, Angela Gorgoglione
A Methodological Proposal to Support Estimation of Damages from Hailstorms Based on Copernicus Sentinel 2 Data Time Series

Hail is one of the risks that most frightens farmers and one of the few climate-related phenomena currently insured. In recent years, a significant increase in adverse events affecting crops has occurred, highlighting that the ordinary strategies of insurance companies should migrate towards a more dynamic management. In this work a prototype service based on remotely sensed data is presented, aimed at supporting the evaluation of hail impacts on crops by mapping and qualifying areas damaged by hail. Further, a comparison was made to test the effectiveness of approaches based on short-term (i.e. images acquired immediately after the event) and long-term (i.e. images acquired close to crop harvest) analysis. The investigation was solicited by the Reale Mutua insurance company and focused on a strong hailstorm that occurred on 6th July 2019 in the Vercelli province (Piemonte - NW Italy). The analysis was based on Copernicus Sentinel-2 Level 2A imagery. A time series of 29 NDVI maps was generated for the 2019 growing season (from March to October) and analyzed at pixel level, looking for NDVI trend anomalies possibly related to crop damage. The phenological behavior of damaged crops (the local NDVI temporal profile) was compared with that of unharmed fields to verify and assess the impact of the phenomenon. Results showed evident anomalies along the local NDVI temporal profile of damaged crop pixels, permitting a fine mapping of the affected areas. Surprisingly, the short- and long-term approaches led to different conclusions: some areas, appearing significantly damaged immediately after the event, showed vegetative recovery as the growing season proceeded (temporary damage), while in others the damage detected after the event never improved (permanent damage). This new information could prompt a revision of the ordinary insurance procedures currently used by companies to certify and quantify crop damage after adverse events. It can therefore be said that the high temporal resolution of the Copernicus Sentinel-2 mission can significantly contribute to improving evaluation procedures in the insurance sector by introducing temporal variables.
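Recall that NDVI = (NIR − Red) / (NIR + Red). A minimal per-pixel sketch of the short-term versus long-term test might look as follows; the stack, the event index and the thresholds are all illustrative assumptions, not the paper's values.

```python
# Minimal sketch of the per-pixel anomaly test on an NDVI time series,
# assuming a (29, rows, cols) stack of Sentinel-2 NDVI maps; the event
# index and thresholds are illustrative only.
import numpy as np

ndvi = np.load("ndvi_stack_2019.npy")   # hypothetical (29, rows, cols)
event = 14                              # first acquisition after 6 July 2019

pre = ndvi[:event].mean(axis=0)
post = ndvi[event]
drop = pre - post                       # sudden NDVI loss right after the hail

damaged_short_term = drop > 0.2         # apparent damage just after the event
recovered = ndvi[-3:].mean(axis=0) > 0.9 * pre
temporary = damaged_short_term & recovered    # vegetative recovery by harvest
permanent = damaged_short_term & ~recovered   # damage persists to harvest
```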

F. Sarvia, S. De Petris, E. Borgogno-Mondino
A Multidisciplinary Approach for Multi-risk Analysis and Monitoring of Influence of SODs and RODs on Historic Centres: The ResCUDE Project

This paper describes the ResCUDE project, developed by the Department of Civil, Environmental, Land, Construction and Chemistry (DICATECh) of the Polytechnic University of Bari under the Attraction and International Mobility grant of the Italian Ministry of Education, University and Research. The project focuses on the evaluation of the effects of Slow-Onset Disasters (SODs) and Rapid-Onset Disasters (RODs) on historic town centres. To this end, an integrated approach based on innovative geomatics, building techniques and advanced behavioural models is being applied to the old town built areas of Ascoli Satriano (FG) and Molfetta (BA) in the Apulia Region (Italy). Over the next three years, the ResCUDE project will make it possible to perform in-depth analyses of the historic built environment of the identified case studies, fostering the processes of its knowledge, assessment, control, management and design in connection with the risks deriving from ROD and SOD events. The expected outputs will be useful to define possible scenarios for civil defence purposes and to undertake actions aimed at risk mitigation.

Alberico Sonnessa, Elena Cantatore, Dario Esposito, Francesco Fiorito
Road Extraction for Emergencies from Satellite Imagery

After earthquakes, international and national organizations must overcome many challenges in rescue operations. Among these, knowledge of the territory and of the roads is fundamental for international aid. The maps that volunteers make are a valuable asset, showing the roads in the area affected by the seismic events, knowledge which is necessary to bring rescue. This was very helpful during many earthquakes, as in Haiti (on 2010-01-12) and in Nepal (on 2015-04-25), to support the humanitarian organizations. Many volunteers can contribute remotely to mapping little-known or inaccessible regions through crowdsourcing actions, by tracing maps from satellite imagery or aerial photographs even while staying far from the affected site. This research, still in progress, aims at rapidly extracting roads through so-called Object-Based Image Analysis (OBIA), deriving them from satellite data, semi-automatically or automatically, with a segmentation that starts from concepts of Mathematical Morphology. We compared the result with a classification in ENVI and, using an algorithm in GIS, we verified the validity of the method. The good results obtained encourage further research on fast techniques for map integration in humanitarian emergencies; moreover, the results were integrated into OpenStreetMap.
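To illustrate the morphological flavour of the approach, here is a small sketch that cleans and thins a binary road-candidate raster with scikit-image; the input mask is a hypothetical output of a prior classification, and the OBIA segmentation itself is not reproduced.

```python
# Minimal sketch of morphology-based cleaning and thinning of a binary
# "road candidate" raster; the mask is a hypothetical stand-in.
import numpy as np
from skimage import morphology

roads = np.load("road_candidates.npy").astype(bool)          # hypothetical mask

roads = morphology.remove_small_objects(roads, min_size=64)  # drop speckle
roads = morphology.binary_closing(roads, morphology.disk(2)) # bridge small gaps
centerlines = morphology.skeletonize(roads)                  # 1-px road axes
```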

Vincenzo Barrile, Giuliana Bilotta, Antonino Fotia, Ernesto Bernardo
Extracting Land Cover Data Using GEE: A Review of the Classification Indices

Land Use/Land Cover (LU/LC) data include most of the information needed for tackling many environmental issues. Remote sensing is widely recognized as the most significant way to extract them through the application of various techniques. Among the several classification approaches, the index-based method has been recognized as the best one to gather LU/LC information from different image sources. The present work is intended to assess its performance exploiting the great potential of Google Earth Engine (GEE), a cloud-processing environment introduced by Google to store and handle a large amount of information. Twelve atmospherically corrected Landsat satellite images were collected over the experimental site of Siponto, in Southern Italy. Once the cloud-masking procedure was completed, a large number of indices were implemented and compared in the GEE platform to detect sparse and dense vegetation, water, bare soils and built-up areas. Among the tested algorithms, only the NDBaI2, CVI, WI2015, SwiRed and STRed indices showed satisfying performance. Although NDBaI2 was able to extract all the main LU/LC categories with a high Overall Accuracy (OA) (82.59%), the other mentioned indices presented a higher accuracy than the first one but were able to identify just a few classes. An interesting performance is shown by the STRed index, since it has a very high OA and can extract mining areas, water and green zones. GEE appeared to be the best solution to manage geospatial big data.
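Index-based classification in GEE typically reduces to band ratios and thresholds. The sketch below uses a common NDVI/NDWI pair as a stand-in for the more specialised indices tested in the paper; the collection, point, bands and thresholds are illustrative assumptions.

```python
# Minimal sketch of index-based masking in GEE; the indices and
# thresholds stand in for the larger set tested in the paper.
import ee
ee.Initialize()

img = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
       .filterDate("2019-01-01", "2019-12-31")
       .filterBounds(ee.Geometry.Point(15.9, 41.6))   # near Siponto, approx.
       .median())

ndvi = img.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")
ndwi = img.normalizedDifference(["SR_B3", "SR_B5"]).rename("NDWI")

vegetation = ndvi.gt(0.3)   # illustrative threshold, not the paper's
water = ndwi.gt(0.1)
```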

Alessandra Capolupo, Cristina Monterisi, Giacomo Caporusso, Eufemia Tarantino
Post-processing of Pixel and Object-Based Land Cover Classifications of Very High Spatial Resolution Images

The state of the art offers plenty of classification methods. Pixel-based methods include the most traditional ones. Although these achieved high accuracy when classifying remote sensing images, some limits emerged with the advent of very high-resolution images, which enhanced the spectral heterogeneity within a class. Therefore, in the last decade, new classification methods capable of overcoming these limits have undergone considerable development. Within this research, we compared the performance of a pixel-based and an object-based classification method: Random Forests (RF) and Object-Based Image Analysis (OBIA), respectively. Their ability to quantify the extension and the perimeter of the elements of each class was evaluated through some performance indices. Algorithm parameters were calibrated on a subset and then applied to the whole image. Since these algorithms perform accurately in quantifying the areas of the elements, but worse when considering the perimeter lengths, the aim of this research was to set up post-processing techniques to improve this latter performance in particular. The algorithms were applied to peculiar classes of an area comprising the Isole dello Stagnone di Marsala oriented natural reserve, in the north-western corner of Sicily, with salt pans and agricultural settlements. The area was covered by a WorldView-2 multispectral image consisting of eight spectral bands spanning from visible to near-infrared wavelengths and characterized by a spatial resolution of two meters. Both classification algorithms, and especially RF, did not quantify object perimeters accurately. The post-processing algorithm improved the estimates, which however remained more accurate for OBIA than for RF.

Tommaso Sarzana, Antonino Maltese, Alessandra Capolupo, Eufemia Tarantino
Evaluation of Changes in the City Fabric Using Multispectral Multi-temporal Geospatial Data: Case Study of Milan, Italy

In recent decades the global effects of climate change have called for a more sustainable approach to thinking about and planning our cities, making them more inclusive, safe and resilient. In terms of consumption of natural resources and pollution, cities are seen as the entities with the most significant impact on the natural environment. Strategic policies focused on tackling the challenges induced by climate change suggest, in fact, the necessity of starting from the management and operating models of the cities themselves. This study illustrates an initial evaluation of parameters for urban regeneration studies using optical multispectral satellite imagery from the Landsat-5, Landsat-8 and Sentinel-2 missions. The changes in land occupation and urban density are the first aspects chosen to be examined for the period 1985–2020. The focus was on possible modifications that occurred on the occasion of Milano Expo 2015. The paper first explores the known best band combinations for observation of urban fabric. The suggestions derived were then calibrated with reference to ground truth data, and the image pairs over the 35-year span were built with the selected bands. Finally, all image pairs were processed with Principal Component Analysis in order to identify possible “hot-spots” of significant change. The results found for the image pair 2006–2015 were explored in detail and checked against official orthophotos. Monitoring changes in urban fabric using multispectral optical imagery can provide valuable insights for the further evaluation of single urban regeneration interventions. Such contributions could be considered in urban planning policy processes in a more systematic manner.
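The PCA step of such change analyses is often run on the stacked bands of a co-registered image pair, with change tending to concentrate in the minor components. The sketch below assumes hypothetical pre-stacked arrays; it is not the study's exact pipeline.

```python
# Minimal sketch of PCA-based change enhancement on a co-registered
# image pair; file names and shapes are hypothetical.
import numpy as np
from sklearn.decomposition import PCA

t1 = np.load("milan_2006_bands.npy")   # (bands, rows, cols)
t2 = np.load("milan_2015_bands.npy")   # same grid and band set
stack = np.concatenate([t1, t2]).reshape(t1.shape[0] * 2, -1).T  # pixels x bands

pcs = PCA(n_components=4).fit_transform(stack)
change_map = pcs[:, -1].reshape(t1.shape[1:])  # minor component: candidate changes
```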

Branka Cuca
Supporting Assessment of Forest Burned Areas by Aerial Photogrammetry: The Susa Valley (NW Italy) Fires of Autumn 2017

In October 2017, a large wildfire occurred in the Susa valley (Italian Western Alps), affecting wide areas of mixed forests (Pinus sylvestris L., Fagus sylvatica L., Quercus pubescens Willd.) with a spot pattern. A few days after the event an aerial survey was carried out with an RGB camera (Sony ILCE-7RM2 a7R II), with the aim of testing digital-photogrammetry-based 3D rapid mapping of fire effects. The flight altitude was about 800 m above ground level (AGL), resulting in an average image GSD of about 0.2 m. Image block adjustment was performed in Agisoft PhotoScan v. 1.2.4 using 18 ground control points recognized over true-color orthoimages (GSD = 0.4 m). Height values of GCPs were obtained from a 5 m grid size DTM. Both orthoimages and DTM were obtained for free from the Piemonte Region Cartographic Office (ICE dataset 2010). A point cloud, having an average density of 7 pt/m2 and covering 14 km2, was generated, filtered and regularized to generate the corresponding DSM (Digital Surface Model) with a grid size of 0.5 m. With reference to the above-mentioned ICE DTM, a Canopy Height Model (CHM) was generated by grid differencing with a grid size of 0.5 m. A true-orthoimage was also generated, with a GSD of 0.5 m. The latter was used to map burned areas by a pixel-based unsupervised classification approach operating on the pseudo-GNDVI image, previously computed from the native red and green bands (no radiometric calibration was applied to convert the raw digital numbers to reflectance). Results were compared with two official datasets generated after the event from satellite data, one produced by the Piemonte Region and the other by the Copernicus Emergency Management Service. In order to test differences between burned and unburned areas, point density, point spacing and canopy heights were computed and compared, looking for evidence of geometrical differences possibly characterizing the burned areas with respect to the unburned ones. Results showed no significant differences in point density and point spacing between burned and unburned areas. There was a significant difference in the distribution of CHM minimum values between burned and unburned areas, while the distribution of maximum values did not change significantly, indicating that fire changes crown structure while tree height remains unchanged. These results suggest that aerial photogrammetry can detect fire effects on forests with higher accuracy than the ordinary approaches used in forest disturbance ecology.
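The CHM computation by grid differencing is simply CHM = DSM − DTM once both rasters share the same grid; a minimal sketch, with hypothetical file names:

```python
# Minimal sketch of the CHM computation by grid differencing, assuming
# the photogrammetric DSM has been resampled onto the grid of the
# reference DTM; file names are hypothetical.
import numpy as np

dsm = np.load("dsm_0p5m.npy")             # photogrammetric surface heights
dtm = np.load("dtm_resampled_0p5m.npy")   # ICE ground heights, same grid

chm = dsm - dtm                           # canopy height model
chm = np.clip(chm, 0, None)               # negative values treated as noise
```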

S. De Petris, E. J. Momo, E. Borgogno-Mondino
Urban Geology: The Geological Instability of the Apulian Historical Centers

The tourist success of the Apulian region is certainly associated with its great landscape and architectural heritage. The union of these aspects is particularly evident in the territory and in the urban centers of Salento, Murgia and the surrounding coastal territory. The conservation and sustainability of these places, however, is often confronted with the most invasive forms of hydrogeological instability (earthquakes, floods, landslides, etc.). In the Apulian urban centers, instability phenomena are sometimes observed that are much less evident than those mentioned, but more subtle and equally invasive and harmful: for example subsidence, sinkholes and karst. In order to avoid such effects when planning and growing an urban environment, it is necessary for decision-makers, engineers, planners and managers to take into account the physical parameters of the urban area, as well as its susceptibility to natural hazards. The geology and geomorphology of an area are crucial to guarantee the sustainable management of the territory and, consequently, the protection of human life in urban areas. With this work we illustrate some examples of hydrogeological instabilities that have been observed in the historical centers of some Apulian cities and that can significantly affect strategies for preventing damage to property and people. The cases of hydrogeological subsidence in Acquaviva delle Fonti, Castellana Grotte and Cutrofiano are described and analyzed.

Alessandro Reina, Maristella Loi
Satellite Stereo Data Comprehensive Benchmark for DSM Extraction

The quite recent availability of satellite stereo pairs allows users to extract three-dimensional data that can be used in different domains of application, such as urban planning, energy, emergency management, etc. This research paper aims to extract digital surface models (DSM) from satellite stereo pairs acquired by three different satellites (Deimos-2, Pléiades-1 and WorldView-3) over the area of the city of Turin (Italy). The results are then assessed in terms of geometric accuracy, comparing them with a cadastral point height dataset used as a benchmark. The comparison, in terms of height difference values (between the DSM and the benchmark), is calculated on a set of sample points. Only two of the generated DSMs guaranteed a height accuracy level high enough for the domains of application, such as updating existing cartography, emergency management, building damage assessment, and roof slope and incoming solar radiation assessment. Further developments will investigate different blending techniques and software that could provide more accurate results.

G. Mansueto, P. Boccardo, A. Ajmar

International Workshop on Software Quality (SQ 2020)

Frontmatter
Cross-Project Vulnerability Prediction Based on Software Metrics and Deep Learning

Vulnerability prediction constitutes a mechanism that enables the identification and mitigation of software vulnerabilities early enough in the development cycle, improving the security of software products, which is an important quality attribute according to ISO/IEC 25010. Although existing vulnerability prediction models have demonstrated sufficient accuracy in predicting the occurrence of vulnerabilities in the software projects with which they have been trained, they have failed to demonstrate sufficient accuracy in cross-project prediction. To this end, in the present paper we investigate whether the adoption of deep learning along with software metrics may lead to more accurate cross-project vulnerability prediction. For this purpose, several machine learning (including deep learning) models are constructed, evaluated, and compared based on a dataset of popular real-world PHP software applications. Feature selection is also applied to examine whether it has an impact on cross-project prediction. The results of our analysis indicate that the adoption of software metrics and deep learning may result in vulnerability prediction models with sufficient performance in cross-project vulnerability prediction. Another interesting conclusion is that the performance of the models in cross-project prediction is enhanced when the projects exhibit similar characteristics with respect to their software metrics.
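The essence of the cross-project protocol is training on one set of projects and testing on an unseen one. As a sketch, a small multilayer perceptron on metric vectors stands in for the deep learning models of the paper; data files and shapes are hypothetical.

```python
# Minimal sketch of a cross-project evaluation protocol with a small
# neural network on software metrics; inputs are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

# Train on the metrics of one set of projects...
X_src, y_src = np.load("src_metrics.npy"), np.load("src_labels.npy")
# ...and evaluate on a project never seen during training.
X_tgt, y_tgt = np.load("tgt_metrics.npy"), np.load("tgt_labels.npy")

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(64, 32),
                                    max_iter=500, random_state=0))
model.fit(X_src, y_src)
print("cross-project F1:", f1_score(y_tgt, model.predict(X_tgt)))
```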

Ilias Kalouptsoglou, Miltiadis Siavvas, Dimitrios Tsoukalas, Dionysios Kehagias
Software Requirement Catalog on Acceptability, Usability, Internationalization and Sustainability for Contraception mPHRs

Contraception Mobile Personal Health Records (mPHRs) are efficient mobile health applications (apps) that increase awareness about fertility and contraception and allow women to access, track, manage, and share their health data with healthcare providers. This paper aims to develop a requirements catalog according to standards, guidelines, and literature relevant to e-health technology and psychology. The requirements covered by this catalog concern Acceptability, Usability, Sustainability, and Internationalization (i18n). This catalog can be very useful for developing, evaluating, and auditing contraceptive apps, as well as for helping stakeholders and developers identify potential requirements for their mPHRs in order to improve them.

Manal Kharbouch, Ali Idri, Leanne Redman, Hassan Alami, José Luis Fernández-Alemán, Ambrosio Toval
Quantifying Influential Communities in Granular Social Networks Using Fuzzy Theory

Community detection and centrality analysis in social networks are identified as pertinent research topics in the field of social network analysis. Community detection focuses on identifying sub-graphs (communities) that have dense connections within them compared to outside of them, whereas centrality analysis focuses on identifying significant nodes in a social network based on different aspects of importance. A number of research works have focused on identifying community structure in large-scale networks. However, far less effort has been devoted to quantifying the influence of the communities. In this paper, groups of nodes that are likely to form communities are first uncovered and then quantified based on their influencing ability in the network. Identifying the exact boundaries of communities is quite challenging in a large-scale network. The major contribution of this paper is a model termed FRC-FGSN (Fuzzy Rough Communities in Fuzzy Granular Social Network), which identifies communities with the help of fuzzy and rough set theory. The proposed model is based on the idea that the degree of belongingness of a node to a community may not be binary but can be modeled through fuzzy membership. The second contribution is quantifying the influence of the communities using eigenvector centrality. In order to improve scalability, several steps of the proposed model have been implemented using the map-reduce programming paradigm in a cluster-computing framework like Hadoop. A comparative analysis of FRC-FGSN with other parallel algorithms available in the literature is presented to demonstrate the scalability and effectiveness of the algorithm.
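The influence-quantification step can be illustrated with networkx: compute eigenvector centrality once, then summarise it per community. Here a crisp, hand-picked grouping on a toy graph stands in for the fuzzy memberships of FRC-FGSN.

```python
# Minimal sketch of community influence via eigenvector centrality;
# the crisp communities below are hypothetical stand-ins for fuzzy
# memberships, and the graph is a standard toy example.
import networkx as nx

G = nx.karate_club_graph()
centrality = nx.eigenvector_centrality(G, max_iter=1000)

communities = {0: [0, 1, 2, 3, 7], 1: [32, 33, 30, 8, 23]}  # illustrative
influence = {c: sum(centrality[n] for n in nodes)
             for c, nodes in communities.items()}
print(influence)   # higher total centrality = more influential community
```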

Anisha Kumari, Ranjan Kumar Behera, Abhishek Sai Shukla, Satya Prakash Sahoo, Sanjay Misra, Sanatanu Kumar Rath
Human Factor on Software Quality: A Systematic Literature Review

Ensuring software quality is an important step towards a successful project. Since software development is a human-oriented process, it is possible to say that any factor affecting people will directly affect software quality and success. The aim of this study is to reveal which factors affect humans. For this purpose, we conducted a systematic literature review. We identified 80 related primary studies from the literature and defined 7 research questions. To answer the research questions, we extracted data from the primary studies. We researched human factors, methods for data collection and data analysis, and publication types and years. The factors are grouped into 3 main groups: personal factors, interpersonal factors, and organizational factors. The results show that personal factors are the most important category of human factors, and that the most researched among them are “experience” and “education”.

Elcin Guveyi, Mehmet S. Aktas, Oya Kalipsiz
Gamified e-Health Solution to Promote Postnatal Care in Morocco

The postnatal period is a critical phase in the lives of both mothers and newborns. Due to all the inherent changes that occur during this period, quality care is crucial to enhance the wellbeing of mothers and newborns. In Morocco, the neglect of postnatal care services is often associated with poor communication, financial difficulties and cultural barriers. Mobile technology therefore constitutes a promising approach to bridge this gap and promote postnatal care. To improve the effectiveness of mobile technology, gamification has become a powerful feature to boost motivation and bring fun and interactivity into mobile solutions’ tasks. Based on a previous review of mobile applications for postnatal care available in app repositories, a set of requirements has been identified to build a comprehensive mobile solution that caters to the needs of both mothers and newborns. These requirements have then been enriched with real needs elicited at the Les Orangers maternity ward of the University Hospital Avicenne of Rabat. Along with the functional and non-functional requirements, gamification aspects have also been analyzed. After the analysis and design phases, a pilot version of the solution, called ‘Mamma&Baby’, was implemented using the Android framework. ‘Mamma&Baby’ is a mobile solution dedicated to assisting new mothers during their postnatal period. As future work, it is expected to fully integrate the gamification elements into the solution and to conduct an empirical evaluation of the overall quality and potential of the solution with real puerperal women.

Lamyae Sardi, Ali Idri, Taoufik Rachad, Leanne Redman, Hassan Alami
Handling Faults in Service Oriented Computing: A Comprehensive Study

Recently, service-oriented computing paradigms have become a trending development direction, in which software systems are built using a set of loosely coupled services distributed over multiple locations through a service-oriented architecture. Such systems encounter different challenges, such as integration, performance, reliability, and availability, which makes all associated testing activities another major challenge in avoiding faults and system failures. Services are the fundamental element in service-oriented computing. Thus, the quality of services and service dependability in a web service composition have become essential to manage faults within these software systems. Many studies have addressed web service faults from diverse perspectives. In this paper, a comprehensive study is conducted to investigate the different perspectives for handling web service faults, including fault tolerance, fault injection, fault prediction and fault localization. An extensive comparison is provided, highlighting the main research gaps, challenges and limitations of each perspective for web services. An analytical discussion then follows, suggesting future research directions that can be adopted to face such obstacles by improving fault handling capabilities for efficient testing in service-oriented computing systems.

Roaa ElGhondakly, Sherin Moussa, Nagwa Badr
Requirements Re-usability in Global Software Development: A Systematic Mapping Study

In global software development, requirements re-usability is a common practice which ultimately helps to maintain project quality and reduce both development time and cost. However, when a large-scale project is distributed, there are some critical factors that need to be maintained and managed for reusing requirements, and it is a challenging job to interrelate the requirements of two identical projects. In this study, we have identified 48 challenges faced and 43 mitigation techniques used when implementing requirements re-usability in global software development projects among distributed teams. The challenges distributed teams frequently encounter can be divided into three areas of concern: communication, coordination and control of distributed teams in global software development. The results of this study can be used to plan development strategies when reusing requirements in distributed settings.

Syeda Sumbul Hossain, Yeasir Arafat, Tanvir Amin, Touhid Bhuiyan
Inspecting JavaScript Vulnerability Mitigation Patches with Automated Fix Generation in Mind

Software security has become a primary concern for both industry and academia in recent years. As dependency on critical services provided by software systems grows globally, a potential security threat in such systems poses higher and higher risks (e.g. economic damage, threat to human life, criminal activity). Finding potential security vulnerabilities at the code level automatically is a very popular approach to aid security testing. However, most of the methods based on machine learning and statistical models stop at listing potentially vulnerable code parts and leave their validation and mitigation to the developers. Automatic program repair could fill this gap by automatically generating vulnerability mitigation code patches. Nonetheless, it is still immature, especially in targeting security-relevant fixes. In this work, we try to establish a path towards automatic vulnerability fix generation techniques in the context of JavaScript programs. We inspect 361 actual vulnerability mitigation patches collected from vulnerability databases and GitHub. We found that vulnerability mitigation patches are not short on average and in many cases affect not just program code but test code as well. These results suggest that a general automatic repair approach targeting all the different types of vulnerabilities is not feasible. The analysis of code properties and fix patterns for different vulnerability types might help in setting a more realistic goal in the area of automatic JavaScript vulnerability repair.

Péter Hegedűs
Software Process Improvement Assessment for Cloud Application Based on Fuzzy Analytical Hierarchy Process Method

The everyday practice of software development faces exceptional difficulties related to software process improvement. The main objective of this method is to build a prioritization-based plan for Software Process Improvement (SPI) attainment elements using the Fuzzy Analytical Hierarchy Process (AHP) technique. The identified phases were additionally assessed through a grouping exercise in the context of SPI. In the subsequent stage, a multi-level AHP decision tool was applied to organize the perceived segments and their priorities into a structured representation. The application of the Fuzzy AHP method is novel in this evaluation area, as it has rarely been used in cloud software development. This paper proposes a novel approach using Fuzzy AHP in the examination of Global Software Development (GSD) and SPI, which helps remove the vagueness and weakness in the assessment of the process improvement phases related to cloud application development.
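One common way to derive priorities in fuzzy AHP is Buckley's geometric-mean method over a triangular-fuzzy pairwise comparison matrix. The sketch below uses an illustrative 3x3 matrix and a simplified normalisation, not the paper's survey data or exact procedure.

```python
# Minimal sketch of fuzzy AHP priority weights (Buckley's geometric-mean
# method, with a simplified normalisation); the matrix is illustrative.
import numpy as np

# Each entry is a triangular fuzzy number (l, m, u).
M = np.array([
    [[1, 1, 1],       [2, 3, 4],     [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1, 1, 1],     [1, 2, 3]],
    [[1/6, 1/5, 1/4], [1/3, 1/2, 1], [1, 1, 1]],
])

geo = np.prod(M, axis=1) ** (1 / M.shape[0])   # fuzzy geometric mean per row
weights = geo / geo.sum(axis=0)                # component-wise normalisation
crisp = weights.mean(axis=1)                   # defuzzify by averaging l, m, u
print(crisp / crisp.sum())                     # normalised factor priorities
```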

Surjeet Dalal, Akshat Agrawal, Neeraj Dahiya, Jatin Verma
InDubio: A Combinator Library to Disambiguate Ambiguous Grammars

Inferring an abstract model from source code is one of the main tasks of most software quality analysis methods. Such an abstract model is called an Abstract Syntax Tree, and the inference task is called parsing. A parser is usually generated from a grammar specification of a (programming) language, and it converts source code of that language into said abstract tree representation. Then, several techniques traverse this tree to assess the quality of the code (for example by computing source code metrics) or build new data structures (e.g., flow graphs) to perform further analysis (such as detecting code clones or dead code). Parsing is a well-established technique. In recent years, however, modern languages have become inherently ambiguous, which can only be fully handled by ambiguous grammars. In this setting, disambiguation rules, usually included as part of the grammar specification of the ambiguous language, need to be defined. This approach has a severe limitation: disambiguation rules are not first-class citizens. Parser generators offer a small set of rules that cannot be extended or changed. Thus, grammar writers are not able to manipulate or define a new specific rule that the language under consideration requires. In this paper we present a tool, named InDubio, that consists of an extensible combinator library of disambiguation filters together with a generalized parser generator for ambiguous grammars. InDubio defines a set of basic disambiguation rules as abstract syntax tree filters that can be combined into more powerful rules. Moreover, the filters are independent of the parser generator and parsing technology, and consequently they can be easily extended and manipulated. This paper presents InDubio in detail and also reports our first experimental results.
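The combinator idea can be conveyed with a tiny sketch, written here in Python rather than InDubio's own language: each filter maps a parse forest (a list of candidate ASTs) to a smaller one, and filters compose into stronger rules. The toy tree encoding and filters are purely illustrative.

```python
# Minimal sketch of disambiguation filters as composable functions over a
# parse forest; InDubio itself is not reproduced, names are illustrative.
from functools import reduce

def compose(*filters):
    """Combine filters left-to-right into a single disambiguation rule."""
    return lambda forest: reduce(lambda f, flt: flt(f), filters, forest)

# Toy trees: ("-", left, right) tuples with string leaves.
def reject_right_assoc_minus(forest):
    """Drop right-associative parses of subtraction (a - (b - c))."""
    return [t for t in forest
            if not (t[0] == "-" and isinstance(t[2], tuple) and t[2][0] == "-")]

def prefer_fewest_nodes(forest):
    """Keep only the smallest candidate trees."""
    def size(t):
        if not isinstance(t, tuple):
            return 0
        return 1 + sum(size(c) for c in t[1:])
    best = min(size(t) for t in forest)
    return [t for t in forest if size(t) == best]

rule = compose(reject_right_assoc_minus, prefer_fewest_nodes)
# rule([("-", ("-", "a", "b"), "c"), ("-", "a", ("-", "b", "c"))])
# -> keeps only the left-associative parse.
```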

José Nuno Macedo, João Saraiva
A Data-Mining Based Study of Security Vulnerability Types and Their Mitigation in Different Languages

The number of people accessing online services is increasing day by day, and with new users comes a greater need for effective and responsive cyber-security. Our goal in this study was to find out whether there are common patterns within the most widely used programming languages in terms of security issues and fixes. In this paper, we showcase some statistics based on the data we extracted for these languages. Analyzing the more popular ones, we found that the same security issues might appear differently in different languages, and as such the provided solutions may vary just as much. We also found that projects of similar size can produce extremely different results and have different common weaknesses, even if they provide a solution to the same task. These statistics may not be entirely indicative of the projects’ standards when it comes to security, but they provide a good reference point for what one should expect. Given a larger sample size, they could be made even more precise, and a better understanding of the security-relevant activities within projects written in given languages could be achieved.

Gábor Antal, Balázs Mosolygó, Norbert Vándor, Péter Hegedűs
The SDK4ED Platform for Embedded Software Quality Improvement - Preliminary Overview

Maintaining a high level of quality with respect to important quality attributes is critical for the success of modern software applications. Hence, appropriate tooling is required to help developers and project managers monitor and optimize software quality throughout the overall Software Development Lifecycle (SDLC). Moreover, embedded software engineers and developers need support to manage complex interdependencies and inherent trade-offs between design-time and run-time qualities. To address these issues, we are developing the SDK4ED Platform as part of the ongoing EU-funded SDK4ED project: a software quality system that enables the monitoring and optimization of software quality, with emphasis on embedded software. The purpose of this technical paper is to provide an overview of the SDK4ED Platform and present the main novel functionalities that have been implemented within the platform to date.

Miltiadis Siavvas, Dimitrios Tsoukalas, Charalampos Marantos, Angeliki-Agathi Tsintzira, Marija Jankovic, Dimitrios Soudris, Alexander Chatzigeorgiou, Dionysios Kehagias
Backmatter
Metadata
Title
Computational Science and Its Applications – ICCSA 2020
Editors
Prof. Dr. Osvaldo Gervasi
Beniamino Murgante
Prof. Sanjay Misra
Dr. Chiara Garau
Ivan Blečić
David Taniar
Dr. Bernady O. Apduhan
Ana Maria A. C. Rocha
Prof. Eufemia Tarantino
Prof. Carmelo Maria Torre
Prof. Yeliz Karaca
Copyright Year
2020
Electronic ISBN
978-3-030-58811-3
Print ISBN
978-3-030-58810-6
DOI
https://doi.org/10.1007/978-3-030-58811-3
