
2020 | Book

Computational Science – ICCS 2020

20th International Conference, Amsterdam, The Netherlands, June 3–5, 2020, Proceedings, Part V

Edited by: Dr. Valeria V. Krzhizhanovskaya, Dr. Gábor Závodszky, Michael H. Lees, Prof. Jack J. Dongarra, Prof. Dr. Peter M. A. Sloot, Sérgio Brissos, João Teixeira

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

The seven-volume set LNCS 12137, 12138, 12139, 12140, 12141, 12142, and 12143 constitutes the proceedings of the 20th International Conference on Computational Science, ICCS 2020, held in Amsterdam, The Netherlands, in June 2020.*

The total of 101 papers and 248 workshop papers presented in this book set were carefully reviewed and selected from 719 submissions (230 submissions to the main track and 489 submissions to the workshops). The papers were organized in topical sections named:

Part I: ICCS Main Track

Part II: ICCS Main Track

Part III: Track of Advances in High-Performance Computational Earth Sciences: Applications and Frameworks; Track of Agent-Based Simulations, Adaptive Algorithms and Solvers; Track of Applications of Computational Methods in Artificial Intelligence and Machine Learning; Track of Biomedical and Bioinformatics Challenges for Computer Science

Part IV: Track of Classifier Learning from Difficult Data; Track of Complex Social Systems through the Lens of Computational Science; Track of Computational Health; Track of Computational Methods for Emerging Problems in (Dis-)Information Analysis

Part V: Track of Computational Optimization, Modelling and Simulation; Track of Computational Science in IoT and Smart Systems; Track of Computer Graphics, Image Processing and Artificial Intelligence

Part VI: Track of Data Driven Computational Sciences; Track of Machine Learning and Data Assimilation for Dynamical Systems; Track of Meshfree Methods in Computational Sciences; Track of Multiscale Modelling and Simulation; Track of Quantum Computing Workshop

Part VII: Track of Simulations of Flow and Transport: Modeling, Algorithms and Computation; Track of Smart Systems: Bringing Together Computer Vision, Sensor Networks and Machine Learning; Track of Software Engineering for Computational Science; Track of Solving Problems with Uncertainties; Track of Teaching Computational Science; Track of UNcErtainty QUantIficatiOn for ComputationAl modeLs

*The conference was canceled due to the COVID-19 pandemic.

Table of Contents


Computational Optimization, Modelling and Simulation

Information Theory-Based Feature Selection: Minimum Distribution Similarity with Removed Redundancy

Feature selection is an important preprocessing step in pattern recognition. In this paper, we present a new feature selection approach for two-class classification problems based on information theory, named minimum Distribution Similarity with Removed Redundancy (mDSRR). Different from previous methods, which use mutual information and greedy iteration with a loss function to rank the features, we rank features according to the similarity of their distributions in the two classes, measured by relative entropy, and then remove the highly redundant features from the sorted feature subsets. Experimental results on datasets from a variety of fields with different classifiers highlight the value of mDSRR in selecting feature subsets, especially small ones. mDSRR is also shown to outperform other state-of-the-art methods in most cases. In addition, we observe that mutual information may not be a good criterion for selecting the initial feature in methods with subsequent iterations.

Yu Zhang, Zhuoyi Lin, Chee Keong Kwoh
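The rank-then-deduplicate idea in the abstract above can be sketched in a few lines. This is a toy illustration under stated assumptions (histogram-based class-conditional distributions, KL divergence as the dissimilarity measure, a simple correlation threshold for redundancy), not the authors' mDSRR implementation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy between two discrete distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def mdsrr_rank(X, y, bins=10, corr_threshold=0.9):
    """Toy mDSRR-style selection (illustrative, not the authors' code):
    score each feature by the relative entropy between its class-conditional
    histograms, rank descending, then greedily drop features that are highly
    correlated with an already-selected feature."""
    scores = []
    for j in range(X.shape[1]):
        lo, hi = X[:, j].min(), X[:, j].max()
        h0, _ = np.histogram(X[y == 0, j], bins=bins, range=(lo, hi))
        h1, _ = np.histogram(X[y == 1, j], bins=bins, range=(lo, hi))
        p0 = h0 / max(h0.sum(), 1)
        p1 = h1 / max(h1.sum(), 1)
        scores.append(kl_divergence(p0, p1))
    order = np.argsort(scores)[::-1]          # most dissimilar distributions first
    selected = []
    for j in order:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < corr_threshold
               for k in selected):
            selected.append(int(j))
    return selected
```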
On the Potential of the Nature-Inspired Algorithms for Pure Binary Classification

With the advent of big data, interest in new data mining methods has increased dramatically. The main drawback of traditional data mining methods is their lack of comprehensibility. In this paper, the firefly algorithm is employed for standalone binary classification, where each solution is represented by two classification rules that are easily understandable by users. Implicitly, feature selection is also performed by the algorithm. The results of experiments, conducted on three well-known datasets publicly available on the web, were comparable with the results of traditional methods in terms of accuracy, demonstrating the considerable potential of the proposed method.

Iztok Fister Jr., Iztok Fister, Dušan Fister, Grega Vrbančič, Vili Podgorelec
Analytical Techniques for the Identification of a Musical Score: The Musical DNA

In the information age, one of the main research fields under development concerns improving the quality of search engines: how to manage the information contained in a document in order to extract and interpret its content. This is motivated on the one hand by the heterogeneity of the information available on the web (text, images, video, musical scores), and on the other by the need to satisfy users who generally search for information of very different types. This paper describes the development and evaluation of an analytical method for the analysis of musical scores considered at the symbolic level. The developed method is based on the analysis of the fundamental elements of musical grammar and takes into account the distance between the sounds (which characterizes a melody) and their duration (which makes the melody active and alive). The method has been tested on a set of different musical scores, implementing an algorithm to identify a musical score in a database.

Michele Della Ventura
Reduced-Cost Constrained Modeling of Microwave and Antenna Components: Recent Advances

Electromagnetic (EM) simulation models are ubiquitous in the design of microwave and antenna components. EM analysis is reliable but CPU intensive. In particular, the multiple simulations entailed by parametric optimization or uncertainty quantification may considerably slow down the design process. To address this problem, it is possible to employ fast metamodels. Here, the popular solution approaches are approximation surrogates, which are versatile and easily accessible. Notwithstanding, the major issue for conventional modeling methods is the curse of dimensionality. In the case of high-frequency components, an added difficulty is the highly nonlinear outputs that need to be handled. A recently reported constrained modeling approach attempts to broaden the applicability of approximation surrogates by confining the surrogate model setup to a small subset of the parameter space. The said region contains the parameter vectors corresponding to high-quality designs w.r.t. the considered figures of interest, which allows for a dramatic reduction of the number of training samples needed to render reliable surrogates without formally restricting the parameter ranges. This paper reviews recent techniques employing these concepts and provides real-world illustrative examples of antenna and microwave structures.

Anna Pietrenko-Dabrowska, Slawomir Koziel, Leifur Leifsson
Aerodynamic Shape Optimization for Delaying Dynamic Stall of Airfoils by Regression Kriging

The phenomenon of dynamic stall produces adverse aerodynamic loading that can degrade the structural strength and life of aerodynamic systems. Aerodynamic shape optimization (ASO) provides an effective approach for delaying and mitigating dynamic stall characteristics without the addition of an auxiliary system. ASO, however, requires multiple evaluations of time-consuming computational fluid dynamics models. Metamodel-based optimization (MBO) provides an efficient approach to alleviate the computational burden. In this study, the MBO approach is utilized for the mitigation of dynamic stall characteristics while delaying the dynamic stall angle of the flow past wind turbine airfoils. The regression Kriging metamodeling technique is used to approximate the objective and constraint functions. The airfoil shape design variables are described with six PARSEC parameters. A total of 60 initial samples are used to construct the metamodel, which is further refined with 20 infill points using expected improvement. The metamodel is validated with the normalized root mean square error based on 20 test data samples. The refined metamodel is used to search for the optimal design using a multi-start gradient-based method. The results show that an optimal design with a $$3^\circ $$ delay in dynamic stall angle as well as a reduction in the severity of pitching moment coefficients can be obtained.

Vishal Raul, Leifur Leifsson, Slawomir Koziel
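The expected-improvement infill criterion mentioned in the abstract above has a standard closed form for minimization. The sketch below is the generic textbook formula, assuming a metamodel that returns a predictive mean and standard deviation at a candidate point; it does not reproduce the paper's regression-Kriging setup or the PARSEC parametrization:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected improvement (for minimization) at one candidate point, given
    the metamodel's predictive mean `mu`, predictive standard deviation
    `sigma`, and the incumbent best objective value `f_best`.
    EI = (f_best - mu) * Phi(z) + sigma * phi(z), with z = (f_best - mu)/sigma."""
    if sigma <= 0.0:
        return 0.0  # no predictive uncertainty -> no expected improvement
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi
```

Infill points are then chosen by maximizing this quantity over the design space and evaluating the expensive model there.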
Model-Based Sensitivity Analysis of Nondestructive Testing Systems Using Machine Learning Algorithms

Model-based sensitivity analysis is crucial for quantifying which input variability parameters are important for nondestructive testing (NDT) systems. In this work, neural networks (NNs) and convolutional NNs (CNNs) are shown to be computationally efficient at making model predictions for NDT systems, compared to models such as polynomial chaos expansions, Kriging and polynomial chaos Kriging (PC-Kriging). Three different ultrasonic benchmark cases are considered. NNs outperform these three models for all the cases, while CNNs outperform them for two of the three cases; for the third case, CNNs perform as well as PC-Kriging. NNs required 48, 56 and 35 high-fidelity model evaluations, respectively, for the three cases to reach within $$1\%$$ accuracy of the physics model. CNNs required 35, 56 and 56 high-fidelity model evaluations, respectively, for the same three cases.

Jethro Nagawkar, Leifur Leifsson, Roberto Miorelli, Pierre Calmon
Application of Underdetermined Differential Algebraic Equations to Solving One Problem from Heat Mass Transfer

This paper addresses a mathematical model of the boiling of subcooled liquid in an annular channel. The model is presented as a mixed system of ordinary differential equations, algebraic relations and a single partial differential equation, which, written together, can be viewed as an underdetermined differential algebraic equation with a partial differential equation attached. Using the tools of differential algebraic equation theory, we reveal some important qualitative properties of this system, such as its existence domain, and propose a numerical method for its solution. The numerical experiments demonstrate that, within the established existence domain, the mathematical model adequately represents the real-life boiling processes that occur in the experimental setup.

Viktor F. Chistyakov, Elena V. Chistyakova, Anatoliy A. Levin
Fully-Asynchronous Fully-Implicit Variable-Order Variable-Timestep Simulation of Neural Networks

State-of-the-art simulations of detailed neurons follow the Bulk Synchronous Parallel execution model. Execution is divided into equidistant communication intervals, with parallel neuron interpolation and collective communication guiding synchronization. Such simulations, driven by stiff dynamics or a wide range of time scales, struggle with fixed-step interpolation methods, yielding excessive computation on intervals of quasi-constant activity and inaccurate interpolation of periods of high volatility in the solution. Alternative adaptive timestepping methods are inefficient in parallel executions due to computational imbalance at the synchronization barriers. We introduce a distributed fully-asynchronous execution model that removes global synchronization, allowing for long variable-timestep interpolations of neurons. Asynchronicity is provided by point-to-point communication notifying neurons' time advancement to synaptic connectivities. Timestepping is driven by scheduled neuron advancements based on interneuron synaptic delays, yielding an exhaustive yet not speculative execution. Benchmarks on 64 Cray XE6 compute nodes demonstrate a reduced number of interpolation steps, higher numerical accuracy and lower runtime compared to state-of-the-art methods. Efficiency is shown to be activity-dependent, with the scaling of the algorithm demonstrated on a simulation of a laboratory experiment.

Bruno Magalhães, Michael Hines, Thomas Sterling, Felix Schürmann
Deep Learning Assisted Memetic Algorithm for Shortest Route Problems

Finding the shortest route between a pair of origin and destination is known to be a crucial and challenging task in intelligent transportation systems. Current methods assume fixed travel time between any pair of locations; the efficiency of these approaches is therefore limited, because travel time in reality can change dynamically due to factors including the weather conditions, the traffic conditions, the time of day and the day of the week. To address this dynamic situation, we propose a novel two-stage approach to find the shortest route. First, deep learning is utilised to predict the travel time between a pair of origin and destination, with weather conditions added to the input data to increase the accuracy of travel time prediction. Second, a customised memetic algorithm is developed to find the shortest route using the predicted travel times. The proposed memetic algorithm uses a genetic algorithm for exploration and local search for exploiting the current search space around a given solution. The effectiveness of the proposed two-stage method is evaluated on the New York City taxi benchmark dataset. The obtained results demonstrate that the proposed method is highly effective compared with state-of-the-art methods.

Ayad Turky, Mohammad Saiedur Rahaman, Wei Shao, Flora D. Salim, Doug Bradbrook, Andy Song
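The division of labour described above (a genetic algorithm for exploration, local search for exploitation) can be illustrated with a tiny memetic loop on a route-length objective. The operators here (order crossover, swap mutation, 2-opt) and the symmetric distance matrix are generic stand-ins; the paper's encoding and predicted travel times are not reproduced:

```python
import random

def memetic_route(dist, pop_size=30, gens=100, seed=0):
    """Minimal memetic algorithm over closed tours: GA for exploration,
    first-improvement 2-opt local search for exploitation.
    `dist[i][j]` is the (possibly predicted) travel time between nodes."""
    rng = random.Random(seed)
    n = len(dist)

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def two_opt(t):                       # local search (exploitation)
        t, improved = list(t), True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 2, n):
                    cand = t[:i + 1] + t[i + 1:j + 1][::-1] + t[j + 1:]
                    if length(cand) < length(t):
                        t, improved = cand, True
        return t

    def crossover(a, b):                  # order crossover (exploration)
        i, j = sorted(rng.sample(range(n), 2))
        hole = a[i:j]
        rest = [c for c in b if c not in hole]
        return rest[:i] + hole + rest[i:]

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        a, b = rng.sample(pop, 2)
        child = crossover(a, b)
        if rng.random() < 0.2:            # swap mutation
            i, j = rng.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]
        child = two_opt(child)            # the memetic step
        pop.sort(key=length)
        pop[-1] = child                   # replace the worst individual
    return min(pop, key=length)
```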
A Relaxation Algorithm for Optimal Control Problems Governed by Two-Dimensional Conservation Laws

We develop a class of numerical methods for solving optimal control problems governed by nonlinear conservation laws in two space dimensions. The relaxation approximation is used to transform the nonlinear problem into a semi-linear diagonalizable system with source terms. The relaxation system is hyperbolic and can be solved numerically without the need for either Riemann solvers for the space discretization or nonlinear algebraic solvers for the time discretization. In the current study, the optimal control problem is formulated for the relaxation system, and in the relaxation limit its solution converges to that of the original conservation laws. An upwind method is used for the reconstruction of numerical fluxes and an implicit-explicit scheme is used for time stepping. Computational results are presented for a two-dimensional inviscid Burgers problem.

Michael Herty, Loubna Salhi, Mohammed Seaid
Genetic Learning Particle Swarm Optimization with Interlaced Ring Topology

Genetic learning particle swarm optimization (GL-PSO) is a hybrid optimization method based on particle swarm optimization (PSO) and the genetic algorithm (GA). The GL-PSO method improves the performance of PSO by constructing superior exemplars from which individuals of the population learn to move in the search space. However, in the case of complex optimization problems, GL-PSO has trouble maintaining appropriate diversity, which weakens exploration and leads to premature convergence, making its results unsatisfactory. In order to enhance the diversity and adaptability of GL-PSO, and in effect its performance, this paper proposes a new modified genetic learning method with an interlaced ring topology and a flexible local search operator. To assess the impact of the introduced modifications, the interlaced ring topology has been integrated with GL-PSO both on its own (referred to as GL-PSOI) and together with the flexible local search operator (referred to as GL-PSOIF). The new strategy was tested on a set of benchmark problems and the CEC2014 test suite. The results were compared with five different variants of PSO, including GL-PSO, GGL-PSOD, PSO, CLPSO and HCLPSO, to demonstrate the efficiency of the proposed approach.

Bożena Borowska
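A plain ring (lbest) topology, the starting point for the interlaced variant above, can be sketched as follows: each particle is guided by the best particle among itself and its two ring neighbours rather than by a global best, which slows information flow and helps preserve diversity. This is a generic lbest PSO only; the interlacing, genetic exemplar construction and local search operator of the paper are assumptions left out:

```python
import random

def pso_ring(cost, dim, n=20, iters=200, seed=0):
    """Basic PSO with a ring (lbest) neighbourhood of radius 1.
    Returns the best position and cost found."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                  # personal best positions
    pf = [cost(x) for x in X]              # personal best costs
    w, c1, c2 = 0.72, 1.49, 1.49           # standard constriction-like weights
    for _ in range(iters):
        for i in range(n):
            # lbest: best personal best on the ring neighbourhood {i-1, i, i+1}
            nb = min((i - 1) % n, i, (i + 1) % n, key=lambda k: pf[k])
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (P[nb][d] - X[i][d]))
                X[i][d] += V[i][d]
            f = cost(X[i])
            if f < pf[i]:
                pf[i], P[i] = f, X[i][:]
    b = min(range(n), key=lambda k: pf[k])
    return P[b], pf[b]
```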
Low Reynolds Number Swimming with Slip Boundary Conditions

We investigate the classical Taylor’s swimming sheet problem in a viscoelastic fluid, as well as in a mixture of a viscous fluid and a viscoelastic fluid. Extensions of the standard Immersed Boundary (IB) Method are proposed so that the fluid media may satisfy partial slip or free-slip conditions on the moving boundary. Our numerical results indicate that slip may lead to substantial speed enhancement for swimmers in a viscoelastic fluid and in a viscoelastic two-fluid mixture. Under the slip conditions, the speed of locomotion is dependent in a nontrivial way on both the viscosity and elasticity of the fluid media. In a two-fluid mixture with free-slip network, the swimming speed is also significantly affected by the drag coefficient and the network volume fraction.

Hashim Alshehri, Nesreen Althobaiti, Jian Du
Trilateration-Based Multilevel Method for Minimizing the Lennard-Jones Potential

Simulating atomic evolution for the mechanics and structure of materials presents an ever-growing challenge due to the huge number of degrees of freedom borne from the high-dimensional spaces in which increasingly high-fidelity material models are defined. To efficiently exploit the domain-, data-, and approximation-based hierarchies hidden in many such problems, we propose a trilateration-based multilevel method to initialize the underlying optimization, and benchmark its application on the simple yet practical Lennard-Jones potential. We show that by taking advantage of a known hierarchy present in this problem, not only faster convergence but also a better local minimum can be achieved compared to a random initial guess.

Jithin George, Zichao (Wendy) Di
A Stochastic Birth-Death Model of Information Propagation Within Human Networks

The fixation probability of a mutation in a population network is a widely-studied phenomenon in evolutionary dynamics. This mutation model following a Moran process finds a compelling application in modeling information propagation through human networks. Here we present a stochastic model for a two-state human population in which each of N individual nodes subscribes to one of two contrasting messages, or pieces of information. We use a mutation model to describe the spread of one of the two messages labeled the mutant, regulated by stochastic parameters such as talkativity and belief probability for an arbitrary fitness r of the mutant message. The fixation of mutant information is analyzed for an unstructured well-mixed population and simulated on a Barabási-Albert graph to mirror a human social network of $$N = 100$$ individuals. Chiefly, we introduce the possibility of a single node speaking to multiple information recipients or listeners, each independent of one another, per a binomial distribution. We find that while in mixed populations, the fixation probability of the mutant message is strongly correlated with the talkativity (sample correlation $$\rho = 0.96$$) and belief probability ($$\rho = -0.74$$) of the initial mutant, these correlations with respect to talkativity ($$\rho = 0.61$$) and belief probability ($$\rho = -0.49$$) are weaker on BA graph simulations. This indicates the likely effect of added stochastic noise associated with the inherent construction of graphs and human networks.

Prasidh Chhabria, Winnie Lu
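A minimal Monte Carlo version of the birth-death dynamics described above, for the well-mixed case, might look like the following. The talkativity and belief gates and the fitness weighting are an illustrative reading of the abstract; the multi-listener binomial step and the Barabási-Albert graph simulations are omitted:

```python
import random

def estimate_fixation(N=30, r=1.2, talkativity=0.8, belief=0.6,
                      trials=1000, seed=1):
    """Monte Carlo estimate of the probability that a single 'mutant'
    message takes over a well-mixed population of N nodes. A speaker is
    chosen with probability proportional to fitness (r for mutants),
    speaks with probability `talkativity`, and a uniformly chosen listener
    in the opposite state adopts the message with probability `belief`.
    Since talkativity and belief gate both directions symmetrically here,
    the walk reduces to a Moran-like birth-death process."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        m = 1                                       # one initial mutant node
        while 0 < m < N:
            w = r * m
            mutant_speaks = rng.random() < w / (w + (N - m))
            if rng.random() >= talkativity:
                continue                            # speaker stays silent
            if mutant_speaks:
                # listener drawn uniformly from the other N-1 nodes
                if rng.random() < (N - m) / (N - 1) and rng.random() < belief:
                    m += 1
            else:
                if rng.random() < m / (N - 1) and rng.random() < belief:
                    m -= 1
        fixed += (m == N)
    return fixed / trials
```

For this symmetric setup the estimate should sit near the classical Moran fixation probability $$(1 - 1/r)/(1 - 1/r^N)$$, about 0.167 for r = 1.2 and N = 30.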
A Random Line-Search Optimization Method via Modified Cholesky Decomposition for Non-linear Data Assimilation

This paper proposes a line-search optimization method for non-linear data assimilation via random descent directions. The iterative method works as follows: at each iteration, quadratic approximations of the Three-Dimensional-Variational (3D-Var) cost function are built about current solutions. These approximations are employed to build sub-spaces onto which analysis increments can be estimated. We sample search directions from those sub-spaces, and for each direction, a line-search optimization method is employed to estimate its optimal step length. Current solutions are updated based on the directions along which the 3D-Var cost function decreases fastest. We theoretically prove the global convergence of our proposed iterative method. Experimental tests are performed using the Lorenz-96 model, and for reference, we employ a Maximum-Likelihood-Ensemble-Filter (MLEF) whose ensemble size doubles that of our implementation. The results reveal that, as the degree of the observational operators increases, the use of additional directions can improve the accuracy of results in terms of the $$\ell _2$$-norm of errors; moreover, our numerical results outperform those of the employed MLEF implementation.

Elias D. Nino-Ruiz
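The core idea of descending along randomly sampled directions, each with its own line search, can be sketched on a generic cost function. This toy version uses plain backtracking on random unit directions and ignores the paper's quadratic 3D-Var approximations and subspace construction:

```python
import numpy as np

def random_linesearch_minimize(cost, x0, n_dirs=10, iters=100, seed=0):
    """Minimize `cost` by, at each iteration, sampling random unit
    directions and keeping the best point found by a backtracking line
    search along each. Illustrative of random descent directions only."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        best_x, best_f = x, cost(x)
        for _ in range(n_dirs):
            d = rng.standard_normal(x.shape)
            d /= np.linalg.norm(d)
            step = 1.0
            while step > 1e-6:               # backtracking line search
                cand = x - step * d
                f = cost(cand)
                if f < best_f:               # accept first improving step
                    best_x, best_f = cand, f
                    break
                step *= 0.5
        x = best_x                           # monotone update
    return x
```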
A Current Task-Based Programming Paradigms Analysis

Task-based paradigm models can be an alternative to MPI. The user defines atomic tasks with defined inputs and outputs, along with the dependencies between them. The runtime can then schedule the tasks and data migrations efficiently over all the available cores while reducing the waiting time between tasks. This paper focuses on comparing several task-based programming models with one another, using the LU factorization as a benchmark. HPX, PaRSEC, Legion and YML+XMP are task-based programming models which schedule data movement and computational tasks on distributed resources allocated to the application. YML+XMP supports parallel and distributed tasks with XcalableMP, a PGAS language. Their performance and scalability are compared with ScaLAPACK, a highly optimized library which uses MPI to perform communications between the processes, on up to 64 nodes. We performed a block-based LU factorization with the task-based programming models on matrices of size up to $$49512\,\times \,49512$$. HPX performs better than PaRSEC, Legion and YML+XMP, but not better than ScaLAPACK. YML+XMP has better scalability than HPX, Legion and PaRSEC. Regent has trouble scaling from 32 nodes to 64 nodes with our algorithm.

Jérôme Gurhem, Serge G. Petiton
Radial Basis Functions Based Algorithms for Non-Gaussian Delay Propagation in Very Large Circuits

In this paper, we discuss methods for determining delay distributions in modern Very Large Scale Integration (VLSI) design. The delays have a non-Gaussian nature, which makes them challenging to handle and a stumbling block for many approaches. The problem of finding delays in VLSI circuits is equivalent to a graph optimisation problem. We propose algorithms that aim at fast and very accurate calculation of statistical delay distributions. The speed of execution is achieved by utilising previously obtained analytical results for delay propagation through one logic gate. The accuracy is achieved by preserving the shapes of the non-Gaussian delay distributions while traversing the graph of a circuit. The discussion of the methodology for handling non-Gaussian delay distributions is the core of the present study. The proposed algorithms are tested and compared with delay distributions obtained through Monte Carlo simulations, which is the standard verification procedure for this class of problems.

Dmytro Mishagli, Elena Blokhina
Ant Colony Optimization Implementation for Reversible Synthesis in Walsh-Hadamard Domain

Reversible circuits are one of the technologies that can provide future low-energy circuits. The synthesis of an optimal reversible circuit for a given function is an NP-hard problem. Meta-heuristic approaches are among the most promising methods for these types of optimization problems. In this paper, a new approach to ACO-based reversible synthesis is presented. Usually, authors build an ACO system using a truth table or permutation representation of the reversible function. In this work, a Walsh spectral representation of a Boolean function is used instead. This allows dividing the search space into smaller “promising” areas with well-defined transition operations between them. As a result, we can reduce the enormous search space and generate better solutions than those obtained by ACO synthesis with the classical reversible function representation. The proposed approach was applied to benchmark reversible functions of 4, 5 and 6 variables and compared to other meta-heuristic results and best-known solutions.

Krzysztof Podlaski
COEBA: A Coevolutionary Bat Algorithm for Discrete Evolutionary Multitasking

Multitasking optimization is an emerging research field which has attracted a lot of attention in the scientific community. The main purpose of this paradigm is to solve multiple optimization problems or tasks simultaneously by conducting a single search process. The main catalyst for reaching this objective is to exploit possible synergies and complementarities among the tasks to be optimized, which help each other by virtue of the transfer of knowledge among them (hence the term Transfer Optimization). In this context, Evolutionary Multitasking addresses Transfer Optimization problems by resorting to concepts from Evolutionary Computation for simultaneously solving the tasks at hand. This work contributes to this trend by proposing a novel algorithmic scheme for dealing with multitasking environments. The proposed approach, coined the Coevolutionary Bat Algorithm, finds its inspiration in concepts from both co-evolutionary strategies and the Bat Algorithm metaheuristic. We compare the performance of our proposed method with that of its Multifactorial Evolutionary Algorithm counterpart over 15 different multitasking setups, composed of eight reference instances of the discrete Traveling Salesman Problem. The experimentation and the results stemming therefrom support the main hypothesis of this study: the proposed Coevolutionary Bat Algorithm is a promising meta-heuristic for solving Evolutionary Multitasking scenarios.

Eneko Osaba, Javier Del Ser, Xin-She Yang, Andres Iglesias, Akemi Galvez
Convex Polygon Packing Based Meshing Algorithm for Modeling of Rock and Porous Media

In this work, we propose a new packing algorithm designed for the generation of polygon meshes to be used for the modeling of rock and porous media based on the virtual element method. The packing problem to be solved corresponds to a two-dimensional packing of convex polygons and is based on the locus operation used in the advancing front approach. Additionally, for the sake of simplicity, we decided to restrain polygon rotation in the packing process. Three heuristics are presented to simplify the packing problem: the density heuristic, the gravity heuristic and multi-layer packing. These heuristics prioritize minimizing the area, inserting polygons at the minimum Y coordinate, and packing polygons in multiple layers by dividing the input into multiple lists, respectively. Finally, we illustrate the potential of the generated meshes by solving a diffusion problem, where the discretized domain consists of polygons and spaces with different conductivities. Due to the arbitrary shapes of the polygons and spaces generated by the packing algorithm, the virtual element method was used to solve the diffusion problem numerically.

Joaquín Torres, Nancy Hitschfeld, Rafael O. Ruiz, Alejandro Ortiz-Bernardin

Computational Science in IoT and Smart Systems

Modelling Contextual Data for Smart Environments. Case Study of a System to Support Mountain Rescuers

Context-aware pervasive systems are complex, due to the need to gather detailed environmental information and to perform a variety of context reasoning processes in order to adapt behaviours accordingly. These operations are merged seamlessly. We show the feasibility and viability of a fully designed system for mountain rescue operations, covering various aspects of contextual processing in the middleware, and analyse its context life cycle. The system is verified through intensive experiments with a rich set of categorised context data. The contextual processing is shown under different weather scenarios. The service is geared towards software development, converging IoT (Internet of Things) and cloud computing with specific reference to smart application scenarios.

Radosław Klimek
Fuzzy Intelligence in Monitoring Older Adults with Wearables

Monitoring older adults with wearable sensors and IoT devices requires collecting data from various sources and increases the volume of data that must be gathered in the monitoring center. Due to their large storage space and scalability, Clouds have become an attractive place where the data can be stored, processed, and analyzed in order to perform the monitoring on a large scale and possibly detect dangerous situations. The use of fuzzy sets in the monitoring and detection processes allows incorporating expert knowledge and medical standards when describing the meaning of various sensor readings. Calculations related to fuzzy processing and data analysis can be performed on Edge devices, which frees the Cloud platform from performing costly operations, especially with many connected IoT devices and monitored people. In this paper, we show a solution that relies on fuzzy rules for classifying the health states of monitored adults, and we investigate the computational cost of rule evaluation in the Cloud and on Edge devices.

Dariusz Mrozek, Mateusz Milik, Bożena Małysiak-Mrozek, Krzysztof Tokarz, Adam Duszenko, Stanisław Kozielski
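A minimal flavour of the fuzzy-rule classification described above, light enough to run on an Edge device, might look like this. The membership ranges and the two rules are illustrative assumptions, not the medical standards or rule base used in the paper:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function with support [a, d] and core [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def classify_state(heart_rate, activity):
    """Toy fuzzy classifier of a monitored person's state from two sensor
    readings. Fuzzification -> Mamdani-style rules (AND = min, rule
    aggregation = max) -> crisp label. Ranges are made-up examples."""
    hr_normal = trapezoid(heart_rate, 50, 60, 90, 100)
    hr_high   = trapezoid(heart_rate, 90, 110, 200, 220)
    act_low   = trapezoid(activity, -1, 0, 2, 4)
    act_high  = trapezoid(activity, 2, 5, 10, 12)
    # Rule 1: high pulse while resting -> alarm
    alarm = min(hr_high, act_low)
    # Rule 2: normal pulse at rest, or high pulse during exercise -> ok
    ok = max(min(hr_normal, act_low), min(hr_high, act_high))
    return ("alarm" if alarm > ok else "ok"), alarm, ok
```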
Deep Analytics for Management and Cybersecurity of the National Energy Grid

The United States' energy grid could fall victim to numerous cyber attacks, resulting in unprecedented damage to national security. Smart devices, including electric automobiles, smart homes and cities, and the Internet of Things (IoT), promise further integration, but as the hardware, software, and network infrastructure become more integrated, they also become more susceptible to cyber attacks or exploitation. The Defense Information Systems Agency (DISA)'s Big Data Platform (BDP), deep analytics, and unsupervised machine learning (ML) have the potential to address resource management, cybersecurity, and energy network situation awareness. In this paper, we demonstrate their potential using the Pecan Street data. We also show how an unsupervised ML method such as lexical link analysis (LLA) can serve as a causal learning tool to discover the causes of anomalous behavior related to energy use and cybersecurity.

Ying Zhao
Regression Methods for Detecting Anomalies in Flue Gas Desulphurization Installations in Coal-Fired Power Plants Based on Sensor Data

In the industrial world, the Internet of Things produces an enormous amount of data that we can use as a source for machine learning algorithms to optimize the production process. One area of application of this kind of advanced analytics is Predictive Maintenance, which involves the early detection of faults based on existing metering. In this paper, we present the concept of a portable solution for a real-time condition monitoring system allowing for the early detection of failures based on sensor data retrieved from SCADA systems. Although the data processed in systems such as SCADA are not originally intended for purposes other than controlling the production process, new technologies on the edge of big data and IoT remove these limitations and provide new possibilities for advanced analytics. This paper shows how regression-based techniques can be adapted to fault detection based on actual process data from the oxygenating compressors in the flue gas desulphurization installation of a coal-fired power plant.

Marek Moleda, Alina Momot, Dariusz Mrozek
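A regression-based fault detector of the kind described above can be reduced to two steps: fit a baseline model on sensor data from a known-healthy operating period, then flag samples whose residuals are unusually large. A minimal least-squares sketch, with hypothetical sensor features standing in for the paper's SCADA signals:

```python
import numpy as np

def fit_baseline(X_train, y_train):
    """Fit a least-squares baseline y ~ X @ w + b on data from a
    known-healthy period; return the coefficients and the residual
    standard deviation, which calibrates the anomaly threshold."""
    A = np.column_stack([X_train, np.ones(len(X_train))])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    resid = y_train - A @ coef
    return coef, float(resid.std())

def detect_anomalies(X, y, coef, resid_std, k=3.0):
    """Flag samples whose absolute residual exceeds k standard deviations
    of the training residual. Generic residual thresholding; the paper's
    concrete regression models are not reproduced."""
    A = np.column_stack([X, np.ones(len(X))])
    resid = np.abs(y - A @ coef)
    return resid > k * resid_std
```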
Autonomous Guided Vehicles for Smart Industries – The State-of-the-Art and Research Challenges

Autonomous Guided Vehicles (AGVs) are considered to be one of the critical enabling technologies for smart manufacturing. This paper focuses on the application of AGVs in new generations of manufacturing systems, including: (i) the fusion between AGVs and collaborative robots; (ii) the application of machine-to-machine communication for integrating AGVs with the production environment; and (iii) AI-driven analytics focused on the data that is produced and consumed by AGVs. This work aims to evoke discussion, elucidate the current research opportunities, highlight the relationships between different subareas and suggest possible courses of action.

Rafal Cupek, Marek Drewniak, Marcin Fojcik, Erik Kyrkjebø, Jerry Chun-Wei Lin, Dariusz Mrozek, Knut Øvsthus, Adam Ziebinski
IoT-Based Cow Health Monitoring System

Good health and wellbeing of animals are essential to dairy cow farms and the sustainable production of milk. Unfortunately, day-to-day monitoring of the animals’ condition is difficult, especially in large farms where employees do not have enough time to observe the animals and detect the first symptoms of disease. This paper presents an automated, IoT-based monitoring system designed to monitor the health of dairy cows. The system is composed of hardware devices, a cloud system, an end-user application, and innovative data measurement techniques and analysis algorithms. The system was tested in a real-life scenario and has proved that it can effectively monitor animal welfare and the estrus cycle.

Olgierd Unold, Maciej Nikodem, Marek Piasecki, Kamil Szyc, Henryk Maciejewski, Marek Bawiec, Paweł Dobrowolski, Michał Zdunek
Visual Self-healing Modelling for Reliable Internet-of-Things Systems

Internet-of-Things systems are composed of highly heterogeneous architectures, where different protocols, application stacks, integration services, and orchestration engines co-exist. As they permeate our everyday lives, more of them become safety-critical, increasing the need to make them testable and fault-tolerant with minimal human intervention. In this paper, we present a set of self-healing extensions for Node-RED, a popular visual programming solution for IoT systems. These extensions add runtime verification mechanisms and self-healing capabilities via new reusable nodes, some of them leveraging meta-programming techniques. With them, we were able to implement self-modification of flows, empowering the system with self-monitoring and self-testing capabilities that search for malfunctions and take subsequent actions towards health maintenance and recovery. We tested these mechanisms on a set of scenarios using a live physical setup that we called SmartLab. Our results indicate that this approach can improve a system’s reliability and dependability, both by detecting failing conditions and by reacting to them through self-modifying flows or triggered countermeasures.

João Pedro Dias, Bruno Lima, João Pascoal Faria, André Restivo, Hugo Sereno Ferreira
Comparative Analysis of Time Series Databases in the Context of Edge Computing for Low Power Sensor Networks

Selection of an appropriate database system for edge IoT devices is one of the essential elements that determine efficient edge-based data analysis in low-power wireless sensor networks. This paper presents a comparative analysis of time series databases in the context of edge computing for IoT and smart systems. The research focuses on the performance comparison between three time series databases, TimescaleDB, InfluxDB, and Riak TS, as well as two relational databases, PostgreSQL and SQLite. All selected solutions were tested while deployed on a single-board computer, the Raspberry Pi. For each of them, a database schema was designed based on a data model representing sensor readings and their corresponding timestamps. For performance testing, we developed a small application that simulates insertion and querying operations. The results of the experiments showed that for the presented data-reading scenarios, PostgreSQL and InfluxDB emerged as the best-performing solutions. For the tested insertion scenarios, PostgreSQL turned out to be the fastest. The experiments also proved that low-cost single-board computers such as the Raspberry Pi can be used as small-scale data aggregation nodes on edge devices in low-power wireless sensor networks, which often serve as a base for IoT-based smart systems.
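A benchmark of this kind can be sketched with the SQLite engine from the Python standard library; the schema and workload below are illustrative assumptions, not the paper’s actual test harness.

```python
import sqlite3
import time

def bench_sqlite(n_rows=10_000):
    """Insert n_rows (timestamp, sensor_id, value) readings into an in-memory
    SQLite database and time a timestamp-range query over them."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE readings (ts INTEGER, sensor_id INTEGER, value REAL)")
    rows = [(i, i % 8, float(i % 100)) for i in range(n_rows)]

    t0 = time.perf_counter()
    con.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
    con.commit()
    insert_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    hits = con.execute(
        "SELECT COUNT(*) FROM readings WHERE ts BETWEEN ? AND ?", (1000, 2000)
    ).fetchone()[0]
    query_s = time.perf_counter() - t0

    con.close()
    return insert_s, query_s, hits

ins, qry, hits = bench_sqlite()
print(f"insert: {ins:.4f}s, range query: {qry:.4f}s, rows matched: {hits}")
```

On a Raspberry Pi, running the same workload against each candidate database (with an index on the timestamp column where supported) yields the kind of comparison the paper reports.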

Piotr Grzesik, Dariusz Mrozek
Conversational Interface for Managing Non-trivial Internet-of-Things Systems

The Internet of Things has reshaped the way people interact with their surroundings. In a smart home, controlling the lights is as simple as speaking to a conversational assistant, since everything is now Internet-connected. But despite their pervasiveness, most existing IoT systems provide limited out-of-the-box customization capabilities. Several solutions try to address this issue by leveraging end-user programming features that allow users to define rules for their systems, at the cost of discarding the ease of voice interaction. However, as the number of devices increases, along with the number of household members, the complexity of managing such systems becomes a problem, including finding out why something has happened. In this work we present Jarvis, a conversational interface for managing IoT systems that attempts to address these issues by allowing users to specify time-based rules, using contextual awareness for more natural interactions, providing event management, and supporting causality queries. A proof of concept was used to carry out a quasi-experiment with non-technical participants, which provides evidence that such an approach is intuitive enough to be used by common end-users.

André Sousa Lago, João Pedro Dias, Hugo Sereno Ferreira
Improving Coverage Area in Sensor Deployment Using Genetic Algorithm

Wireless sensor networks (WSNs) are collections of autonomous nodes with limited battery life. They are used in various fields such as health, industry, and home automation. Due to their limited resources and constraints, WSNs face several problems. One of these problems is the optimal coverage of an observed area. Indeed, whatever the domain, ensuring optimal network coverage remains a very important issue in WSNs, especially when the number of sensors is limited. In this paper, we aim to cover a two-dimensional Euclidean area with a given number of sensors by using a genetic algorithm to find the placement that ensures good network coverage. The maximum coverage problem addressed in this paper is based on the calculation of the total area covered by the deployed sensor nodes. We first define the maximum coverage problem. For a given number of sensors, the proposed algorithm then finds the best positions to maximize the sensor area coverage. Finally, the results show that the proposed method effectively maximizes the sensor area coverage.
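A toy genetic algorithm for this coverage problem might look as follows; the Monte Carlo coverage fitness, one-point crossover, and Gaussian mutation are illustrative choices, not necessarily those of the paper.

```python
import math
import random

AREA, R, N_SENSORS = 100.0, 15.0, 5   # square side, sensing radius, sensor count

def coverage(genome, samples=400):
    """Monte Carlo estimate of the fraction of the area covered by sensor discs."""
    hit = 0
    for _ in range(samples):
        px, py = random.uniform(0, AREA), random.uniform(0, AREA)
        if any(math.hypot(px - x, py - y) <= R for x, y in genome):
            hit += 1
    return hit / samples

def evolve(pop_size=30, gens=40):
    """Evolve sensor placements: elitist selection, crossover, position jitter."""
    pop = [[(random.uniform(0, AREA), random.uniform(0, AREA))
            for _ in range(N_SENSORS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=coverage, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SENSORS)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:                     # mutation: jitter one sensor
                i = random.randrange(N_SENSORS)
                x, y = child[i]
                child[i] = (min(AREA, max(0.0, x + random.gauss(0, 5))),
                            min(AREA, max(0.0, y + random.gauss(0, 5))))
            children.append(child)
        pop = parents + children
    return max(pop, key=coverage)

random.seed(1)
best = evolve()
print(f"best coverage estimate: {coverage(best, 2000):.2f}")
```

The fitness here approximates the covered area by sampling; the paper computes the total covered area geometrically, which is more precise but follows the same optimization loop.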

Frantz Tossa, Wahabou Abdou, Eugène C. Ezin, Pierre Gouton
Object-Oriented Internet Reactive Interoperability

Information and Communication Technology has provided society with a vast variety of distributed applications. By design, the deployment of this kind of application has to focus primarily on communication. This article addresses research results on a systematic approach to the design of meaningful Machine to Machine (M2M) communication targeting distributed mobile applications in the context of new emerging disciplines, i.e., Industry 4.0 and the Internet of Things. This paper contributes a new architecture for mobile IoT solutions designed atop M2M communication and composed as multi-vendor cyber-physical systems. The described reusable library supporting this architecture, designed using the reactive interoperability archetype, proves that the concept enables a systematic approach to the development and deployment of software applications against mobile IoT solutions based on international standards. Dependency injection and adaptive programming techniques have been engaged to develop a full-featured reference application and to make the proposed solution scalable and robust against continuous modifications of the deployment environment. The article presents an executive summary of the proof of concept and describes selected conceptual and experimental results achieved as an outcome of the open-source project Object-Oriented Internet, targeting multi-vendor plug-and-produce interoperability scenarios.

Mariusz Postół
Impact of Long-Range Dependent Traffic in IoT Local Wireless Networks on Backhaul Link Performance

Performance evaluation in Internet of Things (IoT) networks is becoming more and more important due to the increasing demand for quality of service (QoS). In addition to basic statistical properties based on the distribution of packet interarrival times, actual network traffic exhibits correlations over a wide range of time scales associated with long-range dependence (LRD). This article examines the impact of both LRD and the number of transmitting nodes in a typical IoT wireless local network on the performance of the backhaul link. The analysis of latency and packet loss led to an interesting observation: the aggregation of packet streams originating from single nodes lowers the importance of LRD, even causing an underestimation of performance results when compared to a queueing system with Markovian input.

Przemyslaw Wlodarski

Computer Graphics, Image Processing and Artificial Intelligence

Frontmatter
OpenGraphGym: A Parallel Reinforcement Learning Framework for Graph Optimization Problems

This paper presents an open-source, parallel AI environment (named OpenGraphGym) to facilitate the application of reinforcement learning (RL) algorithms to combinatorial graph optimization problems. The environment incorporates a basic deep reinforcement learning method and several graph embeddings to capture graph features; it also allows users to rapidly plug in and test new RL algorithms and graph embeddings for graph optimization problems. This new open-source RL framework targets both high performance and high quality of the computed graph solutions. It forms the foundation of several ongoing research directions, including (1) benchmark studies on different RL algorithms and embedding methods for classic graph problems; (2) advanced parallel strategies for extreme-scale graph computations; and (3) performance evaluation on real-world graph solutions.

Weijian Zheng, Dali Wang, Fengguang Song
Weighted Clustering for Bees Detection on Video Images

This work describes a bee detection system for monitoring bee colony conditions. The detection process on video images is divided into three stages: determining the regions of interest (ROI) for a given frame; scanning the frame in the ROI areas with a DNN-CNN classifier to obtain a confidence of bee occurrence in each window at any position and scale; and forming one detection window from the cloud of windows yielded by positive classifications. The last stage is performed by a method of weighted cluster analysis, which is the main contribution of this work. The paper also describes the process of building the detector, during which the main challenge was the selection of clustering parameters that give the smallest generalization error. The results of the experiments show the advantage of the cluster analysis method over the greedy method, and the advantage of optimizing the cluster analysis parameters over standard heuristic parameter values, provided that a sufficiently long learning fragment of the video is used to optimize the parameters.
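One generic way to fuse a cloud of detection windows by weighted clustering (a sketch of the idea, not the authors’ exact method) is to group overlapping windows and average their coordinates weighted by classifier confidence:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def weighted_cluster(boxes, scores, iou_thr=0.4):
    """Group overlapping windows and fuse each group into one detection whose
    coordinates are the confidence-weighted mean of its members."""
    order = np.argsort(scores)[::-1]          # process high-confidence windows first
    clusters = []                             # each cluster: list of window indices
    for i in order:
        for c in clusters:
            if iou(boxes[i], boxes[c[0]]) >= iou_thr:
                c.append(i)
                break
        else:
            clusters.append([i])
    fused = []
    for c in clusters:
        w = scores[c] / scores[c].sum()
        fused.append((w[:, None] * boxes[c]).sum(axis=0))
    return np.array(fused)

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [200, 200, 240, 240]], float)
scores = np.array([0.9, 0.6, 0.8])
fused = weighted_cluster(boxes, scores)
print(fused)   # two fused windows: one near (10, 10), one at (200, 200)
```

Unlike greedy non-maximum suppression, which keeps only the single highest-scoring window per group, the weighted fusion lets every positive classification contribute to the final window position.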

Jerzy Dembski, Julian Szymański
Improved Two-Step Binarization of Degraded Document Images Based on Gaussian Mixture Model

Image binarization is one of the most relevant preprocessing operations influencing the results of further image analysis conducted for many purposes. During this step a significant loss of information occurs, and the use of inappropriate thresholding methods may cause difficulties in further shape analysis or even make it impossible to recognize different shapes of objects or characters. Some of the most typical applications of binary image analysis are Optical Character Recognition (OCR) and Optical Mark Recognition (OMR), which may also be applied to unevenly illuminated natural images as well as to challenging degraded historical document images, considered typical benchmarks for image binarization algorithms. To face the still valid challenge of relatively fast and simple, yet robust, binarization of degraded document images, this paper proposes a novel two-step algorithm utilizing initial thresholding based on modelling the simplified image histogram with a Gaussian Mixture Model (GMM) and the Monte Carlo method. This approach can be considered an extension of a recently developed image preprocessing method utilizing the Generalized Gaussian Distribution (GGD), based on the assumption of its similarity to the histograms of ground-truth binary images distorted by Gaussian noise. The processing time of the first step, which produces intermediate images with partially removed background information, may be significantly reduced by the use of the Monte Carlo method. The proposed improved approach leads to even better results, not only on the well-known DIBCO benchmarking databases but also on the more demanding Bickley Diary dataset, allowing the use of some well-known classical binarization methods, including global ones, in the second step of the algorithm.
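The core of GMM-based global thresholding can be illustrated with a plain two-component EM fit to pixel intensities; the midpoint threshold rule below is a simplification for illustration, not the paper’s full method.

```python
import numpy as np

def gmm2_threshold(pixels, iters=50):
    """Fit a two-component 1D Gaussian mixture to pixel intensities with EM
    and return a global threshold between the two component means."""
    x = np.asarray(pixels, float).ravel()
    mu = np.array([x.min(), x.max()])                 # init: darkest vs brightest
    sig = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel
        p = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return mu.mean()   # simple threshold: midpoint of the two component means

# Synthetic bimodal "document" intensities: dark ink plus bright background.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(190, 15, 2000)])
t = gmm2_threshold(img)
print(round(t))   # falls between the ink and background modes
```

The Monte Carlo speed-up described in the abstract amounts to fitting the mixture on a random subsample of pixels rather than the full image, which leaves this estimation loop unchanged.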

Robert Krupiński, Piotr Lech, Krzysztof Okarma
Cast Shadow Generation Using Generative Adversarial Networks

We propose a computer graphics pipeline for 3D-rendered cast shadow generation using generative adversarial networks (GANs). This work is inspired by existing regression models as well as other convolutional neural networks, such as U-Net architectures, which can be geared to produce believable global illumination effects. Here, we use a semi-supervised GAN model comprising a PatchGAN and a conditional GAN, complemented by a U-Net structure. We adopted this structure because of its trainability and the quality of the results it produces. Unlike other forms of GANs, the chosen implementation utilises colour labels to generate believable visual coherence. We carried out a series of experiments on laboratory-generated image sets to explore the extent to which colour can create the correct shadows for a variety of 3D shadowed and un-shadowed images. Once an optimised model is achieved, we apply high-resolution image mappings to enhance the quality of the final render. As a result, we have established that the chosen GAN model can produce believable outputs with the correct cast shadows, achieving plausible scores on the PSNR and SSIM similarity metrics.

Khasrouf Taif, Hassan Ugail, Irfan Mehmood
Medical Image Enhancement Using Super Resolution Methods

Deep learning image processing methods are gradually gaining popularity in a number of areas, including medical imaging. Classification, segmentation, and denoising of images are some of the most demanded tasks. In this study, we aim at enhancing optic nerve head images obtained by Optical Coherence Tomography (OCT). However, instead of directly applying noise reduction techniques, we use multiple state-of-the-art image Super-Resolution (SR) methods. In SR, the low-resolution (LR) image is upsampled to match the size of the high-resolution (HR) image. With respect to image enhancement, the upsampled LR image can be considered a low-quality, noisy image, and the HR image would be its desired enhanced version. We experimented with several image SR architectures, such as the super-resolution Convolutional Neural Network (SRCNN), the very deep super-resolution network (VDSR), the deeply recursive Convolutional Network (DRCN), and the enhanced super-resolution Generative Adversarial Network (ESRGAN). Quantitatively, in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), SRCNN, VDSR, and DRCN significantly improved the test images. Although ESRGAN showed the worst PSNR and SSIM, qualitatively it was the best one.
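PSNR, one of the two metrics used in the evaluation, follows directly from the mean squared error between a reference image and its reconstruction; the toy images below are illustrative.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(float)
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
less_noisy = np.clip(ref + rng.normal(0, 2, ref.shape), 0, 255)
print(psnr(ref, noisy) < psnr(ref, less_noisy))   # True: less noise, higher PSNR
```

The study’s observation that ESRGAN scores worst on PSNR yet looks best reflects a known limitation of MSE-based metrics: they penalize the sharp, hallucinated textures that adversarial models produce.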

Koki Yamashita, Konstantin Markov
Plane Space Representation in Context of Mode-Based Symmetry Plane Detection

This paper describes various representations of the space of planes. The main focus is on the plane space representation used in symmetry plane detection in $E^3$, where many candidate planes are created for many pairs of points of the given object and the most frequent candidate is then found as a mode in the candidate space, the so-called Mode-based approach. The result depends on the representation used in the mode-seeking process. The most important aspect is how well distances in the space correspond to similarities of the actual planes with respect to the input object. We therefore describe various usable distance functions and compare them both theoretically and practically. The results suggest that, when using the Mode-based approach, representing planes by reflection transformations is the best choice, although other, simpler representations are applicable as well. On the other hand, representations using 3D dual spaces are not very appropriate. Furthermore, we introduce a novel way of representing reflection transformations using dual quaternions.
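Representing a plane by its reflection transformation can be illustrated with the standard homogeneous reflection matrix (a general construction, not code from the paper):

```python
import numpy as np

def reflection_matrix(n, d):
    """Homogeneous 4x4 matrix of the reflection across the plane n·x + d = 0,
    where n is a unit normal vector."""
    n = np.asarray(n, float)
    M = np.eye(4)
    M[:3, :3] -= 2.0 * np.outer(n, n)   # Householder part: I - 2 n nᵀ
    M[:3, 3] = -2.0 * d * n             # translation part for the plane offset
    return M

M = reflection_matrix([0.0, 0.0, 1.0], -1.0)     # the plane z = 1
p = M @ np.array([0.5, 0.5, 3.0, 1.0])           # reflect a homogeneous point
print(p[:3])                                     # [0.5, 0.5, -1.0]
```

Comparing two candidate planes via the distance between their reflection matrices (e.g., a norm of the difference) is one of the distance functions of the kind the paper evaluates.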

Lukáš Hruda, Ivana Kolingerová, Miroslav Lávička
Impression Curve as a New Tool in the Study of Visual Diversity of Computer Game Levels for Individual Phases of the Design Process

The impression curve is a widely used method in urban and landscape design for assessing the visual diversity of a space. In this study, the method is applied to game level design. The goal of the conducted research was to analyze space perception in the successive design phases of the game environment formation process. Each subsequent step of the design process defines a space carrying more and more information. The aim is to evaluate whether the initial assumptions, made by a designer at the beginning of the design process, are maintained as the number of details and the content of locations increase. These studies also form a background for research on the automation of visual diversity assessment. This, in turn, is related to keeping a player focused and interested during gameplay by means of the space defining an action scene. By applying a method from the domain of urban planning and architecture to human-computer interaction (HCI) studies of virtual space, we show that both defining the surroundings and its impact on the recipient are subject to the same rules in either case.

Jarosław Andrzejczak, Marta Osowicz, Rafał Szrajber
Visual Analysis of Computer Game Output Video Stream for Gameplay Metrics

This work presents a solution for gameplay metrics analysis based on the visual data stream intended for the player. The solution does not require interfering with the programming code of the analyzed game; it is based solely on image processing. It can analyze several aspects of the game simultaneously, for example health/energy bars, the weapon currently used, and the number of objects carried (first-aid kits, ammunition). We present methods using cascade classifiers, trained to detect the desired objects on the screen and to prepare data for further processing stages, e.g., OCR. The output of the methods is a gameplay chart that allows a thorough analysis of the player’s actions in the game world and his or her progress. The solution is fast enough that it can be used not only for analyzing previously recorded gameplay, but also in real time during simultaneous play.

Kamil Kozłowski, Marcin Korytkowski, Dominik Szajerman
Depth Map Estimation with Consistent Normals from Stereo Images

The total variation regularization of non-convex data terms in continuous variational models can be convexified by the so-called functional lifting, which may be considered a continuous counterpart of Ishikawa’s method for multi-label discrete variational problems. We solve the resulting convex continuous variational problem by the augmented Lagrangian method. Applying this method to dense depth map estimation allows us to obtain a consistent normal field to the depth surface as a byproduct. We illustrate the method with numerical examples of depth map estimation for rectified stereo image pairs.

Alexander Malyshev
Parametric Learning of Associative Functional Networks Through a Modified Memetic Self-adaptive Firefly Algorithm

Functional networks are a powerful extension of neural networks where the scalar weights are replaced by neural functions. This paper concerns the problem of parametric learning of the associative model, a functional network that represents the associativity operator. This problem can be formulated as a nonlinear continuous least-squares minimization problem, solved by applying a swarm intelligence approach based on a modified memetic self-adaptive version of the firefly algorithm. The performance of our approach is discussed through an illustrative example. It shows that our method can be successfully applied to solve the parametric learning of functional networks with unknown functions.

Akemi Gálvez, Andrés Iglesias, Eneko Osaba, Javier Del Ser
Dual Formulation of the TV-Stokes Denoising Model for Multidimensional Vectorial Images

The TV-Stokes denoising model for a vectorial image defines a denoised vector field in the form of the gradient of a scalar function. The dual formulation naturally leads to a Chambolle-type algorithm, in which the most time-consuming part is the application of the orthogonal projector onto the range space of the gradient operator. This application can be executed efficiently by the fast cosine transform, taking advantage of the fast Fourier transform. Convergence of the Chambolle-type iteration can be improved by Nesterov’s acceleration.

Alexander Malyshev
Minimizing Material Consumption of 3D Printing with Stress-Guided Optimization

3D printing has been widely used in daily life, industry, architecture, aerospace, crafts, art, etc. Minimizing the material consumed by 3D printing can greatly reduce costs; therefore, how to design 3D printed objects with less material while maintaining structural soundness is an important problem. The current treatment is to use thin shells, but thin shells have low strength. In this paper, we use stiffeners to strengthen 3D thin-shell objects and propose a stress-guided optimization framework that achieves minimum material consumption. First, we carry out finite element calculations to determine the stress distribution in the 3D objects and use it to guide the random generation of points called seeds. Then we map the 3D objects and seeds to a 2D space and create a Voronoi diagram from the seeds. The stiffeners are taken to be the edges of the Voronoi diagram; their intersections with the edges of the triangles of the polygon models are mapped back to the 3D models to define the stiffeners, whose cross-section size is then minimized under the constraint of the required strength. Monte Carlo simulation is finally introduced to repeat the process from random seed generation to cross-section size optimization of the stiffeners. Many experiments are presented to demonstrate the proposed framework and its advantages.
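The stress-guided seed generation step can be illustrated by rejection sampling proportional to the local stress value, so that stiffeners densify where stress is high; the toy 2D stress field and acceptance rule below are illustrative assumptions.

```python
import random

def sample_seeds(stress, n_seeds):
    """Rejection-sample seed points on a grid with acceptance probability
    proportional to the local stress value."""
    smax = max(max(row) for row in stress)
    seeds = []
    while len(seeds) < n_seeds:
        i = random.randrange(len(stress))
        j = random.randrange(len(stress[0]))
        if random.random() < stress[i][j] / smax:
            seeds.append((i, j))
    return seeds

# Toy 2D stress field: stress grows linearly toward one corner of a 10x10 grid.
stress = [[1.0 + 9.0 * (i + j) / 18 for j in range(10)] for i in range(10)]
random.seed(0)
seeds = sample_seeds(stress, 200)
high = sum(1 for i, j in seeds if i + j >= 9)   # seeds in the high-stress half
print(high)
```

In the full pipeline these seeds would then generate the Voronoi diagram whose edges become the stiffeners, concentrating reinforcement where the finite element analysis predicts the largest loads.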

Anzong Zheng, Shaojun Bian, Ehtzaz Chaudhry, Jian Chang, Habibollah Haron, Lihua You, Jianjun Zhang
Swarm Intelligence Approach for Rational Global Approximation of Characteristic Curves for the Van der Waals Equation of State

The Van der Waals (VdW) equation of state is a popular generalization of the ideal gas law proposed long ago. In many situations, it is convenient to compute the characteristic curves of the VdW equation of state, called the binodal and spinodal curves. Typically, they are constructed through data fitting from a collection of data points represented in the two-dimensional pressure-volume plane. However, the resulting models are still limited and can be further enhanced. In this paper, we propose to extend this polynomial approach by using a rational fitting function. In particular, we consider a rational free-form Bézier curve, which provides a global approximation to the shape of the curve. This rational approach is more flexible than the polynomial one owing to some extra parameters, the weights. Unfortunately, data fitting becomes more difficult, as these new parameters also have to be computed. In this paper we address this problem through a powerful nature-inspired swarm intelligence method for continuous optimization called the bat algorithm. Our experimental results show that the method can reconstruct the characteristic curves with very good accuracy.
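A rational Bézier curve of the kind used as the fitting function can be evaluated as follows (a generic sketch; the control points and weights are illustrative, not fitted values from the paper):

```python
from math import comb

def rational_bezier(ctrl, weights, t):
    """Evaluate a rational Bézier curve at parameter t in [0, 1].
    ctrl: list of (x, y) control points; weights: one positive weight each."""
    n = len(ctrl) - 1
    num_x = num_y = den = 0.0
    for i, ((x, y), w) in enumerate(zip(ctrl, weights)):
        b = comb(n, i) * t ** i * (1 - t) ** (n - i)   # Bernstein basis polynomial
        num_x += b * w * x
        num_y += b * w * y
        den += b * w
    return num_x / den, num_y / den

# With unit weights the curve reduces to an ordinary polynomial Bézier curve;
# raising a weight pulls the curve toward the corresponding control point.
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
print(rational_bezier(ctrl, [1, 1, 1], 0.5))   # midpoint of the polynomial curve
print(rational_bezier(ctrl, [1, 5, 1], 0.5))   # pulled toward (1, 2)
```

The weights are exactly the extra degrees of freedom the abstract mentions: the bat algorithm searches jointly over control point coordinates and weights to minimize the fitting error against the pressure-volume data.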

Almudena Campuzano, Andrés Iglesias, Akemi Gálvez
Backmatter
Metadata
Title
Computational Science – ICCS 2020
Edited by
Dr. Valeria V. Krzhizhanovskaya
Dr. Gábor Závodszky
Michael H. Lees
Prof. Jack J. Dongarra
Prof. Dr. Peter M. A. Sloot
Sérgio Brissos
João Teixeira
Copyright year
2020
Electronic ISBN
978-3-030-50426-7
Print ISBN
978-3-030-50425-0
DOI
https://doi.org/10.1007/978-3-030-50426-7