
2021 | Book

Computational Science – ICCS 2021

21st International Conference, Krakow, Poland, June 16–18, 2021, Proceedings, Part IV

Editors: Prof. Maciej Paszynski, Prof. Dr. Dieter Kranzlmüller, Dr. Valeria V. Krzhizhanovskaya, Prof. Jack J. Dongarra, Prof. Dr. Peter M.A. Sloot

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

The six-volume set LNCS 12742, 12743, 12744, 12745, 12746, and 12747 constitutes the proceedings of the 21st International Conference on Computational Science, ICCS 2021, held in Krakow, Poland, in June 2021.*

A total of 260 full papers and 57 short papers presented in this book set were carefully reviewed and selected from 635 submissions. 48 full and 14 short papers were accepted to the main track from 156 submissions; 212 full and 43 short papers were accepted to the workshops/thematic tracks from 479 submissions. The papers were organized in topical sections named:

Part I: ICCS Main Track

Part II: Advances in High-Performance Computational Earth Sciences: Applications and Frameworks; Applications of Computational Methods in Artificial Intelligence and Machine Learning; Artificial Intelligence and High-Performance Computing for Advanced Simulations; Biomedical and Bioinformatics Challenges for Computer Science

Part III: Classifier Learning from Difficult Data; Computational Analysis of Complex Social Systems; Computational Collective Intelligence; Computational Health

Part IV: Computational Methods for Emerging Problems in (dis-)Information Analysis; Computational Methods in Smart Agriculture; Computational Optimization, Modelling and Simulation; Computational Science in IoT and Smart Systems

Part V: Computer Graphics, Image Processing and Artificial Intelligence; Data-Driven Computational Sciences; Machine Learning and Data Assimilation for Dynamical Systems; MeshFree Methods and Radial Basis Functions in Computational Sciences; Multiscale Modelling and Simulation

Part VI: Quantum Computing Workshop; Simulations of Flow and Transport: Modeling, Algorithms and Computation; Smart Systems: Bringing Together Computer Vision, Sensor Networks and Machine Learning; Software Engineering for Computational Science; Solving Problems with Uncertainty; Teaching Computational Science; Uncertainty Quantification for Computational Models

*The conference was held virtually.

Chapter “Intelligent Planning of Logistic Networks to Counteract Uncertainty Propagation” is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.

Table of Contents

Frontmatter
Correction to: Static and Dynamic Comparison of Pozyx and DecaWave UWB Indoor Localization Systems with Possible Improvements

The original version of this chapter was revised. In reference 34, the surname of the first author was incorrect. The surname has been corrected from “Porti” to “Potortì.”

Maciej Paszynski, Dieter Kranzlmüller, Valeria V. Krzhizhanovskaya, Jack J. Dongarra, Peter M. A. Sloot

Computational Methods for Emerging Problems in (dis-)Information Analysis

Frontmatter
The Methods and Approaches of Explainable Artificial Intelligence

Artificial Intelligence has found innumerable applications, becoming ubiquitous in contemporary society, from making unnoticeable, minor choices to determining people's fates (as in the case of predictive policing). This raises serious concerns about the lack of explainability of those systems. Finding ways to enable humans to comprehend the results provided by AI is currently a blooming area of research. This paper explores the current findings in the field of Explainable Artificial Intelligence (xAI), along with the xAI methods and solutions that realise them. The paper provides an umbrella perspective on the available xAI options, sorting them into a range of levels of abstraction, starting from community-developed code snippets implementing facets of xAI research all the way up to comprehensive solutions utilising state-of-the-art achievements in the domain.

Mateusz Szczepański, Michał Choraś, Marek Pawlicki, Aleksandra Pawlicka
Fake or Real? The Novel Approach to Detecting Online Disinformation Based on Multi ML Classifiers

Background: machine learning (ML) techniques have been implemented in numerous applications and domains, including healthcare, security, entertainment, and sports. This paper presents how ML can be used for detecting fake news; the problem of online disinformation has recently become one of the most challenging issues in computer science. Methods: in this research, a fake news detection method based on multiple classifiers (CNN, XGBoost, Random Forest, Naive Bayes, SVM) has been developed. In the proposed method, two classifiers cooperate and consequently obtain better results. Realistic, publicly available data was used to train and test the classifiers. Results: several experiments are presented in the article; they differ in the classifiers implemented and some improved parameters. Promising results (accuracy = 0.95, precision = 0.99, recall = 0.91, and F1-score = 0.95) were reported. Conclusion: the presented research shows that machine learning is a promising approach to fake news detection.

Martyna Tarczewska, Anna Marciniak, Agata Giełczyk
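
A minimal sketch of the paper's core idea of two cooperating classifiers, here expressed as soft-probability voting between a Random Forest and Naive Bayes in scikit-learn; the synthetic data and the particular classifier pairing are illustrative stand-ins, not the authors' exact pipeline or features.

```python
# Hedged sketch: two classifiers cooperate by averaging predicted
# probabilities (soft voting); synthetic data stands in for the corpus.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)), ("nb", GaussianNB())],
    voting="soft",  # average the two models' class probabilities
)
ensemble.fit(X_tr, y_tr)
p, r, f1, _ = precision_recall_fscore_support(y_te, ensemble.predict(X_te), average="binary")
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```
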
Transformer Based Models in Fake News Detection

The article presents models for detecting fake news and the results of analyses of the application of these models. The precision, recall, and F1-score metrics were used as measures of model quality. Neural network architectures based on state-of-the-art solutions of the Transformer type were applied to create the models. The computing capabilities of the Google Colaboratory remote platform, as well as the Flair library, made it feasible to obtain reliable, robust models for fake news detection. The problem of disinformation and fake news is an important issue for modern societies, which commonly use state-of-the-art telecommunications technologies. Artificial intelligence and deep learning techniques are considered to be effective tools for protection against these undesirable phenomena.

Sebastian Kula, Rafał Kozik, Michał Choraś, Michał Woźniak
Towards Model-Agnostic Ensemble Explanations

Explainable Artificial Intelligence (XAI) methods form a large portfolio of different frameworks and algorithms. Although the main goal of all explanation methods is to provide insight into the decision process of an AI system, their underlying mechanisms may differ. This may result in very different explanations for the same task. In this work, we present an approach that aims at combining several XAI algorithms into one ensemble explanation mechanism via a quantitative, automated evaluation framework. We focus on model-agnostic explainers to provide the most robustness, and we demonstrate our approach on an image classification task.

Szymon Bobek, Paweł Bałaga, Grzegorz J. Nalepa
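
As a rough illustration of combining several explainers into one ensemble explanation, the sketch below normalizes and averages per-pixel attribution maps; the simple averaging rule and the random stand-in maps are assumptions for illustration, not the paper's quantitative evaluation framework.

```python
import numpy as np

def ensemble_explanation(attributions):
    """Combine attribution maps from several model-agnostic explainers
    into one map by normalizing each to [0, 1] and averaging."""
    combined = np.zeros_like(attributions[0], dtype=float)
    for a in attributions:
        a = np.abs(a.astype(float))
        rng = a.max() - a.min()
        combined += (a - a.min()) / rng if rng > 0 else a
    return combined / len(attributions)

# e.g. maps from LIME, SHAP, and occlusion for one image (random stand-ins)
maps = [np.random.rand(32, 32) for _ in range(3)]
print(ensemble_explanation(maps).shape)  # (32, 32)
```
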

Computational Methods in Smart Agriculture

Frontmatter
Bluetooth Low Energy Livestock Positioning for Smart Farming Applications

Device localization provides additional information and context to IoT systems, including Agriculture 4.0 and Smart Farming. However, enabling localization incurs additional requirements and trade-offs that often do not fit into application constraints: use of specific radio technologies and increased communication, computational, and energy costs. This paper presents a localization method that was designed for Smart Farming and applies to a wide range of radio technologies and IoT systems. The method was verified in a real-life IoT system dedicated to monitoring cow health and behavior. In a large multi-path environment with a large number of obstacles, using only 10 anchors, the system achieves an average localization error of 6.3 m. This allows the proposed approach to be used for animal tracking and activity monitoring, which is beneficial for well-being assessment.

Maciej Nikodem
Monitoring the Uniformity of Fish Feeding Based on Image Feature Analysis

The main purpose of the conducted research is the development and experimental verification of methods for the detection of fish feeding, as well as checking its uniformity, in recirculating aquaculture systems (RAS) using machine vision. Particular emphasis has been placed on methods useful for rainbow trout farming. The obtained results, based on the analysis of individual video frames, indicate that the estimation of feeding uniformity in individual RAS-based farming ponds is possible using selected local image features without the necessity of camera calibration. The experimental results have been achieved for images acquired in RAS-based rainbow trout farming ponds and verified using some publicly available video sequences of tilapia and catfish feeding.

Piotr Lech, Krzysztof Okarma, Agata Korzelecka-Orkisz, Adam Tański, Krzysztof Formicki
A New Multi-objective Approach to Optimize Irrigation Using a Crop Simulation Model and Weather History

Optimization of water consumption in agriculture is necessary to preserve freshwater reserves and reduce the environment's burden. Finding optimal irrigation and water resources for crops is necessary to increase the efficiency of water usage. Many optimization approaches maximize crop yield or profit but do not consider the impact on the environment. We propose a machine learning approach based on the crop simulation model WOFOST to assess crop yield and water use efficiency. In our research, we use weather history to evaluate various weather scenarios. The application of multi-criteria optimization based on the non-dominated sorting genetic algorithm-II (NSGA-II) allows users to find the dates and volumes of water for irrigation, maximizing the yield and reducing the total water consumption. In the case study, we compared the effectiveness of NSGA-II with a Monte Carlo search and a real farmer's strategy. Using the NSGA-II algorithm, we showed a decrease in water consumption simultaneously with an increase in sugar-beet yield. For potatoes, our approach yielded a higher crop than the farmer with a similar level of water consumption: the NSGA-II algorithm achieved an increase in yield while water use efficiency remained at the farmer's level. NSGA-II also used water resources more efficiently than the Monte Carlo search and reduced water losses to the lower soil horizons.

Mikhail Gasanov, Daniil Merkulov, Artyom Nikitin, Sergey Matveev, Nikita Stasenko, Anna Petrovskaia, Mariia Pukalchik, Ivan Oseledets
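
The core operation behind NSGA-II's multi-criteria search is non-dominated sorting. The hedged sketch below extracts a Pareto front from a set of (water use, negative yield) objective pairs; the data points are invented for illustration and the full NSGA-II loop (crowding distance, selection, variation) is omitted.

```python
import numpy as np

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(F):
    """Indices of the non-dominated rows of the objective matrix F (n x m)."""
    return [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]

# toy trade-off: objective 1 = water use, objective 2 = negative yield
F = np.array([[3.0, -9.0], [2.0, -7.0], [4.0, -9.5], [2.5, -6.0]])
print(pareto_front(F))  # [0, 1, 2]; point 3 is dominated by point 1
```
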

Computational Optimization, Modelling and Simulation

Frontmatter
Expedited Trust-Region-Based Design Closure of Antennas by Variable-Resolution EM Simulations

The observed growth in the complexity of modern antenna topologies has fostered the widespread employment of numerical optimization methods as the primary tools for the final adjustment of system parameters. This is mainly caused by the insufficiency of traditional design closure approaches, largely based on parameter sweeping. Reliable evaluation of complex antenna structures requires full-wave electromagnetic (EM) analysis. Yet, EM-driven parametric optimization is, more often than not, extremely costly, especially when global search is involved, e.g., performed with population-based metaheuristic algorithms. Over the years, numerous methods of lowering these expenditures have been proposed. Among these, methods exploiting variable-fidelity simulations have gained a certain popularity. Still, such frameworks are predominantly restricted to two levels of fidelity, referred to as coarse and fine models. This paper introduces a reduced-cost trust-region gradient-based algorithm involving variable-resolution simulations, in which the fidelity of the EM analysis is selected from a continuous spectrum of admissible levels. The algorithm is launched with the coarsest discretization level of the antenna under design. As the optimization process converges, for reliability reasons, the model fidelity is increased to reach the highest level at the final stage. The proposed algorithm allows for a significant reduction of the computational cost (up to sixty percent with respect to the reference trust-region algorithm) without compromising the design quality, which is corroborated by thorough numerical experiments involving four broadband antenna structures.

Slawomir Koziel, Anna Pietrenko-Dabrowska, Leifur Leifsson
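
A loose sketch of the idea of coupling a trust-region loop with a continuously tunable model resolution: the fidelity parameter starts at the coarsest level and is refined as steps begin to fail, i.e., as the search converges. The update rules, the fidelity schedule, and the blended test function are all assumptions for illustration, not the authors' algorithm; `f(x, fid)` stands in for an EM solver whose mesh density is controlled by `fid`.

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                     for e in np.eye(len(x))])

def tr_optimize(f, x0, lo_fid=0.2, hi_fid=1.0, radius=1.0, iters=30):
    """Toy trust-region descent with fidelity growing from coarse to fine."""
    x, fid = np.asarray(x0, float), lo_fid
    for _ in range(iters):
        g = numerical_gradient(lambda z: f(z, fid), x)
        cand = x - radius * g / (np.linalg.norm(g) + 1e-12)  # step inside region
        if f(cand, fid) < f(x, fid):
            x, radius = cand, radius * 1.5                   # accept, expand
        else:
            radius *= 0.5                                    # reject, shrink
            fid = min(hi_fid, fid + 0.2)                     # refine the model
    return x

# cheap surrogate test: fidelity blends a biased coarse model into the true one
true = lambda x: np.sum((x - 1.0) ** 2)
f = lambda x, fid: fid * true(x) + (1 - fid) * np.sum((x - 0.8) ** 2)
print(tr_optimize(f, np.zeros(2)))
```
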
Optimum Design of Tuned Mass Dampers for Adjacent Structures via Flower Pollination Algorithm

It is well known that tuned mass dampers (TMDs) are an effective system for structures subjected to earthquake excitations. TMDs can also be used as a protective system for adjacent structures that may pound into each other. With a suitable optimization methodology, it is possible to find an optimally tuned TMD that is effective in reducing the responses of a structure, with the additional protective feature of reducing the required seismic gap between adjacent structures, by using an objective function that considers the displacement of the structures with respect to each other. As the optimization methodology, the flower pollination algorithm (FPA) is used to find the optimum parameters of the TMDs of both structures. The method was evaluated on two 10-story adjacent structures, and the optimum results were compared with a harmony search (HS) based methodology.

Sinan Melih Nigdeli, Gebrail Bekdaş, Xin-She Yang
On Fast Multi-objective Optimization of Antenna Structures Using Pareto Front Triangulation and Inverse Surrogates

Design of contemporary antenna systems is a challenging endeavor, where conceptual developments and initial parametric studies, interleaved with topology evolution, are followed by a meticulous adjustment of the structure dimensions. The latter is necessary to boost the antenna performance as much as possible, and often requires handling several, often conflicting, objectives pertinent to both electrical and field properties of the structure. Unless the designer's priorities are already established, multi-objective optimization (MO) is the preferred way of yielding the most comprehensive information about the best available design trade-offs. Notwithstanding, MO of antennas has to be carried out at the level of full-wave electromagnetic (EM) simulation models, which poses serious difficulties due to the high computational cost of the process. Popular mitigation methods include surrogate-assisted procedures; however, rendering reliable metamodels is problematic in higher-dimensional parameter spaces. This paper proposes a simple yet efficient methodology for multi-objective design of antenna structures, which is based on sequential identification of the Pareto-optimal points using inverse surrogates, and on triangulation of the already acquired Pareto front representation. The two major benefits of the presented procedure are low computational complexity and uniformity of the produced Pareto set, as demonstrated using two microstrip structures, a wideband monopole and a planar quasi-Yagi. In both cases, ten-element Pareto sets are generated at the cost of only a few hundred EM analyses of the respective devices. At the same time, the savings over the state-of-the-art surrogate-based MO algorithm are as high as seventy percent.

Anna Pietrenko-Dabrowska, Slawomir Koziel, Leifur Leifsson
Optimizations of a Generic Holographic Projection Model for GPU’s

Holographic projections are volumetric projections that make use of the wave-like nature of light and may find use in applications such as volumetric displays, 3D printing, lithography, and LIDAR. Modelling different types of holographic projectors is straightforward but challenging due to the large number of samples that are required. Although computing capabilities have improved, recent simulations still have to make trade-offs between accuracy, performance, and level of generalization. Our research focuses on the development of optimizations that make optimal use of modern hardware, allowing larger and higher-quality simulations to be run. Several algorithms are proposed: (1) a brute-force algorithm that can reach 20% of the theoretical peak performance and reached a 43× speedup w.r.t. a previous GPU implementation, and (2) a Monte Carlo algorithm that is another order of magnitude faster but has a lower accuracy. These implementations help researchers to develop and test new holographic devices.

Mark Voschezang, Martin Fransen
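
For intuition, a minimal brute-force holographic projection kernel: the field at each target point is the sum of spherical waves exp(ikr)/r over all source samples. The geometry, wavelength, and function names are illustrative assumptions; the paper's GPU implementation optimizes exactly this kind of quadratically scaling summation.

```python
import numpy as np

def field(sources, amplitudes, targets, wavelength=650e-9):
    """Sum spherical-wave contributions from every source at every target."""
    k = 2 * np.pi / wavelength
    # pairwise distances between targets and sources: shape (n_t, n_s)
    r = np.linalg.norm(targets[:, None, :] - sources[None, :, :], axis=2)
    return (amplitudes[None, :] * np.exp(1j * k * r) / r).sum(axis=1)

rng = np.random.default_rng(0)
src = rng.uniform(-1e-3, 1e-3, (500, 3))           # 500 source samples (m)
tgt = np.column_stack([rng.uniform(-1e-3, 1e-3, (200, 2)),
                       np.full(200, 0.1)])         # target plane at z = 0.1 m
print(np.abs(field(src, np.ones(500), tgt)).shape)  # (200,)
```
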
Similarity and Conformity Graphs in Lighting Optimization and Assessment

Lighting affects everyday life in terms of safety, comfort, and quality of life. On the other hand, it consumes significant amounts of energy. Thanks to the effect of scale, even a small unit improvement in power efficiency yields significant energy and cost savings. Unfortunately, planning a highly optimized lighting installation is a task of high complexity, due to the huge number of variants to be checked. In such circumstances it becomes necessary to use a formal model, applicable to automated bulk processing, which allows finding the best setup or estimating the resultant installation power in an acceptable time, i.e., in hours rather than days. This paper introduces such a formal model, relying on the concepts of similarity and conformity graphs. Examples of their practical application in outdoor lighting planning are also presented. Applying those structures substantially reduces the processing time required for planning large-scale installations.

Artur Basiura, Adam Sędziwy, Konrad Komnata
Pruned Simulation-Based Optimal Sailboat Path Search Using Micro HPC Systems

Simulation-based optimal path search problems are often solved using dynamic programming, which is typically computationally expensive. This can be an issue in a number of cases, including near-real-time autonomous robot or sailboat path planners. We present a solution to this problem which is both effective and (energy) efficient. Its three key elements are presented in detail: an accurate and efficient estimator of the performance measure; two-level pruning, which augments the estimator-based search space reduction with smart simulation and estimation techniques; and an OpenCL-based SPMD parallelisation of the algorithm. The included numerical results show the high accuracy of the estimator (medians of relative estimation errors smaller than 0.003), the high efficacy of the two-level pruning (search space and computing time reduced seventeen to twenty times), and the high parallel speedup (its maximum observed value was almost 40). Combining these effects gives (up to) 782 times faster execution. The proposed approach can be applied to various domains. It can be considered an optimal path planning framework parametrised by a problem-specific performance measure heuristic/estimator.

Roman Dębski, Bartlomiej Sniezynski
Two Stage Approach to Optimize Electricity Contract Capacity Problem for Commercial Customers

The electricity tariffs available to Polish customers depend on the voltage level to which the customer is connected, as well as the contracted capacity in line with the user demand profile. Each consumer, before connecting to the power grid, declares a demand for maximum power, which is considered the contracted capacity. Maximum power is the basis for calculating fixed charges for electricity consumption. Usually, the maximum power for a household user is controlled through a circuit breaker. For industrial and business users, the maximum power is controlled and metered through peak meters. If the peak demand exceeds the contracted capacity, a penalty charge of up to ten times the basic rate is applied to the exceeded amount. In this article, we present a solution for entrepreneurs based on a two-stage approach: predicting maximal load values and the moments of exceeding the contracted capacity in the short term, i.e., up to one month ahead, and then using the forecast to optimize the capacity volume to be contracted in the following month so as to minimize the network charge for exceeding the contracted level. As shown experimentally with two datasets, the application of a multiple-output forecasting artificial neural network model and a genetic algorithm for load optimization delivers significant benefits to the customers.

Rafik Nafkha, Tomasz Ząbkowski, Krzysztof Gajowniczek
Improved Design Closure of Compact Microwave Circuits by Means of Performance Requirement Adaptation

Numerical optimization procedures have been widely used in the design of microwave components and systems. Most often, optimization algorithms are applied at the later stages of the design process to tune the geometry and/or material parameter values. To ensure sufficient accuracy, parameter adjustment is realized at the level of full-wave electromagnetic (EM) analysis, which creates perhaps the most important bottleneck due to the entailed computational expenses. The cost issue hinders the utilization of global search procedures, whereas local routines often fail when the initial design is of insufficient quality, especially in terms of the relationship between the current and the target operating frequencies. This paper proposes a procedure for automated adaptation of the performance requirements, which aims at improving the reliability of the parameter tuning process in challenging situations such as those described above. The procedure temporarily relaxes the requirements to ensure that the existing solution can be improved, and gradually tightens them when close to terminating the optimization process. The amount and the timing of specification adjustment are governed by the design quality at the current design and the convergence status of the algorithm. The proposed framework is validated using two examples of microstrip components (a coupler and a power divider), and is shown to handle well design scenarios that turn out infeasible for conventional approaches, in particular when decent starting points are unavailable.

Slawomir Koziel, Anna Pietrenko-Dabrowska, Leifur Leifsson
Graph-Grammar Based Longest-Edge Refinement Algorithm for Three-Dimensional Optimally p Refined Meshes with Tetrahedral Elements

The finite element method is a popular way of solving engineering problems in geoengineering. Three-dimensional grids employed for approximating the formation layers are often constructed from tetrahedral finite elements. Refinement algorithms that avoid hanging nodes are desirable in order to avoid constrained approximation on broken edges and faces. We present a new mesh refinement algorithm for such tetrahedral grids with the following features: (1) it is a two-level algorithm, refining the elements' faces first, followed by the refinement of the elements' interiors; (2) for the face refinements it employs the graph-grammar based version of the longest-edge refinement algorithm to avoid hanging nodes; and (3) it allows for nearly perfect parallel execution of the second stage, refining the element interiors. We describe the algorithm using the graph-grammar based formalism. We verify the properties of the algorithm by breaking 5,000 tetrahedral elements and checking their angles and proportions. On the generated meshes without hanging nodes, we span polynomial basis functions of the optimal order, selected via a metaheuristic optimization algorithm. We use them for the projection-based interpolation of formation layers.

Albert Mosiałek, Andrzej Szaflarski, Rafał Pych, Marek Kisiel-Dorohinicki, Maciej Paszyński, Anna Paszyńska
Elitism in Multiobjective Hierarchical Strategy

The paper focuses on complex metaheuristic algorithms, namely the multi-objective hierarchical strategy, which consists of a dynamically evolving tree of interdependent demes of individuals. The main contribution presented in this paper is the introduction of elitism in the form of an archive, locally in the demes and globally in the whole tree, and the development of the necessary updates between them. The newly proposed algorithms (utilizing elitism) are compared with their previous versions as well as with the best state-of-the-art multi-objective metaheuristics.

Michał Idzik, Radosław Łazarz, Aleksander Byrski
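
A minimal sketch of the elitist archive mechanism the paper introduces locally and globally: a candidate enters the archive only if no stored solution dominates it, and it evicts the solutions it dominates. Objective tuples are minimized; the data points are illustrative.

```python
def update_archive(archive, candidate):
    """Elitist Pareto archive update (minimization of objective tuples)."""
    dom = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    if any(dom(kept, candidate) for kept in archive):
        return archive                                  # candidate is dominated
    return [kept for kept in archive if not dom(candidate, kept)] + [candidate]

archive = []
for point in [(3, 5), (4, 4), (2, 6), (3, 3), (5, 1)]:
    archive = update_archive(archive, point)
print(archive)  # [(2, 6), (3, 3), (5, 1)]
```
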

Open Access

Modelling and Forecasting Based on Recurrent Pseudoinverse Matrices

Time series modelling and forecasting techniques have a wide spectrum of applications in several fields, including economics, finance, engineering, and computer science. Most available modelling and forecasting techniques are applicable to a specific underlying phenomenon and its properties and lack generality of application, while more general forecasting techniques require substantial computational time for training and application. Herewith, we present a general modelling framework based on a recursive Schur complement technique that utilizes a set of basis functions, either linear or non-linear, to form a model for a general time series. The basis functions need not be orthogonal, and their number is determined adaptively based on fitting accuracy. Moreover, no assumptions are required for the input data. The coefficients for the basis functions are computed using a recursive pseudoinverse matrix, thus they can be recomputed for different input data. The case of sinusoidal basis functions is presented. Discussions around the stability of the resulting model and the choice of basis functions are also provided. Numerical results depicting the applicability and effectiveness of the proposed technique are given.

Christos K. Filelis-Papadopoulos, Panagiotis E. Kyziropoulos, John P. Morrison, Philip O'Reilly
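
A toy version of the fitting step, assuming a sinusoidal basis: the coefficients are obtained from the Moore-Penrose pseudoinverse of the basis matrix. The paper's recursive Schur-complement update is replaced here by a direct `np.linalg.pinv` solve for brevity; the signal and frequencies are made up.

```python
import numpy as np

t = np.linspace(0, 4, 200)
y = 2.0 * np.sin(2 * np.pi * t) + 0.5 * np.cos(4 * np.pi * t) \
    + 0.05 * np.random.randn(200)                  # noisy test signal

freqs = [1.0, 2.0, 3.0]                            # candidate frequencies (Hz)
# basis matrix: sin and cos columns for each candidate frequency
A = np.column_stack([f(2 * np.pi * w * t) for w in freqs for f in (np.sin, np.cos)])
coef = np.linalg.pinv(A) @ y                       # least-squares coefficients
print(np.round(coef, 2))                           # ~ [2, 0, 0, 0.5, 0, 0]
```
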
Semi-analytical Monte Carlo Optimisation Method Applied to the Inverse Poisson Problem

The research is focused on the numerical analysis of the inverse Poisson problem, namely the identification of the unknown (input) load source function, being the right-hand side of the second-order differential equation. It is assumed that additional measurement data of the solution (output) function are available at a few isolated locations inside the problem domain. The problem may be formulated as a non-linear optimisation problem with inequality constraints. The proposed solution approach is based upon the well-known Monte Carlo concept with a random walk technique, approximating the solution of the direct Poisson problem at selected point(s) using series of random simulations. However, since this delivers a linear explicit relation between the input and the output at the measurement locations only, the objective function may be analytically differentiated with respect to the unknown load parameters. Consequently, they may be determined by the solution of a small system of algebraic equations. Therefore, the drawbacks of traditional optimization algorithms, which are computationally demanding, time-consuming, and sensitive to their parameters, may be avoided. The potential power of the proposed approach is demonstrated on selected benchmark problems with various levels of complexity.

Sławomir Milewski
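
A hedged sketch of the underlying walk-on-grid Monte Carlo estimator for the direct problem: for -Δu = f on the unit square with boundary data g, the value at a grid point equals the expected boundary value at exit plus h²/4 times f accumulated along the walk. The test case, names (f, g, h), and step size are illustrative.

```python
import numpy as np

def walk_estimate(p0, f, g, h=0.05, n_walks=20000,
                  rng=np.random.default_rng(1)):
    """Estimate u(p0) for -Laplace(u) = f on the unit square, boundary g."""
    steps = np.array([(h, 0), (-h, 0), (0, h), (0, -h)])
    total = 0.0
    for _ in range(n_walks):
        x, acc = np.array(p0, float), 0.0
        while 0 < x[0] < 1 and 0 < x[1] < 1:        # walk until the boundary
            acc += 0.25 * h * h * f(x)              # source term contribution
            x = x + steps[rng.integers(4)]
        total += g(x) + acc
    return total / n_walks

# test: u(x, y) = x*y is harmonic, so f = 0 and u(0.5, 0.5) = 0.25
print(walk_estimate((0.5, 0.5), f=lambda x: 0.0, g=lambda x: x[0] * x[1]))
```
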
Modeling the Contribution of Agriculture Towards Soil Nitrogen Surplus in Iowa

The Midwest state of Iowa in the US is one of the major producers of corn, soybean, ethanol, and animal products, and has long been known as a significant contributor of nitrogen loads to the Mississippi river basin, supplying the nutrient-rich water to the Gulf of Mexico. Nitrogen is the principal contributor to the formation of the hypoxic zone in the northern Gulf of Mexico with a significant detrimental environmental impact. Agriculture, animal agriculture, and ethanol production are deeply connected to Iowa’s economy. Thus, with increasing ethanol production, high yield agriculture practices, growing animal agriculture, and the related economy, there is a need to understand the interrelationship of Iowa’s food-energy-water system to alleviate its impact on the environment and economy through improved policy and decision making. In this work, the Iowa food-energy-water (IFEW) system model is proposed that describes its interrelationship. Further, a macro-scale nitrogen export model of the agriculture and animal agriculture systems is developed. Global sensitivity analysis of the nitrogen export model reveals that the commercial nitrogen-based fertilizer application rate for corn production and corn yield are the two most influential factors affecting the surplus nitrogen in the soil.

Vishal Raul, Yen-Chen Liu, Leifur Leifsson, Amy Kaleita
An Attempt to Replace System Dynamics with Discrete Rate Modeling in Demographic Simulations

The usefulness of simulation in demographic research has been repeatedly confirmed in the literature. The most common simulation approach to modelling population trends is system dynamics (SD). Difficulties in reliably mapping population changes with the SD approach have, however, been reported by some authors. Another simulation approach, discrete rate modeling (DRM), had not yet been used in population dynamics modelling, despite examples of this approach being used in the modelling of processes with similar internal dynamics. The purpose of our research is to verify whether DRM can compete with the SD approach in terms of accuracy in simulating population changes and the complexity of the model. The theoretical part of the work describes the principles of the DRM approach and provides an overview of the applications of the DRM approach versus other simulation methods. The experimental part permits the conclusion that the DRM approach does not match SD in terms of comprehensive accuracy in mapping the behavior of cohorts of complex populations. We were, however, able to identify criteria for population segmentation that may lead to better results of DRM simulation against SD.

Jacek Zabawa, Bożena Mielczarek
New On-line Algorithms for Modelling, Identification and Simulation of Dynamic Systems Using Modulating Functions and Non-asymptotic State Estimators: Case Study for a Chosen Physical Process

The paper presents an advanced application of computational methodology, with complicated algorithms and calculation methods, dedicated to the optimal identification and simulation of dynamic processes. These models may have an unknown structure (the order of a differential equation) and unknown parameters. The presented methodology uses non-standard algorithms for the identification of continuous-time models that can represent linear and non-linear physical processes. Typical approaches presented in the literature most often utilize discrete-time models. However, for the case of continuous-time differential equation models in which both the parameters and the derivatives of the output variable are unknown, the solution is not easy. In the paper, for the solution of the identification task, the convolution transformation of the differential equation with a special Modulating Function is used. Also, to properly simulate the behaviour of the process based on the obtained model, exact state integral observers with minimal norm are used for the reconstruction of the exact value of the initial conditions (not their estimate). For the multidimensional process case, with multiple control signals (many inputs), additional problems arise that make continuous identification and observation of the state vector (and hence simulation) impossible with standard methods. The application of the above-mentioned methods to solving this problem is also presented. Both algorithms, for parameter identification and state observation, are implemented on-line in two independent but cooperating windows that simultaneously move along the time axis. The presented algorithms are tested using data collected during the heat exchange process in an industrial glass melting installation.

Witold Byrski, Michał Drapała, Jędrzej Byrski
Iterative Global Sensitivity Analysis Algorithm with Neural Network Surrogate Modeling

Global sensitivity analysis (GSA) is a method to quantify the effect of the input parameters on the outputs of physics-based systems. Performing GSA can be challenging due to the combined effect of the high computational cost of each individual physics-based model, a large number of input parameters, and the need to perform repetitive model evaluations. To reduce this cost, neural networks (NNs) are used to replace the expensive physics-based model in this work. This introduces the additional challenge of finding the minimum number of training data samples required to train the NNs accurately. In this work, a new method is introduced to accurately quantify the GSA values by iterating over both the number of samples required to train the NNs, terminated using an outer-loop sensitivity convergence criterion, and the number of model responses required to calculate the GSA, terminated with an inner-loop sensitivity convergence criterion. The iterative surrogate-based GSA guarantees converged values for the Sobol' indices and, at the same time, alleviates the specification of arbitrary accuracy metrics for the surrogate model. The proposed method is demonstrated in two cases, namely, an eight-variable borehole function and a three-variable nondestructive testing (NDT) case. For the borehole function, both the first- and total-order Sobol' indices required 200 and $$10^5$$ data points to terminate on the outer- and inner-loop sensitivity convergence criteria, respectively. For the NDT case, these values were 100 for both the first- and total-order indices for the outer-loop sensitivity convergence, and $$10^6$$ and $$10^3$$, respectively, for the first- and total-order indices for the inner-loop sensitivity convergence. The differences of the proposed method with GSA on the true functions are less than 3% in the analytical case and less than 10% in the physics-based case (where the larger error comes from small Sobol' indices).

Yen-Chen Liu, Jethro Nagawkar, Leifur Leifsson, Slawomir Koziel, Anna Pietrenko-Dabrowska
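
For reference, a compact pick-freeze (Saltelli-type) estimator of the first-order Sobol' indices; in the paper, f would be the trained NN surrogate, while here a cheap analytic stand-in with known indices (0.2, 0.8) is used. The sample sizes and the test function are assumptions.

```python
import numpy as np

def first_order_sobol(f, dim, n=100_000, rng=np.random.default_rng(0)):
    """Saltelli-style pick-freeze estimator of first-order Sobol' indices."""
    A, B = rng.random((n, dim)), rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # replace only the i-th column from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

f = lambda X: X[:, 0] + 2.0 * X[:, 1]  # analytic stand-in for the surrogate
print(np.round(first_order_sobol(f, 2), 2))  # ~ [0.2, 0.8]
```
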
Forecasting Electricity Prices: Autoregressive Hybrid Nearest Neighbors (ARHNN) Method

The ongoing reshaping of electricity markets has significantly stimulated electricity trading. Limitations in storing electricity, as well as on-the-fly changes in demand and supply dynamics, have made price forecasts a fundamental aspect of traders' economic stability and growth. In this perspective, there is a broad literature that focuses on developing methods and techniques to forecast electricity prices. In this paper, we develop a new hybrid method, called ARHNN, for electricity price forecasting (EPF) in day-ahead markets. A well-performing autoregressive model with exogenous variables is the main forecasting instrument in our method. Contrary to the traditional statistical approaches, in which the calibration sample consists of the most recent and successive observations, we employ the k-nearest neighbors (k-NN) instance-based learning algorithm and select the calibration sample based on a similarity (distance) measure over a subset of the autoregressive model's variables. The optimal level of the k-NN parameter is identified during the validation period so that the forecasting error is minimized. We apply our method to the EPEX SPOT market in Germany. The results reveal a significant improvement in accuracy compared to commonly used approaches.

Weronika Nitka, Tomasz Serafin, Dimitrios Sotiros
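
A minimal sketch of the ARHNN calibration idea: rather than fitting the autoregressive model on the most recent window, select the k historical observations most similar to today (k-NN over a subset of regressors) and fit least squares on those. The regressors, data, and value of k are invented for illustration.

```python
import numpy as np

def arhnn_forecast(X, y, x_today, k=200):
    """X: (n_days, p) regressors, y: (n_days,) next-day prices."""
    dist = np.linalg.norm(X - x_today, axis=1)     # similarity measure
    idx = np.argsort(dist)[:k]                     # k most similar days
    Xk = np.column_stack([np.ones(k), X[idx]])     # add an intercept column
    beta, *_ = np.linalg.lstsq(Xk, y[idx], rcond=None)
    return np.concatenate([[1.0], x_today]) @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                     # e.g. lagged price, load, RES
y = X @ np.array([0.6, 0.3, -0.2]) + 0.1 * rng.normal(size=1000)
print(round(arhnn_forecast(X, y, X[-1]), 3))
```
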
Data-Driven Methods for Weather Forecast

In this paper, we propose efficient and practical data-driven methods for weather forecasting. We exploit the information brought by historical weather datasets to build machine-learning-based models. These models are employed to produce numerical forecasts, which can be improved by injecting additional data via data assimilation. The general idea of our approach is as follows: given a set of time snapshots of some dynamical system, we group the data by time across multiple days. These groups are employed to build first-order Markovian models that reproduce the dynamics from time to time. The precision of our numerical models can be improved via sequential data assimilation. Experimental tests are performed using the National-Centers-for-Environmental-Prediction Department-of-Energy Reanalysis II dataset. The results reveal that numerical forecasts can be obtained within reasonable error magnitudes in the $$L_2$$ norm sense and, moreover, that observations can improve forecasts by orders of magnitude in some cases.

Elias David Nino-Ruiz, Felipe J. Acevedo García
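
A toy version of the first-order Markovian building block: fit a linear propagator x_{t+1} ≈ M x_t from paired snapshots by least squares, similar in spirit to dynamic mode decomposition. The two-dimensional synthetic system below stands in for the Reanalysis II fields; the data assimilation step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
true_M = np.array([[0.9, 0.1], [-0.1, 0.95]])
states = [rng.normal(size=2) for _ in range(300)]              # state at hour t
nxt = [true_M @ x + 0.01 * rng.normal(size=2) for x in states]  # state at t+1

Xt, Xt1 = np.array(states).T, np.array(nxt).T   # 2 x n snapshot matrices
M = Xt1 @ np.linalg.pinv(Xt)                    # least-squares propagator
print(np.round(M, 2))                           # ~ true_M
```
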
Generic Case of Leap-Frog Algorithm for Optimal Knots Selection in Fitting Reduced Data

The problem of fitting multidimensional reduced data $$\mathcal{M}_n$$ is discussed here. The unknown interpolation knots $$\mathcal{T}$$ are replaced by optimal knots which minimize a highly non-linear multivariable function $$\mathcal{J}_0$$. The numerical scheme called the Leap-Frog Algorithm is used to compute such optimal knots for $$\mathcal{J}_0$$ via an iterative procedure based in each step on the single-variable optimization of $$\mathcal{J}_0^{(k,i)}$$. The discussion of conditions enforcing the unimodality of each $$\mathcal{J}_0^{(k,i)}$$ is supplemented by illustrative examples referring to the generic case of Leap-Frog. The latter forms a new insight into fitting reduced data and modelling interpolants of $$\mathcal{M}_n$$.

Ryszard Kozera, Lyle Noakes, Artur Wiliński

Open Access

Intelligent Planning of Logistic Networks to Counteract Uncertainty Propagation

A major obstacle to stable and cost-efficient management of goods distribution systems is the bullwhip effect: reinforced demand uncertainty propagating among system nodes. In this work, by solving a formally established optimization problem, it is shown how one can mitigate the bullwhip effect, while at the same time minimizing transportation costs, in modern logistic networks with complex topologies. The flow of resources in the analyzed network is governed by the popular order-up-to inventory policy, which strives to maintain sufficient stock at the nodes to answer a priori unknown, uncertain demand. The optimization objective is to decide how intensively a given transport channel should be used so that unnecessary goods relocation and the bullwhip effect are avoided while demand requests can still be fulfilled. The computationally challenging optimization task is solved using a population-based evolutionary technique: Biogeography-Based Optimization. The results are verified in extensive simulations of a real-world transportation network.

Przemysław Ignaciuk, Adam Dziomdziora
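
To make the mechanism concrete, a hedged simulation of a single node run under an order-up-to policy whose target level tracks a moving-average demand forecast; an order-to-demand variance ratio above one is precisely the bullwhip amplification the paper's optimizer counteracts. All parameters, and the single-node simplification, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
lead, horizon, window = 2, 1000, 4
on_hand, pipeline = 30.0, [10.0] * lead            # stock and in-transit orders
recent, demands, orders = [10.0] * window, [], []
for _ in range(horizon):
    d = max(0.0, rng.normal(10, 2))                # stochastic demand
    on_hand += pipeline.pop(0) - d                 # receive, then ship (backlog allowed)
    recent = recent[1:] + [d]
    target = (lead + 1) * np.mean(recent)          # order-up-to level from forecast
    order = max(0.0, target - (on_hand + sum(pipeline)))
    pipeline.append(order)
    demands.append(d); orders.append(order)
print("bullwhip ratio:", round(np.var(orders) / np.var(demands), 2))  # > 1
```
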
Modeling Traffic Forecasts with Probability in DWDM Optical Networks

Dense wavelength division multiplexed (DWDM) networks enable operators to use the bandwidth offered by a single fiber pair more efficiently and thus make significant savings, both in operational and capital expenditures. In this study, traffic demand pattern forecasts (with probability) for subsequent years are calculated using statistical methods. Based on the results of the statistical analysis, numerical methods are used to calculate the traffic intensity on the edges of a DWDM network, both in terms of the number of channels allocated and the total throughput expressed in gigabits per second. For the calculation of traffic intensity, a model based on mixed integer programming is proposed, which includes a detailed description of optical network resources. The study is performed for a practically relevant network within selected scenarios determined by realistic traffic demand sets.

Stanisław Kozdrowski, Piotr Sliwka, Sławomir Sujecki
Endogenous Factors Affecting the Cost of Large-Scale Geo-Stationary Satellite Systems

This work proposes the use of model-based sensitivity analysis to determine the important internal factors that affect the cost of large-scale complex engineered systems (LSCES), such as geo-stationary communication satellites. A physics-based satellite simulation model and a parametric cost model are combined to model a real-world satellite program whose data is extracted from selected acquisition reports. A variance-based global sensitivity analysis using Sobol' indices computationally aids in establishing the internal factors. The internal factors in this work are associated with the requirements of the program, operations and support, launch, ground equipment, and the personnel required to support and maintain the program. The results show that internal factors such as the system-based requirements affect the cost of the program significantly. These important internal factors will be utilized to create a simulation-based framework that will aid in the design and development of future LSCES.

Nazareen Sikkandar Basha, Leifur Leifsson, Christina Bloebaum
Description of Electricity Consumption by Using Leading Hours Intra-day Model

This paper focuses on the parametrization of one-day time series of electricity consumption. In order to parametrize such time series, a data mining technique was elaborated. The technique is based on multivariate linear regression and is self-configurable; in other words, the user does not need to set any model parameters upfront. The model finds the most essential data points whose values allow the electricity consumption for the remaining hours of the same day to be modelled. The number of data points required to describe the whole time series depends on the demanded precision, which is up to the user. We showed that the model with only four describing variables describes the 20 remaining hours very well, exhibiting a dominant relative error of about 1.5%. It is characterized by high precision and allows non-typical days to be found from the electricity demand point of view.

Krzysztof Karpio, Piotr Łukasiewicz, Rafik Nafkha, Arkadiusz Orłowski
The Problem of Tasks Scheduling with Due Dates in a Flexible Multi-machine Production Cell

In the paper, we consider an NP-hard problem of task scheduling with due dates and penalties for delay in a flexible production cell. Each task should be assigned to one of the cell's machines, and the order of their execution on the machines should be determined. The sum of penalties for the tardiness of task execution should be minimized. We propose to use the tabu search algorithm to solve the problem. Neighborhoods are generated by moves based on changing the order of tasks on a machine and changing the machine on which a task will be performed. We prove properties of the moves that significantly accelerate the search of the neighborhoods and shorten the execution time of the algorithm, which significantly improves its efficiency compared to the version that does not use these properties.

Wojciech Bożejko, Piotr Nadybski, Paweł Rajba, Mieczysław Wodecki
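
A bare-bones tabu search skeleton for the single-machine total-tardiness core of the problem, using swap moves, a short tabu list, and an aspiration criterion; the paper's method additionally uses machine-assignment moves and proven neighbourhood-pruning properties, which this sketch omits. The job data are made up.

```python
jobs = [(4, 9), (3, 7), (7, 25), (2, 6), (5, 14)]   # (processing time, due date)

def tardiness(seq):
    t = total = 0
    for j in seq:
        t += jobs[j][0]
        total += max(0, t - jobs[j][1])
    return total

def tabu_search(iters=200, tenure=5):
    cur = list(range(len(jobs)))
    best, tabu = cur[:], []
    for _ in range(iters):
        moves = [(i, k) for i in range(len(cur)) for k in range(i + 1, len(cur))]
        def after(m):
            s = cur[:]; s[m[0]], s[m[1]] = s[m[1]], s[m[0]]; return s
        def score(m):                      # forbid tabu moves unless they beat the best
            s = tardiness(after(m))
            return float("inf") if m in tabu and s >= tardiness(best) else s
        m = min(moves, key=score)
        cur = after(m)
        tabu = (tabu + [m])[-tenure:]      # fixed-tenure tabu list
        if tardiness(cur) < tardiness(best):
            best = cur[:]
    return best, tardiness(best)

print(tabu_search())
```
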
Discovering the Influence of Interruptions in Cycling Training: A Data Science Study

The usage of wearables in different sports has resulted in the potential to record vast amounts of data that allow us to dive even deeper into sports training. This paper provides a novel approach to classifying stoppage events in cycling and presents an analysis of interruptions in training that are caused when a cyclist encounters a road intersection where he/she must stop while cycling on the track. From 2,629 recorded cycling training sessions, 3,731 viable intersection events were identified, on which an analysis of heart-rate and speed data was performed. It was discovered that individual intersections took an average of 4.08 s, affecting the speed and heart rate of the cyclist before and after the event. We also discovered that, after the intersection disruptions, the speed of the cyclist decreased and the heart rate increased in comparison to the pre-intersection values.

Alen Rajšp, Iztok Fister Jr.
Analysis of Complex Partial Seizure Using Non-linear Duffing Van der Pol Oscillator Model

Complex partial seizures belong to the most common type of epileptic seizures. The main purpose of the case study is the application of the Van der Pol model oscillator to study brain activity during temporal left lobe seizures. The oscillator is characterized by three pairs of parameters: linear, and two nonlinear (cubic and Van der Pol damping). The optimization, based on the normalized power spectra of the model output and a real EEG signal, is performed using a genetic algorithm. The results suggest that the estimated parameter values change during the course of the seizure, according to changes in brain wave generation. In the article, based on the values of the sensitivity factor of the parameters and the sample entropy, the non-stationarity of the considered seizure phases is analyzed. The onset of the seizure and the tangled stage belong to strongly non-stationary processes.

Beata Szuflitowska, Przemyslaw Orlowski
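
A small sketch of the modelling building block, assuming a Duffing-Van der Pol form x'' − μ(1 − x²)x' + αx + βx³ = 0: the oscillator is integrated with SciPy and its power spectrum inspected, mirroring the spectrum-matching step of the paper; all parameter values are invented, and the genetic-algorithm fit to EEG spectra is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, a, b = 1.0, 1.0, 0.5          # damping, linear, cubic parameters (made up)

def rhs(t, s):
    x, v = s
    return [v, mu * (1 - x**2) * v - a * x - b * x**3]

sol = solve_ivp(rhs, (0, 100), [0.1, 0.0], max_step=0.01)
x = sol.y[0][len(sol.y[0]) // 2:]                 # discard the transient
spectrum = np.abs(np.fft.rfft(x - x.mean()))**2   # normalized-spectrum stand-in
print("dominant frequency bin:", np.argmax(spectrum))
```
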

Computational Science in IoT and Smart Systems

Frontmatter
A Review on Visual Programming for Distributed Computation in IoT

Internet-of-Things (IoT) systems are considered one of the most notable examples of complex, large-scale systems. Some authors have proposed visual programming (VP) approaches to address part of their inherent complexity. However, in most of these approaches, the orchestration of devices and system components is still dependent on a centralized unit, preventing higher degrees of dependability. In this work, we perform a systematic literature review (SLR) of the current approaches that provide visual and decentralized orchestration to define and operate IoT systems, reflecting upon a total of 29 proposals. We provide an in-depth discussion of these works and find out that only four of them attempt to tackle this issue as a whole, although still leaving a set of open research challenges. Finally, we argue that addressing these challenges could make IoT systems more fault-tolerant, with an impact on their dependability, performance, and scalability.

Margarida Silva, João Pedro Dias, André Restivo, Hugo Sereno Ferreira
Data Preprocessing, Aggregation and Clustering for Agile Manufacturing Based on Automated Guided Vehicles

Automated Guided Vehicles (AGVs) have become an indispensable component of Flexible Manufacturing Systems. AGVs are also a huge source of information that can be utilised by the data mining algorithms that support the new generation of manufacturing. This paper focuses on data preprocessing, aggregation and clustering in the new generation of manufacturing systems that use the agile manufacturing paradigm and utilise AGVs. The proposed methodology can be used as the initial step for production optimisation, predictive maintenance activities, production technology verification or as a source of models for the simulation tools that are used in virtual factories.

Rafal Cupek, Marek Drewniak, Tomasz Steclik
Comparison of Speech Recognition and Natural Language Understanding Frameworks for Detection of Dangers with Smart Wearables

Wearable IoT devices that can register and transmit the human voice can be invaluable in personal situations, such as summoning assistance in emergency healthcare situations. Such applications would benefit greatly from automated voice analysis to detect and classify voice signals. In this paper, we compare selected Speech Recognition (SR) and Natural Language Understanding (NLU) frameworks for Cloud-based detection of voice-based assistance calls. We experimentally test several services for speech-to-text transcription and intention recognition available on selected large Cloud platforms. Finally, we evaluate the influence of the manner of speaking and ambient noise on the quality of recognition of emergency calls. Our results show that many services can correctly translate voice to text and provide a correct interpretation of caller intent. Still, speech artifacts (tone, accent, diction), which can differ even for one individual in various situations, significantly influence the performance of speech recognition.

Dariusz Mrozek, Szymon Kwaśnicki, Vaidy Sunderam, Bożena Małysiak-Mrozek, Krzysztof Tokarz, Stanisław Kozielski
A Decision Support System Based on Augmented Reality for the Safe Preparation of Chemotherapy Drugs

The preparation of chemotherapy drugs has always presented complex issues and challenges, given the nature of the demand on the one hand and the criticality of the treatments on the other. Chemotherapy involves handling special drugs that require specific precautions. These drugs are toxic and potentially harmful for the people handling them. Their preparation therefore entails particular and complex procedures, including preparation and control, and the relevant control methods are often limited to double visual control. The search for optimization and safety of pharmaco-technical processes leads to the use of new technologies, with the main aim of improving patient care. In this respect, Augmented Reality (AR) technology can be an effective solution to support the control of chemotherapy preparations, and it can be easily adapted to the chemotherapy drug preparation environment. This paper introduces SmartPrep, an innovative decision support system (DSS) for the monitoring of chemotherapy drug preparation. The proposed DSS uses AR technology, through smart glasses, to facilitate and secure the preparation of these drugs. Control of the preparation process is done by voice, since the hands are busy. SmartPrep was co-developed by the research laboratory CRISTAL, the GRITA research group, and the software publisher Computer Engineering.

Sarah Ben Othman, Hayfa Zgaya, Michèle Vasseur, Bertrand Décaudin, Pascal Odou, Slim Hammadi
Metagenomic Analysis at the Edge with Jetson Xavier NX

Nanopore sequencing technologies and devices such as MinION Nanopore enable cost-effective and portable metagenomic analysis. However, performing mobile metagenomics analysis in secluded areas requires computationally and energetically efficient Edge devices capable of running the whole analysis workflow without access to extensive computing infrastructure. This paper presents a study on using Edge devices such as Jetson Xavier NX as a platform for running real-time analysis. In the experiments, we evaluate it both from a performance and energy efficiency standpoint. For the purposes of this article, we developed a sample workflow, where raw nanopore reads are basecalled and later classified with Guppy and Kraken2 software. To provide an overview of the capabilities of Jetson Xavier NX, we conducted experiments in various scenarios and for all available power modes. The results of the study confirm that Jetson Xavier NX can serve as an energy-efficient, performant, and portable device for running real-time metagenomic experiments, especially in places with limited network connectivity, as it supports fully offline workflows. We also noticed that a lot of tools are not optimized to run on such Edge devices, and we see a great opportunity for future development in that area.

Piotr Grzesik, Dariusz Mrozek
Programming IoT-Spaces: A User-Survey on Home Automation Rules

The Internet-of-Things (IoT) has transformed everyday manual tasks into digital and automatable ones, giving way to the birth of several end-user development solutions that attempt to ease the task of configuring and automating IoT systems without requiring prior technical knowledge. While some studies reflect on the automation rules that end-users choose to program into their spaces, they are limited by the number of devices and possible rules that the tool under study supports. There is a lack of systematic research on (1) the automation rules that users wish to configure in their homes, (2) the different ways users state their intents, and (3) the complexity of the rules themselves, without the limitations imposed by specific IoT device systems and end-user development tools. In this paper, we surveyed twenty participants about home automation rules given a standard house model and device list, without limiting their creativity and the resulting automation complexity. We analyzed and systematized the collected 177 scenarios into seven different interaction categories, representing the most common smart home interactions.

Danny Soares, João Pedro Dias, André Restivo, Hugo Sereno Ferreira
Application of the Ant Colony Algorithm for Routing in Next Generation Programmable Networks

New generation 5G technology provides mechanisms for network resource management to efficiently control dynamic bandwidth allocation and assure Quality of Service (QoS) in terms of KPIs (Key Performance Indicators), which is important for delay- or loss-sensitive Internet of Things (IoT) services. To meet such application requirements, network resource management in Software Defined Networking (SDN), supported by Artificial Intelligence (AI) algorithms, provides the solution. In our approach, we propose a solution where AI is responsible for controlling intent-based routing in the SDN network. The paper focuses on algorithms inspired by biology, i.e., the ant colony algorithm for selecting the best routes in a network with an appropriately defined objective function and constraints. The proposed algorithm is compared with a Mixed Integer Programming (MIP) based algorithm and a greedy algorithm. The performance of the above algorithms is tested and compared in several network topologies. The obtained results confirm that the ant colony algorithm is a viable alternative to the MIP and greedy algorithms and provide the basis for further research into its effective application to programmable networks.

Stanisław Kozdrowski, Magdalena Banaszek, Bartosz Jedrzejczak, Mateusz Żotkiewicz, Zbigniew Kopertowski
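
A compact ant-colony routing core on a toy topology: ants choose next hops with probability proportional to pheromone^α · (1/delay)^β, pheromone evaporates each iteration, and the best route found is reinforced. The graph, the delay KPI, and all constants are illustrative assumptions, not the paper's objective function.

```python
import random

graph = {"s": {"a": 2, "b": 1}, "a": {"t": 2}, "b": {"t": 4}, "t": {}}  # edge delays
tau = {(u, v): 1.0 for u in graph for v in graph[u]}                    # pheromone
alpha, beta, rho, n_ants = 1.0, 2.0, 0.1, 20

def walk():
    node, path, cost = "s", [], 0.0
    while node != "t":
        nbrs = list(graph[node])
        w = [tau[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta for v in nbrs]
        v = random.choices(nbrs, weights=w)[0]   # probabilistic next-hop choice
        path.append((node, v)); cost += graph[node][v]; node = v
    return path, cost

random.seed(0)
for _ in range(50):                               # colony iterations
    best_path, best_cost = min((walk() for _ in range(n_ants)), key=lambda r: r[1])
    for e in tau:
        tau[e] *= 1 - rho                         # evaporation everywhere
    for e in best_path:
        tau[e] += 1.0 / best_cost                 # reinforce the best route
print(best_path, best_cost)                       # converges to s-a-t (cost 4.0)
```
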
Scalable Computing System with Two-Level Reconfiguration of Multi-channel Inter-Node Communication

The paper presents the architecture and organization of a reconfigurable inter-node communication system based on hierarchical embedding and logical multi-buses. The communication environment is a physical network with a bus topology or its derivatives (e.g., folded buses, mesh and toroidal bus networks). In the system, multi-channel communication is enforced through the use of tunable signal receivers/transmitters, with the buses or their derivatives being completely passive. In the physical environment, logical components (nodes, channels, paths) are distinguished, on the basis of which mutually separated logical connection networks are created. The embedding used for this purpose is fundamentally different from previous interpretations of this term. Improvement of communication and computational efficiency is achieved by changing the physical network architecture (e.g., the use of folded bus topologies, 2D and 3D networks), as well as at the logical level by grouping system elements (processing nodes and bus channels) or dividing them. As a result, it is possible to ensure uniformity of the communication and computational loads of system components. To enable formal design of the communication system, a method of describing the hierarchy and selecting its organization is proposed. In addition, methods of mathematical notation of bus topologies and the scope of their applications are analyzed. The work ends with a description of simulations and empirical research on the effectiveness of the proposed solutions, which show high flexibility of use and a relatively low implementation price.

Miroslaw Hajder, Piotr Hajder, Mateusz Liput, Janusz Kolbusz
Real-Time Object Detection for Smart Connected Worker in 3D Printing

IoT and smart systems have been introduced into advanced manufacturing, especially 3D printing, with the trend of the fourth industrial revolution. The rapid development of computer vision and IoT devices in recent years has provided a fruitful direction for the development of real-time machine state monitoring. In this study, computer vision technology was adopted into the Smart Connected Worker (SCW) system with the use case of 3D printing. Specifically, artificial intelligence (AI) models were investigated, instead of discrete labor-intensive methods, to monitor the machine state and predict errors and risks for advanced manufacturing. The model achieves accurate supervision in real time, twenty-four hours a day, which can reduce human resource costs significantly. At the same time, the experiments demonstrate the feasibility of adopting AI technology in more aspects of advanced manufacturing.

Shijie Bian, Tiancheng Lin, Chen Li, Yongwei Fu, Mengrui Jiang, Tongzi Wu, Xiyi Hang, Bingbing Li
Object-Oriented Internet Cloud Interoperability

Optimization of industrial processes requires further research on the integration of machine-centric systems with human-centric cloud-based services in the context of newly emerging disciplines, namely the fourth industrial revolution, coined Industry 4.0, and the Industrial Internet of Things. The following research aims at working out a new generic architecture and deployment scenario applicable to that integration. A reactive interoperability relationship between the communicating parties is proposed to deal with network traffic propagation asymmetry or assets' mobility. The described solution, based on the OPC Unified Architecture international standard, relaxes issues related to the real-time multi-vendor environment. The discussion concludes that an embedded gateway software component best suits all requirements, and it has thus been implemented as a composable part of the selected reactive OPC UA framework, which promotes separation of concerns and reusability. The proposals are backed by proof-of-concept reference implementations confirming the possibility of integrating selected cloud services with an OPC UA based cyber-physical system by applying the proposed architecture and deployment scenario. This is in contrast to interconnecting cloud services with a selected OPC UA Server, which limits the PubSub role to data export only.

Mariusz Postół, Piotr Szymczak
Static and Dynamic Comparison of Pozyx and DecaWave UWB Indoor Localization Systems with Possible Improvements

This paper investigates the static and dynamic localization accuracy of two indoor localization systems using Ultra-wideband (UWB) technology: Pozyx and DecaWave DW1000. We present the results of laboratory research demonstrating how these two UWB systems behave in practice. Our research involves static and dynamic tests. The static test was performed in the laboratory using different relative positions of the anchors and the tag. For the dynamic test, we used a robot that followed an EvAAL-based track located between the anchors. Our research revealed that both systems perform below our expectations, and the accuracy of both is worse than declared by their manufacturers. The imperfections are especially apparent in the case of dynamic measurements. Therefore, we propose a set of filters that improves localization accuracy.
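
The paper's concrete filters are not listed in this abstract; as a hedged illustration of the general idea, the sketch below rejects physically implausible jumps and smooths the remaining fixes with a sliding-window average. The window size and jump threshold are invented values.

    # Illustrative position filter, not the authors' filter set: discard
    # implausible jumps, then average the recent fixes.
    from collections import deque
    import math

    class PositionFilter:
        def __init__(self, window=10, max_jump=1.0):  # max_jump in metres (assumed)
            self.history = deque(maxlen=window)
            self.max_jump = max_jump

        def update(self, x, y):
            if self.history:
                px, py = self.history[-1]
                if math.hypot(x - px, y - py) > self.max_jump:
                    return self.estimate()  # reject the outlier fix
            self.history.append((x, y))
            return self.estimate()

        def estimate(self):
            n = len(self.history)
            return (sum(p[0] for p in self.history) / n,
                    sum(p[1] for p in self.history) / n)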

Barbara Morawska, Piotr Lipiński, Krzysztof Lichy, Piotr Koch, Marcin Leplawy
Challenges Associated with Sensors and Data Fusion for AGV-Driven Smart Manufacturing

Data fusion methods make it possible to increase measurement precision by combining information from individual systems as well as from many different subsystems. Moreover, the data obtained in this way enable additional conclusions to be drawn about their operation, e.g., detecting degradation in the work of subsystems. The article focuses on the possibilities of using data fusion in Autonomous Guided Vehicle (AGV) solutions to increase the precision of positioning, navigation, and cooperation with the production environment, including docking. For this purpose, it is proposed that information from other manufacturing subsystems be used. This paper aims to review the current implementation possibilities and to identify the relationships between the various research sub-areas.
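
As one textbook example of such fusion (not a method claimed by the paper), inverse-variance weighting combines two estimates so that the more precise subsystem dominates; the numbers below are invented.

    # Illustrative fusion rule: inverse-variance weighting of two scalar
    # position estimates from different subsystems.
    def fuse(est_a, var_a, est_b, var_b):
        """The lower-variance source receives the larger weight."""
        w_a = 1.0 / var_a
        w_b = 1.0 / var_b
        fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
        fused_var = 1.0 / (w_a + w_b)
        return fused, fused_var

    # e.g. a noisy UWB fix fused with odometry (precise over short distances)
    pos, var = fuse(est_a=10.3, var_a=0.25, est_b=10.0, var_b=0.04)
    print(round(pos, 3), round(var, 3))  # ~10.041, ~0.034 -- odometry dominates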

Adam Ziebinski, Dariusz Mrozek, Rafal Cupek, Damian Grzechca, Marcin Fojcik, Marek Drewniak, Erik Kyrkjebø, Jerry Chun-Wei Lin, Knut Øvsthus, Piotr Biernacki
Dynamic Pricing and Discounts by Means of Interactive Presentation Systems in Stationary Point of Sales

The main purpose of this article is to create a model and simulate the profitability conditions of an interactive presentation system (IPS) with a recommender system (RS) used in a kiosk. 90 million simulations were run in Python with SymPy to address the problem of recommending discounts to clients according to their usage of the IPS.
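
A hedged, toy analogue of such a simulation (the authors' model, parameters, and 90-million-run setup are not reproduced): Monte Carlo sampling of daily kiosk profit under an assumed discount-uptake probability.

    # Hypothetical profitability simulation; margins, uptake probability and
    # the baseline purchase rate are invented for illustration only.
    import random

    def simulate_day(n_clients, base_margin, discount, uptake_prob):
        profit = 0.0
        for _ in range(n_clients):
            if random.random() < uptake_prob:       # client accepts the IPS discount
                profit += base_margin * (1 - discount)
            elif random.random() < 0.30:            # assumed baseline purchase rate
                profit += base_margin
        return profit

    random.seed(42)
    runs = [simulate_day(200, base_margin=10.0, discount=0.15, uptake_prob=0.4)
            for _ in range(10_000)]
    print(sum(runs) / len(runs))  # mean daily profit under these assumptions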

Marcin Lewicki, Tomasz Kajdanowicz, Piotr Bródka, Janusz Sobecki
Profile-Driven Synthetic Trajectories Generation to Enhance Smart System Solutions

Knowledge of the individual trajectories of citizens' mobility in urban space is critical for smart cities. Trajectory data from mobile phone service providers are still difficult to obtain in practice, and legal constraints are among the considerable obstacles. We have designed and implemented a generator of tourist trajectories for objects located in a selected but arbitrary urban area. The generation process is based on the random selection of pre-defined profiles of tourist activity, including mobility patterns. It is possible to generate a practically unlimited number of trajectories, and they may also be directed at certain specific types of behaviour. The large datasets obtained in this way may be used for understanding urban behaviours, calibrating urban models, supporting recommender systems under construction, and testing future smart-city software.
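
A minimal sketch of the profile-driven idea, with invented profiles and points of interest: each profile fixes preferred POI categories and a number of stops, and a trajectory is a random sequence of matching POIs.

    # Hypothetical profile-driven trajectory generator; all profiles and POI
    # coordinates are invented examples, not the paper's data.
    import random

    POIS = {"museum": [(50.06, 19.94), (50.05, 19.93)],
            "park":   [(50.07, 19.91)],
            "cafe":   [(50.06, 19.92), (50.05, 19.95)]}

    PROFILES = {"culture": {"categories": ["museum", "cafe"], "stops": 5},
                "leisure": {"categories": ["park", "cafe"],   "stops": 3}}

    def generate_trajectory(profile_name, rng=random):
        profile = PROFILES[profile_name]
        return [rng.choice(POIS[rng.choice(profile["categories"])])
                for _ in range(profile["stops"])]

    random.seed(0)
    print(generate_trajectory("culture"))  # one synthetic trajectory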

Radosław Klimek, Arkadiusz Olesek
Augmenting Automatic Clustering with Expert Knowledge and Explanations

Cluster discovery from highly-dimensional data is a challenging task that has been studied for years in the fields of data mining and machine learning. Most approaches focus on automating the process, producing clusters that, once discovered, have to be carefully analyzed to assign semantics to their numerical labels. However, it is often the case that explicit, symbolic knowledge about possible clusters is available prior to clustering and can be used to enhance the learning process. More importantly, we demonstrate how a machine learning model can be used to refine the expert knowledge and extend it with the aid of explainable AI algorithms. We present our framework on an artificial, reproducible dataset.
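
Not the authors' framework, but a compact illustration of the clustering-plus-explanation loop: clusters are found automatically, then a shallow surrogate decision tree turns the numerical labels into rules an expert can inspect and refine (scikit-learn is assumed).

    # Illustrative pipeline: automatic clustering followed by a readable
    # surrogate model that approximates the cluster boundaries.
    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Surrogate tree: human-readable rules for each numerical cluster label.
    tree = DecisionTreeClassifier(max_depth=2).fit(X, labels)
    print(export_text(tree, feature_names=["f0", "f1"]))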

Szymon Bobek, Grzegorz J. Nalepa
Renewable Energy-Aware Heuristic Algorithms for Edge Server Selection for Stream Data Processing

The Internet of Things and Edge computing are evolving, bringing data processing closer to the data source and, as a result, closer to the network's edge. This distributed processing can increase energy consumption and carbon footprint. One solution to reduce the environmental impact is to use renewable energy sources, such as photovoltaic panels, to power both cloud and edge servers. Since solar energy is not available at night and can vary with cloudiness, the data centers still rely on conventional energy sources. Any solar energy that exceeds the demand of the computing infrastructure is fed back into the grid. Fluctuations in energy output due to moving clouds can have a negative impact on conventional energy suppliers, as they have to maintain a constant energy supply. This paper presents heuristic algorithms for selecting edge servers for data stream processing that manage renewable energy utilization and smooth out energy fluctuations.
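
As a hedged illustration of this class of heuristics (the paper's actual algorithms are not reproduced), the sketch below routes each incoming stream to the edge server with the largest current solar surplus and falls back to the least-loaded server when no surplus is available; all field names and thresholds are assumptions.

    # Hypothetical renewable-aware server selection heuristic.
    def select_server(servers):
        """servers: list of dicts with 'solar_surplus' (W) and 'load' (0..1)."""
        candidates = [s for s in servers
                      if s["solar_surplus"] > 0 and s["load"] < 0.9]
        if candidates:
            return max(candidates, key=lambda s: s["solar_surplus"])
        return min(servers, key=lambda s: s["load"])  # conventional-energy fallback

    servers = [{"name": "edge-1", "solar_surplus": 120.0, "load": 0.4},
               {"name": "edge-2", "solar_surplus": 0.0,   "load": 0.2},
               {"name": "edge-3", "solar_surplus": 60.0,  "load": 0.95}]
    print(select_server(servers)["name"])  # -> edge-1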

Tomasz Szydlo, Chris Gniady
Dataset for Anomalies Detection in 3D Printing

Nowadays, the Internet of Things plays a significant role in many domains. In particular, Industry 4.0 makes significant use of concepts such as smart sensors and big data analysis. IoT devices are commonly used to monitor industrial machines and detect anomalies in their operation. This paper presents and describes a set of data streams coming from a working 3D printer. Among others, it contains accelerometer data of the printer head, extrusion power, and temperatures of the printer elements. To collect the data, we introduced several printing malfunctions into the 3D model. The resulting dataset can therefore be used for anomaly detection research.
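
A brief example of how such a dataset might be consumed (the file name and column names are assumptions, not the published schema): flag accelerometer samples whose rolling z-score exceeds a 3-sigma threshold.

    # Illustrative anomaly flagging on a printer data stream; the CSV file
    # and its columns are hypothetical placeholders.
    import pandas as pd

    df = pd.read_csv("printer_stream.csv")        # hypothetical file/schema
    roll = df["accel_x"].rolling(window=100)
    z = (df["accel_x"] - roll.mean()) / roll.std()
    df["anomaly"] = z.abs() > 3.0                 # simple 3-sigma rule
    print(df.loc[df["anomaly"], ["timestamp", "accel_x"]].head())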

Tomasz Szydlo, Joanna Sendorek, Mateusz Windak, Robert Brzoza-Woch
Backmatter
Metadata
Title
Computational Science – ICCS 2021
Editors
Prof. Maciej Paszynski
Prof. Dr. Dieter Kranzlmüller
Dr. Valeria V. Krzhizhanovskaya
Prof. Jack J. Dongarra
Prof. Dr. Peter M.A. Sloot
Copyright Year
2021
Electronic ISBN
978-3-030-77970-2
Print ISBN
978-3-030-77969-6
DOI
https://doi.org/10.1007/978-3-030-77970-2
