About this book

The two-volume set LNCS 11973 and 11974 constitutes the revised selected papers from the Third International Conference on Numerical Computations: Theory and Algorithms, NUMTA 2019, held in Crotone, Italy, in June 2019.

This volume, LNCS 11973, consists of 34 full and 18 short papers selected from the papers presented in the special streams and sessions of the conference. The papers in Part I are organized according to the topics of these special sessions: approximation: methods, algorithms, and applications; computational methods for data analysis; first order methods in optimization: theory and applications; high performance computing in modelling and simulation; numbers, algorithms, and applications; optimization and management of water supply.

Table of contents

Frontmatter

Approximation: Methods, Algorithms, and Applications

Frontmatter

Towards an Efficient Implementation of an Accurate SPH Method

A modified version of the Smoothed Particle Hydrodynamics (SPH) method is considered in order to overcome the loss of accuracy of the standard formulation. The summation of Gaussian kernel functions is employed, using the Improved Fast Gauss Transform (IFGT) to reduce the computational cost, while tuning the desired accuracy in the SPH method. This technique, coupled with an algorithmic design for exploiting the performance of Graphics Processing Units (GPUs), makes the method promising, as shown by numerical experiments.

Laura Antonelli, Daniela di Serafino, Elisa Francomano, Francesco Gregoretti, Marta Paliaga
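
As a point of reference for the abstract above, the operation that the IFGT accelerates is the direct summation of Gaussian kernels over all source-target pairs, which costs $$O(NM)$$ for M sources and N targets. A minimal sketch of that naive baseline (illustrative only; variable names and the bandwidth convention are our assumptions, not the authors' implementation):

```python
import numpy as np

def gaussian_kernel_sum(targets, sources, weights, h):
    """Direct O(N*M) summation of Gaussian kernels: the baseline
    operation that the IFGT approximates at reduced cost."""
    # pairwise squared distances between targets and sources
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
    return (weights[None, :] * np.exp(-d2 / h**2)).sum(axis=1)

# toy usage: density-like estimate at 5 points from 100 particles
rng = np.random.default_rng(0)
src = rng.random((100, 2))
w = np.full(100, 1.0 / 100)
print(gaussian_kernel_sum(src[:5], src, w, h=0.1))
```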

A Procedure for Laplace Transform Inversion Based on Smoothing Exponential-Polynomial Splines

Multi-exponential decaying data are very frequent in applications, and a continuous description of this type of data allows the use of mathematical tools for data analysis such as the Laplace Transform (LT). In this work a numerical procedure for the Laplace Transform Inversion (LTI) of multi-exponential decaying data is proposed. It is based on a new fitting model, namely a smoothing exponential-polynomial spline with segments expressed in Bernstein-like bases. A numerical experiment in which an LTI method is applied to our spline model shows that the approach is very promising for the LTI of exponentially decaying data.

Rosanna Campagna, Costanza Conti, Salvatore Cuomo

An Adaptive Refinement Scheme for Radial Basis Function Collocation

In this paper we present an adaptive refinement algorithm for solving elliptic partial differential equations via a radial basis function (RBF) collocation method. The adaptive scheme is based on the use of an error indicator, which is characterized by the comparison of two RBF collocation solutions evaluated on a coarser set and a finer one. This estimate allows us to detect the domain parts that need to be refined by adding points in the selected areas. Numerical results support our study and point out the effectiveness of our algorithm.

Roberto Cavoretto, Alessandra De Rossi

A 3D Efficient Procedure for Shepard Interpolants on Tetrahedra

The need for scattered data interpolation methods in the multivariate framework and, in particular, in the trivariate case, motivates the generalization of the fast algorithm for the triangular Shepard method. A block-based partitioning structure procedure was already applied to make the method very fast in the bivariate setting. Here the searching algorithm is extended: it partitions the domain and the nodes into cubic blocks and finds the nearest neighbor points to be used in the tetrahedral Shepard interpolation.

Roberto Cavoretto, Alessandra De Rossi, Francesco Dell’Accio, Filomena Di Tommaso

Interpolation by Bivariate Quadratic Polynomials and Applications to the Scattered Data Interpolation Problem

As specified by Little [7], the triangular Shepard method can be generalized to higher dimensions and to sets of more than three points. In line with this idea, the hexagonal Shepard method has recently been introduced by combining six-point basis functions with quadratic Lagrange polynomials interpolating on these points, and the approximation error has been derived by adapting, to the case of six points, the technique developed in [4]. As for the triangular Shepard method, the use of appropriate sets of six points is crucial both for the accuracy and for the computational cost of the hexagonal Shepard method. In this paper we discuss some algorithms to find useful six-tuples of points quickly, without the use of any triangulation of the nodes.

Francesco Dell’Accio, Filomena Di Tommaso

Comparison of Shepard’s Like Methods with Different Basis Functions

The problem of reconstructing an unknown function from a finite number of given scattered data is well known and well studied in approximation theory. Several methods have been developed to this end and are successfully applied in different contexts. Due to the need for fast and accurate approximation methods, in this paper we numerically compare some variations of the Shepard method obtained by considering different basis functions.

Francesco Dell’Accio, Filomena Di Tommaso, Domenico Gonnelli
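
For reference, the classical Shepard interpolant underlying the variants compared above is an inverse-distance weighted mean of the data values; the variants differ in the choice of basis (weight) functions. A minimal sketch of the classical form (our own illustrative code, with the exponent p as a free parameter):

```python
import numpy as np

def shepard(x_eval, nodes, values, p=2):
    """Classical Shepard interpolation: the value at x_eval is a
    weighted mean with inverse-distance weights |x - x_i|^(-p)."""
    d = np.linalg.norm(nodes - x_eval, axis=1)
    if np.any(d == 0):                  # x_eval coincides with a node
        return values[np.argmin(d)]
    w = d ** (-p)
    return np.dot(w, values) / w.sum()

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([1.0, 2.0, 3.0])
print(shepard(np.array([0.25, 0.25]), nodes, values))
```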

A New Remez-Type Algorithm for Best Polynomial Approximation

The best approximation problem is a classical topic of approximation theory, and the Remez algorithm is one of the most famous methods for computing minimax polynomial approximations. We present a slight modification of the (second) Remez algorithm in which a new approach to updating the trial reference is considered. In particular, at each step, given the local extrema of the error function of the trial polynomial, the proposed algorithm replaces all the points of the trial reference, considering some “ad hoc” oscillating local extrema and the global extremum (with its adjacent points) of the error function. Moreover, at each step the new trial reference is chosen trying to preserve a sort of equidistribution of the nodes at the ends of the approximation interval. Experiments suggest that this method is particularly appropriate when the number of local extrema of the error function is very large. Several numerical experiments are performed to assess the real performance of the proposed method in the approximation of continuous and Lipschitz continuous functions. In particular, we compare the proposed method for the computation of the best approximant with the algorithm proposed in [17], where an update of the Remez ideas for best polynomial approximation in the context of the chebfun software system is studied.

Nadaniela Egidi, Lorella Fatone, Luciano Misici
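
The characterization driving any Remez-type method is Chebyshev's equioscillation theorem: a polynomial $$p^*$$ of degree $$n$$ is the best uniform approximation of a continuous $$f$$ on $$[a,b]$$ if and only if the error attains its maximum magnitude with alternating signs at (at least) $$n+2$$ points $$x_0<\dots <x_{n+1}$$:

$$ f(x_i) - p^*(x_i) = \sigma \,(-1)^i\, \Vert f - p^*\Vert _\infty , \quad i=0,\dots ,n+1, \ \sigma \in \{-1,1\}. $$

The trial reference mentioned above is precisely a candidate set of such alternation points, updated at every iteration.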

An SVE Approach for the Numerical Solution of Ordinary Differential Equations

The derivative operator is reformulated as a Volterra integral operator of the first kind, so the singular value expansion (SVE) of the kernel of such an integral operator can be used to obtain new numerical methods for solving differential equations. We present these ideas for the solution of initial value problems for first-order ordinary differential equations. In particular, we develop an iterative scheme where the global error in the solution of this problem is gradually reduced at each step. The global error is approximated by using the system of singular functions in the aforementioned SVE. Some experiments are used to show the performance of the proposed numerical method.

Nadaniela Egidi, Pierluigi Maponi
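
The reformulation underlying this method can be written as follows (our notation, stated only to fix ideas): if $$u$$ solves the initial value problem $$u'(t)=f(t,u(t))$$, $$u(a)=u_a$$, then $$v=u'$$ satisfies the first-kind Volterra equation

$$ (Kv)(t) := \int _a^t v(s)\,ds = u(t)-u_a, $$

and the SVE of the kernel of $$K$$ supplies the system of singular functions in which the global error is expanded and reduced.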

Uniform Weighted Approximation by Multivariate Filtered Polynomials

The paper concerns the weighted uniform approximation of a real function on the $$d$$-cube $$[-1,1]^d$$, with $$d>1$$, by means of some multivariate filtered polynomials. These polynomials have been deduced, via tensor product, from certain de la Vallée Poussin type means on $$[-1,1]$$, which generalize the classical delayed arithmetic means of Fourier partial sums. They are based on arbitrary sequences of filter coefficients, not necessarily connected with a smooth filter function. Moreover, in the continuous case, they involve the Jacobi–Fourier coefficients of the function, while in the discrete approximation they use the function's values at a grid of Jacobi zeros. In both cases we state simple sufficient conditions on the filter coefficients and the underlying Jacobi weights in order to get near–best approximation polynomials, having uniformly bounded Lebesgue constants in suitable spaces of locally continuous functions equipped with a weighted uniform norm. The results can be useful in the construction of projection methods for solving Fredholm integral equations whose solutions present singularities on the boundary. Some numerical experiments on the behavior of the Lebesgue constants and some trials on the attenuation of the Gibbs phenomenon are also shown.

Donatella Occorsio, Woula Themistoclakis

Computational Methods for Data Analysis

Frontmatter

A Travelling Wave Solution for Nonlinear Colloid Facilitated Mass Transport in Porous Media

Colloid-facilitated solute transport through porous media is investigated. Sorption on the matrix is modelled by the linear equilibrium isotherm, whereas sorption on colloidal sites is governed by a nonlinear equilibrium/nonequilibrium description. A travelling wave-type solution is obtained to describe the evolution of both the liquid and the colloidal concentration.

Salvatore Cuomo, Fabio Giampaolo, Gerardo Severino

Performance Analysis of a Multicore Implementation for Solving a Two-Dimensional Inverse Anomalous Diffusion Problem

In this work we deal with the solution of a two-dimensional inverse time fractional diffusion equation, involving a Caputo fractional derivative in its expression. Since we deal with a large practical problem with a big domain, starting from an accurate meshless localized collocation method using RBFs, we propose here a fast algorithm, implemented on a multicore architecture, which exploits suitable parallel computational kernels. In more detail, we first developed a C code based on the numerical library LAPACK to perform the basic linear algebra operations and to solve linear systems; then, due to the high computational complexity and the large size of the problem, we propose a parallel algorithm specifically designed for multicore architectures and based on the Pthreads library. A performance analysis shows the accuracy and reliability of our parallel implementation.

Pasquale De Luca, Ardelio Galletti, Giulio Giunta, Livia Marcellino, Marzie Raei

Adaptive RBF Interpolation for Estimating Missing Values in Geographical Data

The quality of datasets is a critical issue in big data mining: more interesting things can be found in datasets of higher quality. The existence of missing values in geographical data worsens the quality of big datasets. To improve data quality, the missing values generally need to be estimated using various machine learning algorithms or mathematical methods such as approximation and interpolation. In this paper, we propose an adaptive Radial Basis Function (RBF) interpolation algorithm for estimating missing values in geographical data. In the proposed method, the samples with known values are considered as the data points, while the samples with missing values are considered as the interpolated points. For each interpolated point, first, a local set of data points is adaptively determined. Then, the missing value of the interpolated point is imputed via RBF interpolation based on the local set of data points. Moreover, the shape factors of the RBF are also adaptively determined by considering the distribution of the local set of data points. To evaluate the performance of the proposed method, we compare it with the commonly used k-Nearest Neighbor (kNN) interpolation and Adaptive Inverse Distance Weighted (AIDW) interpolation, and conduct three groups of benchmark experiments. Experimental results indicate that the proposed method outperforms the kNN and AIDW interpolations in terms of accuracy, but is worse in terms of efficiency.

Kaifeng Gao, Gang Mei, Salvatore Cuomo, Francesco Piccialli, Nengxiong Xu
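
To make the core step concrete, here is a minimal sketch of local RBF interpolation with a Gaussian basis (illustrative only: the fixed neighbor count and the crude shape-factor rule below are stand-ins for the adaptive choices described in the abstract):

```python
import numpy as np

def rbf_impute(query, pts, vals, k=10):
    """Estimate a missing value at `query` from the k nearest known
    samples via Gaussian RBF interpolation."""
    d = np.linalg.norm(pts - query, axis=1)
    idx = np.argsort(d)[:k]                 # local set of data points
    local, f = pts[idx], vals[idx]
    eps = 1.0 / np.mean(d[idx])             # crude adaptive shape factor
    r = np.linalg.norm(local[:, None] - local[None, :], axis=-1)
    A = np.exp(-(eps * r) ** 2)             # interpolation matrix
    c = np.linalg.solve(A, f)               # RBF coefficients
    rq = np.linalg.norm(local - query, axis=1)
    return np.dot(np.exp(-(eps * rq) ** 2), c)

rng = np.random.default_rng(1)
pts = rng.random((200, 2))
vals = np.sin(3 * pts[:, 0]) + pts[:, 1]    # synthetic field
print(rbf_impute(np.array([0.5, 0.5]), pts, vals))
```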

Stochastic Mechanisms of Information Flow in Phosphate Economy of Escherichia coli

In previous work, we have presented a computational model and experimental results that quantify the dynamic mechanisms of auto-regulation in E. coli in response to varying external phosphate levels. In a cycle of deterministic ODE simulations and experimental verification, our model predicts and explores phenotypes with various modifications at the genetic level that can optimise inorganic phosphate intake. Here, we extend our analysis with extensive stochastic simulations at a single-cell level so that noise due to small numbers of certain molecules, e.g., genetic material, can be better observed. For the simulations, we resort to a conservative extension of Gillespie’s stochastic simulation algorithm that can be used to quantify the information flow in the biochemical system. Besides the common time series analysis, we present a dynamic visualisation of the time evolution of the model mechanisms in the form of a video, which is of independent interest. We argue that our stochastic analysis of information flow provides insights for designing more stable synthetic applications that are not affected by noise.

Ozan Kahramanoğulları, Cansu Uluşeker, Martin M. Hancyzc
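
As background, the baseline that the paper extends is Gillespie's direct method. A minimal sketch for a single birth-death (production/degradation) species is given below (generic textbook form; the conservative extension for quantifying information flow is not reproduced here):

```python
import numpy as np

def gillespie(x0, k_prod, k_deg, t_end, seed=0):
    """Gillespie direct method for: 0 ->(k_prod) X, X ->(k_deg) 0."""
    rng = np.random.default_rng(seed)
    t, x, traj = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a = np.array([k_prod, k_deg * x])   # reaction propensities
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)      # waiting time to next event
        r = rng.choice(2, p=a / a0)         # which reaction fires
        x += 1 if r == 0 else -1
        traj.append((t, x))
    return traj

print(gillespie(x0=0, k_prod=5.0, k_deg=0.5, t_end=10.0)[-1])
```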

NMR Data Analysis of Water Mobility in Wheat Flour Dough: A Computational Approach

Understanding the breadmaking process requires understanding the changes in water mobility of the dough. The dough ingredients, as well as the processing conditions, determine the structure of baked products, which in turn is responsible for their appearance, texture, taste and stability. The transition from wheat flour to dough is a complex process in which several transformations take place, including those associated with changes in water distribution [13]. The molecular mobility of water in foods can be studied with proton Nuclear Magnetic Resonance (1H NMR). In this study, the measured transverse relaxation times (T2) were considered to investigate wheat dough development during mixing. The interactions of the flour polymers with water during mixing reduce water mobility and result in different molecular mobilities in the dough. The molecular dynamics in heterogeneous systems are very complex. From a mathematical point of view, the NMR relaxation decay is generally modelled by the linear superposition of a few exponential functions of the relaxation times. This model can be too rough, and classical fitting approaches can fail to describe the physical reality. A more appealing procedure consists of describing the NMR relaxation decay in integral form by the Laplace transform [2]. In this work a discrete Inverse Laplace Transform procedure is considered to obtain the relaxation time distribution of a dataset provided as a case study.

Annalisa Romano, Rosanna Campagna, Paolo Masi, Gerardo Toraldo

First Order Methods in Optimization: Theory and Applications

Frontmatter

A Limited Memory Gradient Projection Method for Box-Constrained Quadratic Optimization Problems

Gradient Projection (GP) methods are a very popular tool to address box-constrained quadratic problems, thanks to their simple implementation and low computational cost per iteration with respect, for example, to Newton approaches. It is however possible to include in GP schemes some second order information about the problem by means of a clever choice of the steplength parameter, which controls the decrease along the anti-gradient direction. Borrowing the analysis developed by Barzilai and Borwein (BB) for unconstrained quadratic programming problems, in 2012 Roger Fletcher proposed a limited memory steepest descent (LMSD) method able to effectively sweep the spectrum of the Hessian matrix of the quadratic function to optimize. In this work we analyze how to extend Fletcher's steplength selection rule to GP methods employed to solve box-constrained quadratic problems. In particular, we suggest a way to take into account the lower and upper bounds in the steplength definition, providing also a theoretical and numerical evaluation of our approach.

Serena Crisci, Federica Porta, Valeria Ruggiero, Luca Zanni
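
As a baseline for the class of methods discussed above, here is a minimal gradient projection iteration with a Barzilai-Borwein (BB1) steplength on a box-constrained quadratic (our illustrative sketch; the limited memory rule of the paper is not reproduced):

```python
import numpy as np

def gp_bb(A, b, lo, hi, x0, iters=100):
    """Gradient projection with a BB1 steplength for
    min 0.5 x'Ax - b'x  subject to  lo <= x <= hi."""
    x = np.clip(x0, lo, hi)
    g = A @ x - b
    alpha = 1.0
    for _ in range(iters):
        x_new = np.clip(x - alpha * g, lo, hi)   # projected step
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        if s @ y > 0:
            alpha = (s @ s) / (s @ y)            # BB1 steplength
        x, g = x_new, g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(gp_bb(A, b, lo=0.0, hi=0.4, x0=np.zeros(2)))
```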

A Gradient-Based Globalization Strategy for the Newton Method

The Newton method is one of the most powerful methods for the solution of smooth unconstrained optimization problems. It has local quadratic convergence in a neighborhood of a local minimum where the Hessian is positive definite and Lipschitz continuous. Several strategies have been proposed in order to achieve global convergence. They are mainly based either on the modification of the Hessian together with a line search or on the adoption of a restricted-step strategy. We propose a globalization technique that combines the Newton and gradient directions, producing a descent direction on which a backtracking Armijo line search is performed. Our work is motivated by the effectiveness of gradient methods using suitable spectral step-length selection rules. We prove global convergence of the resulting algorithm, and quadratic rate of convergence under suitable second-order optimality conditions. A numerical comparison with a modified Newton method exploiting Hessian modifications shows the effectiveness of our approach.

Daniela di Serafino, Gerardo Toraldo, Marco Viola
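
The abstract above combines the Newton and gradient directions into a single descent direction; the sketch below uses the simpler safeguard of falling back to the anti-gradient when the Newton direction fails the descent test, purely to illustrate the descent-plus-Armijo mechanism (not the authors' combination rule):

```python
import numpy as np

def globalized_newton(f, grad, hess, x, tol=1e-8, max_iter=50):
    """Newton's method safeguarded by a descent check and a
    backtracking Armijo line search."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        try:
            d = np.linalg.solve(hess(x), -g)    # Newton direction
        except np.linalg.LinAlgError:
            d = -g
        if g @ d >= 0:
            d = -g                              # fall back to anti-gradient
        t = 1.0                                 # Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x

# Rosenbrock test problem
f = lambda x: (x[0] - 1) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1) - 400 * x[0] * (x[1] - x[0] ** 2),
                           200 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array([[2 - 400 * x[1] + 1200 * x[0] ** 2, -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(globalized_newton(f, grad, hess, np.array([-1.2, 1.0])))
```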

On the Steplength Selection in Stochastic Gradient Methods

This paper deals with the steplength selection in stochastic gradient methods for large scale optimization problems arising in machine learning. We introduce an adaptive steplength selection derived by tailoring a limited memory steplength rule, recently developed in the deterministic context, to the stochastic gradient approach. The proposed steplength rule provides values within an interval whose bounds need to be prefixed by the user. A suitable choice of the interval bounds allows the method to perform similarly to the standard stochastic gradient method equipped with the best-tuned steplength. Since the setting of the bounds only slightly affects the performance, the new rule makes parameter tuning less expensive than the choice of the optimal prefixed steplength in the standard stochastic gradient method. We evaluate the behaviour of the proposed steplength selection in training binary classifiers on well known data sets and using different loss functions.

Giorgia Franchini, Valeria Ruggiero, Luca Zanni

Efficient Block Coordinate Methods for Blind Cauchy Denoising

This paper deals with the problem of image blind deconvolution in the presence of Cauchy noise, a type of non-Gaussian, impulsive degradation which frequently appears in engineering and biomedical applications. We consider a regularized version of the corresponding data fidelity function, obtained by adding the total variation regularizer on the image and a Tikhonov term on the point spread function (PSF). The resulting objective function is nonconvex with respect to both the image and the PSF block, which leads to the presence of several uninteresting local minima. We propose to tackle such a challenging problem by means of a block coordinate linesearch based forward-backward algorithm suited for nonsmooth nonconvex optimization. The proposed method allows performing multiple forward-backward steps on each block of variables, as well as adopting variable steplengths and scaling matrices to accelerate the progress towards a stationary point. The convergence of the scheme is guaranteed by imposing a linesearch procedure at each inner step of the algorithm. We provide some practically sound rules to adaptively choose both the variable metric parameters and the number of inner iterations for each block. Numerical experiments show how the proposed approach delivers better performance in terms of efficiency and accuracy when compared to a more standard block coordinate strategy.

Simone Rebegoldi, Silvia Bonettini, Marco Prato

High Performance Computing in Modelling and Simulation

Frontmatter

A Parallel Software Platform for Pathway Enrichment

Biological pathways are complex networks able to provide a view of the interactions among bio-molecules inside the cell. They are represented as a network, where the nodes are the bio-molecules and the edges represent the interactions between two bio-molecules. The main online repositories of pathway information include KEGG, a repository of metabolic pathways; SIGNOR, which comprises primarily signaling pathways; and Reactome, which contains information about metabolic and signal transduction pathways. Pathway enrichment analysis helps researchers discriminate the relevant proteins involved in the development of both simple and complex diseases, and is performed with several software tools. The main limitations of the current enrichment tools are: (i) each tool can use only a single pathway source to compute the enrichment; (ii) researchers have to repeat the enrichment analysis several times with different tools (able to get pathway data from different data sources); (iii) enrichment results have to be manually merged by the user, a tedious and error-prone task even for a computer scientist. To address these issues, we propose a parallel enrichment tool named Parallel Enrichment Analysis (PEA), able to simultaneously retrieve pathway information from the KEGG, Reactome, and SIGNOR databases and to automatically perform pathway enrichment analysis, reducing the computational time by some orders of magnitude and automatically merging the results.

Giuseppe Agapito, Mario Cannataro

Hierarchical Clustering of Spatial Urban Data

The growth of the volume of data collected in urban contexts opens the way to their exploitation for improving citizens' quality of life and city management issues, like resource planning (water, electricity), traffic, air and water quality, public policy and public safety services. Moreover, due to the large-scale diffusion of GPS and scanning devices, most of the available data are geo-referenced. Considering such an abundance of data, a very desirable and common task is to identify homogeneous regions in spatial data by partitioning a city into uniform regions based on pollution density, mobility spikes, crimes, or other characteristics. Density-based clustering algorithms have been shown to be very suitable for detecting density-based regions, i.e. areas in which urban events occur with higher density than in the remainder of the dataset. Nevertheless, an important issue of such algorithms is that, due to the adoption of global parameters, they fail to identify clusters with varied densities, unless the clusters are clearly separated by sparse regions. In this paper we provide a preliminary analysis of how hierarchical clustering can be used to discover spatial clusters of different densities in spatial urban data. The algorithm can automatically estimate the areas of data having different densities and the parameters for each cluster, reducing the need for human intervention or domain knowledge.

Eugenio Cesario, Andrea Vinci, Xiaotian Zhu

Improving Efficiency in Parallel Computing Leveraging Local Synchronization

In a parallel computing scenario, a complex task is typically split among many computing nodes, which perform portions of the task in parallel. Except for a very limited class of applications, computing nodes need to coordinate with each other in order to carry out the parallel execution in a consistent way. As a consequence, a synchronization overhead arises, which can significantly impair the overall execution performance. Typically, synchronization is achieved by adopting a centralized synchronization barrier involving all the computing nodes. In many application domains, though, this kind of global synchronization can be relaxed and a leaner synchronization schema, namely local synchronization, can be exploited. Using local synchronization, each computing node needs to synchronize only with a subset of the other computing nodes. In this work, we evaluate the performance of the local synchronization mechanism compared to the global synchronization approach. As the key performance indicator we consider the efficiency index, defined as the ratio between useful computation time and total computation time, including the synchronization overhead. The efficiency trend is evaluated both analytically and through numerical simulation.

Franco Cicirelli, Andrea Giordano, Carlo Mastroianni
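
In symbols, writing $$T_{\mathrm {comp}}$$ for the useful computation time and $$T_{\mathrm {sync}}$$ for the synchronization overhead, the efficiency index defined above is

$$ E = \frac{T_{\mathrm {comp}}}{T_{\mathrm {comp}}+T_{\mathrm {sync}}}, $$

so local synchronization improves $$E$$ exactly insofar as it reduces $$T_{\mathrm {sync}}$$.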

A General Computational Formalism for Networks of Structured Grids

Extended Cellular Automata (XCA) represent one of the best-known parallel computational paradigms for the modeling and simulation of complex systems on stenciled structured grids. However, the formalism does not perfectly lend itself to the modeling of multiple automata where two or more models co-evolve by interchanging information and by synchronizing during the dynamic evolution of the system. Here we propose the Extended Cellular Automata Network (XCAN) formalism, an extension of the original XCA paradigm in which different automata are described by means of a graph, with vertices representing automata and inter-relations modeled by a set of edges. The formalism is applied to the modeling of a theoretical 2D/3D coupled system, where space/time variance and synchronization aspects are pointed out.

Donato D’Ambrosio, Paola Arcuri, Mario D’Onghia, Marco Oliverio, Rocco Rongo, William Spataro, Andrea Giordano, Davide Spataro, Alessio De Rango, Giuseppe Mendicino, Salvatore Straface, Alfonso Senatore

Preliminary Model of Saturated Flow Using Cellular Automata

A fully coupled surface-to-groundwater hydrological model is being developed based on the Extended Cellular Automata (XCA) formalism, which proves to be very suitable for high performance computing. In this paper, a preliminary module for three-dimensional saturated flow in porous media is presented and implemented using the OpenCAL parallel software library. This allows the exploitation of distributed systems with heterogeneous computational devices. The proposed model is evaluated in terms of both accuracy and precision of the modeling results and of computational performance, using single-layered three-dimensional test cases at different resolutions (from local to regional scale), simulating pumping from one or more wells, river-groundwater interactions and varying soil hydraulic properties. Model accuracy is assessed against analytical solutions, when available, or numerical ones (MODFLOW 2005), while the computational performance is evaluated using an Intel Xeon CPU socket. Overall, the XCA-based model proves to be accurate and, above all, computationally very efficient thanks to the many options and tools available in the OpenCAL library.

Alessio De Rango, Luca Furnari, Andrea Giordano, Alfonso Senatore, Donato D’Ambrosio, Salvatore Straface, Giuseppe Mendicino

A Cybersecurity Framework for Classifying Non Stationary Data Streams Exploiting Genetic Programming and Ensemble Learning

Intrusion detection systems have to cope with many challenging problems, such as unbalanced datasets, fast data streams and frequent changes in the nature of the attacks (concept drift). To this aim, a distributed genetic programming (GP) tool is used here to generate the combiner function of an ensemble; this tool does not need a heavy additional training phase once the classifiers composing the ensemble have been trained, and it can hence react quickly to concept drift, also in the case of fast-changing data streams. The above approach is integrated into a novel cybersecurity framework for classifying non-stationary and unbalanced data streams. The framework provides mechanisms for detecting drifts and for replacing classifiers, which permits building the ensemble incrementally. Tests conducted on real data have shown that the framework is effective both in detecting attacks and in reacting quickly to concept drifts.

Gianluigi Folino, Francesco Sergio Pisani, Luigi Pontieri

A Dynamic Load Balancing Technique for Parallel Execution of Structured Grid Models

Partitioning the computational load over different processing elements is a crucial issue in parallel computing. This is particularly relevant in the parallel execution of structured grid computational models, such as Cellular Automata (CA), where the domain space is partitioned into regions assigned to the parallel computing nodes. In this work, we present a dynamic load balancing technique that provides performance improvements for structured grid model execution on distributed memory architectures. First tests implemented using the MPI technology have shown the effectiveness of the proposed technique in appreciably reducing execution times with respect to non-balanced parallel versions.

Andrea Giordano, Alessio De Rango, Rocco Rongo, Donato D’Ambrosio, William Spataro

Final Sediment Outcome from Meteorological Flood Events: A Multi-modelling Approach

Coastal areas are more and more exposed to the effects of climate change. Intense local rainfall increases the frequency of flash floods and/or flow-like subaerial and, afterwards, submarine landslides. The overall phenomenon of a flash flood is complex and involves different, strongly connected phases: heavy precipitation in a short period of time, soil erosion, fan deltas forming at the river mouth, and hyperpycnal flows and/or landslides. Such interrelated phases were separately modelled for simulation purposes by different computational models: Partial Differential Equation methods for weather forecasts and sediment production estimation, and Cellular Automata for soil erosion by rainfall and for subaerial sediment transport and deposit. Our aim is to complete the model with the last phase, the final sediment outcome. This research starts from the results of the previous models and introduces the processes concerning the demolition of fan deltas by sea waves during a sea storm, the subsequent transport of suspended sediments by currents at the end of the sea storm, and their deposition and eventual flow on the sea bed. A first reduced implementation of the new model SCIDDICA-ss2/w&c1 was applied to a partial reconstruction of the 2016 Bagnara case, regarding the meteorological conditions and the flattening of Sfalassà's fan delta.

Valeria Lupiano, Claudia R. Calidonna, Elenio Avolio, Salvatore La Rosa, Giuseppe Cianflone, Antonio Viscomi, Rosanna De Rosa, Rocco Dominici, Ines Alberico, Nicola Pelosi, Fabrizio Lirer, Salvatore Di Gregorio

Parallel Algorithms for Multifractal Analysis of River Networks

The dynamical properties of many natural phenomena can be related to their support fractal dimension. A relevant example is the connection between the flood peaks produced in a river basin, as observed in flood hydrographs, and the multifractal spectrum of the river itself, according to the Multifractal Instantaneous Unit Hydrograph (MIUH) theory. Typically, the multifractal analysis of river networks is carried out by sampling large collections of points belonging to the river basin and analyzing the fractal dimensions and the Lipschitz-Hölder exponents of singularities through numerical procedures which involve different degrees of accuracy (box-counting techniques, the generalized correlation integral method by Pawelzik and Schuster (1987), and the fixed-mass algorithms by Badii and Politi (1985) being some relevant examples). However, higher accuracy in the determination of the fractal dimensions requires considerably higher computational times. For this reason, we recently developed a parallel version of some of the multifractal methods cited above using the MPI parallel library, reaching almost optimal speed-ups. This supplies a tool for assessing the fractal dimensions of river networks (as well as of several other natural phenomena whose embedding dimension is 2 or 3) on massively parallel clusters or multi-core workstations.

Leonardo Primavera, Emilia Florio

A Methodology Approach to Compare Performance of Parallel Programming Models for Shared-Memory Architectures

The majority of current HPC applications are composed of complex and irregular data structures that involve techniques such as linear algebra, graph algorithms, and resource management, for which new platforms with varying computation-unit capacity and features are required. Platforms using several cores with different performance characteristics make the selection of the best programming model, based on the corresponding executing algorithm, a challenge. Approaches in the literature range from comparing the programming models' primitives in isolation to evaluating complete sets of benchmarks. Our study shows that none of them may provide enough information for an HPC application to make a programming model selection. In addition, modern platforms are modifying the memory hierarchy, evolving to larger shared and private caches or NUMA regions, making the memory wall an issue to consider depending on the memory access patterns of applications. In this work, we propose a methodology based on Parallel Programming Patterns to consider intra- and inter-socket communication. In this sense, we analyze MPI, OpenMP and the hybrid solution MPI/OpenMP in shared-memory environments. We demonstrate that the proposed comparison methodology may give more accurate performance predictions for given HPC applications and is consequently a useful tool for selecting the appropriate parallel programming model.

Gladys Utrera, Marisa Gil, Xavier Martorell

Numbers, Algorithms, and Applications

Frontmatter

New Approaches to Basic Calculus: An Experimentation via Numerical Computation

The introduction of the first elements of calculus, both in the first university year and in the last class of high school, presents many problems both in Italy and abroad. Emblematic are the (numerous) cases in which students decide to change their course of study or give it up completely because of the difficulties with the first exam in mathematics, which usually deals with basic calculus. This work concerns an educational experimentation involving (with differentiated methods) about 170 students, some at the IPS “F. Besta” in Treviso (IT), with the main focus on two fifth-year classes (students about 19 years old), and some at the Liceo Classico Scientifico “XXV Aprile” in Pontedera, province of Pisa (IT). The experimental project aims to explore the teaching potential offered by non-classical approaches to calculus jointly with the so-called “unimaginable numbers”. In particular, we employed the computational method recently proposed by Y.D. Sergeyev, widely used in mathematics, in the applied sciences and, recently, also for educational purposes. The paper illustrates the tools, the investigation methodologies, the collected data (before and after the teaching unit), and the results of various class tests.

Luigi Antoniotti, Fabio Caldarola, Gianfranco d’Atri, Marco Pellegrini

A Multi-factor RSA-like Scheme with Fast Decryption Based on Rédei Rational Functions over the Pell Hyperbola

We propose a generalization of an RSA-like scheme based on Rédei rational functions over the Pell hyperbola. Instead of a modulus which is a product of two primes, we define the scheme on a multi-factor modulus, i.e. on a product of more than two primes. This results in a scheme whose decryption is quadratically faster, in the number of primes factoring the modulus, than the original RSA, while preserving better security. The scheme reaches its best efficiency advantage over RSA for high security levels, since in these cases the modulus can contain more primes. Compared to analogous schemes based on elliptic curves, such as the KMOV cryptosystem, the proposed scheme is more efficient. Furthermore, a variant of the scheme with larger ciphertext size does not suffer from impossible group operation attacks, as happens for schemes based on elliptic curves.

Emanuele Bellini, Nadir Murru
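
The source of the quadratic decryption speed-up is the classical Chinese Remainder Theorem trick: with a modulus of r primes, exponentiations are performed modulo each (much smaller) prime and then recombined. The sketch below shows this for plain multi-prime RSA (standard textbook material; the scheme of the paper replaces modular powers with Rédei rational functions over the Pell hyperbola):

```python
from math import prod

def crt_decrypt(c, d, primes):
    """Decrypt c = m^e mod N, N = p1*...*pr, by exponentiating
    modulo each prime and recombining with the CRT."""
    N = prod(primes)
    # exponents reduced via Fermat's little theorem
    residues = [pow(c % p, d % (p - 1), p) for p in primes]
    m = 0
    for p, r in zip(primes, residues):
        q = N // p
        m += r * q * pow(q, -1, p)      # CRT recombination
    return m % N

primes = [101, 103, 107]                # toy 3-prime modulus
N = prod(primes)
phi = prod(p - 1 for p in primes)
e = 7
d = pow(e, -1, phi)
msg = 424242
assert crt_decrypt(pow(msg, e, N), d, primes) == msg
print("ok")
```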

Paradoxes of the Infinite and Ontological Dilemmas Between Ancient Philosophy and Modern Mathematical Solutions

The concept of infinity had, in ancient times, an indistinguishable development between mathematics and philosophy. We could also say that its real birth and development took place in Magna Graecia, the ancient South of Italy, and it is surprising to find, in that time, a notable convergence not only of the mathematical and philosophical points of view, but also of what resembles the first “computational approach” to “infinitely” or very large numbers, due to Archimedes. On the other hand, since the birth of philosophy in ancient Greece, the concept of the infinite has been closely linked with that of contradiction and, more precisely, with the intellectual effort to overcome contradictions present in an account of Totality as fully grounded. The present work illustrates the ontological and epistemological nature of the paradoxes of the infinite, focusing on the theoretical frameworks of Aristotle, Kant and Hegel, and connecting the epistemological issues about the infinite to concepts such as the continuum in mathematics.

Fabio Caldarola, Domenico Cortese, Gianfranco d’Atri, Mario Maiolo

The Sequence of Carboncettus Octagons

Considering the classic Fibonacci sequence, we present in this paper a geometric sequence attached to it, where the word “geometric” must be understood in a literal sense: for every Fibonacci number $$F_n$$ we construct an octagon $$C_n$$ that we call the n-th Carboncettus octagon, and in this way we obtain a new sequence $$\big \{C_n \big \}_{n}$$ consisting not of numbers but of geometric objects. The idea of this sequence draws inspiration from far away, in particular from a portal visible today in the Cathedral of Prato, a supposed work of Carboncettus marmorarius, dating back to the century before the writing of the Liber Abaci by Leonardo Pisano, called Fibonacci (AD 1202). It is also very important to note that, if further evidence is found in support of the historical reality of a Carboncettus-like construction, this would mean that Fibonacci numbers were known and used well before 1202. After the presentation of the sequence $$\big \{C_n\big \}_{n}$$, we give some numerical examples of the metric characteristics of the first few Carboncettus octagons, and we also begin to discuss some general and peculiar properties of the new sequence.

Fabio Caldarola, Gianfranco d’Atri, Mario Maiolo, Giuseppe Pirillo

On the Arithmetic of Knuth’s Powers and Some Computational Results About Their Density

The paper deals with the so-called “unimaginable numbers”. In particular, we consider some arithmetic and computational aspects of Knuth's powers notation and take some first steps in the investigation of their density. Many authors adopt the convention that unimaginable numbers start immediately after 1 googol, which is equal to $$10^{100}$$, and G.R. Blakley and I. Borosh calculated that there are exactly 58 integers between 1 and 1 googol having a nontrivial “kratic representation”, i.e., expressible nontrivially as Knuth's powers. In this paper we extend their computations, obtaining, for example, that there are exactly 2 893 numbers smaller than $$10^{10\,000}$$ with a nontrivial kratic representation, and we moreover investigate the behavior of some functions, called krata, obtained by fixing at most two arguments in the Knuth's power $$a\!\uparrow ^b\!c$$.

Fabio Caldarola, Gianfranco d’Atri, Pietro Mercuri, Valerio Talamanca
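
For concreteness, Knuth's powers are defined by the recursion $$a\uparrow ^1 c = a^c$$, $$a\uparrow ^b 1 = a$$ and $$a\uparrow ^b c = a\uparrow ^{b-1}\big (a\uparrow ^b (c-1)\big )$$. A direct sketch (values explode so quickly that only tiny arguments terminate in practice):

```python
def knuth(a, b, c):
    """Knuth's power a ^(b) c (up-arrow notation)."""
    if b == 1:
        return a ** c
    if c == 1:
        return a
    return knuth(a, b - 1, knuth(a, b, c - 1))

print(knuth(3, 1, 3))   # 3^3 = 27
print(knuth(3, 2, 3))   # 3^^3 = 3^(3^3) = 7625597484987
print(knuth(2, 3, 3))   # 2^^^3 = 2^^4 = 65536
```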

Combinatorics on n-sets: Arithmetic Properties and Numerical Results

The following claim was one of Paul Erdős's favorite “initiation questions” to mathematics: for every non-zero natural number n, each subset of $$I(2n)=\{1,2,\dots ,2n\}$$ having size $$n+1$$ contains at least two distinct elements of which the smaller divides the larger. This can be proved using the pigeonhole principle. On the other hand, it is easy to see that there are subsets of I(2n) of size n without divisor-multiple pairs; we call them n-sets, and we study some of their combinatorial properties, also giving some numerical results. In particular, we give a precise description of the elements that, for a fixed n, do not belong to any n-set, as well as the elements that belong to all the n-sets. Furthermore, we give an algorithm to count the n-sets for a given n and, in this way, observe the behavior of the sequence a(n) of the number of n-sets. We present some different versions of the algorithm, along with their performance, and we finally show our numerical results, that is, the first 200 values of the sequence a(n) and of the sequence $$q(n):=a(n+1)/a(n)$$.

Fabio Caldarola, Gianfranco d’Atri, Marco Pellegrini
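
A brute-force check of the definition, feasible only for very small n (purely illustrative; the counting algorithms of the paper are far more refined):

```python
from itertools import combinations

def count_nsets(n):
    """Count the size-n subsets of {1,...,2n} containing no
    divisor-multiple pair (the n-sets)."""
    def ok(s):
        # combinations yields sorted pairs, so a < b
        return all(b % a != 0 for a, b in combinations(s, 2))
    return sum(ok(s) for s in combinations(range(1, 2 * n + 1), n))

print([count_nsets(n) for n in range(1, 7)])
```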

Numerical Problems in XBRL Reports and the Use of Blockchain as Trust Enabler

Financial statements are formal records of the financial activities that companies use to provide an accurate picture of their financial history. Their main purpose is to offer all the data necessary for an accurate assessment of the economic situation of a company and of its ability to attract stakeholders. Our goal is to investigate how Benford's law can be used to detect fraud in a financial report, with trustworthiness supported by blockchain.

Gianfranco d’Atri, Van Thanh Le, Dino Garrì, Stella d’Atri
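
Benford's law predicts that the leading digit $$d$$ of naturally occurring amounts appears with frequency $$\log _{10}(1+1/d)$$; a basic fraud screen compares the observed first-digit frequencies of a report against this distribution. A minimal sketch (illustrative; the paper's methodology and the blockchain layer are not reproduced):

```python
import math
from collections import Counter

def benford_deviation(amounts):
    """Maximum absolute deviation between the observed first-digit
    frequencies and Benford's law."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    n = len(digits)
    freq = Counter(digits)
    return max(abs(freq.get(d, 0) / n - math.log10(1 + 1 / d))
               for d in range(1, 10))

sample = [1234.5, 1890.0, 245.0, 312.7, 1020.0, 987.0, 1543.2, 110.0]
print(f"max deviation from Benford: {benford_deviation(sample):.3f}")
```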

Modelling on Human Intelligence a Machine Learning System

Recently, a huge number of systems devoted to emotion recognition have been built, especially due to their application in many work domains, with the aim of understanding human behaviour and embodying this knowledge in human-computer or human-robot interaction. The recognition of human expressions is a very complex problem for artificial systems, owing to the extreme elusiveness of the phenomenon which, starting from six basic emotions, creates a series of intermediate variations that are difficult for an artificial system to recognize. To overcome these difficulties and expand artificial knowledge, a Machine Learning (ML) system has been designed with the specific aim of developing a recognition system modelled on human cognitive functions. Images from the Cohn-Kanade database were used as the data set. After training, the ML system was tested on a representative sample of unstructured data. The aim is to make computational algorithms more and more efficient in recognizing emotional expressions in the faces of human subjects.

Michela De Pietro, Francesca Bertacchini, Pietro Pantano, Eleonora Bilotta

Algorithms for Jewelry Industry 4.0

The industrial and technological revolution and the use of innovative software have made it possible to build a virtual world from which we can control the physical one. In particular, this development provided relevant benefits in the jewelry manufacturing industry through parametric modeling systems. This paper proposes a parametric design method to improve smart manufacturing in the 4.0 jewelry industry. By using constrained collections of schemata, the so-called Directed Acyclic Graphs (DAGs), and additive manufacturing technologies, we created a process by which customers are able to modify 3D virtual models and to visualize them according to their preferences. In fact, by using the software packages Mathematica and Grasshopper, we exploited both the huge quantity of mathematical patterns (such as curves and knots) and the parametric space of these structures. A generic DAG, grouped into a unit called a User Object, is a design tool shifting the focus from the final shape to the digital process. For this reason, it can return a huge number of unique combinations of the starting configurations, according to the customers' preferences. The configurations chosen by the designer or by the customers are 3D printed in wax-based resins and are then ready to be merged, according to artisan jewelry handcraft. Two case studies are proposed to give empirical evidence of the designed process transforming abstract mathematical equations into real physical forms.

Francesco Demarco, Francesca Bertacchini, Carmelo Scuro, Eleonora Bilotta, Pietro Pantano

Clustering Analysis to Profile Customers’ Behaviour in POWER CLOUD Energy Community

This paper presents a cluster analysis study on an energy consumption dataset to profile the “groups of customers” to whom POWER CLOUD services should be addressed. The POWER CLOUD project (PON I&C 2014–2020) aims to create an energy community where each consumer can also become an energy producer (PROSUMER) and thus exchange the surplus energy produced from renewable sources with other users, or collectively purchase or sell wholesale energy. In this framework, an online questionnaire was developed in order to collect data on consumer behaviour and preferences. A clustering analysis was carried out on the completed questionnaires using the Wolfram Mathematica software, in particular the FindClusters function, to automatically group related segments of data. In our work, clustering analysis allowed a better understanding of energy consumption propensity according to the identified demographic variables. The outcomes highlight that the willingness to adopt technologies for the POWER CLOUD energy community increases with the size of the family unit, and that a greater propensity is present in the 18–24 and 25–34 age groups.

Lorella Gabriele, Francesca Bertacchini, Simona Giglio, Daniele Menniti, Pietro Pantano, Anna Pinnarelli, Nicola Sorrentino, Eleonora Bilotta

A Grossone-Based Numerical Model for Computations with Infinity: A Case Study in an Italian High School

The knowledge and understanding of abstract concepts systematically occur in the study of mathematics. The epistemological approach to these concepts gradually becomes more important as the level of abstraction, and with it the risk of developing a “primitive concept” different from the knowledge of the topic itself, increases. A typical case relates to the concepts of infinity and the infinitesimal. The basic idea is to overturn the normal “concept–model” approach: no longer a concept that has to be studied first and modeled at a later moment, but rather a model that can be manipulated (from the calculation point of view) and then associated with a concept compatible with the calculus properties of the selected model. In this paper the authors want to show the usefulness of this new approach in the study of infinite quantities and of the infinitesimal calculus. To do so, they present the results of an experiment, a test proposed to high school students. The aim of the test is to demonstrate that this new solution can be useful for reinforcing ideas and knowledge about infinitesimal calculus. The authors first proposed the test to their students without giving any theoretical information, only using an arithmetic/algebraic model. Later, after some lectures, the students repeated the test, with better results. The reason is that, after the lessons, the students could join new basic ideas or primitive concepts to their calculus abilities. In so doing they follow not the traditional “concept–model” approach but a new “model–concept” one.

Francesco Ingarozza, Maria Teresa Adamo, Maria Martino, Aldo Piscitelli

A Computational Approach with MATLAB Software for Nonlinear Equation Roots Finding in High School Maths

The paper focuses on solving the nonlinear equation $$ f\left( x \right) = 0, $$ one of the classic topics of Numerical Analysis present in the syllabus of the experimental sections of Italian high schools in secondary education. The main objective of this paper is to propose an example of constructivist teaching practice emphasizing the computational approach with the use of the MATLAB software. MATLAB is a high-performance language for technical computing, but it is also suitable for high school maths teaching because of its powerful numeric engine, combined with interactive visualization tools. All this helps to keep the teaching and learning of this part of mathematics alive and attractive.

Annarosa Serpe
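
A typical classroom example of this computational approach is the bisection method for $$f(x)=0$$; a compact sketch is given below (written in Python rather than MATLAB purely for illustration, the logic being identical):

```python
def bisection(f, a, b, tol=1e-10):
    """Bisection method for f(x) = 0 on [a, b], assuming f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("root not bracketed")
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m               # root lies in the left half
        else:
            a = m               # root lies in the right half
    return (a + b) / 2

# x^3 - x - 2 has a single real root near 1.5214
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))
```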

Task Mathematical Modelling Design in a Dynamic Geometry Environment: Archimedean Spiral’s Algorithm

Over the last twenty years, several research studies have recognized that integrating, not simply adding, technology (Computer Algebra Systems - CAS, Dynamic Geometry Software - DGS, spreadsheets, programming environments, etc.) in the teaching of mathematics helps students develop essential understandings about the nature, use, and limits of the tool, and promotes deeper understanding of the mathematical concepts involved. Moreover, the use of technology in mathematics curricula can be important in providing the essential support to make mathematical modelling a more accessible mathematical activity for students. This paper presents an example of how technology can play a pivotal role in supporting the exploration, representation and resolution of mathematical modelling tasks in the classroom. Specifically, a mathematical modelling task design on the tracing of the Archimedean spiral with the use of a Dynamic Geometry Environment is shown. The aim is to emphasize the meaning and the semantic value of this rich field of study, which combines tangible objects and practical mechanisms with abstract mathematics.

Annarosa Serpe, Maria Giovanna Frassia
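
For reference, the curve at the heart of the task is the Archimedean spiral, whose polar and parametric forms are

$$ r = a\theta , \qquad x(\theta ) = a\theta \cos \theta , \quad y(\theta ) = a\theta \sin \theta , \qquad \theta \ge 0, $$

so that consecutive turns of the spiral are equally spaced at distance $$2\pi a$$, the property a tracing algorithm in a DGS can exploit.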

Optimization and Management of Water Supply

Frontmatter

Numerical Experimentations for a New Set of Local Indices of a Water Network

Very recently, a new set of local performance indices has been proposed for urban water supply systems, together with a useful mathematical model or, better, framework that organizes and provides the tools to treat this complex of local parameters varying from node to node. In this paper, such indices are considered and examined in relation to hydraulic software using Demand Driven Analysis (DDA) or Pressure Driven Analysis (PDA). We investigate the hypotheses needed to obtain effective numerical simulations, employing in particular EPANET and WaterNetGen, and the concrete applicability to a real water supply system known in the literature as the KL network.

Marco Amos Bonora, Fabio Caldarola, Joao Muranho, Joaquim Sousa, Mario Maiolo

Performance Management of Demand and Pressure Driven Analysis in a Monitored Water Distribution Network

Smart management of water distribution networks requires the infrastructure to be operated with high efficiency. For many years the hydraulic modelling of water distribution networks was conditioned by the scarce availability of quality data, but technological advances have contributed to overcoming this drawback. The present work describes the research activity carried out on water distribution network modelling, focusing on model construction and calibration. For this purpose, WaterNetGen, an extension of the EPANET hydraulic simulation software, has been used. The EPANET simulation model assumes that the required water demand is always fully satisfied regardless of the pressure (Demand Driven Analysis - DDA), while WaterNetGen has a new solver assuming that the required water demand is fully satisfied only if the pressure conditions are adequate (Pressure Driven Analysis - PDA). A comparison between the software outputs is the starting point for a new method of allocating and distributing water demand and water losses along the network, leading to model results closer to the measurements obtained in the real network. The case study is the water distribution network of the municipality of Nicotera, in Southern Italy.

Marco Amos Bonora, Manuela Carini, Gilda Capano, Rocco Cotrona, Daniela Pantusa, Joaquim Sousa, Mario Maiolo

Algebraic Tools and New Local Indices for Water Networks: Some Numerical Examples

Very recently, a new set of local indices for urban water networks has been proposed by the authors, within a mathematical framework which is unprecedented for this field, as far as we know. Such indices can be viewed as the “elementary bricks” that can be used to construct as many global (and local) indices as one needs or wants, where the glue, or mortar, is given by the mathematical tools of the aforementioned framework coming mostly from linear algebra and vector analysis. In this paper, after a brief description of the setting as explained above, we recover, through new formulations, some well-known global indicators like the resilience index $$I_r$$ introduced by Todini. Then we also give some explicit numerical computations and examples, sometimes with the help of the hydraulic software EPANET 2.0.12.

Fabio Caldarola, Mario Maiolo
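
For the reader's convenience, Todini's resilience index mentioned above can be written as follows (our rendering of the classical definition, to be checked against the paper's reformulation):

$$ I_r = \frac{\sum _{i=1}^{n} q_i (h_i - h_i^{*})}{\sum _{k=1}^{n_r} Q_k H_k + \sum _{j=1}^{n_p} P_j/\gamma - \sum _{i=1}^{n} q_i h_i^{*}}, $$

where $$q_i$$ and $$h_i$$ are the demand and head at node $$i$$, $$h_i^{*}$$ is the minimum head required at node $$i$$, $$Q_k$$ and $$H_k$$ are the flow and head supplied by reservoir $$k$$, $$P_j$$ is the power of pump $$j$$, and $$\gamma$$ is the specific weight of water.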

Identification of Contamination Potential Source (ICPS): A Topological Approach for the Optimal Recognition of Sensitive Nodes in a Water Distribution Network

The correct management of urban water networks has to be supported by monitoring and by estimates of water quality. The infrastructure maintenance status and the availability of a prevention plan influence the potential risk of contamination. In this context, Contamination Source Identification (CSI) models aim to identify the contamination source starting from the concentration values at the nodes. This paper proposes a methodology based on the Dynamics of Network Pollution (DNP). The DNP approach, linked to the pollution matrix and the incidence matrix, allows a topological analysis of the network structure in order to identify the nodes and paths most sensitive to contamination, namely those that favor a more critical diffusion of the introduced contaminant. The procedure is proposed with the aim of optimally identifying the potential contamination points. By simulating the contamination of a synthetic network with a bottom-up approach, an optimized procedure is defined to trace back to the chosen node as the most probable contamination source.

Gilda Capano, Marco Amos Bonora, Manuela Carini, Mario Maiolo

Seeking for a Trade-Off Between Accuracy and Timeliness in Meteo-Hydrological Modeling Chains

The level of detail achieved by operational General Circulation Models (e.g., the HRES 9 km resolution forecast recently launched by the ECMWF) raises questions about the most appropriate use of Limited Area Models, which provide further dynamical downscaling of the weather variables. The two main objectives of hydro-meteorological forecasts, i.e. accuracy and timeliness, are to some extent conflicting. The accuracy and precision of a forecast can be evaluated by proper statistical indices based on observations, while timeliness mainly depends on the spatial resolution of the grid and the computational resources used. In this research, several experiments are set up applying the Advanced Research Weather Research and Forecasting (WRF-ARW) Model to a weather event that occurred in Southern Italy in 2018. Forecast accuracy is evaluated both for the HRES ECMWF output and for that provided by WRF dynamical downscaling at different resolutions. Furthermore, the timeliness of the forecast is assessed by adding to the time needed for GCM output availability the time needed for Limited Area simulations at different resolutions and with varying core numbers. The research provides useful insights for operational forecasting in the study area, highlighting the level of detail required and the current weaknesses hindering correct forecasting of the hydrological impact of extreme weather events.

Luca Furnari, Alfonso Senatore, Giuseppe Mendicino

Optimization Model for Water Distribution Network Planning in a Realistic Orographic Framework

Defining criteria for the correct distribution of water resources is a common engineering problem. Stringent regulations on environmental impacts underline the need for sustainable management and planning of the use of this resource, which is sensitive to many parameters. Optimization models are often used to deal with these problems, identifying the optimal configuration of a Water Distribution Network (WDN) in terms of minimizing an appropriate function proportional to the construction cost of the WDN. Generally, this cost function increases as the distance of the source-user connection increases; therefore, in minimum-cost optimization models it is important to identify minimum source-user paths compatible with the orography. In this direction, the methodology presented in this work proposes a useful approach to finding minimum-length paths on surfaces which, moreover, respect suitable hydraulic constraints and are therefore representative of reliable gravity water pipelines. An application of the approach to a real case in Calabria is presented.

Mario Maiolo, Joaquim Sousa, Manuela Carini, Francesco Chiaravalloti, Marco Amos Bonora, Gilda Capano, Daniela Pantusa
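A minimal sketch of one standard way to compute such paths (illustrative only; the paper's actual surface discretization and hydraulic constraints are more elaborate): Dijkstra's algorithm on a terrain grid, with edge weights equal to 3D segment lengths and with edges that climb beyond a tolerance excluded, as a stand-in gravity-flow constraint. The grid, spacing, and threshold below are invented:

import heapq
import math

# Hypothetical 4x4 elevation grid (metres); cell spacing 100 m.
Z = [[50, 48, 47, 45],
     [49, 46, 44, 43],
     [47, 45, 42, 40],
     [46, 43, 41, 38]]
DX = 100.0
MAX_CLIMB = 0.5  # tolerated local rise (m): a placeholder hydraulic constraint

def neighbors(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < len(Z) and 0 <= nj < len(Z[0]):
            yield ni, nj

def shortest_gravity_path(src, dst):
    """Dijkstra over the grid; edges needing pumping are forbidden."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue
        for v in neighbors(*u):
            rise = Z[v[0]][v[1]] - Z[u[0]][u[1]]
            if rise > MAX_CLIMB:      # incompatible with gravity flow
                continue
            w = math.hypot(DX, rise)  # 3D length of the segment
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

print(shortest_gravity_path((0, 0), (3, 3)))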

Scenario Optimization of Complex Water Supply Systems for Energy Saving and Drought-Risk Management

The management of complex water supply systems requires close attention to economic aspects concerning the high costs related to energy requirements in water transfers. Specifically, the optimization of activation schedules of water pumping plants is an important issue, especially when managing emergency and costly water transfers under drought risk. In such an optimization context under uncertainty, it is crucial to assure energy savings and water-shortage risk mitigation simultaneously. The model formulation needs to highlight this duality of requirements to guarantee adequate water demand fulfillment while respecting an energy saving policy. The proposed modeling approach has been developed using a two-stage scenario optimization in order to consider a cost-risk balance and to achieve simultaneously the minimization of energy and operating costs while assuring adequate water demand fulfillment for users. The optimization algorithm has been implemented in GAMS interfaced with the CPLEX solver. The proposed optimization approach has been tested on a water supply system located in a drought-prone area of North-West Sardinia (Italy). By applying this optimization procedure, a robust pumping activation strategy was obtained for this real-case water supply system.

Jacopo Napolitano, Giovanni M. Sechi
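A generic two-stage scenario formulation of the kind the abstract describes (the notation is ours, for illustration, not the authors' model): first-stage decisions x (e.g., pump activation schedules) are fixed before the hydrological scenario is known, while recourse decisions y_s adapt to each scenario s in S, weighted by its probability p_s:

\begin{aligned}
\min_{x,\,y_s}\quad & c^{\top}x \;+\; \sum_{s\in S} p_s\, q_s^{\top} y_s \\
\text{s.t.}\quad & Ax \ge b, \\
& T_s x + W y_s \ge h_s \quad \forall s \in S, \\
& x \ge 0,\; y_s \ge 0 \quad \forall s \in S,
\end{aligned}

where the first-stage term carries the deterministic energy and operating costs, and the scenario-dependent constraints enforce demand fulfillment under each hydrological realization, so the optimum balances cost against shortage risk.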

Optimizing Rainwater Harvesting Systems for Non-potable Water Uses and Surface Runoff Mitigation

Rainwater harvesting systems represent sustainable solutions that meet the challenges of water saving and surface runoff mitigation. The collected rainwater can be re-used for several purposes, such as irrigation of green roofs and gardens, flushing toilets, etc. Optimizing the water usage for each such purpose is a significant goal. To achieve this goal, we have considered TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and the Rough Set method as Multi-Objective Optimization approaches, analyzing different case studies. TOPSIS was used to compare and evaluate the performance of the alternatives, while the Rough Set method was applied as a machine learning method to optimize rainwater-harvesting systems. Results from the Rough Set method provided a baseline for decision-making, and the minimal decision algorithm was obtained as six rules. In addition, the TOPSIS method ranked all case studies; because several correlated attributes were used, the findings are more accurate than those of simpler ranking methods. Therefore, the numerical optimization of rainwater harvesting systems will improve the knowledge gained from previous studies in the field and provide an additional tool to identify the optimal rainwater reuse in order to save water and reduce the surface runoff discharged into the sewer system.

Stefania Anna Palermo, Vito Cataldo Talarico, Behrouz Pirouz
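A sketch of the textbook TOPSIS procedure the abstract applies (the decision matrix, weights, and criteria below are invented placeholders, not the paper's case-study data): normalize the decision matrix, weight it, locate the ideal and anti-ideal points, and rank alternatives by relative closeness:

import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives (rows of X) under criterion weights w;
    benefit[j] is True if higher is better for criterion j."""
    R = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = R * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)        # closeness in [0, 1]

# Three hypothetical tank configurations scored on water saved (benefit),
# runoff reduction (benefit), and cost (non-benefit).
X = np.array([[120.0, 0.35, 8000.0],
              [150.0, 0.30, 9500.0],
              [100.0, 0.40, 7000.0]])
w = np.array([0.4, 0.4, 0.2])
closeness = topsis(X, w, np.array([True, True, False]))
print(np.argsort(-closeness))  # indices of alternatives, best first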

New Mathematical Optimization Approaches for LID Systems

Urbanization affects ecosystem health and downstream communities by changing the natural flow regime. In this context, Low Impact Development (LID) systems are important tools for sustainable development. There are many aspects to the design and operation of LID systems, and the choice of LID type and its location in the basin can affect the results. In this regard, mathematical optimization approaches can be an ideal way to optimize LID use. Here we consider the application of TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and Rough Set theory (a multiple-attribute decision-making method). An advantage of using the Rough Set method for LID systems is that the selected decisions are explicit and the method is not limited by restrictive assumptions. This new mathematical optimization approach for LID systems improves on previous studies of the subject. Moreover, it provides an additional tool for the analysis of the essential attributes needed to select and optimize the best LID system for a project.

Behrouz Pirouz, Stefania Anna Palermo, Michele Turco, Patrizia Piro
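A minimal sketch of the core Rough Set construction such analyses rest on: the lower and upper approximations of a decision class with respect to a subset of condition attributes. The toy decision table below is invented for illustration, not taken from the paper:

from collections import defaultdict

# Rows: (condition attributes ..., decision). Hypothetical LID records:
# (soil, slope) -> adopt a given LID ("yes"/"no").
table = [
    ("sandy", "low",  "yes"),
    ("sandy", "low",  "yes"),
    ("clay",  "low",  "no"),
    ("clay",  "high", "no"),
    ("sandy", "high", "yes"),
    ("sandy", "high", "no"),   # conflicts with the previous row
]

# Indiscernibility classes under the chosen condition attributes.
blocks = defaultdict(set)
for idx, row in enumerate(table):
    blocks[row[:2]].add(idx)

target = {i for i, row in enumerate(table) if row[2] == "yes"}

# Lower approximation: blocks entirely inside the target class
# (certain rules); upper: blocks intersecting it (possible rules).
lower = {i for b in blocks.values() if b <= target for i in b}
upper = {i for b in blocks.values() if b & target for i in b}

print("lower:", sorted(lower), "upper:", sorted(upper))

Rules extracted from the lower approximation are exactly the "explicit decisions" the abstract refers to: they hold without any distributional or independence assumptions about the attributes.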

Evaluation of an Integrated Seasonal Forecast System for Agricultural Water Management in Mediterranean Regions

The Euro-Mediterranean Center on Climate Change (CMCC) seasonal forecasting system, based on the global coupled model CMCC-CM, performs seasonal forecasts every month, producing several ensemble integrations for the following 6 months. In this study, a performance evaluation of the skill of this system is carried out for two neighbouring Mediterranean medium-small size catchments located in Southern Italy, the Crati river and the Coscile river, whose hydrological cycles are particularly important for agricultural purposes. Initially, the performance of the system is evaluated by comparing observed and simulated precipitation and temperature anomalies in the irrigation periods of the years 2011–2017. Forecasts issued on April 1st (i.e., at the beginning of the irrigation period) are evaluated, considering two lead times (first and second trimester). Afterward, the seasonal forecasts are integrated into a complete meteo-hydrological system. Precipitation and temperature provided by the global model are ingested into the spatially distributed and physically based In-STRHyM (Intermediate Space-Time Resolution Hydrological Model), which analyzes the hydrological impact of the seasonal forecasts. Though the predicted precipitation and temperature anomalies are not highly correlated with observations, the integrated seasonal forecast for the hydrological variables provides significant correlations between observed and predicted anomalies, especially for mean discharge (>0.65). Overall, the system was shown to provide useful insights for agricultural water management in the study area.

Alfonso Senatore, Domenico Fuoco, Antonella Sanna, Andrea Borrelli, Giuseppe Mendicino, Silvio Gualdi
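A sketch of the verification step described above, i.e. correlating forecast and observed anomalies with respect to their climatologies; the series below are placeholder numbers, not the study's data:

import numpy as np

def anomaly_correlation(forecast, observed, climatology_f, climatology_o):
    """Pearson correlation of forecast and observed anomalies,
    each taken relative to its own climatology."""
    fa = np.asarray(forecast) - climatology_f
    oa = np.asarray(observed) - climatology_o
    return float(np.corrcoef(fa, oa)[0, 1])

# Hypothetical mean-discharge values (m^3/s) for seven irrigation seasons.
fcst = [12.0, 9.5, 14.2, 8.1, 11.3, 10.0, 13.5]
obs  = [11.2, 8.9, 15.0, 7.5, 10.8, 10.9, 14.1]
print(anomaly_correlation(fcst, obs, np.mean(fcst), np.mean(obs)))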

Optimization of Submarine Outfalls with a Multiport Diffuser Design

Immission of civil sewage into the sea is carried out to complete the onshore treatment process or to carry the already purified wastewater away from the bathing area, ensuring a good perceived seawater quality. In any case, compliance with the pollutant concentration limits is necessary to ensure safe bathing. The design of submarine pipes is usually completed by including a diffuser with a series of ports for the repartition of the wastewater discharge. The real process of pollutant diffusion into the sea, simulated with complex diffusion-dispersion models in a motion field dependent on environmental conditions and drift speeds, affects the submarine pipe design. A design optimization procedure has been developed for the marine outfall pipe-diffuser system using a simplified zone model, subjected to a sensitivity analysis on its characteristic parameters. The method is illustrated with an example project for the submarine outfall serving the sewage treatment plant of Belvedere Marittimo, on the southern Tyrrhenian Sea in Italy.

Salvatore Sinopoli, Marco Amos Bonora, Gilda Capano, Manuela Carini, Daniela Pantusa, Mario Maiolo
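An illustrative helper for the multiport-diffuser sizing problem (not the paper's zone model): given a total design discharge split evenly over N ports, compute the port exit velocity and the densimetric Froude number, two quantities commonly checked in outfall design. All input values below are invented placeholders:

import math

G = 9.81  # gravitational acceleration (m/s^2)

def port_hydraulics(Q_total, n_ports, d_port, rho_effluent, rho_sea):
    q = Q_total / n_ports                       # discharge per port (m^3/s)
    area = math.pi * d_port**2 / 4.0            # port cross-section (m^2)
    u = q / area                                # exit velocity (m/s)
    g_prime = G * (rho_sea - rho_effluent) / rho_sea  # reduced gravity
    froude = u / math.sqrt(g_prime * d_port)    # densimetric Froude number
    return u, froude

# Hypothetical design: 0.2 m^3/s through 10 ports of 0.15 m diameter.
u, Fr = port_hydraulics(0.2, 10, 0.15, 998.0, 1025.0)
print(f"port velocity = {u:.2f} m/s, densimetric Froude = {Fr:.1f}")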

Backmatter
