
2019 | Book

Computational Science – ICCS 2019

19th International Conference, Faro, Portugal, June 12–14, 2019, Proceedings, Part I

Editors: Dr. João M. F. Rodrigues, Dr. Pedro J. S. Cardoso, Dr. Jânio Monteiro, Prof. Roberto Lam, Dr. Valeria V. Krzhizhanovskaya, Michael H. Lees, Jack J. Dongarra, Peter M.A. Sloot

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

The five-volume set LNCS 11536, 11537, 11538, 11539, and 11540 constitutes the proceedings of the 19th International Conference on Computational Science, ICCS 2019, held in Faro, Portugal, in June 2019.

The total of 65 full papers and 168 workshop papers presented in this book set were carefully reviewed and selected from 573 submissions (228 submissions to the main track and 345 submissions to the workshops). The papers were organized in topical sections named:

Part I: ICCS Main Track

Part II: ICCS Main Track; Track of Advances in High-Performance Computational Earth Sciences: Applications and Frameworks; Track of Agent-Based Simulations, Adaptive Algorithms and Solvers; Track of Applications of Matrix Methods in Artificial Intelligence and Machine Learning; Track of Architecture, Languages, Compilation and Hardware Support for Emerging and Heterogeneous Systems

Part III: Track of Biomedical and Bioinformatics Challenges for Computer Science; Track of Classifier Learning from Difficult Data; Track of Computational Finance and Business Intelligence; Track of Computational Optimization, Modelling and Simulation; Track of Computational Science in IoT and Smart Systems

Part IV: Track of Data-Driven Computational Sciences; Track of Machine Learning and Data Assimilation for Dynamical Systems; Track of Marine Computing in the Interconnected World for the Benefit of the Society; Track of Multiscale Modelling and Simulation; Track of Simulations of Flow and Transport: Modeling, Algorithms and Computation

Part V: Track of Smart Systems: Computer Vision, Sensor Networks and Machine Learning; Track of Solving Problems with Uncertainties; Track of Teaching Computational Science; Poster Track ICCS 2019

Chapter “Comparing Domain-decomposition Methods for the Parallelization of Distributed Land Surface Models” is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.

Table of Contents

Frontmatter

ICCS Main Track

Frontmatter
Efficient Computation of Sparse Higher Derivative Tensors

The computation of higher derivative tensors is expensive even for adjoint algorithmic differentiation methods. In this work we introduce methods that exploit the symmetry and the sparsity structure of higher derivatives to considerably improve the efficiency of their computation. The proposed methods apply coloring algorithms to two-dimensional compressed slices of the derivative tensors. The presented work is a step towards the feasibility of higher-order methods, which might benefit numerical simulations in numerous applications of computational science and engineering.

Jens Deussen, Uwe Naumann
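
A minimal sketch of the kind of coloring step the abstract above mentions: a greedy coloring of structurally orthogonal columns of a sparsity pattern, so that compatible columns can be compressed into a single evaluation. The pattern and the greedy strategy are illustrative assumptions, not the authors' specific algorithm for tensor slices.

```python
import numpy as np

def greedy_column_coloring(pattern):
    """Greedy coloring of structurally orthogonal columns.

    pattern: boolean (m, n) array, True where an entry may be nonzero.
    Columns that never share a nonzero row receive the same color and can
    be evaluated together in one compressed derivative evaluation.
    """
    m, n = pattern.shape
    colors = -np.ones(n, dtype=int)
    for j in range(n):
        forbidden = {colors[k] for k in range(j)
                     if np.any(pattern[:, j] & pattern[:, k])}
        c = 0
        while c in forbidden:
            c += 1
        colors[j] = c
    return colors

# Example: a banded sparsity pattern needs only two colors.
pat = np.eye(8, dtype=bool) | np.eye(8, k=1, dtype=bool)
print(greedy_column_coloring(pat))   # e.g. [0 1 0 1 0 1 0 1]
```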
Rational Approximation of Scientific Data

Scientific datasets are becoming increasingly challenging to transfer, analyze, and store. There is a need for methods to transform these datasets into compact representations that facilitate their downstream management and analysis, and ideally model the underlying scientific phenomena with defined numerical fidelity. To address this need, we propose nonuniform rational B-splines (NURBS) for modeling discrete scientific datasets; not only to compress input data points, but also to enable further analysis directly on the continuous fitted model, without the need for decompression. First, we evaluate three different methods for NURBS fitting, and compare their performance relative to unweighted least squares approximation (B-splines). We then extend current state-of-the-art B-spline adaptive approximation to NURBS; that is, adaptively determining optimal rational basis functions and weighted control point locations that approximate given input data points to prespecified accuracy. Additionally, we present a novel local adaptive algorithm to iteratively approximate large data input domains. This method takes advantage of NURBS local support to refine regions of the approximated model, acting locally on both input and model subdomains, without affecting other regions of the global approximation. We evaluate our methods in terms of approximated model compactness, achieved accuracy, and computational cost on both synthetic smooth functions and real-world scientific data.

Youssef S. G. Nashed, Tom Peterka, Vijay Mahadevan, Iulian Grindeanu
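
For context on the abstract above, a minimal example of the unweighted least-squares B-spline baseline (the reference the NURBS methods are compared against), using SciPy's `make_lsq_spline`; the test signal and knot count are arbitrary placeholders.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Sample a smooth 1D signal to stand in for a scientific dataset.
x = np.linspace(0.0, 1.0, 500)
y = np.sin(8 * np.pi * x) * np.exp(-2 * x)

# Unweighted least-squares cubic B-spline fit with 20 interior knots.
k = 3
interior = np.linspace(x[0], x[-1], 22)[1:-1]
t = np.r_[(x[0],) * (k + 1), interior, (x[-1],) * (k + 1)]
spline = make_lsq_spline(x, y, t, k=k)

# The fitted model is continuous: it can be evaluated anywhere without
# decompressing the original points.
max_err = np.max(np.abs(spline(x) - y))
print(f"control points: {len(spline.c)}, max abs error: {max_err:.3e}")
```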
Design of a High-Performance Tensor-Vector Multiplication with BLAS

Tensor contraction is an important mathematical operation for many scientific computing applications that use tensors to store massive multidimensional data. Based on the Loops-over-GEMMs (LOG) approach, this paper discusses the design of high-performance algorithms for the mode-q tensor-vector multiplication using efficient implementations of the matrix-vector multiplication (GEMV). Given dense tensors with any non-hierarchical storage format, tensor order and dimensions, the proposed algorithms either directly call GEMV with tensors or recursively apply GEMV on higher-order tensor slices multiple times. We analyze strategies for loop fusion and parallel execution of slice-vector multiplications with higher-order tensor slices. Using OpenBLAS, our parallel implementation attains 34.8 Gflops/s in single precision on an Intel Core i9-7900X processor. Our parallel version of the tensor-vector multiplication is on average 6.1x and up to 12.6x faster than state-of-the-art approaches.

Cem Bassoy
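
A small NumPy sketch of the mode-q tensor-vector product discussed above. It flattens the remaining modes so that one large matrix-vector product (a single GEMV in BLAS terms) performs the contraction; the paper's LOG algorithms instead choose between direct GEMV calls and recursion over higher-order slices.

```python
import numpy as np

def mode_q_tvm(tensor, vec, q):
    """Mode-q tensor-vector multiplication, contracted along axis q."""
    t = np.moveaxis(tensor, q, -1)        # shape (..., n_q)
    out_shape = t.shape[:-1]
    mat = t.reshape(-1, t.shape[-1])      # 2D view -> one big GEMV
    return (mat @ vec).reshape(out_shape)

# Example: order-4 tensor contracted along mode 2.
A = np.random.rand(4, 5, 6, 7)
x = np.random.rand(6)
y = mode_q_tvm(A, x, q=2)
assert y.shape == (4, 5, 7)
assert np.allclose(y, np.einsum('ijkl,k->ijl', A, x))
```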
High Performance Partial Coherent X-Ray Ptychography

During the last century, X-ray science has enabled breakthrough discoveries in fields as diverse as medicine, material science and electronics, and recently, ptychography has risen as a reference imaging technique in the field. It provides resolutions of a billionth of a meter, a macroscopic field of view, and the capability to retrieve chemical or magnetic contrast, among other features. The goal of ptychography is to reconstruct a 2D visualization of a sample from a collection of diffraction patterns generated from the interaction of a light source with the sample. Reconstruction involves solving a nonlinear optimization problem employing a large amount of measured data (typically two orders of magnitude bigger than the reconstructed sample), so high-performance solutions are normally required. A common problem in ptychography is that the majority of the flux from the light sources is often discarded to define the coherence of an illumination. Gradient Decomposition of the Probe (GDP) is a novel method devised to address this issue. It provides the capability to significantly improve the quality of the image when partial coherence effects take place, at the expense of a three-fold increase in memory requirements and computation. This downside, along with the fine-grained degree of parallelism of the operations involved in GDP, makes it an ideal target for GPU acceleration. In this paper we propose the first high-performance implementation of GDP for partial coherence X-ray ptychography. The proposed solution exploits an efficient data layout and multi-GPU parallelism to achieve massive acceleration and efficient scaling. The experimental results demonstrate the enhanced reconstruction quality and performance of our solution, which is able to process up to 4 million input samples per second on a single high-end workstation, and we compare its performance with a reference HPC ptychography pipeline.

Pablo Enfedaque, Huibin Chang, Bjoern Enders, David Shapiro, Stefano Marchesini
Monte Carlo Analysis of Local Cross–Correlation ST–TBD Algorithm

Track-Before-Detect (TBD) algorithms allow the estimation of the state of an object even if the signal is hidden in the background noise. The application of local cross-correlation in the modified Information Update formula improves this estimation for extended objects (tens of cells in the measurement space) compared to the direct application of the Spatio-Temporal TBD (ST-TBD) algorithm. A Monte Carlo test was applied to evaluate the algorithms using a variable standard deviation of the additive Gaussian noise. The proposed solution does not require prior knowledge of the size or measured values of the object. The Mean Absolute Error of the proposed algorithm is much lower and remains close to zero up to a noise standard deviation of about 0.8, which is not achieved by ST-TBD.

Przemyslaw Mazurek, Robert Krupinski
Optimization of Demodulation for Air–Gap Data Transmission Based on Backlight Modulation of Screen

Air-gapping is an efficient technique for improving computer security. The proposed technique uses backlight modulation of a monitor screen for data transmission from an infected computer. An optimization algorithm for the segmentation of the video stream is proposed to improve the robustness of the data transmission. This algorithm is tested using a Monte Carlo approach with full-frame analysis for different values of the standard deviation of the additive Gaussian noise. The achieved results show that the proposed selective image processing yields improvements of about ten times for low values of the standard deviation.

Dawid Bak, Przemyslaw Mazurek, Dorota Oszutowska–Mazurek
Reinsertion Algorithm Based on Destroy and Repair Operators for Dynamic Dial a Ride Problems

The Dial-a-Ride Problem (DARP) consists in serving a set of customers who specify their pickup and drop-off locations, using a fleet of vehicles. The aim of DARP is to design vehicle routes that satisfy the customers' requests while minimizing the total traveled distance. In this paper, we consider a real case of a dynamic DARP service operated by Padam (www.padam.io), in which customers ask for a transportation service either in advance or in real time and get an immediate answer about whether their requests are accepted or rejected. A fleet with a fixed number of vehicles is available during a working period to provide the transportation service. The goal is to maximize the number of accepted requests during the service. We propose a novel online Reinsertion Algorithm based on destroy/repair operators to reinsert requests rejected by the online algorithm used by Padam. The proposed algorithm was implemented in the optimization engine of Padam and extensively tested on hard real-world instances with up to 1,011 requests and 14 vehicles. The results show that our method succeeds in improving the number of accepted requests.

Sven Vallée, Ammar Oulamara, Wahiba Ramdane Cherif-Khettaf
Optimization Heuristics for Computing the Voronoi Skeleton

Skeletal representations of geometrical objects are widely used in computer graphics, computer vision, image processing, and pattern recognition. Therefore, efficient algorithms for computing planar skeletons are of high relevance. In this paper, we focus on the algorithm for computing the Voronoi skeleton of a planar object represented by a set of polygons. The complexity of the considered algorithm is O(N log N), where N is the total number of polygon vertices. In order to improve the performance of the skeletonization algorithm, we propose theoretically justified shape optimization heuristics based on polygon simplification algorithms. We evaluate the efficiency of these heuristics using polygons extracted from the MPEG 7 CE-Shape-1 dataset and measure the execution time of the skeletonization algorithm, the computational overhead introduced by the heuristics, and their influence on the accuracy of the resulting skeleton. As a result, we establish criteria for choosing the optimal heuristics for the Voronoi skeleton construction algorithm depending on the system's critical requirements.

Dmytro Kotsur, Vasyl Tereshchenko
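
As an illustration of the polygon-simplification heuristics mentioned above, here is a plain Ramer-Douglas-Peucker simplifier; it is a generic stand-in, and the paper's theoretically justified heuristics and parameter choices may differ.

```python
import numpy as np

def douglas_peucker(points, eps):
    """Ramer-Douglas-Peucker polyline simplification.

    points: (n, 2) array of vertices; eps: perpendicular-distance tolerance.
    """
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    dx, dy = end - start
    seg_len = np.hypot(dx, dy)
    if seg_len == 0.0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # Perpendicular distance of each vertex to the chord start-end.
        dists = np.abs(dx * (points[:, 1] - start[1])
                       - dy * (points[:, 0] - start[0])) / seg_len
    idx = int(np.argmax(dists))
    if dists[idx] > eps:
        left = douglas_peucker(points[:idx + 1], eps)
        right = douglas_peucker(points[idx:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# Simplify a noisy circle-like outline with a 0.05 tolerance.
theta = np.linspace(0, 2 * np.pi, 400)
poly = np.c_[np.cos(theta), np.sin(theta)] + np.random.normal(0, 0.01, (400, 2))
print(len(douglas_peucker(poly, 0.05)))
```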
Transfer Learning for Leisure Centre Energy Consumption Prediction

Demand for energy is ever growing. Accurate prediction of the energy demand of large buildings becomes essential for property managers to operate these facilities more efficiently and sustainably. Temporal modelling provides a reliable yet straightforward paradigm for short-term building energy prediction. However, newly constructed or recently renovated buildings, and buildings that have only recently had energy monitoring systems installed, do not have sufficient data to develop accurate energy demand prediction models. In contrast, established buildings often have vast amounts of collected data which may be lying idle. Models learned from these data-rich buildings can be useful if transferred to buildings with little or no data. An ensemble tree-based machine learning algorithm and datasets from two leisure centres and an office building in Melbourne were used in this transfer learning investigation. The results show that transfer learning is a promising technique for accurate prediction in a new scenario, as it can achieve similar or even better performance compared to learning on a full dataset. The results also demonstrate the importance of time series adaptation as a method of improving transfer learning.

Paul Banda, Muhammed A. Bhuiyan, Kevin Zhang, Andy Song
Forecasting Network Throughput of Remote Data Access in Computing Grids

Computing grids are key enablers of computational science. Researchers from many fields (High Energy Physics, Bioinformatics, Climatology, etc.) employ grids for the execution of distributed computational jobs. These computing workloads are typically data-intensive. The current state-of-the-art approach for data access in grids is data placement: a job is scheduled to run at a specific data center, and its execution commences only once the complete input data has been transferred there. An alternative approach is remote data access: a job may stream the input data directly from arbitrary storage elements. Remote data access brings two innovative benefits: (1) the jobs can be executed asynchronously with respect to the data transfer; (2) when combined with data placement on the policy level, it can aid in the optimization of the network load, since these two data access methodologies partially exhibit nonoverlapping bottlenecks. However, in order to employ this technique systematically, the properties of its network throughput need to be studied carefully. This paper presents experimentally identified parameters of remote data access throughput, a statistically tested formalization of these parameters and a derived throughput forecasting model. The model is applicable to large computing workloads, robust with respect to arbitrary dynamic changes in the grid infrastructure and exhibits a long-term prediction horizon. Its purpose is to assist various stakeholders of the grid in decision-making related to data access patterns. This work is based on measurements taken on the Worldwide LHC Computing Grid at CERN.

Volodimir Begy, Martin Barisits, Mario Lassnig, Erich Schikuta
Accurately Simulating Energy Consumption of I/O-Intensive Scientific Workflows

While distributed computing infrastructures can provide infrastructure-level techniques for managing energy consumption, application-level energy consumption models have also been developed to support energy-efficient scheduling and resource provisioning algorithms. In this work, we analyze the accuracy of a widely used application-level model that has been developed and used in the context of scientific workflow executions. To this end, we profile two production scientific workflows on a distributed platform instrumented with power meters. We then conduct an analysis of power and energy consumption measurements. This analysis shows that power consumption is not linearly related to CPU utilization and that I/O operations significantly impact power, and thus energy, consumption. We then propose a power consumption model that accounts for I/O operations, including the impact of waiting for these operations to complete, and for concurrent task executions on multi-socket, multi-core compute nodes. We implement our proposed model as part of a simulator that allows us to draw direct comparisons between real-world and modeled power and energy consumption. We find that our model has high accuracy when compared to real-world executions. Furthermore, our model improves accuracy by about two orders of magnitude when compared to the traditional models used in the energy-efficient workflow scheduling literature.

Rafael Ferreira da Silva, Anne-Cécile Orgerie, Henri Casanova, Ryan Tanaka, Ewa Deelman, Frédéric Suter
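
A deliberately simplified, hypothetical version of a power model in the spirit of the abstract above: a non-linear CPU term plus a flat I/O term. All constants and the exponent are made-up placeholders, not the authors' fitted parameters.

```python
def node_power_watts(cpu_util, io_active, p_idle=90.0, p_cpu_max=120.0,
                     p_io=25.0, alpha=0.7):
    """Illustrative node power model (all constants are placeholders).

    The point mirrored from the abstract: power is not linear in CPU
    utilization, and I/O activity (including time spent waiting on it)
    contributes its own term.
    """
    dynamic = p_cpu_max * (cpu_util ** alpha)   # sub-linear CPU term
    io_term = p_io if io_active else 0.0        # flat penalty during I/O
    return p_idle + dynamic + io_term

# Energy of a task = sum of power over 1-second samples of (utilization, I/O).
samples = [(0.9, False), (0.2, True), (0.2, True), (0.8, False)]
energy_joules = sum(node_power_watts(u, io) for u, io in samples)
print(energy_joules)
```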
Exploratory Visual Analysis of Anomalous Runtime Behavior in Streaming High Performance Computing Applications

Online analysis of runtime behavior is essential for performance tuning in streaming scientific workflows. Integration of anomaly detection and visualization is necessary to support human-centered analysis, such as verification of candidate anomalies utilizing domain knowledge. In this work, we propose an efficient and scalable visual analytics system for online performance analysis of scientific workflows toward the exascale scenario. Our approach uses a call stack tree representation to encode the structural and temporal information of the function executions. Based on the call stack tree features (e.g., execution time of the root function or vector representation of the tree structure), we employ online anomaly detection approaches to identify candidate anomalous function executions. We also present a set of visualization tools for verification and exploration in a level-of-detail manner. General information, such as the distribution of execution times, is provided in an overview visualization. The detailed structure (e.g., function invocation relations) and the temporal information (e.g., message communication) of the execution call stack of interest are also visualized. The usability and efficiency of our methods are verified in a real-world HPC application.

Cong Xie, Wonyong Jeong, Gyorgy Matyasfalvi, Hubertus Van Dam, Klaus Mueller, Shinjae Yoo, Wei Xu
Analysis of the Construction of Similarity Matrices on Multi-core and Many-Core Platforms Using Different Similarity Metrics

Similarity matrices are 2D representations of the degree of similarity between points of a given dataset, employed in different fields such as data mining, genetics or machine learning. However, their calculation presents quadratic complexity and is thus especially expensive for large datasets. MPICorMat is able to accelerate the construction of these matrices through the use of a hybrid parallelization strategy based on MPI and OpenMP. The previous version of this tool achieved high performance and scalability, but it implemented only one similarity metric, Pearson's correlation. Therefore, it was suitable only for problems where the data are normally distributed and there is a linear relationship between variables. In this work, we present an extension to MPICorMat that incorporates eight additional similarity metrics so that users can choose the one that best fits their problem. The performance and energy consumption of each metric are measured on two platforms: a multi-core platform with two Intel Xeon Sandy Bridge processors and a many-core Intel Xeon Phi KNL. Results show that MPICorMat executes faster and consumes less energy on the many-core architecture. The new version of MPICorMat is publicly available for download from its website: https://sourceforge.net/projects/mpicormat/

Uxía Casal, Jorge González-Domínguez, María J. Martín
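
A compact NumPy sketch of the core computation behind the abstract above: the pairwise Pearson similarity matrix of the rows of a dataset, computed block by block (the row blocks are what a hybrid MPI+OpenMP tool such as MPICorMat distributes across processes and threads).

```python
import numpy as np

def pearson_similarity(data, block=1024):
    """Pairwise Pearson correlation between the rows of data (n x m)."""
    x = data - data.mean(axis=1, keepdims=True)      # center each row
    x /= np.linalg.norm(x, axis=1, keepdims=True)    # normalize each row
    n = len(x)
    sim = np.empty((n, n))
    for start in range(0, n, block):                 # one block per worker
        sim[start:start + block] = x[start:start + block] @ x.T
    return sim

sim = pearson_similarity(np.random.rand(2000, 50))
assert np.allclose(np.diag(sim), 1.0)
```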
High Performance Algorithms for Counting Collisions and Pairwise Interactions

The problem of counting collisions or interactions is common in areas such as computer graphics and scientific simulations. Since it is a major bottleneck in applications in these areas, much research has been carried out on the subject, mainly focused on techniques that allow calculations to be performed within pruned sets of objects. This paper focuses on how interaction calculation (such as collisions) within these sets can be done more efficiently than with existing approaches. Two algorithms are proposed: a sequential algorithm that has linear complexity at the cost of high memory usage, and a parallel algorithm, mathematically proved to be correct, that manages to use GPU resources more efficiently than existing approaches. The proposed and existing algorithms were implemented, and experiments show a speedup of 21.7 for the sequential algorithm (on a small problem size) and 1.12 for the parallel proposal (on a large problem size). By improving interaction calculation, this work contributes to research areas that promote interconnection in the modern world, such as computer graphics and robotics.

Matheus Henrique Junqueira Saldanha, Paulo Sérgio Lopes de Souza

Open Access

Comparing Domain Decomposition Methods for the Parallelization of Distributed Land Surface Models

Current research challenges in hydrology require high-resolution models which simulate the processes comprising the water cycle on a global scale. These requirements stand in great contrast to the current capabilities of distributed land surface models. Hardly any literature noting efficient scalability past approximately 64 processors could be found. Porting these models to supercomputers is no simple task, because the greater part of the computational load stems from the evaluation of highly parametrized equations. Furthermore, the load is heterogeneous in both the spatial and temporal dimensions, and considerable load imbalances occur, triggered by input data. We investigate different domain decomposition methods for distributed land surface models and focus on their properties concerning load-balancing and communication-minimizing partitionings. Artificial strong scaling experiments from a single core to 8,192 cores show that graph-based methods can distribute the computational load of the application almost as efficiently as coordinate-based methods, while the partitionings found by the graph-based method significantly reduce communication overhead.

Alexander von Ramm, Jens Weismüller, Wolfgang Kurtz, Tobias Neckel
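
To make the comparison above concrete, here is a minimal recursive coordinate bisection (RCB) partitioner, a typical coordinate-based method; graph-based partitioners (e.g., METIS-style) would additionally minimize the edge cut and hence communication. The weights, coordinates and splitting rule are illustrative.

```python
import numpy as np

def recursive_coordinate_bisection(coords, weights, n_parts):
    """Coordinate-based partitioning of weighted cells (RCB).

    Split along the longest extent so both halves carry roughly equal load,
    then recurse until n_parts partitions are produced.
    """
    parts = np.zeros(len(coords), dtype=int)

    def split(idx, label, k):
        if k == 1 or len(idx) == 0:
            parts[idx] = label
            return
        pts = coords[idx]
        axis = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))
        order = idx[np.argsort(pts[:, axis])]
        cum = np.cumsum(weights[order])
        k_left = k // 2
        cut = int(np.searchsorted(cum, cum[-1] * k_left / k))
        split(order[:cut + 1], label, k_left)
        split(order[cut + 1:], label + k_left, k - k_left)

    split(np.arange(len(coords)), 0, n_parts)
    return parts

# 10,000 cells with spatially varying load, split into 8 partitions.
xy = np.random.rand(10_000, 2)
load = 1.0 + 5.0 * (xy[:, 0] > 0.8)      # heavier cells near one edge
print(np.bincount(recursive_coordinate_bisection(xy, load, 8)))
```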
Analysis and Detection on Abused Wildcard Domain Names Based on DNS Logs

A wildcard record is a type of resource record (RR) in DNS that allows any domain name in the same zone to map to a single record value. Previous works have made use of DNS zone file data and domain name blacklists to understand the usage of wildcard domain names. In this paper, we analyze wildcard domain names in real network DNS logs and present some novel findings. By analyzing web contents, we found that the proportion of domain names related to pornography and online gambling content (referred to as abused domain names in this work) is much higher among wildcard domain names than among non-wildcard domain names. By analyzing registration, resolution and maliciousness behaviors, we found that abused wildcard domain names carry remarkably higher security risks than normal wildcard domain names. Based on this analysis, we propose the GSCS algorithm to detect abused wildcard domain names. GSCS is based on a domain graph, which gives insights into the similarities of abused wildcard domain names' resolution behaviors. By applying a spectral clustering algorithm and seed domains, GSCS can distinguish abused wildcard domain names from normal ones effectively. Experiments on real datasets indicate that GSCS can achieve detection rates of about 86% with 5% seed domains, performing much better than the BP algorithm.

Guangxi Yu, Yan Zhang, Huajun Cui, Xinghua Yang, Yang Li, Huiran Yang
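
A toy sketch of the spectral-clustering step mentioned above, using scikit-learn on a precomputed similarity matrix; the matrix and the two-cluster setup are placeholders, and GSCS builds its affinities from an actual domain graph plus seed domains.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical similarity matrix over wildcard domain names, e.g. derived
# from shared resolution behavior (this toy matrix is not the paper's data).
similarity = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.0],
    [0.9, 1.0, 0.7, 0.0, 0.1],
    [0.8, 0.7, 1.0, 0.1, 0.0],
    [0.1, 0.0, 0.1, 1.0, 0.9],
    [0.0, 0.1, 0.0, 0.9, 1.0],
])

labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(similarity)

# Clusters containing known abused seed domains would then be flagged.
print(labels)
```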
XScan: An Integrated Tool for Understanding Open Source Community-Based Scientific Code

Many scientific communities have adopted community-based models that integrate multiple components to simulate whole system dynamics. The complexity of these community software projects, which stems from the integration of multiple individual software components developed under different application requirements and for various machine architectures, has become a challenge for effective software system understanding and continuous software development. This paper presents an integrated software toolkit called X-ray Software Scanner (XScan) for a better understanding of large-scale community-based scientific codes. Our software tool provides support to quickly summarize the overall information of scientific codes, including the number of lines of code, programming languages, external library dependencies, as well as architecture-dependent parallel software features. The XScan toolkit also realizes a static software analysis component to collect detailed structural information and provides an interactive visualization and analysis of the functions. We use a large-scale community-based Earth System Model to demonstrate the workflow, functions and visualization of the toolkit. We also discuss the application of advanced graph analytics techniques to assist software modular design and component refactoring.

Weijian Zheng, Dali Wang, Fengguang Song
An On-Line Performance Introspection Framework for Task-Based Runtime Systems

The expected high levels of parallelism together with the heterogeneity and complexity of new computing systems pose many challenges to current software. New programming approaches and runtime systems that can simplify the development of parallel applications are needed. Task-based runtime systems have emerged as a good solution to cope with high levels of parallelism, while providing software portability, and easing program development. However, these runtime systems require real-time information on the state of the system to properly orchestrate program execution and optimise resource utilisation. In this paper, we present a lightweight monitoring infrastructure developed within the AllScale Runtime System, a task-based runtime system for extreme scale. This monitoring component provides real-time introspection capabilities that help the runtime scheduler in its decision-making process and adaptation, while introducing minimum overhead. In addition, the monitoring component provides several post-mortem reports as well as real-time data visualisation that can be of great help in the task of performance debugging.

Xavier Aguilar, Herbert Jordan, Thomas Heller, Alexander Hirsch, Thomas Fahringer, Erwin Laure
Productivity-Aware Design and Implementation of Distributed Tree-Based Search Algorithms

Parallel tree search algorithms offer viable solutions to problems in different areas, such as operations research, machine learning and artificial intelligence. This class of algorithms is highly compute-intensive, irregular and usually relies on context-specific data structures and hand-made code optimizations. Therefore, C and C++ are the languages often employed, due to their low-level features and performance. In this work, we investigate the use of the Chapel high-productivity language for the design and implementation of distributed tree search algorithms for solving combinatorial problems. The experimental results show that Chapel is a suitable language for this purpose, both in terms of performance and productivity. Despite the use of high-level features, the distributed tree search in Chapel is on average 16% slower and reaches up to 85% of the scalability observed for its MPI+OpenMP counterpart.

Tiago Carneiro, Nouredine Melab
Development of Element-by-Element Kernel Algorithms in Unstructured Implicit Low-Order Finite-Element Earthquake Simulation for Many-Core Wide-SIMD CPUs

Acceleration of the Element-by-Element (EBE) kernel in matrix-vector products is essential for high performance in unstructured implicit finite-element applications. However, it is not straightforward to attain high performance in the EBE kernel due to random data accesses with data recurrence. In this paper, we develop methods to circumvent these data races for high performance on many-core CPU architectures with wide SIMD units. The developed EBE kernel attains 16.3% and 20.9% of FP32 peak on the Intel Xeon Phi Knights Landing based Oakforest-PACS and an Intel Skylake Xeon Gold processor based system, respectively. This leads to a 2.88-fold speedup over the baseline kernel and a 2.03-fold speedup of the whole finite-element application on Oakforest-PACS. An example of urban earthquake simulation using the developed finite-element application is shown.

Kohei Fujita, Masashi Horikoshi, Tsuyoshi Ichimura, Larry Meadows, Kengo Nakajima, Muneo Hori, Lalith Maddegedara
A High-Productivity Framework for Adaptive Mesh Refinement on Multiple GPUs

Grid-based physical simulations with multiple GPUs require effective methods to adapt grid resolution in certain sensitive regions of the simulation. In GPU computation, an adaptive mesh refinement (AMR) method is an effective way to compute local regions that demand higher accuracy with higher resolution. However, AMR methods using multiple GPUs demand complicated implementation and various optimizations suitable for GPU computation in order to obtain high performance. Our AMR framework provides a highly productive programming environment of block-based AMR for grid-based applications. Programmers just write the stencil functions that update a grid point on a Cartesian grid, which are executed over a tree-based AMR data structure effectively by the framework. It also provides efficient GPU-suitable methods for halo exchange and mesh refinement with a dynamic load balance technique. The framework-based application for compressible flow reduced the computational time to less than 15%, with 10% of the memory footprint in the best case, compared to the equivalent computation running on a fine uniform grid. It also demonstrated good weak scalability with 84% parallel efficiency on the TSUBAME3.0 supercomputer.

Takashi Shimokawabe, Naoyuki Onodera
Harmonizing Sequential and Random Access to Datasets in Organizationally Distributed Environments

Computational science is rapidly developing, which pushes the boundaries in data management concerning the size and structure of datasets, data processing patterns, geographical distribution of data and performance expectations. In this paper we present a solution for harmonizing data access performance, i.e. finding a compromise between local and remote read/write efficiency that would fit those evolving requirements. It is based on variable-size logical data-chunks (in contrast to fixed-size blocks), direct storage access and several mechanisms improving remote data access performance. The solution is implemented in the Onedata system and suited to its multi-layer architecture, supporting organizationally distributed environments – with limited trust between data providers. The solution is benchmarked and compared to XRootD + XCache, which offers similar functionalities. The results show that the performance of both systems is comparable, although overheads in local data access are visibly lower in Onedata.

Michał Wrzeszcz, Łukasz Opioła, Bartosz Kryza, Łukasz Dutka, Renata G. Słota, Jacek Kitowski
Towards Unknown Traffic Identification Using Deep Auto-Encoder and Constrained Clustering

Nowadays, network traffic identification, as a fundamental technique in the field of cybersecurity, suffers from a critical problem, namely "unknown traffic". Unknown traffic refers to network traffic generated by previously unknown applications (i.e., zero-day applications) in a pre-constructed traffic classification system. The ability to divide the mixed unknown traffic into multiple clusters, each of which contains traffic from only one application as far as possible, is the key to solving this problem. In this paper, we propose DePCK to improve clustering purity. There are two main innovations in our framework: (i) it learns to extract bottleneck features via a deep auto-encoder from traffic statistical characteristics; (ii) it uses flow correlation to guide the process of pairwise constrained k-means. To verify the effectiveness of our framework, we conduct comparative experiments on two real-world datasets. The experimental results show that the clustering purity of DePCK can exceed 94.81% on the ISP-data and 91.48% on the WIDE-data [1], outperforming the state-of-the-art methods RTC [20] and k-means with log data [15].

Yongzheng Zhang, Shuyuan Zhao, Yafei Sang
How to Compose Product Pages to Enhance the New Users’ Interest in the Item Catalog?

Converting first-time users into recurring ones is key to the success of Web-based applications. This problem is known as Pure Cold-Start and it refers to the capability of Recommender Systems (RSs) to provide useful recommendations to users without historical data. Traditionally, RSs assume that non-personalized recommendation can mitigate this problem. However, many users are not interested in consuming only biased items, such as popular or best-rated items. We therefore introduce two new approaches inspired by user coverage maximization to deal with this problem. These coverage-based RSs reached a high number of distinct first-time users. Thus, we propose to compose the product page by mixing complementary non-personalized RSs. An online study conducted with 204 real users confirmed that we should diversify the RSs used to win over first-time users.

Nicollas Silva, Diego Carvalho, Adriano C. M. Pereira, Fernando Mourão, Leonardo Rocha
Rumor Detection on Social Media: A Multi-view Model Using Self-attention Mechanism

With the unprecedented prevalence of social media, rumor detection has become increasingly important since it can prevent misinformation from spreading among the public. Traditional approaches extract features from the source tweet, the replies, the user profiles as well as the propagation path of a rumor event. However, these approaches do not take the sentiment view of the users into account. The conflicting affirmative or denial stances of users can provide crucial clues for rumor detection. Besides, existing work attaches the same importance to all the words in the source tweet, but in fact these words are not equally informative. To address these problems, we propose a simple but effective multi-view deep learning model designed to excavate the stances of users and assign weights to different words. Experimental results on a social-media based dataset reveal that the proposed multi-view model is useful and achieves state-of-the-art accuracy in automatic rumor detection. Our three-view model achieves 95.6% accuracy, and our four-view model using BERT as a view further improves detection accuracy.

Yue Geng, Zheng Lin, Peng Fu, Weiping Wang
EmoMix: Building an Emotion Lexicon for Compound Emotion Analysis

Building a high-quality emotion lexicon is regarded as the foundation of research on emotion analysis. Existing methods have focused on the study of primary categories (i.e., anger, disgust, fear, happiness, sadness, and surprise). However, many emotions expressed in texts are difficult to map to primary emotions, which poses a great challenge for emotion annotation in big data analysis. For instance, "despair" is a combination of "fear" and "sadness," and thus it is difficult to assign to either of them. To address this problem, we propose an automatic method for building an emotion lexicon based on the psychological theory of compound emotion. This method maps emotional words into an emotion space and annotates different emotion classes through a cascade clustering algorithm. Our experimental results show that our method outperforms the state-of-the-art methods in both word- and sentence-level primary classification performance, and it also offers some insights into compound emotion analysis.

Ran Li, Zheng Lin, Peng Fu, Weiping Wang, Gang Shi
Long Term Implications of Climate Change on Crop Planning

The effects of climate change have been much speculated on in the past few years. Consequently, there has been intense interest in one of its key issues of food security into the future. This is particularly so given population increase, urban encroachment on arable land, and the degradation of the land itself. Recently, work has been done on predicting precipitation and temperature for the next few decades as well as developing optimisation models for crop planning. Combining these together, this paper examines the effects of climate change on a large food producing region in Australia, the Murrumbidgee Irrigation Area. For time periods between 1991 and 2071 for dry, average and wet years, an analysis is made about the way that crop mixes will need to change to adapt for the effects of climate change. It is found that sustainable crop choices will change into the future, and that large-scale irrigated agriculture may become unviable in the region in all but the wettest years.

Andrew Lewis, Marcus Randall, Sean Elliott, James Montgomery
Representation Learning of Taxonomies for Taxonomy Matching

Taxonomy matching aims to discover category alignments between two taxonomies, which is an important operation in knowledge sharing that benefits many applications. Existing methods for taxonomy matching mostly depend on lexical string features and domain-specific information. In this paper, we consider representation learning of taxonomies, which projects categories and relationships into low-dimensional vector spaces. We propose a method that takes advantage of category hierarchies and siblings, exploiting a low-dimensional semantic space to model category relations through translation operations in that space. We model taxonomy matching as a maximum weight matching problem on bipartite graphs, which can be solved in polynomial time to generate optimal category alignments for two taxonomies in a global manner. Experimental results on OAEI benchmark datasets show that our method significantly outperforms the baseline methods in taxonomy matching.

Hailun Lin, Yong Liu, Peng Zhang, Jianwu Wang
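
A minimal example of the polynomial-time global matching step described above: maximum-weight bipartite matching via the Hungarian algorithm in SciPy, applied to a hypothetical category-similarity matrix derived from learned embeddings.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical similarity scores between categories of two taxonomies,
# e.g. cosine similarities of their learned embeddings.
score = np.array([
    [0.9, 0.2, 0.1],
    [0.3, 0.8, 0.2],
    [0.1, 0.4, 0.7],
])

# Maximum-weight matching: minimize the negated scores.
rows, cols = linear_sum_assignment(-score)
alignments = list(zip(rows, cols))        # [(0, 0), (1, 1), (2, 2)]
print(alignments)
```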
Creating Training Data for Scientific Named Entity Recognition with Minimal Human Effort

Scientific Named Entity Referent Extraction is often more complicated than traditional Named Entity Recognition (NER). For example, in polymer science, chemical structure may be encoded in a variety of nonstandard naming conventions, and authors may refer to polymers with conventional names, commonly used names, labels (in lieu of longer names), synonyms, and acronyms. As a result, accurate scientific NER methods are often based on task-specific rules, which are difficult to develop and maintain, and are not easily generalized to other tasks and fields. Machine learning models require substantial expert-annotated data for training. Here we propose polyNER: a semi-automated system for efficient identification of scientific entities in text. PolyNER applies word embedding models to generate entity-rich corpora for productive expert labeling, and then uses the resulting labeled data to bootstrap a context-based word vector classifier. Evaluation on materials science publications shows that the polyNER approach enables improved precision or recall relative to a state-of-the-art chemical entity extraction system at a dramatically lower cost: it required just two hours of expert time, rather than extensive and expensive rule engineering, to achieve that result. This result highlights the potential for human-computer partnership for constructing domain-specific scientific NER systems.

Roselyne B. Tchoua, Aswathy Ajith, Zhi Hong, Logan T. Ward, Kyle Chard, Alexander Belikov, Debra J. Audus, Shrayesh Patel, Juan J. de Pablo, Ian T. Foster
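
A rough sketch of the embedding-based candidate generation idea from the abstract above, using gensim Word2Vec to rank vocabulary terms by similarity to a few seed entities; the tiny corpus and seed words are placeholders, and polyNER's actual pipeline (filtering, expert labeling, classifier bootstrapping) is not reproduced here.

```python
from gensim.models import Word2Vec

# Placeholder tokenized sentences standing in for a polymer-science corpus.
corpus = [
    ["the", "polystyrene", "film", "was", "annealed"],
    ["pmma", "was", "dissolved", "in", "toluene"],
    ["the", "polystyrene", "and", "pmma", "blend", "was", "characterised"],
]
model = Word2Vec(corpus, vector_size=50, window=5, min_count=1, epochs=50, seed=1)

# Terms closest to known seed entities form an entity-rich candidate list
# that an expert can label far more productively than raw text.
seeds = ["polystyrene", "pmma"]
candidates = model.wv.most_similar(positive=seeds, topn=10)
print(candidates)
```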
Evaluating the Benefits of Key-Value Databases for Scientific Applications

The convergence of Big Data applications with High-Performance Computing requires new methodologies to store, manage and process large amounts of information. Traditional storage solutions are unable to scale, and that results in complex coding strategies. For example, the brain atlas of the Human Brain Project faces the challenge of processing large amounts of high-resolution brain images. Given the computing needs, we study the effects of replacing a traditional storage system with a distributed Key-Value database on a cell segmentation application. The original code uses HDF5 files on GPFS through an intricate interface, imposing synchronizations. In contrast, by using Apache Cassandra or ScyllaDB through Hecuba, the application code is greatly simplified. Thanks to the Key-Value data model, the number of synchronizations is reduced and the time dedicated to I/O scales when increasing the number of nodes.

Pol Santamaria, Lena Oden, Eloy Gil, Yolanda Becerra, Raül Sirvent, Philipp Glock, Jordi Torres
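
An illustrative key-value access pattern with the DataStax Cassandra driver, to show why per-block writes avoid the shared-file synchronizations mentioned above; Hecuba, used in the paper, hides such calls behind a Python dict-like interface, so the keyspace, table and host below are assumptions rather than the paper's code.

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])          # assumed local test node
session = cluster.connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS atlas
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS atlas.segments (
        block_id int PRIMARY KEY,
        payload blob
    )
""")

# Workers write independent blocks keyed by id -- no shared-file
# synchronization as with a single HDF5 file on a parallel file system.
insert = session.prepare(
    "INSERT INTO atlas.segments (block_id, payload) VALUES (?, ?)")
session.execute(insert, (42, b"\x00" * 1024))
row = session.execute(
    "SELECT payload FROM atlas.segments WHERE block_id = 42").one()
```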
Scaling the Training of Recurrent Neural Networks on Sunway TaihuLight Supercomputer

Recurrent neural network (RNN) models require longer training times with larger datasets and larger numbers of parameters. Distributed training with a large mini-batch size is a potential solution to accelerate the whole training process. This paper proposes a framework for the large-scale training of RNN/LSTM models on the Sunway TaihuLight (SW) supercomputer. We apply a series of architecture-oriented optimizations to the memory-intensive kernels in RNN models to improve computing performance. A lazy communication scheme with an improved communication implementation and a distributed training and testing scheme are proposed to achieve high scalability for distributed training. Furthermore, we explore training with a large mini-batch size in order to improve convergence speed without losing accuracy. The framework supports training RNN models with large numbers of parameters on up to 800 training nodes. The evaluation results show that, compared to training with a single computing node, training based on the proposed framework can achieve a 100-fold improvement in convergence rate with a mini-batch size of 8,000.

Ouyi Li, Wenlai Zhao, Xuancheng Huang, Yushu Chen, Lin Gan, Hongkun Yu, Jiacheng Zhang, Yang Liu, Haohuan Fu, Guangwen Yang
Immersed Boundary Method Halo Exchange in a Hemodynamics Application

In recent years, highly parallelized simulations of blood flow resolving individual blood cells have been demonstrated. Simulating such dense suspensions of deformable particles in flow often involves a partitioned fluid-structure interaction (FSI) algorithm, with separate solvers for the Eulerian fluid and Lagrangian cell grids, plus a solver (e.g., the immersed boundary method) for their interaction. Managing data motion in parallel FSI implementations is increasingly important, particularly for inhomogeneous systems like vascular geometries. In this study, we evaluate the influence of Eulerian and Lagrangian halo exchanges on the efficiency and scalability of a partitioned FSI algorithm for blood flow. We describe an MPI+OpenMP implementation of the immersed boundary method coupled with lattice Boltzmann and finite element methods. We consider how communication and recomputation costs influence the optimization of halo exchanges with respect to three factors: immersed boundary interaction distance, cell suspension density, and relative fluid/cell solver costs.

John Gounley, Erik W. Draeger, Amanda Randles
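
A minimal mpi4py sketch of a 1D Eulerian ghost-layer (halo) exchange of the kind analyzed above; the domain shape, halo width and neighbor layout are simplified assumptions, and the paper's implementation also exchanges Lagrangian cell data.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx, halo = 64, 2                        # local fluid cells + ghost-layer width
field = np.full(nx + 2 * halo, float(rank))

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send my leftmost interior cells to the left neighbor while receiving its
# rightmost interior cells into my right ghost layer, and vice versa.
comm.Sendrecv(sendbuf=field[halo:2 * halo], dest=left,
              recvbuf=field[nx + halo:], source=right)
comm.Sendrecv(sendbuf=field[nx:nx + halo], dest=right,
              recvbuf=field[:halo], source=left)
```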
Future Ramifications of Age-Dependent Immunity Levels for Measles: Explorations in an Individual-Based Model

When a high population immunity already exists for a disease, heterogeneities, such as social contact behavior and preventive behavior, become more important to understand the spread of this disease. Individual-based models are suited to investigate the effects of these heterogeneities. Measles is a disease for which, in many regions, high population immunity exists. However, different levels of immunity are observed for different age groups. For example, the generation born between 1985 and 1995 in Flanders is incompletely vaccinated, and thus has a higher level of susceptibility. As time progresses, this peak in susceptibility will shift to an older age category. Simultaneously, susceptibility will increase due to the waning of vaccine-induced immunity. Older generations, with a high degree of natural immunity, will, on the other hand, eventually disappear from the population. Using an individual-based model, we investigate the impact of changing age-dependent immunity levels (projected for Flanders, for years 2013 to 2040) on the risk for measles outbreaks. We find that, as time progresses, the risk for measles outbreaks increases, and outbreaks tend to be larger. As such, it is important to not only consider infants when designing strategies for measles elimination, but to also take other age categories into account.

Elise Kuylen, Lander Willem, Niel Hens, Jan Broeckhove
Evolution of Hierarchical Structure and Reuse in iGEM Synthetic DNA Sequences

Many complex systems, both in technology and nature, exhibit hierarchical modularity: smaller modules, each of them providing a certain function, are used within larger modules that perform more complex functions. Previously, we have proposed a modeling framework, referred to as Evo-Lexis [21], that provides insight into some fundamental questions about evolving hierarchical systems. The predictions of the Evo-Lexis model should be tested using real data from evolving systems in which the outputs can be well represented by sequences. In this paper, we investigate the time series of iGEM synthetic DNA dataset sequences, and whether the resulting iGEM hierarchies exhibit the qualitative properties predicted by the Evo-Lexis framework. Contrary to Evo-Lexis, in iGEM the amount of reuse decreases during the timeline of the dataset. Although this results in the development of less cost-efficient and less deep Lexis-DAGs, the dataset exhibits a bias towards reusing specific nodes more often than others. This causes the Lexis-DAGs to take the shape of an hourglass with relatively high H-score values and a stable set of core nodes. Despite the reuse bias and the stability of the core set, the dataset presents a high amount of diversity among the targets, which is in line with the Evo-Lexis model.

Payam Siyari, Bistra Dilkina, Constantine Dovrolis
Computational Design of Superhelices by Local Change of the Intrinsic Curvature

Helices appear in nature at many scales, ranging from molecules to tendrils in plants. Organisms take advantage of the helical shape to fold, propel and assemble. For this reason, several applications in micro- and nanorobotics, drug delivery and soft electronics have been suggested. On the other hand, biomolecules can form complex tertiary structures made of helices to accomplish many different functions. A particularly well-known case takes place during cell division, when DNA, a double helix, is packaged into a super-helix (i.e., a helix made of helices) to prevent DNA entanglement. DNA super-helix formation requires auxiliary histone molecules, around which DNA is wrapped in a "beads on a string" structure. The idea of creating superstructures from simple elastic filaments served as the inspiration for this work. Here we report a method to produce filaments with complex shapes by periodically creating strains along the ribbons. Filaments can gain helical shapes, and their helicity is ruled by the asymmetric contraction along the main axis. If the direction of the intrinsic curvature is locally changed, then a tertiary structure can result, similar to the DNA wrapped structure. In this process, auxiliary structures are not required, and therefore new methodologies to shape filaments, of interest to nanotechnology and biomolecular science, are proposed.

Pedro E. S. Silva, Maria Helena Godinho, Fernão Vístulo de Abreu
Spatial Modeling of Influenza Outbreaks in Saint Petersburg Using Synthetic Populations

In this paper, we model influenza propagation in the Russian setting using a spatially explicit model and a detailed human agent database as its input. The aim of the research is to assess the applicability of this modeling method using influenza incidence data for the 2010–2011 epidemic outbreak in Saint Petersburg and to compare the simulation results with the output of a compartmental SEIR model for the same outbreak. For this purpose, a synthetic population of Saint Petersburg was built and used for the simulation via the FRED open-source modeling framework. The parameters related to the outbreak (background immunity level and effective contact rate) are assessed by calibrating the compartmental model to incidence data. We show that the current version of the synthetic population allows the agent-based model to reproduce the real disease incidence.

Vasiliy Leonenko, Alexander Lobachev, Georgiy Bobashev
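
For reference, a minimal SEIR compartmental model of the kind the agent-based results above are compared against, integrated with SciPy; the population size, background immunity and rate parameters below are assumed placeholders, not the calibrated values from the paper.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, n):
    """Classic SEIR right-hand side."""
    s, e, i, r = y
    ds = -beta * s * i / n
    de = beta * s * i / n - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return ds, de, di, dr

n = 4_800_000                       # roughly Saint Petersburg-sized population
alpha = 0.55                        # assumed background immunity level
y0 = (n * (1 - alpha) - 10, 0, 10, n * alpha)
t = np.linspace(0, 150, 151)        # days
# beta: effective contact rate; 1/sigma: latent period; 1/gamma: infectious period
sol = odeint(seir, y0, t, args=(0.6, 1 / 2.0, 1 / 3.0, n))
prevalence = sol[:, 2]              # infectious individuals over time
```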
Six Degrees of Freedom Numerical Simulation of Tilt-Rotor Plane

A six-degrees-of-freedom coupled simulation is presented for a tilt-rotor plane, represented by the V-22 Osprey. The Moving Computational Domain (MCD) method is used to compute the flow field around the aircraft and the movement of the body with high accuracy. This method enables the plane to move through space without restrictions on the computational range, and thus differs from conventional approaches that compute the flow field around a static body placed in a uniform flow, as in a wind tunnel. To calculate with high accuracy, no simplification was used for simulating the propellers; fluid flows are created only by the moving boundaries of the object. A tilt-rotor plane can hover like a helicopter by turning its rotor axes skyward during takeoff or landing, while in flight it behaves as a reciprocating aircraft by turning the rotor axes forward. To perform these two flight modes in the simulation, a multi-axis sliding mesh approach was proposed; this computational technique enables us to deal with multiple rotation axes pointing in different directions. Moreover, in combination with the MCD method, the approach can be applied to simulations with more complicated boundary motions.

Ayato Takii, Masashi Yamakawa, Shinichi Asao, K. Tajiri
A Macroscopic Study on Dedicated Highway Lanes for Autonomous Vehicles

The introduction of Autonomous Vehicles (AVs) will have far-reaching effects on road traffic in cities and on highways. The implementation of Automated Highway Systems (AHS), possibly with a dedicated lane only for AVs, is believed to be a requirement to maximise the benefit from the advantages of AVs. We study the ramifications of an increasing percentage of AVs on the whole traffic system with and without the introduction of a dedicated highway AV lane. We conduct a macroscopic simulation of the city of Singapore under user equilibrium conditions with realistic traffic demand. We present findings regarding average travel time, throughput, road usage, and lane-access control. Our results show a reduction of average travel time as a result of increasing the portion of AVs in the system. We show that the introduction of an AV lane is not beneficial in terms of average commute time. Furthermore, a notable shift of travel demand away from the highways towards major and small roads is noticed in the early stages of AV penetration of the system. Finally, our findings show that after a certain threshold percentage of AVs, the differences between the AV-lane and no-AV-lane scenarios become negligible.

Jordan Ivanchev, Alois Knoll, Daniel Zehe, Suraj Nair, David Eckhoff
An Agent-Based Model for Evaluating the Boarding and Alighting Efficiency of Autonomous Public Transport Vehicles

A key metric in the design of interior layouts of public transport vehicles is the dwell time required to allow passengers to board and alight. Real-world experimentation using physical vehicle mock-ups and involving human participants can be performed to compare dwell times among vehicle designs. However, the associated costs limit such experiments to small numbers of trials. In this paper, we propose an agent-based simulation model of the behavior of passengers during boarding and alighting. High-level strategical behavior is modeled according to the Recognition-Primed Decision paradigm, while the low-level collision-avoidance behavior relies on an extended Social Force Model tailored to our scenario. To enable successful navigation within the confined space of the vehicle, we propose a mechanism to emulate passenger turning while avoiding complex geometric computations. We validate our model against real-world experiments from the literature, demonstrating deviations of less than 11%. In a case study, we evaluate the boarding and alighting times required by three autonomous vehicle interior layouts proposed by industrial designers.

Boyi Su, Philipp Andelfinger, David Eckhoff, Henriette Cornet, Goran Marinkovic, Wentong Cai, Alois Knoll
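
A bare-bones sketch of the Social Force Model term used as the low-level behavior above: a driving force toward the goal plus exponential repulsion from nearby passengers. The parameters are illustrative; the paper extends this model with turning emulation and scenario-specific tuning.

```python
import numpy as np

def social_force(pos, vel, goal, others, desired_speed=1.3, tau=0.5,
                 a=2.0, b=0.3):
    """One passenger's acceleration under a basic Social Force Model.

    The driving term relaxes velocity toward the goal direction within
    time tau; each other passenger contributes an exponentially decaying
    repulsive force along the separation vector.
    """
    direction = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    force = (desired_speed * direction - vel) / tau
    for other in others:
        diff = pos - other
        dist = np.linalg.norm(diff) + 1e-9
        force += a * np.exp(-dist / b) * diff / dist
    return force

# One boarding passenger heading for a seat, with two nearby passengers.
acc = social_force(np.array([0.0, 0.0]), np.array([0.2, 0.0]),
                   np.array([3.0, 1.0]),
                   [np.array([0.5, 0.1]), np.array([1.0, -0.4])])
print(acc)
```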
MLP-IA: Multi-label User Profile Based on Implicit Association Labels

Multi-label user profiles are widely used and have made great contributions in the fields of recommendation systems, personalized search, etc. Current research on multi-label user profiles either ignores the associations among labels or considers only the explicit associations among them, which is not sufficient to take full advantage of the internal associations. In this paper, a new insight is presented to mine the internal correlations among implicit association labels. To take advantage of this insight, a multi-label propagation method with implicit associations (MLP-IA) is proposed to obtain user profiles. A probability matrix is first designed to record the implicit associations, and the multi-label propagation method is then combined with this probability matrix to obtain more accurate user profiles. Finally, this method proves to be convergent and faster than the traditional label propagation algorithm. Experiments on six real-world datasets from Weibo show that, compared with state-of-the-art methods, our approach accelerates convergence and performs significantly better than the previous ones.

Lingwei Wei, Wei Zhou, Jie Wen, Meng Lin, Jizhong Han, Songlin Hu
Estimating Agriculture NIR Images from Aerial RGB Data

Remote sensing in agriculture makes possible the acquisition of large amounts of data without physical contact, providing diagnostic tools with important impacts on the cost and quality of production. Hyperspectral imaging sensors attached to airplanes or unmanned aerial vehicles (UAVs) can obtain spectral signatures, which make it viable to assess vegetation indices and other characteristics of crops and soils. However, some of these imaging technologies are expensive and therefore less attractive to family and/or small producers. In this work, a method for estimating Near Infrared (NIR) bands from a low-cost and well-known RGB camera is presented. The method is based on a weighted sum of NIR values previously acquired from pre-classified uniform areas, using hyperspectral images. Weights (belonging degrees) for the NIR spectra were obtained from the outputs of a K-nearest neighbor classification algorithm. The results showed that the presented method has the potential to estimate the near-infrared band for agricultural areas using only RGB images, with an error of less than 9%.

Daniel Caio de Lima, Diego Saqui, Steve Ataky, Lúcio A. de C. Jorge, Ednaldo José Ferreira, José Hiroki Saito
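
A small scikit-learn sketch of the estimation rule described above: KNN class-membership probabilities act as belonging degrees that weight per-class reference NIR values. The RGB samples and NIR references below are made-up placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training set: RGB values from pre-classified uniform areas
# (crop, soil, shadow) plus one reference NIR value per class averaged from
# hyperspectral acquisitions.
rgb_train = np.array([[60, 95, 40], [66, 90, 45],      # class 0: crop
                      [120, 110, 90], [126, 104, 95],  # class 1: soil
                      [20, 25, 15], [18, 28, 12]],     # class 2: shadow
                     dtype=float)
class_labels = np.array([0, 0, 1, 1, 2, 2])
nir_reference = np.array([0.45, 0.30, 0.10])

knn = KNeighborsClassifier(n_neighbors=3).fit(rgb_train, class_labels)

def estimate_nir(rgb_pixels):
    """NIR estimate = belonging degrees (KNN membership) times references."""
    degrees = knn.predict_proba(rgb_pixels)   # shape (n_pixels, n_classes)
    return degrees @ nir_reference

print(estimate_nir(np.array([[62.0, 92.0, 42.0], [100.0, 98.0, 80.0]])))
```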
Simulation of Fluid Flow in Induced Fractures in Shale by the Lattice Boltzmann Method

With increasing interest in unconventional resources, understanding the flow in fractures, the gathering system for fluid production in these reservoirs, becomes an essential building block for developing effective stimulation treatment designs. Accurate determination of stress-dependent permeability of fractures requires time-intensive physical experiments on fractured core samples. Unlike previous attempts to estimate permeability through experiments, we utilize 3D Lattice Boltzmann Method simulations for increased understanding of how rock properties and generated fracture geometries influence the flow. Here, both real induced shale rock fractures and synthetic fractures are studied. Digital representations are characterized for descriptive topological parameters, then duplicated, with the upper plane translated to yield an aperture and variable degree of throw. We present several results for steady LBM flow in characterized, unpropped fractures, demonstrating our methodology. Results with aperture variation in these complex, rough-walled geometries are described with a modification to the theoretical cubic law relation for flow in a smooth slit. Moreover, a series of simulations mimicking simple variation in proppant concentration, both in full and partial monolayers, are run to better understand their effects on the permeability of propped fractured systems.

Rahman Mustafayev, Randy Hazlett
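
For reference, the cubic-law relation the abstract above uses as its baseline for flow in a smooth slit, written as a small helper; the default viscosity and the example numbers are assumptions.

```python
def slit_flow_rate(aperture_m, width_m, dp_dx, mu=1.0e-3):
    """Volumetric flow rate Q = -(w * h**3 / (12 * mu)) * dp/dx for a smooth
    parallel-plate slit (the cubic law), which the paper modifies for rough,
    variable-aperture fractures."""
    return -(width_m * aperture_m ** 3 / (12.0 * mu)) * dp_dx

# A 0.5 mm aperture, 1 cm wide slit under a 10 kPa/m pressure drop in water.
q = slit_flow_rate(0.5e-3, 1.0e-2, -1.0e4)   # m^3/s
print(q)
```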
Incentive Mechanism for Cooperative Intrusion Response: A Dynamic Game Approach

Multi-hop D2D (Device-to-Device) communication is often exposed to many intrusions due to its inherent properties, such as openness and weak security protection. To mitigate intrusions in time, one significant approach is to establish a Cooperative Intrusion Response System (CIRS) to respond to intrusion activities during data transmission. In a CIRS, user equipment nodes that act as relays (RUEs) are assumed to actively help destination nodes respond to intrusion activities. However, this assumption is often invalid in multi-hop D2D communication because the RUEs are selfish and unwilling to spend extra resources on response tasks. To address this problem, a game approach is proposed to encourage RUEs to cooperate. In detail, we formulate an incentive mechanism for CIRS in multi-hop D2D communication as a dynamic game and obtain an optimal solution to help RUEs decide whether or not to participate in detection. Theoretical analysis shows that only one Nash equilibrium exists for the proposed game. Simulations demonstrate that our mechanism can efficiently motivate potential RUEs to participate in intrusion detection and response, and that it can also block intrusion propagation in time.

Yunchuan Guo, Xiao Wang, Liang Fang, Yongjun Li, Fenghua Li, Kui Geng
A k-Cover Model for Reliability-Aware Controller Placement in Software-Defined Networks

The main characteristics of Software-Defined Networks (SDNs) are the separation of the control and data planes, as well as a logically centralized control plane. This emerging network architecture simplifies data forwarding and allows managing the network in a flexible way. Controllers play a key role in SDNs since they manage the whole network. It is crucial to determine the minimum number of controllers and where they should be placed to provide low latencies between switches and their assigned controller. It is worth underlining that long propagation delays between controllers and switches affect their ability to react to network events quickly, degrading reliability. Thus, the Reliability-Aware Controller Placement (RCP) problem in SDNs is a critical issue. In this work, we propose a k-cover based model for the RCP problem in SDNs. It simultaneously optimizes the number and placement of controllers, as well as the latencies of primary and backup paths between switches and controllers, providing networks that are reliable against link, switch and controller failures. Although the RCP problem is NP-hard, simulation results show that reliabilities greater than 97% were obtained while satisfying low latencies, and that the model can be used to find the optimal solution for different network topologies in negligible time.

Gabriela Schütz
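
A toy brute-force version of the coverage idea behind the k-cover model above: find k controller sites such that every switch has at least one controller within a latency bound. The real model also accounts for backup paths and failures, and would not rely on exhaustive search at scale.

```python
import itertools

def place_controllers(latency, candidates, switches, max_latency, k):
    """Return the first set of k controller sites covering all switches
    within max_latency, or None if no such placement exists."""
    for sites in itertools.combinations(candidates, k):
        if all(any(latency[s][c] <= max_latency for c in sites)
               for s in switches):
            return sites
    return None

# latency[s][c]: propagation delay from switch s to candidate controller c (ms)
latency = {
    "s1": {"c1": 2.0, "c2": 9.0, "c3": 4.0},
    "s2": {"c1": 8.0, "c2": 3.0, "c3": 5.0},
    "s3": {"c1": 7.0, "c2": 2.5, "c3": 6.0},
}
print(place_controllers(latency, ["c1", "c2", "c3"], latency.keys(), 5.0, k=2))
```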
Robust Ensemble-Based Evolutionary Calibration of the Numerical Wind Wave Model

The adaptation of numerical wind wave models to local time-spatial conditions is a problem that can be solved by using various calibration techniques. However, the obtained sets of physical parameters become over-tuned to specific events if there is a lack of observations. In this paper, we propose a robust evolutionary calibration approach that allows building a stochastic ensemble of perturbed models and using it to achieve a trade-off between the quality and robustness of the target model. The implemented robust ensemble-based evolutionary calibration (REBEC) approach was compared to the baseline SPEA2 algorithm in a set of experiments with the SWAN wind wave model configuration for the Kara Sea domain. The metrics provided for the set of scenarios confirm the effectiveness of the REBEC approach for the majority of calibration scenarios.

Pavel Vychuzhanin, Nikolay O. Nikitin, Anna V. Kalyuzhnaya
Approximate Repeated Administration Models for Pharmacometrics

Improving performance through parallelization, while a common approach to reduce running-times in high-performance computing applications, is only part of the story. At some point, all available parallelism is exploited and performance improvements need to be sought elsewhere. As part of drug development trials, a compound is periodically administered, and the interactions between it and the human body are modeled through pharmacokinetics and pharmacodynamics by a set of ordinary differential equations. Numerical integration of these equations is the most computationally intensive part of the fitting process. For this task, parallelism brings little benefit. This paper describes how to exploit the nearly periodic nature of repeated administration models by numerical application of the method of averaging on the one hand and reusing previous computational effort on the other hand. The presented method can be applied on top of any existing integrator while requiring only a single tunable threshold parameter. Performance improvements and approximation error are studied on two pharmacometrics models. In addition, automated tuning of the threshold parameter is demonstrated in two scenarios. Up to 1.7-fold and 70-fold improvements are measured with the presented method for the two models respectively.

Balazs Nemeth, Tom Haber, Jori Liesenborgs, Wim Lamotte
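
A small sketch of repeated-administration superposition for a linear one-compartment oral model; for such linear models the full ODE solve can be avoided entirely, whereas the paper targets general nonlinear ODE models and instead reuses prior integrations together with the method of averaging. All PK parameters below are illustrative.

```python
import numpy as np

def one_compartment_oral(t, dose, ka, ke, vd):
    """Concentration after a single oral dose (Bateman function)."""
    return (dose * ka) / (vd * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def repeated_doses(t, dose, tau, ka, ke, vd):
    """Superpose all doses given every tau hours up to max(t)."""
    t = np.asarray(t, dtype=float)
    conc = np.zeros_like(t)
    n_doses = int(np.max(t) // tau) + 1
    for i in range(n_doses):
        dt = t - i * tau
        single = one_compartment_oral(np.maximum(dt, 0.0), dose, ka, ke, vd)
        conc += np.where(dt >= 0, single, 0.0)   # dose i acts only after t >= i*tau
    return conc

t = np.linspace(0, 96, 400)                      # hours
c = repeated_doses(t, dose=100.0, tau=12.0, ka=1.0, ke=0.1, vd=30.0)
print(c.max())
```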
Evolutionary Optimization of Intruder Interception Plans for Mobile Robot Groups

The task of automated intruder detection and interception is often considered a suitable application for groups of mobile robots. Realistic versions of the problem include representing uncertainty, which turns it into an NP-hard optimization task. In this paper we define the problem of indoor intruder interception with a probabilistic intruder motion model and uncertainty in intruder detection. We define a model for representing the problem and propose an algorithm for optimizing plans for groups of mobile robots patrolling the building. The proposed evolutionary multi-agent algorithm uses a novel representation of solutions. The algorithm has been evaluated using different problem sizes and compared with other methods.

Wojciech Turek, Agata Kubiczek, Aleksander Byrski
Backmatter
Metadata
Title
Computational Science – ICCS 2019
Editors
Dr. João M. F. Rodrigues
Dr. Pedro J. S. Cardoso
Dr. Jânio Monteiro
Prof. Roberto Lam
Dr. Valeria V. Krzhizhanovskaya
Michael H. Lees
Jack J. Dongarra
Peter M.A. Sloot
Copyright Year
2019
Electronic ISBN
978-3-030-22734-0
Print ISBN
978-3-030-22733-3
DOI
https://doi.org/10.1007/978-3-030-22734-0
