
2018 | Book

Computational Science – ICCS 2018

18th International Conference, Wuxi, China, June 11–13, 2018 Proceedings, Part III

Editors: Yong Shi, Haohuan Fu, Yingjie Tian, Valeria V. Krzhizhanovskaya, Michael Harold Lees, Jack Dongarra, Peter M. A. Sloot

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

The three-volume set LNCS 10860, 10861 and 10862 constitutes the proceedings of the 18th International Conference on Computational Science, ICCS 2018, held in Wuxi, China, in June 2018.

The total of 155 full and 66 short papers presented in this book set was carefully reviewed and selected from 404 submissions. The papers were organized in topical sections named:

Part I: ICCS Main Track

Part II: Track of Advances in High-Performance Computational Earth Sciences: Applications and Frameworks; Track of Agent-Based Simulations, Adaptive Algorithms and Solvers; Track of Applications of Matrix Methods in Artificial Intelligence and Machine Learning; Track of Architecture, Languages, Compilation and Hardware Support for Emerging Manycore Systems; Track of Biomedical and Bioinformatics Challenges for Computer Science; Track of Computational Finance and Business Intelligence; Track of Computational Optimization, Modelling and Simulation; Track of Data, Modeling, and Computation in IoT and Smart Systems; Track of Data-Driven Computational Sciences; Track of Mathematical Methods and Algorithms for Extreme Scale; Track of Multiscale Modelling and Simulation

Part III: Track of Simulations of Flow and Transport: Modeling, Algorithms and Computation; Track of Solving Problems with Uncertainties; Track of Teaching Computational Science; Poster Papers

Table of Contents

Frontmatter

Track of Simulations of Flow and Transport: Modeling, Algorithms and Computation

Frontmatter
ALE Method for a Rotating Structure Immersed in the Fluid and Its Application to the Artificial Heart Pump in Hemodynamics

In this paper, we study a dynamic fluid-structure interaction (FSI) problem involving a rotating elastic turbine, modeled by the incompressible fluid equations in the fluid domain under the arbitrary Lagrangian-Eulerian (ALE) description and by the St. Venant-Kirchhoff model in the structure domain under the Lagrangian description, together with its application to a hemodynamic FSI problem involving an artificial heart pump with a rotating rotor. A linearized rotational and deformable structure model is developed for the rotating rotor, and a monolithic mixed ALE finite element method is developed for the hemodynamic FSI system. Numerical simulations are carried out for a hemodynamic FSI model with an artificial heart pump and are validated by comparison with a commercial CFD package on a simplified artificial heart pump.

Pengtao Sun, Wei Leng, Chen-Song Zhang, Rihui Lan, Jinchao Xu
Free Surface Flow Simulation of Fish Turning Motion

In this paper, the influence of a fish's depth below the free surface on its turning motion is clarified by numerical simulation. We use the moving-grid finite volume method and the moving computational domain method with a free-surface height function as the numerical schemes. First, the analysis is performed with the turning radius varied at a fixed depth, to clarify the influence of the radius. Next, we analyze fish that change depth while turning at the same radius, to clarify the influence of depth. In all cases, the drag coefficient was positive, the side-force coefficient was negative, and the lift coefficient was smaller than the drag coefficient. The results show that the smaller the turning radius, the greater the lift and side-force coefficients, and the deeper the fish below the free surface, the greater the lift coefficient. The method thus makes it possible to clarify the influence of depth and turning radius on the flow around a submerged fish in turning motion.

Sadanori Ishihara, Masashi Yamakawa, Takeshi Inomono, Shinichi Asao
Circular Function-Based Gas-Kinetic Scheme for Simulation of Viscous Compressible Flows

A stable gas-kinetic scheme based on a circular function is proposed for the simulation of viscous compressible flows. The main idea of the scheme is to simplify the Maxwellian distribution function, whose integral domain spans both phase velocity and phase energy, into a modified Maxwellian function that integrates over the phase velocity only. The modified Maxwellian function is then reduced to a circular function under the assumption that all particles are distributed on a circle. Firstly, flow over the RAE2822 airfoil is simulated to validate the accuracy of the scheme. Then the nose part of an aerospace plane model is studied to demonstrate the potential of the scheme in industrial applications. Simulation results show that the presented method has good computational accuracy and stability.

Zhuxuan Meng, Liming Yang, Donghui Wang, Chang Shu, Weihua Zhang
A New Edge Stabilization Method for the Convection-Dominated Diffusion-Convection Equations

We study a new edge stabilization method for the finite element discretization of convection-dominated diffusion-convection equations. In addition to stabilizing the jump of the normal derivatives of the solution across the inter-element faces, we introduce a SUPG/GaLS-like stabilization term on the domain boundary rather than in the interior of the domain. New stabilization parameters are also designed. Stability and error bounds are obtained, and numerical results are presented. Theoretically and numerically, the new method is much better than other edge stabilization methods and is comparable to the SUPG method; in general, the new method is more stable than the SUPG method.

Huoyuan Duan, Yu Wei
Symmetric Sweeping Algorithms for Overlaps of Quadrilateral Meshes of the Same Connectivity

We propose a method to calculate the intersections of two admissible quadrilateral meshes of the same connectivity. The global quadrilateral polygon intersection problem is reduced to a local problem of how an edge intersects a local frame consisting of 7 connected edges. A classification of the types of intersection is presented. By symmetry, an alternating-direction sweep algorithm halves the search space. It reduces more than 256 possible cases of polygon intersection to 34 (17 when considering symmetry) programmable cases of edge intersection. Besides, we show that the complexity depends on how the old and new meshes intersect.

Xihua Xu, Shengxin Zhu
A Two-Field Finite Element Solver for Poroelasticity on Quadrilateral Meshes

This paper presents a finite element solver for linear poroelasticity problems on quadrilateral meshes based on the displacement-pressure two-field model. The new solver combines the Bernardi-Raugel element for linear elasticity and a weak Galerkin element for Darcy flow through implicit Euler temporal discretization. The solver does not use any penalty factor and has fewer degrees of freedom than other existing methods. It is free of nonphysical pressure oscillations, as demonstrated by numerical experiments on two widely tested benchmarks. Extension to other types of meshes in two and three dimensions is also discussed.

Graham Harper, Jiangguo Liu, Simon Tavener, Zhuoran Wang
Preprocessing Parallelization for the ALT-Algorithm

In this paper, we improve the preprocessing phase of the ALT algorithm through parallelization. ALT is a preprocessing-based, goal-directed speed-up technique that uses A* (A star), Landmarks and the Triangle inequality to allow fast computation of shortest paths (SP) in large-scale networks. Although faster techniques such as arc-flags, SHARC, Contraction Hierarchies and Highway Hierarchies already exist, ALT is usually combined with these faster algorithms to take advantage of its goal-directed search, further reducing the SP search time and search space. However, ALT relies on landmarks, and choosing these landmarks optimally is NP-hard; hence, no exact efficient solution exists. Since landmark selection relies on constructive heuristics, and the SP search speed-up is inversely proportional to landmark generation time, we propose a parallelization technique that reduces the landmark generation time significantly while increasing its effectiveness.

Genaro Peque Jr., Junji Urata, Takamasa Iryo
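For readers unfamiliar with ALT, the sketch below illustrates the landmark-based lower bound at the heart of the technique. It is a minimal illustration under the assumption of an undirected graph stored as an adjacency dict, not the authors' implementation; the names dijkstra_from, landmark_tables and alt_heuristic are ours.

```python
import heapq

def dijkstra_from(source, graph):
    """Plain Dijkstra; returns distances from `source` to every reachable node.
    `graph` maps node -> list of (neighbor, weight)."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def landmark_tables(landmarks, graph):
    # Preprocessing: one full shortest-path tree per landmark.
    # Each iteration is independent, which is exactly the kind of work
    # a preprocessing parallelization can distribute across cores.
    return {L: dijkstra_from(L, graph) for L in landmarks}

def alt_heuristic(v, t, tables):
    # Triangle inequality: |d(L,t) - d(L,v)| <= d(v,t) for every landmark L,
    # so the maximum over landmarks is a valid A* lower bound.
    return max(abs(tab[t] - tab[v]) for tab in tables.values())
```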
Efficient Linearly and Unconditionally Energy Stable Schemes for the Phase Field Model of Solid-State Dewetting Problems

In this paper, we study linear, first- and second-order in time, uniquely solvable and unconditionally energy stable numerical schemes to approximate the phase field model of solid-state dewetting problems, based on the scalar auxiliary variable (SAV) approach, a newly developed, efficient and accurate method for a large class of gradient flows. The schemes are based on the first-order Euler method and the second-order backward differentiation formula (BDF2) for time discretization, and on finite element methods for space discretization. It is shown that the schemes are unconditionally stable and that the discrete equations are uniquely solvable for all time steps. We present numerical experiments to validate the stability and accuracy of the proposed schemes.

Zhengkang He, Jie Chen, Zhangxin Chen
A Novel Energy Stable Numerical Scheme for Navier-Stokes-Cahn-Hilliard Two-Phase Flow Model with Variable Densities and Viscosities

A novel numerical scheme, including time and spatial discretization, is proposed for the coupled Cahn-Hilliard and Navier-Stokes governing equation system. Variable densities and viscosities are considered. By introducing an intermediate velocity into both the Cahn-Hilliard equation and the momentum equation, the scheme satisfies a discrete energy law. A decoupling approach based on pressure stabilization is implemented to solve the Navier-Stokes part, while the stabilization or convex splitting method is adopted for the Cahn-Hilliard part. The scheme is fully decoupled, linear, and unconditionally energy stable for the incompressible two-phase flow diffuse interface model. Numerical results demonstrate the validity, accuracy, robustness and discrete energy law of the proposed scheme.

Xiaoyu Feng, Jisheng Kou, Shuyu Sun
Study on Numerical Methods for Gas Flow Simulation Using Double-Porosity Double-Permeability Model

In this paper, we study numerical methods for gas flow simulation in dual-continuum porous media. Typical methods for oil flow simulation in dual-continuum porous media cannot be applied directly to this kind of simulation, owing to the artificial mass loss caused by compressibility and the non-robustness caused by the non-linear source term. To avoid these two problems, corrected numerical methods are proposed using mass balance equations and a local linearization of the non-linear source term. The improved numerical methods are successful for the computation of gas flow in double-porosity double-permeability porous media. With this improvement, temporal advancement for each time step consists of three fractional steps: (i) advance the matrix pressure and fracture pressure using the typical computation; (ii) solve the mass balance equation system for mean pressures; (iii) correct the pressures from (i) by the mean pressures from (ii). Numerical results show that mass conservation of gas over the whole domain is guaranteed while the numerical computation remains robust.

Yi Wang, Shuyu Sun, Liang Gong
Molecular Simulation of Displacement of Methane by Injection Gases in Shale

Displacement of methane (CH4) by injected gases is regarded as an effective way to exploit shale gas and sequestrate carbon dioxide (CO2). In this work, the displacement of CH4 by injected gases is studied using grand canonical Monte Carlo (GCMC) simulation. We then use molecular dynamics (MD) simulation to study the adsorption occurrence behavior of CH4 in pores of different sizes. The shale model is composed of organic and inorganic material, an original and comprehensive simplification of the real shale composition. The results show that both the displacement amount of CH4 and the sequestration amount of CO2 trend upward as the pore size increases. CO2 molecules can replace adsorbed CH4 at the adsorption sites directly. By contrast, when N2 molecules are injected into the slit pores, the partial pressure of CH4 decreases. As the pore width increases, the adsorption occurrence transitions from a single adsorption layer to four adsorption layers. We expect this work to reveal the mechanisms of adsorption and displacement of shale gas, providing guidance and a reference for displacement exploitation of shale gas and sequestration of CO2.

Jihong Shi, Liang Gong, Zhaoqin Huang, Jun Yao
A Compact and Efficient Lattice Boltzmann Scheme to Simulate Complex Thermal Fluid Flows

A coupled LBGK scheme, consisting of two independent distribution functions describing velocity and temperature respectively, is established in this paper. The Chapman-Enskog expansion, a procedure to prove the consistency of this mesoscopic method with the macroscopic conservation laws, is conducted for both the velocity and temperature lattice schemes, together with a brief introduction to the commonly used DnQb model. An efficient coding style for Matlab is proposed, which improves coding and calculation efficiency at the same time. The compact and efficient scheme is then applied to the simulation of the well-studied Rayleigh-Benard convection, a representative heat convection problem in modern industry. The results are reasonable and agree well with experimental data. The stability of the scheme is also demonstrated on cases spanning a large range of Rayleigh numbers, up to two million.

Tao Zhang, Shuyu Sun
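As a point of reference for the DnQb notation, the following is a minimal, vectorized D2Q9 LBGK update in the spirit of the coding manner the paper advocates. It covers only the isothermal velocity lattice (the paper couples a second distribution for temperature) and is our own illustrative sketch.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbgk_step(f, tau):
    rho = f.sum(axis=0)                                  # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho     # macroscopic velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau            # BGK collision
    for i in range(9):                                   # periodic streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

nx, ny = 64, 64
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
for _ in range(100):
    f = lbgk_step(f, tau=0.8)
print(f.sum())   # total mass is conserved: 64*64 = 4096.0
```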
Study on Topology-Based Identification of Sources of Vulnerability for Natural Gas Pipeline Networks

Natural gas pipeline networks are the primary means of transporting natural gas, and safety is the priority in production operation. Investigating the vulnerability of natural gas pipeline networks can effectively identify weak links in the networks and is critical to their safe operation. In this paper, based on network evaluation theory, a topology-based method for identifying sources of vulnerability in natural gas pipeline networks was developed. Based on the characteristics of actual flow in natural gas pipeline networks, the network evaluation indices were improved to increase the accuracy of vulnerability-source identification. Based on the improved indices, a topology-based identification process for sources of vulnerability was created. Finally, the effectiveness of the proposed method was verified via pipeline network hydraulic simulation. The results show that the proposed method is simple and can accurately identify sources of vulnerability among the nodes and links of natural gas pipeline networks.

Peng Wang, Bo Yu, Dongliang Sun, Shangmin Ao, Huaxing Zhai
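To make the idea of topology-based vulnerability indices concrete, here is a small sketch using classical centrality measures on a toy pipeline graph. The paper's improved, flow-aware indices are not reproduced here; the composite score and the toy topology below are purely illustrative.

```python
import networkx as nx

# Toy pipeline topology: nodes are stations, edge weights are pipe lengths (km).
G = nx.Graph()
G.add_weighted_edges_from([
    ("source", "A", 12.0), ("A", "B", 8.5), ("B", "C", 6.0),
    ("A", "D", 15.0), ("D", "C", 9.0), ("C", "sink", 4.0),
])

# Classical topological indices; the paper improves on such indices
# using the characteristics of the actual gas flow.
bc = nx.betweenness_centrality(G, weight="weight")
cc = nx.closeness_centrality(G, distance="weight")

# Illustrative composite vulnerability score: equal-weight combination.
score = {n: 0.5 * bc[n] + 0.5 * cc[n] for n in G}
for n, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{n:>7}: {s:.3f}")
```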
LES Study on High Reynolds Turbulent Drag-Reducing Flow of Viscoelastic Fluids Based on Multiple Relaxation Times Constitutive Model and Mixed Subgrid-Scale Model

Owing to the complicated rheological behaviors and elastic effects of viscoelastic fluids, only a handful of studies have reported large-eddy simulation (LES) results on the turbulent drag-reduction (DR) mechanism of viscoelastic fluids, and these few studies are limited to low Reynolds numbers. In this paper, the LES approach is applied to further study the flow characteristics and DR mechanism of high-Reynolds-number viscoelastic turbulent drag-reducing flow. To improve the accuracy of the LES, an N-parallel FENE-P constitutive model based on multiple relaxation times and an improved mixed subgrid-scale (SGS) model are both utilized. The DR rate and velocity fluctuations under different calculation parameters are analyzed. The contributions of different shear stresses to the frictional resistance coefficient, and the turbulent coherent structures closely related to turbulent burst events, are investigated in detail to further reveal the DR mechanism. In particular, the differences between high-Reynolds-number and low-Reynolds-number turbulent flows are addressed. This study is expected to provide useful guidance for the engineering application of turbulent DR technology.

Jingfa Li, Bo Yu, Xinyu Zhang, Shuyu Sun, Dongliang Sun, Tao Zhang

Track of Solving Problems with Uncertainties

Frontmatter
Statistical and Multivariate Analysis Applied to a Database of Patients with Type-2 Diabetes

The prevalence of type 2 Diabetes Mellitus (T2DM) has reached critical proportions globally over the past few years. Diabetes can cause devastating personal suffering, and its treatment represents a major economic burden for every country around the world. To properly guide effective actions and measures, the present study examines the profile of the diabetic population in Mexico. We used the Karhunen-Loève transform, a form of principal component analysis, to identify the factors that contribute to T2DM. The results revealed a distinct profile of patients who cannot control this disease. Results also demonstrated that, compared to young patients, old patients tend to have better glycemic control. The statistical analysis reveals patient profiles and their health outcomes, and identifies the variables that measure overlapping health issues as reported in the database (i.e., collinearity).

Diana Canales, Neil Hernandez-Gress, Ram Akella, Ivan Perez
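As an illustration of the Karhunen-Loève transform used in the study, the sketch below computes principal axes via the SVD of centered data. The patient matrix here is random stand-in data, not the Mexican health records; near-zero singular values would signal the collinearity the abstract mentions.

```python
import numpy as np

def karhunen_loeve(X, k):
    """Project rows of X (patients x variables) onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)                       # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)               # variance ratio per component
    return Xc @ Vt[:k].T, Vt[:k], explained[:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                     # stand-in for patient records
scores, axes, ratio = karhunen_loeve(X, k=2)
print("variance explained by first two components:", ratio)
```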
Novel Monte Carlo Algorithm for Solving Singular Linear Systems

A new Monte Carlo algorithm for solving singular linear systems of equations is introduced. We consider the convergence of the resolvent operator $R_\lambda$ and construct an algorithm based on the mapping of the spectral parameter $\lambda$. The approach is applied to systems with singular matrices, for which we show that fairly high accuracy can be obtained.

Behrouz Fathi Vajargah, Vassil Alexandrov, Samaneh Javadi, Ali Hadian
Reducing Data Uncertainty in Forest Fire Spread Prediction: A Matter of Error Function Assessment

Forest fires are a significant problem that causes extensive damage around the world every year. To tackle these hazards efficiently, one can rely on forest fire spread simulators. Any forest fire evolution model requires several input data parameters to describe the scenario where the fire spread takes place; however, this data is usually subject to high levels of uncertainty. To reduce the impact of input-data uncertainty, different strategies have been developed over the last years. One of these strategies consists of adjusting the input parameters according to the observed evolution of the fire. This strategy highlights how critical it is to have reliable and solid metrics for assessing the error of the computational forecasts. The aim of this work is to assess eight different error functions applied to forest fire spread simulation, in order to understand their respective advantages and drawbacks and to determine in which cases each is beneficial.

Carlos Carrillo, Ana Cortés, Tomàs Margalef, Antonio Espinosa, Andrés Cencerrado
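The paper assesses eight error functions; as a flavor of what such functions look like, here are two common ways to score a simulated burned-area map against an observed one. These are illustrative choices evaluated on synthetic maps, not necessarily among the eight functions the paper studies.

```python
import numpy as np

def symmetric_difference_error(sim, obs):
    """Cells predicted XOR observed, normalized by the observed burned area."""
    return np.logical_xor(sim, obs).sum() / obs.sum()

def jaccard_error(sim, obs):
    """1 - intersection-over-union of the burned areas."""
    inter = np.logical_and(sim, obs).sum()
    union = np.logical_or(sim, obs).sum()
    return 1.0 - inter / union

obs = np.zeros((100, 100), bool); obs[40:70, 40:70] = True   # observed fire
sim = np.zeros((100, 100), bool); sim[45:80, 38:66] = True   # simulated fire
print(symmetric_difference_error(sim, obs), jaccard_error(sim, obs))
```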
Analysis of the Accuracy of OpenFOAM Solvers for the Problem of Supersonic Flow Around a Cone

Numerical results comparing the accuracy of several OpenFOAM solvers are presented. The comparison was made for the problem of inviscid compressible flow around a cone at zero angle of attack. The results obtained with the various OpenFOAM solvers are compared with the known numerical solution of the problem, with varying cone angle and flow velocity. This study is part of a project aimed at creating a reliable numerical technology for modelling flows around elongated bodies of rotation (EBR).

Alexander E. Bondarev, Artem E. Kuvshinnikov
Modification of Interval Arithmetic for Modelling and Solving Uncertainly Defined Problems by Interval Parametric Integral Equations System

In this paper we present a concept for modeling and solving uncertainly defined boundary value problems described by the 2D Laplace equation. We define the uncertainty of the input data (the shape of the boundary and the boundary conditions) using interval numbers. Uncertainty can be considered separately for selected input data or simultaneously for all of them. We propose an interval parametric integral equations system (IPIES) to solve such problems. We obtain IPIES by modifying PIES, which was previously proposed for precisely (exactly) defined problems; for this purpose, we include the uncertainly defined input data in the mathematical formalism of PIES. We use a pseudo-spectral method for the numerical solution of IPIES and propose a modification of directed interval arithmetic to obtain interval solutions. We present the strategy on examples of potential problems. To verify the correctness of the method, we compare the obtained interval solutions with analytical ones, deriving interval analytical solutions using classical and directed interval arithmetic.

Eugeniusz Zieniuk, Marta Kapturczak, Andrzej Kużelewski
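For context, the snippet below implements the classical interval operations that such methods build on. The paper's contribution is a modification of directed interval arithmetic (where endpoints may be reversed); this minimal sketch covers only the classical base.

```python
class Interval:
    """Classical (non-directed) interval arithmetic: [lo, hi] with lo <= hi."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# An uncertain boundary condition u in [0.9, 1.1] scaled by a known factor 2.5:
print(Interval(0.9, 1.1) * Interval(2.5, 2.5))   # -> [2.25, 2.75]
```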
A Hybrid Heuristic for the Probabilistic Capacitated Vehicle Routing Problem with Two-Dimensional Loading Constraints

The Probabilistic Capacitated Vehicle Routing Problem (PCVRP) is a generalization of the classical Capacitated Vehicle Routing Problem (CVRP). The main difference is the stochastic presence of the customers: the number of customers to be visited each time is a random variable, and each customer is associated with a given probability of presence. We consider a special case of the PCVRP in which a fleet of identical vehicles must serve customers, each with a given demand consisting of a set of rectangular items. The vehicles have a two-dimensional loading surface and a maximum capacity. Solving the problem consists in finding an a priori route visiting all customers which minimizes the expected length over all possibilities. We propose a hybrid heuristic, based on a branch-and-bound algorithm, for solving the problem. The effectiveness of the approach is shown by means of computational results.

Soumaya Sassi Mahfoudh, Monia Bellalouna
A Human-Inspired Model to Represent Uncertain Knowledge in the Semantic Web

One of the most evident and well-known limitations of the Semantic Web technology is its lack of capability to deal with uncertain knowledge. As uncertainty is often part of the knowledge itself or can be inducted by external factors, such a limitation may be a serious barrier for some practical applications. A number of approaches have been proposed to extend the capabilities in terms of uncertainty representation; some of them are just theoretical or not compatible with the current semantic technology; others focus exclusively on data spaces in which uncertainty is or can be quantified. Human-inspired models have been adopted in the context of different disciplines and domains (e.g. robotics and human-machine interaction) and could be a novel, still largely unexplored, pathway to represent uncertain knowledge in the Semantic Web. Human-inspired models are expected to address uncertainties in a way similar to the human one. Within this paper, we (i) briefly point out the limitations of the Semantic Web technology in terms of uncertainty representation, (ii) discuss the potentialities of human-inspired solutions to represent uncertain knowledge in the Semantic Web, (iii) present a human-inspired model and (iv) a reference architecture for implementations in the context of the legacy technology.

Salvatore Flavio Pileggi
Bayesian Based Approach Learning for Outcome Prediction of Soccer Matches

In today's world, sports produce considerable data, such as player skills, game results, season matches, and league management. The big challenge in sports science is to analyze this data to gain a competitive advantage. The analysis can be done using several techniques and statistical methods in order to produce valuable information. The problem of modeling soccer data has become increasingly popular in the last few years, with the prediction of results being the most popular topic. In this paper, we propose a Bayesian model based on rank position and shared history that predicts the outcome of future soccer matches. The model was tested using a data set containing the results of over 200,000 soccer matches from different soccer leagues around the world.

Laura Hervert-Escobar, Neil Hernandez-Gress, Timothy I. Matis
Fuzzy and Data-Driven Urban Crowds

In this work we present a system able to simulate crowds in complex urban environments. The system is built in two stages: urban environment generation and pedestrian simulation. For the first stage we integrate the WRLD3D plug-in with real data collected from GPS traces. We then use a hybrid approach that incorporates steering pedestrian behaviors, with the goal of simulating the subtle variations present in real scenarios without needing large amounts of data for low-level behaviors such as pedestrian motion affected by other agents and nearby static obstacles. Nevertheless, realistic human behavior cannot be modeled using purely deterministic approaches; therefore our simulations are data-driven and, where needed, handled by a combination of finite state machines (FSM) and fuzzy logic to deal with the uncertainty of people's motion.

Leonel Toledo, Ivan Rivalcoba, Isaac Rudomin

Track of Teaching Computational Science

Frontmatter
Design and Analysis of an Undergraduate Computational Engineering Degree at Federal University of Juiz de Fora

The undergraduate program in Computational Engineering at the Federal University of Juiz de Fora, Brazil, was created in 2008 as a joint initiative of two distinct departments of the University: Computer Science, located in the Exact Sciences Institute, and Applied and Computational Mechanics, located in the School of Engineering. The first freshmen enrolled in 2009 and graduated in 2014. This work presents the curriculum structure of this pioneering full bachelor's degree in Computational Engineering in Brazil.

Marcelo Lobosco, Flávia de Souza Bastos, Bernardo Martins Rocha, Rodrigo Weber dos Santos
Extended Cognition Hypothesis Applied to Computational Thinking in Computer Science Education

Computational thinking is a much-used concept in computer science education. Here we examine the concept from the viewpoint of the extended cognition hypothesis. The analysis reveals that the extent of the concept is limited by its strong historical roots in computer science and software engineering. According to the extended cognition hypothesis, there is no meaningful distinction between human cognitive functions and the technology. This standpoint promotes a broader interpretation of human-technology interaction: human cognitive processes spontaneously adopt available technology-enhanced skills when technology is used at cognitively relevant levels and in relevant modalities. A new concept, technology-synchronized thinking, is presented to denote this conclusion. A more diverse and practical approach to computer science education is suggested.

Mika Letonsaari
Interconnected Enterprise Systems − A Call for New Teaching Approaches

Enterprise Resource Planning Systems (ERPS) have continually extended their scope over the last decades. The evolution has now reached a stage where ERPS support the entire value chain of an enterprise. This study deals with the rise of a new era, in which ERPS are transformed into so-called interconnected Enterprise Systems (iES), which have a strong outside orientation and provide a networked ecosystem open to human and technological actors (e.g. social media, Internet of Things). Higher education institutions need to prepare their students to understand this shift and to transfer its implications to today's business world. Based on literature and applied learning scenarios, the study shows existing approaches to the use of ERPS in teaching and examines whether and how they can still be used. In addition, implications are outlined and the necessary changes towards new teaching approaches for iES are proposed.

Bettina Schneider, Petra Maria Asprion, Frank Grimberg

Poster Papers

Frontmatter
Efficient Characterization of Hidden Processor Memory Hierarchies

A processor's memory hierarchy has a major impact on the performance of running code. However, computing platforms where the actual hardware characteristics are hidden from both the end user and the tools that mediate execution, such as a compiler, a JIT and a runtime system, are used more and more, for example when performing large-scale computation in clouds and clusters. Even worse, in such environments a single computation may use a collection of processors with dissimilar characteristics. Ignorance of the performance-critical parameters of the underlying system makes it difficult to improve performance by optimizing the code or adjusting runtime-system behaviors; it also makes application performance harder to understand. To address this problem, we have developed a suite of portable tools that can efficiently derive many of the parameters of processor memory hierarchies, such as the levels, effective capacity and latency of caches and TLBs, in a matter of seconds. The tools use a series of carefully considered experiments to produce and analyze cache response curves automatically. The tools are inexpensive enough to be used in a variety of contexts, including install time, compile time, runtime adaptation, or performance-understanding tools.

Keith Cooper, Xiaoran Xu
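The core experiment behind such tools is a pointer chase through a random cyclic permutation of growing footprint. Below is a rough Python rendition of the idea; the authors' tools are compiled and far more careful, and interpreter overhead blurs, but does not entirely erase, the latency steps at cache-capacity boundaries.

```python
import random, time

def chase_ns(n_slots, iters=2_000_000):
    """Average time per hop over a random cyclic permutation of n_slots slots.
    In a compiled language, the jump in latency as n_slots exceeds each cache
    level exposes that level's effective capacity."""
    perm = list(range(n_slots))
    random.shuffle(perm)
    nxt = [0] * n_slots
    for i in range(n_slots):          # link the permutation into one big cycle
        nxt[perm[i - 1]] = perm[i]
    j, t0 = 0, time.perf_counter()
    for _ in range(iters):            # the chase: each load depends on the last
        j = nxt[j]
    return (time.perf_counter() - t0) / iters * 1e9

for slots in (2**k for k in range(10, 24, 2)):
    print(f"{slots:>9} slots: {chase_ns(slots):6.1f} ns/hop")
```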
Discriminating Postural Control Behaviors from Posturography with Statistical Tests and Machine Learning Models: Does Time Series Length Matter?

This study examines the influence of time series duration on the discriminative power of center-of-pressure (COP) features in distinguishing different population groups via statistical tests and machine learning (ML) models. We used two COP datasets, each containing two groups: one collected from older adults with low or high risk of falling (dataset I), and the other from healthy and post-stroke adults (dataset II). Each time series was mapped into a vector of 34 features twice: first using the original duration of 60 s, and then using only the first 30 s. We then compared each feature across groups through traditional statistical tests. Next, we trained six popular ML models to distinguish between the groups using features from the original signals and then from the shorter signals. The performance of each ML model was then compared across groups for the 30 s and 60 s time series. The mean percentage of features able to discriminate the groups via statistical tests was 26.5% smaller for 60 s signals in dataset I, but 13.5% greater in dataset II. In terms of ML, better performances were achieved for signals of 60 s in both datasets, mainly for similarity-based algorithms. Hence, we recommend the use of COP time series recorded over at least 60 s. The contributions of this paper also include insights into the robustness of popular ML models to the sampling duration of COP time series.

Luiz H. F. Giovanini, Elisangela F. Manffra, Julio C. Nievola
Mathematical Modelling of Wormhole-Routed x-Folded TM Topology in the Presence of Uniform Traffic

Recently, the x-Folded TM topology was introduced as a desirable design for k-ary n-cube networks due to its low diameter and short average distance. In this article, we propose a mathematical model to predict the average network delay for the ($k \times k$) x-Folded TM under a uniform traffic pattern. Our model accurately formulates the applied traffic pattern over the network's virtual channels based on the average distance and the number of nodes. The mathematical results indicate that the average network delay for the x-Folded TM topology is reduced compared with other topologies under the uniform traffic pattern. Finally, the results obtained from simulation experiments confirm that the mathematical model exhibits a significant degree of accuracy for the x-Folded TM topology under this traffic pattern, even with varied numbers of virtual channels.

Mehrnaz Moudi, Mohamed Othman, Kweh Yeah Lun, Amir Rizaan Abdul Rahiman
Adaptive Time-Splitting Scheme for Nanoparticles Transport with Two-Phase Flow in Heterogeneous Porous Media

In this work, we introduce an efficient scheme using an adaptive time-splitting method to simulate nanoparticle transport associated with two-phase flow in heterogeneous porous media. The capillary pressure is linearized in terms of saturation to couple the pressure and saturation equations. The governing equations are solved using an IMplicit Pressure Explicit Saturation-IMplicit Concentration (IMPES-IMC) scheme. The spatial discretization uses the cell-centered finite difference (CCFD) method. The time interval is divided into three levels, the pressure level, the saturation level, and the concentration level, which reduces the computational cost. The time step-sizes at the different levels are adapted iteratively by satisfying the Courant-Friedrichs-Lewy (CFL < 1) condition. The results illustrate the efficiency of the numerical scheme. A numerical example of a highly heterogeneous porous medium is presented, and the adaptive time step-sizes are shown in graphs.

Mohamed F. El-Amin, Jisheng Kou, Shuyu Sun
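A minimal sketch of the CFL-driven step-size choice described above, with a fixed subdivision standing in for the paper's iterative adaptation across the pressure, saturation and concentration levels; function and variable names are ours.

```python
def adaptive_dt(dx, velocities, cfl=0.9, dt_max=1.0):
    """Largest stable step satisfying CFL < 1 for the current velocity field."""
    vmax = max(abs(v) for v in velocities)
    return dt_max if vmax == 0 else min(dt_max, cfl * dx / vmax)

# Outer pressure step subdivided into finer saturation/concentration sub-steps
# (a fixed subdivision here; the paper adapts each level iteratively):
dt_pressure = adaptive_dt(dx=1.0, velocities=[0.3, -0.8, 0.5])
dt_saturation = dt_pressure / 4
dt_concentration = dt_saturation / 2
print(dt_pressure, dt_saturation, dt_concentration)
```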
Identifying Central Individuals in Organised Criminal Groups and Underground Marketplaces

Traditional organised criminal groups are becoming more active in the cyber domain. They form online communities and use these as marketplaces for illegal materials, products and services, which drives the Crime as a Service business model. The challenge for law enforcement in investigating and disrupting underground marketplaces is knowing which individuals to focus effort on, because taking down a few high-impact individuals can have a greater effect on disrupting the criminal services provided. This paper presents our study on the performance of social network centrality measures for identifying important individuals in two networks. We focus our analysis on two distinctly different network structures: Enron and Nulled.IO. The first resembles an organised criminal group, while the latter is a more loosely structured hacker forum. Our results show that centrality measures favour individuals with more communication rather than the individuals usually considered more important: organised crime leaders and cyber criminals who sell illegal materials, products and services.

Jan William Johnsen, Katrin Franke
Guiding the Optimization of Parallel Codes on Multicores Using an Analytical Cache Model

Cache performance is particularly hard to predict in modern multicore processors, as several threads can be concurrently in execution and private cache levels are combined with shared ones. This paper presents an analytical model able to evaluate the cache performance of the whole cache hierarchy for parallel applications in less than one second, taking as input their source code and the cache configuration. While the model does not tackle some advanced hardware features, it can help optimizers to make reasonably good decisions in a very short time. This is supported by an evaluation based on two modern architectures and three different case studies, in which the model predictions differ on average by just 5.05% from the results of a detailed hardware simulator and correctly guide different optimization decisions.

Diego Andrade, Basilio B. Fraguela, Ramón Doallo
LDA-Based Scoring of Sequences Generated by RNN for Automatic Tanka Composition

This paper proposes a method for scoring sequences generated by a recurrent neural network (RNN) for automatic Tanka composition. Our method scores sequences based on topic assignments provided by latent Dirichlet allocation (LDA): when many word tokens in a sequence are assigned to the same topic, we give the sequence a high score. While sequences can also be scored by their RNN output probabilities, the sequences with large probabilities are likely to share much the same subsequences and thus lack diversity. The experimental results, where we scored Japanese Tanka poems generated by an RNN, show that the top-ranked sequences selected by our method contained a wider variety of subsequences than those selected by RNN output probabilities.

Tomonari Masada, Atsuhiro Takasu
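The scoring idea reduces to measuring how concentrated a sequence's LDA topic assignments are. A minimal sketch, assuming per-token topic ids already produced by a fitted LDA model (the candidate poems and ids below are made up):

```python
from collections import Counter

def topic_concentration(topic_assignments):
    """Fraction of word tokens assigned to the single dominant LDA topic."""
    if not topic_assignments:
        return 0.0
    (_, count), = Counter(topic_assignments).most_common(1)
    return count / len(topic_assignments)

candidates = {
    "poem-a": [3, 3, 3, 7, 3, 3],    # topically focused -> high score
    "poem-b": [0, 4, 2, 7, 5, 1],    # scattered topics  -> low score
}
ranked = sorted(candidates, key=lambda k: -topic_concentration(candidates[k]))
print(ranked)  # ['poem-a', 'poem-b']
```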
Computing Simulation of Interactions Between α+β Protein and Janus Nanoparticle

Janus nanoparticles have surfaces with two or more distinct physical properties, allowing different types of chemistry to occur on the same particle and thus enabling many unique applications. It is necessary to investigate the interaction between proteins and Janus nanoparticles (NPs), two typical building blocks for making bio-nano-objects. Here we computed the phase diagrams for an α+β protein (GB1) and a Janus NP using a coarse-grained model and molecular dynamics simulations, and studied how the secondary structures of the protein, the binding interface and the kinetics are affected by the nearby NP. Two phases were identified for the system. In the folded phase, the formation of β-sheets is always enhanced by the presence of NPs, while the formation of α-helices is not sensitive to NPs. The underlying mechanism was attributed to the geometry and flexibility of the β-sheets. The knowledge gained in this study is useful for understanding the interactions between proteins and Janus NPs, which may facilitate the design of new bio-nanomaterials or devices.

Xinlu Guo, Xiaofeng Zhao, Shuguang Fang, Yunqiang Bian, Wenbin Kang
A Modified Bandwidth Reduction Heuristic Based on the WBRA and George-Liu Algorithm

This paper presents a modified heuristic based on the Wonder Bandwidth Reduction Algorithm with starting vertex given by the George-Liu algorithm. The results are obtained on a dataset of instances taken from the SuiteSparse matrix collection when solving linear systems using the zero-fill incomplete Cholesky-preconditioned conjugate gradient method. The numerical results show that the improved vertex labeling heuristic compares very favorably in terms of efficiency and performance with the well-known GPS algorithm for bandwidth and profile reductions.

Sanderson L. Gonzaga de Oliveira, Guilherme O. Chagas, Diogo T. Robaina, Diego N. Brandão, Mauricio Kischinhevsky
Improving Large-Scale Fingerprint-Based Queries in Distributed Infrastructure

Fingerprints are often used in sketching mechanisms, which map elements into a concise and representative synopsis using small space. Large-scale fingerprint-based queries can serve as an important tool in big data analytics, for example in set membership queries, rank-based queries and correlation queries. In this paper, we propose an efficient approach to improving the performance of large-scale fingerprint-based queries in a distributed infrastructure. At the initial stage of a query, we transform the fingerprint sketch into a space-constrained global rank-based sketch at the query site by collecting minimal information from the local sites. The time-consuming operations, such as local fingerprint construction and searching, are pushed down to the local sites. The proposed approach can construct large-scale and scalable fingerprints efficiently and dynamically; meanwhile, it can supervise continuous queries by utilizing the global sketch and run an appropriate number of jobs over the distributed computing environment. We implement our approach in Spark and evaluate its performance over real-world datasets. Compared with native SparkSQL, our approach outperforms the native routines in query response time by two orders of magnitude.

Shupeng Wang, Guangjun Wu, Binbin Li, Xin Jin, Ge Fu, Chao Li, Jiyuan Zhang
An Effective Truth Discovery Algorithm with Multi-source Sparse Data

The problem of finding the truth from inconsistent information is defined as truth discovery. The essence of truth discovery is to estimate source quality, so the mechanism for measuring the quality of a data source immensely affects the result and process of truth discovery. However, state-of-the-art algorithms do not consider how source quality is affected when a source provides null. We propose to measure source quality using the silent rate, true rate and false rate. In addition, we utilize a probabilistic graphical model to model truth and source quality, which is measured through null and real data. Our model makes full use of all claims, including nulls, to improve the accuracy of truth discovery. Compared with prevalent approaches, the effectiveness of our approach is verified on three real datasets, and the recall improves significantly.

Jiyuan Zhang, Shupeng Wang, Guangjun Wu, Lei Zhang
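The three source-quality measures are straightforward once a truth assignment is fixed; the sketch below computes them for one source. In the actual algorithm the truth is latent and estimated jointly with source quality via the probabilistic graphical model; the toy data here is invented.

```python
def source_rates(claims, truth):
    """Silent/true/false rates of one source over a set of questions.
    `claims` maps question -> value or None (silent); `truth` maps question -> value."""
    n = len(truth)
    silent = sum(1 for q in truth if claims.get(q) is None)
    true = sum(1 for q in truth if claims.get(q) == truth[q])
    false = n - silent - true
    return silent / n, true / n, false / n

truth = {"q1": "a", "q2": "b", "q3": "c", "q4": "d"}
source = {"q1": "a", "q2": None, "q3": "x", "q4": "d"}
print(source_rates(source, truth))   # (0.25, 0.5, 0.25)
```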
Blackboard Meets Dijkstra for Resource Allocation Optimization

This paper presents the integration of Dijkstra's algorithm into a Blackboard framework to optimize the selection of web resources from service providers. The architectural framework of the proposed Blackboard approach and its components in a real-life scenario is laid out. To justify the approach and show its practical feasibility, a sample implementation architecture is presented.

Christian Vorhemus, Erich Schikuta
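A toy rendition of the combination: knowledge sources post service offers onto a shared blackboard graph, and Dijkstra's algorithm selects the cheapest provider chain. This is our illustrative sketch, not the paper's architecture; the providers, edge weights and control flow are invented.

```python
import networkx as nx

# Blackboard: a shared graph that knowledge sources incrementally extend.
blackboard = nx.DiGraph()

def provider_ks(bb):
    # Knowledge source: posts service offers as weighted edges (cost in cents).
    bb.add_weighted_edges_from([
        ("start", "storageA", 12), ("start", "storageB", 9),
        ("storageA", "compute", 20), ("storageB", "compute", 25),
        ("compute", "goal", 5),
    ])

provider_ks(blackboard)   # a control loop would fire sources until quiescence
path = nx.dijkstra_path(blackboard, "start", "goal", weight="weight")
cost = nx.dijkstra_path_length(blackboard, "start", "goal", weight="weight")
print(path, cost)   # ['start', 'storageA', 'compute', 'goal'] 37
```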
Augmented Self-paced Learning with Generative Adversarial Networks

Learning with very limited training data is a challenging but typical scenario in machine learning applications. In order to achieve a robust learning model, on one hand the instructive labeled instances should be fully leveraged; on the other hand, extra data sources need to be explored. This paper develops an effective learning framework for robust modeling by naturally combining two promising advanced techniques: generative adversarial networks and self-paced learning. Specifically, we present a novel augmented self-paced learning with generative adversarial networks (ASPL-GANs), which consists of three component modules: a generator G, a discriminator D, and a self-paced learner S. Via competition between G and D, realistic synthetic instances with specific class labels are generated. Receiving both real and synthetic instances as training data, classifier S simulates the learning process of humans in a self-paced fashion, gradually proceeding from easy to complex instances in training. The three components are maintained in a unified framework and optimized jointly via alternating iteration. Experimental results validate the effectiveness of the proposed algorithm in classification tasks.

Xiao-Yu Zhang, Shupeng Wang, Yanfei Lv, Peng Li, Haiping Wang
Benchmarking Parallel Chess Search in Stockfish on Intel Xeon and Intel Xeon Phi Processors

The paper presents results from benchmarking the parallel multithreaded Stockfish chess engine on selected multi- and many-core processors. It is shown how the strength of play for an n-thread version compares to the 1-thread version on both Intel Xeon and the latest Intel Xeon Phi x200 processors. Results such as the numbers of wins, losses and draws are presented, along with how these change with growing numbers of threads. The impact of using particular cores on Intel Xeon Phi is shown. Finally, the strengths of play for the tested computing devices are compared.

Pawel Czarnul
Leveraging Uncertainty Analysis of Data to Evaluate User Influence Algorithms of Social Networks

Identifying highly influential users in social networks is critical in various practices, such as advertising, information recommendation, and surveillance of public opinion. According to recent studies, different existing user influence algorithms generally produce different results, and there are no effective metrics to evaluate the representation abilities and the performance of these algorithms on the same dataset. Therefore, the results of these algorithms cannot be accurately evaluated and their limits cannot be effectively observed. In this paper, we propose an uncertainty-based Kalman filter method for predicting the optimal user-influence results. Simultaneously, we develop novel evaluation metrics, improving the maximum correntropy and normalized discounted cumulative gain (NDCG) criteria, to measure the effectiveness of user influence and the level of uncertainty fluctuation intervals of these algorithms. Experimental results validate the effectiveness of the proposed algorithm and evaluation metrics on different datasets.

Jianjun Wu, Ying Sha, Rui Li, Jianlong Tan, Bin Wang
E-Zone: A Faster Neighbor Point Query Algorithm for Matching Spatial Objects

The latest astronomy projects observe spatial objects with astronomical cameras that generate images continuously. To identify transient objects, the positions of these objects on the images must be compared against a reference table on the same portion of the sky, a complex search task called cross match. We designed Euclidean-Zone (E-Zone), a method for faster neighbor point queries which allows efficient cross match between spatial catalogs. In this paper, we implemented the E-Zone algorithm using the Euclidean distance between celestial objects in pixel coordinates to avoid the complex mathematical functions of the equatorial coordinate system. We also surveyed the parameters of our model and other system factors to find optimal configurations for the algorithm. In addition to the sequential algorithm, we modified the serial program and implemented an OpenMP-parallelized version. The serial version of our algorithm achieved a speedup of 2.07 times over using the equatorial coordinate system. We also achieved 19 ms for sequential queries and 5 ms for parallel queries of 200,000 objects on a single CPU processor against a 230,520-object synthetic reference database.

Xiaobin Ma, Zhihui Du, Yankui Sun, Yuan Bai, Suping Wu, Andrei Tchernykh, Yang Xu, Chao Wu, Jianyan Wei
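The essence of zone-based neighbor search in pixel coordinates can be sketched in a few lines: hash points into square zones, then scan only the 3x3 block of zones around a query. This is a simplification of E-Zone under the assumption that the search radius does not exceed the zone size; the coordinates below are made up.

```python
from collections import defaultdict
from math import hypot

def build_zones(points, cell):
    """Hash each (x, y) pixel position into a square zone of side `cell`."""
    zones = defaultdict(list)
    for p in points:
        zones[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return zones

def neighbors(q, zones, cell, r):
    """All reference points within Euclidean distance r of query q
    (assumes r <= cell, so the 3x3 zone block suffices)."""
    zx, zy = int(q[0] // cell), int(q[1] // cell)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for p in zones.get((zx + dx, zy + dy), ()):
                if hypot(p[0] - q[0], p[1] - q[1]) <= r:
                    out.append(p)
    return out

zones = build_zones([(10.2, 14.1), (10.9, 14.3), (55.0, 60.0)], cell=2.0)
print(neighbors((10.5, 14.0), zones, cell=2.0, r=1.0))
```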
Application of Algorithmic Differentiation for Exact Jacobians to the Universal Laminar Flame Solver

We introduce algorithmic differentiation (AD) to the C++ Universal Laminar Flame (ULF) solver code. ULF is used for solving generic laminar flame configurations in the field of combustion engineering. We describe in detail the required code changes based on the operator-overloading AD tool CoDiPack. In particular, we introduce a global alias for the scalar type in ULF and generic data structures using templates. To interface with external solvers, template-based functions that handle data conversion and type casts through specialization for the AD type are introduced. The differentiated ULF code is numerically verified and its performance measured by solving two canonical models in the field of chemically reacting flows: a homogeneous reactor and a freely propagating flame. The models' stiff sets of equations are solved with Newton's method. The required Jacobians, calculated with AD, are compared with the existing finite differences (FD) implementation. We observe improvements of AD over FD. The resulting code is more modular, can easily be adapted to new chemistry and transport models, and enables future sensitivity studies for arbitrary model parameters.

Alexander Hück, Sebastian Kreutzer, Danny Messig, Arne Scholtissek, Christian Bischof, Christian Hasse
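CoDiPack is a C++ expression-template tool, but the principle of operator-overloading AD is easy to show with forward-mode dual numbers. The toy residual below is ours, not a ULF model; the point is that the derivative comes out exact, with none of the truncation error of finite differences.

```python
class Dual:
    """Forward-mode AD by operator overloading: carry (value, derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule propagates the derivative alongside the value.
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def residual(u):                # toy nonlinear residual F(u) = u*u + 2*u
    return u * u + 2 * u

x = 3.0
dFdx = residual(Dual(x, 1.0)).dot   # exact: 2*x + 2 = 8.0
print(dFdx)
```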
Morph Resolution Based on Autoencoders Combined with Effective Context Information

In social networks, people often create morphs, a special type of fake alternative name used to avoid internet censorship or for other purposes. Resolving these morphs to the entities they really refer to is very important for natural language processing tasks. Although some methods have been proposed, they do not use the context information of morphs or target entities effectively; they use only the information of words neighboring the morphs or target entities. In this paper, we propose a new approach to resolving morphs based on autoencoders combined with effective context information. First, in order to represent the semantic meanings of morphs or target candidates more precisely, we propose a method to extract effective context information. Next, by integrating morphs or target candidates and their effective context information into autoencoders, we obtain embedding representations of morphs and target candidates. Finally, we rank target candidates based on similarity measurements between the semantic meanings of morphs and target candidates. Our method thus needs little annotated data, and experimental results demonstrate that it significantly outperforms state-of-the-art methods.

Jirong You, Ying Sha, Qi Liang, Bin Wang
Old Habits Die Hard: Fingerprinting Websites on the Cloud

To detect malicious websites on the cloud, where a variety of network traffic is mixed together, a precise detection method is needed. Such a method ought to classify websites over composite network traffic and fit practical constraints like the unidirectional flows seen at ISP gateways. In this work, we investigate website fingerprinting methods and propose a novel model to classify websites on the cloud. The proposed model can recognize websites from traffic collected in a multi-tab setting and performs better than the state-of-the-art method. Furthermore, the method maintains excellent performance with unidirectional flows and real-world traffic by utilizing features extracted only from the request side.

Xudong Zeng, Cuicui Kang, Junzheng Shi, Zhen Li, Gang Xiong
Deep Streaming Graph Representations

Learning graph representations generally means mapping the vertices of a graph into a low-dimensional space in which the proximity of the original data is preserved. However, traditional methods based on the adjacency matrix suffer from high computational cost when encountering large graphs. In this paper, we propose a deep-autoencoder-driven streaming method to learn low-dimensional representations for graphs. The proposed method processes the graph as a data stream, supported by a sampling strategy, to avoid direct computation over the large adjacency matrix. Moreover, a graph-regularized deep autoencoder is employed in the model to preserve different aspects of proximity information. The regularized framework improves the representation power of the learned features during the learning process. We evaluate our method on a clustering task using the features learned by our model. Experiments show that the proposed method achieves competitive results compared with methods that directly apply deep models over the complete graphs.

Minglong Lei, Yong Shi, Peijia Li, Lingfeng Niu
Adversarial Reinforcement Learning for Chinese Text Summarization

This paper proposes a novel adversarial reinforcement learning architecture for Chinese text summarization. Previous abstractive methods commonly use Maximum Likelihood Estimation (MLE) to optimize the generative models, which tends to make auto-generated summaries incoherent and inaccurate. To address this problem, we apply an adversarial reinforcement learning strategy to narrow the gap between generated summaries and human summaries. In our model, we use a generator to generate summaries, a discriminator to distinguish between generated summaries and real ones, and a reinforcement learning (RL) strategy to iteratively evolve the generator. In addition, to better tackle Chinese text summarization, we use a character-level model rather than a word-level one and add Text-Attention to the generator. Experiments were run on two Chinese corpora, consisting of long documents and short texts respectively. Experimental results show that our model significantly outperforms previous deep learning models on ROUGE scores.

Hao Xu, Yanan Cao, Yanmin Shang, Yanbing Liu, Jianlong Tan, Li Guo
Column Concept Determination for Chinese Web Tables via Convolutional Neural Network

Hundreds of millions of tables on the Internet contain a considerable wealth of high-quality relational data. However, web tables tend to lack explicit key semantic information. Therefore, information extraction from tables is usually supplemented by recovering the semantics of the tables, in which column concept determination is an important issue. In this paper, we focus on column concept determination in Chinese web tables. Unlike previous research, a convolutional neural network (CNN) is applied to this task. The main contributions of our work lie in three aspects: first, datasets were constructed automatically based on the infoboxes in Baidu Encyclopedia; second, to determine the column concepts, a CNN classifier was trained to annotate cells in tables, and a majority vote over each column was used to exclude incorrect annotations; third, to verify the effectiveness, we applied the method to a real tabular dataset. Experimental results show that the proposed method outperforms the baseline methods and achieves an average accuracy of 97% for column concept determination.

Jie Xie, Cong Cao, Yanbing Liu, Yanan Cao, Baoke Li, Jianlong Tan
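The column-level vote is simple to state in code. The sketch below assumes per-cell labels as some hypothetical CNN classifier might emit them, and keeps a concept only if it wins a configurable share of the cells; the threshold and labels are our illustrative choices.

```python
from collections import Counter

def column_concept(cell_labels, min_share=0.5):
    """Majority vote over per-cell CNN predictions for one table column.
    Returns the winning concept, or None if no label is frequent enough."""
    votes = Counter(cell_labels)
    label, count = votes.most_common(1)[0]
    return label if count / len(cell_labels) >= min_share else None

# Per-cell labels as a hypothetical cnn_classify(cell) might produce them:
print(column_concept(["city", "city", "person", "city", "city"]))  # 'city'
print(column_concept(["city", "person", "film", "animal"]))        # None
```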
Service-Oriented Approach for Internet of Things

The new era of industrial automation is developing and being implemented quickly, and it is impacting different areas of society. Much progress has been made in this area in recent years, in what is known as the fourth industrial revolution. Every day, factories become more connected and able to communicate and interact in real time between industrial systems. There is a need for flexibility on the shop floor to promote higher customization of products within a short life cycle, and service-oriented architecture is a good option to materialize this. This paper briefly proposes a service-oriented model for the Internet of Things in an Industry 4.0 context. It also discusses the challenges of this new revolution, addressing the introduction of modern communication and computing technologies to maximize interoperability across all the different existing systems. Moreover, it covers technologies that support this new industrial revolution and discusses impacts, possibilities, needs, and adaptation.

Eduardo Cardoso Moraes
Adversarial Framework for General Image Inpainting

We present a novel adversarial framework to solve the arbitrarily sized image random inpainting problem, where a pair of convolutional generator and discriminator is trained jointly to fill relatively large but random "holes". The generator is a symmetric encoder-decoder shaped like an hourglass but with added skip connections. The skip connections act as information shortcuts that transfer necessary details otherwise discarded by the "bottleneck" layer. Our discriminator is trained to distinguish whether an image is natural and to find the hidden holes in a reconstructed image. A combination of a standard pixel-wise L2 loss and an adversarial loss is used to guide the generator to preserve the known part of the original image and fill the missing part with plausible results. Our experiments were conducted on over 1.24M images with a uniformly random 25% missing part. We found the generator is good at capturing structural context and performs well on arbitrarily sized images without complex texture.

Wei Huang, Hongliang Yu
A Stochastic Model to Simulate the Spread of Leprosy in Juiz de Fora

This work aims to simulate the spread of leprosy in Juiz de Fora using the SIR model and considering some of its pathological aspects. SIR models divide the studied population into compartments in relation to the disease, in which S, I and R compartments refer to the groups of susceptible, infected and recovered individuals, respectively. The model was solved computationally by a stochastic approach using the Gillespie algorithm. Then, the results obtained by the model were validated using the public health records database of Juiz de Fora.

Vinícius Clemente Varella, Aline Mota Freitas Matos, Henrique Couto Teixeira, Angélica da Conceição Oliveira Coelho, Rodrigo Weber dos Santos, Marcelo Lobosco
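For concreteness, a minimal Gillespie simulation of the SIR model follows. The rates and population sizes are illustrative placeholders, not the Juiz de Fora calibration.

```python
import random

def gillespie_sir(S, I, R, beta, gamma, t_end):
    """Stochastic SIR via the Gillespie algorithm.
    Event rates: infection beta*S*I/N, recovery gamma*I."""
    t, N, path = 0.0, S + I + R, [(0.0, S, I, R)]
    while t < t_end and I > 0:
        a_inf = beta * S * I / N
        a_rec = gamma * I
        a_tot = a_inf + a_rec
        t += random.expovariate(a_tot)          # exponential waiting time
        if random.random() < a_inf / a_tot:     # choose which event fires
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
        path.append((t, S, I, R))
    return path

trace = gillespie_sir(S=990, I=10, R=0, beta=0.30, gamma=0.10, t_end=365)
print(trace[-1])   # final (t, S, I, R) of one stochastic realization
```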
Data Fault Identification and Repair Method of Traffic Detector

The quality control and evaluation of traffic detector data are a prerequisite for subsequent applications. Considering that the PCA method is not ideal for detecting fault information with time-varying and multi-scale features, an improved MSPCA model is proposed in this paper. Combining wavelet packet energy analysis with principal component analysis, it realizes data fault identification for traffic detectors. On the basis of traditional multi-scale principal component analysis, detailed information is obtained by wavelet packet multi-scale decomposition, and a principal component analysis model is established for the matrices at each scale; fault data are separated by the wavelet packet energy difference; and the fault data are then repaired according to the temporal characteristics and spatial correlation of the detector data. A case study verified the feasibility of the method.
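
The wavelet packet energy analysis step can be sketched with PyWavelets: decompose the detector series to a given level and compute the energy of each frequency band; comparing band energies between windows is then one way to expose fault data (the wavelet choice and level are placeholders):

    import numpy as np
    import pywt  # PyWavelets

    def wp_energies(signal, wavelet="db4", level=3):
        """Energy of each wavelet packet node at a given level.

        The series is decomposed into 2**level frequency bands and the
        energy (sum of squared coefficients) of each band is returned,
        keyed by the node path such as 'aad'.
        """
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        return {node.path: float(np.sum(np.asarray(node.data) ** 2))
                for node in wp.get_level(level, order="natural")}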

Xiao-lu Li, Jia-xu Chen, Xin-ming Yu, Xi Zhang, Fang-shu Lei, Peng Zhang, Guang-yu Zhu
The Valuation of CCIRS with a New Design

This paper presents a study of pricing a credit derivative – a credit contingent interest rate swap (CCIRS) with a new design, which allows part of the premium to be paid later provided no default event happens. This feature makes the contract more flexible and supplies cash liquidity to the buyer, making the contract more attractive. Under the reduced-form framework, we provide a pricing model in which the default intensity is correlated with the interest rate, which follows a Cox-Ingersoll-Ross (CIR) process. A semi-closed-form solution is obtained, from which numerical results and parameter analyses are carried out. In particular, we discuss the trigger point for the proportion of the deferred payment at which the initial premium becomes zero.
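
The CIR dynamics named above, $$dx_t = \kappa(\theta - x_t)\,dt + \sigma\sqrt{x_t}\,dW_t$$, can be simulated with a full-truncation Euler scheme, e.g. to Monte-Carlo-check a semi-closed-form solution; parameter values here are placeholders:

    import numpy as np

    def simulate_cir(x0, kappa, theta, sigma, T, n_steps, n_paths, seed=0):
        """Full-truncation Euler paths of a CIR process (short rate or
        default intensity in a reduced-form setting)."""
        rng = np.random.default_rng(seed)
        dt = T / n_steps
        x = np.full(n_paths, float(x0))
        paths = [x.copy()]
        for _ in range(n_steps):
            z = rng.standard_normal(n_paths)
            # sqrt is taken of max(x, 0) so paths stay well defined
            x = x + kappa * (theta - x) * dt \
                + sigma * np.sqrt(np.maximum(x, 0.0) * dt) * z
            paths.append(x.copy())
        return np.array(paths)   # shape (n_steps + 1, n_paths)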

Huaying Guo, Jin Liang
Method of Node Importance Measurement in Urban Road Network

The node importance measurement plays an important role in analyzing the reliability of the urban road network. In this paper, the topological structure, geographic information and traffic flow characteristics of the urban road network are all considered, and methods of node importance measurement are proposed based on a spatially weighted degree model and the h-index, from different perspectives. Experiments are given to show the efficiency and practicability of the proposed methods.
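
For the h-index part of the proposal, a node's h-index can be computed from the degrees of its neighbors: the largest h such that the node has at least h neighbors each of degree at least h. A small sketch (the spatial weighting of the degree model is omitted):

    def node_h_index(neighbor_degrees):
        """h-index of a node given the degrees of its neighbors."""
        degrees = sorted(neighbor_degrees, reverse=True)
        h = 0
        for i, d in enumerate(degrees, start=1):
            if d >= i:
                h = i      # at least i neighbors have degree >= i
            else:
                break
        return h

    # e.g. node_h_index([5, 4, 3, 1]) -> 3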

Dan-qi Liu, Jia-lin Wang, Xiao-lu Li, Xin-ming Yu, Kang Song, Xi Zhang, Fang-shu Lei, Peng Zhang, Guang-yu Zhu
AdaBoost-LSTM Ensemble Learning for Financial Time Series Forecasting

A hybrid ensemble learning approach combining the AdaBoost algorithm and the Long Short-Term Memory (LSTM) network is proposed to forecast financial time series. Firstly, the AdaBoost algorithm is used to train the database and obtain weighted training samples. Secondly, an LSTM is used to forecast each training sample separately. Thirdly, the AdaBoost algorithm is used to integrate the forecasting results of all the LSTM predictors into the ensemble result. Two major daily exchange rate datasets and two stock market index datasets are selected for model evaluation and comparison. The empirical results demonstrate that the proposed AdaBoost-LSTM ensemble learning approach outperforms other single forecasting models and ensemble learning approaches. This suggests that the AdaBoost-LSTM ensemble learning approach is highly promising for financial time series forecasting, especially for time series with nonlinearity and irregularity, such as exchange rates and stock indexes.
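
One common way to combine AdaBoost with regression base learners is the AdaBoost.R2 scheme sketched below; whether the paper follows R2 exactly is not stated, so this is only an illustrative skeleton in which make_learner() would build an LSTM wrapper exposing fit/predict:

    import numpy as np

    def adaboost_regression(X, y, make_learner, rounds=10):
        """AdaBoost.R2-style ensemble skeleton over generic base learners."""
        n = len(y)
        w = np.full(n, 1.0 / n)
        learners, betas = [], []
        for _ in range(rounds):
            model = make_learner()
            model.fit(X, y, sample_weight=w)
            err = np.abs(model.predict(X) - y)
            err = err / (err.max() + 1e-12)        # relative loss in [0, 1]
            eps = np.sum(w * err)
            if eps >= 0.5:                          # learner too weak: stop
                break
            beta = eps / (1.0 - eps)
            w = w * beta ** (1.0 - err)             # shrink weights of easy points
            w = w / w.sum()
            learners.append(model)
            betas.append(beta)

        def predict(Xq):
            # weighted combination with log(1/beta) weights (R2 uses a
            # weighted median; a weighted mean is shown for brevity)
            coef = np.log(1.0 / np.array(betas))
            preds = np.array([m.predict(Xq) for m in learners])
            return (coef[:, None] * preds).sum(axis=0) / coef.sum()
        return predict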

Shaolong Sun, Yunjie Wei, Shouyang Wang
Analysis of Bluetooth Low Energy Detection Range Improvements for Indoor Environments

Real Time Location Systems (RTLS) research identifies Bluetooth Low Energy as one of the technologies that promise an acceptable response to the requirements of indoor environments. Against this background, we investigate the latest developments in Bluetooth, especially with regard to its range and possible use in indoor environments. Several venues at the university were used to conduct the experiments, mimicking typical indoor environments. The results indicated an acceptable range in line of sight as well as through obstacles such as glass, drywall partitions and solid brick walls. Future research will investigate methods to determine the position of Bluetooth Low Energy devices for the possible location of patients and assets.

Jay Pancham, Richard Millham, Simon James Fong
Study on an N-Parallel FENE-P Constitutive Model Based on Multiple Relaxation Times for Viscoelastic Fluid

An N-parallel FENE-P constitutive model based on multiple relaxation times is proposed in this paper, which aims at accurately describing the apparent viscosity of viscoelastic fluids. The establishment of the N-parallel FENE-P constitutive model and the numerical approach to calculate the apparent viscosity are each presented in detail. To validate the performance of the proposed constitutive model, it is compared with the conventional FENE-P constitutive model (which has only a single relaxation time) in estimating the apparent viscosity of two common viscoelastic fluids: polymer and surfactant solutions. The comparative results indicate that the N-parallel FENE-P constitutive model represents the apparent viscosity of polymer solutions more accurately than the traditional model over the whole range of shear rates (0.1 s−1–1000 s−1), and the advantage is more noteworthy at higher shear rates (10 s−1–1000 s−1). Although neither the proposed model nor the traditional one captures the interesting shear-thickening behavior of surfactant solutions, the proposed constitutive model still has an advantage over the traditional one in depicting the apparent viscosity and the first normal stress difference. In addition, the N-parallel FENE-P constitutive model demonstrates better applicability and favorable adjustability of its parameters.

Jingfa Li, Bo Yu, Shuyu Sun, Dongliang Sun
RADIC Based Fault Tolerance System with Dynamic Resource Controller

Continuously growing High-Performance Computing requirements increase the number of components and, at the same time, failure probabilities. Long-running parallel applications are directly affected by this phenomenon, which disrupts their execution when failures occur. MPI, the well-known standard for parallel applications, follows a fail-stop semantic, requiring application owners to restart the whole execution when hard failures appear, losing time and computation data. Fault Tolerance (FT) techniques approach this issue by providing high availability for users' application executions, though they add significant resource and time costs. In this paper, we present a Fault Tolerance Manager (FTM) framework based on the RADIC architecture, which provides FT protection to parallel applications implemented with MPI, allowing them to complete successfully despite failures. The solution is implemented in the application layer following the uncoordinated and semi-coordinated rollback recovery protocols. It uses a sender-based message logger to store messages exchanged between the application processes, and checkpoints only the process data required for restart in case of failure. The solution uses the concepts of ULFM for failure detection and recovery. Furthermore, a dynamic resource controller is added to the proposal, which monitors the message logger buffers and performs actions to maintain an acceptable level of protection. Experimental validation verifies the FTM functionality using two private cluster infrastructures.
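
A sender-based message logger of the kind mentioned keeps each outgoing message at the sender, so a restarted receiver can be replayed what it lost since its last checkpoint. A toy sketch (class and method names are illustrative, not FTM's actual API):

    from collections import defaultdict

    class SenderMessageLogger:
        """Minimal sender-based message log keyed by destination rank."""
        def __init__(self):
            self.log = defaultdict(list)    # dest rank -> [(seq, payload)]
            self.seq = defaultdict(int)     # dest rank -> next sequence number

        def record_send(self, dest, payload):
            s = self.seq[dest]
            self.log[dest].append((s, payload))
            self.seq[dest] = s + 1
            return s

        def replay_since(self, dest, last_seq_seen):
            """Messages the restarted process 'dest' never processed."""
            return [m for m in self.log[dest] if m[0] > last_seq_seen]

        def truncate(self, dest, checkpointed_seq):
            """Discard entries covered by dest's latest checkpoint."""
            self.log[dest] = [m for m in self.log[dest] if m[0] > checkpointed_seq]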

Jorge Villamayor, Dolores Rexachs, Emilio Luque
Effective Learning with Joint Discriminative and Representative Feature Selection

Feature selection plays an important role in various machine learning tasks such as classification. In this paper, we focus on both the discriminative and representative abilities of features, and propose a novel feature selection method that jointly exploits both labeled and unlabeled data. In particular, we implement discriminative feature selection to extract the features that best reveal the underlying classification labels, and develop representative feature selection to obtain the features with optimal self-expressive performance. Both methods are formulated as joint $$ \ell_{2,1} $$-norm minimization problems. An effective alternating minimization algorithm is also introduced, with analytic solutions obtained in a column-by-column manner. Extensive experiments on various classification tasks demonstrate the advantage of the proposed method over several state-of-the-art methods.
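
For reference, the $$ \ell_{2,1} $$ norm of a matrix is the sum of the Euclidean norms of its rows; minimizing it zeroes out entire rows, which is what makes it a feature-selecting regularizer here. A one-function sketch:

    import numpy as np

    def l21_norm(W):
        """Sum of the Euclidean norms of the rows of W."""
        return float(np.sum(np.sqrt(np.sum(W ** 2, axis=1))))

    # A row-sparse W keeps only a few features:
    # l21_norm(np.array([[0.0, 0.0], [3.0, 4.0]])) == 5.0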

Shupeng Wang, Xiao-Yu Zhang, Xianglei Dang, Binbin Li, Haiping Wang
Agile Tuning Method in Successive Steps for a River Flow Simulator

Scientists and engineers continuously build models to interpret axiomatic theories or explain the reality of the universe of interest, reducing the gap between formal theory and observation in practice. Our work focuses on dealing with the uncertainty of the input data of a model in order to improve the quality of the simulation. To reduce this error, scientists and engineers implement model-tuning techniques and look for ways to reduce their high computational cost. This article proposes a methodology for adjusting a simulator of a complex dynamic system that models wave translation along river channels, with emphasis on the reduction of computational resources. We propose calibrating the simulator by using a methodology based on successive adjustment steps of the model, built on parametric simulation. The input scenarios used to run the simulator at every step were obtained in an agile way, achieving a model improvement of up to 50% in the reduction of the simulated data error. These results encouraged us to extend the adjustment process over a larger domain region.

Mariano Trigila, Adriana Gaudiani, Emilio Luque
A Parallel Quicksort Algorithm on Manycore Processors in Sunway TaihuLight

In this paper we present a highly efficient parallel quicksort algorithm on the SW26010, the heterogeneous manycore processor that makes Sunway TaihuLight the top-ranked supercomputer in the world. Motivated by the software cache and on-chip communication design of the SW26010, we propose a two-phase quicksort algorithm, the first phase counting elements and the second moving them. To make the best of this manycore architecture, we design a decentralized workflow, and further optimize memory access and balance the workload. Experiments show that our algorithm scales efficiently to 64 cores of the SW26010, achieving more than 32X speedup for int32 elements on all kinds of data distributions. This result outperforms the strong-scaling result of the Intel TBB (Threading Building Blocks) version of quicksort on the x86-64 architecture.
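
The count-then-move idea can be seen in the partition step: once the number of elements below the pivot is known, every write offset is fixed in advance, so in the parallel version each core writes to a disjoint range without contention. A sequential sketch of one partition step under that two-phase scheme (the SW26010-specific communication is not shown):

    def two_phase_partition(a, pivot):
        """Partition a around pivot in two phases: count, then move."""
        # Phase 1: counting -- how many elements fall below the pivot.
        n_lo = sum(1 for x in a if x < pivot)
        # Phase 2: moving -- target positions are known in advance.
        out, lo, hi = [None] * len(a), 0, n_lo
        for x in a:
            if x < pivot:
                out[lo] = x
                lo += 1
            else:
                out[hi] = x
                hi += 1
        return out, n_lo

    # two_phase_partition([3, 1, 4, 1, 5], 3) -> ([1, 1, 3, 4, 5], 2)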

Siyuan Ren, Shizhen Xu, Guangwen Yang
How Is the Forged Certificates in the Wild: Practice on Large-Scale SSL Usage Measurement and Analysis

Forged certificates are a prominent issue in real-world deployments of SSL/TLS - the most widely used encryption protocols for Internet security - and are typically used in man-in-the-middle (MITM) attacks, proxies, anonymous or malicious services, personal or temporary services, etc. They wreck SSL encryption, leading to privacy leakage and severe security risks. In this paper, we study forged certificates in the wild based on a long-term, large-scale passive measurement. Combining certificate transparency (CT) logs with our measurement results, nearly 3 million forged certificates against the Alexa Top 10K sites are identified and studied. Our analysis reveals the causes and preferences of forged certificates, as well as several significant differences from benign ones. Finally, we discover several IP addresses used for MITM attacks through forged certificate tracing and deep behavior analysis. We believe our study can contribute to research on SSL/TLS security as well as real-world protocol usage.

Mingxin Cui, Zigang Cao, Gang Xiong
Managing Cloud Data Centers with Three-State Server Model Under Job Abandonment Phenomenon

To improve the quality of system service, cloud vendors often migrate user job requests that have already waited a long time in the queues of a busy cluster to other available clusters. This strategy brings about the job abandonment phenomenon in data centers, which disturbs server management mechanisms by decreasing control effectiveness, increasing energy consumption, and so on. In this paper, based on the three-state model proposed in previous works, we develop a novel model and its management strategy for cloud data centers using a finite queue. Our proposed model is tested in a simulated cloud environment using CloudSim. The outcomes show that our three-state server model for data centers operates well under the job abandonment phenomenon.

Binh Minh Nguyen, Bao Hoang, Huy Tran, Viet Tran
The Analysis of the Effectiveness of the Perspective-Based Observational Tunnels Method by the Example of the Evaluation of Possibilities to Divide the Multidimensional Space of Coal Samples

Methods of qualitative analysis of multidimensional data using visualization rely on transforming a multidimensional space into a two-dimensional one. In this way, complicated multidimensional data can be presented on a two-dimensional computer screen. This makes it possible to conduct a qualitative analysis of the data in the way most natural for people, through the sense of sight. The application of complex algorithms designed to search for multidimensional data with specific properties can be replaced by such a qualitative analysis: some qualitative characteristics are simply visible in the two-dimensional image representing the data. The new perspective-based observational tunnels method is an example of a multidimensional data visualization method. In this paper, the method is used to present and analyze a real set of seven-dimensional data describing coal samples obtained from two hard coal mines. This paper presents for the first time the application of the perspective-based observational tunnels method to evaluate the possibility of dividing the multidimensional space of coal samples by their susceptibility to fluidal gasification, in order to verify whether the method can indicate such a division. As a result, views of the analyzed data were obtained that make it possible to separate the areas of the multidimensional space occupied by samples with different applicability to the gasification process.

Dariusz Jamroz
Urban Data and Spatial Segregation: Analysis of Food Services Clusters in St. Petersburg, Russia

This paper presents an approach to studying spatial segregation through clusterization of food services in St. Petersburg, Russia, based on analysis of geospatial and user-generated data from open sources. We consider a food service as an urban place with social and symbolic features, and we track how the popularity (number of reviews) and rating of food venues in Google Maps correlate with the formation of food venue clusters. We also analyze environmental parameters that correlate with the clusterization of food services, such as the functional load of the surrounding built environment and the presence of public spaces. We observe that the main predictors of food service cluster formation are shops, services and offices, while public spaces (parks and river embankments) do not draw food venues. Popular and highly rated food venues form clusters in the historic city centre which collocate with existing creative spaces; unpopular and poorly rated food venues do not form clusters and are more widely spread in peripheral city areas.

Aleksandra Nenko, Artem Konyukhov, Sergey Mityagin
Control Driven Lighting Design for Large-Scale Installations

Large-scale photometric computations carried out in the course of lighting design preparation have already been the subject of numerous works. They focused either on improving the quality of the design, for example with respect to energy efficiency, or dealt with issues concerning computational complexity and the computations as such. However, the mutual influence of the design process and dynamic dimming of luminaires has not yet been addressed. If road segments are considered separately, suboptimal results can occur in places such as junctions. Considering the entire road network at once complicates the computation procedures and requires additional processing time. This paper focuses on a method to make this more efficient approach viable by applying a reversed scheme of design and control. The crucial component of both the design and control modules is the data inventory, whose role is also discussed in the paper.

Adam Sȩdziwy, Leszek Kotulski, Sebastian Ernst, Igor Wojnicki
An OpenMP Implementation of the TVD–Hopmoc Method Based on a Synchronization Mechanism Using Locks Between Adjacent Threads on Xeon Phi (TM) Accelerators

This work focuses on the study of the 1-D TVD-Hopmoc method executed in shared-memory manycore environments. In particular, this paper studies barrier costs on Intel® Xeon Phi™ (KNC and KNL) accelerators when using the OpenMP standard. This paper employs an explicit synchronization mechanism to reduce spin and thread-scheduling times in an OpenMP implementation of the 1-D TVD-Hopmoc method. Basically, we define an array that represents the threads, and the new scheme consists of synchronizing only adjacent threads. Moreover, the new approach reduces the OpenMP scheduling time by employing an explicit work-sharing strategy: at the beginning of the process, the array that represents the computational mesh of the numerical method is partitioned among the threads, instead of letting the OpenMP API perform this task. Thereby, the new scheme diminishes the OpenMP spin time by avoiding OpenMP barriers, using an explicit synchronization mechanism in which a thread waits only for its two adjacent threads. The results of the new approach are compared with a basic parallel implementation of the 1-D TVD-Hopmoc method. Specifically, numerical simulations show that the new approach achieves promising performance gains in shared-memory manycore environments.
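
The neighbor-only synchronization can be pictured as each thread publishing a step counter and waiting only until its left and right neighbors have caught up. A Python rendition for illustration (the paper works at the OpenMP level with locks, so this is an assumption-laden analogue, not its actual code):

    import threading

    class NeighborSync:
        """Each worker waits only for its two adjacent workers instead
        of a global barrier after every sweep."""
        def __init__(self, n_threads):
            self.n = n_threads
            self.steps = [0] * n_threads
            self.cond = threading.Condition()

        def advance(self, i):
            with self.cond:
                self.steps[i] += 1
                my_step = self.steps[i]
                self.cond.notify_all()
                # block until both neighbors have reached my step
                self.cond.wait_for(lambda: (
                    (i == 0 or self.steps[i - 1] >= my_step) and
                    (i == self.n - 1 or self.steps[i + 1] >= my_step)))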

Frederico L. Cabral, Carla Osthoff, Gabriel P. Costa, Sanderson L. Gonzaga de Oliveira, Diego Brandão, Mauricio Kischinhevsky
Data-Aware Scheduling of Scientific Workflows in Hybrid Clouds

In this paper, we address the scheduling of scientific workflows in hybrid clouds considering data placement and present the Hybrid Scheduling for Hybrid Clouds (HSHC) algorithm. HSHC is a two-phase scheduling algorithm with a genetic algorithm based static phase and dynamic programming based dynamic phase. We evaluate HSHC with both a real-world scientific workflow application and random workflows in terms of makespan and costs.

Amirmohammad Pasdar, Khaled Almi’ani, Young Choon Lee
Large Margin Proximal Non-parallel Support Vector Classifiers

In this paper, we propose a novel large-margin proximal non-parallel twin support vector machine for binary classification. Its significant advantages over the twin support vector machine are that the structural risk minimization principle is implemented and that, by adopting an uncommon constraint formulation for the primal problem, the proposed method avoids computing the large inverse matrices before training that are inevitable in the formulation of the twin support vector machine. In addition, a dual coordinate descent algorithm is used to solve the optimization problems to accelerate training. Experimental results exhibit the effectiveness and classification accuracy of the proposed method.

Mingzeng Liu, Yuanhai Shao
The Multi-core Optimization of the Unbalanced Calculation in the Clean Numerical Simulation of Rayleigh-Bénard Turbulence

The so-called clean numerical simulation (CNS) is used to simulate the Rayleigh-Bénard (RB) convection system. Compared with direct numerical simulation (DNS), it largely improves the accuracy and reliability of investigating turbulent flows. Although CNS can control numerical noise well, its computational cost is much higher. In order to simulate the system in a reasonable period, the calculation schemes of CNS require redesign. In this paper, aiming at the CNS of the two-dimensional RB system, we first propose the notions of the equal difference matrix and the balance point set, which are crucial to model the unbalanced calculation of the system on a multi-core platform. Then, based on these notions, we present algorithms to optimize the unbalanced calculation. We prove that our algorithm is optimal when the core number is a power of 2, and that it approaches the optimum when the core number is not a power of 2. Finally, we compare the results of our optimized algorithms with others to demonstrate the effectiveness of our optimization.

Lu Li, Zhiliang Lin, Yan Hao
ES-GP: An Effective Evolutionary Regression Framework with Gaussian Process and Adaptive Segmentation Strategy

This paper proposes a novel evolutionary regression framework with Gaussian process and an adaptive segmentation strategy (named ES-GP) for regression problems. The proposed framework consists of two components, namely the outer DE and the inner DE. The outer DE focuses on finding the best segmentation scheme, while the inner DE focuses on optimizing the hyper-parameters of the GP model for each segment. These two components work cooperatively to find a piecewise Gaussian process solution that is flexible and effective for complicated regression problems. The proposed ES-GP has been tested on four artificial regression problems and two real-world time series regression problems. The experimental results show that ES-GP is capable of improving prediction performance over non-segmented or fixed-segmentation solutions.

Shijia Huang, Jinghui Zhong
Evaluating Dynamic Scheduling of Tasks in Mobile Architectures Using ParallelME Framework

Recently, mobile phones have stopped being just devices for basic communication and have become providers of many applications that require increasing performance for a good user experience. Inside today's mobile phones we find different processing units (PUs) with high computational capacity, such as multicore architectures and co-processors like GPUs. Libraries and run-time environments have been proposed to improve application performance by taking advantage of different PUs in a transparent way. Among these environments we can highlight ParallelME. Despite the importance of task-scheduling strategies in these environments, ParallelME had implemented only the First Come First Served (FCFS) strategy. In this paper we extended the ParallelME framework by implementing and evaluating two different dynamic scheduling strategies, Heterogeneous Earliest Finish Time (HEFT) and the Performance-Aware Multiqueue Scheduler (PAMS). We evaluate these strategies on synthetic applications and compare the proposals with FCFS. For some scenarios, PAMS proved to be up to 39% more efficient than FCFS. These gains usually imply lower energy consumption, which is very desirable when working with mobile architectures.

Rodrigo Carvalho, Guilherme Andrade, Diogo Santana, Thiago Silveira, Daniel Madeira, Rafael Sachetto, Renato Ferreira, Leonardo Rocha
An OAuth2.0-Based Unified Authentication System for Secure Services in the Smart Campus Environment

Based on the construction of Shandong Normal University's smart authentication system, this paper researches the key technologies of the Open Authorization (OAuth) protocol, which allows third-party applications accessing online services to obtain secure authorization in a simple and standardized way. Through analysis of the OAuth 2.0 standard, the open API details between different applications, and the concrete implementation procedure of the smart campus authentication platform, this paper summarizes research methods for building a smart campus application system from existing educational resources in a cloud computing environment. Through security experiments and theoretical analysis, the system has been shown to run stably and reliably, to be flexible and easy to integrate with existing smart campus services, and to efficiently improve the security and reliability of campus data acquisition. Our work also provides a universal reference for the construction of smart campus authentication systems.
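
The heart of OAuth 2.0's authorization-code grant (RFC 6749), on which such a unified authentication platform builds, is the exchange of a one-time authorization code for an access token. A sketch with a placeholder token endpoint (not the university's actual service):

    import requests

    # Placeholder endpoint; a real deployment publishes its own token URL.
    TOKEN_URL = "https://sso.example.edu/oauth2/token"

    def exchange_code_for_token(code, client_id, client_secret, redirect_uri):
        """Swap the one-time authorization code for an access token
        using the standard RFC 6749 form fields."""
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": redirect_uri,
            "client_id": client_id,
            "client_secret": client_secret,
        }, timeout=10)
        resp.raise_for_status()
        token = resp.json()      # contains access_token, expires_in, ...
        return token["access_token"]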

Baozhong Gao, Fangai Liu, Shouyan Du, Fansheng Meng
Time Series Cluster Analysis on Electricity Consumption of North Hebei Province in China

In recent years, China has vigorously promoted the building of an ecological civilization and regarded green low-carbon development as one of the important directions and tasks for industrial transformation and upgrading. This calls for accelerating industrial energy conservation and consumption reduction, accelerating the implementation of cleaner production, accelerating the use of renewable resources, promoting industrial savings and cleanliness, advancing changes toward low-carbon and high-efficiency production, and promoting industrial restructuring and upgrading. This series of measures has had a negative impact on the scale of industrial production in the region, thereby affecting its electricity consumption. Based on the electricity consumption data of 31 counties in northern Hebei, this paper uses a time series clustering method to cluster the counties' electricity consumption. The results show that electricity consumption differs across counties and that the macro-control policies have different impacts on different types of counties.
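
A common way to cluster consumption series is to z-normalize each county's series (so counties group by the shape of their consumption curve rather than its absolute level) and run k-means; the paper's exact time-series clustering method may differ, so the following is only a plausible sketch:

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_counties(series, k=3, seed=0):
        """Cluster county-level consumption series of shape
        (n_counties, n_periods) after per-series z-normalization."""
        X = np.asarray(series, dtype=float)
        X = (X - X.mean(axis=1, keepdims=True)) \
            / (X.std(axis=1, keepdims=True) + 1e-9)
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
        return km.labels_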

Luhua Zhang, Miner Liu, Jingwen Xia, Kun Guo, Jun Wang
Effective Semi-supervised Learning Based on Local Correlation

Traditionally, the manipulation of unlabeled instances is based solely on the predictions of the existing model, which is vulnerable to an ill-posed training set, especially when the labeled instances are limited or imbalanced. To address this issue, this paper investigates local correlation based on the entire data distribution, which is leveraged as informative guidance to ameliorate the negative influence of a biased model. To formulate the self-expressive property between instances within a limited vicinity, we develop a sparse self-expressive representation learning method based on column-wise sparse matrix optimization, with an optimization algorithm based on alternating iteration. We then propose a novel framework, named semi-supervised learning based on local correlation, to effectively integrate explicit prior knowledge and the implicit data distribution. In this way, the individual predictions of the learning model are refined by the collective representation, and pseudo-labeled instances are selected more effectively to augment semi-supervised learning performance. Experimental results on multiple classification tasks indicate the effectiveness of the proposed algorithm.

Xiao-Yu Zhang, Shupeng Wang, Xin Jin, Xiaobin Zhu, Binbin Li
Detection and Prediction of House Price Bubbles: Evidence from a New City

In the early stages of growth of a city, housing market fundamentals are uncertain. This could attract speculative investors as well as actual housing demand. Sejong is a recently built administrative city in South Korea. Most government departments and public agencies have moved into it, while others are in the process of moving or plan to do so. In Sejong, a drastic escalation in house prices has been noted over the last few years, but at the same time, the number of vacant housing units has increased. Using the present value model, lease-price ratio, and log-periodic power law, this study examines the bubbles in the Sejong housing market. The analysis results indicate that (i) there are significant house price bubbles, (ii) the bubbles are driven by speculative investment, and (iii) the bubbles are likely to burst earlier here than in other cities. The approach in this study can be applied to identifying pricing bubbles in other cities.
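
The log-periodic power law (LPPL) mentioned above is commonly written as $$\ln p(t) = A + B(t_c - t)^{m} + C(t_c - t)^{m}\cos\big(\omega \ln(t_c - t) - \phi\big)$$, where $$t_c$$ is the critical time at which the bubble regime is expected to end, $$m$$ and $$\omega$$ control the power-law growth and the log-periodic oscillations, and $$A, B, C, \phi$$ are fitted constants; a fitted $$t_c$$ earlier than in other cities is what finding (iii) refers to.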

Hanwool Jang, Kwangwon Ahn, Dongshin Kim, Yena Song
A Novel Parsing-Based Automatic Domain Terminology Extraction Method

As domain terminology plays a crucial role in the study of every domain, automatic domain terminology extraction methods are in high demand. In this paper, we propose a novel parsing-based method which generates domain compound terms by utilizing the dependency relations between words. Dependency parsing is used to identify the dependency relations. In addition, a multi-factor evaluator is proposed to evaluate the significance of each candidate term, which not only considers frequency but also includes the influence of other factors affecting domain terminology. Experimental results demonstrate that the proposed domain terminology extraction method outperforms the traditional POS-based method in both precision and recall.
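
Dependency parsing exposes relations such as compound and amod that link modifier tokens to their noun heads, which is one way to generate candidate compound terms. A sketch using spaCy (an illustrative assumption - the paper uses its own parser and adds a multi-factor significance evaluator, which is omitted here; the small English model must be installed):

    import spacy

    def compound_terms(text, nlp=None):
        """Harvest candidate compound terms from dependency relations:
        tokens attached to a noun head via 'compound' or 'amod' are
        merged with that head into a two-word candidate."""
        nlp = nlp or spacy.load("en_core_web_sm")
        doc = nlp(text)
        terms = set()
        for tok in doc:
            if tok.dep_ in ("compound", "amod") and tok.head.pos_ == "NOUN":
                terms.add(f"{tok.text} {tok.head.text}")
        return terms

    # compound_terms("The support vector machine uses a kernel function.")
    # -> {"kernel function", ...}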

Ying Liu, Tianlin Zhang, Pei Quan, Yueran Wen, Kaichao Wu, Hongbo He
Remote Procedure Calls for Improved Data Locality with the Epiphany Architecture

This paper describes the software implementation of an emerging parallel programming model for partitioned global address space (PGAS) architectures. Applications with irregular memory access to distributed memory do not perform well on conventional symmetric multiprocessing (SMP) architectures with hierarchical caches. Such applications tend to scale with the number of memory interfaces and corresponding memory access latency. Using a remote procedure call (RPC) technique, these applications may see reduced latency and higher throughput compared to remote memory access or explicit message passing. The software implementation of a remote procedure call method detailed in the paper is designed for the low-power Adapteva Epiphany architecture.

James A. Ross, David A. Richie
Identifying the Propagation Sources of Stealth Worms

Worms can spread in various ways with great destructive power, which poses a great threat to network security; one example is the WannaCry worm of May 2017. By identifying the sources of worms, we can better understand the causation of risks and then implement better security measures. However, currently available detection systems may not be able to fully detect existing threats when worms with stealth characteristics do not show any abnormal behaviors. This paper makes two key contributions toward the challenging problem of identifying propagation sources: (1) an algorithm that modifies observed results based on Bayes' rule is proposed, which can correct the results for possibly missed nodes, improving the accuracy of identifying propagation sources; (2) we apply the branch-and-bound method, effectively reducing the traversal space and improving the efficiency of the algorithm by calculating upper and lower bounds on the infection probability of nodes. Through experimental simulation on a real network, we verified the accuracy and high efficiency of the algorithm for tracing the sources of worms.

Yanwei Sun, Lihua Yin, Zhen Wang, Yunchuan Guo, Binxing Fang
Machine Learning Based Text Mining in Electronic Health Records: Cardiovascular Patient Cases

This article presents the approach and experimental results of machine-learning-based text mining methods applied to EHR analysis. It is shown how applying ML-based text mining to identify classes and feature correlations increases the capability of prediction models. The analysis of the data in EHRs is of significant importance because they contain valuable information that is crucial for decision-making during patient treatment. The preprocessing of EHRs using regular expressions and the vectorization and clustering of medical text data are shown. Correlation analysis confirms the dependence between the identified diagnosis classes and the individual characteristics of patients and episodes. A medical interpretation of the findings is also presented with the support of physicians from a specialized medical center, which confirms the effectiveness of the approach.
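
A generic version of the preprocess / vectorize / cluster pipeline described above, using regular expressions, TF-IDF and k-means (the study's own regular expressions and its cardiovascular records are not reproduced here, so this is only a structural sketch):

    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    def cluster_notes(notes, n_clusters=5, seed=0):
        """Regex cleanup, TF-IDF vectorization and k-means clustering
        of free-text EHR notes; returns one cluster label per note."""
        cleaned = [re.sub(r"[^\w\s]|\d+", " ", n.lower()) for n in notes]
        X = TfidfVectorizer(max_features=5000,
                            stop_words="english").fit_transform(cleaned)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
        return km.labels_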

Sergey Sikorskiy, Oleg Metsker, Alexey Yakovlev, Sergey Kovalchuk
Evolutionary Ensemble Approach for Behavioral Credit Scoring

This paper is concerned with the potential quality of scoring models that can be achieved using not only application form data but also behavioral data extracted from transactional datasets. Several model types and different ensemble configurations were analyzed in a set of experiments. Another aim of the research is to prove the effectiveness of evolutionary optimization of the ensemble structure and to use it to increase the quality of default prediction. Example results are presented using models for borrower default prediction trained on a set of features (purchase amount, location, merchant category) extracted from a transactional dataset of bank customers.

Nikolay O. Nikitin, Anna V. Kalyuzhnaya, Klavdiya Bochenina, Alexander A. Kudryashov, Amir Uteuov, Ivan Derevitskii, Alexander V. Boukhanovsky
Detecting Influential Users in Customer-Oriented Online Communities

Every year, user activity in various social networks increases. Different business entities can analyze the behavior of their audience in more detail and adapt their products and services to its needs. Social network data allow us not only to find influential individuals according to their local topological properties, but also to investigate their preferences, and thus to personalize strategies of interaction with opinion leaders. However, the information channels of organizations (e.g., the community of a bank in a social network) include not only the target audience but also employees and fake accounts. This lowers the applicability of network-based methods for identifying influential nodes. In this study, we propose an algorithm for discovering influential nodes which combines topological metrics with individual characteristics of users' profiles and measures of their activity. The algorithm is used along with a preliminary clustering procedure, aimed at identifying groups of users with different roles, and with an algorithm for profiling the interests of users according to their subscriptions. The applicability of the approach is tested using data from the community of a large Russian bank in the vk.com social network. Our results show that: (i) it is important to consider the user's role in the leader detection algorithm, (ii) the roles of poorly described users may be effectively identified using the roles of their neighbors, and (iii) the proposed approach allows for finding users with high actual informational influence and for distinguishing their key interests.

Ivan Nuzhdenko, Amir Uteuov, Klavdiya Bochenina
GeoSkelSL: A Python High-Level DSL for Parallel Computing in Geosciences

This paper presents GeoSkelSL, a Domain Specific Language (DSL) dedicated to Geosciences that helps non-experts in computer science write their own parallel programs. This DSL is embedded in the Python language, which is widely used in Geosciences. The program written by the user is translated into an efficient C++/MPI parallel program using implicit parallel patterns. The tools associated with the DSL also generate scripts that allow the user to automatically compile and run the resulting program on the targeted computer.

Kevin Bourgeois, Sophie Robert, Sébastien Limet, Victor Essayan
Precedent-Based Approach for the Identification of Deviant Behavior in Social Media

The current paper is devoted to the problem of identifying deviant users in social media. For this purpose, each user of a social media source is described through a profile that aggregates open information about him/her within a special structure. Aggregated user profiles are formally described in terms of a multivariate random process. Special emphasis is placed on methods for identifying users with certain behavior on the basis of a few precedents, and on controlling the quality of the search results. An experimental study shows the application of the described methods to the case of commercial usage of personal accounts in social media.

Anna V. Kalyuzhnaya, Nikolay O. Nikitin, Nikolay Butakov, Denis Nasonov
Performance Analysis of 2D-compatible 2.5D-PDGEMM on Knights Landing Cluster

This paper discusses the performance of a parallel matrix multiplication routine (PDGEMM) that uses the 2.5D algorithm, which is a communication-reducing algorithm, on a cluster based on the Xeon Phi 7200-series (codenamed Knights Landing), Oakforest-PACS. Although the algorithm required a 2.5D matrix distribution instead of the conventional 2D distribution, it performed computations of 2D distributed matrices on a 2D process grid by redistributing the matrices (2D-compatible 2.5D-PDGEMM). Our use of up to 8192 nodes (8192 Xeon Phi processors) demonstrates that in terms of strong scaling, our implementation performs better than conventional 2D implementations.

Daichi Mukunoki, Toshiyuki Imamura
Backmatter
Metadata
Title
Computational Science – ICCS 2018
Editors
Prof. Yong Shi
Haohuan Fu
Yingjie Tian
Dr. Valeria V. Krzhizhanovskaya
Michael Harold Lees
Jack Dongarra
Peter M. A. Sloot
Copyright Year
2018
Electronic ISBN
978-3-319-93713-7
Print ISBN
978-3-319-93712-0
DOI
https://doi.org/10.1007/978-3-319-93713-7
