Methods and Applications for Modeling and Simulation of Complex Systems

18th Asia Simulation Conference, AsiaSim 2018, Kyoto, Japan, October 27–29, 2018, Proceedings

  • 2018
  • Book

About this book

This volume constitutes the proceedings of the 18th Asia Simulation Conference, AsiaSim 2018, held in Kyoto, Japan, in October 2018.

The 45 revised full papers presented in this volume were carefully reviewed and selected from 90 submissions. The papers are organized in topical sections on modeling and simulation technology; soft computing and machine learning; high performance computing and cloud computing; simulation technology for industry; simulation technology for intelligent society; simulation of instrumentation and control application; computational mathematics and computational science; flow simulation; visualization and computer vision to support simulation.

Table of contents

Frontmatter

Modeling and Simulation Technology

Frontmatter
A Novel Method to Build a Factor Space for Model Validation

A factor space is an indispensable part of model validation. In order to provide an advantageous method to build a factor space for model validation, this paper states the challenging problems of the factor space and proposes a mathematical model of it. Further, based on this model, the paper provides the graphic illustration, factor decomposition, credibility aggregation and model-defect tracing of the factor space, which together constitute a novel method to build a factor space for model validation. Finally, the paper provides a case study of an electromagnetic rail gun model validation to explain the usage of the method.

Ke Fang, Ming Yang, Yuchen Zhou
Simulation Credibility Evaluation Based on Multi-source Data Fusion

Real-world system experiment data, similar system running data, empirical data or domain knowledge of SMEs (subject matter experts) can serve as observed data in credibility evaluation. It is of great significance to study how to incorporate multi-source observed data to evaluate the validity of a model. Generally, data fusion methods are categorized into original data fusion, feature-level fusion, and decision-level fusion. In this paper, we first discuss the hierarchy of multi-source data fusion in credibility evaluation. Then, a Bayesian feature fusion method and a MADM-based (multiple attribute decision making) decision fusion approach are proposed for credibility evaluation. The proposed methods are applicable under different data scenarios. Furthermore, two case studies are provided to examine the effectiveness of credibility evaluation methods with data fusion.

Yuchen Zhou, Ke Fang, Ping Ma, Ming Yang
A Method of Parameter Calibration with Hybrid Uncertainty

A method combining the cumulative distribution function and a modified Kolmogorov–Smirnov test is proposed to solve the parameter calibration problem under hybrid uncertainty in the model, with a genetic algorithm seeking the optimal result. The framework is built on comparing the difference between the cumulative distribution functions of the target observed values and those of the sample values. First, an auxiliary variable method is used to decompose hybrid parameters into sub-parameters with only one kind of uncertainty, either aleatory or epistemic, because only epistemic uncertainty can be calibrated. Then we find optimal matching values with a genetic algorithm according to the difference of the joint cumulative distribution functions. Finally, we demonstrate that the proposed calibration method can approximate the unknown true values of the epistemic parameters in a Mars entry dynamics profile. The example illustrates the rationality and efficiency of the method.

Liu Bo, Shang XiaoBing, Wang Songyan, Chao Tao
Reinforcement Learning Testbed for Power-Consumption Optimization

Common approaches to controlling a data-center cooling system rely on approximated system/environment models built upon knowledge of mechanical cooling and electrical and thermal management. These models are difficult to design and often lead to suboptimal or unstable performance. In this paper, we show how deep reinforcement learning techniques can be used to control the cooling system of a simulated data center. In contrast to common control algorithms, those based on reinforcement learning can optimize a system's performance automatically without the need for explicit model knowledge; only a reward signal needs to be designed. We evaluated the proposed algorithm on the open-source simulation platform EnergyPlus. The experimental results indicate that we can achieve a 22% improvement compared to a model-based control algorithm built into EnergyPlus. To encourage the reproduction of our work as well as future research, we have also publicly released an open-source EnergyPlus wrapper interface ( https://github.com/IBM/rl-testbed-for-energyplus ) directly compatible with existing reinforcement learning frameworks.

Takao Moriyama, Giovanni De Magistris, Michiaki Tatsubori, Tu-Hoa Pham, Asim Munawar, Ryuki Tachibana
A DEVS Visual Front-End Interface for Model Reusability and Maintainability

Modeling is important in developing simulations. Because the modeling process that establishes the relationships between models determines the overall simulation flow and the behavior of the models, it requires the formulation of the overall structure and a clear expression of the relationships. In the case of DEVS, a discrete event simulation specification technique, modeling is performed by combining atomic models and coupled models. This process is performed in an abstract form and is expressed in a programming language. Therefore, it takes a lot of time and effort to obtain a clear specification and understanding in the DEVS modeling and interpretation processes, and it is not intuitive. In this paper, we introduce a DEVS-based visual front-end interface that can reduce the cost of the modeling process to solve these difficulties. The DEVS visual front-end interface provides an environment for visually checking and modifying the modeling structure, and also generates skeleton code based on the modeling structure data. In addition, a program for maintaining consistency between source code and modeling information data is also presented in this study.

Jiyong Yang, Moongi Seok, San Jeong, Changbeom Choi
HLA-Based Federation Development Framework Supporting Model Reuse

In this paper we design an HLA-based federation development framework supporting model reuse. The framework, which includes a model filter, wrapper and scheduler, uses HLA/RTI as the distributed underlying communication and management skeleton; it adopts a model database and a matching algorithm to select suitable state-machine models for users, adopts FMI to unify model interface and implementation, and uses RTI to associate all the FMUs into one federation. In addition, the framework provides a distributed, heterogeneous and scalable simulation frame and adequate simulation management.

Hang Ji, Xiang Zhai, Xiao Song, Xiaoliang Liu, Yazhou Liang, Zhengxuan Jia

Soft Computing and Machine Learning

Frontmatter
Automatic Performance Simulation for Microservice Based Applications

As microservices can easily scale up and down to adapt to dynamic workloads, various Internet-based applications adopt the microservice architecture to provide online services. Existing works often model an application's performance from historical training data, but such static models cannot adapt to dynamic workloads and complex applications. To address this issue, this paper proposes an adaptive automatic simulation approach to evaluate application performance. We first model an application's performance with a queue-based model, which well represents the correlations between workloads and performance metrics. Then, we predict the application's response time by adjusting the parameters of the performance model with an adaptive fuzzy Kalman filter. Thus, we can predict the application's performance by simulating various dynamic workloads. Finally, we deployed a typical microservice-based application and simulated workloads in an experiment to validate our approach. Experimental results show that our approach to performance simulation is much more accurate and effective than existing ones in predicting response time.

Yao Sun, Lun Meng, Peng Liu, Yan Zhang, Haopeng Chan
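
For readers who want a concrete feel for the queue-plus-filter idea, the minimal sketch below (an illustration under simple assumptions, not the authors' adaptive fuzzy Kalman filter or their full performance model) tracks the effective service rate of an M/M/1 response-time law with a scalar extended Kalman filter; all rates and measurements are hypothetical.

```python
import numpy as np

# Minimal sketch (not the authors' exact model): an M/M/1 queue gives the
# response-time law R = 1 / (mu - lambda); a scalar Kalman filter tracks the
# effective service rate mu from noisy response-time observations so the
# model adapts as the workload drifts.

def mm1_response_time(mu, lam):
    """Predicted mean response time of an M/M/1 queue (requires mu > lam)."""
    return 1.0 / (mu - lam)

class ScalarKalman:
    def __init__(self, mu0, p0=1.0, q=1e-3, r=1e-2):
        self.mu, self.p, self.q, self.r = mu0, p0, q, r  # state, variance, noise terms

    def update(self, observed_rt, lam):
        # Predict step: random-walk model for the service rate.
        self.p += self.q
        # Linearize the observation h(mu) = 1/(mu - lam) around the current estimate.
        h = mm1_response_time(self.mu, lam)
        H = -1.0 / (self.mu - lam) ** 2             # dh/dmu
        k = self.p * H / (H * self.p * H + self.r)  # Kalman gain
        self.mu += k * (observed_rt - h)
        self.p *= (1.0 - k * H)
        return self.mu

# Usage: feed measured response times under a known arrival rate.
kf = ScalarKalman(mu0=120.0)            # requests/s, initial guess
for rt in [0.012, 0.015, 0.014, 0.02]:  # observed response times (s)
    mu_hat = kf.update(rt, lam=60.0)
print("estimated service rate:", mu_hat,
      "predicted R at lam=80:", mm1_response_time(mu_hat, 80.0))
```
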
Predictive Simulation of Public Transportation Using Deep Learning

Traffic congestion has become one of the most common issues faced today, as a rising population and growing economy call for higher demands in efficient transportation. Looking into the public transport system in Singapore, we investigate its efficiency through a simple simulation and introduce predicted travel times to simulate ahead into the future, so as to identify congestion issues well in advance. Public transport operators can then utilize the reports to apply strategic resolutions in order to mitigate or avoid those issues beforehand. A deep neural network regression model is proposed to predict congestion, which is then used to compute future travel times. Experiments show that the proposed methods are able to inject a better sense of realism into future simulations.

Muhammad Shalihin Bin Othman, Gary Tan
An Ensemble Modeling for Thermal Error of CNC Machine Tools

Thermal error caused by the thermal deformation of computer numerical control (CNC) machine tools is one of the main factors affecting machining accuracy. With monitoring data of the temperature field, establishing a data-driven thermal error model is considered a more convenient, effective and cost-efficient way to reduce the thermal error. In fact, it is very difficult to develop a thermal error model with perfect generalization that adapts to different working conditions of machine tools. In this paper, a method of ensemble modeling (EM) based on a Convolutional Neural Network (CNN) and a Back Propagation (BP) neural network for modeling thermal error is presented. The ensemble model takes full advantage of the two different neural networks: the CNN extracts features automatically to address the collinearity problem in the temperature field, while the BP network maps the nonlinear relation from heat sources to thermal error; the two are then combined into an EM. To demonstrate the effectiveness of the proposed model, an experimental platform was set up based on a heavy-duty CNC machine tool. The results show that the proposed model achieves better accuracy and stronger robustness in comparison with the BP network and the CNN alone.

Xuemei Jiang, PanPan Zhu, Ping Lou, Xiaomei Zhang, Quan Liu
Gait Classification and Identity Authentication Using CNN

Mobile security is one of the most crucial concerns in contemporary society. To further strengthen the security of mobile phones, this paper proposes a sequential method including periodogram-based gait separation, convolutional neural network (CNN) based gait classification, and an authentication algorithm; the implementation is also presented. The original data are obtained from the mobile phone's built-in accelerometer. The periodogram-based gait separation algorithm calculates the walking periodicity of the mobile phone user and separates individual gaits from the time series. Using CNN-based classification, whose overall accuracy can reach over 90% in the test, the separated gaits are subsequently categorized into 6 gait patterns. Furthermore, the CNN-based identity authentication converts the certification issue into a binary decision: whether the mobile phone holder is the registered user or not. The CNN-based authentication method achieves an accuracy of over 87% when combined with the walking periodicity data of mobile phone users. Despite the high overall accuracy of the CNN-based classification and identity authentication, the method still has potential deficiencies that require further research before public application and popularization.

Wei Yuan, Linxuan Zhang
Deep Dissimilarity Measure for Trajectory Analysis

Quantifying the dissimilarity between two trajectories is a challenging yet fundamental task in many trajectory analysis systems, and existing measures are computationally expensive. We propose a dissimilarity measure estimate for trajectory data using deep learning. One advantage of the proposed method is that it can be executed on a GPU, which significantly reduces the execution time when processing large amounts of data. The proposed network is trained using synthetic data, and a simulator to generate synthetic trajectories is proposed. We used a publicly available dataset to evaluate the proposed method on the task of trajectory clustering. Our experiments show that the performance of the proposed method is comparable with other well-known dissimilarity measures while being substantially faster to compute.

Reza Arfa, Rubiyah Yusof, Parvaneh Shabanzadeh

High Performance Computing and Cloud Computing

Frontmatter
Performance Comparison of Eulerian Kinetic Vlasov Code Between Xeon Phi KNL and Xeon Broadwell

The present study deals with the kinetic Vlasov simulation code as a high-performance application, which solves the first-principle kinetic equations known as the Vlasov equation. A five-dimensional Vlasov code with two spatial dimensions and three velocity dimensions is parallelized with MPI-OpenMP hybrid parallelism. The performance of the parallel Vlasov code is measured on a single compute node with a Xeon Phi Knights Landing (KNL) processor and on a single compute node with two Xeon Broadwell processors. It is shown that using Multi-Channel Dynamic Random Access Memory (MCDRAM) in the "cache" mode gives higher performance than the "flat" mode when the size of a computing job is larger than the size of MCDRAM. On the other hand, using MCDRAM in the "flat" mode gives higher performance than the "cache" mode for small jobs, when the NUMA (Non-Uniform Memory Access) policy is controlled appropriately. It is also shown that there is no substantial difference in performance among the cluster modes; the performance of our Vlasov code is best with the "Quadrant" cluster mode and worst with the "SNC-4" cluster mode.

Takayuki Umeda, Keiichiro Fukazawa
Heterogeneous Scalable Multi-languages Optimization via Simulation

Scientific Computing (SC) is a multidisciplinary field that uses the computational approach to understand and study complex artificial and natural systems belonging to many scientific sectors. Optimization via Simulation (OvS) is a fast-developing area in the SC field. OvS combines classical optimization algorithms and stochastic simulations to face problems with unknown and/or dynamic data distributions. We present Heterogeneous Simulation Optimization (HSO), an architecture that enables the OvS process to be distributed on heterogeneous computing systems. HSO is designed around two levels of heterogeneity: hardware heterogeneity, i.e., the ability to exploit the computational power of several general-purpose CPUs and/or hardware accelerators such as Graphics Processing Units (GPUs); and programming-language heterogeneity, i.e., the capability to develop the OvS methodology combining different programming languages such as C++, C, Clojure, Erlang, Go, Haskell, Java, Node.js, Objective-C, PHP, Python, Scala and many others. The proposed HSO architecture has been fully developed and is available in a public GitHub repository. We validated and tested the scalability of HSO by developing two different use cases that exhibit both levels of heterogeneity, and by showing how to exploit the Optimal Computing Budget Allocation (OCBA) algorithm and a genetic algorithm in an OvS process.

Gennaro Cordasco, Matteo D’Auria, Carmine Spagnuolo, Vittorio Scarano
Smart Simulation Cloud (Simulation Cloud 2.0)—The Newly Development of Simulation Cloud

A new scientific and technological revolution is emerging around the world, and the era of "New Internet + Big Data + Artificial Intelligence+" is coming. As a third approach to understanding and transforming the world, after the theoretical and experimental approaches, simulation technology faces major challenges in paradigm, approach, and ecosystem. Based on earlier research and practice on simulation grids and simulation clouds, our team proposes smart cloud simulation (cloud simulation 2.0). The paper introduces the connotation of smart cloud simulation, and the concept model, architecture, body of knowledge, prototype, key technologies and applications of the smart simulation cloud (smart cloud simulation system). The innovations of this paper are as follows. We define the connotation of the new cloud simulation, whose paradigm is interconnected, service-oriented, personalized, flexible, sociable and intelligent so as to satisfy the simulation demands of users anytime, anywhere. We propose the digitizing, things-connecting, virtualization, service-oriented, collaborating and intelligent enabling technologies, respectively. Moreover, typical prototype applications based on high-performance simulation computers are presented to illustrate the availability of our architecture and methods, which work well for the full cycle of simulation.

Bohu Li, Guoqiang Shi, Tingyu Lin, Yingxi Zhang, Xudong Chai, Lin Zhang, Duzheng Qing, Liqin Guo, Chi Xing, Yingying Xiao, Zhengxuan Jia, Xiao Song, Rong Dai
A Semantic Composition Framework for Simulation Model Service

In order to solve the large-scale model service composition problem in the simulation cloud, this paper proposes a simulation model service composition framework that considers the characteristics of the cloud. The framework adopts an ontology-based simulation model service description strategy (MSDS). Based on the MSDS, a composite service composed of several model services with complex topological connection relationships is generated according to the Input/Output semantic connection strength and simulation capability. A contrast experiment is conducted for empirical verification.

Tian Bai, Lin Zhang, Fei Wang, Tingyu Lin, Yingying Xiao

Simulation Technology for Industry

Frontmatter
Dynamic Optimization of Two-Coil Power-Transfer System Using L-Section Matching Network for Magnetically Coupled Intrabody Communication

This study examined a novel intrabody communication method using magnetic coupling. For designing transceivers utilizing this communication method, it is important to understand the power-transfer characteristic between the sending and receiving coils of the transceivers and to develop a technique for maximizing the transceiver efficiency. This study targeted a two-coil wireless power-transfer system with an L-section matching network. The maximum power-transfer efficiency (PTE)—0.53—was achieved by adjusting the matching network under a carrier frequency of 2 MHz and a mutual inductance of 14.7 nH (coupling coefficient = 0.006). Additionally, by dynamically estimating the mutual inductance between the sending and receiving coils, the optimum PTE with respect to the positional relation was shown to be achievable. The proposed wireless power-transfer system is promising as a magnetic intrabody communication system.

Kenichi Ito
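
As background for the matching-network optimization discussed above, the textbook upper bound on two-coil power-transfer efficiency under ideal impedance matching can be written as follows; this is a standard result quoted for orientation only and is not necessarily the bound or notation used in the paper.

```latex
% Textbook upper bound on two-coil power-transfer efficiency under ideal
% impedance matching (background assumption, not the paper's model):
% k is the coupling coefficient, Q_1 and Q_2 the coil quality factors.
\[
  \chi = k\sqrt{Q_1 Q_2}, \qquad
  \eta_{\max} = \frac{\chi^{2}}{\bigl(1 + \sqrt{1 + \chi^{2}}\,\bigr)^{2}} .
\]
```
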
Demand and Supply Model for the Natural Gas Supply Chain in Colombia

Natural gas is considered the transitional fuel par excellence between fossil and renewable sources, given its low cost, greater efficiency and lesser environmental impact. This has led to increased demand levels worldwide, requiring the intervention of public and private actors to meet such demand. In this research, we study the natural gas supply chain in Colombia using system dynamics modelling. The results make it possible to contrast the behaviour of the production and transport levels with the behaviour of demand from the consumption sectors, identifying the capacity levels to be developed while considering implementation times and coverage percentages in the supply.

Mauricio Becerra Fernández, Elsa Cristina González La Rotta, Federico Cosenz, Isaac Dyner Rezonzew
Deep-Learning-Based Storage-Allocation Approach to Improve the AMHS Throughput Capacity in a Semiconductor Fabrication Facility

Recently, automated material handling systems (AMHSs) in semiconductor fabrication plants (FABs) in South Korea have become a new and major bottleneck. This is mainly because the number of long-distance transportation requests has increased as the FAB area has widened. This paper presents a deep-learning-based adaptive method for the storage-allocation problem to improve the AMHS throughput capacity. The AMHS in this research consists of overhead hoist transports (OHTs), a unified rail for the OHTs, etc. The main problem involves scheduling (or designating) an intermediate buffer, e.g., a stocker or a side track buffer, for a single lot. Thus far, a static optimization approach has been widely applied to the problem. This research shows that a learning-based adaptive storage-allocation strategy can increase the AMHS capacity in terms of throughput. The deep-learning model considers various production conditions, including processing time, transportation time, and the distribution of work in process (WIP).

Haejoong Kim, Dae-Eun Lim
Research on the Cooperative Behavior in Cloud Manufacturing

With the development of information and network technology, a new manufacturing paradigm, Cloud Manufacturing (CMg), has emerged. In the CMg environment, geographically distributed manufacturing resources (MgSs) and manufacturing capabilities managed by different companies are encapsulated as manufacturing services with the support of cloud computing, Internet of Things and virtualization technologies. CMg can provide on-demand MgSs for the manufacturing tasks (MgTs) of different customers in a networked manufacturing environment. One MgT usually needs different MgSs owned by different companies to form a coalition and work together to finish it. However, being an autonomous entity, each MgS generally makes decisions in light of its own interests, so it is difficult to maximize the collective interests of the coalition. Cooperation among the MgSs in the same coalition is an effective way to maximize the collective interests. Hence, how to motivate MgSs to cooperate mutually is receiving more attention in the cloud manufacturing environment. In this paper, evolutionary game theory and learning automata are employed to model the decision-making process of MgSs, and a punishment mechanism is introduced to incentivize mutual cooperation among MgSs. Furthermore, a blockchain is adopted as the data storage structure to record the behaviors of the MgSs and prevent them from falsifying their feedback. Finally, agent-based modeling is used to model and simulate the process of MgSs working together. The simulation results reveal that the punishment mechanism is effective in promoting cooperation among MgSs from various perspectives.

Ping Lou, Cui Zhu, Xiaomei Zhang, Xuemei Jiang, Zhengying Li
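
To illustrate how a punishment mechanism can tip an evolutionary game toward cooperation, the sketch below runs discrete-time replicator dynamics on a cooperate/defect game with a fine subtracted from defectors; the payoff numbers and the fine are hypothetical, and the sketch is not the paper's blockchain-backed agent-based model.

```python
import numpy as np

# Minimal replicator-dynamics sketch (illustrative only, not the paper's
# agent-based model): a cooperate/defect game where a punishment cost `f`
# is subtracted from a defector's payoff. Hypothetical payoff numbers.
R, S, T, P = 3.0, 0.0, 5.0, 1.0   # reward, sucker, temptation, mutual-defection payoffs
f = 2.5                           # fine imposed on defectors by the mechanism

def cooperate_payoff(x):          # x = fraction of cooperators in the population
    return R * x + S * (1 - x)

def defect_payoff(x):
    return T * x + P * (1 - x) - f

x = 0.2                           # start with 20% cooperators
for _ in range(2000):             # discrete-time replicator update
    fc, fd = cooperate_payoff(x), defect_payoff(x)
    avg = x * fc + (1 - x) * fd
    x += 0.01 * x * (fc - avg)    # replicator equation, small step size
    x = min(max(x, 1e-9), 1 - 1e-9)
print("long-run cooperator fraction with punishment:", round(x, 3))
```

Setting f = 0 in this toy game makes defection dominant, while a sufficiently large fine reverses the ordering of payoffs and drives the cooperator fraction toward one, which is the qualitative effect the paper's mechanism targets.
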
Particle in Cell Simulation to Study the Charging and Evolution of Wake Structure of LEO Spacecraft

In this study, we performed simulations to study the charging and wake structure in the downstream region of the Low Earth Orbit satellite ERS-1 in the presence of auroral electrons. We applied a tool called EMSES (Electro Magnetic Spacecraft Environment Simulator), developed based on the Particle-in-Cell (PIC) method. We split the simulation into two cases, i.e., the absence of auroral electrons (case #1) and the presence of auroral electrons (case #2). The results show that the satellite potential dropped to −2 V for case #1, whereas in case #2 the potential decreased to −2.4 V. In addition, we found different features of the wake structure, in which the wake distorted inward as the object became more negatively charged due to the presence of auroral electrons. To obtain stronger charging on the spacecraft, in case #3 the density of auroral electrons is increased 100 times, so that the flux ratio of auroral electrons to ambient plasma becomes unity. The magnitude of the negative potential increases 5 times compared to case #2. Furthermore, the distorted wake structure seen in the ion profile of case #2 disappears and turns into ion focusing around the wake centerline in case #3.

Nizam Ahmad, Hideyuki Usui, Yohei Miyake

Simulation Technology for Intelligent Society

Frontmatter
Wise-Use of Sediment for River Restoration: Numerical Approach via HJBQVI

A stochastic differential game for cost-effective restoration of the river environment based on sediment wise-use, an urgent environmental issue, is formulated. River restoration here means the extermination of harmful algae downstream of a dam. The algae population has weak tolerance against turbid river water flow, which is why sediment is the focus of this paper. Finding the optimal sediment transport strategy reduces to solving a spatio-temporally 4-D Hamilton-Jacobi-Bellman Quasi-Variational Inequality (HJBQVI): a degenerate, nonlinear and nonlocal parabolic problem. The HJBQVI is solved with a specialized finite difference scheme based on an exponentially-fitted discretization with penalization, which generates stable numerical solutions. An algorithm for solving the discretized HJBQVI without resorting to conventional iterative matrix inversion methods is then presented. The HJBQVI is applied to a real problem in a Japanese river where local fishery cooperatives and the local government have been debating the use of sediment stored in a diversion channel for flood mitigation. Our computational results indicate when and how much sediment should be applied to the river restoration, which can be useful for their decision-making.

Hidekazu Yoshioka, Yuta Yaegashi, Yumi Yoshioka, Kunihiko Hamagami, Masayuki Fujihara
Calculation of Extreme Precipitation Threshold by Percentile Method Based on Box-Cox Transformation

In this paper, we propose an improved percentile method for calculating the threshold value of extreme precipitation events. The method normalizes the series of daily precipitation data with a Box-Cox transformation and calculates the extreme precipitation threshold through the Z-score and a specified percentile. According to the experimental results, the proposed method shows higher stability under some circumstances compared to traditional percentile methods. As a complement, it can obtain better results in the calculation of the extreme precipitation threshold when combined with other percentile methods.

Chi Zhang, Pu-wen Lei, Koji Koyamada
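
A minimal sketch of the Box-Cox-plus-percentile idea is given below, using SciPy's boxcox and inv_boxcox on synthetic daily precipitation; the percentile choice and the z-score construction are illustrative assumptions and may differ from the paper's exact procedure.

```python
import numpy as np
from scipy.stats import boxcox, norm
from scipy.special import inv_boxcox

# Minimal sketch of a Box-Cox-based percentile threshold (illustrative;
# the paper's exact procedure may differ). Synthetic rainy-day totals in mm.
rng = np.random.default_rng(0)
precip = rng.gamma(shape=1.2, scale=8.0, size=3000) + 0.1  # strictly positive

transformed, lam = boxcox(precip)          # normalize the skewed series
z = norm.ppf(0.95)                         # z-score of the chosen percentile (95th)
thr_transformed = transformed.mean() + z * transformed.std(ddof=1)
threshold = inv_boxcox(thr_transformed, lam)  # back to the precipitation scale

print(f"lambda = {lam:.3f}, extreme-precipitation threshold ~ {threshold:.1f} mm")
```
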
OpenPTDS Dataset: Pedestrian Trajectories in Crowded Scenarios

Pedestrian simulation is an important approach for engineers to evaluate the safety of metro buildings. Although there are many works on pedestrian evacuation, rich evacuation data for calibrating simulation models are still lacking. To overcome this problem, we conducted several real-life pedestrian experiments and created a dataset named OpenPTDS. A fundamental speed-density diagram is drawn to show its feasibility. To promote further research and applications, the source data is shared at http://shi.buaa.edu.cn/songxiao/en/index.htm .

Xiao Song, Jinghan Sun, Jing Liu, Kai Chen, Hongnan Xie
Description and Analysis of Cognitive Processes in Ground Control Using a Mutual Belief-Based Team Cognitive Model

Mutual understanding and shared situation awareness among air traffic controllers (ATCOs) and aircraft pilots is the key to safe ground control operations at airports. Hence, this paper presents cognitive models for ATCOs and aircraft pilots based on mutual belief. The aim of this model is to provide detailed descriptions of the cognitive processes in ground control communications. This study also presents a method that uses team process simulations to analyze the cognitive processes behind communications and situation-awareness sharing between ATCOs and pilots during ground control. The proposed models and method are used to replicate the Tenerife Airport accident in 1977 to demonstrate how they can be used for accident analysis. The results show that the proposed method can reveal additional possible cognitive processes that could occur given the actual communication log, including those processes identified in the accident investigation report. The proposed method hence has the potential to facilitate the analysis of ground control accidents at airports.

Sakiko Ogawa, Taro Kanno, Kazuo Furuta
A Credibility Assessment Method for Training Simulations from the View of Training Effectiveness

The complex training simulation process often comprises multiple ordered tasks, and the training effects need to be evaluated from a single task to multiple combined tasks for credibility assessment. In order to assess the credibility of training simulations, a novel credibility assessment method is proposed from the viewpoint of training effectiveness. An integrated credibility assessment framework for training simulations is given, primarily involving the project process, the technical process and information artifacts. Then a single-task training effectiveness evaluation (TEE) method based on a paired t-test and a multi-task TEE method using cluster analysis are proposed for the credibility assessment of complex training simulations. Finally, an application case of TEE for flight training simulation is given to illustrate the proposed method.

Shenglin Lin, Wei Li, Shuai Niu, Ping Ma, Ming Yang

Simulation of Instrumentation and Control Application

Frontmatter
Digital Twin-Based Energy Modeling of Industrial Robots

The rapid development of robotic technologies in recent years has made industrial robots (IRs) increasingly used in various manufacturing processes, and the energy problem of IRs has received great attention because of their massive adoption and intensive energy consumption. Energy saving of IRs is therefore in great demand for environmental protection and enterprise cost reduction, and the energy modeling of IRs is the basis for achieving this goal. In this paper, a novel energy modeling method for IRs based on the digital twin is proposed and the system framework is established, which mainly includes the physics-based energy model of the physical IRs, the 3D virtual robot model used to visualize and simulate the energy consumption of IRs, the digital twin data, and the ontology-based unified digitized description model for mapping the virtual model to the corresponding physical energy model. Furthermore, the mapping relation between the physical data and ontology attributes is established to enable the interaction between the physical space and the cyber space of the whole system. Finally, the feasibility and effectiveness of the proposed method are validated by a case study, and the results show that the digital twin-based energy modeling method can be efficiently used for the simulation and prediction of the energy consumption of IRs.

Ke Yan, Wenjun Xu, Bitao Yao, Zude Zhou, Duc Truong Pham
Dyna-Q Algorithm for Path Planning of Quadrotor UAVs

In this paper, the path planning problem for quadrotor unmanned aerial vehicles (UAVs) is investigated in the framework of reinforcement learning. With the environment abstracted as a 2D grid world, the design procedure is presented using the Dyna-Q algorithm, a reinforcement learning method that combines model-based and model-free frameworks. In this process, an optimal or suboptimal safe flight trajectory is obtained by continually learning and planning with simulated experience, so that the cumulative reward can be maximized efficiently. Matlab is used to build the maze and perform the computation, and the effectiveness of the proposed method is illustrated by two typical examples.

Xin Huo, Tianze Zhang, Yuzhu Wang, Weizhen Liu
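
For reference, a minimal tabular Dyna-Q loop on a toy 2D grid world is sketched below in Python; it is not the authors' Matlab implementation, and the grid size, rewards and hyperparameters are illustrative assumptions.

```python
import random

# Minimal tabular Dyna-Q sketch on a toy 5x5 grid world (illustrative only,
# not the authors' Matlab implementation). Start (0,0), goal (4,4), reward 1
# at the goal and 0 elsewhere; 4 actions: up, down, left, right.
N, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
ALPHA, GAMMA, EPS, PLAN_STEPS = 0.1, 0.95, 0.1, 20

Q = {((r, c), a): 0.0 for r in range(N) for c in range(N) for a in range(4)}
model = {}  # learned deterministic model: (state, action) -> (reward, next_state)

def step(state, a):
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1))
    return (1.0 if nxt == GOAL else 0.0), nxt

def greedy(state):
    return max(range(4), key=lambda a: Q[(state, a)])

for episode in range(200):
    s = (0, 0)
    while s != GOAL:
        a = random.randrange(4) if random.random() < EPS else greedy(s)
        r, s2 = step(s, a)
        # Direct RL update from real experience.
        Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s2, greedy(s2))] - Q[(s, a)])
        model[(s, a)] = (r, s2)
        # Planning: replay simulated experience drawn from the learned model.
        for _ in range(PLAN_STEPS):
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            Q[(ps, pa)] += ALPHA * (pr + GAMMA * Q[(ps2, greedy(ps2))] - Q[(ps, pa)])
        s = s2

print("greedy action at start:", greedy((0, 0)))
```
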
Boarding Stations Inferring Based on Bus GPS and IC Data

Understanding bus passengers' flow patterns is an important guide for optimizing a public transport system. In most cities buses have been equipped with GPS devices and smart IC card fare devices, which produce a large amount of real-time geocoded data about buses and passengers. Usually these two kinds of data can be combined to infer passengers' boarding stations. However, in some cases there is no straightforward relation between them, so that passengers' boarding stations are unknown. In this paper, an inference method for finding the correspondence between these two types of data is proposed. The method was validated with data from Qingdao. Based on that, we analyze the spatio-temporal distribution of passenger flow.

Xiang Yu, Fengjing Shao, Rencheng Sun, Yi Sui
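
The core matching step can be illustrated with a time-ordered join: the hedged sketch below (hypothetical column names, not the paper's pipeline) uses pandas merge_asof to attach each IC-card tap to the most recent GPS fix of the same bus and read off the nearby stop as the inferred boarding station.

```python
import pandas as pd

# Illustrative sketch (hypothetical column names, not the paper's pipeline):
# match each IC-card tap to the latest GPS record of the same bus, then take
# that record's nearest stop as the inferred boarding station.
gps = pd.DataFrame({
    "bus_id":    ["B1", "B1", "B1"],
    "time":      pd.to_datetime(["08:00:10", "08:03:40", "08:07:05"]),
    "near_stop": ["Stop 3", "Stop 4", "Stop 5"],
})
taps = pd.DataFrame({
    "bus_id":  ["B1", "B1"],
    "time":    pd.to_datetime(["08:03:55", "08:07:20"]),
    "card_id": ["C100", "C200"],
})

gps = gps.sort_values("time")
taps = taps.sort_values("time")
# merge_asof picks, for every tap, the most recent GPS fix of the same bus.
boardings = pd.merge_asof(taps, gps, on="time", by="bus_id", direction="backward")
print(boardings[["card_id", "time", "near_stop"]])
```
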
Acoustic Properties of Resonators Using Deployable Cylinders

In this study, deployable cylinders with bellows-like patterns are used to change the neck length of a Helmholtz resonator. The cylinders are quasi-rigid foldable and constructed by assembling flat, rigid quadrilateral plates. Acoustic computation revealed that the proposed resonator with a deployable neck works effectively to attenuate noise at the resonant frequency. Furthermore, we show that the proposed resonator can cover a low frequency range, because the neck length changes reciprocally to the cross-sectional area and, in theory, the resonant frequency can approach zero depending on the geometrical configuration of the cylinder.

Sachiko Ishida, Ryo Matsuura
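
For orientation, the textbook Helmholtz resonance formula that links neck geometry to the resonant frequency is reproduced below; the paper's acoustic computation may use a refined model, so this is background only.

```latex
% Textbook Helmholtz resonance frequency (background only; the paper's
% acoustic computation may use a refined model). With sound speed c, neck
% cross-sectional area S, effective neck length L_eff and cavity volume V,
\[
  f_0 = \frac{c}{2\pi}\sqrt{\frac{S}{V\,L_{\mathrm{eff}}}} ,
\]
% so lengthening the neck (larger L_eff) while the area S shrinks drives f_0 down.
```
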
Iterative Unbiased Conversion Measurement Kalman Filter with Interactive Multi-model Algorithm for Target Tracking

In modern tracking systems, the target generally has high speed and high mobility. Tracking such targets has always been a challenging problem, especially high-speed, strongly maneuvering targets, which are difficult to track in both theory and practice. An interactive multiple-model (IMM) algorithm based on the iterative unbiased converted measurement Kalman filter (IUCMKF) is proposed. The new algorithm takes advantage of the interactive and complementary characteristics of different models to overcome the problems of low precision and filter divergence. First, we investigate the behavior of the converted measurement Kalman filter (CMKF), the debiased converted measurement Kalman filter (DCMKF), and the IUCMKF with double and multiple models. Second, we compare and analyze the performance of the three algorithms (CMKF-IMM, DCMKF-IMM and IUCMKF-IMM). Finally, we identify the effect of combining four different motion models, CA, Singer, CS, and Jerk, on target tracking accuracy. The numerical simulation results show that the number and type of models should be chosen according to the actual simulation environment: although more models can improve tracking accuracy, they also greatly increase the complexity of the algorithm, and the error consistency of the algorithm cannot always be guaranteed. Compared with CMKF-IMM and DCMKF-IMM, the new algorithm attains a more accurate estimate of the target state and its covariance, and has more potential for improving tracking accuracy.

Da Li, Xiangyu Zou, Ping Lou, Ruifang Li, Qin Wei

Computational Mathematics and Computational Science

Frontmatter
On Convergence Speed of Parallel Variants of BiCGSTAB for Solving Linear Equations

A number of hybrid Bi-Conjugate Gradient (Bi-CG) methods, such as the Bi-CG STABilized (BiCGSTAB) method, have been developed for solving linear equations. BiCGSTAB has most often been used for efficiently solving such systems, but its convergence behavior sometimes shows a long stagnation phase. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy for improving the accuracy of the Bi-CG coefficients has been proposed. On present petascale high-performance computing hardware, the main bottleneck to efficient parallelization of Krylov subspace methods is the inner products, which require a global reduction. Parallel variants of BiCGSTAB, such as communication-avoiding and pipelined BiCGSTAB, have been proposed to reduce the number of global communication phases and hide communication latency. However, the numerical stability, specifically the convergence speed, of the parallel variants of BiCGSTAB has not previously been clarified on problems where the convergence is slow (strongly affected by rounding errors). In this paper, therefore, we examine the convergence speed of the standard BiCGSTAB and the parallel variants, and the effectiveness of the stabilization strategy, through numerical experiments on problems where the convergence has a long stagnation phase.

Kuniyoshi Abe
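
For readers unfamiliar with the baseline method, a textbook (non-pipelined) BiCGSTAB iteration is sketched below in Python; the two inner products marked in the comments are the global reductions that the communication-avoiding and pipelined variants reorganize. This is an illustrative reference implementation, not the author's code.

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-8, maxiter=1000):
    """Textbook (non-pipelined) BiCGSTAB for reference; the parallel variants
    studied in the paper reorder these inner products to overlap or avoid
    global reductions. Illustrative sketch, not the author's implementation."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    r_hat = r.copy()                      # shadow residual
    rho = alpha = omega = 1.0
    v = p = np.zeros(n)
    bnrm = np.linalg.norm(b) or 1.0
    for _ in range(maxiter):
        rho_new = r_hat @ r               # first global reduction
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r_hat @ v)     # second global reduction
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)
        x += alpha * p + omega * s
        r = s - omega * t
        rho = rho_new
        if np.linalg.norm(r) / bnrm < tol:
            break
    return x

# Usage on a small well-conditioned test matrix.
rng = np.random.default_rng(1)
A = rng.random((50, 50)); A = A @ A.T + 50 * np.eye(50)
b = rng.random(50)
x = bicgstab(A, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```
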
Study on Chaotic Cipher with Robustness and Its Characteristics

In this paper, we propose a robust cryptosystem with a small amount of computation. By adopting a stream cipher rather than a block cipher like AES, the processing speed is improved. Furthermore, it has chaotic properties and improved randomness. Moreover, since the proposed method does not use an expression for synchronizing the encryption and decryption systems, the ciphertext is decrypted correctly even when a bit error occurs. In general, nonlinear quantities of chaotic maps, such as the Jacobian, can be estimated from time-series signals by the Sano-Sawada method. In a chaotic cipher, the parameters of the map are used as cryptographic keys; therefore, if the Jacobian is constant, the cryptographic key can be estimated. In the proposed system, we formulate the map so that each element of the Jacobian becomes as time-variant as possible, thereby securing the safety of the system.

Takashi Arai, Yuta Kase, Hiroyuki Kamata
A Stochastic Impulse Control Model for Population Management of Fish-Eating Bird Phalacrocorax Carbo and Its Numerical Computation

Feeding damage from the fish-eating bird Phalacrocorax carbo to the fish Plecoglossus altivelis is severe in Japan. A stochastic impulse control model is introduced for finding a cost-effective and ecologically conscious population management policy for the bird. The optimal management policy is of a threshold type: if the population reaches an upper threshold, a countermeasure is taken to immediately reduce the bird population to a lower threshold. This optimal policy is found by solving a Hamilton-Jacobi-Bellman quasi-variational inequality (HJBQVI). We propose a numerical method for HJBQVIs based on a policy iteration approach. Its accuracy, for the numerical solutions and the associated free boundaries that give the management thresholds of the population, is investigated against an exact solution. The computational results indicate that the proposed numerical scheme can successfully solve the HJBQVI with first-order computational accuracy. In addition, it is shown that the scheme captures the free boundaries subject to errors smaller than the element lengths.

Yuta Yaegashi, Hidekazu Yoshioka, Koichi Unami, Masayuki Fujihara
A Dialect of Modern Fortran for Computer Simulations

Modern Fortran is one of the major languages in computational science. New features introduced in Fortran 2003 and later have improved the experience of writing simulation programs. Some features of the language, however, can be further improved by slightly modifying its lexical syntax and imposing a coding rule. In this paper, we propose a dialect of modern Fortran with these improvements. The features of the dialect include: the period "." as the member access operator; block comments; addition/subtraction/multiplication assignment; pre-defined and user-defined aliases; automatic checking of "implicit none"; a "just once" block; and a "skip" block. We have developed a preprocessor to convert the dialect into legitimate Fortran. It is a simple text converter that keeps the line numbers of the input dialect aligned with the output standard code.

Shin’ya Hosoyamada, Akira Kageyama

Flow Simulation

Frontmatter
Performance Comparison of the Three Numerical Methods to Discretize the Local Inertial Equation for Stable Shallow Water Computation

The local inertial equation (LIE) is a simple shallow water model for simulating surface water dynamics. Recently, the model has been widely applied to flood simulation worldwide. The keys to the numerical implementation of the LIE are the staggered spatio-temporal discretization and the stable treatment of the friction slope terms; the latter is critical for stable and efficient computation. Currently, several discretization methods (semi-implicit, fully-implicit, and exponential) for the friction slope terms with comparable computational efficiency are available, but their performance has so far been evaluated only independently. We therefore compare the performance of the three methods through their application to test and realistic cases. In this paper, firstly, theoretical linear stability analysis results are reviewed, indicating the highest stability of the implicit method, which is also consistent in a certain sense. Application of the methods to a 1-D test case with an advancing wet-dry interface shows that all the methods work well, with the fully-implicit method having the smallest error. Their application to 2-D flood simulation in Tonle Sap Lake and its floodplains in South-East Asia demonstrates that the exponential method gives slightly more oscillatory results than the others. The dependence of the simulated surface water dynamics on the spatial resolution is investigated as well, to give a criterion for the resolution required for satisfactory accuracy.

Tomohiro Tanaka, Hidekazu Yoshioka, Sokly Siev, Hideto Fujii, Ly Sarann, Chihiro Yoshimura
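
For context, the semi-implicit friction treatment that is widely cited for the local inertial equation is reproduced below; notation follows common usage and may differ from the paper, and the fully-implicit and exponential variants are not shown.

```latex
% Widely cited semi-implicit friction treatment of the LIE momentum update
% (background sketch; the paper's notation and the other two variants may differ).
% q: unit-width discharge, h: flow depth, \eta: water-surface elevation,
% n: Manning coefficient, g: gravity, \Delta t: time step.
\[
  q^{\,t+\Delta t}
  = \frac{q^{\,t} - g\,h\,\Delta t\,\dfrac{\partial \eta}{\partial x}}
         {1 + g\,\Delta t\, n^{2}\,\lvert q^{\,t}\rvert\, h^{-7/3}} ,
\]
% where treating one factor of q implicitly in the friction term keeps the
% update stable as the depth h becomes small.
```
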
Development of the DRowning hUman Model (DRUM) Toward Evaluation of Performance of Lifejackets in Tsunamis

We developed a new numerical simulation model considering both unsteady water currents and the unsteady movement of a human body, toward evaluating lifejackets as a measure against drowning in tsunamis. A combination of a multiphase fluid solver (the CIP-CUP method) and a link segment model representing the human body enables the developed model to simulate interactions between the fluid and human bodies. To validate the model, we reproduced an experiment by Kurisu et al. (2018) in which a manikin was swept away and caught by water currents mimicking tsunamis. The developed model reproduced both the water currents and the movement of the human body well.

Daiki Ajima, Tatsuto Araki, Takashi Nakamura
Illumination Recovery for Realistic Fluid Re-simulation

Previous studies on fluid re-simulation have been devoted to reducing computational complexity, and little attention has been paid to realism. This paper presents a linear approach to estimate illumination from video examples for coherent photorealistic re-simulation. Compared with previous studies of light detection, it couples the reconstructed fluid geometry with surface appearance and linearly estimates illumination parameters, which avoids the much higher computational cost of tedious optimization. The parameters of the Blinn-Phong shading model (BSM) are recovered hierarchically: the ambient and diffuse components are fitted using the particles with lower intensities, and the reflectance is then clustered from observations of high-intensity surface particles. We demonstrate the effectiveness of both steps by extensive quantitative and qualitative evaluation through relighting of the fluid surface from ground-truth fluid video as well as from re-simulation. Photorealistic, coherently illuminated visual effects consistent with the fluid surface geometry are obtained.

Hongyan Quan, Zilong Song, Xinquan Zhou, Shishan Xue, Changbo Wang
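
The Blinn-Phong shading model referred to above has the standard form reproduced below; the paper's exact parameterization may differ, so this is background notation only.

```latex
% Textbook Blinn-Phong shading model (standard form; the paper's
% parameterization may differ). N: surface normal, L: light direction,
% V: view direction, H: half vector, \alpha: shininess exponent.
\[
  H = \frac{L + V}{\lVert L + V \rVert}, \qquad
  I = k_a\, i_a + k_d\, (N\!\cdot\!L)\, i_d + k_s\, (N\!\cdot\!H)^{\alpha}\, i_s ,
\]
% where (k_a, k_d, k_s) are the ambient, diffuse and specular coefficients
% recovered hierarchically in the paper.
```
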
Ocean Analysis by Tsunami Simulation of the Nankai Trough Massive Earthquake

Large-scale tsunamis have a major impact on the natural environment; therefore, it is important to predict large-scale tsunamis. Recently, large-scale tsunami simulations using supercomputers have been performed for prediction, and by analyzing the simulation data it is expected that damage can be minimized. The purpose of this paper is to propose visualization methods to support the analysis of the simulation results. Three visualization methods are proposed. The first is simultaneous visualization of multiple feature quantities based on an opacity transfer function obtained by extending the HSV color space, aimed at observing the mutual relationships between feature quantities obtained by the simulation. The second is to create volume data in which time-series images are stacked along the time axis (XYT space-time) and to visualize the time-series data, aimed at a detailed analysis of the behavior of specific feature quantities over the whole simulated period. The third is to visualize the fusion of a cross-sectional plane of the tsunami and the fluid volume around it, aimed at detailed visualization of the observation data in the sea; by fusing the surrounding data, the detailed time-dependent flow velocity and salinity in the sea can be clearly observed. This paper presents the results of applying these three methods to the flow velocity and salinity data obtained from a simulation of the Nankai Trough massive earthquake and analyzes these data.

Yuto Sakae, Ikuya Morimoto, Takuya Ozaki, Ryo Kurimoto, Liang Li, Kyoko Hasegawa, Satoshi Nakada, Satoshi Tanaka
Improving Traffic Flow at a Highway Tollgate with ARENA: Focusing on the Seoul Tollgate

In this study, the effects of traffic flow improvements were compared using real traffic data in terms of congestion level, throughput, and average duration time. The current situation and a situation with smart tolling were considered in order to ensure the reliability of the results. The simulation consisted of six scenarios; error rates were checked with 30 replications and were found to be lower than 0.5%. The scenarios were categorized by vehicle speeds that could affect subsequent vehicles. Overall, the smart tolling model performed better than the current model. In particular, the comparison between the lifelike scenario 1 and scenario 6 with smart tolling is key: the results showed that the congestion level and average duration time could be decreased to approximately 50%. These results indicate a significant improvement in traffic flow while maintaining five lanes, unlike in the current model.

Seung-Min Noh, Ho-Seok Kang, Seong-Yong Jang

Visualization and Computer Vision to Support Simulation

Frontmatter
Pixel Convolutional Networks for Skeleton-Based Human Action Recognition

Human action recognition is an important field in computer vision. Skeleton-based models of the human body have received more attention in related research because of their strong robustness to external interference factors. In traditional research, features are usually hand-crafted, so effective features are difficult to extract from skeletons. In this paper, a method called Pixel Convolutional Networks is proposed for human action recognition; it extracts skeleton features in a natural and intuitive way along two dimensions, space and time. It achieves good performance on the large-scale NTU-RGB+D dataset compared with mainstream methods of the past few years.

Zhichao Chang, Jiangyun Wang, Liang Han
Feature-Highlighting Transparent Visualization of Laser-Scanned Point Clouds Based on Curvature-Dependent Poisson Disk Sampling

In recent years, with the development of 3D laser-measurement technology, digital archiving is being carried out around the world as one of the efforts to pass cultural assets on to posterity. Laser-scanned point clouds are large-scale and precisely record the complex 3D structures of cultural assets. Accordingly, such point clouds are used in visualization research to support the analysis and use of these assets; representative examples are feature-highlighting and transparent visualization. The quality of visualization depends strongly on the distributional uniformity of the points, that is, the uniformity of inter-point distances. However, laser-scanned point clouds usually have a biased distribution, which degrades the visualization quality. In previous studies, making point distances uniform by Poisson disk sampling improved the quality of transparent visualization. This study proposes curvature-dependent Poisson disk sampling. The proposed method adjusts the order and radius of the sampling disk according to the curvature calculated by principal component analysis. By applying the proposed method to laser-scanned point clouds, the edges of cultural assets are emphasized and the visibility of their shape is further improved in transparent visualization. Thereby, we realize feature-highlighting transparent visualization with high visibility of three-dimensional structure and edge shape.

Yukihiro Noda, Shu Yanai, Liang Li, Kyoko Hasegawa, Atsushi Okamoto, Hiroshi Yamaguchi, Satoshi Tanaka
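
A minimal sketch of the general idea, curvature estimated by PCA on local neighborhoods and a curvature-dependent dart-throwing pass, is given below; the radii, neighborhood size and visiting order are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal sketch of curvature-dependent Poisson disk sampling (illustrative;
# radii, neighborhood size and ordering are assumptions, not the paper's settings).
def pca_curvature(points, k=16):
    """Surface variation lambda_min / (sum of eigenvalues) per point."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    curv = np.empty(len(points))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        eig = np.sort(np.linalg.eigvalsh(cov))        # ascending eigenvalues
        curv[i] = eig[0] / max(eig.sum(), 1e-12)
    return curv

def curvature_poisson_disk(points, r_min=0.01, r_max=0.05, k=16):
    curv = pca_curvature(points, k)
    c = (curv - curv.min()) / max(np.ptp(curv), 1e-12)  # normalize to [0, 1]
    radii = r_max - (r_max - r_min) * c                  # smaller radius on sharp edges
    order = np.argsort(-c)                               # visit high-curvature points first
    tree = cKDTree(points)
    accepted, kept = [], np.zeros(len(points), dtype=bool)
    for i in order:
        if accepted and any(kept[j] for j in tree.query_ball_point(points[i], radii[i])):
            continue                                     # too close to an accepted point
        kept[i] = True
        accepted.append(i)
    return np.array(accepted)

# Usage with a random toy cloud (a real laser scan would be loaded instead).
pts = np.random.default_rng(0).random((2000, 3))
print("kept", len(curvature_poisson_disk(pts)), "of", len(pts), "points")
```
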
Image-Based 3D Shape Generation Used for 3D Printing

3D shape design is one of the most vital procedures in additive manufacturing, especially in a cloud manufacturing environment, and designing a 3D shape consumes a great deal of time and energy. Our objective is to develop a 3D shape generation technique for the shape design process of 3D printing models. Generative adversarial networks (GANs) in deep learning have the potential to generate 3D shape models from latent vectors sampled from prior latent spaces. We use a conditional GAN to map image information to 3D printing shapes that satisfy printability requirements. We evaluate the capability of our model to generate authentic 3D printing shapes across several classes. The model could thus serve as an assistant 3D printing shape designer.

Zemin Li, Lin Zhang, Yaqiang Sun, Lei Ren, Yuanjun Laili
A Memory Efficient Parallel Particle-Based Volume Rendering for Large-Scale Distributed Unstructured Volume Datasets in HPC Environments

In recent years, the size and complexity of the datasets generated by large-scale numerical simulations on modern HPC (High Performance Computing) systems have been continuously increasing, and the generated datasets can have different formats, types, and attributes. In this work, we focus on large-scale distributed unstructured volume datasets, which are still produced by numerical simulations in a variety of scientific and engineering fields. Although volume rendering is one of the most popular techniques for analyzing and exploring a given volume dataset, in the case of unstructured volume data the time-consuming visibility sorting becomes problematic as the data size increases. Focusing on effective volume rendering of large-scale distributed unstructured volume datasets generated in HPC environments, we opted for the well-known PBVR (Particle-based Volume Rendering) method. Although PBVR does not require any visibility sorting during rendering, the CPU-based approach has a notorious trade-off between image quality and memory consumption, because the entire set of intermediate rendering primitives (particles) has to be stored prior to the rendering stage. In order to reduce this pressure on memory consumption, we propose a fully parallel PBVR approach that eliminates the need to store these intermediate rendering primitives, as required by existing approaches. In the proposed method, each set of rendering primitives is directly converted to a partial image by the processes, and the partial images are then gathered and merged by the parallel image composition library 234Compositor. We evaluated the memory cost and processing time using a real CFD simulation result and verified the effectiveness of the proposed method compared to the existing parallel PBVR method.

Yoshiaki Yamaoka, Kengo Hayashi, Naohisa Sakamoto, Jorji Nonaka
A Transfer Entropy Based Visual Analytics System for Identifying Causality of Critical Hardware Failures (Case Study: CPU Failures in the K Computer)

Large-scale scientific computing facilities usually operate expensive HPC (High Performance Computing) systems whose computational and storage resources are shared among authorized users. On such shared-resource systems, continuous and stable operation is fundamental for providing the necessary hardware resources for the different user needs, including the large-scale numerical simulations that are the main targets of such facilities. For instance, the K computer installed at R-CCS (RIKEN Center for Computational Science) in Kobe, Japan, enables users to continuously run large jobs with tens of thousands of nodes (a maximum of 36,864 computational nodes) for up to 24 h, and a huge job using the entire system (82,944 computational nodes) for up to 8 h. Critical hardware failures can directly impact the affected job and may also indirectly impact the scheduled subsequent jobs. To monitor the health condition of the K computer and its supporting facility, a large number of sensors provide a vast amount of measured data. Since it is almost impossible to analyze all of these data in real time, the information is stored as log files for post-hoc analysis. In this work, we propose a visual analytics system which uses these big log data files to identify possible causes of critical hardware failures. We adopt the transfer entropy technique for quantifying the "causality" between a possible cause and a critical hardware failure. As a case study, we focused on critical CPU failures that required subsequent replacement, and utilized the log files of the measured temperatures of the cooling system, such as air and water. We evaluated the usability of the proposed system through practical evaluations by a group of experts who work directly on the K computer system operation. The positive and negative feedback obtained from this evaluation will be considered for future enhancements.

Kazuki Koiso, Naohisa Sakamoto, Jorji Nonaka, Fumiyoshi Shoji
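
A minimal transfer entropy estimator on binned time series is sketched below to make the "causality" quantification concrete; the binning, history length and estimator details are illustrative assumptions and may differ from the system described in the paper.

```python
import numpy as np
from collections import Counter

# Minimal sketch of transfer entropy on binned series (illustrative; the
# paper's estimator, lag choice and binning may differ).
def transfer_entropy(x, y, bins=4):
    """TE(X -> Y) with history length 1, in bits, from discretized samples."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))           # (y_{t+1}, y_t)
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))          # (y_t, x_t)
    singles_y = Counter(yd[:-1])                       # y_t
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te

# Usage: y is driven by x with a one-step delay, so TE(x -> y) > TE(y -> x).
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)
print("TE(x->y) =", round(transfer_entropy(x, y), 3),
      " TE(y->x) =", round(transfer_entropy(y, x), 3))
```
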
Modeling the Spread of Epidemic Diseases on ElasticStack-Based Simulation Output Analysis Environment

In this paper, we propose a simulation output analysis environment using the Elastic Stack in order to reduce the complexity of the simulation analysis process. The proposed environment automatically transfers simulation outputs to a centralized analysis server from a set of simulation execution resources physically separated over a network, manages the collected outputs so that further analysis tasks can be performed easily, and provides a connection to the analysis and visualization services of Kibana in the Elastic Stack. The proposed analysis environment is scalable, as computation resources can be added on demand. We demonstrate how the proposed environment can perform simulation output analysis effectively with an example of the spread of epidemic diseases such as influenza.

Kangsun Lee, Sungwoo Hwangbo
Backmatter
Title
Methods and Applications for Modeling and Simulation of Complex Systems
Edited by
Liang Li
Kyoko Hasegawa
Satoshi Tanaka
Copyright year
2018
Publisher
Springer Singapore
Electronic ISBN
978-981-13-2853-4
Print ISBN
978-981-13-2852-7
DOI
https://doi.org/10.1007/978-981-13-2853-4

