2021 | Book

Advances in Parallel & Distributed Processing, and Applications

Proceedings from PDPTA'20, CSC'20, MSV'20, and GCC'20

Editors: Hamid R. Arabnia, Leonidas Deligiannidis, Michael R. Grimaila, Douglas D. Hodson, Kazuki Joe, Masakazu Sekijima, Fernando G. Tinetti

Publisher: Springer International Publishing

Book Series: Transactions on Computational Science and Computational Intelligence

About this book

The book presents the proceedings of four conferences: the 26th International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'20); the 18th International Conference on Scientific Computing (CSC'20); the 17th International Conference on Modeling, Simulation and Visualization Methods (MSV'20); and the 16th International Conference on Grid, Cloud, and Cluster Computing (GCC'20). The conferences took place in Las Vegas, NV, USA, July 27-30, 2020, as part of the larger 2020 World Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE'20), which features 20 major tracks. Authors include academics, researchers, professionals, and students.

Presents the proceedings of four conferences as part of the 2020 World Congress in Computer Science, Computer Engineering, & Applied Computing (CSCE'20)
Includes the research tracks Parallel and Distributed Processing, Scientific Computing, Modeling, Simulation and Visualization, and Grid, Cloud, and Cluster Computing
Features papers from PDPTA'20, CSC'20, MSV'20, and GCC'20

Table of Contents

Frontmatter

Military and Defense Modeling and Simulation

Frontmatter
Julia and Singularity for High Performance Computing

High-performance computing (HPC) is pivotal in the advancement of modern science. Scientists, researchers, and engineers are finding an increasing need to process massive amounts of data and calculations faster and more accurately than ever before. This is especially true in our work of developing a general quantum library for researchers to use in their simulations. Much of this effort revolves around extracting as much of the performance offered by GPUs as possible. We have found that the relatively new programming language Julia offers us a productive means of development with minimal overhead. Combined with the Singularity container engine, it lets us ensure maximum distributability and reproducibility.

Joseph Tippit, Douglas D. Hodson, Michael R. Grimaila
Trojan Banker Simulation Utilizing Python

One of the most malicious types of malware used by hackers today is the Trojan. Trojans come in many forms and are used to steal various types of information from a user. A Trojan Banker tricks a user by posing as something useful while secretly stealing account information and delivering it to the hacker's computer. Most of these attacks are delivered via social engineering. In this paper, we present a simulation that identifies what types of social engineering techniques are used to gain access to a user's computer and then measures how long machines at varying levels of security take to discover and remove a Trojan from their systems. The simulation shows how difficult it is for an active Trojan to go undetected by a machine, as well as how much information a Trojan can steal before being caught.
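
A minimal, illustrative sketch of the kind of detection-time model this abstract describes might look as follows; the security levels, detection probabilities, and theft rates below are hypothetical placeholders, not the authors' code or parameters.

```python
import random

# Hypothetical per-tick detection probabilities and data-theft rate for
# machines at different security levels (illustrative values only).
SECURITY_LEVELS = {"low": 0.01, "medium": 0.05, "high": 0.15}
RECORDS_STOLEN_PER_TICK = 10

def simulate_trojan(security_level, max_ticks=10_000, seed=None):
    """Return (ticks_until_detection, records_stolen) for one simulated run."""
    rng = random.Random(seed)
    p_detect = SECURITY_LEVELS[security_level]
    stolen = 0
    for tick in range(1, max_ticks + 1):
        stolen += RECORDS_STOLEN_PER_TICK      # Trojan exfiltrates data each tick
        if rng.random() < p_detect:            # a defender scan catches the Trojan
            return tick, stolen
    return max_ticks, stolen                   # never detected within the horizon

if __name__ == "__main__":
    for level in SECURITY_LEVELS:
        runs = [simulate_trojan(level, seed=i) for i in range(1000)]
        avg_ticks = sum(t for t, _ in runs) / len(runs)
        avg_stolen = sum(s for _, s in runs) / len(runs)
        print(f"{level:>6}: mean detection time {avg_ticks:.1f} ticks, "
              f"mean records stolen {avg_stolen:.0f}")
```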

Drew Campbell, Jake Hall, Iyanuoluwa Odebode, Douglas D. Hodson, Michael R. Grimaila
CovidLock Attack Simulation

CovidLock is a new form of ransomware taking advantage of the panic brought on by COVID-19. It tricks users into downloading an app claiming to track the pandemic. When downloaded, this app encrypts the files on the user's device and changes the device's password, demanding a ransom in cryptocurrency to regain access. As the problem of ransomware grows, new ways of mitigating it must arise. We present a description of CovidLock, a new mitigation method for ransomware and other malware, and the reasons why our method improves on existing methods.

Amber Modlin, Andrew Gregory, Iyanuoluwa Odebode, Douglas D. Hodson, Michael R. Grimaila
The New Office Threat: A Simulation of Watering Hole Cyberattacks

The focus of this paper is to develop a DEVS-style simulation model to manipulate common variables in an advanced persistent threat (APT)-style watering hole attack, a style of attack that targets an organization or group by infecting a commonly used website or service. A simulation of an environment exposed to this specific attack was developed in Python, with variables for target group size, number of trusted sites, and duration of the attack before discovery. Analysis of simulation averages suggests that the size of the target group and the duration of the attack are the most important factors in the spread of the malware, though for each category the returns on speed of infection diminish as the size and duration of the overall control groups increase.

Braeden Bowen, Jeremy Eraybar, Iyanuoluwa Odebode, Douglas D. Hodson, Michael R. Grimaila
Simulation of SYN Flood Attack and Counter-Attack Methods Using Average Connection Times

While the SYN flood attack leveraging the TCP protocol is not new, this method is still by far the most popular attack type. In this paper, a simulation of a TCP server is introduced, as well as three different ways to gather data on connection times and the number of unresolved connection requests in the server. The results show that the most efficient approach is to use the average of only successful handshake connections as the server time-out.
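
A small sketch of the time-out idea the abstract reports, assuming a toy connection model (all parameters and the 2x safety margin are illustrative, not the authors' simulation):

```python
import random

def simulate_handshakes(n_clients=10_000, attack_fraction=0.3, seed=0):
    """Generate per-connection outcomes: (handshake_completed, duration_seconds)."""
    rng = random.Random(seed)
    connections = []
    for _ in range(n_clients):
        if rng.random() < attack_fraction:
            connections.append((False, None))            # spoofed SYN: ACK never arrives
        else:
            connections.append((True, rng.uniform(0.01, 0.5)))  # legitimate client
    return connections

def timeout_from_successful_handshakes(connections, margin=2.0):
    """Use the mean duration of completed handshakes (times a safety margin)
    as the server's half-open connection time-out."""
    durations = [d for ok, d in connections if ok]
    return margin * sum(durations) / len(durations)

if __name__ == "__main__":
    conns = simulate_handshakes()
    print(f"suggested time-out: {timeout_from_successful_handshakes(conns):.3f} s")
```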

Hai Vo, Raymond Kozlowski, Iyanuoluwa Odebode, Douglas D. Hodson, Michael R. Grimaila

Computational Intelligence, Data Science, HPC, Optimization and Applications

Frontmatter
Dielectric Polymer Genome: Integrating Valence-Aware Polarizable Reactive Force Fields and Machine Learning

Informatics-driven computational design of advanced dielectric polymers (i.e., a dielectric polymer genome) has remained a challenge. We have developed a computational framework for (i) high-throughput computational synthesis of polymer structures, (ii) evaluation of their dielectric properties using reactive molecular dynamics (RMD) simulations based on a new valence-aware polarizable reactive force field (ReaxPQ-v), and (iii) learning polymer structure–property relationships using machine-learning (ML) models. The resulting large simulated training dataset provides an unprecedented opportunity to uncover hitherto-unknown structure–property relationships purely computationally, thereby predicting new polymers with desired dielectric properties. Employing a dataset of 1276 structurally diverse polymers, multilayer perceptron (MLP) and random forest models achieved good accuracy in predicting the dielectric constants of these polymers, while a recurrent neural network (RNN) model is being developed. Such ML prediction models are indispensable for further enlarging the search space for superior dielectric polymers by orders of magnitude.
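
As a rough sketch of the random-forest structure–property step with scikit-learn: the descriptors, targets, and model settings below are placeholders standing in for the authors' simulated dataset, not their actual pipeline.

```python
# Sketch of the structure-property regression step with scikit-learn.
# X would hold per-polymer structural descriptors and y the dielectric
# constants from the RMD simulations; random data stands in here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1276, 32))                              # placeholder descriptors
y = 2.0 + 0.5 * X[:, 0] + rng.normal(scale=0.1, size=1276)   # placeholder targets

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", r2_score(y_te, model.predict(X_te)))
```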

Kuang Liu, Antonina L. Nazarova, Ankit Mishra, Yingwu Chen, Haichuan Lyu, Longyao Xu, Yue Yin, Qinai Zhao, Rajiv K. Kalia, Aiichiro Nakano, Ken-ichi Nomura, Priya Vashishta, Pankaj Rajak
A Methodology to Boost Data Science in the Context of COVID-19

As the importance of data science increases, projects involving data science and machine learning are growing in both number and complexity. It is essential to employ a methodology that can contribute to improving their outputs, and it is therefore crucial to identify possible approaches. An overview of the evolution of data mining process models and methodologies is given for context, and the analysis showed that the methodologies covered were not complete. We therefore propose a new approach to tackle this problem: POST-DS (Process Organization and Scheduling electing Tools for Data Science), a process-oriented methodology to assist the management of data science projects. This approach is supported not only by the process but also by organization, scheduling, and tool selection. The methodology was employed in the context of COVID-19.

Carlos J. Costa, Joao Tiago Aparicio
Shallow SqueezeNext Architecture Implementation on BlueBox2.0

Machine learning and its applications, such as ADAS, embedded computer vision, and image and object detection, have made self-driving car applications possible and safer. Major hindrances to ADAS deployment are limited computational and memory resources. With the help of DSE of CNN/DNN architectures, the Shallow SqueezeNext architecture is proposed, which overcomes the limitations of traditional algorithms. It achieves the smallest model size of 272 KB with a model accuracy of 82% and a model speed of 9 s per epoch, making it capable of being deployed efficiently on the real-time platform BlueBox2.0 by NXP with a model size of 0.531 MB and a model accuracy of 87.30% at a model speed of 11 s per epoch.

Jayan Kant Duggal, Mohamed El-Sharkawy
Dark Data: Managing Cybersecurity Challenges and Generating Benefits

Data science plays an important role in cybersecurity by utilizing the power of data, high-performance computing, and data mining to protect users against cybercrimes and threats. Due to the rapid expansion of data, organizations hold hidden dark data that is unidentified and unmanaged. It is important to understand the various prospects that will arise through the utilization of dark data by organizations, which will find these data valuable to their business intelligence. In this chapter, we introduce a methodology that identifies dark data and forecasts future cybersecurity threats.

Haydar Teymourlouei, Lethia Jackson
Implementing Modern Security Solutions for Challenges Faced by Businesses in the Internet of Things (IoT)

The Internet of Things (IoT) connects various nonliving objects through the Internet and enables them to share information with their community network to automate processes for humans and make their lives easier. The use of IoT is increasing among businesses and individuals, and businesses are currently increasing their investment in IoT for use in the coming years. The usage of IoT is growing in marketing, given the vast benefits that IoT offers. For businesses to fully use IoT and realize its benefits, companies will need to change their business processes. However, they will need to overcome three major challenges: security, privacy, and network challenges. This chapter presents these challenges of IoT and their solutions.

Haydar Teymourlouei, Daryl Stone
Trusted Reviews: Applying Blockchain Technology to Achieve Trusted Reviewing System

In today's flourishing e-business environment, reviews play a significant role in our daily choices of products and services. Reviews can be described as trusted if their source is known, authentic, and reliable. Trusted Reviews aims to improve the authenticity and quality of customer reviews submitted to the system. To achieve this, Blockchain technology is implemented for its unique characteristics, such as the immutability of data, to prevent fake reviews. To encourage members to write legitimate reviews, Thiqah (Trust Credit) is used as an incentive, serving a significant role in our reward system. Consequently, more genuine reviews will be submitted, improving the legitimacy of the system's reputation and enhancing the members' experience. The model has been tested using Ethereum for decentralized applications. It starts with writing a smart contract that contains the rules and conditions that identify a review as trusted, for example, reaching a certain number of likes. Upon satisfying the contract conditions, a block containing all details of the review is created and added to the blockchain, and the writer of the review is awarded a Thiqah credit point. The implemented solution will help business owners gain a good reputation and increase customer trust.

Areej Alhogail, Ghadah Alhudhayf, Jood Alanzy, Jude Altalhi, Shahad Alghunaim, Shahad Alnasser
Large-Scale Parallelization of Lattice QCD on Sunway TaihuLight Supercomputer

Lattice quantum chromodynamics is an important method for studying the strong interaction through large-scale Monte Carlo numerical simulations. Common computing platforms struggle to meet the needs of large-scale, high-precision lattice quantum chromodynamics calculations. The Sunway TaihuLight supercomputer, based on the SW26010 heterogeneous many-core processor, can provide sufficient computing power, but applications still need large-scale parallelization and performance optimization according to its unique hardware architecture. Through the analysis and improvement of previous work, we propose a new grid point data distribution method and perform efficient parallel computing. The lattice QCD application achieved a peak performance of 139.147 TFlops using 1,347,840 cores on the Sunway TaihuLight supercomputer and can maintain performance as the scale grows.

Ailin Xu, Zhongzhi Luan, Ming Gong, Xiangyu Jiang

Scientific Computing, Modeling and Simulation

Frontmatter
Reverse Threat Modeling: A Systematic Threat Identification Method for Deployed Vehicles

During the development phase of a vehicle, threat and risk analyses are common methods to identify potential security threats and derive applicable security countermeasures. These measures form the basis for mitigating the risk of a successful attack and are integrated and tested during the development phase of the vehicle. However, over the whole vehicle life cycle, from concept phase until decommissioning, new attack methods that were not known at the time of design might be developed that allow exploitation of the system. Intuitively, threat and risk assessment on a regular basis, even after deployment, is desirable. In this context, the present paper proposes a systematic threat modeling method featuring a threat identification process for deployed vehicles.

Mona Gierl, Reiner Kriesten, Peter Neugebauer, Eric Sax
PRNG-Broker: A High-Performance Broker to Supply Parallel Streams of Pseudorandom Numbers for Large-Scale Simulations

The generation of large streams of pseudorandom numbers may lead to performance degradation in simulation applications. Both the PRN generator and how it is used impact the efficiency of generating multiple PRNs. A PRNG-Broker was developed for parallel servers with or without accelerators, which transparently manages the efficient execution of PRNG implementations from CPU/GPU libraries, with an intuitive API that replaces the user's PRNG requests. PRNG-Broker allows the development of efficient PRN-intensive applications without the need for explicit parallelization and optimization of PRNG tasks. It was validated with scientific analyses of proton beam collisions from CERN, which require 30 Ki PRNs per collision. Outcomes show a performance boost over the original code: a 48x speedup on a 2x12-core server and over a 70x speedup when using a GPU.
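
The PRNG-Broker API itself is not reproduced in this abstract; as a loose analogue, the sketch below shows a broker-style object in NumPy that hands out independent, pre-generated blocks of PRNs so that user code never touches the generator directly. The class name and methods are hypothetical, not the authors' interface.

```python
# Hypothetical analogue of a PRNG broker: it owns a seed sequence and yields
# independent streams of pseudorandom-number blocks on demand.
import numpy as np

class PRNGBroker:
    def __init__(self, seed=12345, block_size=1 << 20):
        self._seed_seq = np.random.SeedSequence(seed)
        self._block_size = block_size

    def new_stream(self):
        """Return a generator yielding blocks of PRNs from an independent stream."""
        child, = self._seed_seq.spawn(1)      # statistically independent substream
        rng = np.random.default_rng(child)
        while True:
            yield rng.random(self._block_size)

broker = PRNGBroker()
stream = broker.new_stream()
block = next(stream)                          # one block of ~1M uniform PRNs
print(block.mean())
```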

Andre Pereira, Alberto Proenca
Numerical Modeling of a Viscous Incompressible Fluid Flow in a Channel with a Step

A stable fourth-order finite difference scheme is applied for solving the problem of steady-state, viscous, incompressible flow through a channel with a step. Results are presented for various step sizes and Reynolds numbers. The method converged for all parameters attempted and the results compared favorably with the literature, from a theoretical, numerical, and experimental standpoint.

Saeed M. Dubas, Paul Bouthellier, Nihal Siriwardana, Laura Wieserman
Modeling, Simulation, and Verification for Structural Stability and Vibration Reduction of Gantry Robots for Shipyard Welding Automation Using ANSYS Workbench® and Recurdyn®

With the strengthening of domestic and foreign environmental regulations in recent times, the disposal of small scrapped FRP (fiber-reinforced plastic) vessels has become a problem, and there is growing interest in eco-friendly aluminum vessels. Welding is one of the important steps in the manufacturing process of aluminum vessels. This chapter researches the structural stability and vibration reduction of a three-axis Cartesian-coordinate gantry robot for improving welding quality. Structural instability and the drive unit of each axis can contribute negatively to vibration. Structural analysis is performed on parts that can cause structural instability under load, verifying the structural stability of the gantry robot using ANSYS Workbench®, based on models created for analysis in SolidWorks®. In addition, the driving parts of the x and y axes are rack-and-pinion gear models, which have the disadvantage of vibration caused by backlash. A simulation to reduce the vibration caused by backlash is conducted using Recurdyn®. By simulating with the added pinion gear model, we investigate how the model affects the backlash vibration.

Seung Min Bae, Won Jee Chung, Hui Geon Hwang, Yeon Joo Ahn
Long Short-Term Memory Neural Network on the Trajectory Computing of Direct Dynamics Simulation

Direct dynamics simulation is widely used in quantitative structure–activity relationships, virtual screening, protein structure prediction, quantum chemistry, materials design, property prediction, etc. This paper explores the idea of integrating long short-term memory (LSTM) networks with the trajectory computing of direct dynamics simulations to enhance the performance of the simulation and improve its usability for research and education. The idea is successfully used to predict the location, energy, and Hessian of atoms in a CO2 reaction system. The results demonstrate that the artificial neural network-based memory model successfully learns the desired features associated with the atomic trajectory and rapidly generates predictions that are in excellent agreement with the results from chemical dynamics simulations. The accuracy of the prediction is better than expected.
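
A minimal PyTorch sketch of the sequence-to-next-step idea: an LSTM reads a window of past trajectory frames and predicts the next one. Dimensions, layer sizes, and feature layout are placeholders, not the model described in the chapter.

```python
# Minimal sketch: an LSTM reads a window of past trajectory frames
# (coordinates/energies flattened into a feature vector) and predicts the
# next frame. All dimensions below are illustrative placeholders.
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    def __init__(self, n_features=9, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, window):              # window: (batch, time, n_features)
        out, _ = self.lstm(window)
        return self.head(out[:, -1])        # predict the next frame

model = TrajectoryLSTM()
past = torch.randn(8, 20, 9)                # 8 windows of 20 frames each
next_frame = model(past)                    # (8, 9)
print(next_frame.shape)
```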

Fred Wu, Tejaswi Jonnalagadda, Colmenares-diaz Eduardo, Sailaja Peruka, Poojitha Chapala, Pooja Sonmale
Evaluating the Effect of Compensators and Load Model on Performance of Renewable and Nonrenewable DGs

Multi-distributed generation (DG) and compensators are practical ways to improve the performance of a power network. Although each of these devices increases the efficiency of the distribution system, the combination of DGs and compensators can be even more useful. In this study, the performance of a capacitor bank (as a traditional compensator) and a DSTATCOM (as a modern compensator) is evaluated in a distribution system with multiple DGs. The load model of the network is considered sensitive to voltage and frequency, with various customers' daily load patterns. To obtain the best result, simultaneous placement of different types of DGs and compensators is performed by combining the multi-objective whale optimization algorithm with the analytic hierarchy process. Technical, economic, and environmental indices are considered as objective functions of this study. Simulation results on the IEEE 69-bus distribution system show the proper performance of the compensators in increasing the efficiency of DG units under different operating conditions.

H. Shayeghi, H. A. Shayanfar, M. Alilou
The Caloric Curve of Polymers from the Adaptive Tempering Monte Carlo Method

Conductive polymers are organic conjugated polymer chains with semiconducting ability that display unique mechanical properties without being thermoformable. Here we present a novel coarse-grained force field for modeling the oxidized phase of polypyrrole containing electronegative atomic dopants. The polypyrrole oligomers in this study are 12 monomers long with a doping concentration of 25%. The polymer properties are determined using isothermal-isobaric adaptive tempering Monte Carlo and Metropolis Monte Carlo with codes optimized for GPUs. Several thermodynamic and mechanical properties are calculated along the caloric curve. When compared with experiments, densities and bulk moduli agree very well, yielding values in the ranges of 1.20–1.22 g/cm3 and 67–120 MPa, respectively. Compared with our published model potential for the neutral polypyrrole phase, the oxidized phase presents about a 30% increase in density, which is also in agreement with experiments. The computational implementation is easily portable for the inspection of other polymeric materials.

Greg Helmick, Yoseph Abere, Estela Blaisten-Barojas

Scientific Computing, Computational Science, and Applications

Frontmatter
A New Technique of Invariant Statistical Embedding and Averaging in Terms of Pivots for Improvement of Statistical Decisions Under Parametric Uncertainty

In this chapter, a new technique of invariant embedding of sample statistics in a decision criterion (performance index) and averaging this criterion via pivotal quantities (pivots) is proposed for intelligently constructing efficient (optimal, uniformly non-dominated, unbiased, improved) statistical decisions under parametric uncertainty. This technique represents a simple and computationally attractive statistical method based on the constructive use of the invariance principle in mathematical statistics. Unlike the Bayesian approach, the technique of invariant statistical embedding and averaging in terms of pivotal quantities (ISE&APQ) is independent of the choice of priors and represents a novelty in the theory of statistical decisions. It allows one to eliminate unknown parameters from the problem and to find efficient statistical decision rules, which often have smaller risks than any of the well-known decision rules. The aim of this chapter is to show how the technique of ISE&APQ may be employed in particular cases of optimization, estimation, or improvement of statistical decisions under parametric uncertainty. To illustrate the proposed technique of ISE&APQ, application examples are given.

Nicholas A. Nechval, Gundars Berzinsh, Konstantin N. Nechval
A Note on the Sensitivity of Generic Approximate Sparse Pseudoinverse Matrix for Solving Linear Least Squares Problems

During the last decades, research efforts have been focused on the derivation of effective explicit preconditioned iterative methods. In this manuscript, we review the Explicit Preconditioned Conjugate Gradient Least Squares method, based on generic sparse approximate pseudoinverses, in conjunction with approximate pseudoinverse sparsity patterns, based on the modified row-threshold incomplete QR factorization techniques. Additionally, modified Moore-Penrose conditions are presented, and theoretical estimates for the sensitivity of the generic approximate sparse pseudoinverses are derived. Finally, numerical results concerning the generic approximate sparse pseudoinverses by solving characteristic model problems are given. The theoretical estimates were in qualitative agreement with the numerical results.

A. D. Lipitakis, G. A. Gravvanis, C. K. Filelis-Papadopoulos, D. Anagnostopoulos
Undergraduate Research: Bladerunner

In this project, students in the Electrical Engineering Technology Department at Kennesaw State University worked under the guidance of advising professor Daren Wilcox and Mr. John Cohran, an automation engineer at Omni International Inc., to develop a prototype of the second generation of an automated liquid handling machine in current production called the Prep96. The prototype developed was called "Bladerunner." The group researched the Prep96 program to study its operation and worked with Omni International and the Festo Corporation to develop a prototype code, schematic, and simulation using a FESTO PLC-HMI. Students established communication as well as autonomous and teleoperated control between devices. What follows is a summary of the current Prep96 industrial design, the Bladerunner prototype design, the components used in the Bladerunner design, a description of the software used, the prototype schematic, the results, and supporting datasheets.

Adina Paddy, Cha Xiong, Colt Henderson, Tuu Le, Daren Wilcox
Comparison of the IaaS Security Available from the Top Three Cloud Providers

Cloud providers have simplified the ability for any customer to generate Infrastructure as a Service (IaaS) machines in a few hours. Unlike the purchase of traditional hardware, securing the computer is now a shared responsibility between the customer and the provider. The delineation of which security controls are the responsibility of the cloud provider will be presented. Unique cloud-related security issues will be discussed. Security controls available from Amazon, Google, and Azure will be compared. This discussion will conclude with recommendations for how businesses can optimize security using cloud providers.

L. Kate Tomchik
Orientation and Line Thickness Determination in Binary Images

This paper addresses the problems of orientation determination of lines in binary images and the determination of line thickness. The orientation problem utilizes the Radon transform, while the line thickness problem determines the thickness of lines at selected angles by considering the pattern of the pixels of those lines. The Radon transform maps a line at a given angle to a point in feature space (also known as the sinogram). The sinogram is typically generated for a wide range of angles (from 0 to 179 degrees in this case). Consequently, lines at particular angles will map to points whose sinogram value is greater than that of other points in the sinogram, thereby generating local peaks in the sinogram.
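
A minimal sketch of the sinogram-peak idea described above, using scikit-image's Radon transform on a synthetic binary image; it is an illustration of the general technique, not the chapter's own implementation.

```python
# Project a binary image over 0-179 degrees and read off the angle at which
# the sinogram peaks, i.e., the orientation of the dominant line.
import numpy as np
from skimage.transform import radon

image = np.zeros((128, 128))
image[:, 64] = 1.0                          # a single straight line

theta = np.arange(180)                      # 0..179 degrees
sinogram = radon(image, theta=theta)        # rows: projection position, cols: angle
peak_angle = theta[np.argmax(sinogram.max(axis=0))]
print("dominant line orientation:", peak_angle, "degrees")
```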

Sean Matz
Greedy Navigational Cores in the Human Brain

Greedy navigation/routing plays an important role in geometric routing of networks because of its locality and simplicity. It can operate in geometrically embedded networks in a distributed manner; distances are calculated based on the coordinates of network nodes for choosing the next hop in the routing. Based only on node coordinates in any metric space, the Greedy Navigational Core (GNC) can be identified as the minimum set of links between these nodes that provides 100% greedy navigability. In this paper, we present results on structural greedy navigability, the level of presence of Greedy Navigational Cores in structural networks of the human brain.

Zalán Heszberger, András Majdán, András Biró, András Gulyás, László Balázs, Vilmos Németh, József Bíró
A Multicommodity Flow Formulation and Edge Exchange Heuristic Embedded in Cross Decomposition for Solving Capacitated Minimum Spanning Tree Problem

This chapter presents a new mathematical formulation, based on multicommodity flow, for the classical capacitated minimum spanning tree problem. It also demonstrates that the performance of Van Roy's cross decomposition algorithm for solving the capacitated minimum spanning tree problem can be significantly improved by incorporating an edge exchange heuristic algorithm, at a tremendous saving in computational effort. The results also reveal that the proposed algorithm is very competitive with the original Lagrangian algorithm in terms of solution quality. The new formulation and the proposed algorithm, which take better advantage of the problem structure, especially that of the dual subproblem, provide a large potential for improvement.

Han-Suk Sohn, Dennis Bricker
Elemental Analysis of Oil Paints

Paintings have a long history and significant cultural value. Digital image processing has been introduced to analyze and identify paintings. As an important characteristic of images, image histograms are used to distinguish basic pure and mixed pigments. In this paper, we have investigated the peak locations of the image histograms of 21 fundamental pigments comprising pure pigments and mixed pigments. Whether pure or mixed, the pigments' histograms have unique peak locations. Our research indicates that fundamental pigments can be effectively distinguished and separated according to their own image histograms.

Shijun Tang, Rosemarie C. Chinni, Amber Malloy, Megan Olsson

High-Performance Computing, Parallel and Distributed Processing

Frontmatter
Toward a Numerically Robust and Efficient Implicit Integration Scheme for Parallel Power Grid Dynamic Simulation Development in GridPACKTM

GridPACKTM is a highly modular parallel computing package for developing power grid simulations that run on high-performance computing platforms. As one of the key modules in GridPACK, dynamic simulation assesses the transient behavior of power systems and plays an important role in determining the performance of dynamic security assessment applications which rely heavily on the computational speed and scalability of dynamic simulation. This paper presents an ongoing effort on redesigning the existing “fixed step” modified Euler explicit numerical integration scheme-based dynamic simulation module in GridPACK to incorporate numerically robust and efficient “variable step” implicit integration schemes. Promising computational performance over the explicit integration method in addition to the improved usability is presented in the paper as the outcome of this preliminary study.

Shuangshuang Jin, Shrirang G Abhyankar, Bruce J Palmer, Renke Huang, William A Perkins, Yousu Chen
Improving Analysis in SPMD Applications for Performance Prediction

The analysis of parallel scientific applications allows us to know the details of their behavior. One way of obtaining this information is through performance tools. One such tool is PAS2P, which is based on parallel application repeatability, focusing on performance analysis and prediction using the application signature. The analysis is performed using the same execution resources as the parallel application to create a machine-independent model and identify common patterns. The analysis stage of the PAS2P tool is costly in terms of runtime due to the high number of communications it performs, and performance degrades as the number of execution processes increases. To solve this problem, we propose an analyzer module that reduces the data dependency between processes, reducing the number of communications by taking advantage of the characteristics of SPMD applications. Our proposal allowed us to decrease the analysis time as the application scales.

Felipe Tirado, Alvaro Wong, Dolores Rexachs, Emilio Luque
Directive-Based Hybrid Parallel Power System Dynamic Simulation on Multi-core CPU and Many-Core GPU Architecture

High-performance computing (HPC)-based simulation tools for large-scale power grids are important to improving the resiliency and reliability of the future energy sector. However, application development complexity, hardware adoption, and the maintenance cost of large HPC facilities have hindered the wide utilization and quick commercialization of HPC applications. This paper presents a hybrid implementation of power system dynamic simulation, a time-critical function for transient stability analysis, using directive-based parallel programming models to showcase the advantage of leveraging multi-core CPU and many-core GPU computing, with superior floating-point acceleration performance and a cost-effective architecture, to lower this barrier. Real-time modeling and simulation with minimal modifications to the legacy sequential program are achieved, with significant speedups on two test cases.

Cong Wang, Shuangshuang Jin, Yousu Chen
Parallel Computation of Gröbner Bases on a Graphics Processing Unit

Solving polynomial systems of equations with many variables and high degrees is not a simple computation. One method of solving these systems is to transform them into a Gröbner basis. Gröbner bases have desirable mathematical properties that make it possible to solve systems of polynomial equations. The computations necessary to produce Gröbner bases are many and can sometimes take days if not longer, and existing implementations are unable to handle the high-degree polynomial systems necessary for specialized applications. Graphics Processing Units specialize in fast parallel computations by using many (several hundred to a couple thousand) computing cores. Utilizing these cores in parallel, when optimized properly, allows difficult problems to be solved much more quickly than on a Central Processing Unit. The goal of this project is to implement a Gröbner basis algorithm that is optimized for GPUs, which in turn will allow for faster computations of Gröbner bases.
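
The GPU implementation discussed in the abstract is not shown here; as a CPU-side point of reference, the SymPy snippet below illustrates what transforming a small polynomial system into a Gröbner basis looks like.

```python
# CPU-side reference with SymPy: transform a small polynomial system into a
# Groebner basis (lexicographic order), from which solutions can be read off
# by back-substitution. This is not the paper's GPU algorithm.
from sympy import symbols, groebner

x, y = symbols("x y")
system = [x**2 + y**2 - 4, x*y - 1]
basis = groebner(system, x, y, order="lex")
for g in basis:
    print(g)
```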

Mark Hojnacki, Andrew Leeseberg, Jack O’Shaughnessy, Michael Dauchy, Alan Hylton, Leah Gold, Janche Sang
Single Core vs. Parallel Software Algorithms on a Multi-core RISC Processor

Algorithms with order-independent instructions have the opportunity to be executed on multiple cores at the same time in a process called parallelism. For this reason, multi-core processors are the standard in contemporary computer architecture. To understand the benefits and drawbacks of multi-core processors, we analyzed the performance of three algorithms that are important workloads in the general use of computers – sorting, password hashing, and graphics rendering – when computed as single core and multi-core workloads. We found that in the first and last examples for small workloads, the benefits of parallelism did not outweigh the performance drawbacks of coordination, but in the second example, they did.

Austin White, Michael Galloway
MPI Communication Performance in a Heterogeneous Environment with Raspberry Pi

The Raspberry Pi SBC (single-board computer) is being used for distributed memory parallel computing mainly as a low-cost teaching environment and as a low-energy consumption/green computing platform. In this chapter, we take a heterogeneous approach, where the Raspberry Pi is used along with standard computers. In the heterogeneous environment, computing as well as communication performance has to be taken into account in order to get the best results. In this chapter, we focus on the work on communication performance because it provides one of the best guidelines for successful granularity of parallel computing. We have carried out our experiments with a standard MPI (message passing interface) implementation, as well as using the currently most powerful Raspberry Pi models, in order to analyze the communication performance. We have experimented with classical Send-Receive MPI operations and the so-called one-sided MPI communication operations. Also, we document several details specifically related to the heterogeneous configuration environment that we found necessary for interoperation of MPI.
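
A minimal mpi4py ping-pong sketch of the classical Send-Receive measurement mentioned above; it illustrates the kind of experiment described, not the authors' benchmark code, and the message size and repetition count are arbitrary.

```python
# Minimal mpi4py ping-pong between ranks 0 and 1, of the kind used to measure
# Send/Receive latency and bandwidth in a heterogeneous cluster.
# Run with, e.g.:  mpirun -np 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
reps, size = 100, 1 << 20                 # 100 round trips of 1 MiB
buf = np.zeros(size, dtype=np.uint8)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    print(f"round-trip time: {(t1 - t0) / reps * 1e6:.1f} us for {size} bytes")
```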

Oscar C. Valderrama Riveros, Fernando G. Tinetti
A FPGA-Based Heterogeneous Implementation of NTRUEncrypt

Nowadays, lattice-based cryptography is believed to be capable of thwarting future quantum computers. The NTRU (Nth-degree truncated polynomial ring unit) encryption algorithm, abbreviated as NTRUEncrypt, belongs to the family of lattice-based public-key cryptosystems. Compared to other asymmetric cryptosystems such as RSA and elliptic curve cryptography (ECC), the encryption and decryption operations of NTRU rely mainly on basic polynomial multiplication, which makes it faster than those alternatives. This paper proposes the first heterogeneous implementation of NTRUEncrypt on an FPGA (Altera Stratix V) and a CPU using OpenCL, which shows that this kind of lattice-based cryptography lends itself excellently to parallelization and achieves high throughput as well as energy efficiency.

Hexuan Yu, Chaoyu Zhang, Hai Jiang
High-Performance and Energy-Efficient FPGA-GPU-CPU Heterogeneous System Implementation

Since Moore's law is slowing down, CPU optimizations and multi-core architectures are exposing more and more limitations in energy efficiency and high performance. No single architecture can be best for every workload due to the incredible diversity of workloads. Inspired by GPUs, which have been widely deployed as accelerators in the past few years to speed up different types of tasks, FPGA-GPU (field-programmable gate array and graphics processing unit) heterogeneous computing can optimize the traditional system architecture. In this paper, we port six benchmark kernels to an FPGA-GPU-CPU heterogeneous system, selecting the most suitable hardware architecture for every task, and implement performance-oriented and energy-efficiency-oriented kernel launches on this system. Due to recent improvements in high-level synthesis and the Intel FPGA SDK for OpenCL, it is convenient for FPGAs to cooperate with GPUs within a heterogeneous computing system.

Chaoyu Zhang, Hexuan Yu, Yuchen Zhou, Hai Jiang
Preliminary Performance and Programmability Comparison of the Thick Control Flow Architecture and Current Multicore CPUs

Multicore CPUs integrate a number of processor cores on a single chip to support parallel execution of computational tasks. These CPUs improve the performance over single-core processors for independent parallel tasks nearly linearly as long as the memory bandwidth is sufficient. Speedup is, however, difficult to obtain when dense intercommunication between the cores is required. This forces programmers to use more complex and error-prone programming techniques instead of straightforward parallel processing patterns. To solve these problems, we have introduced the Thick Control Flow (TCF) Processor Architecture (TPA). A TCF is an abstraction of parallel computation that combines self-similar threads into computational entities. While there are already a number of performance studies for TPA, it is not known how well TPA performs against commercial multicores. In this paper, we compare the performance and programmability of TPA and Intel Skylake multicore CPUs with kernel programs. Code examples and qualitative observations on the included programming approaches are given.

Martti Forsell, Sara Nikula, Jussi Roivainen

Communication Strategies, Internet Computing, Cloud, and Computational Science

Frontmatter
Refactor Business Process Models with Redundancy Elimination

Since business processes are important assets, enterprises must be able to deal with their quality issues. Since understandability is one important quality criterion, a question that arises here is how to improve the understandability of these models. In this paper, we propose a novel approach to refactoring business process models represented as Petri nets with redundancy elimination to improve their understandability. More specifically, we first propose a process model smell for identifying redundant elements in a business process model using the unfolding technique, where the metric of this smell is an implicit place (IP). To avoid the state explosion problem caused by concurrency, we present a novel algorithm for computing an IP from the complete finite prefix unfolding (CFPU) rather than the reachability graph (RG) of a net system. Then, we propose three refactoring operations to eliminate an IP from the business process model without changing its external behavior. After refactoring, the size of the model is decreased so that the model is easier to understand; that is, the understandability of the model is improved. Experiments show that our approach can eliminate IPs from business process models efficiently and preserve the behavior of these models.

Fei Dai, Huihui Xue, Zhenping Qiang, Lianyong Qi, Mohammad R. Khosravi, Zhihong Liang
A Shortest-Path Routing Algorithm in Bicubes

Recently, an explosive increase in demand for space- and time-consuming computation has stimulated research activities on massively parallel systems. Because in a massively parallel system a huge number of processors cooperate to process tasks by communicating with one another, they form an interconnection network, which is a network that interconnects the processors. By replacing processors and links with vertices and edges, respectively, many problems regarding communication and/or routing in interconnection networks are reducible to problems in graph theory. Many topologies have been proposed for interconnection networks of massively parallel systems. The hypercube is the most popular topology, and many variants have been proposed. The bicube is one such topology, which can connect the same number of vertices with the same degree as the hypercube while its diameter is almost half that of the hypercube, keeping the vertex-symmetric property. Therefore, we focus on the bicube and propose a shortest-path routing algorithm. We give a proof of correctness of the algorithm and demonstrate its execution.

Masaaki Okada, Keiichi Kaneko
An NPGA-II-Based Multi-objective Edge Server Placement Strategy for IoV

With the emergence of crowded traffic conditions, edge computing has appeared to deal with resource provision in the Internet of Vehicles (IoV). Tasks are offloaded from the vehicles to nearby roadside units (RSUs) and transferred from the RSUs to edge servers (ESs) for computing. Since the total number of ESs is constant, placing an ES in a remote area would improve the coverage rate, while the workload variance of the ESs and the waiting time of the tasks deteriorate. An ideal ES location is supposed to achieve a balance among these three aspects. Therefore, an NPGA-II-based multi-objective edge server placement strategy, named NMEPS, is proposed in this paper to obtain proper schemes for ES placement. Technically, the coverage rate, the workload variance of the ESs, and the waiting time of the tasks are formulated as fitness functions. Then, the niched Pareto genetic algorithm II (NPGA-II) and a roulette algorithm are applied to seek out optimal solutions for ES placement. Furthermore, an evaluation function is designed to assess the performance of the solutions obtained. Finally, experimental evaluations are conducted to prove the validity of this method using big data from Nanjing, China.

Xuan Yan, Zhanyang Xu, Mohammad R. Khosravi, Lianyong Qi, Xiaolong Xu
Automatic Mapping of a Physical Model into a Conceptual Model for a NoSQL Database

NoSQL systems have proven their efficiency in handling Big Data. Most of these systems are schema-less, which means that the database does not have a fixed data structure. This property offers undeniable flexibility, allowing the user to add new data without making any changes to the data model. However, the lack of an explicit data model makes it difficult to express queries on the database. Therefore, users (developers and decision-makers) still need the database data model to know how data are stored and related, and then to write their queries. In previous works, we proposed a process to extract the physical model of a document-oriented NoSQL database. In this paper, we aim to extend this work to achieve reverse engineering of NoSQL databases in order to provide an element of semantic knowledge close to human understanding. The reverse engineering process is ensured by a set of transformation algorithms. We provide experiments of our approach using a case study taken from the health care field. We also propose a validation of our solution in a real context; the results of this validation show that the generated conceptual model provides good assistance to users in expressing their queries on the database while saving a lot of time.

Fatma Abdelhedi, Amal Ait Brahim, Rabah Tighilt Ferhat, Gilles Zurfluh
Composition of Parent–Child Cyberattack Models

In today’s world, every system developer and administrator should be familiar with cyberattacks and possible threats to their organization systems. Petri nets have been used to model and simulate cyberattacks allowing for additional knowledge on the planning stages of defending a system. Petri nets have been used since the 1960s and there exist several extensions and variations of how they are designed; in particular, Petri nets with Players, Strategies, and Cost have been recently proposed to model individual cyberattacks on target systems. A formalism on composing these models has also been introduced as long as the attacks are performed either in a sequential order or parallel order. However, cyberattacks are also documented as having a parent–child relationship. The model composition described in this study provides a formalism that addresses cyberattacks that have this type of relationship and the process in which they should be composed through the use of inheritance concepts from object-oriented programming. An example is provided by composing a Sniffing attack (parent) with a Sniffing Application Code attack (child).

Katia P. Maxwell, Mikel D. Petty, C. Daniel Colvett, Tymaine S. Whitaker, Walter A. Cantrell
Tree-Based Fixed Data Transmission for Healthcare Sensor Networks

The ability to obtain health-related information at any time through healthcare sensor network technology, instead of taking measures only after becoming ill, can greatly improve an individual's life. The processing of healthcare sensor data requires effective studies on inter-node transmission in the sensor network and on collective data processing. This paper introduces asynchronously operating data transmission with a fixed number of transmission data (fixed data transmission) on trees and evaluates the execution times of fixed data transmission and level data transmission. Tree-based fixed data transmission can continue transmission operations with an average number of data at either a shallow-level edge or a deep-level edge. Level data transmission, on the other hand, begins data transmission from edges near the leaves and transmits a large number of integrated data at a level near the root. The execution time of fixed data transmission on a complete binary tree with a maximum of two transmission data is equivalent to or smaller than that of level data transmission, and as the number of nodes increases, fixed data transmission approaches a value 1.5 times faster than level data transmission.

Susumu Shibusawa, Toshiya Watanabe
Survey on Recent Active Learning Methods for Deep Learning

The motivation of active learning is that, by providing a limited number of labeled training samples, a machine learning algorithm can achieve higher accuracy. The provided training samples are selected from a large or streaming dataset. The selection procedure often incorporates some measure of the informativeness of samples, and this measure is defined based on the machine learning model itself. The data used in active learning are usually unlabeled; hence, the selected samples have to be labeled by an oracle (e.g., a human or a machine annotator). This is valuable when labeling data is time-consuming or expensive. In this paper, active learning is first introduced, a general overview is given, and several query strategy frameworks are reviewed. Several recent papers on the topic of active learning and deep learning are studied, analyzed, and categorized based on their query strategies and applications. In particular, an overview of active learning and recent deep learning techniques is provided.
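
As a small, self-contained illustration of one query strategy such a survey covers, the sketch below runs least-confidence uncertainty sampling with scikit-learn on synthetic data; the dataset, model, and query batch size are placeholders.

```python
# Least-confidence uncertainty sampling: train on a small labeled pool, then
# query the unlabeled samples the model is least sure about (the oracle's
# labels are simulated here by the known ground truth).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = np.arange(20)                       # tiny initial labeled pool
unlabeled = np.arange(20, len(X))

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)     # least-confidence score
    query = unlabeled[np.argsort(uncertainty)[-10:]]    # 10 most uncertain samples
    labeled = np.concatenate([labeled, query])          # "oracle" labels them
    unlabeled = np.setdiff1d(unlabeled, query)
    print(f"round {round_}: {len(labeled)} labeled samples")
```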

Azar Alizadeh, Pooya Tavallali, Mohammad R. Khosravi, Mukesh Singhal
Cloud-Edge Centric Service Provisioning in Smart City Using Internet of Things

Three highly discussed and researched computing technologies in recent times are cloud computing, edge computing, and the Internet of Things (IoT). In cloud computing, service seekers request computing resources from cloud servers connected over the Internet. In edge computing, the edge devices are placed between the cloud server and service seekers resulting in faster access to the computing resources and hence reducing the computational time and cost. Internet of Things is a technology where several devices are connected and communicate with each other over the Internet. In this paper, we try to integrate these three technologies and propose a cloud-edge centric Internet of Things architecture for service provisioning in a smart city. The integration of these technologies improves the overall performance of the smart city system by efficiently utilizing the computing resources resulting in reduced makespan, response time, and implementation cost, increased throughput, and better security of data.

Manoj Kumar Patra, Sampa Sahoo, Bibhudatta Sahoo, Ashok Kumar Turuk
Challenges for Swarm of UAV-Based Intelligence

Swarms of UAVs/drones are efficient resources for swarm intelligence, especially for monitor/detect/react mechanisms. However, an increasing number of nodes in the system inflates the complexity of the swarm behaviour, due to computation, communication, and control limitations for monitoring and security purposes. In order to maintain the high performance of such a system, mission-, safety-, and operation-critical applications must be verified via the elaboration of critical checkpoints. Making the system resilient requires real-time updates in the different system layers reflected in this paper; therefore, scalability (from the networking viewpoint) and memory speed limitations (from the processing viewpoint), as well as security controls, are challenging. In the context of swarms of UAVs, this can be accomplished via big data technologies and ledger-based chained structures, which is one part of the contribution of this paper. In order to assure resilience against manipulation threats, the other parts of the contribution concern an end-to-end trust mechanism (an integrated view of the three pillars: networking, processing/optimization, and security) and swarm controller methods guaranteeing safety, which aim at enabling the trusted scalability of swarm systems.

Muhammed Akif Ağca, Peiman Alipour Sarvari, Sébastien Faye, Djamel Khadraoui
Contrived and Remediated GPU Thread Divergence Using a Flattening Technique

General-purpose GPU applications have become mainstream. However, to this day, code with major thread divergence can ruin GPU performance. In this work, we demonstrate examples of such code. We also propose a solution in the form of a flattening technique which, although it yields poor CPU performance, can be used to revive a GPU computation ruined by extreme thread divergence. We show the effect of the data input on divergence and performance and compare this to the flattened approach, called algorithm flattening (AF). AF trades off best-case performance for deterministic performance and works well in the average case, where extreme divergence exists.
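
The chapter's algorithm flattening technique is not reproduced here; as a loose illustration of the underlying idea of removing a data-dependent branch, the NumPy sketch below replaces a per-element branch with predication so every "lane" follows the same instruction path.

```python
# Branch removal by predication, sketched in NumPy as a stand-in for GPU code:
# both sides of the branch are evaluated and blended by a mask, so all array
# lanes follow a uniform instruction path. Illustrative only, not the paper's AF.
import numpy as np

def branched(x):
    out = np.empty_like(x)
    for i, v in enumerate(x):            # divergent: each element picks a path
        out[i] = np.sin(v) if v > 0 else np.cos(v)
    return out

def flattened(x):
    mask = x > 0
    return mask * np.sin(x) + (~mask) * np.cos(x)   # uniform instruction path

x = np.random.randn(1000)
print(np.allclose(branched(x), flattened(x)))
```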

Lucas Vespa, Genevieve Peters
Prototype of MANET Network with Ring Topology for Mobile Devices

This work presents the design of a MANET network for heterogeneous mobile devices, based on a ring topology. The network is implemented as a distributed system that handles errors such as crashes and network recovery. Various mobile applications, such as collaborative or emergency applications, require the support of a mobile ad hoc network when the Internet infrastructure is not available.

Ramses Fuentes Pérez, Erika Hernández Rubio, Diego D. Flores Nogueira, Amilcar Meneses Viveros

International Workshop

Frontmatter
New State-of-the-Art Results on ESA’s Messenger Space Mission Benchmark

This contribution presents new state-of-the-art results for ESA's Messenger space mission benchmark, which is arguably one of the most difficult benchmarks available. The European Space Agency (ESA) created a continuous mid-scale black-box optimization benchmark that resembles an accurate model of the trajectory of NASA's Messenger interplanetary space probe, launched in 2004. By applying an evolutionary optimization algorithm (MXHPC/MIDACO) that relies on massive parallelization, it is demonstrated that it is possible to robustly solve this benchmark to a near globally optimal solution within 1 hour on a computer cluster with 1000 CPU cores. This is a significant improvement over the previously published state-of-the-art results from 2017, where it was demonstrated for the first time that the Messenger benchmark could be solved in a fully automatic way and where it took about 12 hours to achieve a near-optimal solution. The results presented here fortify the effectiveness of massively parallelized evolutionary computing for complex real-world problems which have previously been considered intractable.

Martin Schlueter, Mohamed Wahib, Masaharu Munetomo
Crawling Low Appearance Frequency Character Images for Early-Modern Japanese Printed Character Recognition

Offline Japanese character recognition for handwritten and printed characters matured in the past century. We have investigated a third type of Japanese character recognition, for early-modern Japanese printed books produced by typographical printing. The major problem in early-modern Japanese printed character recognition is the lack of learning data, because characters with low appearance frequency are very difficult to collect. In this paper, we propose a method for collecting characters with very low appearance frequency from early-modern Japanese printed books using a crawling technique. The crawler is to be implemented in our newly redeveloped web application for automatically collecting unknown characters from early-modern Japanese printed books. The implementation details and an overview of its operation are presented in this paper.

Nanami Fujisaki, Yu Ishikawa, Masami Takata, Kazuki Joe
Application of the Orthogonal QD Algorithm with Shift to Singular Value Decomposition for Large Sparse Matrices

In the semiconductor manufacturing process, lithography simulation modeling is known to be an ill-posed problem. A normal solution of the problem is generally insignificant due to measurement constraints. To alleviate this difficulty, we introduced a regularization method using a preconditioning technique that consists of scaling and uniformization based on prior information. By regularizing the solution toward prior knowledge, an accurate model can be achieved, because the solution obtained by truncated singular value decomposition from a few larger singular values becomes a reasonable solution based on the physically appropriate prior knowledge. The augmented implicitly restarted Lanczos bidiagonalization (AIRLB) algorithm is suitable for truncated singular value decomposition from a few larger singular values. Thus, the AIRLB algorithm is important for obtaining the solution in lithography simulation modeling. In this paper, we propose techniques for improving the AIRLB algorithm for the truncated singular value decomposition of large matrices. Specifically, we implement the improvement of the AIRLB algorithm by Ishida et al. Furthermore, instead of using the QR algorithm, we use the orthogonal-qd-with-shift algorithm for the singular value decomposition of the inner small matrix. Several numerical experiments demonstrate that, compared with AIRLB using the original QR algorithm, the proposed improvements provide a highly accurate truncated singular value decomposition. For precise discussion, both large-scale sparse matrices and large-scale dense matrices are included in the experiments.

Hiroki Tanaka, Taiki Kimura, Tetsuaki Matsunawa, Shoji Mimotogi, Masami Takata, Kinji Kimura, Yoshimasa Nakamura
On an Implementation of the One-Sided Jacobi Method with High Accuracy

The one-sided Jacobi method for singular value decomposition can compute all singular values and singular vectors with high accuracy. Additionally, its computation cost is insignificant for comparatively small matrices. However, the conventional implementation in the Linear Algebra PACKage (LAPACK) may not be able to compute singular vectors with sufficient orthogonality. To avoid this problem, we propose a novel implementation of the one-sided Jacobi method. In the proposed implementation, a high-accuracy Givens rotation and fused multiply-accumulate operations are adopted.
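
For orientation, a plain NumPy sketch of the textbook one-sided Jacobi SVD iteration is given below; it does not include the high-accuracy Givens rotation or fused multiply-accumulate refinements that are the chapter's contribution.

```python
# Plain one-sided Jacobi SVD: orthogonalize pairs of columns of A with Givens
# rotations until convergence. Singular values are the final column norms.
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    U = A.astype(float).copy()
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                app = U[:, p] @ U[:, p]
                aqq = U[:, q] @ U[:, q]
                apq = U[:, p] @ U[:, q]
                if abs(apq) <= tol * np.sqrt(app * aqq):
                    continue                     # columns p, q already orthogonal
                converged = False
                zeta = (aqq - app) / (2.0 * apq)
                t = (1.0 if zeta >= 0 else -1.0) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                rot = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ rot   # rotate the column pair
                V[:, [p, q]] = V[:, [p, q]] @ rot   # accumulate right vectors
        if converged:
            break
    sigma = np.linalg.norm(U, axis=0)
    return U / sigma, sigma, V                   # A ~= (U/sigma) @ diag(sigma) @ V.T

A = np.random.rand(6, 4)
U, s, V = one_sided_jacobi_svd(A)
print(np.allclose(A, U @ np.diag(s) @ V.T))
```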

Masami Takata, Sho Araki, Kinji Kimura, Yoshimasa Nakamura
Improvement of Island Genetic Algorithm Using Multiple Fitness Functions

In this paper, we propose an island genetic algorithm (GA) that promotes a unique evolution. In a conventional island GA [3], all objective functions are combined into a single fitness function; hence, offspring generations are generated using the same fitness function. In the natural world, each population should evolve in a manner that suits its environment, and owing to the variety of environments on Earth, organisms have diversified. Therefore, we propose an improved island GA with different fitness functions to create a distinctive evolution.
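
A toy sketch of the idea of giving each island its own fitness function, with occasional migration between islands; the objectives, weightings, and GA operators below are illustrative choices, not the chapter's algorithm.

```python
# Toy island GA: each island evolves a real-valued gene under its own fitness
# function (two different weightings of two objectives), with periodic migration.
import random

def objectives(x):
    return x * x, (x - 2.0) ** 2                      # two toy objectives

FITNESS = [lambda x: -(0.8 * objectives(x)[0] + 0.2 * objectives(x)[1]),
           lambda x: -(0.2 * objectives(x)[0] + 0.8 * objectives(x)[1])]

def evolve(pop, fitness, rng):
    pop = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]   # selection
    children = [(a + b) / 2 + rng.gauss(0, 0.1)                     # crossover + mutation
                for a, b in zip(pop, reversed(pop))]
    return pop + children

rng = random.Random(0)
islands = [[rng.uniform(-5, 5) for _ in range(20)] for _ in FITNESS]
for gen in range(50):
    islands = [evolve(pop, fit, rng) for pop, fit in zip(islands, FITNESS)]
    if gen % 10 == 9:                                 # migration: exchange best individuals
        best0 = max(islands[0], key=FITNESS[0])
        best1 = max(islands[1], key=FITNESS[1])
        islands[0].append(best1); islands[1].append(best0)

print([round(max(pop, key=fit), 3) for pop, fit in zip(islands, FITNESS)])
```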

Shigeka Nakajima, Masami Takata
High-Performance Cloud Computing for Exhaustive Protein–Protein Docking

Public cloud computing environments, such as Amazon Web Services, Microsoft Azure, and the Google Cloud Platform, have achieved remarkable improvements in computational performance in recent years and are also expected to be capable of massively parallel computing. Because the cloud lets users employ thousands of CPU cores and GPU accelerators casually, and because various kinds of software can be used very easily through cloud images, the cloud is beginning to be used in the field of bioinformatics. In this study, we ported our protein–protein interaction prediction (protein–protein docking) software, MEGADOCK, to Microsoft Azure as an example of an HPC cloud environment. A cloud parallel computing environment with up to 1600 CPU cores and 960 GPUs was constructed using four CPU instance types and two GPU instance types, and the parallel computing performance was evaluated. Our MEGADOCK-on-Azure system showed a strong scaling value of 0.93 for the CPU instances when the number of H16 instances was increased from 50 to 100, and a strong scaling value of 0.89 for the GPU instances when the number of NC24 instances was increased from 5 to 20. Moreover, the comparison of usage fees and total computation times showed that using GPU instances reduced both the MEGADOCK computation time and the cloud usage fee required for the computation. The developed environment deployed on the cloud is highly portable, making it suitable for applications in which an on-demand, large-scale HPC environment is desirable.
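A common way to compute strong scaling values like those quoted above is relative speedup divided by the resource ratio. The helper below shows that convention; the chapter's exact definition may differ, and the timings used in the example are made up purely to illustrate the arithmetic.

```python
# Strong-scaling efficiency between two runs of the same fixed-size problem:
#   efficiency = (T_small / T_large) / (N_large / N_small)
# Illustrative definition only; the chapter's exact metric may differ.
def strong_scaling_efficiency(n_small, t_small, n_large, t_large):
    speedup = t_small / t_large
    return speedup / (n_large / n_small)

# Hypothetical example: doubling instances from 50 to 100 cuts runtime
# from 1000 s to 538 s, giving an efficiency of about 0.93.
print(strong_scaling_efficiency(50, 1000.0, 100, 538.0))
```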

Masahito Ohue, Kento Aoyama, Yutaka Akiyama
HoloMol: Protein and Ligand Visualization System for Drug Discovery with Augmented Reality

To develop effective drugs against various diseases, it is vital to understand the three-dimensional (3D) structures of the proteins that serve as drug targets and of the drug candidates. In the field of drug discovery, molecular structure display systems running on computer displays are used; in these systems, the 3D structures of the proteins and drug candidates are projected and visualized in two dimensions. In this study, we construct a molecular structure visualization system that visualizes the 3D structures of proteins and drug candidates essential for drug discovery with augmented reality (AR) using HoloLens.

Atsushi Koyama, Shingo Kawata, Wataru Sakamoto, Nobuaki Yasuo, Masakazu Sekijima
Leave-One-Element-Out Cross-Validation for Band Gap Prediction of Halide Double Perovskites

Perovskite solar cells have attracted much attention as a new type of solar cell that can be smaller and thinner than conventional silicon solar cells. However, lead-free perovskite solar cells need to be developed, because most current cells contain lead, which is harmful to the human body and the environment. In addition, the field of materials informatics, which combines materials development with information technology and computational science, has become active in recent years, and research that incorporates machine learning methods into materials development has become common as a way to develop better materials more quickly. In this paper, we aim to predict the band gap, one of the properties of unknown lead-free perovskite materials, using machine learning methods. We focus on a single element and construct a prediction model to evaluate the case where that element is not included in the training data.
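Leave-one-element-out cross-validation can be expressed as group-wise cross-validation with the element as the group label. The sketch below uses scikit-learn's LeaveOneGroupOut on synthetic data; the descriptors, regressor, and element assignment are placeholders, not the chapter's features or model.

```python
# Leave-one-element-out cross-validation sketch: every compound containing the
# held-out element goes to the test fold. Data and features are synthetic.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 8))                       # toy compositional descriptors
y = 1.5 * X[:, 0] + rng.normal(0, 0.1, n)             # toy band-gap-like target
elements = rng.choice(["Ag", "Bi", "In", "Sb", "Tl"], size=n)   # element label per compound

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=elements):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    held_out = elements[test_idx][0]
    mae = np.mean(np.abs(model.predict(X[test_idx]) - y[test_idx]))
    print(f"held-out element {held_out}: MAE = {mae:.3f}")
```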

Hiroki Igarashi, Nobuaki Yasuo, Masakazu Sekijima
Interpretation of ResNet by Visualization of the Preferred Stimulus in Receptive Fields

One of the methods used in image recognition is the deep convolutional neural network (DCNN). A DCNN is a model in which the expressive power of features is greatly improved by deepening the hidden layers of a CNN. The architecture of CNNs is based on models of the mammalian visual cortex. The residual network (ResNet) is a model with skip connections. ResNet is an advanced model in terms of its training method, but it has not been interpreted from a biological viewpoint. In this research, we investigate the receptive fields of a ResNet on the ImageNet classification task. We find that ResNet has orientation-selective neurons and double-opponent color neurons. In addition, we suggest that some inactive neurons in the first layer of ResNet affect the classification task.
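Orientation-selective and color-opponent structure can already be glimpsed by plotting a pretrained ResNet's first-layer convolution kernels. The sketch below does only that, assuming a recent torchvision; the chapter's receptive-field and preferred-stimulus analysis goes well beyond this simple weight inspection.

```python
# Inspect first-layer ResNet filters: oriented and color-opponent kernels are
# usually visible by eye. Illustrative only; the chapter's receptive-field
# analysis is far more involved than plotting these weights.
import torch
import torchvision.models as models
import matplotlib.pyplot as plt

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
w = model.conv1.weight.detach()              # shape: (64, 3, 7, 7)

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for k, ax in enumerate(axes.flat):
    f = w[k]
    f = (f - f.min()) / (f.max() - f.min())  # rescale each filter to [0, 1] for display
    ax.imshow(f.permute(1, 2, 0).numpy())    # channels-last layout for imshow
    ax.axis("off")
plt.tight_layout()
plt.savefig("resnet50_conv1_filters.png")
```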

Genta Kobayashi, Hayaru Shouno
Bayesian Sparse Covariance Structure Analysis for Correlated Count Data

In this paper, we propose a Bayesian graphical lasso for correlated count data and apply it to spatial crime data. In the proposed model, we assume a Gaussian graphical model for the latent variables that govern the potential risks of crimes. To evaluate the proposed model, we determine the optimal hyperparameters that best represent the samples. We apply the proposed model to estimate the sparse inverse covariance of the latent variables and evaluate the partial correlation coefficients. Finally, we illustrate the results on crime-spot data and examine the estimated latent variables and the partial correlation coefficients of the sparse inverse covariance.
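As a non-Bayesian stand-in for the model described here, the sketch below estimates a sparse precision (inverse covariance) matrix with scikit-learn's graphical lasso and converts it to partial correlation coefficients via the standard identity rho_ij = -P_ij / sqrt(P_ii * P_jj). The data are synthetic Gaussian samples, not latent crime-risk variables.

```python
# Sparse precision estimation and partial correlations with a (non-Bayesian)
# graphical lasso, as a stand-in for the chapter's Bayesian latent-variable model.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(4),
                            [[1.0, 0.6, 0.0, 0.0],
                             [0.6, 1.0, 0.3, 0.0],
                             [0.0, 0.3, 1.0, 0.0],
                             [0.0, 0.0, 0.0, 1.0]], size=500)

gl = GraphicalLasso(alpha=0.05).fit(X)
P = gl.precision_

# Partial correlation between i and j given all other variables:
#   rho_ij = -P_ij / sqrt(P_ii * P_jj)
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
print(np.round(partial_corr, 2))
```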

Sho Ichigozaki, Takahiro Kawashima, Hayaru Shouno
Gaze Analysis of Modification Policy in Debugging an Embedded System

In embedded system development, debugging is difficult for novices because developers must consider the state of both the hardware and the software. Therefore, this study analyzed the gaze transitions of experts and novices while debugging an embedded system. The gaze data reveal the points novices find difficult and the tacit techniques experts use in the debugging process. The analysis segmented the time-series gaze-object data using GP-HSMM, an unsupervised, highly accurate method for dividing time-series data. The results showed that experts tend to debug in three phases: circuit debugging in the early stage, source code debugging in the middle stage, and confirmation of both the circuit and the source code in the final stage. Based on the temporal trends in the gazed-at objects, we propose teaching content on modification policy for novices in order to increase debugging efficiency.

Takeru Baba, Erina Makihara, Hirotaka Yoneda, Kiyoshi Kiyokawa, Keiko Ono

Simulation and Modeling

Frontmatter
Modern Control Methods of Time-Delay Control Systems

It is shown that the Smith predictor is a subclass of the Youla-parameterization-based generic two-degree-of-freedom controllers. After comparing the algorithms, application of the new approach is suggested.

R. Bars, Cs. Bányász, L. Keviczky
An Interactive Software to Learn Pathophysiology with 3D Virtual Models

Educators have access to a repository of resources to guide pedagogical instruction within the graduate nursing curriculum. Multiple modalities are utilized within graduate education, including face-to-face, online, and alternative platforms such as web pages, videos, or mobile applications. Supplemental resources include e-learning applications, mobile learning, and game-based learning using internet-accessible devices, smart displays, and even virtual reality headsets. The use of interactive and innovative methods has shown positive results in student engagement and cognitive learning. However, the implementation of 3D visualization strategies has been limited within healthcare education, specifically graduate advanced practice nursing. It is vital in graduate nursing education to provide students with advanced knowledge of disease processes and critical reasoning skills. While some programs can display text and images, as well as present animated content, they lack sufficient interactive features to enhance learning. Therefore, an efficient and effective modality of information delivery is required to achieve this goal. In this paper, we describe the development of an interactive 3D visualization software tool that provides an innovative approach for education within graduate nursing. The visualization software provides a framework to enhance the teaching and learning process in graduate nursing pathophysiology utilizing 3D virtual models.

Abel A. Reyes, Youxin Luo, Parashar Dhakal, Julia Rogers, Manisa Baker, Xiaoli Yang
A Simulation-Optimization Technique for Service Level Analysis in Conjunction with Reorder Point Estimation and Lead-Time Consideration: A Case Study in Sea Port

This study offers a step-by-step practical procedure, from the analysis of the current status of the spare parts inventory system to advanced service level analysis by means of a simulation-optimization technique, for a real-world case study of a seaport. The remarkable variety and immense diversity of spare parts, on one hand, and the extreme complexity not only of consumption patterns but also of spare parts supply in an international port with technically advanced port operator machinery, on the other, have convinced the managers to address this issue within a structured framework. The huge amount of available data requires cleaning and classification so that it can be processed properly to derive reorder point (ROP) estimates, reorder quantity (ROQ) estimates, and the associated service level analysis. Finally, from 247,000 items used over 9 years, 1416 inventory items are selected through ABC analysis integrated with the analytic hierarchy process (AHP), identifying the main items that need to be kept under strict inventory control. The ROPs and the pertinent quantities are simulated with Arena software for all the main items, each of which took approximately 30 minutes of run time on a personal computer to determine near-optimal estimates.
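For readers unfamiliar with the terminology, the classical continuous-review reorder-point formula with a normal-demand safety stock is sketched below as a textbook reference point only; the chapter estimates ROP and ROQ by Arena simulation-optimization rather than by this closed form, and the numbers in the example are made up.

```python
# Classical (Q, R) reorder-point formula with normally distributed demand:
#   ROP = d * L + z * sigma_d * sqrt(L)
# Textbook reference only; the chapter uses simulation-optimization instead.
from math import sqrt
from statistics import NormalDist

def reorder_point(mean_demand, std_demand, lead_time, service_level):
    z = NormalDist().inv_cdf(service_level)      # safety factor for the target service level
    return mean_demand * lead_time + z * std_demand * sqrt(lead_time)

# Hypothetical spare part: 4 units/week mean demand, std 1.5, 6-week lead time, 95% service level
print(round(reorder_point(4.0, 1.5, 6.0, 0.95), 1))
```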

Mohammad Arani, Saeed Abdolmaleki, Maryam Maleki, Mohsen Momenitabar, Xian Liu
Sustainability, Big Data, and Local Community: A Simulation Case Study of a Growing Higher Education Institution

Higher education institutions are core elements of the sustainable development paradigm because they prepare the future decision-makers of local, national, and international societies. At the same time, many colleges and universities merely declare their support for sustainability and rarely use a quantitative approach to analyze or manage this area. This happens because of the complexity of the sustainable development paradigm, reusability problems of already created tools and models, and methodological difficulties in using the big data that are already available. This paper introduces an approach in which we use simulation as a unified methodological platform to combine sustainability, big data, and local community needs for concrete numerical analysis. Within the simulation case study, we analyze the transportation system as a part of the sustainability of a young, fast-growing higher education institution in the USA.

Anatoly Kurkovsky
Vehicle Test Rig Modeling and Simulation

Vehicle test rigs can be used to evaluate vehicle subsystems for mobility characteristics. These vehicle test rigs can be simulated using multi-body physics tools, such as Chrono. This paper details the method used to develop a user input file format and wrapper methods for the Chrono::Vehicle tire, track, and suspension test rigs, and explains how to build and run these various test rigs.

Sara Boyle
Modelling and Simulation of MEMS Gyroscope with Coventor MEMS+ and MATLAB/Simulink Software

In this paper, the authors present a heterogeneous environment for modeling and simulation created with the Coventor MEMS+ and MATLAB/Simulink software. A major advantage of this solution is the possibility of integration with Cadence software, which in effect yields a comprehensive platform for modeling, simulating, and designing MEMS structures with a ROIC (Read-Out Integrated Circuit) for subsequent fabrication. This environment was created for the needs of a multidisciplinary project (spanning medicine, electronics, and computer science) carried out by three scientific institutions and two companies.

Jacek Nazdrowicz, Adam Stawinski, Andrzej Napieralski
Ground Vehicle Suspension Optimization Using Surrogate Modeling

Using surrogate models in place of more computationally expensive simulations is a common practice in several contexts. In this paper, we present an optimization task of finding ideal spring coefficient values for ground vehicle suspensions with respect to a particular metric of driver safety and comfort, develop a set of surrogate models based on sampling full system simulations that calculate this metric, and present and compare the results of using these surrogate models to perform the optimization. We show that the medium-fidelity model, as defined for this study, is of sufficient fidelity for the optimization and that additional fidelity offers little benefit, but also that the underlying objective function is noisy enough to limit the usefulness of the surrogate model approach for this optimization.
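The general surrogate workflow can be sketched in a few lines: sample the expensive simulation at a handful of points, fit a cheap model, and optimize the cheap model instead. The objective, design variable, and kernel below are invented for illustration and are not the authors' vehicle model or their safety/comfort metric.

```python
# Generic surrogate-based optimization sketch: sample an "expensive" simulation,
# fit a Gaussian-process surrogate, and optimize the cheap surrogate instead.
# The objective below is a made-up stand-in for the full vehicle simulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from scipy.optimize import minimize_scalar

def expensive_simulation(spring_k):
    # placeholder for a multi-body simulation returning a discomfort metric
    return (spring_k - 42.0) ** 2 + 5.0 * np.sin(spring_k) + 100.0

# 1) sample the expensive model at a few spring-stiffness values
k_samples = np.linspace(20.0, 80.0, 12).reshape(-1, 1)
y_samples = np.array([expensive_simulation(k[0]) for k in k_samples])

# 2) fit the surrogate
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=10.0),
                              normalize_y=True).fit(k_samples, y_samples)

# 3) optimize the surrogate instead of the simulation
surrogate = lambda k: gp.predict(np.array([[k]]))[0]
res = minimize_scalar(surrogate, bounds=(20.0, 80.0), method="bounded")
print(f"surrogate optimum near spring stiffness {res.x:.1f}")
```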

Jeremy Mange

Modeling, Visualization, Computational Science, and Applications

Frontmatter
Enhanced Freehand Interaction by Combining Vision and EMG-Based Systems in Mixed-Reality Environments

This paper studies the capabilities, limitations, and potential of combining a vision-based system with EMG sensors for freehand interaction in mixed-reality environments. We present the design and implementation of our system using the HoloLens and Myo armband; conduct a preliminary user study with 15 participants to assess the usability of our model and discuss the advantages, potential, and limitations of this approach; and discuss our findings and their implications for the design of user interfaces with a similar hardware setup. We show that the flexibility of interaction in our proposal has positive effects on user performance for the completion of a complex user task, although measured user performance for the individual gestures was worse on average than the performance obtained for the gestures supported by the standard HoloToolkit. One can conclude that the presented interaction paradigm has great potential for future use in mixed reality, but it still has some limitations regarding robustness and ergonomics that must be addressed for better user acceptance and broader public adoption.

Carol Naranjo-Valero, Sriram Srinivasa, Achim Ebert, Bernd Hamann
Parameterizations of Closed-Loop Control Systems would be perfectly fine

The optimization of simple two-degree-of-freedom control systems is very easy with new parameterizations such as the Youla and Keviczky-Bányász parameterizations. The comparison of their model-based versions is important in practical applications.

Cs. Bányász, L. Keviczky, R. Bars
A Virtual Serious Game for Nursing Education

Procedural skill competency is a crucial element in nursing education. However, there are several barriers to skill competency and proficiency, including limited simulation faculty, lab time, and resources. Serious gaming has proven to be an effective approach to enhance knowledge attainment, promote problem-based active learning, and encourage critical thinking in many fields, including nursing. Therefore, we propose a virtual serious game for nursing education featuring a realistic environment, considerable interaction and animation, a well-designed, user-friendly graphical interface, and an innovative evaluation system. This game is designed as complementary learning material to be utilized along with traditional pedagogical methods. It provides a supplemental learning methodology for undergraduate nursing students, enabling them to practice the skill in a safe and innovative environment. Incorporating a serious game into undergraduate nursing education can circumvent problems of scheduling; individualized instruction; the physical requirements of facilities, equipment, and supplies; and geographic constraints.

Youxin Luo, Abel A. Reyes, Parashar Dhakal, Manisa Baker, Julia Rogers, Xiaoli Yang
Modeling Digital Business Strategy During Crisis

COVID-19 is a major health, economic, and social crisis of the modern age. Even before the COVID-19 pandemic, digitization had changed consumer behavior and habits, regulations, supply-side factors, demand-side factors, and the costs of information structure and coordination. We have been experiencing shifts from physical interactions to digital interactions and a transition from a physical, predictable, and slow world into a digital, virtual, fast, and agile world. COVID-19 will likely accelerate digitization but will also force corporations to refine, and possibly redefine, their digital business strategies. Corporations have to address many questions about how to deal with the crisis in the digital age and refine their digital business strategy while already undergoing transformation. The main question is how corporations can navigate this crisis when traditional-economy and even digital-economy assumptions and approaches do not necessarily apply. In this paper, we study how corporations can characterize digital business strategy during a crisis and devise a framework for modeling and evaluating various strategic options. Because of the many complex dynamics and uncertainties involved, we argue for a layered intelligence framework that supports qualitative analysis for modeling digital business strategy during a crisis.

Sakir Yucel
Dealing Bridge Hands: A Study in Random Data Generation

In this study, we examine the problem of dealing bridge hands and producing output in an elegant, usable form using the data generation language (DGL). Although this is a simple problem, the solution leads to the inclusion of several new features in DGL. These features can be useful for many other problems as well. Several techniques for dealing bridge hands are discussed, along with the new DGL features that were used in each technique. Examples are given of actual hands dealt using DGL grammars.
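For readers unfamiliar with the task itself, dealing bridge hands amounts to partitioning a shuffled 52-card deck into four 13-card hands. The plain-Python sketch below illustrates only that underlying task; the chapter does this with DGL grammars, which look nothing like this code.

```python
# Plain-Python illustration of the underlying task: deal a shuffled 52-card deck
# into four 13-card bridge hands (not DGL; shown only to make the problem concrete).
import random

RANKS = "AKQJT98765432"
SUITS = ["S", "H", "D", "C"]

def deal():
    deck = [r + s for s in SUITS for r in RANKS]
    random.shuffle(deck)
    hands = {}
    for i, player in enumerate(["North", "East", "South", "West"]):
        hand = deck[13 * i: 13 * (i + 1)]
        # sort each hand by suit, then by rank, for readable output
        hands[player] = sorted(hand, key=lambda c: (SUITS.index(c[1]), RANKS.index(c[0])))
    return hands

random.seed(7)
for player, cards in deal().items():
    print(f"{player:>5}: {' '.join(cards)}")
```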

Peter M. Maurer
An Empirical Study of the Effect of Reducing Matching Frequency in High-Level Architecture Data Distribution Management

The High-Level Architecture (HLA) is an interoperability protocol standard and implementation architecture for distributed simulation. Using HLA, concurrently executing simulation models collaboratively simulate a scenario by exchanging messages over a network. In HLA simulations, large volumes of messages are possible, potentially limiting scalability. The HLA Data Distribution Management services are designed to reduce message volume. When using those services, the HLA software determines how to route messages by repeatedly solving a computationally expensive computational geometry problem known as “matching”: given a set of axis-parallel hyper-rectangles in a multidimensional coordinate space, find all intersecting pairs of rectangles. Much effort has been devoted to performing matching as efficiently as possible. This study investigates a different approach, namely, performing matching as seldom as possible, without compromising the simulation results. A constructive entity-level combat model, similar to production military semi-automated forces systems, was developed and verified. It was then used to experimentally assess the effect of performing matching at various frequencies. The primary metric of effect was the mean time at which battlefield entities first sighted each other. Experimental results regarding the delay of first sighting times suggest that matching frequency, and thus its computational expense, can be substantially reduced without negative effects.
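To make the "matching" problem concrete, the sketch below shows the brute-force pairwise intersection test over axis-parallel hyper-rectangles, which is the baseline that DDM matching algorithms try to beat. It is purely illustrative; the chapter's contribution concerns how often matching must be performed, not this algorithm.

```python
# The DDM "matching" problem: report all intersecting pairs of axis-parallel
# hyper-rectangles. Brute-force O(n^2) baseline, shown only to define the problem.
def overlaps(r1, r2):
    # r = list of (low, high) intervals, one per dimension; rectangles intersect
    # iff their intervals overlap in every dimension
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(r1, r2))

def match(update_regions, subscription_regions):
    pairs = []
    for i, u in enumerate(update_regions):
        for j, s in enumerate(subscription_regions):
            if overlaps(u, s):
                pairs.append((i, j))
    return pairs

updates = [[(0, 5), (0, 5)], [(10, 12), (10, 12)]]
subscriptions = [[(4, 8), (1, 3)], [(6, 9), (6, 9)]]
print(match(updates, subscriptions))    # [(0, 0)] -- only the first pair intersects
```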

Mikel D. Petty
Research on Repair Strategy of Heterogeneous Combat Network

The purpose of a restoration strategy is to repair the combat system promptly after a node is attacked, so as to reduce the impact of functional failure. The combat system is modeled as a heterogeneous combat network, and its combat capability is measured by functional reliability based on the functional chain structure. This chapter proposes an edge-adding strategy whose goal is to maximize functional reliability, with the set of optional edges and the connection costs as constraints. The artificial colony algorithm was used to solve the repair model, and simulation experiments were carried out under four node attack strategies. The results verify the effectiveness of the repair model and show that the proposed method is superior to other repair algorithms.

Yanyan Chen, Yonggang Li, Shangwei Luo, Zhizhong Zhang
The Influence of Decorations and Word Appearances on the Relative Size Judgment in Viewers of Tag Clouds

A tag cloud is a representation of the word content of a source document in which the importance of the words is conveyed by visual characteristics such as color and size. Tag clouds can be used for several purposes, such as providing a high-level understanding of a document. Although previous research has indicated that the relative size of tags is a strong factor in communicating the importance of the words in the underlying text, there are still many unanswered questions, for example: How do viewers perceive the relative size of the words in a tag cloud? How is the judgment of relative size influenced by other characteristics of the words in a tag cloud? Do viewers make their judgments based on the area, the height, or the length of the words? In this chapter, we investigate viewers' estimation of relative tag word sizes while varying the size, the letter types, and the surrounding text box decorations of the target word pairs. The results indicate a range of relative sizes where relative size judgments may be approximately correct, but also a large region of relative sizes where the judgments are increasingly underestimated as the true size ratio increases. This underestimation bias was only modestly influenced by appearance characteristics. The results have implications for tag cloud design and for reliance on relative size judgments of words as a visualization technique.

Khaldoon Dhou, Robert Kosara, Mirsad Hadzikadic, Mark Faust
Automation of an Off-Grid Vertical Farming System to Optimize Power Consumption

The world’s resources are finite. As the population increases, these resources must be better utilized to maintain or improve living standards. One area ripe for improvement is agricultural farming. Farms use most of the available freshwater and vast swaths of land, and fertilizer runoff pollutes nearby rivers and reservoirs. One solution is vertical farms. Vertical farms can be stacked on top of each other, reducing land, water, and fertilizer usage significantly. However, this comes at the trade-off of increased energy usage. Renewable energy sources can supply the power, reducing the compromise, but they are unreliable, and their power production varies hourly and by season. An automated system can adapt to the changing power availability without harming plants, because plants acclimate to a wide variety of growing conditions in nature. This research focused on automating a vertical farm system to adjust to weather variations without experiencing blackouts while maximizing power usage and improving growing conditions for the plants in the system.

Otto Randolph, Bahram Asiabanpour
Workflow for Investigating Thermodynamic, Structural, and Energy Properties of Condensed Polymer Systems

Soft matter materials and polymers are widely used in the controlled delivery of drugs. Simulation and modeling provide insight at the atomic scale enabling a level of control unavailable to experiments. We present a workflow protocol for modeling, simulating, and analyzing structural and thermodynamic response properties of poly(lactic-co-glycolic acid) (PLGA), a well-studied and FDA-approved material. We concatenate a battery of molecular dynamics, computational chemistry, highly parallel scripting, and analysis tools for generating properties of bulk polymers in the condensed phase. We provide the workflow leading to the glass transition temperature, enthalpy, density, isobaric heat capacity, thermal expansion coefficient, isothermal compressibility, bulk modulus, sonic velocity, cohesive energy, and solubility parameters. Calculated properties agree very well with experiments, when available. This methodology is currently being extended to a variety of polymer types and environments.
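Several of the listed response properties are commonly obtained from standard NPT fluctuation formulas applied to the trajectory time series. The sketch below shows that post-processing step in isolation; it is illustrative only, the chapter's workflow chains many more tools, and the "trajectory" data here are made-up placeholders rather than PLGA results.

```python
# Standard NPT fluctuation formulas for post-processing an MD trajectory:
#   Cp      = var(H) / (kB * T^2)
#   kappa_T = var(V) / (kB * T * <V>)
#   alpha_P = cov(V, H) / (kB * T^2 * <V>)
import numpy as np

KB = 1.380649e-23   # Boltzmann constant, J/K

def npt_response_properties(enthalpy_J, volume_m3, temperature_K):
    H = np.asarray(enthalpy_J)
    V = np.asarray(volume_m3)
    T = temperature_K
    cp = H.var() / (KB * T ** 2)                                     # isobaric heat capacity, J/K
    kappa_t = V.var() / (KB * T * V.mean())                          # isothermal compressibility, 1/Pa
    alpha_p = np.cov(V, H, ddof=0)[0, 1] / (KB * T ** 2 * V.mean())  # thermal expansion, 1/K
    return cp, kappa_t, alpha_p

rng = np.random.default_rng(0)
H = rng.normal(-1.2e-16, 2.0e-19, 5000)   # placeholder enthalpy series, J per simulation cell
V = rng.normal(8.0e-26, 5.0e-29, 5000)    # placeholder volume series, m^3 per simulation cell
print(npt_response_properties(H, V, 300.0))
```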

James Andrews, Estela Blaisten-Barojas

Grid, Cloud, & Cluster Computing – Methodologies and Applications

Frontmatter
The SURF System for Continuous Data and Applications Placement Across Clouds

In a hybrid cloud environment, as well as in a multi-cloud environment, an enterprise employs a number of local sites (or data centers) and cloud data center(s) that may be geographically distributed. The problem of where to place and replicate data and applications is complicated by multiple dynamically changing conditions. We describe two types of algorithms: data movement (conservative and optimistic, of various kinds) and recovery from various system faults. They may be integrated into various system types, each of which may have its own correctness requirements. The system we provide is charged with creating the illusion that data is stationary. These algorithms are implemented on top of ZooKeeper, a compact distributed database, and were extensively tested over three public clouds.

Oded Shmueli, Itai Shaked
The Abaco Platform: A Performance and Scalability Study on the Jetstream Cloud

Abaco is an open-source, distributed cloud-computing platform based on the Actor Model of Concurrent Computation and Linux containers, funded by the National Science Foundation and hosted at the Texas Advanced Computing Center. Abaco recently implemented an autoscaler feature that allows automatic scaling of an actor’s worker pool based on the length of the actor’s mailbox queue. In this paper, we address several research questions related to the performance of the Abaco platform with manual and autoscaler functionality. Performance and stability are tested by systematically studying the aggregate FLOPS and hashrate throughput of Abaco in various scenarios. From this testing we establish that Abaco correctly scales to 100 Jetstream “m1.medium” instances and achieves over 19 TFLOPS.

Christian R. Garcia, Joe Stubbs, Julia Looney, Anagha Jamthe, Mike Packard, Kreshel Nguyen
Enterprise Backend as a Service (EBaaS)

In a world of computers and the World Wide Web, web applications have become more and more popular. There has been a steady decline in installed applications, with people mostly relying on web applications to get their work done. With constant innovation in computing, new start-ups appear every day, and a web application of their own is the most direct way for them to reach millions of people. A web application usually has (1) a front-end, what a user sees on the screen while accessing the application, and (2) a back-end, which the front-end communicates with to process the user’s requests. Since the invention of RESTful web services, developers have relied on APIs to which the front-end sends requests in order to get an appropriate response. RESTful APIs have become the de facto standard for back-end development, and more often than not they are quite basic, with only the queries changing to fetch data from the database. This paper provides a solution that automates back-end development and thus requires no expert knowledge beyond familiarity with the underlying database; even a nondeveloper, or a developer with no prior back-end experience, can easily obtain a working back-end. The solution discussed here asks the user to provide database details and creates the database along with downloadable back-end code that is ready to interact with the front-end and the database.
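To illustrate the general idea of deriving a back-end from database metadata (not the EBaaS implementation itself, which emits downloadable code rather than serving routes), here is a minimal sketch using Flask and SQLAlchemy reflection against a hypothetical SQLite database; the file name and endpoint layout are assumptions for the example.

```python
# Toy illustration of "back-end from a database": reflect existing tables with
# SQLAlchemy and expose a generic read endpoint per table with Flask. The EBaaS
# system described in the chapter generates downloadable back-end code instead.
from flask import Flask, jsonify
from sqlalchemy import create_engine, MetaData, select

engine = create_engine("sqlite:///app.db")      # hypothetical existing database
metadata = MetaData()
metadata.reflect(bind=engine)                   # discover all tables and columns

app = Flask(__name__)

def make_list_endpoint(table):
    def list_rows():
        with engine.connect() as conn:
            rows = conn.execute(select(table)).mappings().all()
        return jsonify([dict(r) for r in rows])
    return list_rows

for name, table in metadata.tables.items():
    app.add_url_rule(f"/api/{name}", endpoint=f"list_{name}", view_func=make_list_endpoint(table))

if __name__ == "__main__":
    app.run(debug=True)
```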

Gokay Saldamli, Aditya Doshatti, Darshil Kapadia, Devashish Nyati, Maulin Bodiwala, Levent Ertaul
Secure Business Intelligence

Enterprise organizations have relied on correct data in business intelligence visualization and analytics for years. Before the adoption of the cloud, most data visualizations were executed and displayed inside enterprise applications. As application architectures have moved to the cloud, many cloud services now provide business intelligence functionality. The services are delivered in a way that is more accessible for end users, through web browsers, mobile devices, data feeds, and email attachments. Unfortunately, along with all the benefits of cloud business intelligence services comes complexity. The complexity can lead to slow response times, errors, data leakage, and integrity issues. An information technology department or service provider must get ahead of these problems by automating the execution of reports to know when availability or integrity issues exist and by dealing with those issues before they turn into end-user trouble tickets. The development of the business intelligence code must also include tools to express the privacy requirements of the data exposed in the report or document. In this paper, we present two tools we developed to help guarantee the confidentiality, integrity, and availability of business intelligence. The first tool is our client-side correctness programming language, which allows execution against many cloud documents and business intelligence services; the secBIML language enables issues to be discovered proactively before end users experience the problems. The other tool in our work is a server-side programming language that allows the creation of reports and business documents; the secBIrpts language enables an organization to express its privacy requirements using a hierarchical security model.

Aspen Olmsted
Framework for Monitoring the User’s Behavior and Computing the User’s Trust

Traditional access control, simple methods for virus detection, and intrusion detection are unable to manage the variety of malicious and network attacks. Many users may be compromised because of the limitations of basic security protection. To implement a secure, reliable, and safe cloud-computing environment, we need to consider the trust issue. A trusted cloud is guaranteed to be safe from user terminals; combined with the concept of a trusted network, it evaluates, forecasts, monitors, and manages user behavior to eliminate malicious datacenter attacks performed by unwanted cloud users and hackers, resulting in improved cloud security. In this chapter, we propose a Framework for Monitoring the User’s Behavior and Computing the User’s Trust (FMUBCT). This model detects abnormal user behavior by creating user-behavior history patterns and comparing them with current user behavior. The outcome of the comparison is sent to a trust computation center to calculate a user trust value. FMUBCT is flexible and scalable, as it can consider additional evidence to monitor and evaluate user behavior. Finally, a simulation of FMUBCT shows that the model can effectively evaluate users.

Maryam Alruwaythi, Kendall Nygard
Selective Compression Method for High-Quality DaaS (Desktop as a Service) on Mobile Environments

Computing is now understood as a concept in which various IT devices can be used at any time and in any place, and with the ongoing growth of BYOD (bring your own device), users increasingly want to carry out company or official work through DaaS on their personal smart devices; DaaS therefore has to be supported smoothly in mobile environments. Hence, this study proposes a high-quality transmission method, required for smooth use on smart devices, in order to advance the VDI environment to DaaS. To demonstrate that this goal is achieved, effectiveness results were analyzed through function and scenario tests of the “gradual terminal protocol” technology and the protocols of other commonly used virtualization solutions.

Baikjun Choi, Sooyong Park
SURF: Optimized Data Distribution Technology

In a hybrid cloud environment, an enterprise employs a number of local sites (or data centers) and the cloud data center(s) of possibly multiple cloud service providers. SURF is a technology for controlling the distribution of data and applications in such an environment. Distribution decisions are based on (a) enterprise policy, (b) performance characteristics, and (c) pricing tables. The technology may be used as the core of multiple system types. SURF’s distributed algorithms and its data distribution decision component were examined via extensive simulations of a hypothetical application and proved vastly superior to naive placement methods in a changing environment.

Oded Shmueli, Itai Shaked
Securing Mobile Cloud Computing Using Encrypted Biometric Authentication

Mobile cloud computing (MCC) is based on the integration of cloud computing and mobile devices and inherits cloud computing characteristics such as on-demand self-service, broad network access, and measured service. MCC also inherits the security threats of cloud computing, such as data loss, exposure of data to third parties, and unauthorized access to resources. Although much research has addressed protecting data from illegitimate access using traditional encryption techniques, this paper discusses a new methodology for preventing unauthorized access to resources by encrypting the user’s password along with a biometric identifier (fingerprint) and storing them in the cloud. As a result, only authorized users can generate keys for encrypting their data and storing it in the cloud. The proposed methodology protects the identity of the user and keeps user data safe from unauthorized access.
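One common way to bind a password to a biometric factor is to combine both inputs before key stretching. The sketch below is a minimal illustration using only standard-library PBKDF2, assuming the two factors are simply concatenated; the chapter's actual construction, fingerprint encoding, and storage format may differ, and the fingerprint bytes here are placeholders.

```python
# Sketch: derive a symmetric key from a password combined with a hash of the
# user's fingerprint template via PBKDF2. Illustrative only; the chapter's
# scheme may combine and store these factors differently.
import hashlib
import os

def derive_key(password: str, fingerprint_template: bytes, salt: bytes) -> bytes:
    # bind both factors together, then apply key stretching
    factor = password.encode("utf-8") + hashlib.sha256(fingerprint_template).digest()
    return hashlib.pbkdf2_hmac("sha256", factor, salt, iterations=200_000, dklen=32)

salt = os.urandom(16)
fingerprint_template = b"\x01\x02minutiae-feature-bytes"   # placeholder for real biometric data
key = derive_key("correct horse battery staple", fingerprint_template, salt)
print(key.hex())
```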

Iehab AlRassan
Performance Analysis of Remote Desktop Session Host with Video Playback Scenarios

In many places, desktop as a service in the Windows environment has been provided using Virtual Desktop Infrastructure (VDI) and Remote Desktop Session Host (RDSH). A number of studies have analyzed the standalone performance of RDP or the performance of hypervisors, but few have analyzed performance when a number of RDSH sessions are running. The RDSH performance analysis published by Microsoft is not suitable for estimating the acceptable number of users on servers in current usage environments, where videos are frequently played, because it employs models that exclude video-related tasks. This study aims to analyze RDSH performance including video playback scenarios and to estimate the acceptable number of servers from the performance analysis results.

Baikjun Choi, Sooyong Park
Mining_RNA: WEB-Based System Using e-Science for Transcriptomic Data Mining

High-throughput gene expression studies have yielded a great number of large datasets, which are freely available in biological databases. Re-analyzing these studies individually or in clusters can produce new results relevant to the scientific community. The purpose of this work is to develop a WEB system based on the e-Science paradigm. The system should read massive amounts of data from the Gene Expression Omnibus (GEO) database and pre-process, mine, and display them in a user-friendly interface. It is thus intended to mitigate the difficulty of interpreting data from transcriptomic studies performed with the DNA microarray technique. We also present the preliminary results obtained from the initial stages of development, as well as the proposed architecture for the system.

Carlos Renan Moreira, Christina Pacheco, Marcos Vinícius Pereira Diógenes, Pedro Victor Morais Batista, Pedro Fernandes Ribeiro Neto, Adriano Gomes da Silva, Stela Mirla da Silva Felipe, Vânia Marilande Ceccatto, Raquel Martins de Freitas, Thalia Katiane Sampaio Gurgel, Exlley Clemente dos Santos, Cynthia Moreira Maia, Thiago Alefy Almeida e Sousa, Cicília Raquel Maia Leite
Backmatter
Metadata
Title
Advances in Parallel & Distributed Processing, and Applications
Editors
Dr. Hamid R. Arabnia
Leonidas Deligiannidis
Michael R. Grimaila
Douglas D. Hodson
Prof. Kazuki Joe
Masakazu Sekijima
Fernando G. Tinetti
Copyright Year
2021
Electronic ISBN
978-3-030-69984-0
Print ISBN
978-3-030-69983-3
DOI
https://doi.org/10.1007/978-3-030-69984-0