
2017 | Book

High Performance Computing

Third Latin American Conference, CARLA 2016, Mexico City, Mexico, August 29–September 2, 2016, Revised Selected Papers


About this book

This book constitutes the proceedings of the Third Latin American Conference on High Performance Computing, CARLA 2016, held in Mexico City, Mexico, in August/September 2016.

The 30 papers presented in this volume were carefully reviewed and selected from 70 submissions. They are organized in topical sections named: HPC Infrastructure and Applications; Parallel Algorithms and Applications; HPC Applications and Simulations.

Table of Contents

Frontmatter

HPC Infrastructure and Applications

Frontmatter
Efficient P2P Inspired Policy to Distribute Resource Information in Large Distributed Systems
Abstract
Computational infrastructures are becoming larger and more complex. Their organization and interconnection are acquiring new dimensions with the increasing adoption of Cloud technology and the establishment of federations of cloud providers.
These large interconnected systems require monitoring at different levels of the infrastructure: from the availability of hardware resources to the effective provision of services and the verification of the terms of established agreements.
Monitoring becomes a fundamental component of any Cloud service or federation, as up-to-date information about the resources in the system is an essential input to the scheduler component. The way in which the different members of such a distributed system obtain and distribute resource information is known as the Resource Information Distribution Policy.
The search for a scalable and easy-to-maintain policy leads naturally to the Peer-to-Peer (P2P) paradigm. Some of the proposed policies are based on establishing a ranking according to previous communications between nodes; these are known as learning-based methods or Best-Neighbor (BN) policies. However, this type of policy shows poor performance and limited scalability compared with the de facto Hierarchical policy and other hybrid policies.
In this work, we introduce pBN, a fully distributed resource information policy based on P2P. We analyze some reasons that could produce the poor performance of standard BN and propose an improvement whose performance and bandwidth consumption are similar to those of the Hierarchical policy and other hybrid variants. To compare the different policies, a dedicated simulation tool is used with different system sizes and an exponential network topology.
Paula Verghelet, Esteban Mocskos
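The paper itself gives no code; as a rough, hypothetical illustration of the Best-Neighbor idea it builds on (the class, scoring rule, and coefficients below are our own assumptions, not pBN), a peer can rank its neighbors by how useful their past replies were and direct most queries to the top-ranked ones:

```python
import random
from collections import defaultdict

class BestNeighborPeer:
    """Toy model of a learning-based (Best-Neighbor) resource-information
    policy: neighbors are ranked by the usefulness of past exchanges."""

    def __init__(self, peer_id, neighbors):
        self.peer_id = peer_id
        self.neighbors = list(neighbors)
        self.score = defaultdict(float)   # learned ranking per neighbor

    def pick_targets(self, fanout=3):
        # Exploit the best-ranked neighbors, but keep one random pick so
        # the ranking can still discover better-informed peers.
        ranked = sorted(self.neighbors, key=lambda n: self.score[n], reverse=True)
        return ranked[:fanout - 1] + [random.choice(self.neighbors)]

    def update_score(self, neighbor, fresh, stale):
        # Reward neighbors whose replies carried fresh resource entries;
        # exponential smoothing keeps the ranking adaptive over time.
        usefulness = fresh / max(1, fresh + stale)
        self.score[neighbor] = 0.8 * self.score[neighbor] + 0.2 * usefulness
```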
Performance Evaluation of Multiple Cloud Data Centers Allocations for HPC
Abstract
This paper evaluates the behavior of the Microsoft Azure G5 cloud instance type over multiple data centers. The purpose is to identify whether there are major differences between them and to help users choose the best option for their needs. Our results show that there are differences at the network level for the same instance type in different locations, and within the same location at different times. These network-level differences propagate to the application level, as our results confirm.
Eduardo Roloff, Emmanuell Diaz Carreño, Jimmy K. M. Valverde-Sánchez, Matthias Diener, Matheus da Silva Serpa, Guillaume Houzeaux, Lucas M. Schnorr, Nicolas Maillard, Luciano Paschoal Gaspary, Philippe Navaux
Communication-Aware Affinity Scheduling Heuristics in Multicore Systems
Abstract
This article presents the application of heuristic algorithms to solve the affinity scheduling problem in multicore computing systems. Affinity scheduling is a technique that allows efficient utilization of heterogeneous computing systems by assigning a set of tasks to cores, taking into account specific efficiency and quality-of-service criteria. The heuristics proposed in this article are useful methods to solve realistic instances of the communication-aware affinity scheduling problem, which accounts for the different speeds of communication and data transfer between tasks executing on different cores of a multicore system. The experimental analysis demonstrates that the proposed heuristics outperform traditional scheduling techniques by up to 12.3% when considering both the communication and synchronization times between tasks.
Diego Regueira, Santiago Iturriaga, Sergio Nesmachnow
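As a hedged sketch of what a communication-aware affinity heuristic can look like (a plain greedy rule of our own, not necessarily one of the article's heuristics), each task is placed on the core that minimizes its estimated compute time plus the transfer cost to its already-placed communication partners:

```python
def greedy_affinity_schedule(task_work, comm, core_speed, link_cost):
    """Greedy sketch: task_work maps task -> compute demand, comm maps
    (t1, t2) -> data volume exchanged, core_speed maps core -> relative
    speed, and link_cost maps (c1, c2) -> per-unit transfer cost."""
    placement = {}
    load = {c: 0.0 for c in core_speed}

    def traffic(t):
        return sum(v for (a, b), v in comm.items() if t in (a, b))

    # Place the most communication-intensive tasks first.
    for t in sorted(task_work, key=traffic, reverse=True):
        def cost(c):
            compute = (load[c] + task_work[t]) / core_speed[c]
            transfer = 0.0
            for (a, b), vol in comm.items():
                if t not in (a, b):
                    continue
                other = b if a == t else a
                if other in placement:
                    # Symmetric lookup in case only one direction is given.
                    transfer += vol * link_cost.get(
                        (c, placement[other]),
                        link_cost.get((placement[other], c), 0.0))
            return compute + transfer
        best = min(core_speed, key=cost)
        placement[t] = best
        load[best] += task_work[t]
    return placement
```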
Penalty Scheduling Policy Applying User Estimates and Aging for Supercomputing Centers
Abstract
In this article we address the problem of scheduling on realistic high performance computing facilities using incomplete information about task execution times. We introduce a variation of our previous Penalty Scheduling Policy, including an aging scheme that increases the priority of jobs over time. User-provided runtime estimates are applied as in the original Penalty Scheduling Policy, but a realistic priority scheme is proposed to avoid starvation. The experimental evaluation of the proposed scheduler is performed using real workload logs and validated using a job scheduler simulator. We study different realistic workload scenarios to evaluate the performance of the Penalty Scheduling Policy with aging. The main results suggest that using the proposed scheduler with the aging scheme, the waiting time of jobs in the high performance computing facility is significantly reduced (up to 50% on average).
Nestor Rocchetti, Miguel Da Silva, Sergio Nesmachnow, Andrei Tchernykh
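The exact penalty and aging formulas are defined in the paper; the toy priority function below (all coefficients hypothetical) only illustrates the interplay the abstract describes: inaccurate runtime estimates lower a job's priority, while an unbounded aging term guarantees that every waiting job is eventually served:

```python
def job_priority(estimated_runtime, accuracy_history, wait_time, aging_rate=0.01):
    """Hedged sketch of a penalty-plus-aging priority (not the paper's
    formula). accuracy_history in [0, 1]: 1 means the user's past runtime
    estimates were perfect, 0 means they were always wrong."""
    penalty = 1.0 - accuracy_history          # inaccurate users pay a penalty
    base = 1.0 / (estimated_runtime * (1.0 + penalty))
    aging = aging_rate * wait_time            # grows without bound -> no starvation
    return base + aging
```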
Accelerating All-Sources BFS Metrics on Multi-core Clusters for Large-Scale Complex Network Analysis
Abstract
All-Sources BFS (AS-BFS) is the main building block in a variety of complex network metric algorithms, such as the average path length and the betweenness centrality. However, AS-BFS calculations involve as many full BFS traversals as the total number of vertices, rendering AS-BFS impractical on commodity systems for real-world graphs with millions of vertices and links. In this paper we present our experience with the acceleration of AS-BFS graph metrics on multi-core HPC clusters by outlining hybrid coarse-grain parallel algorithms for computing the average path-length, the diameter and the betweenness centrality of complex networks in a lock-free fashion. We report speedups of up to 171× on a heterogeneous cluster of 12-core Intel Xeon and 32-core AMD Opteron multi-core nodes, as well as resource utilizations of up to 75%.
Alberto Garcia-Robledo, Arturo Diaz-Perez, Guillermo Morales-Luna
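AS-BFS itself is simple to state: run one BFS per vertex and aggregate the distances. A minimal coarse-grain parallel sketch (assuming an unweighted, connected graph stored as an adjacency dict; the paper's lock-free cluster algorithms are far more refined) looks like this:

```python
from collections import deque
from multiprocessing import Pool

def bfs_distances(args):
    """One full BFS from a single source; returns (sum of distances,
    eccentricity of the source)."""
    graph, source = args
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(dist.values()), max(dist.values())

def as_bfs_metrics(graph, workers=4):
    """Average path length and diameter via All-Sources BFS,
    parallelized coarsely: one BFS traversal per worker task."""
    with Pool(workers) as pool:
        results = pool.map(bfs_distances, [(graph, s) for s in graph])
    n = len(graph)
    avg_path_length = sum(s for s, _ in results) / (n * (n - 1))
    diameter = max(e for _, e in results)
    return avg_path_length, diameter
```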
Exploration of Load Balancing Thresholds to Save Energy on Iterative Applications
Abstract
The power consumption of High Performance Computing systems is an increasing concern as large-scale systems grow in size and, consequently, consume more energy. In response to this challenge, we propose two variants of a new energy-aware load balancer that aim at reducing the energy consumption of parallel platforms running imbalanced scientific applications without degrading their performance. Our research combines Dynamic Load Balancing with Dynamic Voltage and Frequency Scaling techniques in order to reduce the clock frequency of underloaded computing cores that experience some residual imbalance even after tasks are remapped. This work presents a trade-off evaluation between runtime, power demand and total energy consumption when applying these two energy-aware load balancer variants to real-world applications. In this way, we can determine the best threshold value for each application when the focus is total energy consumption, total execution time, or average power demand.
Edson L. Padoin, Laércio L. Pilla, Márcio Castro, Philippe O. A. Navaux, Jean-François Méhaut
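As a hedged sketch of the threshold idea (the rule and values below are our assumptions, not the paper's implementation): after tasks are remapped, cores whose residual load stays below a fraction of the busiest core's load can be stepped down to the slowest frequency that still avoids delaying the critical path:

```python
def assign_frequencies(core_loads, freqs, threshold=0.8):
    """core_loads: {core: residual load after balancing};
    freqs: available frequencies, sorted ascending (e.g., GHz).
    Cores near the peak load keep full speed; underloaded cores get the
    lowest frequency that still finishes no later than the busiest core."""
    peak = max(core_loads.values())
    settings = {}
    for core, load in core_loads.items():
        if load >= threshold * peak:
            settings[core] = freqs[-1]        # critical cores: full speed
        else:
            needed = freqs[-1] * load / peak  # slowest adequate frequency
            settings[core] = next(f for f in freqs if f >= needed)
    return settings
```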

Parallel Algorithms and Applications

Frontmatter
Design of a Task-Parallel Version of ILUPACK for Graphics Processors
Abstract
In many scientific and engineering applications, the solution of large sparse systems of equations is one of the most important stages. For this reason, many libraries have been developed, among which ILUPACK stands out due to its efficient inverse-based multilevel preconditioner. Several parallel versions of ILUPACK have been proposed in the past. In particular, two task-parallel versions, for shared and distributed memory platforms, and a GPU-accelerated data-parallel variant have been developed to solve symmetric positive definite linear systems. In this work we evaluate the combination of both previously covered approaches. Specifically, we leverage the computational power of one GPU (associated with the data-level parallelism) to accelerate each computation of the multicore (task-parallel) variant of ILUPACK. The experimental evaluation shows that our proposal can accelerate the multicore variant when the leaf tasks of the parallel solver are sufficiently large.
José I. Aliaga, Ernesto Dufrechou, Pablo Ezzatti, Enrique S. Quintana-Ortí
A Taxonomy of Workflow Scheduling Algorithms
Abstract
A workflow is a set of steps or tasks that model the execution of a process, e.g., protein annotation, invoice generation and composition of astronomical images. Workflow applications commonly require large computational resources. Hence, distributed computing approaches (such as Grid and Cloud computing) emerge as a feasible solution to execute them. Two important factors for executing workflows in distributed computing platforms are (1) workflow scheduling and (2) resource allocation. As a consequence, there is a myriad of workflow scheduling algorithms that map workflow tasks to distributed resources subject to task dependencies, time and budget constraints. In this paper, we present a taxonomy of workflow scheduling algorithms, which categorizes the algorithms into (1) best-effort algorithms (including heuristics, metaheuristics, and approximation algorithms) and (2) quality-of-service algorithms (including budget-constrained, deadline-constrained and algorithms simultaneously constrained by deadline and budget). In addition, a workflow engine simulator was developed to quantitatively compare the performance of scheduling algorithms.
Fernando Aguilar-Reyes, J. Octavio Gutierrez-Garcia
An Efficient Implementation of Boolean Gröbner Basis Computation
Abstract
The computation of boolean Gröbner bases has become an increasingly popular technique for solving systems of boolean equations that appear in cryptography. This technique has been used to solve some cryptosystems for the first time. In this paper, we describe a new concurrent algorithm for boolean Gröbner basis computation that is capable of solving the first HFE challenge. We also discuss implementation details, including optimal runtime parameters that depend on the CPU architecture. Our implementation is available as open source software.
Rodrigo Alexander Castro Campos, Feliú Davino Sagols Troncoso, Francisco Javier Zaragoza Martínez
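For readers who want to experiment with the underlying computation, a minimal boolean Gröbner basis can be sketched with SymPy by working over GF(2) and adding the field equations v² + v = 0 (a toy example under those assumptions; the paper's concurrent algorithm and its HFE-scale performance are far beyond this):

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')
# A toy system of boolean equations, plus the field equations
# v**2 + v = 0 that restrict solutions to GF(2) = {0, 1}.
polys = [x*y + z, y*z + x + 1,
         x**2 + x, y**2 + y, z**2 + z]
gb = groebner(polys, x, y, z, modulus=2, order='lex')
print(gb)   # a lex basis; solutions follow by back-substitution
```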
Accelerating Hash-Based Query Processing Operations on FPGAs by a Hash Table Caching Technique
Abstract
Extracting valuable information from the rapidly growing field of Big Data faces serious performance constraints, especially in software-based database management systems (DBMS). In a query processing system, hash-based computational primitives such as the hash join and the group-by are the most time-consuming operations, as they frequently need to access the hash table in high-latency off-chip memory and to traverse the whole table. Moreover, hash collisions are an inherent issue of hash tables and can severely degrade overall performance.
To alleviate this problem, we present in this paper a novel, purely hardware-based hash engine implemented on an FPGA. To mitigate the high memory access latencies and to resolve hash collisions faster, we follow a novel design point: caching the hash table entries in the fast on-chip Block RAMs of the FPGA. Faster access to the corresponding hash table entries from the cache leads to improved overall performance.
We evaluate the proposed approach by running hash-based table join and group-by operations of 5 TPC-H benchmark queries. The results show 2.9×–4.4× speedups over the cache-less FPGA-based baseline.
Behzad Salami, Oriol Arcas-Abella, Nehir Sonmez, Osman Unsal, Adrian Cristal Kestelman
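A software model of the caching idea (a hypothetical analogue; the actual engine is pure FPGA hardware) puts a small, fast map, standing in for on-chip Block RAM, in front of the large off-chip hash table:

```python
class CachedHashJoin:
    """Software model of hash-table caching: a bounded cache (the
    'on-chip Block RAM') shields a large hash table (the 'off-chip DRAM')."""

    def __init__(self, cache_slots=1024):
        self.table = {}            # "off-chip" hash table: key -> rows
        self.cache = {}            # "on-chip" cache of hot entries
        self.cache_slots = cache_slots

    def build(self, rows, key):
        # Build phase of the join: hash every row of the build relation.
        for row in rows:
            self.table.setdefault(row[key], []).append(row)

    def probe(self, key_value):
        # Probe phase: fast path hits the cache, slow path pays the
        # "off-chip" access and installs the entry (misses included).
        if key_value in self.cache:
            return self.cache[key_value]
        matches = self.table.get(key_value, [])
        if len(self.cache) >= self.cache_slots:
            self.cache.pop(next(iter(self.cache)))   # crude FIFO-style eviction
        self.cache[key_value] = matches
        return matches
```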
Distributed Big Data Analysis for Mobility Estimation in Intelligent Transportation Systems
Abstract
This article describes the application of distributed computing techniques for the analysis of big data from Intelligent Transportation Systems. Extracting useful mobility information from large volumes of data is crucial to improve decision-making processes in smart cities. We study the problem of estimating demand and origin-destination matrices based on ticket sales and the location of buses in the city. We introduce a framework for mobility analysis in smart cities, including two algorithms for the efficient processing of large mobility datasets from the public transportation system of Montevideo, Uruguay. Parallel versions are proposed for distributed memory (e.g., cluster, grid, cloud) infrastructures, and a cluster implementation is presented. The experimental analysis, performed using realistic datasets, demonstrates that significant speedups of up to 16.41 are obtained.
Enzo Fabbiani, Pablo Vidal, Renzo Massobrio, Sergio Nesmachnow
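A minimal sketch of the origin-destination estimation step (field names and the trip-chaining heuristic are our assumptions, not necessarily the paper's algorithms) joins ticket sales with bus positions and counts trips between stops:

```python
from collections import Counter

def estimate_od_matrix(ticket_sales, bus_locations):
    """ticket_sales: iterable of (card_id, bus_id, timestamp);
    bus_locations: {(bus_id, timestamp): stop_id}.
    Returns a Counter mapping (origin_stop, destination_stop) -> trips."""
    boardings = {}
    od = Counter()
    for card_id, bus_id, ts in sorted(ticket_sales, key=lambda r: r[2]):
        stop = bus_locations.get((bus_id, ts))
        if stop is None:
            continue                      # no GPS fix for this sale
        if card_id in boardings:
            # Trip-chaining heuristic: a card's next boarding marks the
            # destination of its previous trip.
            od[(boardings[card_id], stop)] += 1
        boardings[card_id] = stop
    return od
```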
Evaluation of a Master-Slave Parallel Evolutionary Algorithm Applied to Artificial Intelligence for Games in the Xeon-Phi Many-Core Platform
Abstract
Evolutionary algorithms are non-deterministic metaheuristic methods that emulate the evolution of species in nature to solve optimization, search, and learning problems. This article presents a parallel implementation of evolutionary algorithms on the Xeon Phi for developing an artificial intelligence to play the NES Pinball game. The proposed parallel implementation offloads the execution of the fitness function evaluation to the Xeon Phi. Multiple evolution schemes are studied to achieve the most efficient resource utilization. A micro-benchmarking of the Xeon Phi coprocessor is performed to verify the existing technical documentation and to obtain detailed knowledge of its behavior. Finally, a performance analysis of the proposed parallel evolutionary algorithm is presented, focusing on the characteristics of the evaluated platform.
Sebastián Rodríguez Leopold, Facundo Parodi, Sergio Nesmachnow, Esteban Mocskos
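The master-slave structure itself is easy to sketch: a master evolves the population while slaves evaluate fitness in parallel. The minimal analogue below uses a process pool in place of the Xeon Phi offload (the fitness function and EA operators are placeholders, not the paper's NES Pinball setup):

```python
import random
from multiprocessing import Pool

def fitness(genome):
    # Stand-in for the expensive evaluation the paper offloads
    # (here just a toy objective with optimum at 0.5 everywhere).
    return -sum((g - 0.5) ** 2 for g in genome)

def master_slave_ea(pop_size=64, genome_len=16, generations=50, workers=8):
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    with Pool(workers) as pool:               # "slaves" evaluate in parallel
        for _ in range(generations):
            scores = pool.map(fitness, pop)   # offloaded fitness evaluation
            ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
            elite = ranked[:pop_size // 2]
            # Mutation-only reproduction keeps the sketch short.
            pop = elite + [[g + random.gauss(0, 0.05) for g in parent]
                           for parent in elite]
    return max(pop, key=fitness)
```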
A Software Framework for 2D Mesh Based Simulations in Discrete Time with Local Interaction
Abstract
Some features shared by families of natural phenomena can be exploited in the implementation of software simulation tools. An analogy is found in manufacturing, where organisations design products so that commonality in components and processes can be exploited. This work aims to exploit commonality in a class of simulation problems in order to create a software framework that allows code reuse and thus reduces implementation effort. The proposed framework includes the core components for the simulation of varied phenomena; interested researchers can use parts of the framework and adapt the remaining components to their specific simulation problems. A test case drawn from previous work on lava flow simulation is then presented, together with experimental results. Guidelines for the design of the framework are also presented and discussed.
Sergio A. Gélvez C., Gabriel Pedraza, Carlos J. Barrios H
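A minimal sketch of the framework's core abstraction, as we read it (all names hypothetical): a 2D grid advanced in discrete time steps, with the user supplying only the local interaction rule:

```python
class MeshSimulation:
    """2D mesh, discrete time, local interaction: the framework supplies
    the grid and update loop; the user plugs in the rule."""

    def __init__(self, width, height, init_cell, rule):
        # rule(cell, neighbors) -> new cell state
        self.grid = [[init_cell(x, y) for x in range(width)]
                     for y in range(height)]
        self.rule = rule

    def neighbors(self, x, y):
        # Von Neumann neighborhood with periodic (wraparound) boundaries.
        h, w = len(self.grid), len(self.grid[0])
        return [self.grid[(y + dy) % h][(x + dx) % w]
                for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1))]

    def step(self):
        # Synchronous update: new states depend only on the old grid,
        # which stays intact until the comprehension completes.
        self.grid = [[self.rule(cell, self.neighbors(x, y))
                      for x, cell in enumerate(row)]
                     for y, row in enumerate(self.grid)]
```

A diffusion-like rule such as `lambda c, ns: c + 0.1 * (sum(ns) / 4 - c)` could then stand in for a specific phenomenon such as lava cooling.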
A GPU Parallel Implementation of the RSA Private Operation
Abstract
The implementation of the RSA private operation tends to be expensive, since its computational complexity is cubic with respect to the bit-size of its private key. As a consequence, considerable effort has been put into optimizing this operation. In this work, we present a parallel implementation of the RSA private operation using the Single Instruction Multiple Thread (SIMT) threading model of Graphics Processing Unit (GPU) platforms. The underlying modular arithmetic is performed by means of the Residue Number System (RNS) representation. By combining these two approaches, we present a GPU software library that achieves high-speed timings for the RSA private operation when using 1024-, 2048- and 3072-bit secret keys.
Nareli Cruz-Cortés, Eduardo Ochoa-Jiménez, Luis Rivera-Zamarripa, Francisco Rodríguez-Henríquez
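The paper's RNS arithmetic on the GPU is specialized, but the textbook CRT decomposition that makes the private operation parallelizable is easy to state; the sketch below computes m = c^d mod pq via two independent half-size exponentiations:

```python
def rsa_private_crt(c, p, q, d):
    """Textbook CRT form of the RSA private operation m = c**d mod (p*q).
    The two half-size exponentiations are independent, which is what
    parallel (e.g., GPU/RNS) implementations exploit."""
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)            # q^{-1} mod p (Python 3.8+)
    m1 = pow(c % p, dp, p)
    m2 = pow(c % q, dq, q)
    h = (q_inv * (m1 - m2)) % p      # Garner's recombination
    return m2 + h * q

# Tiny self-check with toy (insecure) parameters:
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)
assert rsa_private_crt(pow(42, e, n), p, q, d) == 42
```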
Reducing the Overhead of Message Logging in Fault-Tolerant HPC Applications
Abstract
With the exascale era within reach, the high performance computing community is preparing to embrace the challenges associated with extreme-scale systems. Resilience arises as one of the major hurdles in making those systems usable for the advancement of science and industry. Message logging is a well-known strategy for providing fault tolerance, and a promising one due to its ability to avoid global restart. However, message-logging protocols may suffer considerable overhead if implemented for the general case. This paper introduces a new message-logging protocol that leverages the benefits of a flexible parallel programming paradigm. We evaluate the protocol using a particular type of application and demonstrate that it keeps the performance penalty low when scaling up to 128,000 cores.
Esteban Meneses
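As a point of reference for how message logging avoids global restart, here is a sketch of classic sender-based logging (one common protocol family; the paper's protocol is different and exploits a specific programming paradigm): every message is logged locally before sending, so only a restarted receiver needs to be replayed:

```python
class LoggingSender:
    """Sender-based message logging sketch. `transport` is any object
    with a send(dest, seq, payload) method."""

    def __init__(self, transport):
        self.transport = transport
        self.log = {}            # dest -> list of (seq, payload)
        self.seq = {}            # per-destination sequence numbers

    def send(self, dest, payload):
        n = self.seq.get(dest, 0)
        self.log.setdefault(dest, []).append((n, payload))  # log first...
        self.transport.send(dest, n, payload)               # ...then send
        self.seq[dest] = n + 1

    def replay(self, dest, from_seq=0):
        # After dest restarts, re-send everything it missed, in order,
        # without touching any other process (no global rollback).
        for n, payload in self.log.get(dest, []):
            if n >= from_seq:
                self.transport.send(dest, n, payload)
```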
Dense and Sparse Matrix-Vector Multiplication on Maxwell GPUs with PyCUDA
Abstract
We present a study of matrix-vector product operations on the Maxwell GPU generation through the PyCUDA Python library. Through this lens, a broad analysis is performed over different memory management schemes. We identify the approaches that result in higher performance on current GPU generations when using dense matrices. The resulting guidelines are then applied to the implementation of the sparse matrix-vector product, covering structured (DIA) and unstructured (CSR) sparse matrix formats. Our experimental study on different datasets reveals that there is little room for improvement given the current state of the memory hierarchy, and that the upcoming Pascal GPU generation will benefit substantially from our techniques.
Francisco Nurudín Álvarez, José Antonio Ortega-Toro, Manuel Ujaldón
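A minimal PyCUDA sketch of the unstructured (CSR) case, with one thread per row (a baseline scalar kernel for illustration, not the tuned memory-management schemes the study compares; inputs are assumed to be np.int32/np.float32 arrays):

```python
import numpy as np
import pycuda.autoinit                    # noqa: F401  (creates the CUDA context)
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void csr_spmv(int n, const int *rowptr, const int *col,
                         const float *val, const float *x, float *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n) {
        float acc = 0.0f;
        for (int k = rowptr[row]; k < rowptr[row + 1]; ++k)
            acc += val[k] * x[col[k]];
        y[row] = acc;
    }
}
""")
csr_spmv = mod.get_function("csr_spmv")

def spmv(rowptr, col, val, x):
    """y = A @ x for A in CSR form (rowptr/col: np.int32, val/x: np.float32)."""
    n = len(rowptr) - 1
    y = gpuarray.zeros(n, dtype=np.float32)
    csr_spmv(np.int32(n), gpuarray.to_gpu(rowptr), gpuarray.to_gpu(col),
             gpuarray.to_gpu(val), gpuarray.to_gpu(x), y,
             block=(128, 1, 1), grid=((n + 127) // 128, 1))
    return y.get()
```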

HPC Applications and Simulations

Frontmatter
Enhancing Energy Production with Exascale HPC Methods
Abstract
High Performance Computing (HPC) resources have become the key actor in achieving more ambitious challenges in many disciplines. In this step forward, exploiting the explosion of available parallelism and the use of special-purpose processors are crucial. With this goal, the HPC4E project applies new exascale HPC techniques to energy industry simulations, customizing them where necessary, and goes beyond the state of the art in the HPC exascale simulations required for different energy sources. In this paper, a general overview of these methods is presented, as well as some specific preliminary results.
Rafael Mayo-García, José J. Camata, José M. Cela, Danilo Costa, Alvaro L. G. A. Coutinho, Daniel Fernández-Galisteo, Carmen Jiménez, Vadim Kourdioumov, Marta Mattoso, Thomas Miras, José A. Moríñigo, Jorge Navarro, Philippe O. A. Navaux, Daniel de Oliveira, Manuel Rodríguez-Pascual, Vítor Silva, Renan Souza, Patrick Valduriez
Three-Dimensional CSEM Modelling on Unstructured Tetrahedral Meshes Using Edge Finite Elements
Abstract
The last decade has been a period of rapid growth for electromagnetic (EM) methods in geophysics, mostly because of their industrial adoption. In particular, the marine controlled-source electromagnetic method (CSEM) has become an important technique for reducing ambiguities in data interpretation in hydrocarbon exploration. In order to predict the EM signature of a given geological structure, modelling tools provide us with synthetic results which we can then compare to real data. Among the modelling methods for EM based upon 3D unstructured meshes, the Nédélec Edge Finite Element Method (EFEM) offers a good trade-off between accuracy and number of degrees of freedom, i.e. the size of the problem. Furthermore, its divergence-free basis is very well suited for solving Maxwell's equations. Here, we present the numerical formulation and results of 3D CSEM modelling using the Parallel Edge-based Tool for Geophysical Electromagnetic Modelling (PETGEM) on unstructured tetrahedral meshes. We validate our experiments against quasi-analytical results for canonical models.
Octavio Castillo-Reyes, Josep de la Puente, José María Cela
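For reference, the governing equation that edge elements discretize in this setting is usually written as the frequency-domain curl-curl equation (sign and time-convention details vary by author, so treat this form as an assumption rather than PETGEM's exact formulation):

```latex
% Frequency-domain electric-field (curl-curl) formulation for CSEM,
% quasi-static limit, e^{-i\omega t} time convention assumed:
\nabla \times \left( \mu^{-1} \nabla \times \mathbf{E} \right)
  - i\omega\sigma\,\mathbf{E} = i\omega\,\mathbf{J}_s .
```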
A Parallel Evolutionary Approach to the Molecular Docking Problem
Abstract
Ligand-protein molecular docking is an unsolved problem in Bioinformatics that consists of determining the way in which two such molecules bind in nature, depending on their structure and interactions. The solution of this problem is one of the core aims of Bioinformatics and the basis for the rational drug design process. Through the use of evolutionary and parallelization techniques, a new approach is presented, consisting of a threaded implementation of an island-model genetic algorithm. The results show a mixed outcome, with an aided-search version achieving quick and accurate predictions, while the more ambitious free-search proposal does not yet produce acceptable results. Additional advantages of the resulting software are its cross-platform nature, reasonable performance on average consumer hardware, and ease of use.
Daniel Espinosa-Galindo, Jesús A. Fernández-Flores, Inés A. Almanza-Román, Rosaura Palma-Orozco, Jorge L. Rosas-Trigueros
Deep Learning Applied to Deep Brain Stimulation in Parkinson’s Disease
Abstract
In order to better model complex real-world data such as biomedical signals, one approach is to develop pattern recognition techniques and robust features that capture the relevant information. In this paper, we use deep learning methods, in particular the multilayer perceptron, to build an algorithm that can predict the subcortical structures of patients with Parkinson's disease, based on microelectrode records obtained during deep brain stimulation. We report on experiments using a data set of 52 microelectrode records for the structures zona incerta, subthalamic nucleus, thalamus nucleus, and substantia nigra. The results show that the combination of features and deep learning produces an average precision of 99.2% for the detection and classification of the subcortical structures under study. In conclusion, based on the high precision obtained in the classification, deep learning could be used to predict subcortical structures, mainly the subthalamic nucleus, for neurostimulation.
Pablo Guillén
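A minimal sketch of this kind of classifier in scikit-learn (feature dimensions, labels, and data below are synthetic placeholders, not the paper's features or its 99.2% pipeline):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical setup: one row of signal features per microelectrode
# record, labels for 4 subcortical structures.
rng = np.random.default_rng(0)
X = rng.normal(size=(52, 24))        # 52 records, 24 made-up features
y = rng.integers(0, 4, size=52)      # 4 structure classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("held-out accuracy:", clf.score(scaler.transform(X_te), y_te))
```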
Computational Simulation of the Hemodynamic Behavior of a Blood Vessel Network
Abstract
During development, blood vessel networks adapt to gradual changes in the oxygen required by surrounding tissue, shear stress, and mechanical stretch. The possible adaptations include remodeling the vessel network and thickening the walls of blood vessels. However, the treatment of several vascular diseases, including cerebral arteriovenous malformations, arteriosclerosis, aneurysms, and vascular retinal disorders, may lead to abrupt changes that could produce hemorrhage or other problems. Modeling the hemodynamic behavior of a blood vessel network may help assess or even diminish the risks associated with each treatment. In this work, we briefly describe the radiological studies available to study the anatomy and hemodynamics of a patient. We then describe the segmentation, smoothing, healing, skeletonization, and meshing processes that are needed to obtain an initial model for the numerical simulations. Additionally, we state some important concepts about blood rheology and blood vessel elasticity. Finally, we include a system of equations to describe the interaction between flowing blood and the elastic blood vessels.
Nathan Weinstein, Alejandro Aviles, Isidoro Gitler, Jaime Klapp
Scaling Properties of Soft Matter in Equilibrium and Under Stationary Flow
Abstract
A brief review is presented of the scaling behavior of complex fluids, polymers, and polyelectrolytes in solution and in confined geometries, covering thermodynamic, structural, and rheological properties, using equilibrium and non-equilibrium dissipative particle dynamics simulations. All simulations were carried out on high performance computing facilities using parallelized algorithms, executed on both central and graphics processing units. The scaling approach is shown to be a unifying axis around which general trends and basic knowledge can be gained, as illustrated through a series of case studies.
Armando Gama Goicochea
On Finite Size Effects, Ensemble Choice and Force Influence in Dissipative Particle Dynamics Simulations
Abstract
The influence of finite size effects, choice of statistical ensemble and contribution of the forces in numerical simulations using the dissipative particle dynamics (DPD) model are revisited here. Finite size effects in stress anisotropy, interfacial tension and dynamic viscosity are computed and found to be minimal with respect to other models. Additionally, the choice of ensemble is found to be of fundamental importance for the accurate calculation of properties such as the solvation pressure, especially for relatively small systems. Lastly, the contribution of the random, dissipative and conservative forces that make up the DPD model in the prediction of properties of simple liquids such as the pressure is studied as well. Some tricks of the trade are provided, which may be useful for those carrying out high-performance numerical simulations using the DPD model.
Miguel Ángel Balderas Altamirano, Elías Pérez, Armando Gama Goicochea
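For readers unfamiliar with the model, these are the standard Groot-Warren DPD pairwise forces whose contributions the paper dissects:

```latex
% Standard DPD pairwise forces (Groot--Warren form), acting for r_{ij} < r_c:
\mathbf{F}^{C}_{ij} = a_{ij}\left(1 - \frac{r_{ij}}{r_c}\right)\hat{\mathbf{r}}_{ij}, \qquad
\mathbf{F}^{D}_{ij} = -\gamma\, w^{D}(r_{ij})\,
  (\hat{\mathbf{r}}_{ij}\cdot\mathbf{v}_{ij})\,\hat{\mathbf{r}}_{ij}, \qquad
\mathbf{F}^{R}_{ij} = \sigma\, w^{R}(r_{ij})\,\theta_{ij}\,
  \Delta t^{-1/2}\,\hat{\mathbf{r}}_{ij},
% with the fluctuation--dissipation constraints
w^{D}(r) = \left[w^{R}(r)\right]^{2}, \qquad \sigma^{2} = 2\gamma k_B T .
```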
Ab initio DFT Calculations for Materials in Nuclear Research
Abstract
High performance computing is currently a very important tool in materials science. The study of materials at the microscopic level, obtaining macroscopic properties from the behavior at the atomic level, is a big challenge, even more so when a large number of atoms is involved in the analysis. One of the most important open source codes capable of performing ab initio density functional theory (DFT) calculations with many hundreds of atoms at low computational cost is the SIESTA code. This code is able to perform self-consistent electronic structure simulations based on DFT for very complex materials. The performance of this code is tested in this work by applying it to the study of typical core structural materials used in nuclear reactors, such as Zr and Zircaloy-2. These materials are commonly used for the cladding of the fuel rods used in Light Water Reactors (LWR) and CANDU reactors. First-principles calculations for Zr, Zircaloy-2 and modified structures of them were performed with microstructural defects in order to analyze material damage. The adsorption energy of I2 on Zr (0 0 0 1) surfaces as a function of distance is also presented. The results show how this kind of simulation can be carried out for large systems at relatively low computational cost.
E. Mayoral, A. Rey, Jaime Klapp, A. Gómez, M. Mayoral
Super Free Fall of a Liquid Frustum in a Semi-infinite Cone
Abstract
In this paper we analyze theoretically the super free fall of a nearly inviscid mass of liquid that partially fills a small section of a very long vertical conical pipe. Using a one-dimensional inviscid model, we describe the simultaneous and peculiar motion of the two interfaces of the liquid.
Áyax Torres, Salomón Peralta, Abraham Medina, Jaime Klapp, Francisco Higuera
A Particle Method for Fluid-Structure Interaction Simulations in Multiple GPUs
Abstract
This chapter presents the programming philosophy behind a novel numerical particle method for simulating the interaction of compressible fluids and elastic structures, specifically designed to run on multiple Graphics Processing Units (GPUs). The code has been developed using the CUDA C Application Programming Interface (API) for fine-grain parallelism on the GPUs, and the Message Passing Interface (MPI) library for the distribution of threads across the Central Processing Units (CPUs) and the communication of shared data between GPUs. The numerical algorithm uses neither smoothing kernels nor weighting functions for the computation of differential operators. A novel approach computes gradients as averages of radial finite differences, and divergences via Gauss' theorem, using approximations based on area integrals over local spheres centered at each particle. The interactions of the particles inside the fluid are modelled using the isothermal, compressible Navier-Stokes equations and a simple equation of state. The elastic material is modelled using inter-particle springs with damping. Results show the potential of the method for the simulation of flows in complex geometries.
Julián Becerra-Sagredo, Leonardo Sigalotti, Jaime Klapp
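Schematically (the chapter's discrete quadrature over neighboring particles is more elaborate, and the finite-difference average carries a dimension-dependent normalization we omit here), the two operators read:

```latex
% Divergence via Gauss' theorem over a control sphere S_i of volume V_i
% centered at particle i:
(\nabla \cdot \mathbf{u})_i \approx \frac{1}{V_i} \oint_{\partial S_i}
  \mathbf{u} \cdot \hat{\mathbf{n}}\, dA ,
% and the gradient as an average of radial finite differences over the
% neighbor set N(i) (up to a dimension-dependent factor):
(\nabla f)_i \approx \Big\langle \frac{f_j - f_i}{|\mathbf{r}_j - \mathbf{r}_i|}\,
  \hat{\mathbf{r}}_{ij} \Big\rangle_{j \in \mathcal{N}(i)} .
```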
Scheduling Algorithms for Distributed Cosmic Ray Detection Using Apache Mesos
Abstract
This article presents two scheduling algorithms applied to the processing of astronomical images to detect cosmic rays on distributed-memory high performance computing systems. We extend our previous article, which proposed a parallel approach to improve processing times in image analysis using the Image Reduction and Analysis Facility (IRAF) software and the Docker project on top of Apache Mesos. By default, Mesos uses a simple list scheduling algorithm in which the first available task is assigned to the first available processor. In this paper we propose two alternatives for reordering task allocation in order to improve computational efficiency. The main results show that it is possible to reduce the makespan, achieving a speedup of 4.31, by adjusting how jobs are assigned and using uniform processors.
Germán Schnyder, Sergio Nesmachnow, Gonzalo Tancredi, Andrei Tchernykh
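One classic reordering alternative to first-fit list scheduling on uniform (related) processors is Longest Processing Time first; the sketch below shows it for reference (the paper's two algorithms may differ in detail):

```python
def lpt_uniform(jobs, speeds):
    """LPT on uniform processors: jobs maps job -> work, speeds lists the
    relative speed of each processor. Returns (schedule, makespan)."""
    finish = [0.0] * len(speeds)
    schedule = {i: [] for i in range(len(speeds))}
    # Longest jobs first; each goes to the processor finishing it earliest.
    for job, work in sorted(jobs.items(), key=lambda kv: -kv[1]):
        i = min(range(len(speeds)), key=lambda k: finish[k] + work / speeds[k])
        finish[i] += work / speeds[i]
        schedule[i].append(job)
    return schedule, max(finish)
```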
The Impetus Project: Using abacus for the High Performance Computation of Radiative Tables for Accretion onto a Galaxy Black Hole
Abstract
We present intensive calculations of digital tables for the radiative terms that appear in the energy and momentum equations used to simulate accretion onto supermassive black holes (SMBHs) at the centers of galaxies. Cooling and heating rates are presented, calculated with a Spectral Energy Distribution constructed from an accretion disk plus an X-ray power law, and from an accretion disk plus a corona. The electronic structures of atoms, the photoionization cross-sections, and the recombination rates are treated in great detail. With the recent discovery of outflows originating at sub-parsec scales, these tables may provide a useful tool for modeling gas accretion processes onto a SMBH.
José M. Ramírez-Velasquez, Jaime Klapp, Ruslan Gabbasov, Fidel Cruz, Leonardo Di G. Sigalotti
Database of CMFGEN Models in a 6-Dimensional Space
Abstract
We present a database of 25,000 atmospheric models (growing to a grand total of 75,000 models by the conclusion of the project) with stellar masses between 9 and 120 M\(_\odot\), covering the region of the OB main sequence and W-R stars in the H-R diagram. The models were calculated using the ABACUS I supercomputer and the stellar atmosphere code CMFGEN. The parameter space has six dimensions: the surface temperature of the star, also called the effective temperature (\(T_\mathrm{eff}\)), the luminosity (\(L\)), the metallicity (\(Z\)), and three stellar wind parameters, namely the exponent (\(\beta\)) of the wind velocity law, the terminal velocity (\(V_\infty\)), and the volume filling factor (\(F_{cl}\)). For each model, we also calculate synthetic spectra in the UV (900–2,000 Å), optical (3,500–7,000 Å), and near-IR (10,000–30,000 Å) ranges. For comparison with observations, the synthetic spectra were rotationally broadened using ROTIN3, covering the range between 10 and 350 km s\(^{-1}\) in steps of 10 km s\(^{-1}\), resulting in a library of 1,575,000 synthetic spectra.
Janos Zsargó, Celia Rosa Fierro, Jaime Klapp, Anabel Arrieta, Lorena Arias, D. John Hillier
Cosmography with the Hubble Rate: The Eis Approach
Abstract
The statefinder parameters characterize the expansion history of the Universe in a model-independent way. The standard method to estimate them is called Standard Cosmography (SC). In this paper we show that these estimations turn out to be highly biased, with very large standard deviations in their probability distributions. The Eis method was tailored to minimize these drawbacks. Here, with the aid of mock supernova catalogs, we show how our new method works and that it surpasses the performance of SC in both the bias and the dispersion of the estimated statefinders.
Jaime Klapp, Alejandro Aviles, Orlando Luongo
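For context, the statefinder (cosmographic) parameters are the dimensionless derivatives of the scale factor a(t) evaluated today, under one common convention (q: deceleration, j: jerk, s: snap):

```latex
q = -\frac{\ddot{a}\,a}{\dot{a}^{2}}, \qquad
j = \frac{\dddot{a}\,a^{2}}{\dot{a}^{3}}, \qquad
s = \frac{\ddddot{a}\,a^{3}}{\dot{a}^{4}},
% so that the Hubble rate admits the low-redshift expansion
H(z) = H_0\left[1 + (1+q_0)\,z
  + \tfrac{1}{2}\left(j_0 - q_0^{2}\right)z^{2} + \mathcal{O}(z^{3})\right].
```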
Backmatter
Metadata
Title
High Performance Computing
Editors
Carlos Jaime Barrios Hernández
Isidoro Gitler
Jaime Klapp
Copyright Year
2017
Electronic ISBN
978-3-319-57972-6
Print ISBN
978-3-319-57971-9
DOI
https://doi.org/10.1007/978-3-319-57972-6
