2016 | OriginalPaper | Chapter
Distributed Systems Performance for Big Data
Authors : Marcelo Paiva Ramos, Paulo Marcelo Tasinaffo, Eugenio Sper de Almeida, Luis Marcelo Achite, Adilson Marques da Cunha, Luiz Alberto Vieira Dias
Published in: Information Technology: New Generations
Publisher: Springer International Publishing
This paper describes a methodology for working with distributed systems and achieving performance with Big Data, using the Hadoop framework, the Python programming language, and the Apache Hive module. The efficiency of the proposed methodology is tested through a case study that addresses a real problem found in the supercomputing environment of the Center for Weather Forecasting and Climate Studies, linked to the Brazilian Institute for Space Research (CPTEC/INPE), which provides society with work capable of predicting disasters and saving lives. Three experiments involving this problem were run on the Cray XT-6 supercomputer: (i) the first uses Python programming on a sequential, single-processor architecture; (ii) the second uses Python with the Hadoop framework on a parallel and distributed architecture; (iii) the third combines Hadoop and Hive on a parallel and distributed architecture. The main results of these experiments are compared and discussed, and topics beyond the scope of this research are presented as recommendations and suggestions for future work.
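The second experiment pairs Python with Hadoop, which in practice typically means Hadoop Streaming, where mapper and reducer stages are plain scripts reading from stdin and writing key/value pairs to stdout. The sketch below is a minimal, hypothetical illustration of that pattern (the record layout, field names, and averaging task are assumptions for illustration, not the paper's actual workload):

```python
import sys
from itertools import groupby

def map_records(lines):
    # Mapper stage: emit (key, value) pairs from tab-separated input
    # records, e.g. "station<TAB>reading". Malformed lines are skipped.
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 2:
            try:
                yield fields[0], float(fields[1])
            except ValueError:
                continue

def reduce_records(pairs):
    # Reducer stage: Hadoop delivers mapper output grouped by key;
    # here we sort and group locally, then average each key's values.
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        values = [v for _, v in group]
        yield key, sum(values) / len(values)

if __name__ == "__main__":
    # In a real Hadoop Streaming job, the mapper and reducer would be
    # separate scripts passed via -mapper and -reducer; this single
    # script chains both stages for a local sanity check.
    pairs = list(map_records(sys.stdin))
    for key, avg in reduce_records(pairs):
        print(f"{key}\t{avg}")
```

Run locally with `cat data.tsv | python mapreduce_sketch.py`; under Hadoop Streaming the framework handles the shuffle-and-sort between the two stages across the cluster.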