2013 | Original Paper | Book Chapter
1st Workshop on Big Data Management in Clouds – BDMC2012
Authors: Alexandru Costan, Ciprian Dobre
Published in: Euro-Par 2012: Parallel Processing Workshops
Publisher: Springer Berlin Heidelberg
As data volumes grow at exponential rates across an increasing number of scientific fields, the challenges of handling Big Data in the Exabyte era gain in importance. High-energy physics, statistics, climate modeling, cosmology, genetics and bio-informatics are just a few of the fields where it becomes crucial to efficiently manipulate Big Data, which is typically shared at large scale. Rapidly storing this data, protecting it from loss and analyzing it to understand the results are significant challenges, made harder by decades of improvements in computational capability that storage performance has not matched. For many applications, overall performance and scalability are clearly driven by the performance of the data-handling subsystem. As we anticipate Exascale systems in 2020, there is a growing consensus in the scientific community that revolutionary new approaches to computational science data management are needed. These trends lead us to rethink the traditional file-based data-management abstraction for large-scale applications. Moreover, for obvious cost-related reasons, new architectures are clearly needed, as are infrastructures alternative to supercomputers, such as hybrid or HPC clouds.