2019 | OriginalPaper | Book Chapter

18. Hadoop: A Standard Framework for Computer Cluster

Written by: Eljar Akhgarnush, Lars Broeckers, Thorsten Jakoby

Published in: The Impact of Digital Transformation and FinTech on the Finance Professional

Publisher: Springer International Publishing


Abstract

Hadoop has become a standard for processing big data in a clustered environment. This chapter provides an introduction to Hadoop/HDFS and to other important Apache projects, including Spark, Hive, and HBase. Basic concepts such as worker nodes and the cluster manager are also introduced.
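As an illustrative sketch (not part of the chapter itself), the map/shuffle/reduce pattern underlying Hadoop's processing model can be simulated in plain Python with a word count, the canonical MapReduce example; real Hadoop jobs would use the framework's own APIs instead of these hypothetical helper functions.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the per-word counts."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data needs big clusters", "hadoop processes big data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])  # 3
```

In a real cluster, the shuffle step is what moves data between worker nodes; map and reduce tasks run locally on each node's slice of the input.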
Footnotes
1. RC: record columnar files.
2. ORC: optimized RC files.
Metadata
Copyright year: 2019
DOI: https://doi.org/10.1007/978-3-030-23719-6_18