Driving Big Data with Hadoop Tools and Technologies

Detailed bibliography
Published in: Big Data, p. 1
Main authors: Balusamy, Balamurugan; Abirami R, Nandhini; Kadry, Seifedine; Gandomi, Amir H.
Format: Chapter
Language: English
Publication details: John Wiley & Sons, Inc., United States, 2021
Edition: 1
ISBN: 9781119701828, 1119701821
Description
Summary: The core components of Hadoop, namely the Hadoop Distributed File System (HDFS), MapReduce, and Yet Another Resource Negotiator (YARN), are explained. This chapter also examines the features of HDFS, such as its scalability, reliability, and robustness. Apache Hadoop is an open-source framework written in Java that supports the processing of large data sets in a streaming access pattern across clusters in a distributed computing environment. HBase is a column-oriented NoSQL database: a horizontally scalable, open-source distributed database built on top of HDFS. When structured data grows too large for an RDBMS to handle, it is transferred to HDFS through a tool called Sqoop (SQL to Hadoop). The basic difference between Flume and Sqoop is that Sqoop is used to ingest structured data into Hive, HDFS, and HBase, whereas Flume is used to ingest large amounts of streaming data into Hive, HDFS, and HBase.
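To illustrate the Sqoop/Flume distinction the summary draws, here is a minimal sketch of typical invocations. The JDBC URL, credentials, table name, agent name, and config path are hypothetical placeholders, and both commands assume Sqoop and Flume are installed against a running Hadoop cluster:

```shell
# Sqoop: bulk-import a structured RDBMS table into HDFS
# (connection string, user, and table are placeholders)
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/sales/orders

# Flume: continuously stream events into HDFS via a configured agent
# (agent name "a1" and its properties file are hypothetical)
flume-ng agent \
  --name a1 \
  --conf-file /etc/flume/streaming-logs.conf
```

Sqoop runs as a one-shot batch job driven by MapReduce, while a Flume agent is a long-running process whose sources, channels, and sinks are defined in the properties file.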
DOI: 10.1002/9781119701859.ch5