Apache Hadoop Architecture New Exam 100% Verified
Cluster - ANSWER A collection of computers that work together and provide data
storage, data processing, and resource management.
Node - ANSWER A single computer in a Cluster.
Master Node - ANSWER Manages the distribution of work and data to Worker Nodes.
Daemon - ANSWER A program running on a Node; each Apache Hadoop daemon performs a particular function in the Cluster.
What are the 3 Main components of a Cluster? - ANSWER Processing, Resource
Management, and Storage
H.D.F.S. basic concepts - ANSWER 1. A filesystem written in Java
2. Sits on top of the native filesystem
3. Redundant storage of massive amounts of data
H.D.F.S. performs best with - ANSWER A modest number of large files (millions of large files of 100 MB or more, rather than billions of small ones)
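The "small files problem" behind this card can be sketched with shell arithmetic. The figure of roughly 150 bytes of NameNode heap per file/block object is a commonly cited rule of thumb, not an exact number, and the file counts below are hypothetical:

```shell
# Why HDFS prefers a modest number of large files: the NameNode keeps
# every file and block object in memory. ~150 bytes per object is a
# commonly cited rule of thumb (an assumption, not an exact figure).
BYTES_PER_OBJECT=150
# Case A: 100 TB stored as 1,000,000 files of 100 MB (one 128 MB block each)
A_OBJECTS=$(( 1000000 * 2 ))        # one file object + one block object per file
# Case B: the same 100 TB stored as 100,000,000 files of 1 MB
B_OBJECTS=$(( 100000000 * 2 ))
A_HEAP_MB=$(( A_OBJECTS * BYTES_PER_OBJECT / 1024 / 1024 ))
B_HEAP_MB=$(( B_OBJECTS * BYTES_PER_OBJECT / 1024 / 1024 ))
echo "large files: ${A_HEAP_MB} MB of NameNode heap"
echo "small files: ${B_HEAP_MB} MB of NameNode heap"
```

The small-file layout needs about a hundred times more NameNode memory for the same data, which is why HDFS favors fewer, larger files.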
What's not allowed in H.D.F.S.? - ANSWER Files are 'Write Once' [ONLY]; no Random Writes to files are allowed
H.D.F.S. is optimized for - ANSWER Large, streaming reads of files ([not] Random Reads)
How are files stored in H.D.F.S.? - ANSWER 1. Data files are divided into 128 MB Blocks, which are distributed at load time.
2. Each Block is replicated across multiple DataNodes.
3. The NameNode stores Metadata about files and Blocks
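The block math in step 1 can be checked with shell arithmetic. The file size is a hypothetical example; 128 MB blocks and a replication factor of 3 are the HDFS defaults:

```shell
# How a 1 GB file is stored in HDFS: split into 128 MB blocks,
# each block replicated 3 times (the default replication factor).
FILE_MB=1024        # hypothetical 1 GB file
BLOCK_MB=128        # HDFS default block size
REPLICATION=3       # HDFS default replication factor
BLOCKS=$(( (FILE_MB + BLOCK_MB - 1) / BLOCK_MB ))   # ceiling division
RAW_MB=$(( FILE_MB * REPLICATION ))                  # raw storage consumed
echo "$BLOCKS blocks, $RAW_MB MB of raw storage across DataNodes"
```

So a 1 GB file occupies 8 blocks and, with three replicas of each, consumes 3 GB of raw DataNode storage.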
What happens if the NameNode goes down, and how often should the NameNode be run? - ANSWER The Cluster becomes inaccessible; the NameNode needs to be running at all times
H.D.F.S. NameNode Availability: How is High Availability set up? - ANSWER High Availability: 2 NameNodes, Active and Standby
H.D.F.S. NameNode Availability: Small Clusters can use - ANSWER 'Classic Mode'
H.D.F.S. NameNode Availability: Classic Mode - ANSWER - 1 NameNode
- 1 'Helper' Node called the Secondary NameNode (Bookkeeping, [not] Backup)
Copy from the Local Disk to the User's Directory - ANSWER -put
Copy file foo.txt from the Local Disk to the User's Directory - ANSWER $ hdfs dfs -put
foo.txt foo.txt
Get a directory Listing of the User's Home directory via H.D.F.S. - ANSWER hdfs dfs -ls
List the contents of H.D.F.S. Root directory - ANSWER hdfs dfs -ls /
Display the contents of the H.D.F.S. file /User/Hasan/giggles.txt - ANSWER $ hdfs dfs
-cat /User/Hasan/giggles.txt
Copy H.D.F.S. file to the Local Disk - ANSWER -get
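The command cards above can be tied together as one round trip. This is a dry-run sketch: the command lines are built as strings and printed so it runs without a Hadoop cluster; on a real cluster you would run each line directly (file names and the /User/Hasan path come from the cards):

```shell
# Dry-run of the HDFS round trip from the cards above.
PUT="hdfs dfs -put foo.txt foo.txt"             # local disk -> HDFS home directory
LS_HOME="hdfs dfs -ls"                          # list the HDFS home directory
LS_ROOT="hdfs dfs -ls /"                        # list the HDFS root directory
CAT="hdfs dfs -cat /User/Hasan/giggles.txt"     # print an HDFS file's contents
GET="hdfs dfs -get foo.txt foo.txt"             # HDFS -> local disk
for cmd in "$PUT" "$LS_HOME" "$LS_ROOT" "$CAT" "$GET"; do
  echo "$cmd"
done
```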