
High Availability Set up – HDFS/YARN using Quorum

In this blog, I am going to talk about how to configure and manage a high-availability HDFS (CDH 5.12.0) cluster. In earlier releases, the NameNode was a single point of failure (SPOF) in an HDFS cluster. Each cluster had a single NameNode, and if that machine or process became unavailable, the cluster as a whole would be unavailable until the NameNode was either restarted or brought up on a separate machine. The Secondary NameNode did not provide failover capability.

The HA architecture solved this problem of NameNode availability by allowing us to have two NameNodes in an active/passive configuration. So we have two NameNodes running at the same time in a high-availability cluster:

  • Active NameNode
  • Standby/Passive NameNode

We can implement the Active and Standby NameNode configuration in the following two ways:

  • Using Quorum Journal Nodes
  • Shared Storage using NFS

Using the Quorum Journal Manager (QJM) is the preferred method for achieving high availability for HDFS. Read here to learn more about the QJM and NFS methods. In this blog, I'll implement the HA configuration with quorum-based storage; here are the machines and their corresponding roles:


  • NameNode machines – NN1 and NN2, of equivalent hardware and specification.
  • JournalNode machines – the JournalNode daemon is relatively lightweight, so these daemons can reasonably be collocated on machines with other Hadoop daemons, for example the NameNodes, the JobTracker, or the YARN ResourceManager. There must be at least three JournalNode daemons, since edit log modifications must be written to a majority of JournalNodes. So three JNs run on NN1, NN2 and the MGT server.
  • Note that when running with N JournalNodes, the system can tolerate at most (N – 1) / 2 failures and continue to function normally; with N = 3, one JournalNode can fail.
  • The ZKFailoverController (ZKFC) is a ZooKeeper client that also monitors and manages NameNode status. Each NameNode runs a ZKFC as well; it is responsible for periodically checking the health of its NameNode.
  • The ResourceManager runs on the same machines as NN1/NN2.
  • Two DataNodes – DN1 and DN2.

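As a sketch of what this topology looks like in configuration — the nameservice ID `mycluster` and the hostnames nn1, nn2 and mgt are placeholders I'm assuming, not values from the post — the HA-related properties in hdfs-site.xml would be along these lines:

```xml
<!-- Sketch of the QJM HA properties in hdfs-site.xml; nameservice ID
     "mycluster" and hostnames nn1/nn2/mgt are assumed placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2:8020</value>
</property>
<!-- The shared edits URI lists all three JournalNodes (on NN1, NN2, MGT) -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://nn1:8485;nn2:8485;mgt:8485/mycluster</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Lets the ZKFC daemons drive automatic failover -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
</property>
```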
Continue reading → High Availability Set up – HDFS/YARN using Quorum

Set up Hadoop Cluster – Multi-Node

From my previous blog, we learnt how to set up a Hadoop single-node installation. Now, I will show how to set up a Hadoop multi-node cluster. A multi-node cluster in Hadoop contains two or more DataNodes in a distributed Hadoop environment. This is what organizations use in practice to store and analyse their petabytes and exabytes of data.

Here in this blog, we are taking three machines to set up the multi-node cluster – MN and DN1/DN2.

  • Master node (MN) will run the NameNode and ResourceManager daemons.
  • Data nodes DN1 and DN2 will store the actual data and provide processing power to run the jobs. Both hosts will run the DataNode and NodeManager daemons.
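In Hadoop 2.7.3 this role split is recorded in the `slaves` file on MN, which the start scripts read to find the workers. A minimal sketch — written to a scratch path here, since the real location is `$HADOOP_HOME/etc/hadoop/slaves`:

```shell
# The slaves file lists one worker hostname per line; start-dfs.sh and
# start-yarn.sh on MN use it to launch the DataNode/NodeManager daemons
# on DN1 and DN2 over SSH. Writing a scratch copy for illustration.
printf 'dn1\ndn2\n' > /tmp/slaves.example
cat /tmp/slaves.example
```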

Software Required:

  • RHEL 7 – Set up MN and DN1/DN2 with the RHEL 7 operating system – Minimal Install.
  • Hadoop 2.7.3
  • Java 7
  • SSH

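Of these requirements, SSH needs one-time setup: Hadoop's start scripts on MN log in to the workers non-interactively. A sketch, where the key filename is my own choice and dn1/dn2 are the worker hostnames from above:

```shell
# Generate a passphrase-less key pair for the Hadoop user on MN.
# A distinct filename is used so any existing id_rsa is left untouched.
mkdir -p ~/.ssh
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa_hadoop -q
# Then push the public key to each worker (run once the hosts are up):
# ssh-copy-id -i ~/.ssh/id_rsa_hadoop.pub hduser@dn1
# ssh-copy-id -i ~/.ssh/id_rsa_hadoop.pub hduser@dn2
```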
Configure the System

First of all, we have to edit the hosts file in the /etc/ folder on the master node (MN), specifying the IP address of each system followed by its host name.
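For example, with placeholder addresses (the 192.168.56.x values are my assumption; substitute your own network's), the entries look like this — shown via a scratch file, since in practice you append them to /etc/hosts on each node:

```shell
# Example /etc/hosts entries for the three machines; the IP addresses
# are placeholders. In practice, append these lines to /etc/hosts.
cat <<'EOF' > /tmp/hosts.example
192.168.56.101   mn
192.168.56.102   dn1
192.168.56.103   dn2
EOF
cat /tmp/hosts.example
```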

Disable the firewall restrictions so the nodes can reach each other. Continue reading → Set up Hadoop Cluster – Multi-Node

Install Apache Hadoop – Single Node RHEL 7

Hadoop is a Java-based programming framework that supports the processing and storage of extremely large datasets on a cluster of inexpensive machines. It was the first major open-source project in the big-data field and provides high-throughput access to application data.

The main goal of this tutorial is to get a simple Hadoop installation up and running so that you can play around with the software and learn more about it.

Environment: This blog has been tested with the following software versions.

  • RHEL (Red Hat Enterprise Linux 7.4) on VirtualBox 5.2
  • Hadoop 2.7.3
  • Update the /etc/hosts file with the hostname and IP address.

[root@cdhs ~]# cat /etc/hosts

The output should include an entry mapping the VM's IP address to the hostname cdhs.

Dedicated Hadoop system user:

After the VM is set up, please add a non-sudo user dedicated to Hadoop, which will be used to configure Hadoop. The following command will add the user hduser and the group hadoop to the VM. Continue reading → Install Apache Hadoop – Single Node RHEL 7
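As a sketch of the user/group creation the excerpt refers to (run as root; the names hduser and hadoop are from the post, the flags are standard useradd options):

```shell
# Create the hadoop group, then the hduser account with a home
# directory (-m) and hadoop as its primary group (-g). Run as root.
groupadd hadoop
useradd -m -g hadoop hduser
# Set a password for the new user interactively:
# passwd hduser
```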