In this blog, I am going to talk about how to configure and manage a high-availability HDFS (CDH 5.12.0) cluster. In earlier releases, the NameNode was a single point of failure (SPOF) in an HDFS cluster. Each cluster had a single NameNode, and if that machine or process became unavailable, the cluster as a whole would be unavailable until the NameNode was either restarted or brought up on a separate machine. The Secondary NameNode did not provide failover capability.

The HA architecture solves this problem of NameNode availability by allowing us to have two NameNodes in an active/passive configuration. So, in a High Availability cluster, two NameNodes run at the same time:

  • Active NameNode
  • Standby/Passive NameNode.

We can implement the Active and Standby NameNode configuration in the following two ways:

  • Using Quorum Journal Nodes
  • Shared Storage using NFS

Using the Quorum Journal Manager (QJM) is the preferred method for achieving high availability for HDFS. Read here to know more about the QJM and NFS methods. In this blog, I’ll implement the HA configuration for quorum-based storage. Here are the machines and their corresponding names/roles.


  • NameNode machines – NN1 and NN2, with equivalent hardware and specs
  • JournalNode machines – The JournalNode daemon is relatively lightweight, so these daemons can reasonably be co-located on machines with other Hadoop daemons, for example the NameNodes, the JobTracker, or the YARN ResourceManager. There must be at least three JournalNode daemons, since edit log modifications must be written to a majority of JournalNodes. So three JournalNode daemons run here, on NN1, NN2, and the MGT server.
  • Note that when running with N JournalNodes, the system can tolerate at most (N – 1) / 2 failures and continue to function normally.
  • The ZKFailoverController (ZKFC) is a ZooKeeper client that also monitors and manages the NameNode status. Each NameNode also runs a ZKFC, which is responsible for periodically monitoring the health of its NameNode.
  • ResourceManager running on the same NameNode machines, NN1/NN2.
  • Two Data Nodes – DN1 and DN2

Prerequisites:

First of all, we have to edit the hosts file in the /etc/ folder on the NameNode (NN1), specifying the IP address of each system followed by its host name. Each machine needs a static IP address, and all VMs should be able to ping each other.
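For reference, a minimal /etc/hosts sketch could look like the following. The NN1/NN2 addresses match the web UI URLs used later in this post; the MGT and DataNode addresses and the short host names (nn1, nn2, mgt, dn1, dn2) are placeholders for illustration.

  192.168.1.150   nn1
  192.168.1.151   nn2
  192.168.1.152   mgt     # assumed address
  192.168.1.153   dn1     # assumed address
  192.168.1.154   dn2     # assumed address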

All VMs are set up with the RHEL 7 operating system. Disable the firewall restrictions on all VMs.
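On RHEL 7 the firewall can be stopped and disabled with systemd, for example:

  # run on every VM
  systemctl stop firewalld
  systemctl disable firewalld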

Setup Java:

Hadoop is written in Java so we need to set up Java first. Install the Oracle Java Development Kit (JDK) as below on all nodes.
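The exact packages aren't shown here; as a sketch, assuming an Oracle JDK 8 RPM has already been downloaded to each node (the RPM file name below is only an example), the install and JAVA_HOME setup could look like this:

  yum localinstall -y jdk-8uXX-linux-x64.rpm                            # hypothetical file name
  echo 'export JAVA_HOME=/usr/java/default'  > /etc/profile.d/java.sh   # symlink created by the Oracle JDK RPM
  echo 'export PATH=$JAVA_HOME/bin:$PATH'   >> /etc/profile.d/java.sh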

Log out and log back in, and verify the Java version.

Download/install CDH 5.13.X

If you want to create your own YUM repository, download the appropriate repo file, create the repo, distribute the repo file as described under:

http://archive.cloudera.com/cdh5/repo-as-tarball/5.13.0/
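Alternatively, a common shortcut is to point yum directly at Cloudera's hosted CDH 5 repository by dropping a repo file on every node; the exact URL below is an assumption and may differ for your environment:

  wget http://archive.cloudera.com/cdh5/redhat/7/x86_64/cdh/cloudera-cdh5.repo \
       -O /etc/yum.repos.d/cloudera-cdh5.repo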

You can use Hadoop 2.x distribution as well.

Now install the components below on NameNode 1 and 2 (NN1/NN2), and disable the run-level (automatic) start of the components.
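The package list isn't reproduced in the text; as a sketch, on CDH 5 the NameNode hosts would typically get something like the following (these are the standard CDH RPM names, but verify them against your repo):

  yum install -y hadoop-hdfs-namenode hadoop-hdfs-zkfc hadoop-hdfs-journalnode \
                 hadoop-yarn-resourcemanager zookeeper-server hadoop-client
  # keep the daemons from starting automatically at boot
  for svc in hadoop-hdfs-namenode hadoop-hdfs-zkfc hadoop-hdfs-journalnode \
             hadoop-yarn-resourcemanager zookeeper-server; do
    chkconfig "$svc" off
  done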

Similarly install the DataNode and NodeManager rpms on DN1/DN2.
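On the DataNodes, assuming the same CDH package naming, that would be roughly:

  yum install -y hadoop-hdfs-datanode hadoop-yarn-nodemanager hadoop-client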

The management server (MGT) will be installed with the JournalNode, ZooKeeper, and client services.
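And on the MGT server, again as an assumed sketch:

  yum install -y hadoop-hdfs-journalnode zookeeper-server hadoop-client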

Distribute Authentication Key-pairs:

Log in to NN1 as the root user and generate an SSH key. Copy the generated public key to the authorized_keys file on every node.
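A minimal sketch, assuming the short host names from /etc/hosts above:

  ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
  for host in nn1 nn2 mgt dn1 dn2; do
    ssh-copy-id root@$host
  done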

Now test the passwordless connections. Create a file “hosts” with the IP addresses of all nodes.
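For example, with a hosts file containing one IP per line, a quick loop verifies passwordless login to every node:

  for ip in $(cat ~/hosts); do
    ssh root@$ip hostname
  done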

Configuration Details:

To configure HA NameNodes, you must add several configuration options to your hdfs-site.xml configuration file. Choose a logical name for this nameservice, for example “ha-cluster”. Open the core-site.xml file on the active NameNode and add the properties below.

cd /etc/hadoop/conf
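The exact properties aren't reproduced here; a minimal core-site.xml sketch for a nameservice called ha-cluster, assuming the host names used above, would be:

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ha-cluster</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>nn1:2181,nn2:2181,mgt:2181</value>
  </property>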

Open the hdfs-site.xml file and add the DataNode directory path in the dfs.datanode.data.dir property, along with the other HA properties.
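As a sketch of the HA-related entries in hdfs-site.xml (the directory paths and host names below are assumptions; adjust them to your layout):

  <property><name>dfs.nameservices</name><value>ha-cluster</value></property>
  <property><name>dfs.ha.namenodes.ha-cluster</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.ha-cluster.nn1</name><value>nn1:8020</value></property>
  <property><name>dfs.namenode.rpc-address.ha-cluster.nn2</name><value>nn2:8020</value></property>
  <property><name>dfs.namenode.http-address.ha-cluster.nn1</name><value>nn1:50070</value></property>
  <property><name>dfs.namenode.http-address.ha-cluster.nn2</name><value>nn2:50070</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://nn1:8485;nn2:8485;mgt:8485/ha-cluster</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/data/jn</value></property>
  <property><name>dfs.client.failover.proxy.provider.ha-cluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <!-- a fencing method is required for automatic failover; shell(/bin/true) is a common minimal choice with QJM -->
  <property><name>dfs.ha.fencing.methods</name><value>shell(/bin/true)</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>dfs.namenode.name.dir</name><value>file:///data/nn</value></property>
  <property><name>dfs.datanode.data.dir</name><value>file:///data/dn</value></property>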

Edit yarn-site.xml on NN1.
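A sketch of the ResourceManager HA entries in yarn-site.xml (the cluster-id and host names are assumptions):

  <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.cluster-id</name><value>yarn-ha-cluster</value></property>
  <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
  <property><name>yarn.resourcemanager.hostname.rm1</name><value>nn1</value></property>
  <property><name>yarn.resourcemanager.hostname.rm2</name><value>nn2</value></property>
  <property><name>yarn.resourcemanager.zk-address</name><value>nn1:2181,nn2:2181,mgt:2181</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>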

Also edit mapred-site.xml and set YARN as the default framework for MapReduce operations.
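Typically this is a single property:

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>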

Update the slaves file with the DataNode VMs, DN1 and DN2.
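With the assumed host names, the slaves file simply lists:

  dn1
  dn2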

Once all the files are updated on NN1, copy the config files from NN1 to all other nodes.
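One way to push the configuration out, reusing the hosts file created earlier (an assumed approach, not the only one):

  for ip in $(cat ~/hosts); do
    scp /etc/hadoop/conf/* root@$ip:/etc/hadoop/conf/
  done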

Deploying Zookeeper:

In the ZooKeeper conf directory you have a zoo_sample.cfg file; create zoo.cfg from it. Open the zoo.cfg file, add a custom directory path to the dataDir property if you want (I kept the default directory), and add the entries for the remaining nodes. Then copy the ZooKeeper conf file to NN2 and the MGT server. A sketch is shown below.
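Assuming the default CDH conf location and the three ZooKeeper hosts in this layout (NN1, NN2, MGT), the steps and the relevant zoo.cfg entries look roughly like this:

  cd /etc/zookeeper/conf            # location may differ in your install
  cp zoo_sample.cfg zoo.cfg
  # entries kept/added in zoo.cfg:
  #   dataDir=/var/lib/zookeeper
  #   server.1=nn1:2888:3888
  #   server.2=nn2:2888:3888
  #   server.3=mgt:2888:3888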


Now initialize ZooKeeper and start the ZooKeeper service.
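With the CDH zookeeper-server package, the init step also writes the myid file; the id must be unique per host (1 on NN1, 2 on NN2, 3 on MGT in this layout):

  service zookeeper-server init --myid=1    # use --myid=2 on NN2 and --myid=3 on MGT
  service zookeeper-server start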

Now start the JournalNode service on the JournalNode hosts.
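For example, on each of the three JournalNode hosts:

  # run on NN1, NN2, and the MGT server
  service hadoop-hdfs-journalnode start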

Format HDFS:

Format the NameNode on NN1 only if this is a new cluster. If you are converting an existing cluster from non-HA to HA, re-initialize it instead, as I did.
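The two cases look roughly like this (run as the hdfs user on NN1):

  sudo -u hdfs hdfs namenode -format                    # brand-new cluster only
  sudo -u hdfs hdfs namenode -initializeSharedEdits     # converting an existing non-HA cluster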

Start And Monitor HDFS Services:

Now start the NameNode Service on NN1.
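For example:

  service hadoop-hdfs-namenode start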

NN2 now has different metadata, so how do you sync it with NN1? You can either copy the entire metadata directory or use the command below. Then start the NameNode service on NN2.
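The standby can be synced with the standard bootstrap command and then started (run on NN2):

  sudo -u hdfs hdfs namenode -bootstrapStandby
  service hadoop-hdfs-namenode start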

Start the DataNode Services on DN1/DN2.
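On each DataNode:

  service hadoop-hdfs-datanode start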

The next step is to initialize required state in ZooKeeper. You can do so by running the following command from one of the NameNode hosts. This will create a znode in ZooKeeper inside of which the automatic failover system stores its data.
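The standard command for this is (run on NN1 or NN2):

  sudo -u hdfs hdfs zkfc -formatZK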

Check whether ZooKeeper is connected or not.
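A couple of quick checks, assuming the ZooKeeper client tools are installed:

  echo ruok | nc localhost 2181                     # a healthy server replies "imok"
  zookeeper-client -server nn1:2181 ls /hadoop-ha   # the ha-cluster znode should exist after formatZK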

Now start the ZKFC service on both NameNodes. Note: whichever NameNode you start ZKFC on first becomes the active node.
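On each NameNode:

  service hadoop-hdfs-zkfc start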

Web Interface:

Open the browser and explore further.

http://192.168.1.150:50070/dfshealth.html#tab-overview

http://192.168.1.151:50070/dfshealth.html#tab-overview

Check the services from the command line.
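Useful checks from any node with the Hadoop client installed (nn1/nn2 here are the NameNode IDs from hdfs-site.xml):

  sudo -u hdfs hdfs haadmin -getServiceState nn1
  sudo -u hdfs hdfs haadmin -getServiceState nn2
  sudo -u hdfs hdfs dfsadmin -report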

Test HA set up:

Find the process ID and kill the active NameNode on NN1.
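For example (the PID is whatever the grep shows on your system):

  ps -ef | grep -i namenode                          # note the NameNode PID
  kill -9 <namenode_pid>                             # placeholder for the PID found above
  sudo -u hdfs hdfs haadmin -getServiceState nn2     # should now report "active"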

Repeat the same test on NN2 and also check the web URLs.

YARN HA set up:

Now start the ResourceManager on NN1/NN2 and the NodeManager on DN1/DN2.
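For example:

  # on NN1 and NN2
  service hadoop-yarn-resourcemanager start
  # on DN1 and DN2
  service hadoop-yarn-nodemanager start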

Complete the Failover test here.
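The ResourceManager failover can be checked the same way as HDFS, using the rm1/rm2 IDs from yarn-site.xml:

  yarn rmadmin -getServiceState rm1
  yarn rmadmin -getServiceState rm2
  # kill the active ResourceManager process and confirm the other one now reports "active"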

http://192.168.1.150:8088/cluster/cluster

http://192.168.1.151:8088/cluster/cluster

Once you have configured the YARN ResourceManager for HA, if the active ResourceManager goes down or is removed from the list, one of the standby ResourceManagers becomes active and resumes the ResourceManager responsibilities. This way, jobs continue to run and complete successfully.

Please share and like this blog if it was helpful for you.

Thanks

Mandy
