Commissioning a node means adding a new node to the cluster running your Hadoop framework; decommissioning means removing a node from the cluster. This is a very useful feature for handling node failures during operation without having to stop the entire cluster.
You can’t decommission a DataNode (or a host running a DataNode) if the number of DataNodes equals the replication factor. If you attempt to decommission a DataNode in that situation, the decommission process will never complete; you have to abort it and lower the replication factor first.
In my case, I have two DataNodes, and decommissioning one will leave only one. So before starting the decommission process, change the replication factor to 1.
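Before changing anything, it helps to confirm how many DataNodes are live and what the current replication factor is. A quick check with the standard HDFS admin tools (these are operational commands that assume a running cluster and HDFS admin privileges):

```shell
# Show cluster status; the report includes a "Live datanodes (N):" line.
hdfs dfsadmin -report | grep "Live datanodes"

# Show the configured default replication factor.
hdfs getconf -confKey dfs.replication
```

If the live DataNode count equals `dfs.replication`, lower the replication factor before decommissioning, as described above.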
The same can be done via the command line (the `-w` flag waits until replication completes, `-R` applies it recursively):

hdfs dfs -setrep -R -w 1 /
Now restart the stale services.
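For reference, the same decommission can also be driven outside Cloudera Manager using HDFS's exclude-file mechanism. This is a sketch only: the exclude-file path and the hostname below are placeholders for your environment.

```shell
# 1. In hdfs-site.xml, point the NameNode at an exclude file
#    (path is a placeholder):
#    <property>
#      <name>dfs.hosts.exclude</name>
#      <value>/etc/hadoop/conf/dfs.exclude</value>
#    </property>

# 2. Add the host to decommission (placeholder hostname):
echo "datanode2.example.com" >> /etc/hadoop/conf/dfs.exclude

# 3. Tell the NameNode to re-read its host lists; decommissioning begins.
hdfs dfsadmin -refreshNodes

# 4. Monitor progress; the node shows as "Decommissioned" when finished.
hdfs dfsadmin -report
```

To recommission, remove the hostname from the exclude file and run `hdfs dfsadmin -refreshNodes` again.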