Oracle Big Data Cloud Service – Introduction

Big data is a topic that everyone seems to be talking about, but many of us still wonder: what exactly is big data, and which technology provider should I use? I have written a couple of blogs on Apache Hadoop, the Cloudera distribution, and the AWS EMR service. In this blog, I'll go through Oracle Big Data Cloud Service and what is included in the service.

What is Oracle Big Data Cloud Service?

Oracle Big Data Cloud Service is an automated cloud service for big data processing. It is optimized to run a range of workloads, from Hadoop-only workloads (ETL, Spark, Hive) to interactive SQL queries using SQL-on-Hadoop tools. Here are some key features of Oracle Big Data Cloud Service:

  • Create a Cloudera-certified cluster quickly.
  • Cluster setup is always fault tolerant, with HA Hadoop and security infrastructure.
  • Fully tested Hadoop upgrades (version skipping supported).
  • Maximum versatility: with the Cloudera Distribution including Apache Hadoop (CDH) Enterprise Data Hub, you can use Hadoop, Hive, Impala, Spark, and more. You can also install and operate third-party tools.

Continue reading → Oracle Big Data Cloud Service – Introduction

HDFS Command line – Manage files and directories.

In my previous blog, we configured Hadoop in single-node and cluster setups. Now let's create files and directories on the Hadoop Distributed File System (HDFS). You can see the full list here.

When I started with the HDFS commands, I got confused by three different command syntaxes. All three commands appear to be the same but have some differences, as explained below.

  • hadoop fs {args}

FS relates to a generic file system which can point to any file system, such as the local file system or HDFS. So this can be used when you are dealing with different file systems such as the local FS, (S)FTP, S3, and others.

  • hadoop dfs {args}

dfs is very specific to HDFS and works only for operations related to HDFS. It has been deprecated, and we should use hdfs dfs instead.

  • hdfs dfs {args}

Same as the second: it works for all operations related to HDFS and is the recommended command in place of hadoop dfs.

Continue reading → HDFS Command line – Manage files and directories.
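The three syntaxes can be compared directly from a terminal. A minimal sketch, assuming a working Hadoop installation on the PATH and a hypothetical /user/demo directory:

```shell
# Generic FS shell: resolves the scheme in the URI (file://, hdfs://, s3a://, ...)
hadoop fs -ls file:///tmp
hadoop fs -mkdir -p /user/demo

# Deprecated HDFS-only form: prints a deprecation warning, then delegates to 'hdfs dfs'
hadoop dfs -ls /user/demo

# Recommended HDFS-specific form
hdfs dfs -put localfile.txt /user/demo/
hdfs dfs -ls /user/demo
```

With no scheme in the path, hadoop fs falls back to the default file system configured in fs.defaultFS, which is why all three commands appear interchangeable on a typical cluster.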

Analyze Big data with EMR

Amazon Elastic MapReduce (EMR) is a fully managed cluster platform that processes and analyzes large amounts of data. When you run a large amount of data, you eventually run into processing problems. By using a Hadoop cluster, EMR helps reduce large processing problems by splitting big data sets into smaller jobs and distributing them across many compute nodes. EMR does this with big data frameworks and open-source projects, including:

  • Apache Hadoop, Spark, HBase
  • Presto
  • Zeppelin, Ganglia, Pig, Hive, etc.

Amazon EMR is mainly used for log processing and analysis, ETL processing, clickstream analysis, and machine learning.

EMR Architecture:

The Amazon EMR architecture contains the following three types of nodes:

  • Master node:
    • EMR has a single master node; there is no standby master node to fail over to.
    • Manages the resources of the cluster.
    • Coordinates the distribution and parallel execution of MapReduce executables.
    • Tracks and directs HDFS.
    • Monitors the health of the core and task nodes.
    • Also runs the ResourceManager, which is responsible for scheduling resources.
  • Core nodes:
    • Core nodes are slave nodes and run tasks as directed by the master node.
    • Core nodes hold data as part of HDFS or EMRFS, so the data daemons run on core nodes and store the data.
    • Core nodes also run the NodeManager, which takes instructions from the ResourceManager on how to manage resources.
    • The ApplicationMaster is a task that negotiates resources with the ResourceManager and works with the NodeManagers to execute and monitor application containers.
  • Task nodes:
    • Task nodes are also controlled by the master node and are optional.
    • They provide extra CPU and memory capacity to the cluster.
    • They can be added to or removed from a running cluster at any time.
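The three node roles above map directly onto instance groups when launching a cluster with the AWS CLI. A minimal sketch; the cluster name, key pair, release label, and instance types are illustrative assumptions, not values from any real deployment:

```shell
# Launch an EMR cluster with one master, two core, and two optional task nodes.
# "demo-cluster" and "my-key-pair" are placeholders.
aws emr create-cluster \
  --name "demo-cluster" \
  --release-label emr-5.30.0 \
  --applications Name=Hadoop Name=Spark Name=Hive \
  --ec2-attributes KeyName=my-key-pair \
  --use-default-roles \
  --instance-groups \
    InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m5.xlarge \
    InstanceGroupType=CORE,InstanceCount=2,InstanceType=m5.xlarge \
    InstanceGroupType=TASK,InstanceCount=2,InstanceType=m5.xlarge
```

Because task nodes hold no HDFS data, the TASK instance group is the natural place to scale capacity up and down on a running cluster.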

Continue reading → Analyze Big data with EMR

Configure HA – HiveMetastore and Load Balancing for HiveServer2

Apache Hive is a data warehouse software project built on top of Apache Hadoop, providing data summarization, query, and analysis. Hive gives an SQL-like interface to query data stored in various databases and file systems that integrate with Hadoop.

Configuring High Availability for Hive requires the following components to be made fault tolerant:

  • Hive MetaStore – RDBMS (MySQL)
  • ZooKeeper
  • Hive MetaStore Server
  • HiveServer2

Set up the MySQL database:

First of all, set up the Hive metastore in a MySQL database. Here are the steps:

Now log in to the MySQL database, create the hive database and user, and grant the privileges.
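The login, create, and grant steps can be sketched as follows. The database name, user, and password below are placeholders, not values from the original post:

```shell
# Log in as the MySQL root user and create the metastore database and hive user.
# 'metastore', 'hive', and 'hive_password' are illustrative placeholders.
mysql -u root -p <<'SQL'
CREATE DATABASE metastore;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive_password';
GRANT ALL PRIVILEGES ON metastore.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL
```

For HA, the MySQL instance itself must also be made fault tolerant (for example with replication), since a single metastore database remains a single point of failure.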

Install Hive:

Add the service to the cluster through Cloudera Manager.

Continue reading → Configure HA – HiveMetastore and Load Balancing for HiveServer2

Create/Restore a snapshot of an HDFS directory

In this tutorial, we focus on HDFS snapshots. Common use cases of HDFS snapshots include backups and protection against user errors.

Create a snapshot of HDFS directory:

Snapshots must be enabled on an HDFS directory before any snapshots of it can be created. The steps are:

  • From the Clusters tab, select the HDFS service.
  • Go to the File Browser tab and select the directory.


  • Verify the snapshottable path and click Enable Snapshots.

Continue reading → Create/Restore a snapshot of an HDFS directory
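The same enable/create/restore flow is also available from the command line. A sketch assuming a hypothetical directory /user/demo and HDFS superuser privileges:

```shell
# Allow snapshots on the directory (marks it snapshottable)
hdfs dfsadmin -allowSnapshot /user/demo

# Create a named snapshot; it appears under the hidden .snapshot directory
hdfs dfs -createSnapshot /user/demo snap1

# List the snapshot contents, then restore a file by copying it back out
hdfs dfs -ls /user/demo/.snapshot/snap1
hdfs dfs -cp /user/demo/.snapshot/snap1/important.txt /user/demo/
```

Snapshots are read-only and record only the metadata and block list at creation time, which is why creating one is fast and cheap regardless of directory size.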

Decommission/Recommission – DataNode in Cloudera

Commissioning nodes stands for adding new nodes to the cluster that runs your Hadoop framework. In contrast, decommissioning nodes stands for removing nodes from your cluster. This is a very useful feature for handling node failure during the operation of a Hadoop cluster without stopping the entire cluster.


You can't decommission a DataNode (or a host with a DataNode) if the number of DataNodes equals the replication factor. If you attempt to decommission a DataNode in such a situation, the decommission process will not complete; you have to abort the process and change the replication factor first.


In my case, I have two DataNodes, and decommissioning one will leave only one DataNode. So before the decommission process, change the replication factor to 1.

The same can be done via the command line.
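A sketch of the command-line route, assuming shell access to a cluster node (the path and value are illustrative):

```shell
# Lower the replication factor to 1 for all existing files under /
# (-w waits until re-replication finishes before returning)
hdfs dfs -setrep -w 1 /

# In vanilla Hadoop (outside Cloudera Manager), decommissioning is done by
# adding the host to the file referenced by dfs.hosts.exclude and refreshing:
hdfs dfsadmin -refreshNodes
```

Note that setrep only changes existing files; new files still use the dfs.replication value from the configuration, which must be lowered separately.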

Now restart the stale services.

Continue reading → Decommission/Recommission – DataNode in Cloudera