
Hurdles to Your First Hadoop Cluster


Yesterday we were setting up our first Hadoop cluster. Although there is plenty of online documentation on the subject, we still ran into a few challenges. In this post I describe the problems we faced and how we solved them.

Passwordless login from NameNode to DataNode and vice versa:
Setting up passwordless login from the NameNode to the DataNodes was the easy part; we just had to follow the steps described in various tutorials:

ssh-keygen -t rsa
scp .ssh/id_rsa.pub nsinfra@datanode1:~nsinfra/.ssh/authorized_keys
scp .ssh/id_rsa.pub nsinfra@datanode2:~nsinfra/.ssh/authorized_keys

We executed the three commands above and set 700 permissions on both .ssh and authorized_keys on the NameNode before copying the key to the DataNodes. After that we were able to ssh from the NameNode to the DataNodes without a password.
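For reference, the permission commands looked something like this (run on each machine holding the files; 600 is the more conventional mode for authorized_keys, but 700 also satisfies sshd's requirement that the file not be group- or world-writable):

chmod 700 ~/.ssh
chmod 700 ~/.ssh/authorized_keys

On systems that ship it, ssh-copy-id nsinfra@datanode1 performs the key copy and sets sane permissions in a single step.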
But we struggled with the reverse direction. We were following one of the Hadoop cluster tutorials from the net, which specified the commands below:

ssh-keygen -t rsa
scp .ssh/id_rsa.pub nsinfra@datanode1:~nsinfra/.ssh/authorized_keys2

Since we had three DataNodes in the cluster, and looking at the command structure above, we assumed we needed three differently numbered authorized-keys files, so we executed the following:

ssh-keygen -t rsa
scp .ssh/id_rsa.pub nsinfra@datanode1:~nsinfra/.ssh/authorized_keys1
scp .ssh/id_rsa.pub nsinfra@datanode2:~nsinfra/.ssh/authorized_keys2
scp .ssh/id_rsa.pub nsinfra@datanode3:~nsinfra/.ssh/authorized_keys3

As before, we assigned 700 permissions to .ssh and the authorized_keysX files.

To our surprise, only the second one worked: datanode1 and datanode3 still could not be SSHed into without a password. Digging deeper into SSH documentation, we realized the issue. We had taken the authorized_keysX name for granted and assumed it could be numbered freely, but authorized_keys2 is the only numbered variant that sshd actually reads; creating authorized_keys1 and authorized_keys3 achieves nothing. So how do we set up all three machines? Since authorized_keys2 is honored on every machine, appending the contents of the misnamed files into authorized_keys2 on the affected DataNodes resolved the problem:

cat authorized_keys1 >> authorized_keys2    # run in ~/.ssh on datanode1
cat authorized_keys3 >> authorized_keys2    # run in ~/.ssh on datanode3
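If you want to verify which filenames sshd actually consults on a given node, check the AuthorizedKeysFile setting in its configuration (the path below is the usual Linux location):

grep -i authorizedkeysfile /etc/ssh/sshd_config

If the directive is absent or commented out, most OpenSSH builds of that era fell back to .ssh/authorized_keys plus .ssh/authorized_keys2, which is why only the file named authorized_keys2 worked.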

java.io.IOException: Incompatible namespaceIDs
After successfully setting up passwordless SSH between the nodes, we started the cluster. The NameNode came up correctly, but the DataNodes did not: checking with the jps utility showed that no DataNode process was running on them.
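jps lists the JVM processes of the current user, so it is a quick way to see which Hadoop daemons are up. On a healthy DataNode of that generation you would expect output along these lines (PIDs are illustrative):

$ jps
4721 DataNode
4853 TaskTracker
5010 Jps

In our case the DataNode line was missing. Looking further into the DataNode logs, we found the following exception: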

... ERROR org.apache.hadoop.dfs.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/dfs/data: ...
        at org.apache.hadoop.dfs.DataStorage.doTransition(DataStorage.java:281)
        at org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:121)
        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:230)
        at org.apache.hadoop.dfs.DataNode.&lt;init&gt;(DataNode.java:199)
        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:1202)
        at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1146)
        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:1167)
        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:1326)

Some googling showed that this is a common problem, tracked as HDFS-107 (formerly known as HADOOP-1212). Two solutions are commonly suggested by experts:

QuickFix#1: Cleanup and restart

  1. Stop the cluster
  2. Delete the data directory on the problematic DataNode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed the tutorial we mentioned, the relevant directory is /app/hadoop/tmp/dfs/data
  3. Reformat the NameNode (NOTE: all HDFS data is lost during this process!)
  4. Restart the cluster (see the sketch below)
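In shell terms, the quick fix amounts to something like the following sketch; the path assumes the tutorial layout mentioned above, so substitute your own dfs.data.dir:

bin/stop-all.sh                    # on the master
rm -rf /app/hadoop/tmp/dfs/data    # on each problematic DataNode
bin/hadoop namenode -format        # on the NameNode; destroys all HDFS data!
bin/start-all.sh                   # on the master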
If deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be OK during initial setup/testing), you can give the second approach a try.

Fix#2: Updating namespaceID of problematic DataNodes
This workaround is “minimally invasive” as you only have to edit one file on the problematic DataNodes:
  1. Stop the DataNode
  2. Edit the value of namespaceID in &lt;dfs.data.dir&gt;/current/VERSION to match the namespaceID of the current NameNode
  3. Restart the DataNode (see the sketch below)
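A sketch of this fix, again assuming the tutorial's directory layout (substitute your own dfs.name.dir and dfs.data.dir):

# on the NameNode: note the current namespaceID
grep namespaceID /app/hadoop/tmp/dfs/name/current/VERSION
# on each problematic DataNode:
bin/hadoop-daemon.sh stop datanode
vi /app/hadoop/tmp/dfs/data/current/VERSION    # set namespaceID to the NameNode's value
bin/hadoop-daemon.sh start datanode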
We went with QuickFix#1 and got everything set up correctly, since we didn't have any data yet.



Published at DZone with permission of Abhishek Jain, DZone MVB.

