08 Mar / 2012

Getting DNS right for Hadoop / HBase clusters

Hadoop and HBase (especially HBase) are very picky about DNS
entries.  When setting up a Hadoop cluster, one doesn’t
always have access to a DNS server.  So here is a ‘poor
developer’s’ guide to getting DNS right.

Following these simple steps can help you avoid a few thorny issues down the
line.

  • set the hostname
  • verify hostname –> IP address resolution is working (DNS
    resolution)
  • verify IP address –> hostname resolution is working (reverse
    DNS)
  • run a DNS verification tool


1) Hostname

I like to set these to FULLY QUALIFIED NAMES.

so ‘hadoop1.lab.mycompany.com’ is good

just ‘hadoop1’ is not.

on CENTOS:

set this in ‘/etc/sysconfig/network’

HOSTNAME=hadoop1.lab.mycompany.com

on UBUNTU:

set this in ‘/etc/hostname’

hadoop1.lab.mycompany.com

just reboot the host for the hostname setting to take effect (to be safe)

Do this on every node.
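
A quick way to verify that the setting took effect after the reboot (‘hostname -f’ is a standard Linux command; the expected output assumes the example name above):

# should print the fully qualified name, e.g. hadoop1.lab.mycompany.com
hostname -f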

2) DNS entries when you don’t have a DNS server

So you don’t want to mess around with a DNS server (or can’t)?  No
worries.  We can use the ‘/etc/hosts’ file to make our own
tiny DNS for our hadoop cluster.

file : /etc/hosts

add the following *AFTER* the entries already present in /etc/hosts.

### hadoop cluster
# format :
# ip_address    fully_qualified_hostname    alias

10.1.1.101    hadoop1.lab.mycompany.com    hadoop1
10.1.1.102    hadoop2.lab.mycompany.com    hadoop2
# and so on….

A few things to note:

  1. the content of this file has to be distributed to all machines
    across the cluster.  DO NOT copy the whole file onto target
    machines; the hadoop section needs to be APPENDED to /etc/hosts
    (see below for a quick script to do it)
  2. the first entry is the IP address (usually an internal IP address)
  3. the second entry is the FULLY QUALIFIED HOSTNAME.  This
    makes sure reverse DNS lookup picks up the correct hostname
  4. the third entry is a shorthand alias; it saves me some
    typing, so I can just type ‘ssh hadoop1’ rather
    than ‘ssh hadoop1.lab.mycompany.com’

One common mistake here is swapping the host alias and the fully
qualified hostname.

the following isn’t correct:

10.1.1.101    hadoop1    hadoop1.lab.mycompany.com

aliases should follow the fully qualified hostname.

The hadoop cluster section of the /etc/hosts file has to be distributed
to all cluster nodes.
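
Before distributing anything, it is worth sanity-checking the entries on one node.  ‘getent’ is a standard Linux command that reads /etc/hosts the same way the system resolver does (the expected output assumes the example entries above):

# forward lookup: hostname –> IP
# should print: 10.1.1.101   hadoop1.lab.mycompany.com   hadoop1
getent hosts hadoop1.lab.mycompany.com

# reverse lookup: IP –> hostname
# should print the same line, with the fully qualified name first
getent hosts 10.1.1.101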

How to distribute the DNS entries across the cluster?

There are configuration management systems (CMS) like Chef and Puppet that make distributing
config files across a cluster easy.   For a very large cluster,
using a CMS is the recommended choice.

Here is a quick way to distribute the files:

If you have password-less SSH login set up between the master and slaves,
the following will work:

1) back up the existing hosts file (do this only ONCE!)

run the following script with ROOT privileges:
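
something like the following will do (this assumes password-less SSH as root, and a file ‘cluster_nodes’ listing one node per line; the script name and file name here are just examples):

#!/bin/bash
# backup_hosts.sh : back up /etc/hosts on every node, ONCE
# 'cp -n' will not overwrite an existing backup, so accidental re-runs are safe
for node in $(cat cluster_nodes); do
  echo "backing up /etc/hosts on $node"
  ssh "root@$node" 'cp -n /etc/hosts /etc/hosts.backup'
done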

2) save the hadoop-specific DNS entries into a file

say ‘hadoop_hosts’ is our file, with the following content:

### hadoop cluster
10.1.1.101     hadoop1.lab.mycompany.com    hadoop1
10.1.1.102     hadoop2.lab.mycompany.com    hadoop2
# and so on….

3) run the following script; it will copy the custom hosts file to each node and append it to the existing /etc/hosts file:
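
again, a minimal sketch (same assumptions as the backup script above):

#!/bin/bash
# push_hosts.sh : copy hadoop_hosts to each node and append it to /etc/hosts
# NOTE: running this twice appends duplicate entries; restore from
# /etc/hosts.backup first if you need to re-run
for node in $(cat cluster_nodes); do
  echo "updating /etc/hosts on $node"
  scp hadoop_hosts "root@$node:/tmp/hadoop_hosts"
  ssh "root@$node" 'cat /tmp/hadoop_hosts >> /etc/hosts'
done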

Checking DNS across the cluster

Here is a simple Java utility I wrote to verify that DNS is working correctly on ALL cluster machines.

The tool is called HADOOP-DNS-CHECKER, and it is on GitHub.

Here are some features:

  • It is written in Java, so it will resolve hostnames just like Hadoop / HBase would (or at least close enough)
  • It is pure Java with no third-party libraries, so it is very easy to compile and run.  If you are running Hadoop, you already have a JDK installed anyway
  • it does both IP lookup and reverse DNS lookup
  • it will also check that the machine’s own hostname resolves correctly
  • it can run on a single machine
  • it can run on machines across the cluster (as long as password-less ssh is enabled)

To run this, say from hadoop master:

  • get the code (using git: git clone git@github.com:sujee/hadoop-dns-checker.git)
  • compile: ./compile.sh ; it should create a jar file ‘a.jar’
  • create a hosts file (‘my_hosts’)  containing all machines in your hadoop cluster:
    hadoop1.domain.com
    hadoop2.domain.com
    hadoop3.domain.com
  • first run this in single-machine mode:
    ./run.sh my_hosts
    here is a sample output:

    ==== Running on : c2107.pcs.hds.com/172.17.34.99 =====
    # self check…
    — host : c2107.pcs.hds.com
    host lookup : success (172.17.34.99)
    reverse lookup : success (c2107.pcs.hds.com)
    is reachable : yes
    # end self check

    — host : c2107.pcs.hds.com
    host lookup : success (172.17.34.99)
    reverse lookup : success (c2107.pcs.hds.com)
    is reachable : yes

    — host : c2108.pcs.hds.com
    host lookup : success (172.17.34.100)
    reverse lookup : success (c2108.pcs.hds.com)
    is reachable : yes

  • great.  Now we can run this on the cluster.  It will log in to each machine listed in the hosts file and run this script:
    ./run-on-cluster.sh my_hosts

If any error is encountered, it will print out ‘*** FAIL ***’, so it is easy to spot errors.
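
On a large cluster the output gets long; piping it through grep makes the failures stand out:

./run-on-cluster.sh my_hosts | grep FAIL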

Hope you find this useful!  Please leave your comments below.

Sujee Maniyam
Sujee is founder and principal at Elephant Scale, where he provides consulting and training on Big Data technologies.

4 Comments:


  • By Mohammad Tariq 06 Aug 2012

    Great post Sujee… quite often people struggle with DNS resolution problems as soon as they start their Hadoop journey… this would be very helpful

  • By Paul Baclace 16 Aug 2012

    Thanks for the full treatment of this topic, Sujee. I thought I would not need this, but it seems to be an issue on HP Cloud at this time.

    Do the names really need to be fully qualified? In the spirit of foo.local, I recently started using foo.wan and foo.vpn for some setups.

    • By Sujee Maniyam 16 Aug 2012

      Paul
      no need for FULLY qualified domains. Just need to have ‘a domain’.

      I’ve used hostnames like ‘hadoop1.cluster’; as long as I had a matching entry in /etc/hosts, things were fine.
      Amazon EC2 internal IPs have the same format: ‘ip-1-2-3-4.internal’

  • By Akmal 04 Mar 2015

    Thanks man, you’ve saved my day! Great explanation!
