
How to Set Up a Multi-Node Hadoop Cluster on Amazon EC2, Part 1

Learn how to set up a four-node Hadoop cluster using AWS EC2, PuTTY, PuTTYgen, and WinSCP.

By Hardik Pandya · Jan. 23, 14 · Tutorial

After spending some time playing around on a single-node pseudo-distributed cluster, it's time to get into real-world Hadoop. It's important to note that there are multiple ways to achieve this depending on what works best for you; I am going to cover how to set up a multi-node Hadoop cluster on Amazon EC2. We are going to set up a four-node Hadoop cluster as below.

  • NameNode (master)
  • SecondaryNameNode
  • DataNode (slave 1)
  • DataNode (slave 2)

Here's what you will need:

  1. An Amazon AWS account
  2. PuTTY Windows client (to connect to the Amazon EC2 instances)
  3. PuTTYgen (to generate a private key, which will be used in PuTTY to connect to the EC2 instances)
  4. WinSCP (for secure copy)

This will be a two-part series.

In Part 1, I will cover the infrastructure side:

  1. Setting up Amazon EC2 instances
  2. Setting up client access to the Amazon instances (using PuTTY)
  3. Setting up WinSCP access to the EC2 instances

In Part 2, I will cover the Hadoop multi-node cluster installation:

  1. Hadoop multi-node installation and setup

1. Setting Up Amazon EC2 Instances

With a four-node cluster and a minimum volume size of 8 GB, expect an average charge of about $2 per day with all four instances running. You can stop the instances at any time to avoid the charge, but you will lose the public IP and hostname, and restarting the instances will create new ones. You can also terminate your Amazon EC2 instances at any time; by default this will delete your instances upon termination, so just be careful what you are doing.

1.1 Get an Amazon AWS Account

If you do not already have an account, please create a new one; I already have an AWS account and am going to skip the sign-up process. Amazon EC2 comes with eligible free-tier instances.

1.2 Launch Instance

Once you have signed up for an Amazon account, log in to Amazon Web Services, click on My Account, and navigate to the Amazon EC2 Console.

[Screenshot: Amazon EC2 Console, Launch Instance]

1.3 Select AMI

I am picking the Ubuntu Server 12.04.3 64-bit OS.

[Screenshot: Step 1, Choose an AMI]

1.4 Select Instance Type

Select the micro instance.

[Screenshot: Step 2, Choose an Instance Type]

1.5 Configure the Number of Instances

As mentioned, we are setting up a four-node Hadoop cluster, so please enter 4 as the number of instances. Please check the Amazon EC2 free-tier requirements; you may set up a three-node cluster with less than 30 GB of storage to avoid any charges. In a production environment you would want the SecondaryNameNode on a separate machine.

[Screenshot: Step 3, Configure Instance Details]

1.6 Add Storage

The minimum volume size is 8 GB.

[Screenshot: Step 4, Add Storage]

1.7 Instance Description

Give your instances a name and description.

[Screenshot: Step 5, Instance Description]

1.8 Define a Security Group

Create a new security group; later on we are going to modify it with additional security rules.

[Screenshot: Step 6, Configure Security Group]

1.9 Launch Instance and Create a Key Pair

Review and launch the instance.

Amazon EC2 uses public-key cryptography to encrypt and decrypt login information: a public key is used to encrypt a piece of data, such as a password, and the recipient then uses the private key to decrypt it. The public and private keys are known as a key pair.

Create a new key pair, name it "hadoopec2cluster", and download the key pair (.pem) file to your local machine. Then click Launch Instance.

[Screenshot: Create Key Pair, hadoopec2cluster]
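If you prefer a command line over the console, the same launch can be scripted with the AWS CLI. The following is only a rough sketch, assuming the AWS CLI is installed and configured with your credentials; the AMI ID is a placeholder you would replace with an Ubuntu 12.04 64-bit AMI for your region.

# Create the key pair and save the private key locally
aws ec2 create-key-pair --key-name hadoopec2cluster \
    --query 'KeyMaterial' --output text > hadoopec2cluster.pem

# Launch four micro instances with 8 GB root volumes
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --count 4 \
    --instance-type t1.micro \
    --key-name hadoopec2cluster \
    --security-groups hadoopec2securitygroup \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":8}}]'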

1.10 Launching Instances

Once you click "Launch Instance", the four instances should be launched in the "pending" state.

[Screenshot: Launching Instances]

Once they are in the "running" state, we are going to rename the instances as below (a command-line alternative is sketched after the screenshot).

  1. HadoopNameNode (master)
  2. HadoopSecondaryNameNode
  3. HadoopSlave1 (a DataNode will reside here)
  4. HadoopSlave2 (a DataNode will reside here)

[Screenshot: Running Instances]
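Renaming in the console just edits each instance's Name tag. The same can be done with the AWS CLI; the instance IDs below are placeholders for the IDs of your own instances.

# Set the Name tag on each instance (repeat for the two slaves)
aws ec2 create-tags --resources i-11111111 --tags Key=Name,Value=HadoopNameNode
aws ec2 create-tags --resources i-22222222 --tags Key=Name,Value=HadoopSecondaryNameNode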

Please note down the instance ID, public DNS/URL (e.g., ec2-54-209-221-112.compute-1.amazonaws.com), and public IP of each instance for your reference; we will need them later on to connect from the PuTTY client. Also notice that we are using the "hadoopec2securitygroup" security group.

[Screenshot: Instance ID, Public DNS, and Public IP]

You can use an existing security group or create a new one. When you create a group with the default options, it adds a rule for SSH on port 22. In order to have TCP and ICMP access, we need to add two additional security rules: add 'All TCP', 'All ICMP', and 'SSH (22)' under the inbound rules of "hadoopec2securitygroup". This will allow ping, SSH, and similar commands among the servers and from any other machine on the internet. Make sure to click "Apply Rule Changes" to save your changes.

These protocols and ports are also required to enable communication among the cluster servers. As this is a test setup, we are allowing access from anywhere for TCP, ICMP, and SSH and not bothering with the details of individual server ports and security.

[Screenshot: Security Group Rules]
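For reference, roughly equivalent inbound rules can be added with the AWS CLI. This sketch keeps the same "allow everything, test setup only" assumption and should be tightened for anything longer-lived.

# Open all TCP, all ICMP, and SSH to the world (test setup only)
aws ec2 authorize-security-group-ingress --group-name hadoopec2securitygroup --protocol tcp --port 0-65535 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name hadoopec2securitygroup --protocol icmp --port -1 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name hadoopec2securitygroup --protocol tcp --port 22 --cidr 0.0.0.0/0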

2. Setting Up Client Access to Amazon Instances

Now, let's make sure we can connect to all four instances; for that we are going to use the PuTTY client. We will also set up password-less SSH access among the servers to set up the cluster; this allows remote access from the master server to the slave servers, so the master can remotely start the DataNode and TaskTracker services on the slaves.

We are going to use the downloaded hadoopec2cluster.pem file to generate the private key (.ppk). In order to generate the private key we need the PuTTYgen client. You can download PuTTY, PuTTYgen, and various other utilities in a zip from here.

2.1 Generating the Private Key

Let's launch the PuTTYgen client and import the key pair we created during the launch instance step: "hadoopec2cluster.pem".

Navigate to Conversions and click "Import key".

[Screenshot: PuTTYgen, Import Key]

[Screenshot: PuTTYgen, Load Private Key]

Once you import the key, you can enter a passphrase to protect your private key, or leave the passphrase fields blank to use the private key without one. The passphrase protects the private key from unauthorized access to the servers from your machine.

Any access to a server using a passphrase-protected private key will require the user to enter the passphrase before the key can be used to reach the AWS EC2 server.

2.2 Save the Private Key

Now save the private key by clicking "Save private key", and click "Yes" when prompted, as we are going to leave the passphrase empty.

[Screenshot: PuTTYgen, Save Private Key]

Save the .ppk file and give it a meaningful name.

[Screenshot: Save the .ppk File]
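If you are on Linux or macOS with the putty-tools package installed, the same conversion can be done from the command line instead of the PuTTYgen GUI; this is a sketch, assuming the command-line puttygen is available on your PATH.

# Convert the downloaded .pem key to PuTTY's .ppk format (no passphrase)
puttygen hadoopec2cluster.pem -O private -o hadoopec2cluster.ppk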

Now we are ready to connect to our Amazon instances for the first time.

2.3 Connect to the Amazon Instance

Let's connect to HadoopNameNode first. Launch the PuTTY client, grab the public URL, and import the .ppk private key that we just created for password-less SSH access. As per the Amazon documentation, the username for Ubuntu machines is "ubuntu".

2.3.1 Provide the Private Key for Authentication

[Screenshot: PuTTY, Private Key for Authentication]

2.3.2 Hostname, Port, and Connection Type

Enter the hostname and port, then click "Open" to launch the PuTTY session.

[Screenshot: PuTTY, Host Name and Port]

When you launch the session for the first time, you will see the message below; click "Yes".

[Screenshot: Connect to HadoopNameNode, first-time security alert]

You will then be prompted for the username; enter "ubuntu". If everything goes well, you will be presented with a welcome message ending with a Unix shell.

[Screenshot: Connected to HadoopNameNode]

If there is a problem with your key, you may receive the error message below.

[Screenshot: PuTTY Error Message]

Similarly, connect to the remaining three machines (HadoopSecondaryNameNode, HadoopSlave1, and HadoopSlave2) to make sure you can connect successfully.

[Screenshot: Four Connected Amazon Instances]
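If you happen to be connecting from Linux or macOS rather than PuTTY, plain OpenSSH works with the original .pem file. A sketch, using the example public DNS name noted earlier:

# Restrict the key's permissions, then connect as the "ubuntu" user
chmod 400 hadoopec2cluster.pem
ssh -i hadoopec2cluster.pem ubuntu@ec2-54-209-221-112.compute-1.amazonaws.com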

2.4 Enable Public Access

Issue the ifconfig command and note down the IP address. Next, we are going to update the hostname with the EC2 public URL, and finally we are going to update the /etc/hosts file to map the EC2 public URL to the IP address. This will help us configure the master and slave nodes with hostnames instead of IP addresses.

The following is the ifconfig output on HadoopNameNode:

[Screenshot: ifconfig output]

[Screenshot: Hostname and IP address mapping]

Now issue the hostname command; it will display the same IP address as the inet address from the ifconfig output.

[Screenshot: hostname output]

We need to modify the hostname to the EC2 public URL with the command below:

sudo hostname ec2-54-209-221-112.compute-1.amazonaws.com

[Screenshot: Updated hostname]

2.5 Modify /etc/hosts

Let's change the hosts file to use the EC2 public hostname and IP address.

Open /etc/hosts in vi. The very first line will show "127.0.0.1 localhost"; we need to replace that with the Amazon EC2 hostname and IP address we just collected.

[Screenshot: /etc/hosts opened in vi]

Modify the file and save your changes.

[Screenshot: Modified /etc/hosts]
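After the edit, the first line of /etc/hosts should look roughly like the line below, where 10.x.x.x stands in for the inet address you noted from ifconfig (the exact address differs per instance):

10.x.x.x ec2-54-209-221-112.compute-1.amazonaws.com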

Repeat sections 2.3 and 2.4 for the remaining three machines.

3. Set Up WinSCP Access to EC2 Instances

WinSCP is a handy utility for securely transferring files from your Windows machine to the Amazon EC2 instances.

Provide the hostname, username, and private key file, then save your configuration and log in.

[Screenshot: WinSCP login]

[Screenshot: WinSCP error]

If you see the error above, just ignore it; upon successful login you will see the Unix file system of the logged-in user (/home/ubuntu) on your Amazon EC2 Ubuntu machine.

[Screenshot: WinSCP file view]

Upload the .pem file to the master machine (HadoopNameNode). It will be used when connecting to the slave nodes while starting the Hadoop daemons.
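As an alternative to WinSCP (for example, from a Linux or macOS workstation), the same upload can be done with scp; the destination path below is an assumption based on the ubuntu user's default home directory.

# Copy the key pair file to the master node's home directory
scp -i hadoopec2cluster.pem hadoopec2cluster.pem ubuntu@ec2-54-209-221-112.compute-1.amazonaws.com:/home/ubuntu/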

If you have made it this far, good work! Things will start to get more interesting in Part 2.


Published at DZone with permission of Hardik Pandya, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
