Running Hive Jobs on AWS EMR
In a previous post I showed how to run a simple job using AWS Elastic MapReduce (EMR). In this example we continue to use EMR, but now to run a Hive job. Hive is a data warehouse system for Hadoop that facilitates easy data summarization, ad-hoc queries, and the analysis of large datasets stored in Hadoop-compatible file systems.

To create the job in EMR I will again use the CLI (written in Ruby) supplied with EMR (for installation see here). The job that I am going to create is described in more detail in the 'Getting Started Guide: Analyzing Big Data with AWS'.
- Create the EMR cluster:
elastic-mapreduce-ruby$ ./elastic-mapreduce --create --name "my job flow" --hive-interactive --key-pair-file ../../../4synergy_palma.pem --enable-debugging --alive
Created job flow j-2cc8q43iwsq42
I enabled debugging and, as I showed here, I defined the logging directory as a parameter in my credentials.json file.
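For reference, this is a minimal sketch of what such a credentials.json can look like. The field names are how I remember them from the EMR CLI documentation and the logging bucket name is made up, so check both against your own CLI version:

{
  "access_id": "<your AWS access key id>",
  "private_key": "<your AWS secret access key>",
  "keypair": "4synergy_palma",
  "key-pair-file": "../../../4synergy_palma.pem",
  "log_uri": "s3n://my-emr-logging-bucket/logs/",
  "region": "eu-west-1"
}

With the logging directory defined there, the CLI knows where to put the job flow logs and it does not have to be passed on the command line each time.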
While the job is running and the cluster is being created, you can see the progress by listing the job details:
elastic-mapreduce-ruby$ ./elastic-mapreduce --list -j j-2cc8q43iwsq42
j-2cc8q43iwsq42     STARTING                                                          my job flow
   PENDING        Setup Hadoop Debugging
   PENDING        Setup Hive
After a few minutes:
elastic-mapreduce-ruby$ ./elastic-mapreduce --list -j j-2cc8q43iwsq42
j-2cc8q43iwsq42     STARTING       ec2-54-228-55-226.eu-west-1.compute.amazonaws.com  my job flow
   PENDING        Setup Hadoop Debugging
   PENDING        Setup Hive
We see that a public DNS of the master node is provided, but the setup is still running, so we wait a little longer until we see this:
elastic-mapreduce-ruby$ ./elastic-mapreduce --list -j j-2cc8q43iwsq42
j-2cc8q43iwsq42     WAITING        ec2-54-228-55-226.eu-west-1.compute.amazonaws.com  my job flow
   COMPLETED      Setup Hadoop Debugging
   COMPLETED      Setup Hive
Now we can SSH into the master node by supplying the following command:
ssh hadoop@ec2-54-228-55-226.eu-west-1.compute.amazonaws.com -i 4synergy_palma.pem
You might need to restrict the permissions on the .pem file so that only the user you SSH with can read it. You can do so by running:
chmod og-rwx ~/mykeypair.pem
Add the host to the list of known hosts and we get the following startup screen:
Linux (none) 3.2.30-49.59.amzn1.i686 #1 SMP Wed Oct 3 19:55:00 UTC 2012 i686
--------------------------------------------------------------------------------

Welcome to Amazon Elastic MapReduce running Hadoop and Debian/Squeeze.

Hadoop is installed in /home/hadoop. Log files are in /mnt/var/log/hadoop.
Check /mnt/var/log/hadoop/steps for diagnosing step failures.

The Hadoop UI can be accessed via the following commands:

  JobTracker    lynx http://localhost:9100/
  NameNode      lynx http://localhost:9101/

--------------------------------------------------------------------------------
hadoop@ip-10-48-206-175:~$
Next we start up the Hive console on this node so we can add a JAR library to the Hive runtime. This JAR library is used, for instance, to have easy access to S3 buckets:
hadoop@ip-10-48-206-175:~$ hive
Logging initialized using configuration in file:/home/hadoop/.versions/hive-0.8.1/conf/hive-log4j.properties
Hive history file=/mnt/var/lib/hive_081/tmp/history/hive_job_log_hadoop_201305261845_2098337447.txt
hive> add jar /home/hadoop/hive/lib/hive_contrib.jar;
Added /home/hadoop/hive/lib/hive_contrib.jar to class path
Added resource: /home/hadoop/hive/lib/hive_contrib.jar
hive>
Now let's create the Hive table and have it represent the Apache log files that are in an S3 bucket. Run the following command in the Hive console to create the table:
hive> create table serde_regex(
    >   host string,
    >   identity string,
    >   user string,
    >   time string,
    >   request string,
    >   status string,
    >   size string,
    >   referer string,
    >   agent string)
    > row format serde 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
    > with serdeproperties (
    >   "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?",
    >   "output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s"
    > )
    > location 's3://elasticmapreduce/samples/pig-apache/input/';
OK
Time taken: 17.146 seconds
hive>
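To verify that the table was registered, a quick sanity check is to list the tables and describe the new one (standard Hive commands, output omitted here):

hive> show tables;
hive> describe serde_regex;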
Now we can run Hive queries on this table. To run a job that counts all records in the Apache log files:
hive> select count(1) from serde_regex;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201305261839_0001, Tracking URL = http://ip-10-48-206-175.eu-west-1.compute.internal:9100/jobdetails.jsp?jobid=job_201305261839_0001
Kill Command = /home/hadoop/bin/hadoop job -Dmapred.job.tracker=10.48.206.175:9001 -kill job_201305261839_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-05-26 19:06:46,442 Stage-1 map = 0%, reduce = 0%
2013-05-26 19:07:02,857 Stage-1 map = 16%, reduce = 0%, Cumulative CPU 4.03 sec
2013-05-26 19:07:03,871 Stage-1 map = 16%, reduce = 0%, Cumulative CPU 4.03 sec
... break ....
2013-05-26 19:07:59,677 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 11.06 sec
2013-05-26 19:08:00,709 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 11.06 sec
MapReduce Total cumulative CPU time: 11 seconds 60 msec
Ended Job = job_201305261839_0001
Counters:
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1   Accumulative CPU: 11.06 sec   HDFS Read: 593 HDFS Write: 7 SUCCESS
Total MapReduce CPU Time Spent: 11 seconds 60 msec
OK
239344
Time taken: 111.722 seconds
hive>
To show all fields of a row:
hive> select * from serde_regex limit 1;
OK
66.249.67.3   -   -   [20/Jul/2009:20:12:22 -0700]   "GET /gallery/main.php?g2_controller=exif.switchdetailmode&g2_mode=detailed&g2_return=%2fgallery%2fmain.php%3fg2_itemid%3d15741&g2_returnname=photo HTTP/1.1"   302   5   "-"   "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
Time taken: 2.335 seconds
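With the table in place you can run any ad-hoc HiveQL you like. As an illustration of my own (not taken from the getting started guide), the following query would count the requests per HTTP status code:

hive> select status, count(1) from serde_regex group by status;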
After playing around we have to terminate the cluster (so costs are kept to a minimum). Because we started the job in 'interactive' mode, so that we were able to log on to the server and run our ad-hoc queries, we have to terminate it ourselves:
elastic-mapreduce-ruby$ ./elastic-mapreduce --terminate j-2cc8q43iwsq42
Terminated job flow j-2cc8q43iwsq42
elastic-mapreduce-ruby$ ./elastic-mapreduce --list -j j-2cc8q43iwsq42
j-2cc8q43iwsq42     SHUTTING_DOWN  ec2-54-228-55-226.eu-west-1.compute.amazonaws.com  my job flow
   COMPLETED      Setup Hadoop Debugging
   COMPLETED      Setup Hive
elastic-mapreduce-ruby$
After termination we still have access to the created log files in our defined S3 bucket:
Although it might not be very useful in this case, because we ran the cluster in interactive mode, this option can be helpful when you bootstrap the cluster with a script. In that case the queries run automatically and the cluster terminates when it is finished, leaving the log files behind in the S3 bucket.
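As a sketch of what such a non-interactive run could look like: the elastic-mapreduce CLI can start a job flow that executes a Hive script stored in S3 and, because --alive is omitted, shuts the cluster down when the step finishes. The bucket and script name below are made up, and the exact --hive-script/--args syntax should be checked against the documentation of your CLI version:

elastic-mapreduce-ruby$ ./elastic-mapreduce --create --name "my hive batch job" \
    --hive-script --args s3://my-bucket/queries/count_requests.q \
    --enable-debugging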
One way to browse through this logging is by using the debugging tool of EMR. Go to the Management Console and select the EMR service. In the start screen, select the job flow you used for this example and click the 'Debug' button:
Now we see the steps of our previous job flow. The step we are interested in here is the interactive jobs. Click the 'View Jobs' link of that line:
Now we see two jobs, for which we can 'View Tasks' by clicking the corresponding link. Finally, click 'View Attempts' for the reduce or map task and you will have access to the copied log files:
For more information about using Hive with EMR, see here.