Faster Datanodes With Less Wait IO in Hadoop
I have often noticed that the check Hadoop uses to calculate usage on the datanodes causes a fair amount of wait IO, driving up load.
We want every cycle we can get from every spindle!
So I came up with a nice little hack to use df instead of du.
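To see why this matters, time both commands against a big HDFS data directory (the path here is just illustrative). du has to stat every block file in the tree, while df only reads the filesystem's own usage counters:

time du -sk /data/hdfs/dfs/data   # walks every block file; can take minutes on a full disk
time df -k /data/hdfs/dfs/data    # reads filesystem counters; returns almost instantly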
Here is basically what I did so you can do it too.
First, move the real binary out of the way:

mv /usr/bin/du /usr/bin/bak_du

Then create a new /usr/bin/du (vi /usr/bin/du) and save this inside of it:

#!/bin/sh
# Hadoop calls "du -sk <dir>", so $2 is the directory being measured.
# Report df's "Used" column for that filesystem instead of walking the tree.
mydf=$(df $2 | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $3 }')
echo -e "$mydf\t$2"
Then give it execute permission:
chmod a+x /usr/bin/du
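Before restarting anything, sanity-check that the shim prints the same "size<TAB>path" shape the real du -sk would, since that is what Hadoop parses (the path and number here are illustrative):

du -sk /data/hdfs/dfs/data
1847563412	/data/hdfs/dfs/data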
Restart your datanode, check the log for errors, and make sure it comes back up.

Voilà!
Now when Hadoop calls “du -sk /yourhdfslocation”, it gets its answer immediately instead of grinding through the entire block directory tree.
What's wrong with this?
1) I assume you are not storing anything else on these disks, so df comes out very close to du because almost all of the data is in HDFS.

2) If you have more than one volume holding your HDFS blocks, this is not exactly accurate: you are calculating only one of them and using that result for the others, which skews the reported size of each volume. This is simple to fix; just parse your df result differently and use the path passed in as the second parameter to know which volume to look up in your df output (see the sketch after this list). Your first volume is most likely the largest anyway, and you should be monitoring disk space another way, so it is not going to be very harmful if you just check and report the first volume's size.

3) You might not have your HDFS blocks on your first volume; see #2. You can just pull out the volume you want to report.
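Here is a minimal sketch of that per-volume variant (untested, assuming a Linux df; adjust to your layout). Passing the directory to df limits the output to the one filesystem that actually holds it, -k matches du -sk's kilobyte units, and -P keeps long device names from wrapping onto a second line:

#!/bin/sh
# Sketch only: report the "Used" column of whichever filesystem holds the
# directory Hadoop asked about. Hadoop invokes this as "du -sk <dir>",
# so the directory is $2.
dir=$2
used=$(df -kP "$dir" | awk 'NR==2 { print $3 }')
echo -e "$used\t$dir"

And if the hack ever misbehaves, the real binary is still there: mv /usr/bin/bak_du /usr/bin/du puts everything back.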
Published at DZone with permission of Joe Stein, DZone MVB.