Hadoop 2.0: With YARN, the Game Changes
Hadoop was created with HDFS, a distributed file system, and the Map/Reduce framework, a distributed processing platform. With YARN, Hadoop moves from being a distributed processing framework to being a distributed operating system.
“Operating system” sounded a little exaggerated when I wrote it, so just for fun I picked up the copy of Tanenbaum’s “Modern Operating Systems” I have lying around from my days as a student. Tanenbaum says there are two views of what an OS is:
- A virtual machine: “…the function of the operating system is to present the user with the equivalent of an extended machine or virtual machine that is easier to program than the underlying hardware”
- A resource manager: “…the job of the operating system is to provide for an orderly and controlled allocation of the processors, memories, and I/O devices among the various programs competing for them”
Hadoop already had the first part nailed down in its 1.0 release (actually, almost from its inception). With YARN it gets the second, so again, in my opinion, Hadoop can now be considered a distributed operating system.
So, YARN is Hadoop's resource manager, but what does that mean? Well, previous versions of Hadoop were built around Map/Reduce (there were a few attempts at providing other computation paradigms, but M/R was the main and almost only choice). The Map/Reduce framework, in the form of the JobTracker and TaskTrackers, handled both the division of work and the management of each server's resources, in the form of the map and reduce slots each node was configured to have.
With Hadoop 2.0, the realization that Map/Reduce, while great for some use cases, is not the only game in town led to a better, more flexible design that separates the management of computational resources from whatever runs on those resources, Map/Reduce included. YARN, as mentioned above, is that new resource manager.
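To make the shift concrete, here is a sketch of how the same node's capacity is described before and after YARN. In Hadoop 1.0, each TaskTracker advertised fixed map and reduce slots that only M/R could use; under YARN, each NodeManager advertises generic memory and CPU that any framework's containers can consume. The property names below are the standard Hadoop ones; the values are illustrative only:

```xml
<!-- Hadoop 1.0, mapred-site.xml: capacity expressed as M/R-only slots -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>8</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>
</property>

<!-- Hadoop 2.0, yarn-site.xml: capacity expressed as generic resources -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>12</value>
</property>
```

Nothing in the YARN version mentions maps or reduces, which is exactly the point: the same memory and cores can be handed to an M/R container, a Storm worker, or anything else.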
There’s a lot more to say about YARN, of course, and I highly recommend reading Hortonworks’ Arun Murthy’s excellent series of posts introducing it.
What I do want to emphasize is the effect this separation is already having on Hadoop's ecosystem. Here are a few examples:
- Storm on YARN – Twitter’s streaming framework made to run on Hadoop (Yahoo)
- Apache Samza – a Storm alternative developed from the ground up on YARN (Apache)
- Hoya – HBase on YARN, enabling on-the-fly clusters (Hortonworks)
- Weave – a wrapper around YARN to simplify deploying applications on it (Continuuity)
- Giraph – a graph processing system (Apache)
- Llama – a framework to allow external servers to get resources from YARN (Cloudera)
- Spark on YARN – Spark is an in-memory cluster computing framework for analytics
- Tez – a generalization of Map/Reduce to any directed acyclic graph of tasks (Hortonworks)
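One concrete way to see this plurality is through the ResourceManager's REST API, which reports each application's type (MAPREDUCE, SPARK, and so on) regardless of which framework submitted it. The sketch below parses that payload in Python; the endpoint URL, the `summarize_apps` helper, and the sample data are my own illustrations, with the payload shaped like the RM's `/ws/v1/cluster/apps` response:

```python
import json
from urllib.request import urlopen

# Default ResourceManager web UI port; adjust for your cluster (hypothetical host).
RM_APPS_URL = "http://localhost:8088/ws/v1/cluster/apps"

def summarize_apps(apps_json):
    """Extract (id, applicationType, state) tuples from an RM apps payload."""
    apps = (apps_json.get("apps") or {}).get("app") or []
    return [(a["id"], a.get("applicationType", "UNKNOWN"), a.get("state"))
            for a in apps]

# Sample payload shaped like the RM's response; the IDs and mix of
# application types are made up for illustration.
sample = {"apps": {"app": [
    {"id": "application_1410000000000_0001",
     "applicationType": "MAPREDUCE", "state": "RUNNING"},
    {"id": "application_1410000000000_0002",
     "applicationType": "SPARK", "state": "RUNNING"},
]}}

for app_id, app_type, state in summarize_apps(sample):
    print(app_id, app_type, state)

# Against a live cluster you would fetch the real payload instead:
# summarize_apps(json.load(urlopen(RM_APPS_URL)))
```

Under Hadoop 1.0 a listing like this could only ever contain Map/Reduce jobs; under YARN it is a mixed bag, which is the whole story of this post in one API call.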
In summary: in my opinion, the introduction of YARN into the Hadoop stack is a game changer, and it isn’t some theoretical thing that will happen in the distant future. Hadoop 2.0 is now GA, so it is all right here, right now.
Published at DZone with permission of Arnon Rotem-Gal-Oz, DZone MVB. See the original article here.