Is there a future for Map/Reduce?
Google's Jeffrey Dean and Sanjay Ghemawat filed the patent request and published the Map/Reduce paper ten years ago (2004). According to Wikipedia, Doug Cutting and Mike Cafarella created Hadoop, with its own implementation of Map/Reduce, one year later at Yahoo. Both implementations were built for the same purpose: batch indexing of the web.
Back then, the web began its "Web 2.0" transition: pages became more dynamic and people began to create more content, so an efficient way to reprocess and rebuild the web index was needed, and Map/Reduce was it. Web indexing was a great fit for Map/Reduce, since the initial processing of each source (web page) is completely independent of any other, which makes for a very convenient map phase, and you then combine the results to build the reverse index. That said, even the core Google algorithm, the famous PageRank, is iterative (so less appropriate for Map/Reduce), not to mention that as the internet got bigger and updates became more and more frequent, Map/Reduce wasn't enough. Again Google (who seem to be consistently a few years ahead of the industry) began coming up with alternatives like Google Percolator and Google Dremel (both papers were published in 2010; Percolator was introduced that year, and Dremel had been used inside Google since 2006).
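To make that fit concrete, here is a minimal Python sketch of the pattern (the page data is made up): the map phase processes each page in complete isolation, and the reduce phase merges the emitted (word, page) pairs into a reverse index.

```python
from collections import defaultdict

def map_phase(page_id, text):
    """Map: each page is processed independently of all others."""
    return [(word, page_id) for word in set(text.lower().split())]

def reduce_phase(mapped):
    """Reduce: group (word, page_id) pairs into an inverted index."""
    index = defaultdict(set)
    for word, page_id in mapped:
        index[word].add(page_id)
    return index

# Hypothetical "web": two tiny pages.
pages = {"p1": "big data is big", "p2": "fast data"}
mapped = [kv for pid, text in pages.items() for kv in map_phase(pid, text)]
index = reduce_phase(mapped)
print(sorted(index["data"]))  # pages that contain the word "data"
```

Because `map_phase` never looks at more than one page, it parallelizes trivially across a cluster, which is exactly what made the model attractive for indexing.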
So now it is 2014, and it is time for the rest of us to catch up with Google and get over Map/Reduce, for multiple reasons:
- End users' expectations (they hear "big data" but interpret that as "fast data")
- Iterative problems, such as graph algorithms, which are inefficient since you need to load and reload the data on each iteration
- Continuous ingestion of data (increments coming in as small batches or streams of events), where joining to existing data can be expensive
- Real-time problems, both queries and processing
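The iterative-problem point can be seen in a toy PageRank loop over a hypothetical three-page graph: each iteration consumes the ranks produced by the previous one, so a Map/Reduce implementation must launch a separate job per iteration, re-reading the same static link graph from disk every pass.

```python
# Toy PageRank: each iteration depends on the previous ranks.
# In classic Map/Reduce, the loop body below would be a separate job,
# re-reading the (unchanging) link graph from HDFS on every pass.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # hypothetical graph
ranks = {page: 1.0 / len(links) for page in links}
damping = 0.85

for _ in range(20):  # one Map/Reduce job per iteration
    contribs = {page: 0.0 for page in links}
    for page, outlinks in links.items():      # "map": emit rank shares
        for target in outlinks:
            contribs[target] += ranks[page] / len(outlinks)
    ranks = {page: (1 - damping) / len(links) + damping * c   # "reduce"
             for page, c in contribs.items()}

print({p: round(r, 3) for p, r in sorted(ranks.items())})
```

In-memory engines keep `links` and `ranks` resident between iterations; that is the core of the efficiency argument against running iterative algorithms as chained Map/Reduce jobs.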
In my opinion, Map/Reduce is an idea whose time has come and gone. It won't die in a day or a year; there are still a lot of working systems that use it, and the alternatives are still maturing. I do think, however, that if you need to write or implement something new that would build on Map/Reduce, you should use another option, or at the very least carefully consider the alternatives.
So how is this change going to happen? Luckily, Hadoop has recently adopted YARN (you can see my presentation on it here), which opens up the possibility of going beyond Map/Reduce without changing everything, even though in effect a lot will change. Note that some of the new options do have migration paths, and we also retain access to all that "big data" we already have in Hadoop, as well as extended reuse of parts of the ecosystem.
The first type of effort to replace Map/Reduce is to actually subsume it by offering more flexible batch processing. After all, saying Map/Reduce is not relevant doesn't mean that batch processing is not relevant; it does mean that there is a need for more complex processing flows. There are two main candidates here: Tez and Spark. Tez offers a nice migration path, as it is replacing Map/Reduce as the execution engine for both Pig and Hive, while Spark has a compelling offer by combining batch and stream processing (more on this later) in a single engine.
The second type of effort, or processing capability, that will help kill Map/Reduce is MPP databases on Hadoop. Like the "flexible batch" approach mentioned above, this replaces a functionality that Map/Reduce was used for: unleashing the data already processed and stored in Hadoop. The idea here is twofold:
- To provide fast query capabilities* by using specialized columnar data formats and database engines deployed as daemons on the cluster
- To provide rich query capabilities by supporting more and more of the SQL standard and enriching it with analytics capabilities (e.g. via MADlib)
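A minimal sketch of why the columnar formats mentioned above speed up analytical queries (the table and field names are made up): storing each field contiguously means a query that aggregates one column scans only that column's values instead of every full row, and similar values stored together also compress far better.

```python
# Row layout stores whole records together; columnar layout stores
# each field contiguously.
rows = [("u1", "IL", 5), ("u2", "US", 3), ("u3", "IL", 9)]  # hypothetical data

# Columnar: one list per field.
columns = {
    "user":    [r[0] for r in rows],
    "country": [r[1] for r in rows],
    "amount":  [r[2] for r in rows],
}

# "SELECT sum(amount)" touches one third of the data in columnar form:
# only the "amount" values are read, never users or countries.
total = sum(columns["amount"])
print(total)
```

Formats like ORC and Parquet apply this idea on disk, adding per-column encoding and statistics so engines can skip whole blocks that can't match a predicate.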
Efforts in this arena include Impala from Cloudera, HAWQ from Pivotal (which is essentially Greenplum over HDFS), startups like Hadapt, and even Actian trying to leverage their ParAccel acquisition with the recently announced Actian Vector. Hive is somewhere in the middle, relying on Tez on one hand and using vectorization and a columnar format (ORC) on the other.
The third type of processing that will help dethrone Map/Reduce is stream processing. Unlike the two previous types of effort, this covers ground that Map/Reduce can't cover, even inefficiently. Stream processing is about handling a continuous flow of new data (e.g. events) and processing it (enriching, aggregating, etc.) in seconds or less. The two major contenders in the Hadoop arena seem to be Spark Streaming and Storm, though, of course, there are several other commercial and open-source platforms that handle this type of processing as well.
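A tiny, self-contained sketch of the kind of windowed aggregation these engines perform at scale (the event stream and field names are hypothetical): events are grouped into tumbling time windows and aggregated per key, rather than being collected into a batch and reprocessed from scratch.

```python
from collections import defaultdict

# Hypothetical event stream: (timestamp_seconds, user, amount)
events = [(0, "u1", 5), (2, "u2", 3), (4, "u1", 1), (11, "u2", 7), (13, "u1", 2)]

WINDOW = 10  # tumbling window size in seconds

def window_aggregate(stream, window):
    """Sum amounts per user within each tumbling time window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, user, amount in stream:
        windows[ts // window][user] += amount   # window 0 covers 0-9s, etc.
    return {w: dict(users) for w, users in sorted(windows.items())}

print(window_aggregate(events, WINDOW))
```

Stream engines run this incrementally as events arrive and emit each window's result when it closes; the point is that results are available seconds after the data, not after the next batch run.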
In summary: Map/Reduce is great. It has served us (as an industry) for a decade, but it is now time to move on and bring the richer processing capabilities we have elsewhere to solve our big data problems as well.
Last note: I focused on Hadoop in this post even though there are several other platforms and tools around. I think that, regardless of whether Hadoop is the best platform, it is the one becoming the de facto standard for big data (remember Betamax vs. VHS?).
One really, really last note: if you read up to here, and you are a developer living in Israel, and you happen to be looking for a job, I am looking for another developer to join my technology research team @ Amdocs. If you're interested, drop me a note: arnon.rotemgaloz at amdocs dot com, or via my Twitter/LinkedIn profiles.
*Esp. in regard to analytical queries. Operational SQL on Hadoop, with efforts like Phoenix, IBM's BigSQL, or Splice Machine, is also happening, but that's another story.
Illustration idea found in James Mickens's talk at Monitorama 2014 (which is, by the way, a really funny presentation; go watch it). Ohh yeah... and Pulp Fiction :)
Published at DZone with permission of Arnon Rotem-gal-oz, DZone MVB. See the original article here.