Use Cassandra to Run Hadoop MapReduce
If you are looking for a good NoSQL read on HBase vs. Cassandra, check out http://ria101.wordpress.com/2010/02/24/hbase-vs-cassandra-why-we-moved/. In short, HBase is optimized for reads and Cassandra for writes. Cassandra does a great job on reads too, so please do not think I am shooting either one down. I am just saying that both HBase and Cassandra have great value and useful purposes in their own right, and use cases exist for running both. HBase was recently promoted to a top-level Apache project, coming up and out of Hadoop.
Having worked with Cassandra a bit, I often see or hear folks asking about running MapReduce jobs against data stored in Cassandra instances. Well, Hadoopers & Hadooperettes, the Cassandra folks provide a way to do exactly that in the 0.6 release. It is VERY straightforward and well thought through. If you want to see the evolution, check out the JIRA issue https://issues.apache.org/jira/browse/CASSANDRA-342
So how do you do it? Very simple: Cassandra provides an implementation of Hadoop's InputFormat. In case you are new to Hadoop, the InputFormat is (basically) what the framework uses to load your data into the mapper. Cassandra's subclass, ColumnFamilyInputFormat, connects your mapper to pull the data in from Cassandra. What is also great here is that the Cassandra folks have spent the time implementing the integration in the classic “Word Count” example.
See https://svn.apache.org/repos/asf/cassandra/trunk/contrib/word_count/ for this example. Cassandra rows or row fragments (that is, pairs of key + SortedMap of columns) are input to Map tasks for processing by your job, as specified by a SlicePredicate that describes which columns to fetch from each row. Here’s how this looks in the word_count example, which selects just one configurable columnName from each row:
    ConfigHelper.setColumnFamily(job.getConfiguration(), KEYSPACE, COLUMN_FAMILY);
    SlicePredicate predicate = new SlicePredicate().setColumn_names(Arrays.asList(columnName.getBytes()));
    ConfigHelper.setSlicePredicate(job.getConfiguration(), predicate);
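Once that predicate is set, each call to your mapper receives one row key plus a SortedMap of the columns the predicate selected. As a rough, self-contained illustration of the shape of that input and of the word_count map logic (using only JDK types here, since the real signature is roughly Mapper<String, SortedMap<byte[], IColumn>, Text, IntWritable> and pulls in the Cassandra and Hadoop jars), the map step boils down to something like:

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.SortedMap;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class WordCountSketch {
    // Illustration only: String keys and byte[] values stand in for
    // Cassandra's row key + IColumn values delivered by ColumnFamilyInputFormat.
    static Map<String, Integer> mapRow(String rowKey, SortedMap<String, byte[]> columns) {
        Map<String, Integer> counts = new HashMap<>();
        for (byte[] value : columns.values()) {
            // Each column value is the raw bytes of the stored text.
            String text = new String(value, StandardCharsets.UTF_8);
            StringTokenizer tokens = new StringTokenizer(text);
            while (tokens.hasMoreTokens()) {
                counts.merge(tokens.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        SortedMap<String, byte[]> columns = new TreeMap<>();
        columns.put("text", "word count word".getBytes(StandardCharsets.UTF_8));
        // Two occurrences of "word" in the single selected column:
        System.out.println(mapRow("row1", columns).get("word")); // 2
    }
}
```

In the real job the counts are of course emitted as (Text, IntWritable) pairs and summed in the reducer, exactly as in vanilla Hadoop word count; the only Cassandra-specific part is where the row comes from.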
Cassandra also provides a Pig LoadFunc, so you can run jobs in the Pig DSL instead of writing Java code by hand. You can find it at https://svn.apache.org/repos/asf/cassandra/trunk/contrib/pig/.
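To give a feel for what that looks like, here is a hedged Pig Latin sketch: the cassandra:// load URI style comes from the contrib/pig example, but the keyspace/column family names (Keyspace1/Standard1) are just the stock sample config, and CassandraStorage is the LoadFunc class in that contrib tree, so adjust all of these to your own setup and registered jars:

```
-- Load every row of a column family as (key, {(column_name, value), ...})
rows = LOAD 'cassandra://Keyspace1/Standard1' USING CassandraStorage();
-- From here it is ordinary Pig: flatten the column bag, group, count, etc.
cols = FOREACH rows GENERATE flatten($1);
grouped = GROUP cols BY $0;
counts = FOREACH grouped GENERATE group, COUNT(cols);
DUMP counts;
```

The appeal is the same as with Pig over HDFS: a few relational-style lines replace a hand-written Mapper, Reducer, and driver class.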