Apache Projects are the Justice League of Scalability
In this post I will describe what I believe to be the most important projects within the Apache Projects for building scalable web sites and generally managing large volumes of data.
If you are not aware of the Apache Projects, then you should be. Do not mistake this for the Apache web (httpd) server, which is just one of the many projects in the Apache Software Foundation. Each project is its own open-source movement and, while Java is often the language of choice, it may have been created in any language.
I often see developers working on solutions that could easily be solved by using one of the tools in the Apache Projects toolbox. The goal of this post is to raise awareness of these great pieces of open-source software. If this is not new to you, then hopefully it will be a good recap of some of the key projects for building scalable solutions.
The Apache Software Foundation
The Apache Software Foundation provides support for the Apache community of open-source software projects.
You have probably heard of many of these projects, such as Cassandra or Hadoop, but maybe you did not realize that they all come under the same umbrella. This umbrella is known as the Apache Projects and is a great place to keep tabs on existing and new projects. It is also a gold seal of approval in the open-source world. I think of these projects, or tools, as the superheroes of building modern scalable websites. By day they are just a list of open-source projects on a rather dry and hard-to-navigate website at http://www.apache.org. But by night… they do battle with some of the world's most gnarly datasets. Terabytes and even petabytes are nothing to these guys. They are the nemesis of high throughput, the Digg effect, and the dreaded queries-per-second upper limit. They laugh in the face of limited resources. OK, maybe I am getting carried away here. Let's move on…
Joining the Justice League
Before joining the Apache Projects, a project is first accepted into the Apache Incubator. For instance, Deltacloud has recently been accepted into the Apache Incubator.
The Apache Incubator has two primary goals:
- Ensure all donations are in accordance with the ASF legal standards
- Develop new communities that adhere to our guiding principles
You can find a list of all the projects currently in the Apache Incubator here.
Here is a list, in no particular order, of tools you will find in the Apache Software Foundation projects.
- Apache Cassandra
- Apache Hadoop
- Apache HBase
- Apache ZooKeeper
- Apache Solr
- Apache ActiveMQ
- Apache Mahout
While each has its major benefits, none is a "silver bullet" or a "golden hammer". When you design software that scales or does any one job very well, you have to make certain trade-offs. If you are not using the software for the task it was designed for, then you will quickly find its weaknesses. Understanding and balancing the strengths and weaknesses of the various solutions will enable you to better design your scalable architecture. Just because Facebook uses Cassandra to do battle with its petabytes of data does not necessarily mean it will be a good solution for your "simple" terabyte-wrestling architecture. What you do with the data is often more important than how much data you have. For instance, Facebook has decided that HBase is now a better solution for many of its needs. [This is the cue for everyone to run to the other side of the boat.]
Kryptonite is one of the few things that can kill Superman.
The word kryptonite is also used in modern speech as a synonym for Achilles' heel, the one weakness of an otherwise invulnerable hero.
Now, let's look in more detail at some of the projects that can fly faster than a speeding bullet and leap tall datasets in a single [multi-server, distributed, parallelized, fault-tolerant, load-balanced and adequately redundant] bound.
Cassandra was built by Facebook to hold the ridiculous volumes of data within its email system. It is much like a distributed key-value store, but with a hierarchy. The model is very similar to most NoSQL databases. The data model consists of columns, column families and super-columns. I will not go into detail here about the data model, but there is a great intro (see "WTF is a SuperColumn? An Intro to the Cassandra Data Model") that you can read.
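To make that hierarchy concrete, here is a rough sketch of the shape of the data model using nested Python dicts. All the names below are hypothetical, not a real schema, and this is only the shape of the data, not Cassandra's API:

```python
# keyspace -> column family -> row key -> column name -> value
keyspace = {
    "UserData": {                         # column family
        "phil": {                         # row key
            "email": "phil@example.com",  # column -> value
            "city": "Vancouver",
        },
    },
}

# reading one column is a walk down the hierarchy
email = keyspace["UserData"]["phil"]["email"]
```

A super-column simply adds one more level of nesting between the row key and the columns.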
Cassandra can handle fast writes and reads, but its kryptonite is consistency. It takes time to make sure all the nodes serving the data have the same value. For this reason, Facebook is now moving away from Cassandra for its new messaging system, to HBase. HBase is a NoSQL database built on top of Hadoop. More on this below.
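The trade-off can be sketched in a few lines of Python: when replicas may hold stale copies, reading from more replicas raises your chance of seeing the newest value, at the cost of waiting on more nodes. This is only an illustration of the idea, not Cassandra's actual replication protocol:

```python
def read_at_level(replica_responses, r):
    # Consider only the first r replicas to answer, and return the
    # value with the newest timestamp among them (illustrative only).
    answered = replica_responses[:r]
    return max(answered, key=lambda rec: rec["timestamp"])["value"]

# two replicas still hold a stale value; one has the latest write
replicas = [
    {"value": "old", "timestamp": 1},
    {"value": "new", "timestamp": 2},
    {"value": "old", "timestamp": 1},
]
```

With `r=1` the read can return the stale `"old"`; with `r=2` it already sees the newer write. Waiting on more replicas buys consistency at the price of latency.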
Apache Hadoop, son of Apache Nutch and later raised under the watchful eye of Yahoo, has since become an Apache project. Nutch is an open-source web-search project built on Lucene. The component of Nutch that became Hadoop gave Nutch its "web-scale".
hadoop’s goal was to manage large volumes of data on commodity hardware, such as thousands of desktop harddrives. hadoop takes much of it’s design from a paper published by google on their bigtable . it stores data on the hadoop distributed file-system (a.k.a. “hdfs”) and manages the running of distributed map-reduce jobs. in a previous post i gave an example using ruby with hadoop to perform map-reduce jobs.
Map-reduce is a way to crunch large datasets using two simple algorithms ("map" and "reduce"). You write these algorithms specific to the data you are processing. Although your map and reduce code can be extremely simple, it scales across the entire dataset using Hadoop. This applies even if you have petabytes of data across thousands of machines. Your resulting data can be found in a directory on your HDFS disk when the map-reduce job completes. Hadoop provides some great web-based tools for visualizing your cluster and monitoring the progress of any running jobs.
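As a minimal sketch of the idea (plain in-memory Python, not Hadoop's actual APIs), here is word counting expressed as a map phase and a reduce phase:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # "map": emit a (word, 1) pair for every word in every input record
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Hadoop sorts and groups the map output by key before reducing;
    # sorted() + groupby() emulates that shuffle step here.
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield (key, sum(count for _, count in group))

lines = ["the quick brown fox", "the lazy dog", "the fox"]
counts = dict(reduce_phase(map_phase(lines)))  # {'the': 3, 'fox': 2, ...}
```

The two functions never see the whole dataset at once, which is exactly what lets Hadoop run them in parallel across thousands of machines.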
Hadoop deals with very large chunks of data. You can tell Hadoop to map-reduce across everything it finds under a specific directory within HDFS and then output the results to another directory within HDFS. Hadoop likes [really, really] large files (many gigabytes) made from large blocks (e.g. 128MB), so it can stream through them quickly without many disk seeks, and manage the distribution of those blocks effectively.
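The block layout can be illustrated with a small helper that splits a file into fixed-size byte ranges, the way HDFS divides files into blocks. This is a sketch, not HDFS code; the 128MB default is just the block size mentioned above:

```python
def block_ranges(file_size, block_size=128 * 1024 * 1024):
    # Split a file into fixed-size (start, end) byte ranges;
    # the last block may be shorter than the rest.
    return [(start, min(start + block_size, file_size))
            for start in range(0, file_size, block_size)]

# small numbers for illustration: a 300-byte "file" with 128-byte "blocks"
ranges = block_ranges(300, block_size=128)  # [(0, 128), (128, 256), (256, 300)]
```

Each range can then be streamed sequentially by a different machine, which is why few, large blocks mean few disk seeks.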
hadoop’s kryptonite would be it’s brute force. it is designed for churning through large volumes of data, rather than being real-time. a common use-case is to spool up data for a period of time (an hour) and then run your map-reduce jobs on that data. by doing this you can very efficiently process vasts amounts of data, but you would not have real-time results.
Hadoop: The Definitive Guide by Tom White
Apache HBase is a NoSQL layer on top of Hadoop that adds structure to your data. HBase uses write-ahead logging to manage writes, which are then merged down to HDFS. A client request is responded to as soon as the update is written to the write-ahead log and the change is made in memory. This means that updates are very fast. The read side is also fast, since data is stored on disk in key order; scans across sequential keys are therefore fast too, due to the low number of disk seeks required. Larger scans are not currently possible.
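A toy sketch of that write path, assuming nothing about HBase's real classes, might look like this:

```python
class TinyStore:
    """Illustrative sketch of HBase's write path (not HBase's real classes):
    append each mutation to a write-ahead log, apply it in memory,
    and acknowledge the client immediately."""

    def __init__(self):
        self.wal = []       # write-ahead log; in HBase this is durable storage
        self.memstore = {}  # in-memory copy, later flushed/merged to HDFS

    def put(self, key, value):
        self.wal.append((key, value))  # 1. log the mutation
        self.memstore[key] = value     # 2. apply the change in memory
        return "ack"                   # 3. respond to the client right away

    def scan(self, start, stop):
        # key-ordered storage keeps sequential-range scans cheap
        return [(k, self.memstore[k])
                for k in sorted(self.memstore) if start <= k < stop]
```

The client gets its acknowledgement after steps 1 and 2, before anything is merged down to HDFS, which is why writes feel so fast.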
hbase’s krytonite would be similar to most nosql databases out there. many use-cases still benefit from using relational database and hbase is not a relational database.
Look out for this book coming soon:
HBase: The Definitive Guide by Lars George (May 2011)
Lars has an excellent blog that covers Hadoop and HBase thoroughly.
Apache ZooKeeper is the janitor of our Justice League. It is being used more and more in scalable applications such as Apache HBase, Apache Solr (see below) and Katta. It manages an application's distributed needs, such as configuration, naming and synchronization. All these tasks are important when you have a large cluster with constantly failing disks, failing servers, replication and shifting roles between your nodes.
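ZooKeeper exposes those needs through a small hierarchical namespace of "znodes" that look like filesystem paths. The toy model below captures only that data model; it has none of ZooKeeper's replication, watches or coordination guarantees, and the paths are hypothetical:

```python
class ZNodeTree:
    """Toy model of ZooKeeper's hierarchical namespace of znodes:
    each path holds a small blob of data, and a znode can only be
    created under an existing parent."""

    def __init__(self):
        self.nodes = {"/": b""}

    def create(self, path, data=b""):
        parent = path.rsplit("/", 1)[0] or "/"
        if parent not in self.nodes:
            raise KeyError(f"parent znode {parent} does not exist")
        self.nodes[path] = data

    def get(self, path):
        return self.nodes[path]

# e.g. cluster-wide configuration stored at well-known paths
tree = ZNodeTree()
tree.create("/config")
tree.create("/config/db_host", b"10.0.0.5")
```

Because every node in the cluster reads the same paths, shared configuration and naming fall out of the data model almost for free.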
Apache Solr is built on top of one of my favorite Apache projects, Apache Lucene [Java]. Lucene is a powerful search-engine API written in Java. I have built large distributed search engines with Lucene and have been very happy with the results.
Solr packages up Lucene as a product that can be used stand-alone. It provides various ways to interface with the search engine, such as via XML or JSON requests. Therefore, Java knowledge is not a requirement for using it. It adds a layer to Lucene that makes it scale more easily across a cluster of machines.
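For example, querying Solr is just an HTTP request, so any language can build one. The sketch below constructs a standard `/select` URL; the host, port and core name are assumptions for illustration:

```python
from urllib.parse import urlencode

def solr_select_url(base_url, query, rows=10, response_format="json"):
    # Build a Solr /select request URL; any HTTP client can then
    # fetch it, so no Java is needed on the client side.
    params = urlencode({"q": query, "rows": rows, "wt": response_format})
    return f"{base_url}/select?{params}"

# "mycore" is a hypothetical core name, not part of a default install
url = solr_select_url("http://localhost:8983/solr/mycore", "title:apache")
```

Fetching that URL would return the matching documents as JSON, ready for any client to parse.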
A message queue is a way to quickly collect data, funnel it through your system and use the same information for multiple services. This provides separation, within your architecture, between collecting data and using it. Data can be entered into different queues (data streams). Different clients can subscribe to these queues and use the data as they wish.
ActiveMQ has two types of queue, "queue" and "topic".
The queue type "queue" means that each piece of data on the queue can only be read once. If client "A" reads a piece of data off the queue then client "B" cannot read it, but can read the next item on the queue. This is a good way of dividing up data across a cluster. All the clients in the cluster will take a share of the data and process it, but the whole dataset will only be processed once. Faster clients will take a larger share of the data and slow clients will not hold up the queue.
A "topic" means that each subscribed client will see all the data, regardless of what the other clients do. This is useful if you have different services all requiring the same dataset. The data can be collected and managed once by ActiveMQ, but utilized by multiple processors. Slow clients can cause this type of queue to back up.
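The difference between the two delivery modes can be sketched with a toy broker in Python. This mimics the semantics only; it is not the JMS/ActiveMQ API:

```python
from collections import deque

class ToyBroker:
    """Toy illustration of ActiveMQ's two delivery modes."""

    def __init__(self):
        self._queue = deque()
        self._topic_inboxes = []

    # -- "queue": each message is consumed by exactly one client --
    def send(self, msg):
        self._queue.append(msg)

    def receive(self):
        return self._queue.popleft() if self._queue else None

    # -- "topic": every subscriber sees every message --
    def subscribe(self):
        inbox = deque()
        self._topic_inboxes.append(inbox)
        return inbox

    def publish(self, msg):
        for inbox in self._topic_inboxes:
            inbox.append(msg)
```

On the queue side, clients A and B calling `receive()` split the messages between them; on the topic side, every inbox returned by `subscribe()` gets every published message.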
The son of Lucene and now a Hadoop sidekick, Apache Mahout was born to be an intelligence engine. Named from the Hindi word for "elephant driver" (Hadoop being the elephant), Mahout has grown into a top-level Apache project in its own right, mastering the art of artificial intelligence on large datasets. While Hadoop can tackle the more heavyweight datasets on its own, more cunning datasets require a little more algorithmic manipulation. Much of the focus of Mahout is on processing large datasets using map-reduce on top of Hadoop, but the code-base is optimized to run well on non-distributed datasets as well.
We Appreciate Your Comments
If you found this blog post useful then please leave a comment below. I would like to hear which other Apache projects you think deserve more attention, and whether you have ever been saved, like Lois Lane, by one of the above.
Published at DZone with permission of Phil Whelan, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.