As I learned about HBase and HDFS, I wanted to understand how HDFS actually performs its replication: whether it is synchronous, and what the write flow looks like. As it turns out, the answers weren't easy to find, but I ran into a very good page on the internals of HDFS and how it works that is worth sharing:
Understanding Hadoop Clusters and the Network
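The short version of what that page explains: an HDFS client doesn't write to all replicas in parallel. It streams data to the first DataNode, which forwards it to the second, which forwards it to the third, and acknowledgements flow back up the chain; the write completes only once every replica has acked. So replication inside the write pipeline is effectively synchronous. Here is a toy model of that idea in Python — purely an illustration of the pipeline shape, not the real protocol (the lists standing in for DataNodes and the recursive forwarding are my own simplifications):

```python
# Toy sketch of the HDFS write pipeline: the client hands a packet to
# the first "DataNode", each node stores it and forwards it downstream,
# and the ack only reaches the client after the last replica has it.

def pipeline_write(packet, datanodes):
    """Send `packet` down a chain of datanodes; return True only when
    every node in the pipeline has stored it and acknowledged."""
    if not datanodes:
        return True  # end of the chain; the ack flows back upstream
    head, rest = datanodes[0], datanodes[1:]
    head.append(packet)                   # this replica persists the packet
    return pipeline_write(packet, rest)   # forward, then wait for the ack

# Three empty "DataNodes" (dfs.replication defaults to 3).
dn1, dn2, dn3 = [], [], []
acked = pipeline_write(b"block-data", [dn1, dn2, dn3])
```

After the call returns, `acked` is true and all three nodes hold the packet — which is the point: the client's write isn't done until the whole pipeline is.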
Another aspect of HBase and HDFS I appreciate is the ingenuity of building a filesystem that is good enough for Hadoop and then layering a database on top of it, rather than reinventing the wheel. Kudos to Google for their work on GFS, MapReduce, and BigTable, which inspired these open source implementations.