An Introduction to HBase
In this article, let's take a look at an introduction to HBase and also explore how to create a three-node HBase cluster.
In our last two articles, we talked about the HDFS cluster and the ZooKeeper cluster, both of which are needed for deploying OpenTSDB in clustered mode. Continuing the series, we are going to talk about HBase, which OpenTSDB will use in the cluster to store data.
HBase is a column-oriented NoSQL database management system that runs on top of Hadoop Distributed File System (HDFS).
It is a part of the Hadoop ecosystem that provides random real-time read/write access to data in the Hadoop File System.
One can store data in HDFS either directly or through HBase. A data consumer can then read and access the data in HDFS randomly using HBase, which sits on top of the Hadoop File System and provides read and write access.
It is well suited for sparse data sets, which are common in many big data use cases. Like most other Apache projects, it is written mainly in Java. It can store huge amounts of data, from terabytes to petabytes. HBase is not a relational database system: unlike a relational database, it does not support a structured query language like SQL. It is built for low-latency operations and has some specific features compared to traditional relational models.
Storage Mechanism in HBase:
HBase is a column-oriented database. It stores data in tables, sorted by row key. In the table schema, only the column families are defined up front; each is essentially a key-value store. A table has multiple column families, and each column family can have any number of columns. Although HBase stores data on disk grouped by column family, it is distinctly different from traditional columnar databases.
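As a quick illustration, here is a sketch of an HBase shell session (the table, row, and column names are hypothetical, made up for this example) that creates a table with one column family, then writes and reads a cell:

```
create 'metrics', 'cf'                      # table with a single column family 'cf'
put 'metrics', 'row1', 'cf:temp', '23'      # write value '23' to column 'temp' in family 'cf'
get 'metrics', 'row1'                       # read back all columns of 'row1'
scan 'metrics'                              # rows come back sorted by row key
```

Note that no columns were declared at table creation, only the family `cf`; the column `temp` comes into existence with the first `put`.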
In HBase, the tables are divided into regions and served by region servers.
The main components of HBase are the HBase Master and the region servers. The HBase Master:
- Uses Apache ZooKeeper and assigns regions to region servers.
- Is responsible for load balancing: it reduces the load on busy servers by reassigning regions to less occupied servers.
- Is responsible for schema changes (HBase table creation, the creation of column families, etc.).
- Provides an interface for creating, deleting, and updating tables.
- Monitors all the region servers in the cluster.
HBase tables are split horizontally into regions, which are managed by region servers.
HBase Region Server:
Regions are assigned to nodes in the cluster called region servers, and each region server manages its regions. When the data size grows beyond a limit, HBase automatically splits the table and distributes the load to another region server, reducing the load on any single one. A single region server can serve around 1,000 regions.
The process of splitting tables into regions is called sharding, and it is done automatically.
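Although splitting is automatic, the HBase shell also lets you pre-split a table at creation time so it starts with several regions instead of one (the table name and split points below are hypothetical):

```
create 'metrics', 'cf', SPLITS => ['g', 'n', 't']   # four initial regions: [,g) [g,n) [n,t) [t,)
```

Pre-splitting is a common way to avoid an initial write hotspot on a single region server.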
Role of Region Server:
- Communicates with clients and handles data-related operations.
- Decides the size of the regions.
- Splits regions automatically.
- Handles read and write requests for all the regions under it.
HFile is a file-based data structure used to store data in HBase. It is a file of sorted key/value pairs, where both keys and values are byte arrays. This data structure supports random read and write operations on the table; using the key, HBase locates and updates values in the table.
MemStore is a write buffer: before being written permanently, data is buffered in the MemStore. When the MemStore is full, its contents are flushed to an HFile. HBase does not write into an existing HFile; instead, it creates a new one.
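This flush can also be triggered manually from the HBase shell, which is occasionally useful before maintenance (the table name here is hypothetical):

```
flush 'metrics'   # flush the table's MemStores to new HFiles
```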
HBase uses HDFS to store data. For more info, please refer to our article: An Introduction to HDFS.
HBase uses ZooKeeper as a centralized monitoring server to maintain configuration information. It also provides distributed synchronization. For more info, please refer to our last article: An Introduction to ZooKeeper.
For deploying HBase, we will use the harisekhon/hbase:1.2 docker image.
Create an hbase-site.xml file in the /root/hadoop/ location on all 3 VMs.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode:8020/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zoo1,zoo2,zoo3</value>
  </property>
  <property>
    <name>hbase.zookeeper.session.timeout</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.status.published</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.region.replica.replication.enabled</name>
    <value>true</value>
  </property>
</configuration>
Replace zoo1,zoo2,zoo3 with the respective ZooKeeper IPs or hostnames.
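The substitution can also be scripted. This is a minimal sketch assuming hypothetical ZooKeeper addresses (10.0.0.11-13); it creates a one-property sample file if the real config is absent, so the sed command can be demonstrated on its own:

```shell
#!/bin/sh
# Path of the config file; defaults to the current directory for the demo.
# In the article's setup this would be /root/hadoop/hbase-site.xml.
CONF=${CONF:-hbase-site.xml}
# Hypothetical ZooKeeper addresses -- replace with your VMs' real IPs.
ZK_QUORUM="10.0.0.11,10.0.0.12,10.0.0.13"

# Create a one-property sample if the real file is absent (demo only).
if [ ! -f "$CONF" ]; then
  printf '<property>\n  <name>hbase.zookeeper.quorum</name>\n  <value>zoo1,zoo2,zoo3</value>\n</property>\n' > "$CONF"
fi

# Replace the placeholder quorum with the real addresses, in place.
sed -i "s/zoo1,zoo2,zoo3/$ZK_QUORUM/" "$CONF"

# Show the resulting quorum setting.
grep -A 1 'hbase.zookeeper.quorum' "$CONF"
```

Run this once on each of the three VMs (or distribute the edited file with scp) before starting the containers.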
HBase on VM 1:
docker run -dit --name hbase1 -p 8080:8080 -p 8085:8085 -p 9090:9090 -p 9095:9095 -p 16000:16000 -p 16010:16010 -p 16201:16201 -p 16301:16301 -v /root/hadoop/hbase-site.xml:/hbase-1.2.6/conf/hbase-site.xml --env-file hbase_env --network generic-class-net -h hbase1.generic-class-net harisekhon/hbase:1.2
HBase on VM 2:
docker run -dit --name hbase2 -p 8080:8080 -p 8085:8085 -p 9090:9090 -p 9095:9095 -p 16000:16000 -p 16010:16010 -p 16201:16201 -p 16301:16301 -v /root/hadoop/hbase-site.xml:/hbase-1.2.6/conf/hbase-site.xml --env-file hbase_env --network generic-class-net -h hbase2.generic-class-net harisekhon/hbase:1.2
HBase on VM 3:
docker run -dit --name hbase3 -p 8080:8080 -p 8085:8085 -p 9090:9090 -p 9095:9095 -p 16000:16000 -p 16010:16010 -p 16201:16201 -p 16301:16301 -v /root/hadoop/hbase-site.xml:/hbase-1.2.6/conf/hbase-site.xml --env-file hbase_env --network generic-class-net -h hbase3.generic-class-net harisekhon/hbase:1.2
Once all the services are deployed, you can see the HBase status at http://<VM1 | VM2 | VM3 IP>:16010/master-status.
Published at DZone with permission of Nitin Ranjan, DZone MVB. See the original article here.