Apache Phoenix: An SQL Driver for HBase

By Istvan Szegedi · May 21, 2014

Introduction

HBase is one of the most popular NoSQL databases; it is available in all major Hadoop distributions and is also part of AWS Elastic MapReduce as an additional application. Out of the box it offers its own data model operations such as Get, Put, Scan and Delete, but it does not offer SQL-like capabilities, as opposed to, for instance, Cassandra Query Language (CQL).
Apache Phoenix is a SQL layer on top of HBase that supports the most common SQL-like operations such as CREATE TABLE, SELECT, UPSERT, DELETE, etc. It was originally developed by Salesforce.com engineers for internal use and was then open sourced. In 2013 it became an Apache incubator project.

Architecture

We have covered HBase in more detail in this article. Just a quick recap: the HBase architecture is based on three key components: the HBase Master server, the HBase Region Servers and ZooKeeper.

[Figure: HBase architecture]

The client needs to find the Region Servers in order to work with the data stored in HBase. In essence, regions are the basic elements for distributing tables across the cluster. To find the Region Servers, the client first has to talk to ZooKeeper.

[Figure: HBase region lookup via ZooKeeper]

The key elements of the HBase data model are tables, column families, columns and rowkeys. Tables are made up of columns and rows. The individual elements at the column/row intersections (cells, in HBase terms) are versioned based on timestamp. Rows are identified by rowkeys, which are sorted; these rowkeys can be considered primary keys, and all the data in the table can be accessed via them.

Columns are grouped into column families; at table creation time you do not have to specify all the columns, only the column families. A column has a prefix derived from its column family plus its own qualifier, so a column name looks like this: 'contents:html'.
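
To make this concrete, here is a minimal sketch of the native, non-SQL client API in Java: a Put and a Get against a hypothetical webpages table with a contents column family (the table name, rowkey and value are illustrative; the API shown is the HBase 0.94-era client used elsewhere in this article):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseNativeClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
        HTable table = new HTable(conf, "webpages");      // hypothetical table with a 'contents' column family

        // Put: write the cell at rowkey 'com.example/index', column 'contents:html'
        Put put = new Put(Bytes.toBytes("com.example/index"));
        put.add(Bytes.toBytes("contents"), Bytes.toBytes("html"), Bytes.toBytes("<html>...</html>"));
        table.put(put);

        // Get: read the same cell back via its rowkey
        Get get = new Get(Bytes.toBytes("com.example/index"));
        Result result = table.get(get);
        System.out.println(Bytes.toString(result.getValue(Bytes.toBytes("contents"), Bytes.toBytes("html"))));

        table.close();
    }
}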

As we have seen, the classic HBase data model is not designed with SQL in mind; under the hood it is a sorted multidimensional map. That is where Phoenix comes to the rescue: it offers a SQL skin on HBase. Phoenix is implemented as a JDBC driver, so from an architecture perspective a Java client using JDBC can be configured to work with the Phoenix driver and connect to HBase using SQL-like statements. Later we will demonstrate how to use SQuirreL, a popular Java-based graphical SQL client, together with Phoenix.
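
As a sketch of what this looks like in practice (assuming the Phoenix client jar is on the classpath and the web_stat sample table created in the next section already exists), a plain JDBC program can upsert and query HBase data through Phoenix:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixJdbcClient {
    public static void main(String[] args) throws Exception {
        // Register the Phoenix JDBC driver and connect via the ZooKeeper quorum
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
        try {
            // UPSERT is Phoenix's combined INSERT/UPDATE, mirroring HBase put semantics.
            // Commit explicitly, since the JDBC connection may not auto-commit.
            PreparedStatement upsert = conn.prepareStatement(
                "UPSERT INTO web_stat VALUES (?, ?, ?, ?, ?, ?, ?)");
            upsert.setString(1, "na");
            upsert.setString(2, "example.com");        // sample row for illustration
            upsert.setString(3, "login");
            upsert.setDate(4, java.sql.Date.valueOf("2013-01-05"));
            upsert.setLong(5, 10L);
            upsert.setLong(6, 20L);
            upsert.setInt(7, 5);
            upsert.executeUpdate();
            conn.commit();

            // Query the data back with ordinary SQL
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery(
                "SELECT domain, SUM(active_visitor) FROM web_stat GROUP BY domain");
            while (rs.next()) {
                System.out.println(rs.getString(1) + ": " + rs.getLong(2));
            }
        } finally {
            conn.close();
        }
    }
}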

Getting Started With Phoenix

You can download Phoenix from the Apache download site. Different Phoenix versions are compatible with different HBase versions, so please read the Phoenix documentation to ensure you have the correct setup. In our tests we used Phoenix 3.0.0 with HBase 0.94; the Hadoop distribution was Cloudera CDH4.4 with Hadoop v1. The Phoenix package contains both Hadoop version 1 and version 2 drivers for the clients, so we had to use the appropriate hadoop-1 files; see the details later on when we talk about the SQuirreL client.

Once you have unzipped the downloaded Phoenix package, you need to copy the relevant Phoenix jar files to the HBase Region Servers to ensure that the Phoenix client can communicate with them; otherwise you may get an error message saying that the client and server jars are not compatible:

$ cd ~/phoenix/phoenix-3.0.0-incubating/common
$ cp phoenix-3.0.0-incubating-client-minimal.jar  /usr/lib/hbase/lib
$ cp phoenix-core-3.0.0-incubating.jar /usr/lib/hbase/lib

After copying the jar files, you need to restart the Region Servers.

Phoenix provides a command line tool called sqlline, a utility written in Python. Its functionality is similar to Oracle SQL*Plus or the MySQL command line tool: not too sophisticated, but it does the job for simple use cases.

Before you start using sqlline, you can create a sample database table, populate it and run some simple queries as follows:

$ cd ~/phoenix/phoenix-3.0.0-incubating/bin
$ ./psql.py localhost ../examples/web_stat.sql ../examples/web_stat.csv ../examples/web_stat_queries.sql

This will run a CREATE TABLE statement:

CREATE TABLE IF NOT EXISTS web_stat (
     host CHAR(2) NOT NULL,
     domain VARCHAR NOT NULL,
     feature VARCHAR NOT NULL,
     date DATE NOT NULL,
     usage.core BIGINT,
     usage.db BIGINT,
     stats.active_visitor INTEGER
     CONSTRAINT pk PRIMARY KEY (host, domain, feature, date)
);
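
(Note that the usage.core, usage.db and stats.active_visitor definitions place those columns into the usage and stats HBase column families, while the columns in the PRIMARY KEY constraint together form the rowkey.)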

Then it loads the data stored in the web_stat CSV file:

na,salesforce.com,login,2013-01-01 01:01:01,35,42,10
eu,salesforce.com,reports,2013-01-02 12:02:01,25,11,2
eu,salesforce.com,reports,2013-01-02 14:32:01,125,131,42
na,apple.com,login,2013-01-01 01:01:01,35,22,40
na,salesforce.com,dashboard,2013-01-03 11:01:01,88,66,44
...

and then runs a few sample queries on the table, e.g.:

-- Average CPU and DB usage by domain
SELECT domain, AVG(core) average_cpu_usage, AVG(db) average_db_usage
FROM web_stat
GROUP BY domain
ORDER BY domain DESC;

Now you can connect to HBase using sqlline:

$ ./sqlline.py localhost
..
Connecting to jdbc:phoenix:localhost
Driver: org.apache.phoenix.jdbc.PhoenixDriver (version 3.0)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
..
Done
sqlline version 1.1.2
0: jdbc:phoenix:localhost> SELECT COUNT(*) FROM web_stat;
+------------+
|  COUNT(1)  |
+------------+
| 39         |
+------------+
1 row selected (0.112 seconds)
0: jdbc:phoenix:localhost> SELECT host, SUM(active_visitor) FROM web_stat GROUP BY host;
+------+---------------------------+
| HOST | SUM(STATS.ACTIVE_VISITOR) |
+------+---------------------------+
| eu   | 698                       |
| na   | 1639                      |
+------+---------------------------+
2 rows selected (0.294 seconds)
0: jdbc:phoenix:localhost>

Using SQuirreL With Phoenix

If you prefer to use a graphical SQL client with Phoenix, you can download, e.g., SQuirreL from here. After that, the first step is to copy the appropriate Phoenix driver jar file to the SQuirreL lib directory:

$ cd ~/phoenix
$ cp phoenix-3.0.0-incubating/hadoop-1/phoenix-3.0.0-incubating-client.jar ~/squirrel/lib

Now you are ready to configure the JDBC driver in the SQuirreL client, as shown in the picture below:

[Figure: Phoenix JDBC driver configuration in SQuirreL]
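
In short, the driver definition consists of the Phoenix driver class name, org.apache.phoenix.jdbc.PhoenixDriver, a URL of the form jdbc:phoenix:<zookeeper-host> (both values also appear in the sqlline session above), and the copied client jar on the driver's classpath.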

Then you can connect to Phoenix using the appropriate connect string (jdbc:phoenix:localhost in our test scenario):

[Figure: SQuirreL connection settings for Phoenix]

Once connected, you can start executing your SQL queries:

[Figure: Executing SQL queries in SQuirreL]

Phoenix on Amazon Web Services: AWS Elastic MapReduce With Phoenix

You can also use Phoenix with AWS Elastic MapReduce. When you create a cluster, you need to specify the Apache Hadoop version, then configure HBase as an additional application and define the bootstrap action to load Phoenix onto your AWS EMR cluster. See the details in the pictures below:

[Figure: AWS EMR cluster configuration with HBase]

[Figure: AWS EMR bootstrap action loading Phoenix]

Once the cluster is running, you can log in to the master node using SSH and check your Phoenix configuration.

[Figure: Checking the Phoenix installation on the EMR master node]

Conclusion

SQL is one of the most popular languages used by data scientists, and it is likely to remain so. With the advent of big data and NoSQL databases, the volume, variety and velocity of data have increased significantly, yet the demand for traditional, well-known languages to process that data has not changed much. SQL-on-Hadoop solutions are gaining momentum, and Apache Phoenix is an interesting open source player offering a SQL layer on top of HBase.


Published at DZone with permission of Istvan Szegedi, DZone MVB. See the original article here.

