Impala and SQL on Hadoop
The origins of Impala can be found in F1, the fault-tolerant distributed RDBMS supporting Google's ad business.
One of the many differences between MapReduce and Impala is that Impala streams intermediate data directly from process to process instead of writing it to HDFS for downstream processes to read. This provides a huge performance advantage while consuming fewer cluster resources: less hardware to do more!
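To make this concrete, consider a join-plus-aggregation query like the sketch below (the table and column names are hypothetical). On Hive with MapReduce, such a query would typically run as multiple jobs, each materializing its intermediate results to HDFS; Impala executes it as a single pipeline, streaming rows between its daemons.

```sql
-- Hypothetical schema: a clicks fact table joined to a campaigns
-- dimension table. In Impala this runs as one streamed query plan;
-- no intermediate results are persisted to HDFS between stages.
SELECT c.campaign_id,
       COUNT(*) AS click_count
FROM clicks c
JOIN campaigns m ON c.campaign_id = m.id
WHERE c.click_date >= '2014-01-01'
GROUP BY c.campaign_id;
```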
There are many advantages to this approach over alternative ways of querying Hadoop data, including:
- Thanks to local processing on data nodes, network bottlenecks are avoided.
- A single, open, and unified metadata store can be utilized.
- Costly data format conversion is unnecessary, so no conversion overhead is incurred.
- All data is immediately query-able, with no delays for ETL.
- All hardware is utilized for Impala queries as well as for MapReduce.
- Only a single machine pool is needed to scale.
We encourage you to read the documentation for further exploration!
There are still transformation steps required to optimize queries, but Impala can help you do this with the Parquet file format. Better compression and optimized runtime performance are realized using the Parquet format, though many other file types are supported.
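A minimal sketch of this conversion, assuming an existing text-format table (the table names here are hypothetical): Impala's CREATE TABLE ... AS SELECT with STORED AS PARQUET rewrites the data into Parquet's columnar layout in one step.

```sql
-- Hypothetical example: convert an existing text-format table
-- (clicks_text) into a Parquet-backed table in one statement.
CREATE TABLE clicks_parquet
STORED AS PARQUET
AS SELECT * FROM clicks_text;

-- Queries against the Parquet table read only the columns they
-- reference, benefiting from columnar storage and compression.
SELECT campaign_id, COUNT(*) AS click_count
FROM clicks_parquet
GROUP BY campaign_id;
```

Because Parquet is columnar, scans that touch a few columns of a wide table avoid reading the rest, which is where much of the runtime advantage comes from.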
Published at DZone with permission of Joe Stein, DZone MVB. See the original article here.