
Amazon Redshift Spectrum: Diving Into the Data Lake!


Spectrum adds one more tool to your Redshift-based data warehouse. You can now use it to probe and analyze your data lake on an as-needed basis for a low per-query price.


Amazon’s Simple Storage Service (S3) has been around since 2006. Enterprises have been pumping their data into this data lake at a furious rate. Within ten years of its birth, S3 stored over two trillion objects, each up to five terabytes in size. Enterprises know their data is valuable and worth preserving. But much of this data lies inert in “cold” data lakes, unavailable for analysis: so-called “dark data.”

The Dark Data Problem. Source: Amazon AWS.

Analyzing “Dark Data”

So, what lies below the surface of these data lakes? The first step for enterprises is to find out what dark data they have accumulated; the next is to analyze it in search of valuable insights. That means analysts need solutions that give them access to petabytes of dark data.

With Amazon Redshift Spectrum, you can query data in Amazon S3 without first loading it into Amazon Redshift. For nomenclature purposes, we’ll use “Redshift” for “Amazon Redshift” and “Spectrum” for “Amazon Redshift Spectrum.”

Today, there are three major existing ways to access and analyze data in S3.

  1. Amazon Elastic MapReduce (EMR). EMR uses Hadoop-style queries to access and process large data sets in S3.
  2. Amazon Athena. Athena offers a console to query S3 data with standard SQL and no infrastructure to manage. Athena also has an API.
  3. Amazon Redshift. You can load data from S3 into an Amazon Redshift cluster for analysis.

So, why not use these existing options? For example, companies already use Amazon Redshift to analyze their “hot” data. So, why not load that cold data from S3 into Redshift and call it a day?

Two reasons:

  1. Effort. Loading data into Amazon Redshift involves extract, transform, and load (ETL) steps. Those steps are necessary to convert and structure data for analysis. Amazon estimates that figuring out the right ETL consumes 70% of an analytics project.
  2. Cost. You may not even know what data to extract until you have analyzed it a bit. Uploading lots of cold S3 data for analysis requires growing your clusters. That translates to paying more, as Redshift pricing is based on the size of your cluster. Meanwhile, you continue to pay S3 storage charges for retaining your cold data.

Redshift Spectrum offers the best of both worlds. With Spectrum, you can:

  • Continue using your analytics applications with the same queries you’ve written for Redshift.
  • Leave cold data as-is in S3 and query it via Amazon Redshift without ETL processing. That includes joining data from your data lake with data in Redshift, using a single query.
  • Decouple processing from storage. Because there’s no need to increase cluster size, you can save on Redshift storage.
  • Pay only when you run queries against S3 data. Spectrum queries cost a reasonable $5/terabyte of data processed.
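To put that price in perspective: a query that scans 200 GB of S3 data costs about $1 (0.2 TB × $5/TB), no matter how large your Redshift cluster is.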

Data Stack with Amazon Redshift, Amazon Redshift Spectrum, Amazon Athena, AWS Glue, and S3.

Spectrum is the “glue” or “bridge” layer that provides Redshift an interface to S3 data. Redshift becomes the access layer for your business applications. Spectrum is the query processing layer for data accessed from S3. The above picture illustrates the relationship between these services.

A Closer Look at Redshift Spectrum

From a deployment perspective, Spectrum works “under the hood.” It’s a fleet of Amazon-managed nodes, available to any of your Redshift clusters that are Spectrum-enabled. Redshift pushes compute-intensive tasks down to the Spectrum layer, which is independent of your Amazon Redshift cluster.

There are three key concepts to understand how to run queries with Redshift Spectrum:

  1. External data catalog
  2. External schemas
  3. External tables

The external data catalog contains the schema definitions for the data you wish to access in S3. It’s a central metadata repository for your data assets. Potential options for your data catalog include:

  • The Amazon Athena data catalog
  • The AWS Glue Data Catalog
  • Your own Apache Hive metastore (for example, on Amazon EMR)

The external schema contains your tables. External tables allow you to query data in S3 using the same SELECT syntax as with other Amazon Redshift tables. External tables are read-only, i.e. you can’t write to an external table.
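As a rough sketch, here is what that setup might look like, assuming an Athena/Glue data catalog; the names (spectrum_schema, clicks, the bucket, and the IAM role) are made up for illustration:

    CREATE EXTERNAL SCHEMA spectrum_schema
    FROM DATA CATALOG
    DATABASE 'spectrum_db'
    IAM_ROLE 'arn:aws:iam::123456789012:role/mySpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;

    -- External table over raw CSV clickstream files in S3; read-only
    CREATE EXTERNAL TABLE spectrum_schema.clicks (
        session_id VARCHAR(64),
        page       VARCHAR(256),
        ts         TIMESTAMP
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE
    LOCATION 's3://my-bucket/clickstream/';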

You can keep writing your usual Redshift queries. The main change with Spectrum is that the queries now also contain a reference to data stored in S3.
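For instance, a query against the hypothetical clicks table above reads like any other Redshift query; the schema name is the only hint that the data lives in S3:

    SELECT page, COUNT(*) AS views
    FROM spectrum_schema.clicks   -- scanned from S3 by Spectrum
    GROUP BY page
    ORDER BY views DESC
    LIMIT 10;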

Joining Internal and External Tables

The Redshift query engine treats internal and external tables the same way. You can do the typical operations like queries and joins on either type of table. Or a combination of both: Query an external table and join its data with that from an internal one.

An example: You are using Redshift to analyze data of your e-commerce site visitors — what pages they visit, how long they stay, what they buy (or not), etc. You keep a year’s worth of data in your Redshift clusters. Older data, you move to S3.

Then, you notice an odd seasonal variation. You want to see whether the same pattern held in past years or is an aberration unique to this year. Luckily, you have saved historic clickstream data in S3, going back many years. You can now access that historic data via an external table with Spectrum and run the same queries you’re running in Amazon Redshift. Or, create new insights by joining past data with this year’s, as sketched below.
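Such a query can combine both sources in a single statement. This sketch assumes an internal table public.pageviews holding this year’s data alongside the hypothetical external clicks table from earlier:

    -- Monthly page views across current (internal) and historic (external) data
    SELECT DATE_PART(month, ts) AS month, COUNT(*) AS views
    FROM (
        SELECT ts FROM public.pageviews        -- this year, stored in Redshift
        UNION ALL
        SELECT ts FROM spectrum_schema.clicks  -- past years, stored in S3
    ) AS all_views
    GROUP BY 1
    ORDER BY 1;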

Redshift parses, compiles, and distributes a SQL query to the nodes in a cluster the usual way. The part of the query that references an external table gets sent to Spectrum. Spectrum processes the relevant data in S3 and sends the result back to Redshift. Redshift collects the partial results from its nodes and from Spectrum, merges and joins them as needed, and returns the complete result.

Summary

A few points to keep in mind when working with Spectrum:

  • Your business applications remain unchanged and don’t know how or where a query is running. The only change for the business analyst comes when defining access to external tables.
  • External data remains in S3; there is no ETL to load it into your Redshift cluster. That decouples your storage layer (S3) from your processing layer (Redshift and Spectrum).
  • You don’t need to increase the size of your Redshift cluster to process data in S3. You only pay for the S3 data your queries actually access.
  • Redshift does all the hard work of minimizing the number of Spectrum nodes needed to access the S3 data. It also makes processing between Redshift and Spectrum efficient.

You should also do the homework to ensure that processing of data in S3 is economical and efficient. You can save on costs and get better performance if you partition the data, compress data, or convert it to columnar formats such as Apache Parquet.
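For illustration, here is a partitioned, Parquet-backed variant of the hypothetical table from earlier (the names and S3 layout are again assumptions):

    CREATE EXTERNAL TABLE spectrum_schema.clicks_parquet (
        session_id VARCHAR(64),
        page       VARCHAR(256),
        ts         TIMESTAMP
    )
    PARTITIONED BY (event_date DATE)
    STORED AS PARQUET
    LOCATION 's3://my-bucket/clickstream-parquet/';

    -- Register each partition so queries can prune by date
    ALTER TABLE spectrum_schema.clicks_parquet
    ADD PARTITION (event_date = '2017-01-01')
    LOCATION 's3://my-bucket/clickstream-parquet/event_date=2017-01-01/';

Queries that filter on event_date then scan only the matching partitions, which reduces both the per-terabyte cost and the query time.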

In summary, Spectrum adds one more tool to your Redshift-based data warehouse investment. You can now use its power to probe and analyze your data lake on an as-needed basis for a very low per-query price.


