Introducing Hydrator Data Pipelines
Instead of operating on individual records, Hydrator now has the flexibility to operate at a record level or at a feed level. This paradigm shift allowed us to add three new plugin types that let users create complex data pipelines through the same simple drag and drop user interface.
Cask Hydrator lets you easily create ETL pipelines through a simple drag and drop user interface. We’ve found that our users like the simplicity of Hydrator, but often want to create pipelines that are more complex than simple transformations. For example, you may want to remove duplicate data, count how many records satisfy some criteria, or even run a machine learning algorithm. To support use cases like these, we made one fundamental change to Hydrator in CDAP 3.4: instead of operating only on individual records, Hydrator now has the flexibility to operate at a record level or at a feed (i.e., a collection of records) level. This paradigm shift allowed us to add three new plugin types (Aggregate, Compute, and Model) that let users create complex data pipelines through the same simple drag and drop user interface.
The first new plugin type is the Aggregate type, which consists of two phases. In the first phase, each input record is assigned to zero or more groups. In the second phase, each group is aggregated into zero or more output records. For example, Cask Hydrator includes a Deduplicate plugin that groups records by a subset of their fields, then chooses one record in each group as the canonical record. Let’s suppose the records input to the Deduplicate plugin have time_window, ticker, and price fields. You could configure the plugin to group by ticker and time_window, then pick the record with the highest price for each group. You can do this easily using the Hydrator UI:
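In Hydrator, this is pure configuration rather than code, but the two-phase group-then-aggregate logic can be sketched in plain Python (the `deduplicate` helper and its signature are illustrative, not a CDAP API):

```python
from collections import defaultdict

def deduplicate(records, group_fields, value_field):
    """Phase 1: assign each record to a group keyed by group_fields.
    Phase 2: keep the record with the highest value_field in each
    group as the canonical record."""
    groups = defaultdict(list)
    for record in records:
        key = tuple(record[field] for field in group_fields)
        groups[key].append(record)
    return [max(group, key=lambda r: r[value_field]) for group in groups.values()]

records = [
    {"time_window": 1, "ticker": "AAPL", "price": 100.0},
    {"time_window": 1, "ticker": "AAPL", "price": 101.5},
    {"time_window": 1, "ticker": "GOOG", "price": 700.0},
]
deduped = deduplicate(records, ["ticker", "time_window"], "price")
# two canonical records remain: AAPL at 101.5 and GOOG at 700.0
```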
Cask Hydrator also includes a GroupByAggregate plugin that can compute SQL-like aggregates. For example, with similar input data, you could group by ticker, then compute a count, sum, min, and max for each group. The plugin will group the records, then compute the aggregates as configured.
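The equivalent SQL-like aggregation can be sketched in plain Python (again, the `group_by_aggregate` helper is illustrative; in Hydrator you would configure this in the UI):

```python
from collections import defaultdict

def group_by_aggregate(records, group_field, value_field):
    """Group records by group_field, then compute count, sum, min,
    and max of value_field for each group."""
    groups = defaultdict(list)
    for record in records:
        groups[record[group_field]].append(record[value_field])
    return {
        key: {
            "count": len(values),
            "sum": sum(values),
            "min": min(values),
            "max": max(values),
        }
        for key, values in groups.items()
    }

records = [
    {"ticker": "AAPL", "price": 100.0},
    {"ticker": "AAPL", "price": 102.0},
    {"ticker": "GOOG", "price": 700.0},
]
stats = group_by_aggregate(records, "ticker", "price")
# stats["AAPL"] == {"count": 2, "sum": 202.0, "min": 100.0, "max": 102.0}
```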
You are also free to implement your own aggregator plugin. For example, it is straightforward to write a TopK aggregator that groups by a field and outputs the top k records in each group.
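To show how little logic such a custom aggregator needs, here is a plain-Python sketch of a TopK aggregator (a real plugin would implement the CDAP aggregator interface in Java; all names here are illustrative):

```python
import heapq
from collections import defaultdict

def top_k(records, group_field, value_field, k):
    """Group records by group_field, then keep the k records with the
    highest value_field in each group."""
    groups = defaultdict(list)
    for record in records:
        groups[record[group_field]].append(record)
    return {
        key: heapq.nlargest(k, group, key=lambda r: r[value_field])
        for key, group in groups.items()
    }

records = [
    {"ticker": "AAPL", "price": 100.0},
    {"ticker": "AAPL", "price": 102.0},
    {"ticker": "AAPL", "price": 103.0},
    {"ticker": "GOOG", "price": 700.0},
]
top = top_k(records, "ticker", "price", 2)
# top["AAPL"] holds the two highest-priced AAPL records
```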
The second new plugin type is the Model type, which allows you to construct and store machine learning models in Spark. This type of plugin is a sink, which means it has no output connected to later stages in your pipeline. The plugin receives all of its input records as an RDD, and can run any logic you could normally run in Spark. This makes the Spark sink a natural place to run and store the many different machine learning algorithms available in Spark.
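The shape of such a sink can be sketched in plain Python, assuming the input records arrive as a list rather than an RDD. A real plugin would typically call Spark MLlib; here the "model" is just a per-ticker mean price and the store is a dict, and none of the names below are CDAP or Spark APIs:

```python
def train_and_store(records, model_store):
    """Sketch of a Model-type (Spark sink) plugin: consume every input
    record, fit a simple model, and persist it. No records flow out."""
    totals = {}
    for record in records:
        count, total = totals.get(record["ticker"], (0, 0.0))
        totals[record["ticker"]] = (count + 1, total + record["price"])
    model_store["mean_price"] = {
        ticker: total / count for ticker, (count, total) in totals.items()
    }

store = {}
train_and_store(
    [{"ticker": "AAPL", "price": 100.0}, {"ticker": "AAPL", "price": 102.0}],
    store,
)
# store["mean_price"]["AAPL"] == 101.0
```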
The last new plugin type is the Compute type. A Compute plugin is similar to a Model plugin in that it runs using the Spark execution framework. The difference is that it must output an RDD, which allows it to be connected to other plugins in a Hydrator pipeline. This lets you leverage the power of Spark to create a wide variety of useful plugins. For example, you could load the classification model trained by another plugin to tag each input record with an additional category field, run a feature selection algorithm to filter out irrelevant records, or just sample your data. You can also easily write a plugin that normalizes a field to a value between 0 and 1: such a plugin computes the minimum and maximum values for a specific field, then uses those values to scale that field into the range from zero to one.
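That min-max scaling can be sketched in plain Python (a real Compute plugin would map over an RDD; the `normalize_field` helper is illustrative):

```python
def normalize_field(records, field):
    """Sketch of a Compute-type plugin: min-max scale `field` to [0, 1]
    across all records and return the transformed collection."""
    values = [record[field] for record in records]
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0  # guard against all values being equal
    return [{**record, field: (record[field] - lo) / span} for record in records]

scaled = normalize_field(
    [{"price": 100.0}, {"price": 250.0}, {"price": 400.0}],
    "price",
)
# prices become 0.0, 0.5, 1.0
```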
Since Cask Hydrator is built on top of the Cask Data Application Platform (CDAP), your data pipelines automatically get all the benefits that CDAP provides. You can configure different actions to run after your data pipeline run has finished. For example, you can send an email if the run failed, or run a database query if it succeeded. Metrics, logging, scheduling, and lineage all come out of the box. There is also no limit on how many of these new plugin types can be used in the same pipeline, and no restrictions on how they can be connected. Hydrator handles the magic of transforming your data pipeline into a workflow of Spark and MapReduce jobs. You can find more information on creating custom ETL plugins here. In an upcoming blog post, we will take a peek under the hood and examine how Hydrator transforms your data pipeline into a workflow.
Published at DZone with permission of Albert Shau. See the original article here.