
Real-time Data Pipelines with Kafka Connect


Information about Kafka Connect sourced from Spark Summit East 2016.


Ewen Cheslack-Postava from Confluent gave a very interesting talk about Kafka Connect: "Building Realtime Data Pipelines with Kafka Connect and Spark Streaming". Kafka Connect runs as a cluster alongside Kafka to move data in and out of it; you can install it from Confluent.

Kafka Connect works with Spark Streaming to let you ingest and process a constant stream of data. Ewen used the example of streaming rows from a database as they change, but you can also ingest logs, Twitter streams, or anything else that's changing.

You can aggregate and join different streams of data for your application. You don't want one-off tools, and you don't want to copy data around; with Kafka Connect and Spark Streaming you get one consistent solution for all types of data. This works nicely with PostgreSQL.

ETL tends to combine everything into one big, hacky set of activities. As in all development, we like separation of concerns when building pipelines. Usually your options are ad hoc tooling with a lot of overhead, or very weak ETL abstractions with no guarantees for your particular data type. With Kafka Connect you can easily build a scalable ETL pipeline without all that hackiness and specialization.

Kafka Connect is a large-scale streaming data import/export tool for Kafka, and it is an open source part of the Apache Kafka project. The concepts are simple: there are two modes. A source connector copies data into Kafka; a sink connector gets data out of Kafka.

The key thing you need in your data is a way to identify whether a particular row or piece of data has been processed. You need an offset, which, if you have used Kafka, you will understand why. If your source is a database table, you need an offset such as a timestamp to generate a stream to Kafka. You can also use a sequence-number ID, or a combination of fields that is unique and easy for Kafka to track.

For the HDFS sink, Kafka Connect reads from Kafka topics and streams them to a directory as files written in chunks, with each partition mapped to a sequence of files in HDFS labelled by offsets. That is a logical layout for HDFS and easy to work with from your Spark jobs.
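As a concrete sketch, a standalone source connector for the database example might be configured like this. This assumes Confluent's JDBC source connector; the connection URL, credentials, column names, and topic prefix are all placeholders, not values from the talk:

```properties
name=postgres-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=2
connection.url=jdbc:postgresql://localhost:5432/mydb
connection.user=connect
connection.password=secret
# Use a timestamp plus an incrementing id as the offset,
# so only new or changed rows are re-read
mode=timestamp+incrementing
timestamp.column.name=updated_at
incrementing.column.name=id
topic.prefix=postgres-
```

Each table then streams to its own topic named with the `postgres-` prefix.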

What is nice, and fits well into the Spark developer mindset, is that you can handle batch and streaming workloads with the same code.

ETL is broken up into a separation of concerns: extraction is done from the source system, transformation is done in Spark Streaming (and some in Kafka Connect), and loading is done on the sink side by Kafka Connect. You reduce the number of tools you need to learn, operate, and maintain.

Mapping is defined by the input system: in Kafka it is by partition; if an RDBMS is your source, it is defined by the database.

Internally, data is key-value pairs of byte arrays, with a generic data API on top providing an abstraction for connectors. The schema is stored so it can be reused on extract. Each event has an offset; you could copy an entire database by partitioning by table, with each event being an updated row. Again, the offset must be meaningful to the table, marking what data has been processed: an auto-incrementing primary key ID, a timestamp, or combined fields.
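The offset idea is easy to see in a toy sketch. This is plain Python with no Kafka involved; the table shape and poll function are illustrative assumptions, not Connect's actual code. The point is that each poll resumes from the last committed offset, so already-processed rows are not re-emitted after a restart:

```python
# Toy model of a source connector polling a table with an
# auto-incrementing primary key as the offset.

def poll_new_rows(table, last_offset):
    """Return rows past last_offset, plus the new offset to commit."""
    new_rows = [row for row in table if row["id"] > last_offset]
    new_offset = max((row["id"] for row in new_rows), default=last_offset)
    return new_rows, new_offset

table = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]

first, offset = poll_new_rows(table, last_offset=0)        # emits both rows
table.append({"id": 3, "name": "carol"})                   # a new row arrives
second, offset = poll_new_rows(table, last_offset=offset)  # emits only id 3
```

A timestamp column or a combination of fields plays the same role as `id` here: anything monotonic and unique enough to mark what has already been copied.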

What really makes this great for ETL is that there is built-in parallelism in Kafka Connect, so you have a scalable data-copying system that builds on the Spark and Kafka you already use. Copying is broad: it is easy to grab an entire database or all of your logs at once, which makes it great for the constant stream of data that is becoming more and more the norm. Kafka Connect breaks work down into tasks that run in parallel threads, each working on partitions.




You have two options for running Kafka Connect. The standalone (agent) execution model is a single process on a single machine with one configuration file. This is good for testing and development, and it is sometimes necessary when a source sits on a machine that cannot be accessed remotely, say a web log file on a server.

The other execution model is distributed mode. This provides elasticity: tasks are balanced across workers and monitored, and if a worker dies its tasks are automatically rebalanced. Kafka Connect can run on Mesos, YARN, or Kubernetes. Distributed mode reuses the Kafka consumer group functionality, so it is tested and stable. With this kind of setup you can run "Data Integration as a Service" in your enterprise. Kafka Connect includes a REST API to the Connect cluster, so you have a way to transform, aggregate, and join different data sources with one consistent API.

As with Kafka, there are delivery guarantees. Kafka Connect provides automatic offset checkpointing and recovery and supports at-least-once delivery: on restart, a task checks its offsets and rewinds. Swapping the order of write and commit gives at-most-once instead. For HDFS, you can even guarantee exactly-once delivery.
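In distributed mode a connector is submitted as JSON through the REST API rather than a local properties file. A minimal sketch for an HDFS sink, assuming Confluent's HDFS connector; the connector name, topic, and namenode host are placeholders:

```json
{
  "name": "hdfs-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "4",
    "topics": "postgres-users",
    "hdfs.url": "hdfs://namenode:8020",
    "flush.size": "1000"
  }
}
```

You would POST this to a worker (by default on port 8083, e.g. `http://localhost:8083/connectors`), and the cluster balances the four tasks across its workers.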

Kafka Connect is a great open source project that I recommend evaluating for your enterprise.


Topics:
kafka ,logs ,nosql ,spark ,big data ,spark streaming

Opinions expressed by DZone contributors are their own.
