Introducing the Kafka Processor Client
How to effectively use the new Apache Kafka processor client for your data processing needs.
If you work on systems delivering large quantities of data, you have probably heard of Kafka, if you aren't using it already. At a very high level, Kafka is a fault-tolerant, distributed publish-subscribe messaging system designed for speed and the ability to handle hundreds of thousands of messages. Kafka has many applications, one of which is real-time processing. Real-time processing typically involves reading data from a topic (source), doing some analytic or transformation work, then writing the results to another topic (sink). Currently, to do this type of work your choices are:
- Using your own custom code, reading the data in with a KafkaConsumer and writing the results out with a KafkaProducer.
- Using a full-fledged stream-processing framework such as Spark Streaming, Flink, or Storm.
While both of those approaches are great, in some cases it would be nice to have a solution somewhere in the middle of those two. To that end, a Processor API was proposed via the KIP (Kafka Improvement Proposals) process. The aim of the Processor API is to introduce a client that enables processing data consumed from Kafka and writing the results back into Kafka. There are two components of the processor client:
- A "lower-level" processor that provides APIs for data processing, composable processing, and local state storage.
- A "higher-level" stream DSL that would cover most processor implementation needs.
This marks the start of a series covering the new Kafka processor client, with this post covering the "lower-level" processor functionality. In subsequent posts we'll cover the "higher-level" DSL and advanced use cases where we'll bring in other technologies. For a full description of the motivation and goals of the processor client, the reader is encouraged to read the original proposal. Disclaimer: I'm not affiliated with Confluent, just an avid user of Kafka.
Potential Use Cases for the Processor API
In my opinion, here are a few reasons the Processor API will be a very useful tool:
- There is a need for notifications/alerts on singular values as they are processed. In other words, the business requirements are such that you don't need to establish patterns or examine the value(s) in context with other data being processed. For example, you want an immediate notification that a fraudulent credit card has been used.
- You need to filter your data when running analytics. After filtering out a medium to large percentage of the data, the remainder ideally should be re-partitioned to avoid data-skew issues. Re-partitioning is an expensive operation, so by filtering the data before it is delivered to your analytics cluster, you can save the filter/re-partition step.
- You want to run analytics on only a portion of your source data, while delivering the entirety of your data to another store.
First Processor Example
In the first example, we'll transform fictitious customer purchase data with the following processors:
- A processor to mask credit card numbers.
- A processor to collect the customer name and the amount spent to use in a rewards program.
- A processor to collect the zip code and the item purchased to help determine shopping patterns.
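To make the first of those steps concrete, here is a minimal sketch of the masking logic such a processor could apply to each message. The class name, method name, and regex are assumptions for illustration, not the article's actual code:

```java
// Hypothetical helper illustrating the credit-card-masking step.
class CreditCardMasker {

    // Replace the first 12 digits of a 16-digit card number (digits may be
    // separated by spaces or dashes) with 'x' characters, keeping the last 4.
    static String maskCardNumber(String text) {
        return text.replaceAll("(?:\\d[ -]?){12}(\\d{4})", "xxxx-xxxx-xxxx-$1");
    }
}
```

A processor's `process` method would call a helper like this on each value before forwarding the result downstream.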
Let's briefly describe the structure of the processor objects. All three processors extend the AbstractProcessor class, which provides no-op overrides for the punctuate and close methods. In this example we just need to implement the process method, where the action is performed on each message. After the work is completed, the context().forward method is called, which forwards the modified/new key-value pair to downstream consumers. (The context method retrieves the ProcessorContext instance variable initialized in the parent class by the init method.) Then the context().commit method is called, committing the current state of the stream, including the message offset.
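The real types live in Kafka's processor package; to keep an example self-contained, here is a sketch of that same lifecycle using simplified stand-in classes (FakeContext and SketchProcessor are modeled on, but are not, Kafka's ProcessorContext and AbstractProcessor):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for Kafka's ProcessorContext: records forwarded key-value pairs.
class FakeContext {
    final List<String> forwarded = new ArrayList<>();
    void forward(String key, String value) { forwarded.add(key + "=" + value); }
    void commit() { /* a real context would commit the stream state here */ }
}

// Stand-in for AbstractProcessor: init() stores the context, punctuate()
// and close() default to no-ops, and subclasses implement process().
abstract class SketchProcessor {
    private FakeContext context;
    void init(FakeContext context) { this.context = context; }
    FakeContext context() { return context; }
    abstract void process(String key, String value);
    void punctuate(long timestamp) { }
    void close() { }
}

// A toy processor: transform the value, forward it, then commit.
class UpperCaseProcessor extends SketchProcessor {
    @Override
    void process(String key, String value) {
        context().forward(key, value.toUpperCase());
        context().commit();
    }
}
```

The shape mirrors the description above: the work happens in `process`, and `context().forward` hands the result to downstream nodes.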
Building the Graph of Processors
Now we need to define the DAG that determines the flow of messages; this is where "the rubber meets the road," so to speak, for the Processor API. To build our graph of processing nodes we use the TopologyBuilder. Although our messages are JSON, we need to define serializer and deserializer instances, since the processors work with typed objects. Here's the portion of the code from the PurchaseProcessorDriver that builds the graph topology, serializers, and deserializers.
```java
// Serializers/deserializers for the types used in the processors.
JsonDeserializer<Purchase> purchaseJsonDeserializer = new JsonDeserializer<>(Purchase.class);
JsonSerializer<Purchase> purchaseJsonSerializer = new JsonSerializer<>();
JsonSerializer<RewardAccumulator> rewardAccumulatorJsonSerializer = new JsonSerializer<>();
JsonSerializer<PurchasePattern> purchasePatternJsonSerializer = new JsonSerializer<>();
StringDeserializer stringDeserializer = new StringDeserializer();
StringSerializer stringSerializer = new StringSerializer();

TopologyBuilder topologyBuilder = new TopologyBuilder();

topologyBuilder.addSource("source", stringDeserializer, purchaseJsonDeserializer, "src-topic")
        .addProcessor("process", CreditCardAnonymizer::new, "source")
        .addProcessor("process2", PurchasePatterns::new, "process")
        .addProcessor("process3", CustomerRewards::new, "process")
        .addSink("sink", "patterns", stringSerializer, purchasePatternJsonSerializer, "process2")
        .addSink("sink2", "rewards", stringSerializer, rewardAccumulatorJsonSerializer, "process3")
        .addSink("sink3", "purchases", stringSerializer, purchaseJsonSerializer, "process");

// Use the TopologyBuilder and StreamingConfig to start the Kafka Streams process.
KafkaStreams streaming = new KafkaStreams(topologyBuilder, streamingConfig);
streaming.start();
```
There are several steps here, so let's do a quick walkthrough.
- In the addSource call we add a source node named "source" with a StringDeserializer for the keys, a JsonDeserializer genericized to work with Purchase objects, and one to N topics that will feed this source node. In this case we are using input from one topic, "src-topic".
Next we start adding processor nodes. The addProcessor method takes a String for the name, a ProcessorSupplier, and one to N parent nodes. Here the first processor is a child of the "source" node, but is a parent of the next two processors. A quick note about the syntax of our ProcessorSupplier: the code is leveraging method references, which can be used as lambda expressions for supplier instances in Java 8. The code goes on to define two more processors in a similar manner.
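That method-reference trick is plain Java 8 behavior: a constructor reference satisfies any single-method supplier interface, which is how something like CreditCardAnonymizer::new can serve as a ProcessorSupplier. A minimal illustration with the standard library's Supplier (DemoProcessor is a made-up class for the example):

```java
import java.util.function.Supplier;

// A trivial stand-in for a processor class.
class DemoProcessor {
    final String name = "demo";
}

class SupplierDemo {
    public static void main(String[] args) {
        // A constructor reference stands in for the Supplier; each call to
        // get() produces a fresh instance, which is exactly what the
        // topology needs when it instantiates processors.
        Supplier<DemoProcessor> supplier = DemoProcessor::new;
        DemoProcessor a = supplier.get();
        DemoProcessor b = supplier.get();
        System.out.println(a != b); // distinct instances
    }
}
```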
Finally we add sinks (output topics) to complete our messaging pipeline. The addSink method takes a String name, the name of a topic, a serializer for the key, a serializer for the value, and one to N parent nodes. In the three addSink calls we can see the JsonSerializer objects that were created earlier in the code.
Here's a diagram of the final result of the TopologyBuilder:
The Processor API is not limited to working with current values as they arrive, but is also capable of maintaining state to use in aggregation, summation, or joining messages that arrive later. To take advantage of stateful processing, create a KeyValueStore by using the TopologyBuilder.addStateStore method when creating the processing topology. There are two types of stores that can be created: (1) in-memory and (2) a RocksDB store, which takes advantage of off-heap memory. The choice of which one to use could depend on how long-lived the values will be: for a large number of fairly static values, RocksDB would be a good choice, but for short-lived entries, in-memory might be a better fit. The Stores class provides the serializer/deserializer instances when specifying String, Integer, or Long keys and values. But if you are using a custom type for the keys and/or values, you'll need to provide custom serializer and deserializer instances.
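The serializer/deserializer contract is simply an object-to-bytes round trip. Here is a sketch of that contract for String values, using simplified stand-in interfaces rather than Kafka's actual serialization types:

```java
import java.nio.charset.StandardCharsets;

// Simplified stand-ins for Kafka's serializer/deserializer interfaces.
interface SimpleSerializer<T>   { byte[] serialize(T data); }
interface SimpleDeserializer<T> { T deserialize(byte[] data); }

// A String codec comparable in spirit to StringSerializer/StringDeserializer:
// objects go to bytes on the way into the store and back out on reads.
class StringCodec implements SimpleSerializer<String>, SimpleDeserializer<String> {
    public byte[] serialize(String data) {
        return data.getBytes(StandardCharsets.UTF_8);
    }
    public String deserialize(byte[] data) {
        return new String(data, StandardCharsets.UTF_8);
    }
}
```

A custom type would implement the same two operations, typically via a JSON library, which is what the JsonSerializer/JsonDeserializer classes in the listings do.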
Stateful Processor Example
In this example we see the process method along with two overridden methods: init and punctuate. The process method extracts the stock symbol, updates/creates the trade information, then places the summary results in the store.
In the init method we are:
- Setting the ProcessorContext reference.
- Using the ProcessorContext.schedule method, which controls how frequently the punctuate method is executed; in this case it's every 10 seconds.
- Getting a reference to the state store created when constructing the TopologyBuilder (we'll see that part next).
The punctuate method iterates over all the values in the store, and if they have been updated within the last 11 seconds, the StockTransactionSummary object is sent to consumers.
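That punctuate logic boils down to a timestamp cutoff over the store's entries. Here is a sketch with a plain Map standing in for the KeyValueStore (the class and method names are assumptions; only the 11-second window comes from the description above):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Stand-in for a stored summary: a stock symbol plus its last-update time.
class Summary {
    final String symbol;
    final long lastUpdatedMs;
    Summary(String symbol, long lastUpdatedMs) {
        this.symbol = symbol;
        this.lastUpdatedMs = lastUpdatedMs;
    }
}

class PunctuateSketch {
    // Collect only the summaries updated within the last 11 seconds,
    // mirroring what punctuate forwards to downstream consumers.
    static List<String> recentlyUpdated(Map<String, Summary> store, long nowMs) {
        long cutoffMs = 11_000L;
        List<String> forwarded = new ArrayList<>();
        for (Summary s : store.values()) {
            if (nowMs - s.lastUpdatedMs <= cutoffMs) {
                forwarded.add(s.symbol);
            }
        }
        return forwarded;
    }
}
```

Using a cutoff slightly larger than the 10-second schedule interval ensures an entry updated just after the previous punctuate call is not missed.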
Constructing a TopologyBuilder With a State Store
As in the previous example, looking at the processor code is only half the story. Here's the section from the driver class that creates our TopologyBuilder, including the state store:
```java
TopologyBuilder builder = new TopologyBuilder();

JsonSerializer<StockTransactionSummary> stockTxnSummarySerializer = new JsonSerializer<>();
JsonDeserializer<StockTransactionSummary> stockTxnSummaryDeserializer = new JsonDeserializer<>(StockTransactionSummary.class);
JsonDeserializer<StockTransaction> stockTxnDeserializer = new JsonDeserializer<>(StockTransaction.class);
JsonSerializer<StockTransaction> stockTxnJsonSerializer = new JsonSerializer<>();
StringSerializer stringSerializer = new StringSerializer();
StringDeserializer stringDeserializer = new StringDeserializer();

builder.addSource("stocks-source", stringDeserializer, stockTxnDeserializer, "stocks")
       .addProcessor("summary", StockSummary::new, "stocks-source")
       .addStateStore(Stores.create("stock-transactions").withStringKeys()
               .withValues(stockTxnSummarySerializer, stockTxnSummaryDeserializer)
               .inMemory().maxEntries(100).build(), "summary")
       .addSink("sink", "stocks-out", stringSerializer, stockTxnJsonSerializer, "stocks-source")
       .addSink("sink-2", "transaction-summary", stringSerializer, stockTxnSummarySerializer, "summary");

System.out.println("Starting KafkaStreaming");
KafkaStreams streaming = new KafkaStreams(builder, streamingConfig);
streaming.start();
System.out.println("Now started");
```
For the most part, this is very similar code in terms of creating serializers, deserializers, and the topology builder, but there is one difference: the addStateStore call creates an in-memory state store (named "stock-transactions") to be used by the "summary" processor. The name passed to the Stores.create method is the same one we used in the processor's init method to retrieve the store. When specifying the keys we can use the convenience method withStringKeys, which requires no arguments, since Strings are a supported type. But since we are using a typed value, the withValues method is used, and we provide serializer and deserializer instances.
Running the Processors With Example Code
The examples shown here can be run against a live Kafka cluster. Instructions are provided in the GitHub repository for the blog.
So far we have covered the "lower-level" portion of the Processor API for Kafka. Hopefully you can see the usefulness and versatility this new API will bring to current and future users of Kafka. In the next post we'll cover the "higher-level" DSL API and additional topics such as joins and time-window functions. One final thing to keep in mind: the Processor API/Kafka Streams is a work in progress and will continue to change for a while.
Published at DZone with permission of Bill Bejeck, DZone MVB. See the original article here.