Linking Apache Ignite and Apache Kafka for Highly Scalable and Reliable Data Processing

Here's how to link Apache Kafka and Ignite, for maintaining scalability and reliability for data processing. We'll explore injecting data with KafkaStreamer, as well as IgniteSinkConnector.

By Roman Shtykh · Mar. 03, 16 · Analysis


Apache Ignite is a "high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with traditional disk-based or flash-based technologies." It has a vast, mature, and continuously growing arsenal of in-memory components to boost the performance and scalability of applications. In addition, it provides a great number of integrations, including Apache Spark, Hadoop, Apache Flume, Apache Camel, and others.

Both Apache Ignite and Apache Kafka are known for their high scalability and reliability, and in this article I will describe how you can easily link Ignite to the Apache Kafka messaging system to build a robust data processing pipeline. I will present two solutions available out of the box: KafkaStreamer, and IgniteSinkConnector, which is based on Kafka Connect, a recently released feature of Apache Kafka.

Injecting Data via KafkaStreamer

Fetching data from Kafka and injecting it into Ignite for further in-memory processing has been possible since Ignite 1.3 via KafkaStreamer, an implementation of IgniteDataStreamer that uses Kafka's consumer to pull data from Kafka brokers and efficiently place it into Ignite caches.

To use it, you first have to add the KafkaStreamer dependency to your pom.xml:

        <dependency>
            <groupId>org.apache.ignite</groupId>
            <artifactId>ignite-kafka</artifactId>
            <version>${ignite.version}</version>
        </dependency>

Assuming you have a cache with String keys and String values, streaming data into it is straightforward:

KafkaStreamer<String, String, String> kafkaStreamer = new KafkaStreamer<>();

try (IgniteDataStreamer<String, String> stmr = ignite.dataStreamer(null)) {
    // allow overwriting cache data
    stmr.allowOverwrite(true);

    kafkaStreamer.setIgnite(ignite);
    kafkaStreamer.setStreamer(stmr);

    // set the topic
    kafkaStreamer.setTopic(someKafkaTopic);

    // set the number of threads to process Kafka streams
    kafkaStreamer.setThreads(4);

    // set Kafka consumer configurations
    kafkaStreamer.setConsumerConfig(kafkaConsumerConfig);

    // set decoders
    kafkaStreamer.setKeyDecoder(strDecoder);
    kafkaStreamer.setValueDecoder(strDecoder);

    kafkaStreamer.start();
}
finally {
    kafkaStreamer.stop();
}

For the available consumer configuration options, refer to the Apache Kafka documentation.
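
For reference, the following is a minimal sketch of what the kafkaConsumerConfig and strDecoder used in the snippet above might look like. It assumes the old high-level consumer API shipped with Kafka 0.8.x; the ZooKeeper address and group id are placeholder values.

import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

Properties props = new Properties();
// the old high-level consumer locates brokers through ZooKeeper
props.put("zookeeper.connect", "localhost:2181");
props.put("group.id", "ignite-kafka-streamer");
// start from the earliest available offset when no committed offset exists
props.put("auto.offset.reset", "smallest");

ConsumerConfig kafkaConsumerConfig = new ConsumerConfig(props);

// decoder used for both keys and values in the example above
StringDecoder strDecoder = new StringDecoder(new VerifiableProperties());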

Injecting Data via IgniteSinkConnector

Starting with the upcoming Apache Ignite 1.6 release, yet another way to integrate your data processing becomes possible. It is based on Kafka Connect, a new feature introduced in Apache Kafka 0.9 that enables scalable and reliable streaming of data between Apache Kafka and other data systems. Such an integration enables continuous and safe streaming of data from Kafka to Ignite for computing and transacting on large-scale data sets in memory.

Fig 1. Data Continuously Injected via IgniteSinkConnector and Queried by Users.

The connector can be found in 'optional/ignite-kafka.' It and its dependencies have to be on the classpath of a running Kafka instance. You will also need to configure the connector, for instance, as follows:

# connector
name=my-ignite-connector
connector.class=org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
tasks.max=2
topics=someTopic1,someTopic2

# cache
cacheName=myCache
cacheAllowOverwrite=true
igniteCfg=/some-path/ignite.xml

where igniteCfg is set to the path of the Ignite configuration file, cacheName is the name of the cache specified in '/some-path/ignite.xml' into which the data from someTopic1 and someTopic2 will be streamed, and cacheAllowOverwrite is set to true to enable overwriting existing values in the cache.

Another important piece of configuration is the Kafka Connect worker properties, which for our simple example can look like this:

bootstrap.servers=localhost:9092

key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false

internal.key.converter=org.apache.kafka.connect.storage.StringConverter
internal.value.converter=org.apache.kafka.connect.storage.StringConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false

offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000

Now you can start the connector, for example, in standalone mode:

bin/connect-standalone.sh myconfig/connect-standalone.properties myconfig/ignite-connector.properties

Note that in distributed mode, myconfig/ignite-connector.properties is not passed on the command line; instead, use the REST API to create, modify, and destroy the connector.

Checking the Flow

For a very basic verification of the flow, we can perform the following quick check.

Start the ZooKeeper and Kafka servers:

bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties

Since our sample schema handles string data, we can feed key1,val1 via kafka-console-producer (this is done just for illustration; normally you would use a Kafka producer or source connectors to feed data). Note that the topic you produce to must be one of the topics configured for the connector:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --property parse.key=true --property key.separator=,
key1,val1

Then, start the connector as described.

That's all! The data ends up in the Ignite cache, ready for further in-memory processing or analysis via SQL queries (see the Apache Ignite docs).
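
As a quick sanity check from the Java side, you can join the cluster and read the key back. This is only a sketch, assuming the cache name myCache and the configuration path used in the connector example above.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

// join the cluster as a client rather than another server node
Ignition.setClientMode(true);
Ignite ignite = Ignition.start("/some-path/ignite.xml");

IgniteCache<String, String> cache = ignite.cache("myCache");

// should print "val1" once the connector has flushed the record
System.out.println("key1 -> " + cache.get("key1"));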


