Confluent Kafka Installation and Demo

Learn how to insert rows into a SQLite database and see those rows appear on an auto-created Kafka topic via the JDBC source connector.

By Ersin Gulbahar · Oct. 26, 21 · Tutorial

About Kafka and Confluent

Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance streaming, data pipelines, and mission-critical applications. Kafka is maintained by the Apache Software Foundation and written in Scala and Java.

Confluent Open Source is a developer-optimized distribution of Apache Kafka. Confluent Platform is a full-scale data streaming platform that lets you easily access, store, and manage data as continuous, real-time streams. In short, Confluent is a more complete distribution of Apache Kafka that streamlines administrative operations.

Confluent Kafka Installation and Demo

The goal is to insert rows into a SQLite database and see those rows appear on an auto-created Kafka topic via the JDBC source connector.

Environment: Red Hat Linux 7.x

  1. Download the tarball:
    curl -O http://packages.confluent.io/archive/6.0/confluent-6.0.1.tar.gz
  2. Extract the tar.gz:
    tar -xvf confluent-6.0.1.tar.gz
  3. Define the Confluent environment variables:
    export CONFLUENT_HOME=/mydata/myuser/confluent-6.0.1
    export PATH=$PATH:$CONFLUENT_HOME/bin
  4. Install the kafka-connect-datagen and kafka-connect-jdbc connectors via confluent-hub:
    $CONFLUENT_HOME/bin/confluent-hub install --no-prompt confluentinc/kafka-connect-datagen:latest
    confluent-hub install confluentinc/kafka-connect-jdbc:10.0.1

    Once the services are running (step 6), you can see the cluster and the installed connectors in Confluent Control Center in your browser:

    http://localhost:9021/clusters
  5. Define the JDBC source properties file (source-quickstart-sqlite.properties, loaded in step 7) under /mydata/myuser/confluent-6.0.1/etc/kafka-connect-jdbc/:
    name=test-sqlite-jdbc-autoincrement
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    value.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter.schemas.enable=false
    tasks.max=1
    connection.url=jdbc:sqlite:test.db
    mode=incrementing
    incrementing.column.name=id
    topic.prefix=turkcell.sqlite-jdbc-
  6. Bring the local Confluent services up:
    confluent local services connect start

    The output looks like this: [Screenshot: ZooKeeper and Kafka startup output]

  7. Load the jdbc-source connector:
    confluent local services connect connector load jdbc-source -c /mydata/myuser/confluent-6.0.1/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties
  8. Create the SQLite database, create a table, and insert a row:
    cd confluent-6.0.1/
    sqlite3 test.db
    sqlite> CREATE TABLE ttech(id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, name VARCHAR(255));
    sqlite> INSERT INTO ttech(name) VALUES('turkcell');

    You can see the inserted row (for example, with SELECT * FROM ttech;): [Screenshot: Kafka row]

  9. Look at the Connect log to see whether the JDBC source connector fails or works successfully:
    confluent local services connect log
  10. Finally, look at the Kafka topic to see your newly added record. I use Kafka Tool to check (a command-line alternative is sketched right after this list):

[Screenshot: Kafka record in Kafka Tool]
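If you prefer the command line to Kafka Tool, the record can also be read with the console consumer that ships in the Confluent bin directory. This is a minimal sketch, assuming the local quickstart broker listens on localhost:9092 and that the JDBC source connector named the topic turkcell.sqlite-jdbc-ttech (the topic.prefix from step 5 followed by the table name):

    # Read all messages on the auto-created topic from the beginning.
    # localhost:9092 is an assumption: the default address of the local quickstart broker.
    $CONFLUENT_HOME/bin/kafka-console-consumer \
      --bootstrap-server localhost:9092 \
      --topic turkcell.sqlite-jdbc-ttech \
      --from-beginning

Each inserted row should show up as a JSON message, since the source configuration above uses the JsonConverter with schemas disabled.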

Some Useful Commands and Screenshots

  1. See the connector list: confluent local services connect connector --list
  2. See a connector's config: confluent local services connect connector config jdbc-source
  3. Unload a connector: confluent local services connect connector unload jdbc-source
  4. See the Connect log: confluent local services connect log
  5. Change the formatter from Avro to JSON in this file: /mydata/myuser/confluent-6.0.1/etc/kafka/connect-distributed.properties (the relevant converter lines are sketched after this list)
  6. If you use Schema Registry:
    • Add a key schema for the table:
      curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" --data '{"schema": "{\"type\":\"record\",\"name\":\"your_table\",\"fields\":[{\"name\":\"ID\",\"type\":\"long\"}]}"}' http://localhost:8091/subjects/your_table-key/versions
    • Check that it is registered:
      curl -X GET http://localhost:8091/subjects
    • Add a sink connector to sync the topic to an Oracle table:
      curl -X POST --header "Content-Type: application/json" localhost:8083/connectors -d '{
          "name": "sink_my_table",
          "config": {
              "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
              "tasks.max": 3,
              "connection.url": "jdbc:oracle:thin:@my.db:1961:MYSERVICE",
              "connection.user": "ora_user",
              "connection.password": "XXXXXX",
              "table.name.format": "my_table",
              "topics": "my_table",
              "auto.create": "false",
              "delete.enabled": "true",
              "pk.mode": "record_key",
              "pk.fields": "ID",
              "insert.mode": "upsert",
              "transforms": "TimestampConverter1,TimestampConverter2,TimestampConverter3",
              "transforms.TimestampConverter1.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
              "transforms.TimestampConverter1.field": "RECORDDATE",
              "transforms.TimestampConverter1.target.type": "Timestamp",
              "transforms.TimestampConverter2.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
              "transforms.TimestampConverter2.field": "STARTDATE",
              "transforms.TimestampConverter2.target.type": "Timestamp",
              "transforms.TimestampConverter3.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
              "transforms.TimestampConverter3.field": "ENDDATE",
              "transforms.TimestampConverter3.target.type": "Timestamp"
          }
      }'
    • Look at the connector config:
      curl -X GET http://localhost:8083/connectors/sink_my_table
    • Look at the connector status (a restart sketch follows this list):
      curl -X GET http://localhost:8083/connectors/sink_my_table/status
    • Delete the connector:
      curl -X DELETE http://localhost:8083/connectors/sink_my_table
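Regarding item 5 above: the Avro-to-JSON switch is done through the Connect worker's converter settings. A minimal sketch of the lines to adjust in connect-distributed.properties, assuming you want plain JSON without the embedded schema envelope:

    # Use the JSON converter for keys and values instead of Avro.
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    # Optional: drop the schema envelope from each message.
    key.converter.schemas.enable=false
    value.converter.schemas.enable=false

Restart the Connect worker after editing the file so the new converters take effect.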
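If the status call above reports a FAILED connector or task, the Connect REST API can restart it. This is a sketch, assuming the same worker at localhost:8083 and the sink_my_table connector from the example:

    # Restart the connector instance (in the Connect version bundled with Confluent 6.x this does not restart its tasks).
    curl -X POST http://localhost:8083/connectors/sink_my_table/restart
    # Restart an individual task by id (task 0 here).
    curl -X POST http://localhost:8083/connectors/sink_my_table/tasks/0/restart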

[Screenshots: Confluent Control Center overview page, topics, connect clusters, and the JDBC source connector]

Hope it helps you!


Opinions expressed by DZone contributors are their own.
