Content provided by Cockroach Labs

Using CockroachDB CDC with Confluent Cloud Kafka and Schema Registry

Follow along in this article as I document the steps required to set up CockroachDB CDC with Confluent Schema Registry.

by Artem Ervits · Apr. 21, 22 · Tutorial

Previous Articles on CDC:

  • SaaS Galore: Integrating CockroachDB with Confluent Kafka, FiveTran, and Snowflake
  • CockroachDB CDC Using Minio as Cloud Storage Sink - Part 3
  • CockroachDB CDC Using Hadoop Ozone S3 Gateway as Cloud Storage Sink

Motivation

I was working on a demo requiring data to be serialized in Avro format. CockroachDB supports Avro for change data capture when used in conjunction with Confluent Schema Registry. We have documentation on CDC with Avro using a local Schema Registry, but what is missing is coverage of the hosted Schema Registry on Confluent Cloud. We are going to address that gap in our documentation, but in the meantime, this tutorial should suffice.

As of this writing, the hosted setup has a few nuances, which I attempt to capture here.

This tutorial uses enterprise changefeeds, so you will need an enterprise license or access to a CockroachDB Dedicated environment.
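On a self-hosted cluster, the license is applied through cluster settings; a minimal sketch, with both values as placeholders for what Cockroach Labs issues you (Dedicated clusters have this handled for you already):

SET CLUSTER SETTING cluster.organization = '<your organization>';
SET CLUSTER SETTING enterprise.license = '<license key>';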

High-Level Steps

  • Deploy a Confluent Kafka cluster and Schema Registry
  • Deploy a CockroachDB cluster with enterprise changefeeds
  • Verify

Step-By-Step Instructions

Deploy a Confluent Kafka Cluster and Schema Registry

You will need a Confluent Cloud account; you can sign up for a free account using the following link.

Once you're done, create a cluster; you can follow the steps in my previous article ("SaaS Galore") linked at the beginning of this post, or use the CLI sketch below.
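If you prefer the CLI to the web console, creating a basic cluster might look roughly like this; the cluster name, cloud, and region are placeholders, and the exact flags can vary by CLI version.

confluent login
confluent kafka cluster create cdc-demo --cloud gcp --region us-east4 --type basic

With the cluster created, capture its ID and generate API keys: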

export KAFKA_CLUSTER=<cluster ID>
confluent kafka cluster use $KAFKA_CLUSTER
confluent api-key create --resource $KAFKA_CLUSTER
confluent api-key store --resource $KAFKA_CLUSTER --force
confluent kafka cluster describe $KAFKA_CLUSTER


Capture the endpoint from the console; we will need it to set up the changefeed in CockroachDB. It has the following form:

SASL_SSL://<confluent cloud kafka endpoint url>:9092

Next, create topics for the tables you want to watch. I'm going to use the following four TPC-C tables: stock, history, warehouse, district.

confluent kafka topic create stock --partitions 6
confluent kafka topic create warehouse --partitions 6
confluent kafka topic create history --partitions 6
confluent kafka topic create district --partitions 6 
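
To confirm the topics were created, you can list them:

confluent kafka topic list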


Create a Confluent Schema Registry instance if one doesn't already exist. Capture its endpoint, and generate an API key and secret for it.
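As a sketch, assuming the Confluent CLI syntax current at the time of writing (the cloud and geo values are placeholders, and the API key is created against the Schema Registry cluster ID shown by the describe command):

confluent schema-registry cluster enable --cloud gcp --geo us
confluent schema-registry cluster describe
confluent api-key create --resource <SR cluster ID>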

Finally, set up an Avro consumer with the above information in a new terminal window.

confluent api-key use <API Key> --resource $KAFKA_CLUSTER

confluent kafka topic consume stock \
 --value-format avro \
 --from-beginning \
 --sr-endpoint https://<Confluent Schema Registry url>.confluent.cloud \
 --sr-api-key <SR API Key> \
 --sr-api-secret <SR Secret>


Deploy a CockroachDB Cluster With Enterprise Changefeeds

You can spin up a Dedicated cluster using the following directions. My cluster is a 3-node cluster in GCP with AZ failure tolerance in us-east4.
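Once the cluster is up, you can connect with the SQL shell; the connection string here is a placeholder following the same pattern as the workload commands later in this post.

cockroach sql --url "postgresql://<user>@<Cockroach Cloud Dedicated url>:26257/defaultdb?sslmode=verify-full&sslrootcert=/path/certs/cluster-ca.crt"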

Enable changefeeds.

SET CLUSTER SETTING kv.rangefeed.enabled = true;
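
You can confirm the setting took effect:

SHOW CLUSTER SETTING kv.rangefeed.enabled;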


Create a changefeed pointing to the Kafka cluster and SR.

CREATE CHANGEFEED FOR TABLE stock, warehouse, district, history
  INTO "kafka://<confluent kafka cluster endpoint>:9092?tls_enabled=true&sasl_enabled=true&sasl_user=<Confluent Kafka API Key>&sasl_password=<Confluent Kafka Secret in url-encoded form>&sasl_mechanism=PLAIN"
  WITH updated, format = avro, confluent_schema_registry = "https://<SR API Key>:<SR Secret in url-encoded form>@<Confluent SR url>:443";


The key part of the command is that Confluent Schema Registry expects the Schema Registry API key and secret in basic auth form, not SASL. Once you fill that in, you should be off to the races.
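To confirm the changefeed is running, check its job status from the SQL shell (available on recent CockroachDB versions):

SHOW CHANGEFEED JOBS;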

The only thing that remains is to generate a workload. We are going to use the TPC-C workload bundled with the cockroach binary. In a new terminal window, run the following two commands.

Generate sample data:

cockroach workload fixtures import tpcc --warehouses=10 "postgresql://<user>@<Cockroach Cloud Dedicated url>:26257/tpcc?sslmode=verify-full&sslrootcert=/path/certs/cluster-ca.crt"     


Execute the workload:

cockroach workload run tpcc --warehouses=10 --ramp=3m --duration=1h "postgresql://<user>@<Cockroach Cloud Dedicated url>:26257/tpcc?sslmode=verify-full&sslrootcert=/path/certs/cluster-ca.crt"


Verify

At this point, if you switch to the terminal window with the Avro consumer, you will see records scrolling by quickly as it consumes change data from CockroachDB.

This completes this tutorial. I hope you found it useful.

