Ingesting Golden Gate Records From Apache Kafka and Automagically Populating Any JDBC Tables

As long as they have proper header data and records in JSON, it's really easy to process any number of table changes sent from tools via Apache Kafka in Apache NiFi.

By Tim Spann (DZone Core) · Jan. 03, 2018 · Tutorial


Sometimes, you need to process any number of table changes sent from tools via Apache Kafka. As long as they have proper header data and records in JSON, it's really easy in Apache NiFi.

Requirements:

  1. Process each partition separately.
  2. Process records in order, as each message is an insert, update, or delete to an existing table in our receiving JDBC store.
  3. Re-process if data is lost.

The main processor for routing must only run on the Primary Node.
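Since each partition gets its own processing branch, it helps to confirm up front how many partitions the topic has. Assuming the goldengate topic created later in this article, a quick check from the Kafka install directory is:

./bin/kafka-topics.sh --describe \
    --zookeeper localhost:2181 \
    --topic goldengate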

Enforcing Order

We use kafka.offset to order the records, which matches the natural ordering of messages within an Apache Kafka partition.

After insert, update, and delete queries are built, let's confirm and enforce that strict ordering.

To further confirm in-order processing, we set each connection in the flow to use the FirstInFirstOutPrioritizer.

We route each partition to a different processor group (one local, the other remote):

Let's store some data in HDFS for each table:

Connect to Kafka and grab from our topic:

Let's connect to our JDBC store:

Let's do an update (table name is dynamic):

The Jolt processor has an awesome tester for trying out Jolt:

Make sure we connect our remote partitions:

Routing from routing server (Primary Node):

For processing partition 0 (run on the routing server):

We infer the schema with InferAvroSchema, so we don't need to know the embedded table layouts before a record arrives, and we name the Avro record after the table dynamically. In production, it makes sense to know all of these schemas in advance, version them, and run integration tests against them; this is where the Hortonworks Schema Registry is awesome, since we can store permanent schemas there and retrieve them at runtime.
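As a reference point, the schema inferred for the sample record used later in this article would look roughly like the following. The field names come from that record, THETABLE is the placeholder table name used below, and the exact types InferAvroSchema picks may differ:

{
  "type": "record",
  "name": "THETABLE",
  "fields": [
    {"name": "ID", "type": "long"},
    {"name": "GD", "type": "string"},
    {"name": "DPTID", "type": "long"},
    {"name": "FIRSTNAME", "type": "string"},
    {"name": "LAST", "type": "string"}
  ]
}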

Process partition 1 (we can have one server or cluster per partition):

Process the partition 1 Kafka records from the topic:

This flow converts our embedded JSON table record into a parameterized SQL statement:

Input: {"ID":2001,"GD":"F","DPTID":2,"FIRSTNAME":"Tim","LAST":"Spann"}
Output: INSERT INTO THETABLE (ID, GD, DPTID, FIRSTNAME, LAST) VALUES (?, ?, ?, ?, ?)
Example attributes:
sql.args.5.value = Spann
sql.table = THETABLE

All of the fields become parameters, giving us a SQL injection-safe, parameterized insert, update, or delete based on the control fields sent.
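For the sample input above, the outgoing flow file would carry roughly this attribute set (the values come straight from the JSON; ConvertJSONtoSQL also writes a matching sql.args.N.type attribute for each column holding its JDBC type code, which depends on the target table):

sql.args.1.value = 2001
sql.args.2.value = F
sql.args.3.value = 2
sql.args.4.value = Tim
sql.args.5.value = Spann
sql.table = THETABLE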

Golden Gate messages:

{"table": "SCHEMA1.TABLE7","op_type": "I","op_ts": "2017-11-01 04:31:56.000000","current_ts": "2017-11-01T04:32:04.754000","pos": "00000000310000020884","after": {"ID":1,"CODE": "B","NAME":"STUFF","DESCR" :"Department","ACTIVE":1}}

Using a simple EvaluateJsonPath, we pull out these control fields; for example, $.before.
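A typical property set for that EvaluateJsonPath (Destination set to flowfile-attribute; for the before and after objects, set Return Type to json) might look like this, where the attribute names on the left are our choice:

table = $.table
op_type = $.op_type
op_ts = $.op_ts
current_ts = $.current_ts
pos = $.pos
before = $.before
after = $.after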

The table name for ConvertJSONtoSQL is ${table:substringAfter('.')}, which strips the leading schema/tablespace name. For each of the three ConvertJSONtoSQL processors, we pick UPDATE, INSERT, or DELETE from the Statement Type drop-down based on the op_type.
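To get each record to the right one of those three processors, a RouteOnAttribute with one relationship per operation type works; a sketch, with illustrative property names:

insert = ${op_type:equalsIgnoreCase('I')}
update = ${op_type:equalsIgnoreCase('U')}
delete = ${op_type:equalsIgnoreCase('D')}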

We follow this with a PutSQL, which will execute on our destination JDBC database sink.

After that, I collect all the attributes, convert them to a JSON flow file, and save that to HDFS for logging and reporting. This step could be skipped or could be in another format or sent elsewhere.

Control Fields

  • pos: Position

  • table: Table to update in the data warehouse

  • current_ts: Timestamp

  • op_ts: Timestamp

  • op_type: Operation type (I=insert, U=update, D=delete)

Important Apache NiFi system fields:

  • kafka.offset

  • kafka.partition

  • kafka.topic

We can route and process these for special handling.
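For instance, the Primary Node router described above can split traffic per partition with a RouteOnAttribute along these lines (the relationship names are illustrative):

partition0 = ${kafka.partition:equals('0')}
partition1 = ${kafka.partition:equals('1')}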

To create HDFS directories for changes:

su hdfs
hdfs dfs -mkdir -p /new/T1
hdfs dfs -mkdir -p /new/T2
hdfs dfs -mkdir -p /poc/T3
hdfs dfs -chmod -R 777 /new
hdfs dfs -ls -R /new

To create a test Apache Kafka topic:

./bin/kafka-topics.sh --create \
    --zookeeper localhost:2181 \
    --replication-factor 1 \
    --partitions 2 \
    --topic goldengate
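To push a test message into that topic by hand, the standard console producer works; paste one JSON record per line (the broker address is an assumption for a local install):

./bin/kafka-console-producer.sh \
    --broker-list localhost:9092 \
    --topic goldengate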

Creating a MySQL database as the destination JDBC store:

wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.45.tar.gz
mysql
create database mydw;
CREATE USER 'nifi'@'%' IDENTIFIED BY 'MyPassWordIsSoAwesome!!!!';
GRANT ALL PRIVILEGES ON *.* TO 'nifi'@'%' WITH GRANT OPTION;
commit;
SHOW GRANTS FOR 'nifi'@'%';


# Create some tables in the database for your records.


create table ALOG (
AID VARCHAR(1),
TIMESEC INT,
SOMEVAL VARCHAR(255),
PRIMARY KEY (AID, TIMESEC)
);
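If you also want a table that matches the sample Golden Gate record shown earlier (SCHEMA1.TABLE7, with the schema name stripped off by the flow), something along these lines would do; the column types are guesses from the sample values:

create table TABLE7 (
ID INT,
CODE VARCHAR(1),
NAME VARCHAR(255),
DESCR VARCHAR(255),
ACTIVE INT,
PRIMARY KEY (ID)
);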

Jolt filter:

Attribute: afterJolt
${op_type:equalsIgnoreCase("D"):ifElse("none", "after")}
Attribute: beforeJolt
${op_type:equalsIgnoreCase("D"):ifElse("before", "none")}

Jolt script to transform JSON:

[   {
    "operation": "shift",     
     "spec": {
      "${beforeJolt}": {
        "*": "&"
      },
      "${afterJolt}": {
        "*": "&"
      }
    }
  },   {
    "operation": "shift",
    "spec": {
      "*": "&"
    }
   } ]
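For an insert like the sample message above, afterJolt resolves to after and beforeJolt to none, so the shift lifts the after object to the top level and the transform output is simply:

{"ID":1,"CODE":"B","NAME":"STUFF","DESCR":"Department","ACTIVE":1}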

And that's it!

Topics: Kafka, Database

