
How To Avoid “Schema Drift”

This article will explain the existing solutions and strategies to mitigate the challenge and avoid schema drift, including data versioning using LakeFS.

By Yaniv Ben Hemo · Feb. 03, 23 · Tutorial


We are all familiar with drift in app configuration and IaC. We start with a specific configuration backed by IaC files; soon after, we face a “drift,” a gap between what is actually configured in our infrastructure and what our files describe. The same behavior happens with data. The schema starts out in a certain shape, but as data ingestion grows and scales across different sources, we get schema drift: a messy, unstructured database, UI crashes, and an analytical layer that keeps failing because of a bad schema. In this article, we will learn how to deal with that scenario and how to work with dynamic schemas.

Schemas Are a Major Struggle

A schema defines the structure of a data format. Keys, values, formats, and types, in combination, result in a defined structure, or simply a schema. Developers and data engineers: have you ever needed to recreate a NoSQL collection, or recreate an object layout on a bucket, because different documents arrived with different keys or structures? You probably have. An unaligned record structure across your data ingestion will crash your visualizations, analysis jobs, and backend, and fixing it becomes a never-ending chase.

Schema Drift

Schema drift is what happens when your sources frequently change their metadata: fields, keys, columns, and types can be added, removed, or altered on the fly.
Without handling schema drift, your data flow becomes vulnerable to upstream data source changes. Typical ETL patterns fail when incoming columns and fields change, because they tend to be tightly coupled to those sources. Streaming requires a different toolset.
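As a purely illustrative example (these records are hypothetical, not taken from any specific system), the same "payment" entity might arrive from two sources in two different shapes:

JavaScript

// Record from source A: the shape the pipeline was originally built for.
const recordA = { id: "p-1001", amount: 99.5 };

// Record from source B: "id" became "payment_id", "amount" is now a string,
// and a new "currency" field appeared. Each of these small drifts can break
// downstream ETL jobs, dashboards, and backends that expect the original shape.
const recordB = { payment_id: "p-1002", amount: "99.50", currency: "USD" };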

This article will explain the existing solutions and strategies to mitigate the challenge and avoid schema drift, including data versioning using LakeFS.

Confluent Schema Registry vs. Memphis Schemaverse

Confluent – Schema Registry

Confluent Schema Registry provides a serving layer for your metadata. It provides a RESTful interface for storing and retrieving your Avro®, JSON Schema, and Protobuf schemas. It stores a versioned history of all schemas based on a specified subject name strategy, provides multiple compatibility settings, and allows the evolution of schemas according to the configured compatibility settings and expanded support for these schema types. It provides serializers that plug into Apache Kafka® clients. They handle schema storage and retrieval for Kafka messages that are sent in any of the supported formats.
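As a rough sketch of that RESTful interface, registering an Avro schema for a topic's value could look like the following in Node.js. The registry URL, topic, and subject name are placeholder assumptions (Confluent Cloud additionally requires API-key authentication):

JavaScript

// Sketch: register an Avro schema under the "<topic>-value" subject.
// Assumes a locally reachable Schema Registry and Node.js 18+ (global fetch).
const REGISTRY_URL = "http://localhost:8081"; // placeholder

const paymentSchema = {
  namespace: "io.confluent.examples.clients.basicavro",
  type: "record",
  name: "Payment",
  fields: [
    { name: "id", type: "string" },
    { name: "amount", type: "double" }
  ]
};

async function registerSchema() {
  const res = await fetch(`${REGISTRY_URL}/subjects/transactions-value/versions`, {
    method: "POST",
    headers: { "Content-Type": "application/vnd.schemaregistry.v1+json" },
    // The registry expects the schema itself as a JSON-encoded string.
    body: JSON.stringify({ schema: JSON.stringify(paymentSchema) })
  });
  console.log(await res.json()); // e.g. { id: 1 }
}

registerSchema().catch(console.error);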

The Good

  • UI
  • Supports Avro, JSON Schema, and Protobuf
  • Enhanced security
  • Schema Enforcement
  • Schema Evolution

The Bad

  • Hard to configure
  • No external backup
  • Manual serialization
  • Can be bypassed
  • Requires maintenance and monitoring
  • Mainly supported in Java
  • No validation

Schema registry overview (source: Confluent documentation)

Memphis — Schemaverse

Memphis Schemaverse provides a robust schema store and schema management layer on top of the Memphis broker, without a standalone compute or dedicated resources. With a unique and modern UI and a programmatic approach, both technical and non-technical users can create and define different schemas, attach a schema to multiple stations, and choose whether the schema should be enforced. Memphis’ low-code approach removes the serialization step, as it is embedded within the producer library. Schemaverse supports versioning, GitOps methodologies, and schema evolution.

The Good

  • Great UI and programmatic approach
  • Embedded within the broker
  • Zero-trust enforcement
  • Versioning
  • Out-of-the-box monitoring
  • GitOps support (working with files)
  • Low/no-code validation and serialization
  • No configuration required
  • Native support in Python, Go, and Node.js

The Bad

  • It does not support all formats yet; currently Protobuf and JSON only

How To Avoid Schema Drifting Using Confluent Schema Registry

Avoiding schema drift means enforcing a particular schema on a topic or station and validating each produced message. This tutorial assumes that you are using Confluent Cloud with the Schema Registry configured. If not, here is a Confluent article on how to install the self-managed version (without the enhanced features).

To make sure you are not drifting, and to maintain a single standard schema structure in your topic, you will need to:

1. Copy ccloud-stack config file to $HOME/.confluent/java.config.
2. Create a topic.
3. Define a schema. For example:

JSON
 
{
 "namespace": "io.confluent.examples.clients.basicavro",
 "type": "record",
 "name": "Payment",
 "fields": [
     {"name": "id", "type": "string"},
     {"name": "amount", "type": "double"}
 ]
}


4. Enable schema validation over the newly created topic.
5. Configure Avro/Protobuf both in the app and with the registry.
Example producer code (from a Maven-based project):

Java
...
import io.confluent.kafka.serializers.KafkaAvroSerializer;
...
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
...
KafkaProducer<String, Payment> producer = new KafkaProducer<String, Payment>(props);
final Payment payment = new Payment(orderId, 1000.00d);
final ProducerRecord<String, Payment> record = new ProducerRecord<String, Payment>(TOPIC, payment.getId().toString(), payment);
producer.send(record);


Because the pom.xml includes the avro-maven-plugin, the Payment class is automatically generated from the Avro schema at compile time.

If a producer tries to produce a message that is not structured according to the defined schema, the message will not be ingested.

How To Avoid Schema Drift Using Memphis Schemaverse

Step 1: Create a New Schema (Currently Only Available Through the Memphis GUI)


Step 2: Attach Schema

Head to your station, and on the top-left corner, click on “+ Attach schema.”


Step 3: Code Example in Node.js

Memphis abstracts the need for external serialization functions and embeds them within the SDK.
Producer (Protobuf example):

JavaScript
 
const memphis = require("memphis-dev");
var protobuf = require("protobufjs");

(async function () {
    try {
        await memphis.connect({
            host: "localhost",
            username: "root",
            connectionToken: "*****"
        });
        const producer = await memphis.producer({
            stationName: "marketing-partners.prod",
            producerName: "prod.1"
        });
        var payload = {
            fname: "AwesomeString",
            lname: "AwesomeString",
            id: 54,
        };
        try {
            await producer.produce({
                message: payload
            });
        } catch (ex) {
            console.log(ex.message)
        }
    } catch (ex) {
        console.log(ex);
        memphis.close();
    }
})();


Consumer (Requires .proto file to decode messages):

JavaScript
 
const memphis = require("memphis-dev");
var protobuf = require("protobufjs");

(async function () {
    try {
        await memphis.connect({
            host: "localhost",
            username: "root",
            connectionToken: "*****"
        });

        const consumer = await memphis.consumer({
            stationName: "marketing",
            consumerName: "cons1",
            consumerGroup: "cg_cons1",
            maxMsgDeliveries: 3,
            maxAckTimeMs: 2000,
            genUniqueSuffix: true
        });

        const root = await protobuf.load("schema.proto");
        var TestMessage = root.lookupType("Test");

        consumer.on("message", message => {
            const x = message.getData();
            var msg = TestMessage.decode(x);
            console.log(msg);
            message.ack();
        });
        consumer.on("error", error => {
            console.log(error);
        });
    } catch (ex) {
        console.log(ex);
        memphis.close();
    }
})();


Schema Enforcement With Data Versioning

Now that we know the ways to enforce a schema using Confluent or Memphis, let us look at how a versioning tool like LakeFS can seal the deal for you.

What Is LakeFS?

LakeFS is a data versioning engine that allows you to manage data like code. Through Git-like branching, committing, merging, and reverting operations, managing the data, and in turn the schema, over the entire data life cycle becomes simpler.

How To Achieve Schema Enforcement With LakeFS?

By leveraging the LakeFS branching feature and webhooks, you can implement a robust schema enforcement mechanism on your data lake.


LakeFS hooks allow you to automate and ensure that a given set of checks and validations runs before important life-cycle events. They are conceptually similar to Git hooks, but unlike Git hooks, LakeFS hooks trigger a remote server that runs the tests, so their execution is guaranteed.

You can configure LakeFS hooks to check for a specific table schema when merging data from development or test data branches into production. That is, a pre-merge hook can be configured on the production branch for schema validation. If the hook run fails, LakeFS blocks the merge operation.

This extremely powerful guarantee can help implement schema enforcement and automate the rules and practices that all data sources and producers should adhere to.

Implementing Schema Enforcement Using LakeFS

  1. Start by creating a LakeFS data repository on top of your object store (say, AWS S3 bucket, for example).
  2. All the production data (single source of truth) can live on the `main` branch or `production` branch in this repository.
  3. You can then create a `dev` or a `staging` branch to persist the incoming data from the data sources.
  4. Configure a LakeFS webhooks server. For example, a simple Python Flask app that serves HTTP requests can act as a webhook server. Refer to the sample Flask app that the LakeFS team has put together to get started (a minimal sketch of such a server follows after this list).
  5. Once you have the webhooks server running, enable webhooks on a specific branch by adding the `actions.yaml` file under the `_lakefs_actions` directory. 
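Any HTTP endpoint that lakeFS can reach can serve as the webhook server; the hook fails, and the merge is blocked, when the endpoint returns a non-2xx response. The LakeFS team's sample uses Flask, but purely as an illustration, a minimal Node.js equivalent might look like the sketch below. The port, the expected fields, and the event handling are assumptions, and the schema check itself is a stub you would replace with a real comparison against your production schema:

JavaScript

// Minimal sketch of a pre-merge webhook server for lakeFS (illustrative only).
const http = require("http");

// Hypothetical production schema the hook should protect.
const EXPECTED_FIELDS = ["id", "amount"];

function schemaMatchesProduction(event) {
    // Stub: in a real hook, read the incoming branch's files from lakeFS
    // and compare their schema against EXPECTED_FIELDS.
    return true;
}

const server = http.createServer((req, res) => {
    let body = "";
    req.on("data", chunk => (body += chunk));
    req.on("end", () => {
        let event = {};
        try {
            event = JSON.parse(body || "{}"); // lakeFS sends event details as JSON
        } catch (_) {}
        const ok = schemaMatchesProduction(event);

        // A non-2xx response makes the hook fail, which blocks the merge.
        res.writeHead(ok ? 200 : 400, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ ok }));
    });
});

server.listen(8000, () => console.log("Webhook server listening on :8000"));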

Here is a sample actions.yaml file for schema enforcement on the master branch:

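A minimal sketch of such a file is shown below; the action name, hook id, and webhook URL are illustrative placeholders that match the sketch server above, and the exact contents will depend on your setup:

YAML

# Sketch: pre-merge schema validation hook on the master branch (illustrative).
name: Schema validation on merge to master
on:
  pre-merge:
    branches:
      - master
hooks:
  - id: schema_check
    type: webhook
    description: Block merges that would drift the production schema
    properties:
      url: "http://<webhook-server-host>:8000/"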

6. If the schema of the incoming data differs from that of the production data, the configured LakeFS hooks will be triggered on the pre-merge condition and will fail the merge operation into the master branch.

This way, you can use LakeFS hooks to enforce the schema, validate your data to avoid leaking PII columns into your environment, and add data quality checks as required.

To learn more about configuring hooks and using them effectively, refer to the LakeFS (git for data) documentation.


Published at DZone with permission of Yaniv Ben Hemo. See the original article here.

Opinions expressed by DZone contributors are their own.
