
Reducing Integration Hassles With JSON Schema Contracts

See how to reduce integration hassles with JSON schema contracts.

By Chris Bedford · Jun. 27, 19 · Tutorial

I recently worked on a project where the 'contract' between service consumers and providers consisted primarily of annotated mock-ups of the JSON responses one would obtain from each of a given service's end-points. A much better way of expressing the contract for a service is to use a standard schema format. If you're stuck with XML, use XML schema. If you are using JSON, there are tools and libraries (presented below) that will help you use JSON schema to express a service's contract.

This article assumes that you have gone through the available JSON schema documentation and have a basic idea of how to use it. It also assumes you are developing on a JVM-based platform; most of the recipes will be helpful for Java developers (although our example of dynamic schema validation is presented using a bit of Scala).

Why Use JSON Schema as Your Contract?

Suppose you are supporting a JSON-based service with your contract expressed in some type of "by-example" format rather than the JSON schema standard. Now one of the components consuming your service throws an exception while parsing a response. The developer of said client service comes to you and says, "Your service has a problem." Well, both of you then have to pore over the examples that define your service's responses and figure out whether the response sent in this instance honors or violates the implicit contract. This is a very manual process with room for mistakes, and, at worst, it can lead to finger-pointing and debates about whether the response is correct. Not fun.

However, if the server and client teams on your project agree on a schema for each JSON response, then the task of figuring out whether a given response is correct boils down to simply running a validation tool whose inputs are the response document in question and the schema to which it must conform. If the validator reports no errors, you are off the hook, with no debate.

JSON Schema Tools

This section describes how to install and use various tools for auto-generating JSON schema from sample documents, generating sample instance documents from schema, and validating instance documents against a schema. As long as your environment is configured with Java 1.8, Python 2.7+, and the pip installer, the provided set-up instructions should work on either Linux or Mac (at least they worked for me!).

Auto-Generating JSON Schema From Instance Documents

genson is a utility for auto-generating JSON schema from instance documents. It can be installed via the command:

sudo pip install genson==0.1.0

Next, try generating a schema for a simple document:

echo '{ "foo": 100 }' > /tmp/foo.json
cat /tmp/foo.json | genson | tee /tmp/foo.schema

foo.schema should contain the following content:

{
  "$schema": "http://json-schema.org/schema#",
  "required": [
    "foo"
  ],
  "type": "object",
  "properties": {
    "foo": {
      "type": "integer"
    }
  }
}

Sometimes you will be generating multiple schemas from a related set of JSON documents (e.g., you might be starting from a set of sample responses from a legacy service with no defined schema, which you plan to retrofit). In this case, you will definitely want to familiarize yourself with the $ref keyword, which lets you refactor commonly occurring fragments of schema code into one place (even a different file), as sketched below.
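For illustration, here is a hypothetical schema fragment (the field and definition names are invented purely for this example) in which a shared address definition is written once under definitions and referenced twice via $ref; the reference could just as well point into a separate schema file:

{
  "$schema": "http://json-schema.org/schema#",
  "definitions": {
    "address": {
      "type": "object",
      "required": ["street", "city"],
      "properties": {
        "street": { "type": "string" },
        "city": { "type": "string" }
      }
    }
  },
  "type": "object",
  "properties": {
    "billingAddress": { "$ref": "#/definitions/address" },
    "shippingAddress": { "$ref": "#/definitions/address" }
  }
}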

Generation of Sample Instance Documents From Schema

Once you have a schema, you can feed it into a tool — such as this one from Liquid Technologies — to facilitate the generation of mock data that you can use for testing.
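For example, given a slightly richer schema such as the hypothetical one below (again, the field names are invented for illustration), a generator can emit instance documents that honor the declared types and constraints, which you can then use as test fixtures:

{
  "$schema": "http://json-schema.org/schema#",
  "type": "object",
  "required": ["id", "status"],
  "properties": {
    "id": { "type": "integer", "minimum": 1 },
    "status": { "type": "string", "enum": ["NEW", "SHIPPED", "CANCELLED"] }
  }
}

A generated sample might look something like { "id": 42, "status": "SHIPPED" }, which will itself validate against the schema.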

Command Line Tools for Schema Validation

The best command-line tool I have found for JSON schema validation is json-schema-validator. Its current documentation indicates support for JSON Schema draft v4, which is a bit behind the latest draft (7, at the time of this writing). So, if you need the latest spec-supported features in your schemas, you should take extra care to ensure this tool is right for your needs.
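To make that version gap concrete, the hypothetical schema below uses const and if/then, keywords introduced in drafts 6 and 7 respectively; a validator that only understands draft v4 will not enforce them:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "currency": { "const": "USD" },
    "amount": { "type": "number" }
  },
  "if": { "properties": { "amount": { "minimum": 1000 } } },
  "then": { "required": ["approvedBy"] }
}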

Assuming you have gone through the previous step of installing and testing genson, you can download and verify the validator via the commands below (if you are on a Mac without wget, then please try curl):

wget 'https://bintray.com/fge/maven/download_file?file_path=com%2Fgithub%2Ffge%2Fjson-schema-validator%2F2.2.6%2Fjson-schema-validator-2.2.6-lib.jar' -O /tmp/validator.jar

# now validate your sample document against the schema you created above

cd /tmp ;  java -jar validator.jar /tmp/foo.schema /tmp/foo.json

You should see:

validation: SUCCESS

Now let's see how the tool reports validation failures. Deliberately mess up your instance document (so it no longer conforms to the schema) via the command:

cat /tmp/foo.json |  sed -e's/foo/zoo/' > /tmp/bad.json

cd /tmp ; java -jar validator.jar /tmp/foo.schema /tmp/bad.json

You should see error output which includes the line:

"message" : "object has missing required properties ([\"foo\"])",

On-the-Fly Schema Validation at Run-Time

The json-schema-validator was shown above in command-line mode. As a bonus, you can also embed the project's associated Java library into any of your services that require run-time validation of arbitrary instance documents against a schema. The code snippet below is written in Scala, but you could just as easily use this library from Java.

import com.fasterxml.jackson.core.JsonParser
import com.fasterxml.jackson.databind.{JsonNode, ObjectMapper}
import com.github.fge.jackson.JsonLoader
import com.github.fge.jsonschema.core.report.ProcessingReport
import com.github.fge.jsonschema.main.{JsonSchema, JsonSchemaFactory}

object SchemaValidator {
  lazy val mapper: ObjectMapper = new ObjectMapper
  lazy val jsonSchemaFactory: JsonSchemaFactory = JsonSchemaFactory.byDefault
  // Load the schema from src/main/resources/schema.json once, on first use
  lazy val schemaNode: JsonNode = JsonLoader.fromResource("/schema.json")
  lazy val schema: JsonSchema = jsonSchemaFactory.getJsonSchema(schemaNode)

  // Returns true if the given JSON document conforms to the schema;
  // otherwise prints the validation report and returns false
  def validateWithReport(json: String): Boolean = {
    val bytes: Array[Byte] = json.getBytes("utf-8")
    val parser: JsonParser = mapper.getFactory.createParser(bytes)
    val node: JsonNode = mapper.readTree(parser)
    val validationResult: ProcessingReport = schema.validate(node)
    if (validationResult.isSuccess) {
      true
    } else {
      val errMsg =
        s"Validation error. Instance=$json, msg=$validationResult"
      System.out.println("errMsg:" + errMsg)
      false
    }
  }
}

object FakeGoodWebService {
  def getJsonResponse = """{ "foo": 100 }"""
}

object FakeBadWebService {
  def getJsonResponse = """{ "zoo": 100 }"""
}

object JsonSchemaValidationDemo extends App {
  import SchemaValidator._

  val goodResult = validateWithReport(FakeGoodWebService.getJsonResponse)
  System.out.println("result:" + goodResult);

  val badResult = validateWithReport(FakeBadWebService.getJsonResponse)
  System.out.println("result:" + badResult);
}

We have stashed the 'foo' schema from our earlier discussion into src/main/resources as schema.json, and the SchemaValidator object lazily loads that schema into the 'schema' field. We then call validateWithReport from JsonSchemaValidationDemo, first with a valid response from a mock of a nicely behaving web service, and then with a JSON response from a misbehaving web service. The resultant output is shown below.

result:true
errMsg:Validation error. Instance={ "zoo": 100 }, 
    msg=com.github.fge.jsonschema.core.report.ListProcessingReport: failure
--- BEGIN MESSAGES ---
error: object has missing required properties (["foo"])
    level: "error"
    schema: {"loadingURI":"#","pointer":""}
    instance: {"pointer":""}
    domain: "validation"
    keyword: "required"
    required: ["foo"]
    missing: ["foo"]
---  END MESSAGES  ---

result:false

Conclusion

Miscommunication and incorrect assumptions are most likely to occur at what formally trained project managers call "interface points at subsystem boundaries." But now you have some tools for minimizing the thrash and churn that can occur around these interface points.

License

This work is licensed under the Creative Commons Attribution 4.0 International License. Use as you wish, but if you can, please give attribution to the Data Lackey Labs Blog.


Published at DZone with permission of Chris Bedford. See the original article here.

Opinions expressed by DZone contributors are their own.
