
Setting up MongoDB for Bi-Temporal Data

Let's see how to integrate BarbelHisto with MongoDB to build a properly audit-proof document management solution.

By Niklas Schlimm · Mar. 06, 19 · Tutorial

In my last article, I introduced BarbelHisto, a new open-source library I've developed. The library eases the use and management of bi-temporal data for Java developers. In this tutorial, I will demonstrate how to integrate BarbelHisto with MongoDB to build a properly audit-proof document management solution.


1. Get Your Version of BarbelHisto

If you want to make MongoDB work with BarbelHisto, you need two Maven dependencies to get started:

  • BarbelHisto Core

  • BarbelHisto persistence for MongoDB

2. Develop a Client POJO

Implement a POJO like this one:

public class Client {

    @DocumentId // tells BarbelHisto that clientId uniquely identifies this document
    private String clientId;
    private String title;
    private String name;
    private String firstname;
    private String address;
    private String email;
    private LocalDate dateOfBirth;

    // all-args constructor, getters, and setters omitted for brevity

}

Notice that we use the @DocumentId annotation to tell BarbelHisto that the clientId uniquely identifies the client and should be used as the document id. BarbelHisto maintains a document journal for each document id.

3. Create an Instance of BarbelHisto With MongoDB Listeners

Like so:

MongoClient mongoClient = SimpleMongoListenerClient.create("mongodb://localhost:12345").getMongoClient();
// update listener
SimpleMongoUpdateListener updateListener = SimpleMongoUpdateListener.create(mongoClient, "testDb", "testCol", Client.class, BarbelHistoContext.getDefaultGson());
// pre-fetch listener
SimpleMongoLazyLoadingListener loadingListener = SimpleMongoLazyLoadingListener.create(mongoClient, "testDb", "testCol", Client.class, BarbelHistoContext.getDefaultGson());
// locking listener
MongoPessimisticLockingListener lockingListener = MongoPessimisticLockingListener.create(mongoClient, "lockDb", "docLocks");
// BarbelHisto instance
BarbelHisto<Client> mongoBackedHisto = BarbelHistoBuilder.barbel().withSynchronousEventListener(updateListener)
                .withSynchronousEventListener(loadingListener).withSynchronousEventListener(lockingListener).build();

You can use your own MongoClient settings if you like; the BarbelHisto MongoDB package provides a client creation class for convenience. Three listeners are registered synchronously with BarbelHisto: the SimpleMongoUpdateListener forwards updates saved against BarbelHisto to the MongoDB shadow collection, the SimpleMongoLazyLoadingListener ensures that data is fetched into the local BarbelHisto instance when clients perform queries using the BarbelHisto.retrieve() methods, and the MongoPessimisticLockingListener locks journals in MongoDB when a client performs an update using BarbelHisto.save().

4. Do Some Bi-Temporal save() Operations

With this setup, you can now store and retrieve bi-temporal data with a MongoCollection as a remote data source.

Client client = new Client("1234", "Mr.", "Smith", "Martin", "some street 11", "somemail@projectbarbel.org", LocalDate.of(1973, 6, 20));
mongoBackedHisto.save(client, LocalDate.now(), LocalDate.MAX);
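
To see the bi-temporal aspect at work, here is a minimal sketch of a follow-up change (the new address and the effective date are made up for illustration): saving the client again with a later effective-from date appends a new version to the journal for document id 1234 instead of overwriting the record saved above.

// hypothetical follow-up change: the client moves, effective June 1st, 2019
Client moved = new Client("1234", "Mr.", "Smith", "Martin", "new street 5",
        "somemail@projectbarbel.org", LocalDate.of(1973, 6, 20));
// a second save() with a new effective period adds a version to the journal
// for document id "1234" rather than replacing the first one
mongoBackedHisto.save(moved, LocalDate.of(2019, 6, 1), LocalDate.MAX);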

5. Retrieve Bi-Temporal Data From MongoDB

Later, in other sessions of your web application, you can retrieve the client from MongoDB using the BarbelHisto.retrieve() methods.

Client client = mongoBackedHisto.retrieveOne(BarbelQueries.effectiveNow("1234"));

Use BarbelQueries to make queries against the MongoDB-backed BarbelHisto instance. You can also combine BarbelQueries:

List<Client> clients = mongoBackedHisto.retrieve(QueryFactory.and(BarbelQueries.effectiveNow("1234"), BarbelQueries.effectiveNow("1234")));
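
For point-in-time lookups, BarbelQueries should also cover queries against a specific effective date. The following sketch assumes an effectiveAt(id, date) query exists (an assumption about the API, so check the BarbelHisto documentation to confirm) and uses a made-up date:

// effectiveAt(id, date) is an assumption about the BarbelQueries API; the date is made up
Client clientOnMarch1st = mongoBackedHisto.retrieveOne(
        BarbelQueries.effectiveAt("1234", LocalDate.of(2019, 3, 1)));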

Be careful if you don't specify document IDs in your query: this might cause a full load of the MongoDB collection.

That's it. There is nothing more to do.

Let's access the version data for the client object we've just added and retrieved above. Every object received from BarbelHisto can be cast to Bitemporal to access its version data.

Bitemporal clientBitemporal = (Bitemporal) client;
System.out.println(clientBitemporal.getBitemporalStamp().toString());

If you'd like to print out what MongoDB knows about your client, then pretty print the document journal like so:

System.out.println(mongoBackedHisto.prettyPrintJournal("1234"));

This should return a printout that looks similar to this one:

Document-ID: 1234

|Version-ID                              |Effective-From |Effective-Until |State   |Created-By           |Created-At                                   |Inactivated-By       |Inactivated-At                               |Data                           |
|----------------------------------------|---------------|----------------|--------|---------------------|---------------------------------------------|---------------------|---------------------------------------------|-------------------------------|
|d18cd394-aa62-429b-a23d-46e935f80e71    |2019-03-01     |999999999-12-31 |ACTIVE  |SYSTEM               |2019-03-01T10:46:27.236+01:00[Europe/Berlin] |NOBODY               |2199-12-31T23:59:00Z                         |EffectivePeriod [from=2019-03- |

You can get the complete code from this tutorial in this test case.

Configuring for Performance

In the previous paragraphs, I've used the lazy loading and update listeners to integrate a MongoCollection with the synchronous event bus. There are advantages and some drawbacks of this configuration, especially in scenarios where high performance is one of the key requirements.

Registration with the synchronous service bus eases error handling because clients can react immediately when an exception is thrown in the listeners. On the other hand, synchronous means waiting(!) for a response, which isn't always necessary, especially with update operations.
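
As a minimal sketch of what that immediate error handling can look like (the exception type is an assumption; BarbelHisto may wrap listener failures differently), a synchronous setup lets you react right at the save() call:

try {
    // with synchronous listeners, a failure of the MongoDB shadow update surfaces here immediately
    mongoBackedHisto.save(client, LocalDate.now(), LocalDate.MAX);
} catch (RuntimeException e) {
    // the exception type is an assumption; react to the failed update, e.g. retry or report it
    System.err.println("Bi-temporal save failed, MongoDB shadow not updated: " + e.getMessage());
}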

Also, the lazy loading listener requires the user to pass the journal ID in queries to work properly. In some situations, this isn't enough. Clients may want to define complex custom queries against BarbelHisto that combine many attributes but include no document IDs. In fact, these complex queries should return the document IDs as a result, not take them as a parameter: for instance, when an address or client name is known and the user needs to find the corresponding client record or policy number (which is the document ID in many cases).

To address these complex query and performance requirements, you can configure BarbelHisto in an advanced performance setup. One option is to drop the lazy loading listener and instead use a disk-persistent indexed collection as the object query pool, while registering a SimpleMongoUpdateListener with the asynchronous service bus. In such a setup, complex queries can be defined without restrictions, the data is still shadowed to a MongoCollection of your choice, and everything works considerably faster than in the synchronous scenarios described above.

// define the primary key for the backbone collection -> the versionId of each record
SimpleAttribute<Client, String> primaryKey = new SimpleAttribute<Client, String>("versionId") {
    public String getValue(Client object, QueryOptions queryOptions) {
        return (String) ((Bitemporal) object).getBitemporalStamp().getVersionId();
    }
};
// define the update listener, reusing the MongoClient created in step 3
SimpleMongoUpdateListener updateListener = SimpleMongoUpdateListener.create(mongoClient,
        "testSuiteDb", "testCol", Client.class, BarbelHistoContext.getDefaultGson());
// make the BarbelHisto backbone disk-persistent and register
// the MongoDB update listener with the asynchronous bus
BarbelHisto<Client> histo = BarbelHistoBuilder.barbel()
        .withBackboneSupplier(() -> new ConcurrentIndexedCollection<>(
                DiskPersistence.onPrimaryKeyInFile(primaryKey, new File("test.dat"))))
        .withAsynchronousEventListener(updateListener)
        .build();

In the first step, you define the primaryKey, which is mandatory when you use disk persistence; use the versionId as the primary key as demonstrated above. Then, define the update listener and register it with the BarbelHistoBuilder, and register a DiskPersistence-backed backbone using the builder as well. In the above example, your data is kept in the test.dat file and also in the shadow MongoCollection.
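
To make the complex-query scenario concrete, here is a sketch that defines a cqengine attribute over a regular POJO field, in the same way the primaryKey was defined above; the getName() getter and the query value are assumptions made for illustration:

// hypothetical attribute over the client's last name, defined like the primaryKey above
SimpleAttribute<Client, String> lastName = new SimpleAttribute<Client, String>("name") {
    public String getValue(Client object, QueryOptions queryOptions) {
        return object.getName(); // assumes Client exposes a getter (omitted earlier)
    }
};
// find clients by name without knowing the document id up front; the matching
// versions carry their document id in the BitemporalStamp
List<Client> smiths = histo.retrieve(QueryFactory.equal(lastName, "Smith"));

Because the backbone is an indexed, disk-persistent collection held locally, such attribute queries do not depend on passing a document ID and do not trigger a lazy load from MongoDB.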

Look into this test case to see the complete scenario, and let us know your thoughts and questions in the comments below. 
