
Using Obsidian Maintenance Jobs


When we began development on Obsidian Scheduler, one of its key objectives was transparency. Quoting my own post:

Transparent means letting us know what is going on. We want information available to us such as when the job was scheduled to run, when it started, when it completed, and which host ran it (in pool scenarios). We want to know if it failed and what the problem was. We want the option to be notified by email, page, and/or message when all or specific jobs fail. We want detailed information available to us should we wish to investigate problems in our executing jobs. And we want all of this without having to write code, create our own interface, parse log files or do extra configuration. The scheduler should just work that way.

We’re very happy that we achieved that goal and Obsidian gives us the transparency we sought.

As you likely know, Obsidian stores all this information in a database. This allows you to retain as much of this history as you desire or as your business group’s policy requires. In theory, you could retain all of this history indefinitely in the live database, ensuring this information would be available at any time simply by accessing the Obsidian UI. In practice, this is often not required. Many organizations simply maintain an indefinite archive of database backups and can restore one to a dedicated environment whenever a historical investigation is called for.

Obsidian is bundled with two maintenance jobs that keep the Obsidian database trim, holding only the necessary recent history. These maintenance jobs take advantage of Obsidian’s built-in support for job parameterization, allowing you to choose the desired retention period. They are not scheduled by default, so let’s take a look at how to configure them to run.

JobHistoryCleanupJob

This job cleans up the execution history of your jobs, retaining only the most recent history. When scheduling a new job in Obsidian, you will find the Job Class com.carfey.ops.job.maint.JobHistoryCleanupJob in the selection list. Once chosen, you will see that it supports a single required parameter – maxAgeDays. Assuming you want job execution history retained for the most recent 90 days, set this value to 90. Here’s a screenshot of what it would look like.
[Screenshot: JobHistoryCleanupUsage]
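
To make the retention setting concrete, here is a minimal sketch of what a maxAgeDays value of 90 amounts to: any history record older than the cutoff (now minus 90 days) becomes eligible for removal. This is not Obsidian’s internal code; the JDBC URL, table name, and column name are invented purely for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class HistoryCleanupSketch {

    public static void main(String[] args) throws Exception {
        // Same value as the maxAgeDays job parameter.
        int maxAgeDays = 90;
        Instant cutoff = Instant.now().minus(maxAgeDays, ChronoUnit.DAYS);

        // Assumed JDBC URL and schema; replace with your own environment.
        try (Connection conn = DriverManager.getConnection("jdbc:yourdb://host/obsidian");
             PreparedStatement stmt = conn.prepareStatement(
                     "DELETE FROM job_history WHERE completed_at < ?")) {
            stmt.setTimestamp(1, Timestamp.from(cutoff));
            int removed = stmt.executeUpdate();
            System.out.println("Removed " + removed + " history rows older than " + cutoff);
        }
    }
}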

LogCleanupJob

This job cleans up the audit and information logs that track all activity in Obsidian. As described in our wiki, these logs contain everything from scheduler system activity, such as job spawning and execution, to user activity, such as changing a schedule or its configuration, or adding new jobs. These logs are classified by severity level. The Job Class com.carfey.ops.job.maint.LogCleanupJob is configured using the same required parameter maxAgeDays, but it has an additional required parameter – level. The level parameter defaults to ALL and accepts multiple values, so you can also apply different retention periods by severity by scheduling and configuring a separate instance of this job for each classification. Notice these samples:
[Screenshot: LogCleanupUsage1]
[Screenshot: LogCleanupUsage2]
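
To illustrate the per-severity approach, here is a small sketch that computes a separate retention cutoff for each severity level, the way separate LogCleanupJob instances would. The level names and retention periods used here are examples only, not Obsidian defaults.

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.LinkedHashMap;
import java.util.Map;

public class LogRetentionSketch {

    public static void main(String[] args) {
        // One LogCleanupJob instance per severity, each with its own maxAgeDays.
        Map<String, Integer> retentionByLevel = new LinkedHashMap<>();
        retentionByLevel.put("ERROR", 365);  // keep error entries a full year
        retentionByLevel.put("WARN", 180);
        retentionByLevel.put("INFO", 30);    // routine activity can go sooner

        Instant now = Instant.now();
        retentionByLevel.forEach((level, maxAgeDays) -> {
            Instant cutoff = now.minus(maxAgeDays, ChronoUnit.DAYS);
            System.out.printf("Level %-5s -> delete log entries older than %s%n", level, cutoff);
        });
    }
}

Keeping error-level entries longer than routine informational entries is a common compromise between audit needs and database size.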

That’s all there is to it! In both cases, if no maintenance job is configured (or, for the LogCleanupJob, none is configured for a given severity level), the corresponding data will be retained indefinitely. Of course, you can always decide later on to configure such a job, and at that point Obsidian will take care of the cleanup for you.

Is there something else you’d like to see Obsidian do? Drop us a line or leave a comment below. We listen carefully to all our customers’ feature requests and give priority to their needs in our product roadmap. We also appreciate hearing how Obsidian is helping you.

 


