Reusable Jobs with Obsidian Chaining
A lot of Obsidian users write custom jobs in Java and leverage their existing codebase to perform tasks like file archiving or file transfer right inside the job. While it's great to reuse that existing code, users sometimes end up implementing the same logic in multiple jobs, which isn't ideal: it means extra QA and more potential for inconsistencies and uncaught bugs.
Fortunately, Obsidian's configurable chaining support, combined with job results (i.e. output), lets developers write a single job as a reusable component and then chain to it wherever it's required.
To demonstrate this, we will walk through a fairly common situation: a job generates zero or more files which must be transferred to a remote FTP site and which also must be archived. We could chain to an FTP job which in turn chains to an archive job, but for the sake of making this example simpler, we will bundle them into the same job.
File-Generating Job
First, we'll demonstrate how to save job results in our source job to make them available to our chained job. Here's the execute() method of the job:
public void execute(Context context) throws Exception {
    // grab the configured FTP config key for the job
    // and pass it on to the chained FTP/archive job
    context.saveJobResult("ftpConfigKey",
            context.getConfig().getString("ftpConfigKey"));

    for (File file : toGenerate) {
        // ... some code to generate the file

        // when successful, save the output
        // (multiple results saved to the same name is fine)
        context.saveJobResult("generatedFile", file.getAbsolutePath());
    }
}
Pretty simple stuff. The most interesting thing here is the first line. To make the chained FTP/archive job truly reusable, we have configured our file job with a key that identifies the FTP configuration used to transfer the files. We then pass this configuration value on to the FTP job as a job result, so that we don't have to configure a separate FTP job for every FTP endpoint. However, configuring a separate FTP job for each FTP site is another option available to you; in that case, you wouldn't have to configure the file job with the config key or include that first line.
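To illustrate the key-to-configuration lookup described above, here is a minimal standalone sketch. The FtpConfig class, the endpoint names, and the registry are all hypothetical, invented for this example; they are not part of Obsidian's API.

```java
import java.util.Map;

// Hypothetical FTP connection settings; not part of Obsidian's API.
class FtpConfig {
    final String host;
    final String user;

    FtpConfig(String host, String user) {
        this.host = host;
        this.user = user;
    }
}

class FtpConfigRegistry {
    // One entry per FTP endpoint; the source job passes the key
    // it was configured with as a job result.
    private static final Map<String, FtpConfig> CONFIGS = Map.of(
            "billing", new FtpConfig("ftp.billing.example.com", "billing-user"),
            "reports", new FtpConfig("ftp.reports.example.com", "reports-user"));

    static FtpConfig loadFtpConfig(String key) {
        FtpConfig config = CONFIGS.get(key);
        if (config == null) {
            throw new IllegalArgumentException("No FTP config for key: " + key);
        }
        return config;
    }
}
```

With a lookup like this, adding a new FTP endpoint only means adding a registry entry and configuring the new source job with its key; the FTP/archive job itself never changes.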
Next, we'll see how to access this output in the FTP/archive job, and after that, how to set up the chaining configuration.
FTP/Archive Job
This job has two key features:
- It loads the FTP config based on the FTP key passed in by the source job.
- It iterates through all files that were generated and deals with them accordingly.
Note that all job results keep their Java type when loaded in the chained job, though they are returned as List<Object>. Primitives are supported as output values, as well as any type that has a public constructor that takes a String (toString() is used to save the values).
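The save/load convention just described, where a value is persisted as its toString() form and rebuilt through a public constructor that takes a String, can be sketched standalone. This ResultCodec class is illustrative only; it mimics the convention, not Obsidian's actual internal implementation.

```java
import java.lang.reflect.Constructor;

// Illustrative only: a value is stored as its toString() form and
// rebuilt via a public constructor that takes a String, mirroring the
// job-result convention described above.
class ResultCodec {
    static String save(Object value) {
        return value.toString();
    }

    static <T> T load(String stored, Class<T> type) throws Exception {
        Constructor<T> ctor = type.getConstructor(String.class);
        return ctor.newInstance(stored);
    }
}
```

This is why types like Long or BigDecimal round-trip cleanly as job results: each has a public String constructor that exactly inverts its toString() output.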
public void execute(Context context) throws Exception {
    Map<String, List<Object>> sourceJobResults = context.getSourceJobResults();
    List<Object> fullFilePaths = sourceJobResults.get("generatedFile");
    if (fullFilePaths != null) {
        if (sourceJobResults.get("ftpConfigKey") == null) {
            // ... maybe fail here depending on your needs
        }
        String ftpConfigKey = (String) sourceJobResults.get("ftpConfigKey").get(0);
        FtpConfig config = loadFtpConfig(ftpConfigKey);

        for (Object filePath : fullFilePaths) {
            File f = new File((String) filePath);
            // ... some code to transfer and archive the file
            // note that this step ideally can deal with already-processed files
            // in case we need to resubmit this job after failing halfway through.
        }
    }
}
Again, this is pretty simple: we grab the saved results from the source job and build our logic around them. As mentioned in the comments, one thing to consider in an implementation like this is how to handle the job failing after processing only some of the results. You may wish to simply resubmit the failed job in a case like that, so the job should be able to re-run without causing issues. Note that this isn't a concern if you only ever have a single file to process.
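One way to make the job safe to resubmit, as suggested above, is to check the archive before transferring each file. This is a sketch under assumptions: the archive-directory layout and the transfer() placeholder are invented for illustration and are not part of Obsidian or the article's original code.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

class IdempotentProcessor {
    private final File archiveDir;

    IdempotentProcessor(File archiveDir) {
        this.archiveDir = archiveDir;
    }

    /** Returns true if the file was processed, false if it was skipped. */
    boolean process(File file) throws IOException {
        File archived = new File(archiveDir, file.getName());
        if (archived.exists()) {
            // Already handled on an earlier (partially failed) run; skip it
            // so a resubmitted job doesn't transfer the same file twice.
            return false;
        }
        transfer(file);
        // Archiving last means the archive copy doubles as the
        // "already processed" marker for future re-runs.
        Files.copy(file.toPath(), archived.toPath(),
                StandardCopyOption.REPLACE_EXISTING);
        return true;
    }

    // Placeholder for the actual FTP transfer logic.
    void transfer(File file) throws IOException {
    }
}
```

With this structure, resubmitting a half-failed job simply skips the files that already made it into the archive and picks up where the previous run stopped.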
Chaining Configuration
Now that the reusable jobs are in place, we can set up the chaining. Here's what it looks like in the UI:
We use conditional chaining here to indicate that we only need to chain the job when values for generatedFile exist. In addition, we ensure that an ftpConfigKey value is set. The real beauty of this is that Obsidian tracks why it didn't chain a job when the chaining conditions aren't met. For example, if the ftpConfigKey wasn't set up, the FTP/archive job would still have a detailed history record with the "Chain Skipped" state and a detailed message like this:
Note that in this example conditional chaining isn't strictly required, since our FTP/archive job handles the case where there are no values for generatedFile, but it's still a good practice in case you have notifications that go out when a job completes. It also makes your detailed history more informative, which may help you with troubleshooting. If you don't wish to use conditional chaining, you could simply chain on the Completed state instead.
Conclusion
Obsidian provides powerful chaining features that were deliberately designed to maximize productivity and reliability. Our job is to make your life easier as a developer, operator, or system administrator, and we are continually searching for ways to improve the product and provide value to our users.
If you have any questions about the examples above, let us know in the comments.
Published at DZone with permission of Craig Flichel, DZone MVB. See the original article here.