Putting the Ops Back in Mainframe DevOps
Development and Operations are more sharply separated on the mainframe. See why this happened and how to bring them back together.
Unlike DevOps in open-systems implementations, where Development and Operations blur in practice, mainframe Dev and Ops are sharply defined and distinctly separate. How did this come about?
In days of old, when the mainframe initially made the power of computing commercially available to businesses, Dev and Ops were tightly coupled. Programmers would actually operate the machines themselves, mounting tapes, loading the card reader and hanging paper in the printer.
The machines were primitive then, and knowledge of both the hardware and the programming was required to extract business value from this new thing called software. Operational problems were the developer's problems.
As compute power increased and multi-programming capabilities evolved, the mainframe became more like a factory, capable of processing many concurrent workloads. New systems software evolved to automate operations and replace the need for hands-on intervention.
This complexity begat management structures to control, optimize and protect this valuable cost-saving resource. As growth continued, new threats arose, and management constructs like Separation of Duties evolved to protect assets.
Thus, Dev and Ops became separate departments with separate missions, which had certain advantages. Dev could focus on creating and maintaining applications. Ops could focus on providing operational services, effectively taking responsibility for the timely execution of application programs.
Over time, largely out of self-defense, Ops expanded this divide to prevent conflicting interests from compromising its mission, taking the time to carefully and methodically combine application programming with system resources to ensure reliability. This is an utter necessity in large, shared environments. Ops would not rush into a mistake; it became inherently offensive to consider trading stability for speed.
The Need for Speed
But today, the need for speed has upset this historic standoff between disparate participants, particularly in mainframe installations. Computers are no longer simply reducing costs, handling the drudge work and heavy lifting of business. Computers now differentiate businesses within their markets and, for many, wholly define the business.
Now, speed to market is paramount; no time can be wasted. So, DevOps emerged to address this prevalent, immediate need for business-defining software. And new startups have used it to spectacular effect.
Companies running mainframe-based systems need this same capability. The question becomes: How to get there from here with all the entrenchment? The onus is largely on Ops now.
Dev has done, and is doing, its part to create and update application code in record time. Developers can quickly and safely build, test and deploy mainframe code with ISPW, our Agile source code management and release automation solution. As we add more capabilities to ISPW and the rest of our toolset, and build more relationships with ancillary, best-in-class providers of tools and services, we bolster the capabilities available for customers to achieve the speed they need across the full breadth of their application installation and operation requirements.
The inertia of Ops is, however, the long pole in the tent. Whoever's task it is, code must be tested thoroughly: functional, volume and stress tested. Ops certainly has a role here. In addition, Ops often assists in other facets of development, such as providing reports from system instrumentation data to help Dev debug and tune program code.
Further, finished code must be turned over to production, where the complexities of interweaving new and changed programs with batch schedules, CICS MRO architectures, Db2 data sharing structures, MQ shared queues, WebSphere environments, disaster recovery planning, capacity planning and myriad other technical intricacies must all still be handled, and handled properly.
Aren't we doing this now as fast as we can? Can we possibly accelerate? We know mistakes in these areas today get corporations "quality time" on the evening news.
Solutions for Accelerating Ops
Compuware is working for you and with you to solve these problems. A good example is a recent function addition to our comprehensive batch management product, ThruPut Manager. ThruPut Manager is a rules-based, policy-driven batch control system that manages and automates all facets of batch from job submission through execution. As an aside, all Compuware products are improved, enhanced or extended like clockwork via our quarterly code drops.
In our January 2018 release, we added a new feature to our rules-based Detail Action Language (DAL). We now enable customers to replace their static job accounting report, currently coded in an assembler system exit, with our new DAL support.
With this facility, the customer can insert job and job-step reporting into the JOBLOG, the job's system of record, so to speak, where it is most convenient and immediately useful. Reporting can be built from any of the comprehensive set of SMF Type 30 record variables, giving the programmer additional insight into the performance and capacity impact of the executed programs against all system resources.
Without this facility, the information must be reported sometime later, usually the next day, after the day's SMF data has been post-processed. Further, because the reporting is programmable, different types of reports can be produced depending on context. Production jobs might receive a report emphasizing audit requirements, QA jobs another version including performance data, and unit testing still another. There could be a terse version versus a verbose version, depending on user specification. Should needs change, DAL is easily and quickly modified, so the new need can be fulfilled quickly, bringing agility to batch reporting.
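To make the idea of context-dependent batch reporting concrete, here is a minimal sketch in Python. This is not DAL syntax and not ThruPut Manager code; the function name, field names (`cpu_seconds`, `excps`, and so on) and report layout are all hypothetical stand-ins for SMF Type 30 variables, chosen only to illustrate how one programmable rule can emit audit-focused, performance-focused, or terse output depending on the job's context.

```python
# Hypothetical illustration of context-dependent JOBLOG reporting.
# All field names below are invented stand-ins for SMF Type 30 variables,
# not actual DAL identifiers.

def build_joblog_report(step_metrics: dict, context: str) -> list:
    """Return JOBLOG-style report lines whose detail varies by job context."""
    lines = ["STEP %s RC=%04d" % (step_metrics["step_name"],
                                  step_metrics["return_code"])]
    if context == "production":
        # Production jobs: emphasize the audit trail.
        lines.append("  USER=%s PGM=%s END=%s" % (step_metrics["user_id"],
                                                  step_metrics["program"],
                                                  step_metrics["end_time"]))
    elif context == "qa":
        # QA jobs: add performance and capacity data.
        lines.append("  CPU=%.2fs ELAPSED=%.2fs EXCPS=%d" % (
            step_metrics["cpu_seconds"],
            step_metrics["elapsed_seconds"],
            step_metrics["excps"]))
    # Unit-test jobs: terse report, return code only.
    return lines

# Example job-step metrics, as they might be drawn from SMF Type 30 data.
metrics = {
    "step_name": "STEP010", "return_code": 0, "program": "PAYROLL1",
    "user_id": "PRODOPS", "end_time": "23:14:07",
    "cpu_seconds": 12.5, "elapsed_seconds": 48.0, "excps": 20431,
}
print("\n".join(build_joblog_report(metrics, "qa")))
```

The point of the sketch is the shape of the rule, not the code itself: because the report is generated by logic rather than a static exit, changing what each class of job sees is an edit to that logic, which is the agility the DAL support provides.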
This new DAL variant provides facilities that imaginative batch processing engineers can leverage for many useful purposes. And replacement of another assembler-based system exit helps negate the impact of the increasing loss of deep technical expertise due to retirements.
In summary, regular updates to ThruPut Manager exemplify how Compuware partners with customers to meet their needs along their DevOps journeys. This is one more way we can increase the speed of Ops deployment, without sacrificing accuracy or increasing risk. And more is coming in the complementary arena of batch management. Remember: every quarter, we add value; you can set your watch by it.
Published at DZone with permission of Kelly Vogt, DZone MVB.