Rebooting ALM, Part II: Power
In part two of the "Rebooting ALM" series, we'll look at power: using automation.
This is the second part in the Rebooting ALM series. Check out the first part, “Evolution,” to see how we got here. (See what I did?)
The first tool that started the ALM tool chain was, I think, source control. There were compilers and some IDEs, but source control systems were solutions for teamwork. If you think about it, the need to “manage” doesn’t really apply to one-person projects. Source control was the first foray into teamwork, and therefore an opportunity.
As developer teams started working together, the ALM toolset grew where it made sense. Whenever we needed to go faster, it was there. Automation is key to ALM tools: automating builds and tests, collecting and reporting information, and, these days, automating deployments. ALM tools are really good at turning manual work into automatic work.
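To make the automation idea concrete, here is a minimal sketch in Python of what a build pipeline does at its core: run a fixed sequence of steps and stop at the first failure. The step names and commands are placeholders of my own, not any particular tool's configuration.

```python
import subprocess

def run_pipeline(steps):
    """Run each named shell step in order; stop and report on the first failure."""
    for name, cmd in steps:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        if result.returncode != 0:
            return f"FAILED at {name}: {result.stderr.strip()}"
    return "SUCCESS"

# Hypothetical pipeline: the commands are stand-ins for a real build/test/report.
pipeline = [
    ("build", "echo compiling"),
    ("test", "echo running tests"),
    ("report", "echo publishing results"),
]
print(run_pipeline(pipeline))
```

A real ALM tool wraps exactly this loop in scheduling, notifications, and history, but the heart of it is: no human in the loop between commit and result.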
But it’s not just automation. Code and test repositories are the core of development; they hold part of the knowledge about the product. As ALM tools have encompassed more development operations, like requirement and test management, they became (or aspired to be) a one-stop shop for the product development knowledge base. No tool has mastered that (yet), but people still go there to look for branches and tags signaling different releases, commits with comments, requirement evolution, and change requests. The tools now include project management and tracking reports. Every piece of information is cross-correlated with all the related items: requirement to code to test to bug to status, and so on.
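As a sketch of what that cross-correlation looks like underneath, here is a minimal Python model of linked work items. The item kinds, IDs, and the `trace` helper are illustrative inventions, not any real tool's schema; the point is that once items link to each other, one query can walk from a requirement all the way to its bugs.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    kind: str                                   # "requirement", "commit", "test", "bug"
    item_id: str
    links: list = field(default_factory=list)   # ids of related items

def trace(items, start_id):
    """Return every item id reachable from start_id via links (simple BFS)."""
    by_id = {i.item_id: i for i in items}
    seen, queue = set(), [start_id]
    while queue:
        current = queue.pop(0)
        if current in seen or current not in by_id:
            continue
        seen.add(current)
        queue.extend(by_id[current].links)
    return sorted(seen)

# Made-up chain: a requirement implemented by a commit, verified by a test,
# which uncovered a bug.
items = [
    Item("requirement", "REQ-1", ["COMMIT-7"]),
    Item("commit", "COMMIT-7", ["TEST-3"]),
    Item("test", "TEST-3", ["BUG-9"]),
    Item("bug", "BUG-9", []),
]
print(trace(items, "REQ-1"))
```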
In the first years there were only tools. But as vendors learned what their customers wanted, they improved the tools to support them. IBM, Microsoft, HP, and others have a large customer base, and the information they collected identified patterns of work. Some patterns emerged as dominant. At first, templates appeared for change requests and bug management. Later, as the tools encompassed more of the development cycle, we started seeing project recipes. At first these were waterfall ones; today they are more agile-centric, and we’re starting to see “scaled” templates. There’s a sound (although not always correct) assumption here: if so many people use these patterns, then they should be suggested to other users as well.
The final piece of the puzzle: with work items tracked within a known template, tools could now offer not just suggestions on how to do the work, but also on how to measure it. Indeed, when the system holds all this data, it is very easy to track the time between bugs and fixes, how many items are produced in a sprint, or the code coverage of the tests. All that’s left is to report it, and even to make suggestions based on the collective knowledge that went into the development of the ALM tools.
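As a sketch of how simple those metrics become once the system holds the data, here is a minimal Python example; the bug dates and line counts are made up for illustration.

```python
from datetime import date
from statistics import mean

# Illustrative bug records: (opened, fixed) date pairs.
bugs = [
    (date(2017, 3, 1), date(2017, 3, 4)),
    (date(2017, 3, 2), date(2017, 3, 3)),
]

def mean_days_to_fix(records):
    """Average number of days between a bug being opened and fixed."""
    return mean((fixed - opened).days for opened, fixed in records)

def coverage(covered_lines, total_lines):
    """Simple line-coverage percentage."""
    return 100.0 * covered_lines / total_lines

print(mean_days_to_fix(bugs))   # average fix time in days
print(coverage(850, 1000))      # coverage percentage
```

The computation is trivial; what made it hard before ALM tools was that the opened/fixed dates, sprint contents, and coverage numbers lived in different places, if they were recorded at all.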
Once vendors had productivity-enhancing tools, recipes, metrics, and more inside one big box, they had a winning product. Who wouldn’t want that? However, with great power come great annoyances. We’ll talk about those in the next part.
Published at DZone with permission of Gil Zilberfeld, DZone MVB. See the original article here.