HPE Software Testing Tools Changed Hands: Why It Doesn’t Matter
If your current testing solution isn’t addressing your challenges, it doesn’t matter what company owns the tool; it’s time to change platforms.
By now everyone knows that the Micro Focus-HPE split is a done deal. We’re not going to bore you with yet another vendor-driven blog using this news to stir fear, uncertainty, and doubt in the hearts of the many software testers who’ve become accustomed to using HPE.
Why are we holding back? Because the spinoff news really doesn’t matter.
Ultimately, the current owner of the Mercury > HP/HPE > Micro Focus testing tool platform is irrelevant. What matters is how it’s working for your organization.
Does it truly help you to deliver quality feedback at the speed, scope, and level that your organization expects today? Will it help you advance your organization’s top digital transformation initiatives? Or will it be the dead weight holding you back as the organization continuously edges towards faster, leaner processes?
Not sure? Consider the following:
- Does your organization view testing as a necessary evil that impedes velocity?
- Does your testing platform guide you to the testing activities that deliver the biggest business impact in the least amount of time?
- Are you constantly scrambling to provide consumable quality feedback that’s not rife with false positives and redundancies?
- Are you consistently delivering test results that directly influence go/no-go decisions at the business level?
If you’re not satisfied with the effectiveness of your testing platform, you’re not alone. Software testing tool vendors have been tempting enterprises with the promise of test automation for more than two decades now, but few companies have achieved the desired business results from their automation initiatives. Recent studies report that test automation rates average around a dismal 20%.
The most commonly used software testing tools today are predicated on old technology, but enterprise architectures have continued to evolve over the years. Development no longer focuses on building client/server desktop applications on quarterly release cycles — with the luxury of month-long testing windows before each release.
Almost everything has changed since test automation tools like those by Mercury, HP, Micro Focus, Segue, Borland, and IBM were developed. Retrofitting new functionality into fundamentally old platforms is not the same as engineering a solution that addresses these needs natively.
Moreover, scripts are cumbersome to maintain when developers are actively working on the application. The more frequently the application evolves, the more difficult it becomes to keep scripts in sync. What does this mean for Continuous Delivery? Teams often reach the point where it’s faster to create new tests than to update the existing ones. This leads to an even more unwieldy test suite that still (eventually) produces a frustrating number of false positives as the application inevitably continues to change. Exacerbating the maintenance challenge is the fact that scripts are as vulnerable to defects as code—and a defect in the script can cause false positives and/or interrupt test execution.
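To make the maintenance problem concrete, here is a toy sketch (not any vendor's API) of why UI test scripts fall out of sync with an evolving application. The "page" is modeled as a simple list of (tag, id) pairs; a locator that depends on element position breaks the moment developers insert a new element, producing a failure even though the feature still works, while a locator keyed on a stable identifier survives the change.

```python
# Toy model: a page is a list of (tag, id) pairs.
def find_by_position(page, index):
    """Brittle locator: relies on element order (like an XPath such as //div[2])."""
    return page[index]

def find_by_id(page, element_id):
    """Stable locator: keyed on an identifier the team agrees not to change."""
    return next(e for e in page if e[1] == element_id)

v1 = [("header", "hdr"), ("button", "submit"), ("footer", "ftr")]
# Developers add a banner; every positional locator after it now points at
# the wrong element, so the script fails even though the app still works.
v2 = [("header", "hdr"), ("div", "banner"), ("button", "submit"), ("footer", "ftr")]

assert find_by_position(v1, 1)[1] == "submit"   # passes against v1
assert find_by_position(v2, 1)[1] != "submit"   # spurious failure against v2
assert find_by_id(v2, "submit")[0] == "button"  # id-based locator still works
```

Multiply this fragility across thousands of scripts and every sprint's worth of application changes, and the maintenance burden described above follows naturally.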
The combination of false positives, script errors, and bloated test suites creates a burden that few QA teams can overcome. It’s a Sisyphean effort—only the boulder keeps growing larger and heavier.
This is the #1 problem that software testing teams are facing today. If your current testing solution isn’t addressing this challenge, it doesn’t matter what company owns the tool, how your licensing fees are changing, or who’s responsible for its evolution. The bottom line is that you need to look elsewhere.
Opinions expressed by DZone contributors are their own.