Measure Your Test Automation Maturity
Feel free to grade your team's maturity through my research. By the end of the article, you'll have your Test Automation Maturity Level.
I'm a Developer Advocate, and one of the things I love most about my role is that I travel all over the world — meeting and consulting with engineering teams, and discussing the challenges they face.
One thing that I've realized about building quality software is...the struggle is real!
Everyone is trying to figure out how to rapidly produce software that is still of high quality. So we did some research (shout-out to Moshe Milman, who helped with this effort) and gathered best practices from some of the top companies in the software, financial services, healthcare, gaming, and entertainment verticals. I'll share what these innovative development teams are doing to achieve great success with their test automation initiatives.
As I go through the points of research, feel free to grade your team's maturity in that respective area. By the end of the article, you'll have your Test Automation Maturity Level.
For starters, 100% of the companies we researched employ automated tests to expedite their release cycles. When the goal is to release software on a continuous cycle, test automation is a must-have. There simply isn't enough time to manually test every new feature as well as manually execute regression tests to make sure existing functionality isn't broken. So these teams invest an extensive amount of effort into automating their tests so that they are confident in their product each time they deploy.
I know from personal experience how difficult it is for developers to find the time to write tests, and also how difficult it is to have test teams write the code to automate tests. So we inquired about this a bit more to determine how teams are overcoming these challenges.
Every single one of these companies has its developers involved in writing tests. Many of them said their developers take care of the unit tests, while the QA team is responsible for writing the integration and end-to-end tests.
A whopping 60% of the teams shared that they no longer have the distinction between development and QA engineers, and instead have hybrid engineers. Their goal here is to have developers own ALL the testing of their code, as well as the triaging and maintenance of those tests.
What they discovered is what I already knew: developers aren't the best at this. There's not much time, and frankly not much interest, for developers to go beyond writing their unit tests. So, many of these teams have had to bring in qualified experts to help out.
I dug a bit more to learn exactly how these Quality Advocates are assisting. We got a variety of answers, but here were some of the common ones:
- Write test infrastructure
- Coach developers on how to write better tests
- Develop a testing strategy
Let's discuss each of these...
Write Test Infrastructure
The Quality Advocates find the best testing libraries, create the test automation codebase, and build all of the utility functionality the developers will need to write their tests. That way, there's not much overhead for the developers; they can just focus on cranking out their tests.
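As a concrete example of the kind of shared utility a Quality Advocate might add to the test codebase, here is a small polling helper sketched in Python. The function name and defaults are my own illustration, not from the article:

```python
import time


def retry_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Utilities like this keep flaky waiting logic out of individual tests:
    developers call one well-tested helper instead of sprinkling sleeps
    throughout their test code.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")
```

A test would then call something like `retry_until(lambda: page.cart_badge_count() == 2)` instead of hand-rolling its own sleep loop.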
Coach Developers on How to Write Better Tests
Unfortunately, many of the Computer Science and bootcamp programs that graduated your developers did not teach them how to test. This is a huge hurdle for developers who may have good intentions and want to test their code. They may not ever share this with you, but a LOT of the developers that I speak with simply don't know how to test. These quality advocates specialize in this stuff and can help the developers think of scenarios, as well as teach them how to write good tests. If you think this may be a problem for your developers and you don't have a quality advocate just yet, send them to Test Automation University, which is an online learning platform that provides free courses on this very thing.
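One concrete habit such coaching often instills is the arrange-act-assert structure for tests. The sketch below is my own minimal illustration of that pattern; the `Cart` class is invented for the example:

```python
import unittest


class Cart:
    """Toy shopping cart used only to demonstrate test structure."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


class CartTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        # Arrange: put the object under test into a known state
        cart = Cart()
        cart.add("book", 10.0)
        cart.add("pen", 2.5)
        # Act: perform exactly one behavior
        total = cart.total()
        # Assert: verify one observable outcome
        self.assertEqual(total, 12.5)
```

Each test reads as a small story with one behavior and one outcome, which makes failures easy to diagnose.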
Develop Testing Strategies
Finally, quality advocates develop testing strategies for the team. They help them assess risk and come up with a plan of attack on what should be tested and how thoroughly.
They also have a big picture view which is greatly needed because your developers are zoned in on their features and their tests. Someone needs to consider how these features interact with one another so that more sophisticated tests can be developed.
Someone also needs to strategize on which tests automatically run given certain pull requests. The advocate can help with that.
The quality advocates also help keep the test suites relevant by pruning out tests that are no longer of high business value.
Criteria: Does Your Team Automate Any Test?
If your team automates any tests at all, go ahead and give yourself 10 points!
Types of Automated Tests
We wanted to make sure that we were talking about more than unit tests here, so we inquired about which tests the companies automated.
- Every one of these companies automated their unit, web, and API tests.
- 80% of the companies who develop mobile apps automated their mobile tests.
- 80% of companies create reusable web design components and automated tests for those.
- However, the core development teams put very little effort into automating non-functional tests. Areas like security, performance, and accessibility testing were mostly handled by separate groups, like Centers of Excellence.
Criteria: Which Type of Tests Do You Automate?
Give yourself 10 points for each of the types you automate: unit, web/mobile, API, security, performance, and accessibility. If your team does not develop mobile apps, give yourself the 10 points anyway so that you don't have a deficit. The same applies if your company does not develop web apps or APIs.
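The scoring rule above can be sketched in a few lines. The helper below is my own illustration — the category list and the not-applicable rule mirror the text, while the function name is invented:

```python
# The six test types scored in this section, 10 points each.
TEST_TYPES = ["unit", "web/mobile", "api", "security", "performance", "accessibility"]


def type_points(automated, not_applicable=()):
    """Score 10 points per automated test type.

    Types that don't apply to your product (e.g., mobile for a
    web-only shop) score their 10 points anyway, so teams aren't
    penalized for platforms they don't ship.
    """
    points = 0
    for test_type in TEST_TYPES:
        if test_type in automated or test_type in not_applicable:
            points += 10
    return points
```

For example, a team that automates only unit and API tests and ships no web or mobile UI at all would score `type_points({"unit", "api"}, not_applicable={"web/mobile"})`, i.e., 30 points.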
I found this interesting because it aligns with what some thought leaders have been preaching for years. I'll admit, I've been a bit stubborn over my career: I tend to go for the language that the automation engineers will be most comfortable in. But it seems perhaps I need to rethink that when I want the developers to contribute.
For web automation, Cypress had the most adoption at 60%, followed by Selenium and WDIO at almost 40%. A tiny percentage use their own in-house libraries; these were mostly the gaming companies.
Mobile was split evenly across Appium and the native tools Espresso for Android and Apple's XCUITest for iOS.
Applitools was used by all of the top dog companies for visual testing: 100% of the companies use Applitools along with their testing framework for web tests, and 60% also use Applitools to test their design components.
None of the teams are using a codeless record and playback approach — which is not surprising considering their developers are writing the tests.
It was really interesting to see such a high percentage of these companies using Cypress considering (until a couple of months ago) it only supported Chrome. So we asked the companies about their strategies when it comes to testing across multiple browsers, and viewport sizes.
A few said they no longer do it, citing flakiness, wasted time and effort, and a lack of browser diversity now that many of the browsers are built on Chromium.
The others use device farms from cloud providers and many are using Applitools' new Ultrafast Grid.
Criteria: Do You Do Cross-Platform Testing?
Continuous Integration and Continuous Deployment (CI/CD)
All of the elite teams we spoke with are practicing continuous integration or continuous deployment. This enables them to release features to their users faster and, again, test automation is a key enabler for this.
But none of them have a "set and go" type of process. They aren't running their thousands of tests on every pull request.
They are using more sophisticated practices to execute only the tests related to the area being modified. Some of the practices cited were:
- Tagging the tests by feature area, so if a "Shopping Cart" feature is checked in, then all of the tests tagged shopping cart are executed
- Using code coverage tools to map between tests and source code
The rest of the tests are still executed, but periodically throughout the day (e.g., every 3 hours).
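Both selection practices boil down to a mapping from a change to a subset of the test suite. Here is a minimal sketch of the tagging approach, assuming tests declare feature tags; the registry and function names are invented for illustration:

```python
# Hypothetical registry mapping each test to its feature tags.
TEST_TAGS = {
    "test_add_item": {"shopping-cart"},
    "test_checkout_total": {"shopping-cart", "payments"},
    "test_login": {"auth"},
}


def select_tests(changed_features):
    """Return tests tagged with any feature the pull request touches.

    Everything not selected here is deferred to the periodic run
    (e.g., the every-3-hours full-suite execution).
    """
    return sorted(
        name for name, tags in TEST_TAGS.items()
        if tags & set(changed_features)
    )
```

A pull request touching the shopping cart would run `select_tests(["shopping-cart"])`, picking up `test_add_item` and `test_checkout_total` while leaving `test_login` for the periodic run.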
The vast majority of the elite companies that we spoke with are using Jenkins. They did mention that a few one-off teams in their companies are using other tools like AWS, TeamCity, Kubernetes, and Google Cloud.
Criteria: Are Your Automated Tests Executed as Part of a CI/CD Pipeline?
For this one, you get 15 points! It's higher than the others because tests that gate integrations and deployments have to be reliable, and that shows a great level of maturity.
Another cool technique a lot of the successful companies are using is feature flagging. This allows them to push to prod but hide the feature from their customers. With this in place, they can do a more thorough job of testing it, or even release it to a small subset of their customers and then monitor to see if there are any issues.
Some other benefits cited were the ability to deploy faster as they didn't have to wait for other dependent components to be done.
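A minimal sketch of a percentage-rollout flag check follows, assuming a deterministic hash of the user ID so a given user always sees the same variant. The function and parameter names are my own, not from any of the surveyed teams:

```python
import hashlib


def flag_enabled(flag_name, user_id, rollout_percent):
    """Deterministically bucket a user into [0, 100) and enable the
    flag for the first `rollout_percent` buckets.

    Hashing the flag name with the user ID means the same user always
    lands in the same bucket, so their experience is stable across
    requests, and different flags roll out to different user subsets.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent
```

Setting `rollout_percent=0` hides a merged-but-unreleased feature from everyone; raising it to 5 releases it to roughly 5% of users, who can then be monitored for issues before a full rollout.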
Criteria: Do You Use Feature Flagging?
What's Your Maturity Level?
Below you can see your maturity level based on the number of points you've accrued.
While many of the successful companies we researched were Advanced, most of the companies I visit and consult with are Average or below — and that's OK. As you continue improving your automation initiative, hopefully the research and metrics here can help you on that journey.
Published at DZone with permission of Angie Jones. See the original article here.