10 Tips to Improve Automated Performance Testing Within CI Pipelines (Part 2)
Automation, small targeted tests, and smoke tests are in this next set of CI pipeline-enhancing tips.
The following is part two of our three-part series, "10 Tips to Improve Automated Performance Testing within CI Pipelines." This week's installment focuses on tips 3-6: keeping tests small and targeted, testing segments before the whole, automating things that are not flaky, and saving time through smoke testing.
You may also enjoy: 10 Tips to Improve Automated Performance Testing in CI Pipelines (Part 1)
3. Keep Tests Small and Targeted
It's cheaper to fix a little problem immediately than a bigger one later on. Ask any homeowner who has let a minor hole in the roof go unaddressed for years, only to wake up one day to a squirrel-infested attic.
Perhaps I've painted too vivid a picture or even one that you've experienced firsthand, but the reality is the same holds for testing. While large-scale testing has a place in the software development lifecycle, the trick for ensuring a cost-efficient testing process is to conduct small tests as early as possible (with each test targeting a particular product component under development). Small tests are easy to create, manage, and resolve provided that you're using the right test management tool.
When I say "small" tests, I don't just mean unit tests. Integration, performance, security, and production verification tests come in all shapes and sizes. Each "testable moment" should be sized to fit the time it takes and the value of the critical feedback it produces, and its results should be visible to all contributors. I often find that even small load tests, run continuously, accomplish a lot by providing longitudinal input on how the performance of a critical business transaction trends over development cycles, provided your teams are looking at those trends.
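As a minimal sketch of such a small, targeted performance test: the transaction name and the 250 ms latency budget below are hypothetical, and the stand-in function would be a call to your real service in practice.

```python
import statistics
import time

def checkout_transaction():
    # Stand-in for the critical business transaction under test;
    # in practice this would exercise the real service.
    time.sleep(0.005)

def run_small_load_test(transaction, iterations=50):
    """Run a small, targeted load test and return latency stats in ms."""
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        transaction()
        latencies.append((time.perf_counter() - start) * 1000)
    latencies.sort()
    return {
        "p50_ms": statistics.median(latencies),
        "p95_ms": latencies[int(0.95 * (len(latencies) - 1))],
    }

stats = run_small_load_test(checkout_transaction)
# Fail fast if the transaction regresses past its (assumed) 250 ms budget.
assert stats["p95_ms"] < 250, f"p95 regression: {stats}"
```

Logging `stats` on every build is what produces the longitudinal trend data mentioned above.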
4. Test the Segments Before the Whole
Just as small, well-focused tests save time and money, so will a testing process that front-loads testing parts of a system before end-to-end testing is executed.
System segment testing before the whole makes sense. Think about it. General Motors doesn't perform a test run on an assembly line until it can ensure that every station involved works as it ought to. I'm not aware of any plant manager who would allow otherwise. The risks are many, and none of them are worth the sacrifice.
The same is true for enterprise-grade software development, more so now that artificial intelligence is powering more of the day-to-day activity. Companies with a history of software delivery success understand that it's a complex undertaking both for engineering and management. These companies recognize that a system is only as good as its constituent parts. Therefore, to save time and money, and reduce risk, experienced companies test the segments of a system first. Such testing includes performance, functionality, security, and regulatory compliance. Once the segments have successfully passed, the same types of tests are conducted on the system overall. Anything less is impractical.
Yes, you can test the whole system before testing its parts. But why would you want to?
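The segments-first gate can be sketched in a few lines. This is an illustrative Python sketch, not a real pipeline definition; the suite names are hypothetical, and in a real setup each callable would be a Jenkins or TeamCity build step.

```python
def gate(segment_suites, end_to_end_suite):
    """Run segment-level suites first; run the whole-system suite only
    if every segment passes."""
    for name, suite in segment_suites:
        if not suite():
            return f"blocked at segment: {name}"
    return "end-to-end: passed" if end_to_end_suite() else "end-to-end: failed"

# Stand-in suites; each callable returns True on success.
segments = [
    ("checkout-performance", lambda: True),
    ("auth-security", lambda: True),
]
result = gate(segments, lambda: True)  # end-to-end runs only after segments pass
blocked = gate([("billing", lambda: False)], lambda: True)  # end-to-end never runs
```

The point of the gate is that the expensive whole-system suite is never invoked while any segment is failing.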
5. Automate Things That Are Not Flaky
The days of manual point-and-click testing have long passed. The same goes for war rooms full of sysadmins bracing web servers for traffic assaults and security penetration tests. To meet the demands of the modern marketplace, businesses need to release code as fast as possible, at rates exceeding human capability. The only way to do this is through software development automation.
Today, automation touches every phase of code creation. On the development side, we see a growing number of automated code generation tools freeing programmers from having to write typically re-used code. In terms of quality assurance, automation is applied from unit to integration testing, performance to end-to-end testing. Of course, the entire deployment process has been automated using CI/CD tools such as Jenkins and TeamCity.
While automation can and should be incorporated as much as possible, it's not a panacea. Specific situations still need human attention. Many of these are one-off episodes, like conducting a performance test on a scenario that simulates web traffic to an advertiser's site during Super Bowl Sunday. Other cases tend to be the ones that are just plain flaky and, as the name suggests, defy logic, like when the code that absolutely should work, doesn't. Troubleshooting these goes well beyond the capabilities of scripted automation.
Companies that get it recognize these situations and understand that the most cost-effective way to address them is through human intelligence. As a result, forward-thinkers automate all that can be automated, yet waste no time shoehorning automation into situations better handled by human execution.
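One way to decide what is safe to automate is to measure flakiness directly: run a check repeatedly, and if it produces mixed results, route it to a human instead of the automated suite. The sketch below is a simple illustration, and both check functions are hypothetical stand-ins.

```python
import random

def is_flaky(check, runs=20):
    """Run a check repeatedly; mixed pass/fail results mark it as flaky."""
    results = {bool(check()) for _ in range(runs)}
    return len(results) > 1

def stable_check():
    return 2 + 2 == 4

rng = random.Random(42)  # seeded so the sketch is reproducible
def unstable_check():
    # Stand-in for a check with a timing-dependent failure mode.
    return rng.random() > 0.3

stable_flaky = is_flaky(stable_check)      # stable: safe to automate
unstable_flaky = is_flaky(unstable_check)  # flaky: route to a human
```

A real quarantine workflow would do this over test history rather than reruns, but the decision rule is the same: consistent results get automated, inconsistent ones get human attention.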
6. Save Time, Use Smoke Tests
There's a difference between making sure that all the code intended for a production release is thoroughly tested and testing all the code designed for a production release all the time. I'm sure I don't have to remind you that each test run costs time and money, regardless of the level of automation included. It takes a certain strategic vigilance to decide whether to pull the trigger on full-scale testing or not. Having CI/CD automation drive the unit test execution immediately after a programmer checks in the code is a responsible use of time and resources. It's always better to identify code-level issues as close to the developer as possible. Allowing unit testing to go further into the release cycle is an unwise risk.
However, once code starts moving along the deployment path onward to component, integration, and performance testing, you'll do well to be judicious about when to execute a full-blown test regimen. This is not to say that you never conduct a set of comprehensive tests against the entirety of the system that's on its way to production. Instead, I want to make sure that when you're performing "expensive" testing, you're getting the most out of the investment.
One of the best ways to reduce the time and money associated with large-scale testing is to use smoke testing. A smoke test's value is that it looks only for show-stopping problems before diving into finer-grained testing. For example, if your company has a demonstrable history of continuously having to fix data validation errors while only having to address memory shortage errors every so often, then a smoke test focused on data validation makes perfect sense. However, if your applications have a history of pegging out CPUs (as much as you wish they didn't), then the prominent place to smoke test is CPU utilization.
Smoke testing is never a substitute for comprehensive full-scale testing. If your applications are going to have issues, failure will likely occur in places already known to be problematic. Therefore, smoke test the obvious matters first.
Smoke testing saves time as it focuses on addressing the apparent issues upfront. If the code gets through the smoke test, it then makes sense to go into more detailed testing. If the code's smoke test is unsuccessful, there's no benefit to moving it forward.
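A smoke-test gate along these lines can be sketched as follows. The check names mirror the examples above (data validation first, then a CPU sanity check), and both check implementations are hypothetical stand-ins.

```python
def validate_sample_records():
    # Stand-in: validate a handful of representative records.
    sample = [{"id": 1, "total": 9.99}, {"id": 2, "total": 0.0}]
    return all("id" in r and r["total"] >= 0 for r in sample)

def cpu_utilization():
    # Stand-in: in practice, read this from your monitoring system.
    return 0.42

def run_smoke_tests(checks):
    """Run cheap show-stopper checks first; any failure blocks the
    expensive full-scale test run."""
    return [name for name, check in checks.items() if not check()]

checks = {
    "data_validation": validate_sample_records,
    "cpu_headroom": lambda: cpu_utilization() < 0.9,
}
failures = run_smoke_tests(checks)
```

If `failures` is non-empty, there's no benefit to moving the build forward into detailed testing; an empty list is the green light for the comprehensive suite.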
Next week, in the last of the 10 tips three-part blog series, we will dive into tips 7-10.
Published at DZone with permission of Sam Kent. See the original article here.
Opinions expressed by DZone contributors are their own.