Catching Bugs Too Late
It's hard work, but keeping everything green is essential to avoiding huge costs down the line.
Putting quality first is critical. Teams must take ownership of quality, but to do so they have to create an environment that lets them build quality in, instead of testing it out much further down the road to delivery. Finding bugs late is too costly; if you aren't yet at the point of being able to prevent them (for example, by implementing BDD), make sure you can at least find them early.
Staying Green is Hard Work!
I’ve seen many things change this year. My daughter began kindergarten. (How did THAT happen so fast?!) I also began blogging, and our department is trying to shift from Waterfall to Agile and Continuous Delivery, with teams shifting to own quality instead of tossing code over to QA… all great changes. But one thing remained the same: we were still finding bugs late.
I’ve written many times about the importance of quality first. But how did our team take action on that? First, we HAD to have automation. Purely manual testing was just not going to cut it anymore. Don’t get me wrong, I still very much value human-based testing. But frankly, it can catch things too late.
So, enter our automated tests. We began with what we called our pre-commit tests. These must be run — you guessed it — before you commit code! Yes, they are slower than unit or integration tests, but they take only around 7-8 minutes (enough time to go grab some coffee, stretch, whatever), and they cover our most critical features and workflows. Aside from running locally before commits, they are also scheduled to run many times throughout the day as commits come in. Once we established that set of tests, we began work on more user acceptance tests – still hero workflows, but keeping in mind the fine line between useful UI tests and too many tests (think of the testing pyramid).
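One way to make "run before you commit" automatic is a Git pre-commit hook. This is a minimal sketch, not the author's actual setup: `PRECOMMIT_CMD` is a hypothetical stand-in for your team's test runner (it defaults to a no-op here so the sketch runs as-is), and you would point it at your real suite.

```shell
#!/bin/sh
# Minimal sketch of a Git pre-commit hook: save as .git/hooks/pre-commit
# and mark it executable. Git aborts the commit if this script exits
# non-zero. PRECOMMIT_CMD is a hypothetical placeholder for your team's
# pre-commit test runner; it defaults to the no-op "true" so this sketch
# is self-contained.
PRECOMMIT_CMD="${PRECOMMIT_CMD:-true}"

echo "Running pre-commit test suite..."
if ! $PRECOMMIT_CMD; then
    echo "Pre-commit tests FAILED; commit aborted." >&2
    exit 1
fi
echo "Pre-commit tests passed."
```

A local hook like this complements, rather than replaces, the scheduled runs in CI: the hook catches problems before a commit ever lands, while the scheduled runs catch interactions between commits.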
Unfortunately, we entered what I call the “dark period,” where our once-green tests were failing. The reasons were many. (That’s another story for another day.) Resources were shifted (or flat-out gone), and priorities changed. Long story short, we had no one available to write tests or tend to them. It felt like we were going back to square one. People didn’t trust the tests. And if you can’t trust the tests, what’s the point?
Fast forward several months, and everyone recognizes that we need the automated tests. We are now in the process of stabilizing them. We focused on the pre-commit tests first and got them green again – yay! They are so green now that when there’s a failure, we know it is something in the code (and we don’t automatically assume that it’s the test). Now we are moving on to the other tests.
It Works! It Really Works!
Once we were stable on those critical tests, we had to figure out how to get people to care. I was suddenly in the business of sales!
First, we had to show the tests were stable – show everyone they weren’t flaky, suffering from timing issues, etc. We had about twenty solid builds of GREEN. Pretty! But even better than the nice soothing green on our Jenkins dashboard, we had stability.
Then, we had to show they were catching things. (It seems counterintuitive, wanting to see your test suite fail, but stay with me a minute). My team (consisting of one other person) was constantly running the tests – even locally, between the scheduled runs on Jenkins. We recruited a few engineers to run these tests prior to committing their code. Then came the bugs – and our tests caught them! At first, we held our breath as we debugged to see if it was the test before alerting the engineering managers. (It wasn’t! We found a bug!) Since teams had originally deemed the workflows as critical, these bugs were prioritized quickly, fixed, and we were back to green.
Don’t Find Them Too Late — Or You’ll Pay
Automation has been critical to our success. While we are still working on it, having a set of useful tests (even a small set) has proven its worth. We have caught several bugs that otherwise would not have been found until up to two weeks later. Why does this matter? (Greg Sypolt discusses the cost of a bug depending on when it is found in this blog post, based on research presented by IBM at the AccessU Summit 2015.) Say you find a bug when running locally as you’re still working on that feature – That’s $25. Wait until a test cycle? $500. Find the bug in production? You’re looking at a cool 15 grand. That’s right. $15,000.
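To put those numbers together, here is a quick back-of-the-envelope sketch. The dollar figures per stage come from the article; the code and its names are purely illustrative.

```python
# Per-bug cost by the stage at which the bug is found, using the
# figures cited in the article (from the IBM research Greg Sypolt
# references).
COST_PER_BUG = {
    "local development": 25,
    "test cycle": 500,
    "production": 15_000,
}

def savings(bugs_caught_early: int) -> int:
    """Dollars saved by catching bugs locally instead of in production."""
    per_bug = COST_PER_BUG["production"] - COST_PER_BUG["local development"]
    return bugs_caught_early * per_bug

# Even a handful of early catches adds up:
print(savings(5))  # 5 bugs caught locally, not in production -> 74875
```

Catching just five bugs during local development rather than in production saves roughly $75,000 by these figures, which is why even a small, stable suite pays for itself quickly.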
As we stabilize our tests, we are reducing the risk of finding bugs in later cycles (whether testing or in production). That adds up. The reality is that you will introduce bugs as you code. It happens. But WHEN you find them is the game changer. Sure, it takes a lot of effort — an effort that many underestimate — but it will save you in the end.
Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices. Most recently, she has focused on test strategy implementation and training, development process efficiencies, and will preach the value of Test-Driven Development to anyone that will listen. In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.
Published at DZone with permission of Matt Werner.