Code Complete Doesn't Mean You're Done
Joe Coder runs through his feature in the UI. It works. The doodad renders beautifully on the screen, and when he clicks the button, all the right things happen on the server. He checks his code in, writes a quick comment in Jira, changes the issue status to “Completed & Checked-in”, and goes to his next task. Lo and behold, his To Do list is empty! Joe’s done coding!
Or is he?
Configuration Management cuts a release branch from the trunk. Joe’s code goes through QA.
Some QA departments try to break developers’ code; others just test the “happy path” to ensure their test scripts pass. In Joe’s case, the QA department has a test script that is a step-by-step use case for how the feature should work. QA signs off on his feature. The doodad worked exactly as specified, which is to say, the select/combo box was correctly filled with test data from the database.
Joe’s code goes to production … and takes down an entire server.
How can this be? It went through QA! Testers verified that his code did what it was supposed to do!
Most software organizations use the term robust incorrectly. I generally hear it used in a context that implies the software has more features. Robustness has nothing to do with features or capabilities! Software products are robust because they are healthy and strong, and don’t fall over when the table that populates a select/combo box has a million rows in it.
Joe Coder never tested his feature with a large result set. Michael Nygard, in his excellent book Release It! from the Pragmatic Bookshelf, calls this problem an unbounded result set. It’s a common problem and an easy trap to fall into.
Writing code for a small result set enables rapid feedback for a developer. This is important, because the developer’s first task is to write his program correctly. It seems, though, that this first step is oftentimes the only step in the process! Without testing the program against a large result set, the new feature is a performance problem waiting to happen. The most common consequences are memory errors (have you ever tried filling a combo/select box with a million rows from the database?) or unscalable algorithms (see Schlemiel the Painter’s Algorithm).
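One common guard against unbounded result sets is to put a hard cap on how many rows you are willing to pull into memory, and to record whether the cap was hit so the UI can say so. Here is a minimal sketch of that idea; the `fetchBounded` and `BoundedResult` names are illustrative, not from any particular framework:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Bounded fetch sketch: instead of loading every row a query returns,
// cap the result set at a hard limit and flag truncation so the caller
// can display "showing first 500 of many" rather than a million rows.
public class BoundedFetch {

    public static final class BoundedResult<T> {
        public final List<T> rows;
        public final boolean truncated;

        BoundedResult(List<T> rows, boolean truncated) {
            this.rows = rows;
            this.truncated = truncated;
        }
    }

    public static <T> BoundedResult<T> fetchBounded(Iterator<T> cursor, int limit) {
        List<T> rows = new ArrayList<>();
        while (cursor.hasNext() && rows.size() < limit) {
            rows.add(cursor.next());
        }
        // If anything remains on the cursor, the query exceeded the bound.
        return new BoundedResult<>(rows, cursor.hasNext());
    }
}
```

With a real database you would push the limit into the query itself (for example, JDBC’s `Statement.setMaxRows`), but the shape of the defense is the same: the code, not the size of the table, decides how much data comes back.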
In our case, testing with big numbers revealed concurrency issues that we did not, and could not, find when developing with simple, smaller tests. Our multi-threaded, multi-node messaging system would routinely deadlock whenever we slammed it with lots and lots of messages. It never did this when we posted simple messages to the endpoint during development. Similarly, we noticed that we held on to objects for too long during big batch runs. They all got garbage collected when the batch completed, so it wasn’t exactly a memory leak, but there was no need to hold on to the references. After we fixed that, our servers stayed within a tight, predictable memory band instead of GC’ing all at once at the end of a batch. The Terracotta server expertly pages large heaps to disk, so we were never at risk of running out of memory. Still, it’s nice to see our JVMs running lean and mean.
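The reference-holding fix boils down to releasing each message as soon as it is handled, so the collector can reclaim objects mid-batch rather than all at once at the end. A minimal sketch of that pattern (the `BatchRunner` and `handle` names are hypothetical stand-ins for real processing code):

```java
import java.util.Iterator;
import java.util.List;

// Sketch of the "don't hold references for the whole batch" fix:
// process each message and drop the reference immediately, so the
// list never pins the entire batch in memory until completion.
public class BatchRunner {

    public static int processAndRelease(List<String> batch) {
        int processed = 0;
        for (Iterator<String> it = batch.iterator(); it.hasNext(); ) {
            String msg = it.next();
            handle(msg);
            it.remove(); // release the reference now, not at batch end
            processed++;
        }
        return processed;
    }

    private static void handle(String msg) {
        // stand-in for real message processing
    }
}
```

The observable effect is exactly what the paragraph above describes: heap usage stays in a band proportional to one message, not to the whole batch.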
We’re still stress testing our system today. This past weekend, we pumped over one million real-world messages through our message bus. Our concurrency issues are gone, memory usage is predictable, and we stopped our test only to let our QA department have at our servers. There are zero technical reasons why our test couldn’t have processed another million messages. Terracotta Server held up strong throughout.
But we’re still not done testing yet. We still have to see what happens if we pull the plug (literally) on our message bus. We still have to throw poorly written messages/jobs at our bus to see how they’re handled. We need to validate that we’ve got enough business visibility into our messaging system so that operations folks have enough info at runtime to do their jobs. We need canaries in the coal mine to inform us when things are going wrong, and for that we need better metrics and reporting built into our system that shows performance over time.
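A canary in this sense can be as simple as tracking throughput per interval and raising a flag when it dips below an expected floor. This is only a sketch of the idea; the class name and threshold are illustrative, and a real system would export these numbers to a monitoring tool rather than poll a boolean:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical throughput canary: keep a rolling window of
// messages-processed-per-interval samples and alarm when the most
// recent interval falls below a configured floor.
public class ThroughputCanary {
    private final Deque<Long> samples = new ArrayDeque<>();
    private final int window; // number of recent samples to retain
    private final long floor; // alarm when a sample drops below this

    public ThroughputCanary(int window, long floor) {
        this.window = window;
        this.floor = floor;
    }

    public void record(long messagesThisInterval) {
        samples.addLast(messagesThisInterval);
        if (samples.size() > window) {
            samples.removeFirst();
        }
    }

    /** True when the most recent interval dipped below the floor. */
    public boolean alarming() {
        return !samples.isEmpty() && samples.peekLast() < floor;
    }
}
```

The retained window is what lets reporting show performance over time, not just the current instant.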
We’re code complete, but we’re not done yet.