There is something kind of magical about being a software developer. You see a problem, you deconstruct it, you work out how the pieces fit together, and then you cast your spell on the machine sitting in front of you. And with little more than a few (thousand) phrases, which look like gibberish to non-developers, this previously inert machine takes on a life of its own.
As developers, it’s tempting to see software as a golden hammer that can be used to solve all kinds of problems. As Marc Andreessen commented in his now-famous post Why Software Is Eating The World:
More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures.
With so many business processes being defined in software, why not use that same software to improve the processes?
Unfortunately, this kind of mentality often falls down when applied to software developers themselves.
Developers commonly trade away pesky expectations, like writing unit tests and documentation, implementing proper security, getting user feedback about UX, and a whole host of other “boring” tasks, in order to get the shiny out the door.
(I was a little shocked recently to see security described as a boring part of a technology stack that underpins a lot of network applications I personally use.)
So how do we fix this and ensure that tests are written, documentation is kept up to date, and security is addressed? The answer is often “run these checks at compile time”. If you run your static analysis tools on check-in, scan for vulnerabilities at compile time, and do a word count on documentation as part of a monthly report, then surely these issues will just be solved?
This is a trap I have fallen into before, and one that I see others falling into all the time. Because you can’t fix culture at compile time.
If developers have never written unit tests, the notion that some faceless report generated by your CI system will instil this new behaviour is wishful thinking. Developers who don’t write tests have never experienced the “ah-ha” moment of catching an edge case in a test long before spending days tracking down the issue it would have caused in production. Without a compelling reason to change their behaviour, what meaning does a report that says “0/0 tests passed” have for them?
It has no meaning at all.
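To make that “ah-ha” moment concrete, here is a minimal, hypothetical sketch (the `average` function and test are illustrative, not from any real codebase): a naive utility hides an edge case that a unit test surfaces as a clear failure, rather than a confusing crash in production.

```python
# Hypothetical example: a naive average() with a lurking edge case,
# and the unit test that catches it before it ships.

def average(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    if not values:  # the edge case the test below exercises
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)


def test_average():
    assert average([2, 4, 6]) == 4
    # Without the guard above, the next call would surface as a
    # ZeroDivisionError in production instead of a failing test here.
    try:
        average([])
    except ValueError:
        pass
    else:
        raise AssertionError("average([]) should raise ValueError")


test_average()
```

The test costs minutes to write; the production incident it pre-empts can cost days. But a developer only internalises that trade-off by living through it, which is exactly why a compile-time report alone doesn’t change anyone’s habits.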
Developers can’t be programmed as easily as the machines they so effortlessly command. As much as I wish it weren’t true, changing culture takes people skills, not check-in scripts.