I know, it's kind of like saying 'Puppies are Evil.' And I am a LONG-time TDDer. Unrepentant, still a believer that it's a great way to code. However, like all other things, it must be done within practical limits. Yet we've entered the phase now where testing is becoming a pale joke of its former self. I have argued this for a long time. When the Ruby cult went nuts about 100% coverage, the age of the testing phantasm was thereby declared. Folks, things have gotten worse, not better.
I want to weave in another argument: I was starting to think that Apple was just test-washing their tools and had no real interest in providing serious TDD capabilities. But now, in a turn that feels almost like I should volunteer to run and fetch some of the kindling for my own stake-roasting pyre, I have kind of come over to their way of thinking. What way is that? It's the Magnum Force way: navigating by a constant reminder of one's limitations (Dirty Harry stole this from the Greeks). In other words, Apple will adopt things, but then use just enough of them so that the use still makes sense. Part of my seduction into this 'just enough tests' cult was accomplished by realizing that Apple is quietly advancing on other fronts. Sorry, another tangential point here: this is the sign of a healthy organism, one where individuals, or small groups, can innovate. Want to see a bloated, stupid organism? Look at that long thread about whether Linux should support the new Windows 8 boot sequence. Fascists screaming at each other about minutiae after two decades.
Lately, I've been using Logging Breakpoints a lot. I whined on here about how sending integral values to the console with casts is a hideous mess, and I wish Apple would clean that up. But the OO purist in me has found that in actual practice I rarely face this. Instead, I am almost always doing po [object] and getting just what I need, or, if not, providing a description implementation.
When I was at WWDC, I attended the session on the new debugger. Folks, that's a big huge horse that's waiting to unleash some serious whoop-ass on competing platforms. Seriously, while I was watching, I was thinking 'who in the hell is behind this? Did Apple make some kind of limited-time deal with the Prince of Darkness?' It's actually quite entertaining. The narration is like 'oh, remember how having a pale shadow of the typing system always sucked in the debugger? Yeah, we brushed that aside…' I tried to watch it again last week. There's Python scripting in it. There's some seriously ingenious stuff in there for testing parts of the app that have to be navigated to. Really interesting… again, having long ago subscribed to the 'debuggers are evil' treatise, it's pretty hard to reconcile that thinking anymore. Code stuffed to the gills with logging statements about stupid amounts of sonar-pinging silliness is hardly a model of efficiency.
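To make the Python-scripting point concrete, here is a minimal sketch of what a hand-rolled logging breakpoint looks like through LLDB's Python callback hook. The module and function names are my own invention, not anything from the session, and the lldb commands in the comment assume you saved this as log_bp.py.

```python
# Sketch of an LLDB breakpoint callback in Python (names here are made up).
# Inside lldb you would wire it up with something like:
#   (lldb) command script import log_bp.py
#   (lldb) breakpoint set -n viewDidLoad
#   (lldb) breakpoint command add -F log_bp.log_and_continue 1

def log_and_continue(frame, bp_loc, internal_dict):
    """Log the stop and keep going: a do-it-yourself logging breakpoint."""
    print("hit %s" % frame.GetFunctionName())
    # Returning False tells lldb not to stop the process, which is exactly
    # what a logging breakpoint does for you in the Xcode UI.
    return False
```

Once you have a hook like this, the logging lives in the debugger session instead of being stuffed into the code itself, which is rather the point of the whole section above.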
One of the things you feel happening when you wake up into this world is the return of the notion that there is more than one tool in the box. And there's not a single-minded foreman on the job waiting to take a digit off with shears if he finds you not writing a unit test as you make a piece of something work.
The really dark side of tests is that in the non-Apple world they have been transmogrified from works into articles of faith. Groupthink often turns things into a way of demonstrating conformance to a protocol or credo rather than a conveyance to another place. Of all the things on earth that should not have their meaning drained from them, tests should surely rank near the top. It would be like saying 'we have a new test for cancer; it doesn't tell us if you have the disease, but it lets us say we performed it later when you turn up with it.'
The main thread I have seen in places that not only produce useless tests, but seem not to realize, or accept, that their tests are not actually performing their sole function, is that they are usually also places where no one really knows what they are building. And this is the reason why I think this trike flips over so quickly: tests can only really have value when they test behaviors. Most applications have no behavior. Yes, I know how absurd that sounds. But it's true. Now let me add a tiny caveat: the modicum of behavioral complexity that they do have is simply grafted from other sources. Which is why so often tests turn into witless circle jerks: 'call this, verify that this was called.' Speaking of princes of darkness, Nietzsche has something to say on this (and every other) matter:
When someone hides something behind a bush and looks for it again in the same place and finds it there as well, there is not much to praise in such seeking and finding. Yet this is how matters stand regarding seeking and finding “truth” within the realm of reason. If I make up the definition of a mammal, and then, after inspecting a camel, declare “look, a mammal,” I have indeed brought a truth to light in this way, but it is a truth of limited value. That is to say, it is a thoroughly anthropomorphic truth which contains not a single point which would be “true in itself” or really and universally valid apart from man. At bottom, what the investigator of such truths is seeking is only the metamorphosis of the world into man.
Actually, I think he way overshoots the landing pad of most projects, where the reason the programmer stands at the event horizon and notes the back of his own head is that there is no light escaping, as it were: though things are spinning and the center is taking on mass, there is no there there. What the testing world desperately needs is a qualitative measure of tests, and a performance metric. Neither is forthcoming. The only attempts at test innovation from the outside in the last 10 years have been aimed at making it easier to just create a whole crapload of them. Surely a qualitative measure that could answer two things would be of immense value:
- Does this test in fact actually assure us of anything? Not perhaps in the Meyer sense of a mathematical proof, but is it anything more than just a way of checking that our made-up protocols are being conformed to?
- Over some amount of time, does the value of this test bear out?
On point #2, I watched a screencast a week or two ago by Jon Reid about unit testing controllers. That dude is awesome, and I really appreciate what he's doing, but sorry, that screencast convinced me very quickly that he was doing scrimshaw. What do I mean by that? That finishing work was being applied to parts of the app that are so unlikely to cause failures that it can only be seen as a way to make oneself feel better. As I was watching, I thought 'how often do wirings in iOS apps just come undone, and would a test save my ass? Yeah, never. Alrighty then!' OK, that's a tad harsh, but… sometimes Interface Builder, if you wire something up and then shuffle controls around, will end up with a reference to something that doesn't exist. I just open the file in text mode and hunt it down.
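That text-mode hunt can even be sketched as a little script. A minimal sketch, assuming the XML xib format and using a made-up, heavily trimmed xib snippet (real xibs are far bigger): collect every object id in the file, then flag any outlet whose destination is not among them.

```python
import xml.etree.ElementTree as ET

# Made-up, heavily trimmed xib-style snippet: one real label, plus one
# outlet left pointing at an object id that no longer exists.
XIB = """<document>
  <objects>
    <label id="abc-12-def"/>
    <connections>
      <outlet property="titleLabel" destination="abc-12-def" id="c1"/>
      <outlet property="ghostLabel" destination="zzz-99-zzz" id="c2"/>
    </connections>
  </objects>
</document>"""

def dangling_outlets(xml_text):
    """Return outlet properties whose destination id exists nowhere in the file."""
    root = ET.fromstring(xml_text)
    ids = {el.get("id") for el in root.iter() if el.get("id")}
    return [o.get("property") for o in root.iter("outlet")
            if o.get("destination") not in ids]

print(dangling_outlets(XIB))  # → ['ghostLabel']
```

In real life you would read the xib from disk instead of a string literal; the point is just that a dangling destination is mechanically findable, no test suite required.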
On the plus side of the #2 argument, one way a test's value might be demonstrated is by how many times it has failed. That's right: if a test never fails, then it never ended up saving you from anything. And this is one of the really crazy things about tests: they are the ultimate expression of rational desire: 'I want to be sure that what I know to be true holds in the future, as things change around me.' But in there is also the kernel of madness, as, of course, I know not what the future holds.
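To sketch that failure-count idea (all the data below is invented): if your CI history could tell you how many times each test has ever failed, simply ranking tests by lifetime failures would separate the ones that have earned their keep from the ones that never saved anyone.

```python
# Invented CI history: test name -> number of times it has ever failed.
history = {
    "test_sync_retries_on_timeout": 7,
    "test_parser_handles_empty_payload": 2,
    "test_label_outlet_is_wired": 0,
}

def rank_by_earned_keep(history):
    """Sort tests by lifetime failures, most valuable (by this metric) first."""
    return sorted(history.items(), key=lambda kv: kv[1], reverse=True)

for name, failures in rank_by_earned_keep(history):
    print("%-35s %d" % (name, failures))
```

A crude metric, obviously, but it is at least a metric about the tests themselves, which is more than coverage percentages give you.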
This is my point: the great irony of this thing is that while we take a million things on faith, we jump into our own apps and write a bunch of tests that scream out the message 'I can't even convince myself that the number I just assigned to this int is still there!' It's a form of madness, brought on by a lack of real things to test. When you have things that really need to be tested, you will probably know. Until then, keep an open mind about the quickest way to make something work; that's my beatnik/Apple-ratchit philosophy on these matters…