Why It Will Always Be Hard To Write Useful Software
History teaches us how hard it is to write software that stays useful. It has little to do with code, so AI is not going to fix it.
I spent my previous two posts on the difference between efficient and effective software engineering, and on how that difference ties in with accidental versus essential complexity. I am curious how AI will change the programming profession in the coming decades, but I am critical of any hype-infused predictions for the short term. AI won’t dream up software that remains valuable over decades. That remains the truly hard problem. It can help us become more efficient, but it does a poor job at effectiveness.
Better make that an unreliable job. Effectiveness is about building the right thing: a thing that is aligned with our human interests and doesn’t harm us. Self-driving cars, designed not to crash into other cars or pedestrians, are unreliable at best. Such safeguards are easy to specify but fiendishly hard to implement. And it gets even harder. Once we have millions of them on the road, some of them will make life-or-death decisions every day, choosing between the lesser of two evils. The machine will have to judge what’s best for other humans, in a split second and with Vulcan detachment. The needs of the many outweigh the needs of the one, it will argue. When it comes to such existential decisions, we should remain firmly in the driver’s seat to shape the kind of machine future we want.
Current AI is much better equipped to handle efficiency improvements. It can swap out alternatives, weigh their relative merits, and suggest the combination that leads to the most efficient solution. But the smarter it gets, the less we should trust it with controversial topics that require judgment. Because things might take a scary turn. Nick Bostrom’s famous paperclip maximizer is an amusing thought experiment with an important warning: AI will optimize for whatever you instruct it. If that happens to be making paperclips and provided it is infinitely powerful and infinitely selfless, it will strip entire galaxies of their metal to make more useless stationery.
Even if AI were to become self-conscious, with or without a dark agenda, it would still be alien, and by definition so (it’s in the word artificial). Isaac Asimov foresaw that a human creation with individual agency should have some hardcoded safeguards in place. His Three Laws of Robotics predated the ENIAC by only three years. But he couldn’t have predicted the evil genius who adds private exceptions to the “do no harm” principle through a sneaky firmware upgrade, as in the first RoboCop movie.
Enough gloomy gazing in the palantír. What I do predict (having no stock in any of the major stakeholders) is that the art of programming will transform into the art of expressing what you need clearly and unambiguously. Developers will become AI-savvy business analysts, accustomed to conversing with an AI in the ultimate high-level programming language: English. It will always build working software, and if we’re lucky, that software will even be useful.
Working Software Is Not Good Enough
Isn’t it strange that the Agile Manifesto called for working software? As if broken software were ever an acceptable alternative! Is it too much to ask that prompt-generated code also be useful and valuable? Yes, it probably is. The gap between working and valuable software is huge, because value is intangible and unpredictable. Perfectly fine software can lose its relevance through no fault of your own, and in ways that no upgrade can fix. Here are a few examples.
This is not the first time I have mentioned the long-forgotten open-source project Chandler. Its rocky path to version 1.0 is beautifully told in Scott Rosenberg’s 2007 book Dreaming in Code. It’s an enduring reminder that the best intentions, a team of dedicated top-notch developers, and a generous sponsor (Mitch Kapor, who created Lotus 1-2-3) are no guarantee of success.
Chandler set out to be a free alternative to Microsoft Outlook and Exchange. It promised a radically different user experience. It was going to disrupt how we handled messages, agenda items, and to-do lists. And it meant to do so in a desktop app, communicating through a peer-to-peer protocol. Power to the people!
But the team had taken too many wrong turns on their architectural roadmap. Like Icarus, they flew too close to the sun, and the world caught up with them. More powerful browser features made a Python-based desktop app a poor choice. Cheap and easy hosting of your own server removed the need for a peer-to-peer protocol, a design choice that had unleashed a torrent of accidental complexity. All of those could have been remedied if the community had wanted to. But it didn’t. The essential problem lay in the user experience. The ideas were too radical; they were not what the average office worker needed. I haven’t seen any of them implemented in other products (but I’ll gladly stand corrected). People still use mail and agendas the way they did in 1995, only now on their phones and without beveled corners.
The Unplanned Obsolescence of GWT
Keep Coding for Coding’s Sake
Let none of this discourage you from writing code, by the way. Serious software doesn’t need to be effective in a commercial sense, or to have any practical benefit at all. I’m talking about amateur open source. I have written software that I’m proud of, but that had no business plan, no roadmap, and no motivation other than my own education and enjoyment. It was effective to the extent that it taught me new concepts, but I had zero appetite for my own dog food. There are many such projects on GitHub. I mean no disrespect; I speak from personal experience. There’s nothing wrong with coding for coding’s sake, but it’s like playing in a band that never performs for an audience: hard to keep up.