Thoughts On the Software Crisis
Do you agree that we are still in the early stages of understanding how to produce high-quality software in a controllable environment?
If you’re an experienced developer or a seasoned computer user, you might recall a time when having 128MB of RAM was considered a luxury. For those who don’t know, are too young, or started using computers more recently, let me put this into perspective: the original Max Payne video game required just 128MB of RAM, as did GTA: Vice City. A computer needed only that much memory to run the game alongside all other programs.
But that’s not the point. The point is that these games were, and still are, considered significant milestones in gaming. Yes, their graphics are now dated. Yes, they don’t offer the same level of immersion or the rich gameplay mechanics found in modern titles. However, from an algorithmic perspective, they are not fundamentally different from modern games like Assassin’s Creed or Red Dead Redemption.
The latter, by the way, requires 12GB of RAM and 120GB of storage. That is roughly 100 times more RAM than Max Payne or GTA Vice City needed (12 GB ÷ 128 MB ≈ 96). And that's a lot.
This isn’t just about games; it applies to nearly all the software we use daily. It’s no longer surprising to see even basic applications, like a messaging app, consuming 1GB of RAM. But why is this happening? Why would a messenger require ten times more memory than an entire open-world game?
The Problem: Software Crisis
The problem isn’t new — it’s been around for more than 50 years. In fact, there’s even a dedicated Wikipedia article about it: "Software Crisis." Just imagine: as far back as the 1960s, people were already struggling with the gap between hardware capabilities and our ability to produce effective software. It’s a significant chapter in the history of software development, yet it remains surprisingly under-discussed.
Without getting into too much detail, two major NATO Software Engineering Conferences were held in 1968 and 1969 to address these issues. These conferences led to the formalization of terms like "software engineering," "software acceptance testing," and others that have since become commonplace. Acceptance testing, in particular, had already been used extensively in aerospace and defense systems before gaining broader recognition.
Just imagine: this issue was being highlighted before development of the UNIX operating system had even begun!
Despite the billions and billions of lines of code written since then, we still see developers disgruntled with certain approaches, programming languages, libraries, and so on. Statements like "JavaScript sucks!", "You don't need Kubernetes!", and "Agile is a waste of time!" are often heard from frustrated developers. These claims usually come generously bundled with advice and personal preferences, such as "You should always use X instead of Y," or "Stay away from X, or you'll regret it."
Another popular recommendation is to apply certain design patterns and principles. Their proponents are sure that if you follow them, your code will be cleaner, more maintainable, and more readable for others, and your releases more stable. I agree that there is some truth to these claims. Design patterns, in particular, have genuinely improved the "language" developers use to talk to each other. For instance, mentioning an "adapter" to another developer immediately conveys a shared understanding of its purpose, even across very different backgrounds. Without this shared vocabulary, explaining such concepts would require much more effort.
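To make that shared vocabulary concrete, here is a minimal adapter sketch in TypeScript. The names (`Logger`, `LegacyLogger`, `LegacyLoggerAdapter`) are hypothetical and chosen only for illustration; the point is the shape of the pattern, not any particular library.

```typescript
// The interface the rest of the application expects to work with.
interface Logger {
  log(message: string): void;
}

// A legacy class with an incompatible method signature (hypothetical example).
class LegacyLogger {
  writeLine(severity: number, text: string): void {
    console.log(`[${severity}] ${text}`);
  }
}

// The adapter wraps the legacy class and exposes the expected interface,
// translating calls without modifying the legacy code itself.
class LegacyLoggerAdapter implements Logger {
  constructor(private readonly legacy: LegacyLogger) {}

  log(message: string): void {
    this.legacy.writeLine(1, message);
  }
}

// Client code depends only on Logger and never sees LegacyLogger directly.
const logger: Logger = new LegacyLoggerAdapter(new LegacyLogger());
logger.log("adapter in action");
```

Saying "wrap it in an adapter" communicates this entire structure in a few words — which is exactly the value of the shared vocabulary.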
But why, despite having an abundance of design patterns, concepts, architectures, and best practices, are we still struggling to deliver high-quality software on time?
Thoughts About "Where Does Bad Code Come From?"
In his interesting video, "Where Does Bad Code Come From?", Casey Muratori argues that there is still a great deal of software left to be made, precisely because of how low-quality today's software is.
It seems that while hardware has advanced significantly, our software practices haven't advanced nearly as much. Over the past 30 years, we’ve seen the rise of countless new programming languages, sophisticated frameworks, engines, libraries, and third-party services. Yet, for an individual developer, programming is still as challenging as it was back then. A developer needs to read the existing code (often a substantial amount), modify it or write a new piece of software, execute it, debug it, and, finally, ship it.
While we have improved the execution (with modern programming languages) and the debugging experience, and there are certainly a lot of tools to ship the code (thanks to CI pipelines and DevOps), the tasks of reading and writing the code remain major bottlenecks. The human brain can only process so much information, and these limitations are a significant obstacle in software development.
Referring back to Casey's video, he compares the programming process to navigation. It’s like starting at a known point with a defined final destination, but the journey itself is uncertain. You have to go through the entire process to reach the final point. You don't necessarily know what the thing would be in the end, but you will know once you get there. So the quality of the result becomes your reference point. The process itself is complex and usually full of unknowns.
We see real-life evidence of this all the time. Think about how often you read about a company that, midway through building a piece of software, decides to rewrite everything from scratch in a different language. This highlights how, during development, teams can become so lost and overwhelmed by challenges that even throwing all the work out the window and going back to the starting point seems like a good option.
Conclusion
I agree with Casey that we are still in the early stages of understanding how to produce high-quality software in a controllable environment. The key to achieving this is to focus on our abilities as humans, specifically how we read and write software, and our main focus should be on reducing the cognitive load of programming. That means producing software with an entirely different approach than we use today. Instead of reading and writing pieces of text in an editor, we should aim for a higher level of abstraction. Instead of dealing with raw text and stapling pieces of code together, we need an environment that lets us modify software in a predictable, controllable way aligned with the developer's intent.
At the same time, I don't think we will abandon editing raw text entirely; rather, we should be able to move between different levels of abstraction with ease. This is similar to how we moved from punched cards to assembly languages and then to high-level programming languages. It is still possible to drop down to machine code and write software at that level. For the vast majority of cases that would be a waste of time and energy, but what matters is the ability to move to a lower level of abstraction when needed.
In certain cases this ability is invaluable, and in the future it will be a core skill for every developer.