More Efficient Software Development Means More Need for Devs
AI isn't replacing developers but transforming software development by dramatically increasing efficiency. It eliminates "blank page syndrome."
I think we need to be realistic when we talk about AI's role in software development. It's not "hit a button and generate code." For me, it's best positioned to maximize efficiency. It's not just a tool for getting rid of developers.
Whenever I start a new project, I tend to get stuck on "blank page syndrome." Half the battle in software development is gathering the 90% of the knowledge I need before I can start writing code, and that groundwork is a big part of engineering a solution.
Previously, researching libraries, understanding the code, and learning the best practices and gotchas of whatever I was about to build could take days. You'd pick a library, skim the docs without knowing which parts you actually needed, play with it for a bit, figure out what you didn't know, go back to the docs, and then hit a problem that isn't covered there at all. It's convoluted, frustrating, and very trial-and-error.
AI removes much of this guesswork. With iteration speeds hitting ridiculously high multipliers, it's now much easier to A/B (or even A/B/C/D) test: you can build a few implementations in parallel and see how your users or customers respond to them.
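To make that concrete, here's a minimal sketch of one common way to run several implementations side by side: deterministically bucketing each user into a variant. The variant names and the hashing choice are illustrative assumptions, not details from any particular product.

```python
import hashlib

# Hypothetical variants of the same feature, implemented in parallel.
VARIANTS = ["impl_a", "impl_b", "impl_c", "impl_d"]

def assign_variant(user_id: str) -> str:
    """Deterministically map a user to a variant so they always see the same one."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return VARIANTS[bucket]

if __name__ == "__main__":
    for uid in ["alice", "bob", "carol"]:
        print(uid, "->", assign_variant(uid))
```

Hashing the user ID (rather than picking at random on each request) keeps the experience stable for every user while still splitting traffic roughly evenly across the variants.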
For small teams like ours that didn't have the resources to experiment like that before AI, this has completely changed our approach. It supports what Sam Altman recently tweeted: when GPT-4 first came out, it cost more; now it's much cheaper, yet usage is far higher, so OpenAI makes much more money even at the lower price.
A New Level of Trust
The thing about speed in AI development is that it all comes down to trust: you can only move as fast as you're willing to trust the code that's being produced. I like to compare this to the self-driving car paradox. People will happily get in a car with a human driver who has, say, a 1 in 1,000 chance of making a mistake, yet offered a self-driving car with a 1 in 10,000 chance of making a mistake, many will still go with the human. The problem is accountability more than anything.
On the topic of accountability, for AI-generated code, my feeling is that you'd need something like 99% accuracy before you could just approve it without looking. That's a much higher bar than we set for human developers, but it makes sense. If you're generating billions of lines of code and you've now redirected all your human efforts to reviewing that, that doesn't seem like a true efficiency gain.
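To make the intuition concrete, here's an illustrative back-of-envelope calculation. Every number in it is an assumption picked for the example, not a measurement from this article.

```python
# Rough math on what "reviewing everything" costs at different accuracy levels.
total_changes = 1_000_000          # hypothetical AI-generated changes per year
review_minutes_per_change = 5      # hypothetical human review cost per change

for accuracy in (0.95, 0.99, 0.999):
    defective = total_changes * (1 - accuracy)
    review_hours = total_changes * review_minutes_per_change / 60
    print(f"accuracy={accuracy:.3f}: ~{defective:,.0f} bad changes, "
          f"~{review_hours:,.0f} review hours if every change is inspected")
```

Even at 99.9% per-change accuracy, a million generated changes still contain around a thousand bad ones, and inspecting every change costs the same roughly 83,000 review hours regardless of accuracy, which is why simply redirecting all human effort to review isn't a real efficiency gain.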
The bottleneck always comes back, I think, to context. AI can write code that looks perfect on the surface, but that doesn't always mean it's right. Both humans and AI make mistakes but in different ways. When a human developer has a particular blind spot or habit, you'll see that same pattern show up in their PRs — it's predictable and specific to that developer. But with AI, the issues come from fundamental limitations in how LLMs understand context and codebases. That's a much broader, systematic problem.
Reshaping the Developer Journey
"Developers" might not even be developers anymore. Daily, I see people on various social media channels who aren't even traditionally technical jumping into development. They might not be junior developers in the classic sense, but they're tech-adjacent people with ideas who can fire up Claude or ChatGPT, get a basic app running, and suddenly, they're making thousands in monthly recurring revenue. That just wasn't possible before.
Junior engineers are famous for diving straight into coding, and with the safety net of AI to proofread everything and hold their hand, that impulse often goes unchecked. They skip the thinking phase before writing any code, which is risky because juniors often don't even know what questions they should be asking.
Now, with AI, you can do what we call "rubber duck debugging," where you talk through your problem out loud like you're explaining it to a rubber duck. Except instead of a duck, you've got an AI that can actually talk back and help you understand the problem better.
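As a sketch of what that looks like in practice, here's a minimal "AI rubber duck" helper. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompts are placeholders, not recommendations from the article.

```python
# pip install openai; requires OPENAI_API_KEY to be set in the environment.
from openai import OpenAI

client = OpenAI()

def rubber_duck(problem_description: str) -> str:
    """Ask the model to probe the problem instead of solving it outright."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Act as a rubber duck: ask clarifying questions and point out "
                    "gaps in my reasoning. Do not write code for me."
                ),
            },
            {"role": "user", "content": problem_description},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(rubber_duck(
        "My background job queue keeps stalling under load, "
        "and I'm not sure where to start looking."
    ))
```

The point isn't the specific API; it's that the system prompt steers the model toward asking the questions a junior might not think to ask themselves.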
This new path from junior to senior developer isn't just about getting better at writing code. The real skill that takes you to the next level is being able to take a big, messy problem, whether it's technical or coming from the business side, and break it down into actionable pieces. When you look at top-level engineers, they might barely write any code at all. Their value comes from breaking down complex problems. That's the true meaning of being an engineer.
The Road Ahead
I get the pushback from experienced developers — just check out the ExperiencedDevs subreddit, and you'll see how negative the reaction gets whenever AI comes up. It's understandable. But this is no different from any other industry where veterans resist new approaches. There's always this mix of valid concerns and just resistance to change, especially around what skills are considered "the basics." The thing is, as the industry evolves, what we consider fundamental keeps changing, too.
I'm not saying AI-generated code is better than human code yet. But that shift is coming, especially with the recent breakthroughs in reinforcement learning. We're moving beyond just mimicking human developers — now we're heading toward something potentially superhuman. Instead of showing AI exactly what to do, you give it a goal and a way to measure its progress. Then, you let it experiment and figure things out on its own. Through this process, it discovers solutions that might never occur to a human developer.
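This isn't reinforcement learning proper, but here's a toy sketch of the underlying idea: define a reward, propose variations, and keep whatever scores better. The goal function and parameters are made up purely for illustration.

```python
import random

def reward(candidate: list[float]) -> float:
    """Made-up goal: get every value as close to 1.0 as possible (higher is better)."""
    return -sum((x - 1.0) ** 2 for x in candidate)

def optimize(steps: int = 1000) -> list[float]:
    # Start from a random guess, then experiment: mutate, score, keep improvements.
    best = [random.uniform(-5.0, 5.0) for _ in range(4)]
    for _ in range(steps):
        trial = [x + random.gauss(0.0, 0.1) for x in best]
        if reward(trial) > reward(best):
            best = trial
    return best

if __name__ == "__main__":
    solution = optimize()
    print("best candidate:", [round(x, 3) for x in solution])
    print("reward:", round(reward(solution), 4))
```

Nothing here tells the search what the answer looks like; it only gets a score to improve, which is the shift from imitating examples to optimizing against a measurable goal.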
We're not heading toward fewer developers — we're heading toward different types of development work.