Evolution of a programmer
As a software developer, it's common to learn new practices every day. Although there are jokes about how, the more a programmer ages, the higher his line count climbs even to accomplish simple tasks, this process usually results in an overall improvement. People who start using source control don't go back; those who start using distributed version control don't go back to centralized systems; and so on.
The issue with learning supposed best practices is that they are not best by definition, but only better than the ones we were using before. Continuing the example of source control systems, Subversion is an improvement over CVS, but Git surpasses both (or, as Linus Torvalds says, tarballs are superior to CVS and Git is superior to tarballs... a matter of taste.) It's common for programmers like you and me to encounter a new tool or a new methodology and think it's the best thing since sliced bread, but in the majority of cases this is far from the truth.
The most plausible model of this improvement process features different stages, where to arrive at stage N you usually have to pass through all stages M < N. It's not that the upper stages are somehow more difficult: some of them are a simplification of the lower ones; but a typical programmer won't see the need to grow to stage N until he has arrived at stage N - 1 and experienced its limitations.
The example of Subversion and Git is enlightening. Back in the 1990s, CVS was all the rage and open source projects relied on it for collaboration (the Eclipse Foundation still uses it today.) It was a pain to use, as merges were complicated, and to compute the differences introduced in a file the client had to communicate with the CVS server. And network speed towards remote hosts wasn't as high as today's...
Then came Subversion, with its motto CVS done right. In the Subversion development model, you have a local, hidden copy of the revision you're working on. The result is that svn diff does not take ages as cvs diff did: disk space is cheap today, and doubling the space occupied on my machine is better than having to wait for a server on the other side of the world (as was the case with open source projects hosted on SourceForge) just for a diff. For the average programmer, Subversion is (was?) beautiful, and it was the closest thing to a time machine for code.
But then, with disk space becoming even cheaper, came Git. In Git, you do not have just a local copy of the current revision to perform diffs: you have a local copy of the entire repository. Seriously: the first operation to start contributing to a project is git clone. For a while, I considered the Git movement something that would eventually fade or stabilize; instead, more and more open source projects are switching to Git. I guess it is the next stage.
A junior programmer probably won't see Git as a savior, and he will start from Subversion (I hope not from CVS.) But eventually, he will grow to the next stage. I wonder what stage comes after Git and distributed systems in source control management, but I can't see it for now, as I'm still living in the Git stage and see the next ones only as buzzwords.
However, what I've said with regard to source control systems is often true for other practices we employ every day in coding, and progress towards some stages in one field may not imply similar progress in other ones (you can use CVS and still practice Test-Driven Development, although the overall result may not be very nice.) I have collected here some typical evolutions that occur in object-oriented programmers. Follow a hypothetical programmer in his progress towards enlightenment...
First stage: he does not comment his code. Comments are not necessary to get a working application, right? So why waste time over something the compiler just strips away? The flow is so simple. Of course, after two weeks he no longer understands what the code does.
Second stage: he comments everything, and although this helps other programmers comprehend the code, his comments are often redundant and inevitably fall out of sync with the code.
Third stage: he writes self-documenting code and makes use of docblocks for inline API documentation. Variables, data types, and routines have meaningful names.
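As a sketch of that third stage, here is a hypothetical Python routine (names and figures invented for illustration) whose identifiers and docblock carry the documentation, so no line-by-line comments are needed:

```python
def monthly_interest(balance: float, annual_rate: float) -> float:
    """Return one month's interest on a balance.

    The docblock documents the contract; the names document the
    intent, so the body needs no explanatory comments.
    """
    MONTHS_PER_YEAR = 12
    return balance * annual_rate / MONTHS_PER_YEAR
```

Compare this with `def calc(b, r): return b * r / 12  # divide by 12`: the comment in the latter only restates the code, while here the meaning survives without any comment at all.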
First stage: he exposes public properties on all his classes. Why write a method when you can simply access them directly?
Second stage: he plainly declares that public properties are wrong and fixes the situation by using getters and setters over private variables. Although this lets him intercept accesses and modifications of the private properties, it still exposes data subject to change to other classes.
Third stage: he favors encapsulation and makes fields private by default, without any getters or setters. Fields become accessible only if the computations that involve them cannot be moved to the class itself.
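A minimal Python sketch of that third stage, using a hypothetical BankAccount class: rather than exposing the balance through a getter and letting callers compare it, the comparison itself is moved onto the class, so the field never leaves it.

```python
class BankAccount:
    """The balance is private and never handed out raw; the
    computations that need it live on the class itself."""

    def __init__(self, balance: float = 0.0) -> None:
        self._balance = balance  # private by convention, no setter

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def can_afford(self, amount: float) -> bool:
        # Instead of get_balance() plus an external comparison,
        # the question is asked directly of the object.
        return self._balance >= amount
```

This is the "Tell, Don't Ask" idea: callers tell the account what to do or ask it a question, instead of pulling its data out to operate on it elsewhere.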
First stage: he puts everything in the same class, which has a nice main() method.
Second stage: he factors out responsibilities into other classes, and creates collaborators in the constructor of the mediator object.
Third stage: he injects collaborators via init*() methods or via the constructor, and breaks dependencies via small and cohesive interfaces.
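A hedged Python sketch of that third stage (all class names here are invented for illustration): a service receives its collaborator through the constructor and depends only on a small, cohesive interface, so a fake implementation can be substituted in tests.

```python
from typing import Protocol


class Mailer(Protocol):
    """A small, cohesive interface: one method is all we depend on."""
    def send(self, recipient: str, body: str) -> None: ...


class WelcomeService:
    def __init__(self, mailer: Mailer) -> None:
        # The collaborator is injected, not constructed here, which
        # breaks the hard dependency on any concrete mailer.
        self._mailer = mailer

    def greet(self, recipient: str) -> None:
        self._mailer.send(recipient, "Welcome aboard!")


class RecordingMailer:
    """A fake collaborator: records messages instead of sending them."""
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, recipient: str, body: str) -> None:
        self.sent.append((recipient, body))
```

Had WelcomeService built its mailer internally (the second stage), no test could run it without real e-mail infrastructure; with injection, `WelcomeService(RecordingMailer())` works out of the box.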
First stage: all his methods are static, so he does not have to instantiate an object, an operation that is usually considered a threat to performance.
Second stage: he recognizes the static keyword as a marker for code artifacts that belong to the class instead of its objects, like factory methods and collections of instances.
Third stage: he recognizes the static keyword as a procedural one, and factors out collective methods into a first-class object.
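To illustrate in Python, where @staticmethod plays the role of the static keyword, here is a hypothetical before-and-after: a bag of static methods versus the same behavior factored into a first-class object that can be configured, passed around, and substituted.

```python
# Earlier stages: a static method that belongs to no object, with
# its rate hard-wired in.
class TaxUtils:
    @staticmethod
    def vat(amount: float) -> float:
        return amount * 0.20


# Third stage: the collective behavior becomes a first-class object,
# so the rate is configurable and the calculator can be injected or
# replaced like any other collaborator.
class TaxCalculator:
    def __init__(self, rate: float) -> None:
        self._rate = rate

    def vat(self, amount: float) -> float:
        return amount * self._rate
```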
First stage: he does not test. This code is so simple it should work.
Second stage: he performs manual testing of the application, covering edge cases. He even writes test main() methods sometimes.
Third stage: he automates testing in unit and acceptance suites via frameworks built for this goal. He gets to write the tests ahead of the code and uses them as a design tool.
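A minimal sketch of that third stage using Python's built-in unittest framework; the fizzbuzz function here is just a hypothetical unit under test, standing in for real application code whose shape was driven by the tests.

```python
import unittest


def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three(self) -> None:
        self.assertEqual("Fizz", fizzbuzz(9))

    def test_multiples_of_five(self) -> None:
        self.assertEqual("Buzz", fizzbuzz(10))

    def test_multiples_of_both(self) -> None:
        self.assertEqual("FizzBuzz", fizzbuzz(15))

    def test_everything_else(self) -> None:
        self.assertEqual("7", fizzbuzz(7))
```

Running `python -m unittest` discovers and executes the suite, so the whole check is one command instead of a manual session of poking at the application.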
Of course, some of these stages could be broken up into several sub-stages, but I wanted to keep the list short.
Feel free to add superior stages if you have already reached them. We all have something to learn from each other. :)
Opinions expressed by DZone contributors are their own.