The programmer's information diet
The concept of an information diet is a response to the overwhelming quantity of knowledge and content available on the Internet. Like calories in the Western world, information can easily exceed a person's capacity when consumed without limits.
The Internet is a wonderful place for matching producers of content - such as the author of this article - with consumers, whether they are browsing it now or will find it through future Google searches. However, reading every article that seems interesting is unsustainable, comparable to buying every book you encounter; indeed, that is what happened to many of us when we first got access to the Internet and the plethora of free e-books on it dedicated to programming topics.
The following can be applied to any source of content that comes from the endless web stream: articles and blogs, e-books (free and not), essays and so on.
The concept of book porn originated with Jeff Atwood: book porn is all the content describing something you could never do in real life, either because it belongs to a very different career (such as reading about Python APIs while you're working in a .NET shop) or because it's out of reach for most of us (papers describing the AWS architecture built on behalf of Obama for America).
Consider every article a possible instance of content porn and decide whether you're reading it because it's cool or for your own skill advancement.
Every piece of technical information has a half-life, since it becomes less relevant with time, going from cool to outdated or obsolete (in some cases being superseded by similar documentation). Even for ascending technologies such as MongoDB, the same information is likely to change with time and new versions (new hashing capabilities, the aggregation framework introducing an alternative to map-reduce...)
I don't mean that the obsolescence of information follows an exponential model, since obsolescence is not even a binary state (ask our friends who code Cobol). But consider the half-life of a topic before studying it too deeply - I knew Zend Framework 1 inside out just 3 years ago, while now it has been deprecated in favor of a new version and this knowledge is only useful to legacy code scavengers. Meanwhile, the value of knowing Unix tools such as grep, find, and xargs has only increased.
It does not matter how cool a technology is, just how antifragile it is, in Taleb's terms. Technology lifetimes follow a power law, which means the longer something has been around, the longer it is likely to exist in the future. This is why C has a higher probability than Java and PHP of being around in 40 years, while in turn Java and PHP have a higher survival probability than Node.js.
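The survival rule above is sometimes called the Lindy effect: for non-perishable things like technologies, expected remaining lifetime grows with current age. A minimal sketch of the heuristic (the ages below are rough illustrative figures, not measured data):

```python
# Lindy-effect heuristic: a technology that has already survived N years
# can be expected to survive roughly N more. The ages are illustrative
# assumptions, not real statistics.

def lindy_expected_remaining_years(age_years):
    """Expect a non-perishable technology to last about as long
    again as it has already existed."""
    return age_years

technologies = {"C": 42, "Java": 19, "PHP": 19, "Node.js": 5}

for name, age in sorted(technologies.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ~{lindy_expected_remaining_years(age)} more years expected")
```

Under this heuristic the oldest technology always gets the longest forecast, which is exactly the ordering claimed above for C, Java/PHP, and Node.js.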
Of course, the antifragility of technologies is usually inversely correlated with their specificity. Learning a new language is good for your long-term programming skills; learning a framework or a library is less useful if you don't immediately apply those skills; learning a testing framework or a build automation system (highly standardized tools) in a language different from the ones you're using now is a total waste of time.
The only time you really master something is when you apply it in a production environment. That means technologies and practices go through a development cycle in your head:
Researching → Reading and studying → Applying in katas → Applying at work
By no means are you finished after applying a practice once, but there can be bottlenecks to your improvement in the earlier stages of this pipeline. For example, no matter how many books you buy and articles you send to your Kindle, you have a finite amount of time and energy to study and exercise your new skills.
What happens when there are capacity limits (for good reasons) in the earlier stages of the pipeline? As Goldratt would say, subordinate everything to the constraint of the studying phase. Practically, this means keeping a disciplined WIP limit in the researching phase: I decided not to buy or download more than one book at a time.
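That WIP limit can be made concrete as a small discipline: a reading queue that refuses to start anything new while something is still in progress. A hypothetical sketch (the ReadingQueue class and its limit are illustrative, not a real tool):

```python
from typing import List, Optional

class ReadingQueue:
    """A hypothetical WIP-limited reading queue: you may not pull a new
    book or long article until the current one is finished."""

    def __init__(self, wip_limit: int = 1):
        self.wip_limit = wip_limit
        self.in_progress: List[str] = []
        self.backlog: List[str] = []

    def add(self, title: str) -> None:
        # New finds go to the backlog, never straight into reading.
        self.backlog.append(title)

    def start_next(self) -> Optional[str]:
        # Pull from the backlog only if we are under the WIP limit.
        if len(self.in_progress) >= self.wip_limit or not self.backlog:
            return None
        title = self.backlog.pop(0)
        self.in_progress.append(title)
        return title

    def finish(self, title: str) -> None:
        self.in_progress.remove(title)

queue = ReadingQueue(wip_limit=1)
queue.add("Antifragile")
queue.add("The Goal")
print(queue.start_next())   # starts "Antifragile"
print(queue.start_next())   # None: WIP limit reached
queue.finish("Antifragile")
print(queue.start_next())   # only now can "The Goal" start
```

The backlog still captures everything interesting, so nothing is lost; the limit only constrains how much is actively consuming your attention.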
Context switches in studying are much like the ones that happen in software development: they increase cycle time, making your new skills late to the party, and consume energy that could go into focusing on a single topic.
Batches, via RSS
Taleb's book Antifragile says, again: information that is most valuable after 1 day and much less valuable after 7 days will be garbage after 30 days, so you don't want to waste your time reading it. While batches are inefficient for cycle time in a team environment, they are ideal for filtering out everything that will die out on its own before it has to be absorbed (most news).
That's one of the reasons an iteration cannot change its priorities before two weeks have elapsed.
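The batch filter can be sketched as a toy decay model: if you only open your feed once a month, anything whose value has already decayed below a threshold by batch day never has to be read at all. The half-lives and the 0.1 threshold are illustrative assumptions, not claims from Taleb:

```python
from datetime import date

def value_at(published: date, read_on: date, half_life_days: float) -> float:
    """Toy model: an item's value halves every `half_life_days` days."""
    age = (read_on - published).days
    return 0.5 ** (age / half_life_days)

batch_day = date(2014, 6, 30)   # you open the feed once a month
feed = [
    # (title, published, assumed half-life in days)
    ("Framework X.Y.Z released", date(2014, 6, 2), 7.0),        # ephemeral news
    ("Introduction to grep, find and xargs", date(2014, 6, 2), 365.0),  # evergreen
]

# Anything already decayed below the threshold by batch day has died
# out on its own and is skipped without ever being read.
worth_reading = [title for title, pub, half_life in feed
                 if value_at(pub, batch_day, half_life) > 0.1]
print(worth_reading)   # only the evergreen item survives the batch filter
```

The point is not the particular numbers but the mechanism: batching turns the passage of time itself into a free, zero-effort filter.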
RSS works very well for organizing batches, as it has good recall of information (not missing out on your favorite blogs) while letting you tag huge sources of articles with a "High volume" label. Twitter tries this with lists, to no avail.
Beware of automated RSS streams, such as Hacker News or a full DZone Links tag; they have such a high volume that they will drown out single authors. Applying the "High volume" label to them, or unsubscribing from them, is the best solution.
Opinions expressed by DZone contributors are their own.