5 Ways to do Source Control Really, Really Wrong
Last week, with the help of the good folks at Red Gate, I set up a little competition to give away 5 licenses of their very excellent SQL Source Control product. The entry criterion was simple – share your most painful experience that could have been avoided by using source control.
Many painful stories emerged but I thought it worth sharing and commenting on the 5 winners as I’ve felt this pain time and again in years gone by. So enjoy these stories and hopefully take away some nuggets of knowledge that might help you avoid the same pitfalls in the future.
To the winners: hopefully those licenses will help ease the painful memories of mistakes gone by! I’ll be in touch with your prizes shortly.
1. Source control via CTRL-Z
The first story comes courtesy of MyChickenNinja and it literally hurts my head to read! In this particular case, apps were sabotaged by an ex-employee which is always going to be painful, but at least there are normally multiple means of restoring the code, if not the data.
The first problem was backups that were 3 weeks old, which in itself is a lesson – are your environments actually being backed up? We’ll come back to that in other stories shortly. The real pearler, though, was the quasi-source control by way of CTRL-Z:
They ran their code, which they updated constantly, as their production environment and used Ctrl-Z to undo bad changes.
Now this is seriously mind-boggling – what happens if the app that did the edit is shut down? Or the PC turned off? And hang on – did he just say they used the environment in which they edited code as their production environment?! UNDO IS NOT SOURCE CONTROL!
2. Multiple databases and integration woes
The second story is via Brandon Thompson, who had the extreme displeasure of working in an environment with many copies of the database under active development and no real integration process beyond hard yakka. This meant dealing with multiple database backups, including a bunch located offshore:
Our development team was offshore, so they also had their own set of databases that I never saw, but they would send over change files to apply to our development environments.
What I find most painful about this is the manual labour involved simply to get everyone playing nice together. This is effort that doesn’t go into innovation or any value-added activity such as building new features; it’s effort that results in nothing to show beyond making code that was already written actually work!
Source control is one of those greases which simply keep everything working smoothly together. It’s the same sort of thing with a continuous integration environment and the ability to automatically deploy. These are the “bread and butter” of software development and must be the foundation on which any successful team writes code.
3. Depending on untested backups
Next up is Barry Anderson, who ran into a pain many of us have experienced before: not being able to restore from backup. In fact, in Barry’s case backups hadn’t even been happening for the last few months, which in itself was bad, but clearly those who depended on the backups also weren’t aware of such a critical oversight.
Surely there was a good reason for such an oversight? Barry explains:
Our manager (not the storage team's(!)) then told us that there was neither the time nor the space to test restores!!!
Taking backups is one thing, but being able to restore from those backups is just as important. I recently had an experience configuring a number of new environments where backups were supposed to be happening but simply weren’t. Only by insisting on a dry-run restore was the problem surfaced. For many other people, the problem surfaces when they’re actually trying to recover from a serious data loss. Test your backup and restore, folks, and trust nobody!
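That “dry-run restore” idea is worth automating. Here’s a minimal sketch in Python of the principle: back something up, actually restore it somewhere else, and compare checksums rather than assuming the restore worked. Plain file copies stand in for a real database engine’s backup and restore commands, and the file names and function names are invented for illustration.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path):
    """Hash a file so the original and the restored copy can be compared."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def restore_verified(source, backup_dir, restore_dir):
    """Back up a file, restore it, and prove the restored bytes match the source.

    The copies stand in for a real engine's backup/restore commands;
    the point is that the restore step is exercised, not assumed.
    """
    backup = shutil.copy2(source, Path(backup_dir) / Path(source).name)
    restored = shutil.copy2(backup, Path(restore_dir) / Path(source).name)
    return sha256(source) == sha256(restored)

# Dry-run restore against a throwaway file.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "backups").mkdir()
    (root / "restores").mkdir()
    data = root / "orders.db"
    data.write_bytes(b"important production data")
    restore_ok = restore_verified(data, root / "backups", root / "restores")
```

The exact mechanics will differ per platform, but any scheduled backup job can carry a verification step like this; if the comparison ever fails, you find out now rather than mid-disaster.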
4. The human merge tool
From Graham Sutherland comes a story of a man doing a machine’s job:
We only had a few devs, and each had a copy of the whole project on their disk. Every time a change was to be pushed, we downloaded a copy of the lead dev's source (his live development code) and used diff to identify the changes, then updated them manually. Line by line. All by hand.
As bizarre as this might sound, I’ve actually seen this done before when source control did exist! Perplexed as to why only one offshore team member appeared to be committing, a broken English discussion ensued during which the rationale was explained: the lead developer needed to verify the other devs’ work before it was committed. A one-way “discussion” in clear, direct Australian English quickly followed.
This is really analogous to that earlier point about having multiple databases to integrate; we have the technology to solve these problems! Every time a human engages in a labour-intensive, repetitive process during software development, you really have to stop and ask, “Is there a better way?” There usually is.
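To underline how cheap the machine’s version of that job is: the entire line-by-line comparison Graham’s team did by hand is one standard-library call away. A small sketch (the file contents and names are made up for illustration; any real VCS does this and the subsequent merge for you):

```python
import difflib

# The lead dev's "live" copy and a teammate's locally changed copy.
lead_dev = [
    "def total(items):",
    "    return sum(items)",
]
teammate = [
    "def total(items):",
    "    # skip missing values rather than crashing",
    "    return sum(i for i in items if i is not None)",
]

# One library call produces the line-by-line comparison the team did by hand.
diff = list(difflib.unified_diff(lead_dev, teammate,
                                 fromfile="lead_dev/app.py",
                                 tofile="teammate/app.py",
                                 lineterm=""))
for line in diff:
    print(line)
```

And that’s the do-it-yourself version; a source control system goes further and applies the changes for you, only asking a human to arbitrate genuine conflicts.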
5. Cut and paste versioning
Robin Vessey’s story resonated with me because it’s the most common form of quasi-VCS going: cutting (or copying) and pasting into new locations. Often the pattern involves duplicating the directory holding the code, then naming the copy with a date or some other identifier to indicate a point in time.
In Robin’s case, he was trying to move a directory structure across a network:
It is simple but effective. I did a cut and paste of an entire directory tree, everything, across the network ... the files left my side, they never arrived on the other side, I still don't know why.
I must admit, I approach any cut and paste file action extremely cautiously because I’ve seen this happen so many times before even on a local file system let alone across the idiosyncrasies of a network.
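If you must move a tree by hand, at least verify the result instead of trusting the progress bar. Here’s a minimal Python sketch of that caution: fingerprint every file on each side and compare, so a file that “never arrived” is caught immediately. The directory layout and names here are invented for illustration.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def tree_digest(root):
    """Map each file's relative path to its SHA-256 so two trees can be compared."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(root).rglob("*"))
        if p.is_file()
    }

def copies_match(src, dst):
    """True only if every file arrived intact, byte for byte."""
    return tree_digest(src) == tree_digest(dst)

# Dry run: copy a throwaway tree, then simulate a file lost in transit.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "project"
    (src / "sql").mkdir(parents=True)
    (src / "sql" / "schema.sql").write_text("CREATE TABLE orders (id INT);")
    dst = Path(tmp) / "copy"
    shutil.copytree(src, dst)
    intact = copies_match(src, dst)
    (dst / "sql" / "schema.sql").unlink()  # a file that "never arrived"
    broken = copies_match(src, dst)
```

Of course, the real fix is to push through a VCS in the first place – then the network transfer is verified for you and the original is never “cut” at all.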
The sting in the tail of Robin’s story was that there were no backups being taken to restore from because they’d “stopped backing up a while ago because we didn't have any more space”. Anyone see a pattern emerging here?!
If working without everything being under source control is not both a scary thought and a distant memory – STOP IT RIGHT NOW! Seriously folks, we’re well and truly beyond this as a profession and there are now so many great VCS products, hosted services and integration tools available there’s really no excuse not to have all your code – including your database – under source control.
Most of these are free and some come with very minimal financial and effort outlays. If you’re told there aren’t the resources to do this (or to test your backups, for that matter), then someone just doesn’t get it. Honestly, these tools are no more negotiable than giving developers a chair to sit on, and the five stories above (plus the ones from the runners-up) should serve as evidence of that.
Published at DZone with permission of Troy Hunt, DZone MVB. See the original article here.