Elastic Provisioning of New Environments
A high-functioning Agile development team is going to be able to provision new environments casually. Those could be permanent environments, but are more likely to be temporary and for a discrete purpose. A two-week performance-measurement project could be one.
Why not use a pre-existing environment though? There are two major reasons, and one subtle one:
You want to be able to rebuild the environment at will, with exactly the same versions of binaries within it, as well as the same configuration and starting data. Indeed, part way through the period you’ve set aside for the project, you may want to move the versions of everything back to something previously deployed, for the sake of a comparison. Adept teams will want to go back and forth casually.
You want the environment to be dedicated to whatever it was set up for. Specifically, you don’t want other people using it for activity that does not support that purpose. Dedication also means TCP/IP separation from sibling environments, and immunity to cross-environment configuration accidents, because firewalls turn those accidents into fast failures.
Pre-existing environments can often end up being hand crafted
What I mean by that is that despite initially being scripted, an environment (say “QA2”) may have received a series of manual configuration changes that, as time has passed, are essentially undocumented. Hand-crafting is the antithesis of elastic environment provisioning.
Getting to elastic environments
Here are the things you’ll have to check off to be able to deliver elastic environment provisioning. Some should be obvious.
Use source control for the creation of binaries.
I personally mean Trunk-Based Development, of course. I’ll even go further and say one trunk for all the software you build, even if it is service-separated or there are different distributables. I’ll get back to that.
Use source control for environment configuration.
This is totally separate from the source control that holds your Java/Python/Ruby (etc.) source. Ideally there is one branch per environment, so you can casually see the diffs between two arbitrary environments. You’d need to be authorized to see those branches, of course, and regular developers may not be.
Quite often enterprises keep configuration (XML, YAML, properties) in the same source control as the source for the binary pieces. I think this is a mistake, as it means you have environment names codified in file names: dev.properties, qa2.properties. You might also design some form of extensibility into this, and it all gets a bit second-class compared to actual branches and the tooling around them. I blogged before on that specifically.
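The per-environment diffing above can be sketched in miniature. This is a minimal, hypothetical example assuming simple key=value .properties content; a real setup would diff whole branches with its source-control tooling rather than in application code:

```python
# Sketch: spotting configuration drift between two environments,
# assuming key=value .properties files (contents here are invented).

def parse_properties(text):
    """Parse key=value lines, ignoring blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def diff_configs(a, b):
    """Return keys whose values differ (or that exist in only one env)."""
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

qa2 = parse_properties("db.host=qa2-db.internal\npool.size=10")
perf = parse_properties("db.host=perf-db.internal\npool.size=50")
print(diff_configs(qa2, perf))
```

With one branch per environment, the equivalent of `diff_configs` is a single diff between two branch heads, which is what makes drift visible at a glance.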
Source control for the actual ‘create-environment’ scripts.
This set of scripts will be owned by release engineering, although it may be used and maintained by DevOps folks too. While it could be a separate repo, you could make a case for co-locating it with the Java/Python/Ruby (etc.) source that makes the binaries (whatever that means for your programming languages), although there are downsides to that concerning bugs in the scripts themselves.
Whether you are using physical or virtual machines, the scripts start from a baseline machine image. On top of that they make all app/service/library/package installs. Puppet and others are ideal for this. If you’re using VMs, then you could well be scripting the allocation of new images from a central provisioning system (OpenStack, EC2, etc.).
As well as creating environments, you’ll have a script or two to decommission them.
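As a rough sketch of what a create-environment driver might orchestrate: the `Provisioner` class and the service manifest below are hypothetical stand-ins, not a real OpenStack or EC2 API, and the real work (cloning a base image, running Puppet) happens where the comments indicate:

```python
# Sketch of a create/decommission driver. The manifest pins each
# service to a known version so the environment is rebuildable.
# Service names and versions are invented for the example.

MANIFEST = {
    "checkout-service": "1.4.2",
    "catalog-service": "2.0.1",
    "storefront-webapp": "3.3.0",
}

class Provisioner:
    """Hypothetical wrapper around your VM-allocation tooling."""

    def __init__(self, env_name):
        self.env_name = env_name
        self.machines = []

    def create_vm(self, service, version):
        vm = f"{self.env_name}-{service}"
        # Real code: clone the baseline image, then install
        # `service` at `version` (e.g. via Puppet).
        self.machines.append(vm)
        return vm

    def decommission(self):
        # Real code: release the VMs back to the central pool.
        self.machines.clear()

def create_environment(name, manifest):
    env = Provisioner(name)
    for service, version in manifest.items():
        env.create_vm(service, version)
    return env

env = create_environment("perf-test", MANIFEST)
print(env.machines)
```

The decommission script is the mirror image: because the manifest fully describes the environment, tearing it down and recreating it later at the same versions is routine rather than scary.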
So where does trunk based development come in?
If you have a production stack that comprises 100 micro-services and a few public human-facing HTTP web applications, then you need to have environment scripts that can provision all of those in a single go. That’s true even if you have 120 VMs for these dissimilar processes in an unscaled configuration (typical of modern non-production environments).
In order to have repeatability when deploying to that environment, you are going to want all 100 microservices and associated webapps at known versions. Being able to check out a single release-style branch and build binaries from there is much simpler than having 100+ checkouts of separate repos, and stressing about whether you have them all, or have them all at the right version/tag/branch. Remember, you want to be able to push out older releases (and expect them to work) for the sake of comparisons, and that’s hard too with N source control repos involved.
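To illustrate why a single set of pinned versions matters, here is a minimal sketch of rolling a whole environment back to an earlier release. The release tags, services, and versions are invented for the example; the point is that one lookup yields a complete, consistent stack:

```python
# Sketch: redeploying an environment at a previously released manifest.
# With a single trunk, one release tag maps to one complete set of
# service versions, so rollback is one lookup (tags are hypothetical).

RELEASES = {
    "rel-2016-03": {"checkout-service": "1.3.9", "catalog-service": "1.9.4"},
    "rel-2016-04": {"checkout-service": "1.4.2", "catalog-service": "2.0.1"},
}

def redeploy(env_name, release_tag):
    """Return the (service, version) deploy actions for a release."""
    manifest = RELEASES[release_tag]
    actions = []
    for service, version in sorted(manifest.items()):
        # Real code: push the binary for `version` of `service`
        # to the hosts in `env_name`.
        actions.append((service, version))
    return actions

for service, version in redeploy("perf-test", "rel-2016-03"):
    print(f"perf-test: {service} -> {version}")
```

Contrast this with N repos: the same rollback would mean finding the right tag in each of N places and hoping they were cut consistently.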
Things there can only be one of
There are sometimes things in your enterprise that there can only be one of. LDAP/ActiveDirectory could be one such thing. A DNS server for an obvious testing subdomain such as *.testing.mycompany.com is another.
Things too expensive to have N of
You may have licenses for only a few instances of one thing. One for production and one for development are common. Oracle and F5 are typically in this category. CIOs should strive to sign deals with vendors that have a price for production deployments and $0 for non-production deployments, however many of those there are.
Things too specialist to script.
F5 figures here again: the Agile/open-source industry has not reverse-engineered the know-how to make these things easy to script without involving highly specialist skills.
Published at DZone with permission of Paul Hammant, DZone MVB. See the original article here.