CD Pipeline Implementation: Tracer Bullet (Trail Marker)
On my current project we're developing an essentially greenfield application, albeit one that integrates a fair bit of data managed in existing systems. This goes hand in hand with the implementation of a new hosting infrastructure, which will be used for other applications once it is established. We want to have a solid continuous delivery pipeline to support the team developing the application, as well as to support the development and maintenance of the infrastructure platform.
In order to get the team moving quickly, we've kicked this all off using what we've called a "tracer bullet" (or "trail marker", for a less violent image). The idea is to get the simplest implementation of a pipeline in place, prioritizing a fully working skeleton that stretches across the full path to production over fully featured, final-design functionality for each stage of the pipeline.
Our goal is to get a "hello world" application using our initial technology stack into a source code repository, and be able to push changes to it through the core stages of a pipeline into a placeholder production environment. This sets the stage for the design and implementation of the pipeline, infrastructure, and application itself to evolve in conjunction.
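The skeleton can be sketched as a trivial script. The stage names and comments below are illustrative placeholders, not our actual tooling; the real pipeline wires these stages into whatever CI server the project settles on:

```shell
#!/bin/sh
# Tracer-bullet pipeline sketch: the simplest end-to-end skeleton,
# with each stage stubbed out. Stage names are hypothetical.
set -e

run_stage() {
  echo "stage: $1"
}

run_stage "commit"   # compile the hello-world app and run its unit tests
run_stage "publish"  # package and store the build artifact
run_stage "deploy"   # push the artifact into the placeholder production environment
```

Each stub gets replaced with real behavior as the project learns what it needs, without ever breaking the end-to-end path.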
This tracer bullet approach is clearly useful in our situation, where the application and infrastructure are both new. But it's also very useful when starting a new application within an existing IT organization and infrastructure, since it forces everyone to come together at the start of the project to work out the process and tooling for the path to production, rather than leaving it until the end.
The tracer bullet is more difficult when creating a pipeline from scratch for an existing application and infrastructure. In these situations, both application and infrastructure may need considerable work in order to automate deployment, configuration, and testing. Even here, though, it's probably best to take each change made and apply it to the full length of the path to production, rather than waiting until the be-all and end-all system has been completely implemented.
When planning and implementing the tracer bullet, we tried to keep three goals in mind as the priority for the exercise.
- Get the team productive. We want the team to be routinely getting properly tested functionality into the application and in front of stakeholders for review as quickly as possible.
- Prove the path to production. We want to understand the requirements, constraints, and challenges of getting our application live as early as possible. This means involving everyone who will take part in going live, and using the same infrastructure, processes, and people that will be used for the real launch, so that issues are surfaced and addressed.
- Put the skeleton in place. We want to have the bare bones of the application, infrastructure, and delivery pipeline in place, so that we can evolve their design and implementation based on what we learn in actually using them.
Things can and should be made simple to start out with. Throughout the software development project, changes are continuously pushed into production, multiple times every week, proving the process and identifying what needs to be added and improved. By the time the software is feature complete, there is little or no work needed to go live, other than DNS changes and publicizing the new software.
"Do's" and "Don'ts"
Do start as simply as you can
Don't implement things that aren't needed to get the simple, end-to-end pipeline in place. If you find yourself bogged down implementing some part of the tracer bullet pipeline, stop and ask yourself whether there's something simpler you can do, coming back to that harder part once things are running. On my current project we may need a clever unattended provisioning system to frequently rebuild environments according to the Phoenix Server pattern. However, there are a number of issues around managing private keys, IP addresses, and DNS entries which make this a potential yak shave, so for our tracer bullet we're just using the Chef knife-rackspace plugin.
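With the plugin, standing up an environment node is a single command. A sketch of the sort of invocation involved follows; the server name is hypothetical, the image and flavor IDs are placeholders, and the exact flags may vary by plugin version:

```shell
# Create a node via the knife-rackspace plugin (illustrative sketch;
# image and flavor IDs are placeholders, not values from our project).
knife rackspace server create \
  --server-name pipeline-env-1 \
  --image <ubuntu-image-id> \
  --flavor <flavor-id> \
  --run-list 'role[base]'
```

This is exactly the kind of "good enough for now" tradeoff the tracer bullet encourages: one command per node, with the clever unattended provisioning deferred until it's actually needed.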
Don't take expensive shortcuts
The flip side of starting simply is not to take shortcuts which will cost you later. Each time you make a tradeoff in order to get the tracer bullet pipeline in place quickly, make sure it's a positive tradeoff. Keep track of those tasks you're leaving for later.
Examples of false tradeoffs are leaving out testing, basic security (e.g. leaving default vendor passwords in place), and repeatability of configuration and deployment. Often these are things which actually make your work quicker and more assured; without automated testing, every change you make may introduce problems that will cost you days to track down later on.
It's also often the case that things which feel like they may be a lot of work are actually quite simple for a new project. For my current project, we could have manually created our pipeline environments, but decided to make sure every server can be torn down and rebuilt from scratch using Chef cookbooks. Since our environments are very simple - stock Ubuntu plus a JDK install and we're good to go - this was far easier than it would have been later on, once we've got a more complicated platform in place.
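To give a sense of how little those cookbooks actually do at this stage, here is roughly the shell equivalent of what they converge. The package name is an assumption about the stock Ubuntu OpenJDK package of the time, not taken from our actual cookbooks:

```shell
# Rough shell equivalent of our environment build (illustrative sketch;
# the real build is a Chef cookbook, and the package name is assumed).
apt-get update
apt-get install -y openjdk-7-jdk
```

When the entire environment definition is this small, making it repeatable from day one costs almost nothing.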
Don't worry too much about tool selection
Many organizations are in the habit of turning the selection of tools and technologies into complicated projects in their own right. This comes from a belief that once a tool is chosen, switching to something else will be very expensive. This is pretty clearly a self-fulfilling prophecy. Choose a reasonable set of tools to start with, ones that don't create major barriers to getting the pipeline in place, and be ready to switch them out as you learn about how they work in the context of your project.
Do expect your design to change
Put your tracer bullet in place fully expecting that the choices you make for its architecture, technology, design, and workflow will all change. This doesn't just apply to the pipeline, but to the infrastructure and application as well. Whatever decisions you make up front will need to be evaluated once you've got working software that you can test and use. Taking the attitude that these early choices will change later lowers the stakes of making those decisions, which in turn makes changing them less fraught. It's a virtuous circle that encourages learning and adaptation.
Don't relax the go-live constraints
It's tempting to make it easy to get pre-live releases into the production environment, waiting until launch is close to impose the tighter restrictions required for "real" use. This is a bad idea. The sooner the real-world constraints are in place, the quicker the issues those constraints cause will become visible. Once these issues are visible, you can implement the systems, processes, and tooling to deal with them, ensuring that you can routinely and easily release software that is secure, compliant, and stable.
Do involve everyone from the start
Another thing often left until the end is bringing in the people who will be involved in releasing and supporting the software. This is a mistake. In siloed organizations, where software design and development are done by a group separate from operations, the support people have deep insight into the requirements for making the operation and use of the software reliable and cost effective.
Involving them from the start and throughout the development process is the most effective way to build supportability into the software. When release time comes, handover becomes trivial, because the support team have been supporting the application through its development.
Bringing release and support teams in just before release means their requirements are introduced when the project is nearly finished, which forces a choice between delaying the release in order to fix the issues, or else releasing software which is difficult and/or expensive to support.
Doing what's right for the project and team
The question of what to include in the tracer bullet and what to build in once the project is up and running depends on the needs of the project and the knowledge of the team. On my current project, we found it easy to get a repeatable server build in place with Chef configuration. But we did this with a number of shortcuts.
- We're using the out-of-the-box server templates from our cloud vendor (Rackspace), even though we'll probably want to roll our own eventually.
- We started out using chef-solo (with knife-solo), even though we planned to use chef-server. This was largely down to knowledge: I've done a few smaller projects with knife-solo, and have some scripts and things ready to use, but haven't used chef-server. Now that we're migrating to chef-server, I'm thinking it would have been wiser to start with the Opscode hosted chef-server. Moving from the hosted server to our own would have been easier than moving from solo to server.
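The knife-solo workflow that got us moving boils down to two commands. The host below is a placeholder for one of our Rackspace nodes, shown purely for illustration:

```shell
# knife-solo workflow (commands provided by the knife-solo gem;
# the host is a placeholder, not a real node from our project).
knife solo prepare ubuntu@<server-ip>   # bootstraps Chef onto the node
knife solo cook ubuntu@<server-ip>      # uploads cookbooks and runs chef-solo
```

With so little ceremony, knife-solo was a perfectly good tracer-bullet choice, even though we knew a server-based setup was the eventual destination.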
Starting out with a tracer bullet approach to our pipeline has paid off. A week after starting development we have been able to demonstrate working code to our stakeholders. This in turn has made it easier to consider user testing, and perhaps even a beta release, far sooner than had originally been considered feasible.
Published at DZone with permission of Kief Morris, DZone MVB.