Most of the processes actually involved in CD are familiar to experienced programmers and administrators. The topic has collected a number of specialized terms, though, to help identify or emphasize a particular perspective:
Software developers and managers use “agile” to emphasize “customer satisfaction through early and continuous delivery of useful software … close, daily cooperation between business people and developers …” and related principles. The two quotes here are among the twelve principles behind the original Agile Manifesto, written in 2001. Note that “continuous delivery” appears at the very beginning of the manifesto, and therefore of agile culture.
Several Refcardz are already available that explain various aspects of Agile development:
- Design Patterns
- Getting Started With Lean Software Development
- Software Configuration Management Patterns
The initialism CD stands for either “continuous delivery” or “continuous deployment.” The practices remain novel enough that practitioners don’t yet use the labels consistently. Generally, though, “continuous delivery” means that all the steps from programmer commit through integration tests are automated. “Continuous deployment” does everything “continuous delivery” does, plus automatically delivers the latest tested code changes to customers.
The understanding of the words automation, validation, and delivery has changed over the years. In 2001, continuous deployment might have implied a daily release to a customer. In 2018, it generally means that end users see an update seconds or minutes after a programmer checks in new code.
“Continuous integration” (CI) is the first step of CD. CI automatically builds or compiles a usable application from the latest code changes in a software repository and ensures that concurrent changes do not conflict with one another. Grady Booch first wrote about CI in 1991, when the emphasis was on consolidating the output of an entire programming team into a single authoritative source image. Discussions of CI now generally focus on optimizations or automations valuable for large bodies of code with complex build procedures.
A container virtualizes an application at the level of the operating system. While Docker is the market leader, other container technologies are in widespread use, including LXC, OpenVZ, and rkt.
Traditional computing differentiates development from operations. Different people work in the two domains, and communication between them is limited.
DevOps aims to bring these perspectives together. In a traditional model, if a problem turns up in production, operations has the responsibility for diagnosing and correcting it, often without help. DevOps aims for problems to turn up sooner, before they reach production; when a symptom does appear in production, programmers pitch in to find and fix it. Presumably, this leads to programmers with better perspective on what can go wrong in operations, and to operators with more understanding of how programmers put applications together.
Docker is a Linux-based container implementation in widespread use.
Programmers often focus on a minimal example of a technique or tool under the label of a “Hello, world.” The name comes from a model introductory program in the C language that appeared in an influential book, printed “Hello, world” to the screen, and halted. As over-simplified as that might seem, experience proved that even such a minimal program usefully illustrates important principles, and therefore is good practice for beginners.
Digital solutions that emphasize ease of use, minimalism, or simplicity are often called “lightweight.” The contrast is with “heavy” technologies that give more capabilities at the cost of more restrictions, computing resources, or procedural investment.
The software development lifecycle (SDLC) is the comprehensive process an organization uses to plan, budget, develop, validate, maintain, replace, enhance, support, and eventually retire specific software assets.
If we built houses like software programs, we wouldn’t tell the roofers how big the house is until after the electrical wiring had passed inspection, let alone give them a chance to purchase their materials.
Shifting left rearranges the work of different computing specialists so that they can do more in parallel and in cooperation, rather than in isolation.
“Shift left” aims to begin acceptance testing far earlier, and more nimbly: left-shifted testing is far more likely to yield insight that informs fundamental architectural improvements. Left-shifted testing also involves deeper and more frequent communication between specialists in testing, operations, and development.
While “shift left” was first applied to testing, it now applies with even more force to operations and other segments of the SDLC.
The sequential development methodology is labeled “waterfall.” First, all requirements for a project are gathered and written down; then the completed requirements are delivered to architects, who design a solution. Programmers implement the design until they believe the application exhibits all necessary functionality. The completed application then passes into the hands of testers, who identify all the defects they can find. These handoffs continue through stages of correction, delivery, operation, and maintenance. Waterfall communication focuses on “deliverables,” with a minimum of interaction between teams or stages.