Continuous Delivery - DevOps (Hybrid cloud) Architecture
Continuous delivery is becoming more popular among developers. Evgeni Kostadinov thinks all developers should know about QA and operations to understand CD.
I can't help noticing that lately more and more companies are turning their attention to the concept of Continuous Delivery. Since I'm into automation, I really like the idea that everything revolves around it. In simple terms: automation, continuous integration, continuous delivery (and continuous deployment). The pros are described in so many articles that I don't want to turn this into just another one. To me, it's preferable because of its quality-centric approach.
I've searched and read a lot of resources on both of the topics already mentioned. Over time, the idea of mixing them became really attractive. Knowing your processes is just not enough these days. So on the whiteboard below I've sketched an abstraction level (architecture) that I find to be a good candidate.
Let's first begin with the main phases of Continuous Delivery, which I've extended just a little bit in order to achieve the level of granularity I need.
Yes, I admit it's not the best drawing (schema) in the world, but it's easy to see what's in it. The vertical boxes represent the multi-threaded execution of the tests; to be more specific, it's closer to processor parallelism. Since we rely on the success of all tests, we expect them to pass and, by doing so, achieve faster execution (and feedback). If they fail, nothing is lost: we've found a bug. The functional tests are run sequentially, with the main idea of preserving the computing resources needed for the other test types. Moreover, the less expensive, faster, and simpler ones are executed first.
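That ordering can be sketched in a few lines. This is a minimal illustration, not a real pipeline: the suite names are hypothetical, and `run_suite` stands in for whatever test runner the harness actually invokes.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical suite runner; in a real pipeline this would shell out to the
# actual test runner and return the pass/fail result.
def run_suite(name):
    return (name, True)

# Functional suites run sequentially, cheapest and fastest first...
sequential = ["functional-smoke", "functional-full"]
# ...which frees computing resources for the other test types in parallel.
parallel = ["performance", "uat"]

results = {}
for name in sequential:
    suite, passed = run_suite(name)
    results[suite] = passed
    if not passed:
        break  # fail fast: a bug was found, skip the expensive stages
else:
    with ThreadPoolExecutor(max_workers=len(parallel)) as pool:
        for suite, passed in pool.map(run_suite, parallel):
            results[suite] = passed

pipeline_green = all(results.values())
```

The `break` is the "nothing is lost" case from above: a failing cheap suite stops the run before the expensive stages consume any resources.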
It's a decision for every team, at the company level, whether successful changes should be promoted further in the pipeline or not. If you have enough computing power and environments (the ideal virtualized test lab), run everything concurrently ("in parallel").
Enough talking...this is the big picture:
The CI server is split into one Master and two Slaves, each carrying out its own role according to the needs of the flow. The Master is responsible for updating the (latest) test scripts in the Test Harness, and it also enforces synchronization in the Repository. Its last task is to update and push the jobs of Slave #1 (the Build Manager), so the build can be executed once all the code is ready. This CI slave has an orchestration responsibility as well: after getting the latest version from the Repository, it starts a job and notifies the Constructors (discussed later). The last task for our slave manager is to promote the build to the cloud via the Distribution Server. Finally, Slave #2 plays its core role: regression in the Test Harness.
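The division of labour can be condensed into an ordered event log. This is only a sketch of the sequencing described above; the step strings are illustrative, not real CI jobs.

```python
# Hypothetical event log capturing the responsibilities of the CI master
# and its two slaves, in the order they fire during one pipeline run.
log = []

def master():
    log.append("master: update test scripts in test harness")
    log.append("master: enforce repository synchronization")
    log.append("master: push updated jobs to slave #1")

def slave1_build_manager():
    log.append("slave1: get latest version from repository")
    log.append("slave1: start build job and notify constructors")
    log.append("slave1: promote build to cloud via distribution server")

def slave2_regression():
    log.append("slave2: run regression in test harness")

for stage in (master, slave1_build_manager, slave2_regression):
    stage()
```

Reading the log top to bottom gives the same story as the diagram: the Master prepares, Slave #1 builds and promotes, Slave #2 regresses.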
The purpose and responsibilities of the Repository are vital and yet trivial: the management of changes.
The Module Factory can be considered both a repository for configuration scripts (DB, mail, or web servers) and the automation of tasks around them. These are vital for the Test Harness's execution role, e.g. cleaning up the DB for each new regression test run.
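One such automation task might look like the following. This is a minimal sketch using a throwaway in-memory SQLite database and a made-up `orders` table; a real Module Factory script would target the shared DB, mail, or web servers instead.

```python
import sqlite3

# Minimal sketch of a Module Factory task: rebuild the test database from a
# known schema so every regression run starts from a clean state.
def reset_test_db():
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        DROP TABLE IF EXISTS orders;
        CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    """)
    conn.commit()
    return conn

conn = reset_test_db()
row_count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
# the harness always begins with an empty, known schema
```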
The Constructors, after being notified by the CI Server Slave #1 (the Build Manager), start the process of building the "programmable infrastructure" / "infrastructure as code". The goal is clear: pass it on to the Distribution Server.
The Distribution Server's main responsibility is to help the build get promoted to its final destinations: the cloud and the Test Harness. More precisely, it provides a layer that serves as a container for the build content, which is needed mostly by Service Providers #1 and #2. It's important to note here that Service Provider #1 receives a package with [infrastructure as code] + [dev code], Service Provider #2 receives a package with [infrastructure as code] + [QA code], and the Test Harness receives the full package: [infrastructure as code] + [dev code] + [QA code].
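Those packaging rules are simple enough to state as data. The sketch below mirrors the three destinations from the text; the key names are my own shorthand, not anything the Distribution Server actually exposes.

```python
# Hypothetical packaging rules: each destination receives a different slice
# of the build content from the Distribution Server.
BUILD = {
    "infra": "infrastructure-as-code",
    "dev": "dev-code",
    "qa": "qa-code",
}

DESTINATIONS = {
    "service_provider_1": ["infra", "dev"],        # performance testing
    "service_provider_2": ["infra", "qa"],         # UAT
    "test_harness":       ["infra", "dev", "qa"],  # full package
}

def package_for(destination):
    return {part: BUILD[part] for part in DESTINATIONS[destination]}
```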
The Test Harness serves both the Testing and the Manual phases. For the first, it is a test engine by itself; for the second, it supplies the parameters for the remote execution and setup on Service Provider #2. Here (in the Test Harness) functional testing is faster, since all the code and services are "local".
Service Provider #1 works with the package provided by the Distribution Server. The main purpose of this cloud computing service is to carry out the performance testing. This provider sets up a production clone environment (via its own storage service) and elastic computing. The point of that step is to pave the way for a third-party distributed load-testing utility that creates many elastic computing instances to load test the targets (web applications).
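The fan-out idea behind that utility can be sketched locally. Here each thread stands in for one elastic load-generator instance, and `hit_target` is a stub for an HTTP request against the production clone; the node and request counts are invented.

```python
from concurrent.futures import ThreadPoolExecutor

# Each worker thread plays the role of one elastic compute instance in the
# distributed load test; hit_target stubs out the real HTTP request.
def hit_target(request_id):
    return ("ok", request_id)

NUM_NODES = 4        # hypothetical number of elastic instances
TOTAL_REQUESTS = 100

with ThreadPoolExecutor(max_workers=NUM_NODES) as pool:
    responses = list(pool.map(hit_target, range(TOTAL_REQUESTS)))

success_rate = sum(1 for status, _ in responses if status == "ok") / TOTAL_REQUESTS
```

The real value of the elastic setup is that `NUM_NODES` is just a number: scaling the load up means requesting more instances, not provisioning more hardware.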
Service Provider #2 also works with the package provided by the Distribution Server, but with the purpose of carrying out the UAT. It provides a round trip from our test process (e.g. Cucumber, or something else using a web UI automation client) to the provider, where a remote server and a browser run, and then back to our machine, where our web server and database live. All this communication goes through an SSH tunnel, which of course slows down test execution. But once you consider the issue of cross-browser testing, this starts to look a lot more attractive. During our continuous integration (CI) testing, we like to test against several different browsers. In the past this has meant complex setups involving virtual machines running Windows and other OSes. With Service Provider #2, switching OS and browser is simply a matter of tweaking a parameter. For CI builds, speed is not of the utmost importance, so we are willing to accept a speed penalty in exchange for not maintaining complex and fragile build setups.
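"Tweaking a parameter" amounts to generating combinations and handing each one to the remote end. The sketch below builds such a matrix; the browser and platform names are illustrative and do not reflect any particular provider's API.

```python
import itertools

# Hypothetical capability matrix: with a remote provider, switching OS and
# browser is just another parameter combination, not a new VM to maintain.
BROWSERS = ["firefox", "chrome", "edge"]
PLATFORMS = ["windows", "linux", "macos"]

def capability_matrix(browsers=BROWSERS, platforms=PLATFORMS):
    return [
        {"browser": b, "platform": p}
        for b, p in itertools.product(browsers, platforms)
    ]

# each combination would be handed to the remote server over the SSH tunnel
runs = capability_matrix()
```

Adding a browser or an OS grows the matrix by one list entry, which is exactly the maintenance saving over a rack of hand-built VMs.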
Opinions expressed by DZone contributors are their own.