The Modern Software Factory Series, Part II: Platform Tooling
In this article, Patrick Gryzan describes the types of tooling needed to achieve Continuous Delivery, starting with what he considers platform tools and concepts.
On to Tooling
In the previous post, The Modern Software Factory Series, Part I: Overview and Process, we discussed the modern approach to creating applications and why people are adopting Agile enterprise frameworks. We also talked about the term “Continuous Delivery.” Continuous Delivery implies that both process and tooling are needed to achieve the desired effect of creating or improving application functionality in short time periods while reliably releasing quality products anytime.
In this post, we’ll start to understand the types of tooling needed to achieve Continuous Delivery, starting with what I consider platform tools and concepts. What the heck does that mean? These are concepts and tools that provide the basis of how you’re going to approach Continuous Delivery. Now, this can get pretty hairy because sometimes vendors combine many concepts and tools into a service. I guess the best place to start is with some definitions that will give context to some of the platform tools and concepts.
In the previous post, I defined four key terms: Agile, SDLC, DevOps, and Continuous Delivery. This list is a little longer, but don’t go all “my eyes glazed over” on me. I’ve done my best to keep the verbiage to the point. Most of the buzzwords refer to a category of tooling needed and its purpose. As I mentioned above, we’ll need some high-level knowledge of the main ones to understand the effect of combining them into services:
- Data Center: A facility used to house computer systems and associated components such as telecommunications and storage systems.
- Virtual Machine: In computing, an emulation of a physical computer system. VMs are based on computer architectures and provide the functionality of a physical computer.
- Server: In computing, a computer program or a device that provides functionality for other programs or devices called clients. In our case, we’re going to be referring to a server as a physical or virtual machine.
- Computer network: A computer network or data network is a telecommunications network that allows computers to exchange data. I’m referring to a computer network when I write “network” in the post.
- Cloud Computing: A type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand.
- Software Containers: An operating-system-level virtualization method in which the kernel allows multiple isolated user-space instances to exist instead of just one. This is the technology Docker is built on.
- Application Framework: An application framework consists of a software framework used by software developers to implement the standard structure of an application.
- Microservices: A specialization of and implementation approach for service-oriented architectures (SOA) used to build flexible, independently deployable software systems. In a microservice architecture, services should have a small granularity, and the protocols should be lightweight.
I know, I know — I either thoroughly confused you or insulted your intelligence. I’ll start using some of this stuff in context and I think you’ll be reading my mind.
Time for an Image
This might be a little bit of an eye chart; hopefully, you can see the labels. We’ll be using Image A as the base for building out how the tools fit within the SDLC and how they might interact with each other. Consider the platform tools as the foundation of building something like a house. You need a stable, reliable base on which to build the structure.
Platform tools tend to align with either application-centric or infrastructure-centric strategies. There are a couple that kind of fall between the two groupings, but for the most part, the major players fall into one strategy or the other. Most of the buzzwords that we used are represented. However, there are a few additional acronyms thrown in there as well. We’ll get to those in a bit.
So what does application or infrastructure-centric really mean? Well, in the old days, we used to decide on an application and then buy a dedicated physical server to run it. This would be considered an infrastructure-centric strategy. Physical servers and manual management everywhere. While this is a solid way to go, administrators realized that the utilization of the physical servers was low and expensive. This caused a consolidation of servers into racks and optimization of server usage. However, it caused the infrastructure to become brittle and inflexible. Changes were difficult.
Then, along came virtual servers. Hooray! Now, through software, we could run multiple virtual servers on one physical machine, increasing the efficiency of the infrastructure. Unfortunately, this caused an explosion of virtual servers, which made them very difficult to manage, hence the advent of the application-centric infrastructure strategy: orchestration software dynamically distributes application workloads across many servers.
This would be considered an elastic infrastructure, meaning as your load increases, you can automatically add more computing power from the server farm as needed. Good examples of elastic infrastructures are many of the current cloud computing platforms that exist today. Most companies that I work with use some combination of public and private or on-premise cloud platforms.
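The scaling decision at the heart of an elastic infrastructure can be reduced to a simple calculation. This is a toy sketch, not any cloud provider's actual autoscaling API; the function name, thresholds, and capacity figures are all hypothetical:

```python
import math

def desired_instances(current_load, capacity_per_instance,
                      min_instances=1, max_instances=10):
    """Return how many servers the farm needs for the current load.

    Hypothetical illustration of elastic scaling: divide the load by what
    one server can handle, round up, and clamp to the farm's limits.
    """
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))

# 450 requests/s at 100 requests/s per server -> 5 servers
print(desired_instances(450, 100))
```

Real platforms layer cooldown periods, health checks, and billing granularity on top of this, but the core idea is the same: capacity follows load automatically.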
I know what you’re thinking: “Okay, but what are IaaS, PaaS, and SaaS? They’re spelled weirdly.”
Everything as a Service (EaaS)
Everything as a service? Is this like string theory? Will I open another dimension into vaporware? All valid, but incorrect, assumptions and questions. The funny thing is that I once owned a company called Vaporware, which is a whole different topic. Anyway, when you’re talking about <X> as a service in terms of the software factory, you’re describing who is managing what. Image B seems to be a commonly used visual cue to describe how a typical software stack is managed. On the left, you have a stack that we all started with. On the right, you have common applications that you configure for your business needs but are controlled by the vendor. A typical SaaS application may be something simple like Gmail or complex like Salesforce. Either way, you configure the application to meet your business needs and the vendor does the rest.
In the middle, you have infrastructure as a service (IaaS) and platform as a service (PaaS). I tend to think of IaaS as base services like AWS or Azure. Once you have your contract, you can spin up nearly any operating system and use it like the server sitting under your desk. I personally like that kind of control, as it allows me to customize every aspect of the application, but it does require more setup time and attention to detail. For small applications/teams, this isn’t a bad way to go.
PaaS, on the other hand, offers a development paradigm for complicated and large numbers of teams. The concept is that you take away all of the setup-type stuff a typical developer may go through in order to create an application. For instance, connecting to a database may require connection information about the database, server, and protocol for connection. With PaaS, the database may be a preconfigured resource for all developers and you simply plug it in. It's like electricity that is always available. Pretty cool! While this has many advantages, there are some drawbacks. Let’s suppose you want to use a language that the vendor doesn’t support, or you want to change out a base technology like containers; you may be out of luck or find it difficult at best.
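The database example above can be sketched in code. This contrast is illustrative only: the `DATABASE_URL` convention and the function names are assumptions, and `sqlite3` stands in for whatever database driver a real platform would provide:

```python
import os
import sqlite3

# IaaS-style: the developer supplies every connection detail by hand.
def connect_manually(host, port, user, password, dbname):
    # In a real app this would be a driver call like psycopg2.connect(...);
    # an in-memory sqlite3 database stands in for it here.
    return sqlite3.connect(":memory:")

# PaaS-style: the platform injects a ready-made connection string and the
# developer simply plugs it in, like electricity that is always available.
# DATABASE_URL is a common convention, but the exact name varies by platform.
def connect_via_platform():
    url = os.environ.get("DATABASE_URL", ":memory:")
    return sqlite3.connect(url)

conn = connect_via_platform()
conn.execute("CREATE TABLE employees (name TEXT)")
conn.execute("INSERT INTO employees VALUES ('Ada')")
```

The PaaS version has nothing to configure, which is exactly the appeal; the trade-off, as noted above, is that you get only the resources the vendor has chosen to preconfigure.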
I personally prefer using IaaS in combination with the tools needed to continuously deliver applications instead of relying on a vendor partner to provide PaaS. In essence, I like to roll my own. The reason is that this is the stuff that could be considered your “secret sauce” or “intellectual capital,” which may give your company a competitive edge. It gives the company the opportunity to adopt the best tools as they become available, and it allows us to remove tools that have become stagnant in order to reduce costs.
I Want to Ship Something Small
I wanted to finish this post with a few platform tooling topics that are very hot right now, namely, software containers, microservices, and serverless computing.
If you’re in development, then chances are you’ve heard the term “Docker.” What is this and why does it matter? Docker is a platform tool that runs applications in isolated containers sharing the host operating system’s kernel, so each one occupies a very small, efficient space. So what? Well, this gives you the ability to break down your complex application into many small, manageable chunks that run independently. That means it’s easier for you to make changes and update the system. The name Docker came from the idea of shipping containers. Each shipping container has the same dimensions, so they can be easily put together and loaded or released. The concept is that you create little pieces of application functionality in well-defined containers and then move them easily through your SDLC into production.
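To make the shipping-container analogy concrete, here is what one of those well-defined containers might look like as a Dockerfile. This is a minimal, hypothetical sketch; the base image and `app.py` are placeholders, not anything from this series:

```dockerfile
# Hypothetical container for one small piece of application functionality.
# The base image and app file are placeholders for illustration only.
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

Because every container is packaged the same way, the same build-ship-run steps work for each piece of the application, which is what makes moving them through the SDLC so uniform.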
This is a cool concept, but it may require some re-engineering of your application architecture. The latest trend is to design a consumable service architecture or microservices. Each microservice may have one job to do and that’s it. Let’s suppose the microservice lists employees by anniversary date. The microservice may hit a few other microservices to collate the information and then respond to the client with the list. The advantage of this approach is that you can make changes rapidly without breaking the application architecture.
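The employees-by-anniversary example above can be sketched as follows. The two downstream services are stubbed as plain functions here; in a real microservice architecture they would be HTTP calls to independently deployed services, and all the names and dates are hypothetical:

```python
from datetime import date

# Stand-ins for two downstream microservices. In a real system these would
# be lightweight HTTP calls to independently deployable services.
def employee_service():
    return [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]

def hr_service(employee_id):
    hire_dates = {1: date(2015, 6, 1), 2: date(2012, 3, 15)}
    return hire_dates[employee_id]

def employees_by_anniversary():
    """This microservice's one job: collate the data and sort by hire date."""
    employees = employee_service()
    for e in employees:
        e["hired"] = hr_service(e["id"])
    return sorted(employees, key=lambda e: e["hired"])
```

Because each service has exactly one job, you can change how `hr_service` stores its dates, or redeploy it entirely, without touching the listing service at all.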
Finally, taking this concept to the next step and breaking down the problem even further is serverless computing. This concept gets rid of the need to manage servers. What? Well, a service does a few basic things: it gets the request, does the business logic, and then responds. Why would you need an entire OS to do these simple things? Good question. In serverless computing, the application flow is controlled by the client, and only functional pieces are executed on web-based, third-party systems. Currently, traditional servers still handle most of the long-running or heavy application processing.
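The get-the-request, do-the-logic, respond shape above is all a serverless function is. The sketch below is loosely modeled on the handler signature used by providers such as AWS Lambda, but the event fields and response shape here are illustrative assumptions, not any provider's exact contract:

```python
# A minimal serverless-style function: no server code, no OS to manage.
# The event fields and response format are hypothetical placeholders.
def handler(event, context=None):
    # Get the request...
    name = event.get("name", "world")
    # ...do the business logic, then respond.
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

The provider invokes this function on demand and bills only for its execution time; the functional piece is all you write.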
Stay tuned for the third installment in this series, in which I’ll cover planning strategies.
Opinions expressed by DZone contributors are their own.