While large segments of the community this debate is meant to enlighten don’t actually care about the ongoing Kubernetes vs. CoreOS vs. Mesosphere shenanigans, they do care about what happens to their business once they adopt containers. They want to know whether it’s all hot air, or whether there’s a path to improving the performance of their applications and achieving better utilization rates from their infrastructure through adoption.
The Way to San Jose
If you want to be elastic, and flexible for both financial and technological reasons, the cloud has created a means of achieving this like no other. Adding a containerization layer to your cloud infrastructure does indeed make it all about performance: you’re breaking application components into small parts that can run independently, faster, with minimal overhead, and in a reusable manner.
While we’ve done a good job at figuring out how to run containerized applications once they’re in the cloud with the help of the aforementioned schedulers, the part where we need to get code off dev machines, into containers, and to the cloud is where the process starts to fragment. There’s a whole series of dependencies you need to take into consideration as part of that delivery pipeline, which we’ve previously documented to raise awareness. And there are even more considerations to think about if you’re stitching together a whole host of open source tools to achieve your containerization goals.
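To make that fragmentation concrete, here is a minimal dry-run sketch of the kind of pipeline those dependencies live in, where each stage depends on the one before it. All names here are hypothetical: the registry URL, application name, and stage commands are stand-ins for whatever your own toolchain uses.

```shell
#!/bin/sh
# Sketch of a container delivery pipeline (hypothetical names throughout).
# This only *prints* the commands each stage would run; a real pipeline
# would execute them and fail fast on any error.
set -e

REGISTRY="registry.example.com"            # assumption: your private registry
APP="myapp"                                # assumption: your application name
VERSION="1.0.0"                            # in practice, often a git SHA
TAG="${REGISTRY}/${APP}:${VERSION}"

# Each stage depends on the previous one succeeding:
echo "1. build  -> docker build -t ${TAG} ."
echo "2. test   -> docker run --rm ${TAG} make test"
echo "3. push   -> docker push ${TAG}"
echo "4. deploy -> kubectl set image deployment/${APP} ${APP}=${TAG}"
```

A break anywhere in that chain — an unbuildable image, a failing test, a registry outage — blocks every stage after it, which is why stitching the stages together from unrelated tools is where teams tend to feel the pain.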
How Open Is Your Open Source?
Our CEO Khash covered this brilliantly in his InfoWorld blog article. An open source container build tool or orchestration engine may still be a solution for you. It really comes down to the resources you have and whether the economics of using open source add up.
Would you want to use open source when it’s more expensive to build, orchestrate, and deploy your containers yourself than to buy from a vendor offering the exact same functionality for a small monthly fee? If someone else has put in the work, and the product does what you want for less than it would cost your organization to build and maintain a solution in-house, that should be reason enough to buy it.
What It All Means
A recent IDC report entitled “DevOps and the Cost of Downtime: Fortune 1000 Best Practice Metrics Quantified” found that, on average, infrastructure failure issues cost large enterprises $100,000 per hour. Critical application failures exact a far steeper toll, from $500,000 to $1 million per hour. If you’re spending time and energy building your container solution on open source, it makes sense to be informed about what that choice could mean, considering:
- Open source rarely has the same investments as commercial software.
- There’s usually less effort given to usability, documentation, and even development (it’s not in the vendor’s interest!).
- Expensive technical resources burning their time fighting with installations and climbing the learning curve.
- Random changes between releases, which, in a multi-layered setup, can make it difficult to track which components need to be updated or checked.
- Questionable "community" support, where popular software support forums can end up being the blind leading the blind: users trading suggestions, guesses, rumors, and hearsay.