DockerCon17: The Good, the Bad, and the Ugly
DockerCon is over, and while there was interesting news, like Visa forgoing VMs for containers on bare metal, the container management field still remains murky.
Now that DockerCon 2017 is over and a few days of reflection and introspection have passed, I wanted to share some candid thoughts on the conference in general. Of course, I could just summarize Docker announcements, like multi-stage builds, LinuxKit, and The Moby Project, but you could read about those in a plethora of articles and blogs about the event. Instead, I’m going to go a little deeper and share some concerning developments based on what I heard and saw at the show.
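For context on one of those announcements: multi-stage builds let a single Dockerfile use one image for compiling and a second, slimmer image for shipping, so the build toolchain never ends up in the final container. A minimal sketch (the Go app and image tags here are illustrative, not from any keynote demo):

```dockerfile
# Build stage: full Go toolchain, named "builder"
FROM golang:1.8 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: only the compiled binary is copied over
FROM alpine:3.5
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```

The `COPY --from=builder` instruction is the new piece: it pulls artifacts out of the earlier stage without inheriting its layers.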
First of all, the number of attendees was announced at 5,500. At DockerCon 2016, the announced attendance was over 4,000. Up until last year, the show had been growing 100% in attendance year over year, so seeing only a 37.5% increase was a bit disappointing. Interestingly enough, at one of the keynotes, I heard a bit of revisionist history when the 2016 attendee number was cited as 3,500. Hmmm… I can’t help but wonder if the container community’s overwhelming support for Kubernetes over Docker’s commercial products might be a factor.
I was also surprised by the demos at the keynotes. Tons of scripting, even the use of pre-existing scripts (“Let me just quickly cut and paste a Docker Compose file I already have…”) to show the value of Docker’s enterprise offerings. Where did that Docker Compose file come from, though? Do I need a Docker expert on every one of my dev teams? My enterprise environment is very complex, so how do I ensure image consistency across apps? What if something changes in the app, and how do I ensure changes to those scripts stay within my IT policy? Clearly, there is still a reliance on a developer-focused mechanism for delivering enterprise software, and that is a disconnect. There is no way that containers will be rolled out at scale by IT Ops in large enterprises (especially for existing applications) if the only mechanism for moving applications to containers and managing changes to them over time is Dockerfiles and Docker Compose files.
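To make the concern concrete, here is the kind of hand-written Compose file those demos leaned on (the service and image names below are hypothetical, invented for illustration):

```yaml
version: "3"
services:
  web:
    image: mycompany/web:1.0   # hypothetical image; someone had to author its Dockerfile
    ports:
      - "8080:80"
    environment:
      - DB_HOST=db             # wiring conventions live only in this file
  db:
    image: postgres:9.6
```

Every line here encodes a decision a developer made by hand. Multiply that by hundreds of applications, each written by a different team with different conventions, and the consistency and policy questions above become obvious.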
Visa was part of the keynote on Day 2, and I thought there was an interesting discussion in their presentation that stood out. After a careful evaluation, Visa decided to use containers on bare metal instead of VMs. I think this is significant, as I’m seeing a trend of running containers on bare metal as the most efficient option. Visa said its decision to run on bare metal was driven by server utilization and efficiency. This trend, if it continues, will have a significant impact on who the datacenter players of the future will (and won’t) be.
For me, the best hashtag moment was when the CEO of Docker, Ben Golub, declared that the best way for enterprises to containerize their existing applications was to use #MuscleMemory. Docker’s position is that you get a few folks trained on Docker, get them to containerize one app, have them share their experiences with others, and then those folks will start containerizing their apps. Before you know it, the organization will have #MuscleMemory. I can just imagine the poor IT folks getting beaten down by this #MuscleMemory as they try to manage all the apps that were containerized in non-uniform ways.
The exhibit hall was crowded much of the time, and there were some interesting companies there for sure. Some of the storage companies (Nimble Storage immediately comes to mind) and the Sysdig and Datadog booths seemed busy all the time. Back to “bread and butter” storage, monitoring, and alerting, I guess.
However, a walk around the exhibit hall showed how muddy the waters are in terms of container management, automation, etc. Many companies vying for a piece of this pie were touting containers for legacy applications, but when you asked how they converted existing applications to containers, the answer was a confused look and, “Well… you give us your Docker Compose file and...” Unfortunately, this is not a great answer for large enterprises with hundreds (even thousands) of existing applications in ongoing development, nor for smaller ISVs that don’t have skills in container technology like Docker.
Some exhibitors took it even further though. There was one company at the show (who shall remain nameless) with a large booth and large sign that read, “Ask me to demo how we manage containers.” So, I asked for that demo and the guy who was to be the sacrificial lamb did his best to script his way out of the question (presumably so I would go away). When that failed, of course, I persisted and insisted that I wanted to see the demo promised on the sign. He finally confessed that they had no such product or features and the marketing guys just came up with the sign to lure attendees in. Disappointing, to say the least, but I certainly appreciated his candor. This was not a small company by any means, and their one-word name is quite recognizable, but it goes to show not only how muddy the waters are in this market, but how far some are willing to go with their marketing messages in lieu of an actual product.
So, who better to navigate these murky waters than a “Docker Captain” from a consultancy with expertise in container technology? There were plenty of these folks around as well, capitalizing on the current complexity and confusion around containers. As long as containerization is thought of as endless script writing, and running containerized applications is dealt with on a one-off basis, these Captains will be happy to show you the way. Just be careful which barge you end up on during your journey. In fact, it’s probably better to stay on dry land and try something different than scripting, non-uniformity, and one-offs.
Published at DZone with permission of Mazda Marvasti, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.