How Containers, Microservices, and Hybrid Cloud Enable Innovation
This interview covers how a mix of cloud-native technologies, like microservices and containers, speeds up development and deployment.
At Velocity West 2017 in San Jose, Mesosphere CTO Tobi Knaup sat down with Mike Hendrickson, vice president for content strategy at O’Reilly Media. The two discussed how companies can take advantage of data and encourage innovation by building cloud-native apps, and the ways DC/OS helps you build those apps on any infrastructure. Read the transcript below.
Mike Hendrickson, O’Reilly Media: So, you’re the co-founder [of Mesosphere]. How did you think about founding a company like Mesosphere?
Tobi Knaup, Mesosphere: Yeah, well, myself and my two co-founders, we were the people that were building infrastructure at Airbnb and Twitter, two large, web-scale companies. And we were using an open-source software called Apache Mesos to do that. And, you know, it was a very powerful technology that had really solved a lot of problems for us, so we thought, you know, let’s start a company around this and package it, turn it into a product that many other companies can use as well.
Mike: So containers have kind of exploded, and they’re really hot right now. But do they present trouble or potential challenges for technology people?
Tobi: Containers have really started on developers’ laptops, right? We have great tools for building containers on our laptops or packaging any application into a container. The challenges are really when you put them in production, and it’s no different than any other production software. You need to make sure that the platform that you’re running the containers on is highly available. That it doesn’t go down, that you monitor things the right way, that you get the right access to the logs and you have troubleshooting tools available to you.
Containers really change how we use these tools, right? A lot of the monitoring tools, for example, have been built prior to the cloud, so they’re very machine-centric, whereas containers really create this abstraction that is more application-centric. So a lot of the tools don’t really work that well, and we need to rethink how some of these operational tools work. Other challenges, you know, containers are great for packaging applications, for running 12-factor apps, but these 12-factor apps, these stateless 12-factor apps also need to connect to databases, to backing services, and so another challenge is how do you run these backing services in the container world?
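To make the 12-factor point concrete: a stateless service reads its backing-service configuration from the environment rather than baking it into the container image, which is what keeps the container itself portable. A minimal sketch in Python (not from the interview; the variable name and URLs are illustrative):

```python
import os

def database_url():
    """12-factor config: read the backing-service address from the
    environment instead of hard-coding it into the container image."""
    # DATABASE_URL is a common convention, not specific to any platform.
    return os.environ.get("DATABASE_URL", "postgres://localhost:5432/dev")

print(database_url())
```

The same image can then be pointed at a development database, a managed cloud service, or a Cassandra-style backing service run on the cluster, purely by changing the environment it launches with.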
Mike: So, microservices and containers. Are they synonymous, or do people put them together artificially? They’re not dependent on each other, but…
Tobi: Yeah, they’re not dependent on each other, but they often get used together because containers really make microservices very easy. But you can use them independently, too, so a lot of folks also take, you know, legacy three-tier enterprise applications like a Tomcat application server, put that in a container and run it. So, they’re often used together, but they’re not, you know, married together.
Mike: You add in cloud, and cloud native. Is that another piece to this larger puzzle?
Tobi: Yes, cloud native really expresses how the largest web companies have built their infrastructure and how they’ve optimized it for fast innovation and fast iteration. So cloud native really means getting all your software stack ready for your development teams, your product teams, to put out software quickly.
Mike: To deploy software?
Tobi: To deploy quickly — and this is really something that every enterprise, every company, in any industry should care about, right? How can I deploy software daily instead of twice a year? How do I make it so a deploy happens in seconds instead of hours? How do I make it so deploys are not scary things, but they’re actually, you know, things I enjoy? I want to get my code out faster, because that’s how I can test things with users faster and iterate on the product faster. And so cloud native is really a way of designing infrastructure to enable this fast iteration.
Mike: And so what are the inherent risks when someone is looking at delivering a cloud-native app on a continuous deployment basis? And then what are also the opportunities?
Tobi: The risks, of course, are that you’re making changes to your production infrastructure very often, so you want to make sure these changes are minimal and controlled. And container platforms like DC/OS, together with workflows like continuous integration and continuous deployment, provide the right tools so that these changes are low risk. You run all the tests before things go out, and you can do blue-green deployments, starting the new version of your app in parallel to the old one, so you have a way to roll back. So it’s really important to use these tools to kind of de-risk it.
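The blue-green pattern Tobi describes can be sketched in a few lines: the new version starts alongside the old one, traffic switches only after the new version proves healthy, and switching back is instant. This is an illustrative toy, not a DC/OS API; the class and version names are made up:

```python
class BlueGreenRouter:
    """Blue-green deployment sketch: "green" (the new version) runs in
    parallel with "blue" (the current one); traffic moves to green only
    after it passes a health check, and rollback is a pointer flip."""

    def __init__(self, blue):
        self.versions = {"blue": blue, "green": None}
        self.live = "blue"  # which version serves traffic right now

    def deploy_green(self, green, healthy):
        self.versions["green"] = green
        if healthy:          # e.g. smoke tests ran against green first
            self.live = "green"
        return self.live

    def rollback(self):
        # The old version is still running, so rollback is immediate.
        self.live = "blue"

router = BlueGreenRouter("app-v1")
router.deploy_green("app-v2", healthy=True)
print(router.versions[router.live])  # app-v2
```

The key property is that rollback never has to redeploy anything: blue keeps running until green has earned the traffic.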
And the opportunities are, really, it encourages innovation, right? It encourages your developers to put things out there faster and test it with real users to get feedback from those real users earlier. And that’s how the big web companies like Facebook and Twitter and Airbnb are testing new features and are iterating fast.
Mike: Those big companies are also using something that’s real time and streaming data. How do containers and how do microservices and everything work together — and cloud native — with the need for real, up-to-date streaming data?
Tobi: Collecting, making sense of, and analyzing the data is one of the most important things that any company has to do. Often the data in a company is the main piece of intellectual property, right? Whether you’re a search company that uses that data to show ads or you’re an IoT company that’s learning from all the devices and sensors and using that to make the product better, the data is really where it’s at. So to collect data in real time, and analyze it in real time, and get insights in real time, you really need to run a lot of infrastructure.
The SMACK Stack is a pattern that’s becoming popular there, which stands for Spark, Mesos, Akka, Cassandra, and Kafka. So Kafka is a message queue to ingest data in real time, Spark to process it, Cassandra to store it, Akka to build applications and show that data back to the user. Now, these are all fairly complex infrastructure pieces. They’re distributed systems. So setting them up, operating them, upgrading them, making them highly available, all the operations around it, it’s pretty involved. So if you take the traditional IT model of one app per server and training people for each one of those technologies, it’s very inefficient, right? It’s hard to find the talent to operate these things, and if you put them into silos on different servers you’re also wasting a lot of resources.
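The data flow through the SMACK roles (ingest, process, store) can be sketched with plain in-memory stand-ins. To be clear, this is a toy pipeline, not real Kafka, Spark, or Cassandra; it only shows the shape of the ingest-process-store loop:

```python
from collections import deque, Counter

ingest = deque()   # "Kafka" role: buffer incoming events in order
store = {}         # "Cassandra" role: keep the aggregated results

def produce(events):
    """Push (sensor, value) events onto the ingest queue."""
    ingest.extend(events)

def process_batch():
    """"Spark" role: drain the queue and aggregate readings per sensor."""
    counts = Counter()
    while ingest:
        sensor, value = ingest.popleft()
        counts[sensor] += value
    for sensor, total in counts.items():
        store[sensor] = store.get(sensor, 0) + total

produce([("sensor-a", 3), ("sensor-b", 1), ("sensor-a", 2)])
process_batch()
print(store)  # {'sensor-a': 5, 'sensor-b': 1}
```

The real systems earn their complexity by making each of these stages distributed, durable, and highly available, which is exactly the operational burden Tobi is describing.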
Mike: So a public cloud would be a better solution?
Tobi: Public cloud would be a better solution. It makes that really easy because the public cloud automates the operations of this complicated software.
Mike: But there could be some risks with that.
Tobi: There are some risks, exactly. So if you’re using [these services] on a public cloud, they’re often behind proprietary APIs. And if you do that, you end up locking yourself into one particular cloud provider, and you’ve got to be careful with that. Of the CIOs and CTOs I talk to, the majority choose a hybrid cloud architecture. And so if you’re locking yourself into one cloud at the level of the databases and the message queues that you’re using, that means your application is no longer portable. So if you want to go to a different cloud provider because it has better performance, or you want to also leverage your data center, you have to re-architect. It’s not truly portable.
Platforms like DC/OS provide truly portable data services. DC/OS takes leading open-source software like Cassandra or Kafka, along with their commercial versions, and automates their operations the same way a public cloud does. So it’s kind of the public cloud secret sauce, but it’s available to you on any infrastructure, whether you actually want to run it on a public cloud or on bare metal in the data center.
Mike: And is there a public version of DC/OS and an enterprise edition of DC/OS, or how does that work?
Tobi: That’s right, so you know we were born out of open source Apache Mesos, and so we have an open core model. There is an open source version of DC/OS that anybody can use and run anywhere, and also an enterprise version which adds more features that enterprises need to go to production, and Mesosphere provides commercial support for that as well.
Mike: So Tobi, if you and I sit down and have this same conversation 12 months from now, what would you like to say has changed for Mesosphere in those 12 months?
Tobi: What Mesosphere is really focused on is bringing customers a public cloud-like experience on any infrastructure and with choice of workloads from an open ecosystem. So what we’re focused on is bringing customers more databases, more message queues, more big data tools to DC/OS. We really think it’s a platform for running legacy apps, today’s apps, and even tomorrow’s apps.
So, to give you an example: Serverless or function-based programming is something that’s picking up in a lot of places. And, again, you need to run a lot of infrastructure for that. The public clouds make that really easy, but what about other infrastructures? And so, we already have in the DC/OS App Store, you know, two products from other vendors that allow you to do function-based programming. So a lot more of these things, and also a lot more investment from Mesosphere to make all of this work in a hybrid cloud environment.
Mike: So it sounds like DC/OS lets me scale up and scale out, is that a fair description?
Tobi: I think that’s fair to say, yeah. I think one of the big advantages of a cluster manager like DC/OS is that it’s very easy to scale elastically. That matters especially for workloads that are very spiky, and everybody has those: the data processing jobs we talked about, or streaming data, where you might have more data flowing in at some times of the day than others. You need to be able to elastically scale workloads up and down, and DC/OS really automates that nicely.
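The elastic-scaling decision for a spiky workload boils down to sizing the replica count to the current load, clamped to sane bounds. A minimal sketch, assuming made-up capacity numbers; DC/OS automates this with its own schedulers, not this function:

```python
import math

def desired_replicas(total_load, per_replica_capacity, min_r=1, max_r=20):
    """Elastic-scaling sketch: run just enough replicas to absorb the
    current load, clamped between a floor and a ceiling. The units and
    bounds here are illustrative, not a real DC/OS API."""
    needed = math.ceil(total_load / per_replica_capacity)
    return max(min_r, min(max_r, needed))

# Spiky streaming workload: a daytime burst vs. an overnight lull.
print(desired_replicas(1500, 100))  # burst -> 15 replicas
print(desired_replicas(40, 100))    # lull  -> 1 replica (the floor)
```

Scaling down during the lulls is where the resource savings come from; the silos-on-dedicated-servers model Tobi criticizes has to stay provisioned for the burst all day.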
Mike: Excellent. Tobi, we look forward to that conversation.
Tobi: Thank you very much.
Published at DZone with permission of Tobi Knaup, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.