
Voxxed Days Microservices: Guillaume Laforge on “Cloud Run, Serverless Containers in Action”


An architect and DZone MVB sat down with a fellow microservices expert to discuss serverless containers and his Voxxed Days 2019 presentation.


Hi Guillaume, tell us who you are and what led you into microservices?

Hello, I’m Guillaume Laforge, a developer advocate for Google Cloud Platform. I focus on the serverless products, like Google App Engine, Cloud Functions, and Cloud Run. On the side, I’m also a Java Champion, after years of working with Java and co-creating the Apache Groovy programming language, and one of the co-hosts of the French tech podcast Les Cast Codeurs.

So after years of consulting on big monolith projects, sometimes moving them toward communicating macro-services, I’ve become more and more convinced that smaller services, each focused on its own task, make more sense going forward. That said, this approach is not without its problems: it can be harder to get dozens or more microservices to work together, to build and deploy them independently, and so on. So it’s an interesting point in our software architecture history to see how best practices and tools evolve to support this new era.

What will you be talking about at Voxxed Days Microservices?

I’ll be talking about Cloud Run, a new product from Google Cloud that lets you run containers in a “serverless” fashion, on a managed infrastructure (no need to provision instances, servers, or clusters yourself; security patches are applied transparently; etc.), where auto-scaling lets your microservices scale down to zero (so you effectively pay zero as well) and up to hundreds of instances or more when your service is heavily used. Cloud Run also runs on Google Kubernetes Engine, or on any Kubernetes cluster where the Knative open-source project is installed, since Cloud Run’s technology builds upon the Knative APIs.
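To make the “serverless containers” idea a bit more concrete, here is a minimal sketch (my own illustration, not code from the talk) of a container-friendly HTTP service in Java. The only real contract with Cloud Run is that the container listens for HTTP requests on the port passed in through the PORT environment variable:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical example: a tiny HTTP service suitable for running as a
// serverless container. Cloud Run injects the port to listen on via the
// PORT environment variable; we default to 8080 for local runs.
public class HelloService {
    public static void main(String[] args) throws Exception {
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "Hello from a serverless container!".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Listening on port " + port);
    }
}

Packaged into a container image, a service like this is typically deployed with a single gcloud run deploy command, and the platform then scales instances with traffic, down to zero when no requests come in.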

Here, I’m highlighting both the “serverless” and “containers” buzzwords, but what’s the relationship with microservices, the theme of the conference? I posit that serverless containers are one of the simplest and best ways to run your microservices in the cloud, which is why I’m focusing on this technology stack. You don’t need to be a top-notch Kubernetes expert to reap the benefits of containers for your microservices.

The term “serverless” is getting more and more confusing, as more and more cloud products exhibit serverless characteristics (managed infrastructure, auto-scaling capabilities, pay-for-usage, etc.). Cloud, containers, serverless… aren’t we talking about the same things here? What are your definitions?

Microservices are an architectural style in which we decompose an overall application into smaller, intercommunicating components, each focused on one key task. We don’t always need to split everything into small microservices, and it’s probably an antipattern to dive head-first into a microservices-based architecture when a good old monolith could do the job just fine.

Serverless, on the other hand, denotes a set of characteristics that certain products share.

As you said, serverless products are usually about a managed infrastructure, where developers don’t have to provision servers or clusters before being able to deploy anything; instead, they deploy directly to that infrastructure with some tooling (command-line interfaces, CI/CD platforms, console UIs…). Another important aspect of a managed infrastructure is that its provider (whether a cloud provider or an on-prem IT team) takes care of the underlying layers: applying security patches to the OS, for example.

The auto-scaling capability and pay-for-usage usually go hand in hand, but that’s not mandatory: we could imagine pay-as-you-go pricing for tools that aren’t serverless and still require a provisioned infrastructure, or a product that charges fees for usage even if the provider doesn’t actually provision the servers or clusters for your apps.

If operating your service costs you 1 euro and, at some point, you need twice the capacity, you usually pay twice as much, i.e., proportionally to your usage. But it’s also interesting to have the scale-to-zero capability: when your service isn’t running (no servers are provisioned and running anymore), the cost also goes down to zero. That’s important for workloads that aren’t running 100% of the time: for instance, think of monthly or quarterly reports, or any task that runs at regular intervals but doesn’t necessarily receive a daily flow of incoming requests.

Good, see you soon then!

I’m looking forward to meeting speakers and attendees at the conference, to starting fruitful discussions around microservices, and to understanding the needs of startups and enterprises regarding microservices and serverless solutions.

DZoners! Visit Voxxed Days Microservices here to check out ticket info for the show, and use the DZone reader-exclusive code VXDMS19_COM_DZONE to get 20% off the conference and workshop.

Topics:
microservices, serverless containers, voxxed days

Published at DZone with permission of Antonio Goncalves, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
