Autonomy or Authority: Why Microservices Should Be Event-Driven

Some of the recommendations we hear for building a microservices architecture go against our sensibilities. Autonomy, not authority, is a key piece to the puzzle.

We’ve been discussing microservices in the mainstream for over two years now. We’ve seen many a blog post and conference talk about what microservices are, what benefits they bring, and the success stories of some of the companies that have been at the vanguard of this movement. Each of those success stories centers around the capacity for these companies to innovate, disrupt, and use software systems to bring value to their respective customers. Each story is slightly different, but some common themes emerge:

  • Microservices helped them scale their organization.
  • Microservices allowed them to move faster to try new things (experiment).
  • The cloud allowed them to keep the cost of these experiments down.

The question then becomes, “How does one get their hands on some microservices?” We’re told our microservices should be independently deployable. They should be autonomous. They should have explicit boundaries. They should have their own databases. They should communicate over lightweight transports like HTTP. Nevertheless, these things don’t seem to fit our mental model very well. When we hear “have their own databases,” it shatters the comfortable safety guarantees we know and love. No matter how many times we hear these postulates, or even if we carved them into stone, they’re not going to be any more helpful. What might be helpful is trying to understand some of this from a different mental model. Let’s explore.

In many ways, IT and other parts of the business have been built for efficiencies and cost savings. Scientific management has been the prevailing management wisdom for decades and our organizations closely reflect this (see Conway’s law). To use a metaphor as a mental model: our organizations have been built like machines. We’ve removed the need for anyone to know anything about the purpose of the organization or their specific roles within that purpose and have built processes, bureaucracies, and punishments to keep things in order. We’ve squeezed away every bit of inefficiency and removed all variability in the name of repeatable processes and known outcomes. We are able to take the sum of the parts and do the one thing for which the machine was designed.

How does all this translate to our technology implementations? We see our teams organized into silos of development, testing, database administration, security, etc. Making changes to this machine requires careful coordination, months of planning and meetings. We attempt to change the machine while it’s running full steam ahead. Our distributed systems reflect our organization: we have layers that correspond to the way we work together. We have the UI layer. The process management layer. The middleware, data access, and data layers. We’ve even been told to squeeze variability and variety out of our services. Highly normalized databases. Anything that looks like duplication should be avoided. Build everything for reuse so we can drive costs down.

I’m not advocating for doing the opposite per se, but there are problems with this model in our new, fast-changing competitive landscape. The model of a machine works great if we’re building physical products: a car, a phone, or a microwave. If we already know what the outcome should be, this model has proven amazingly successful (see the Industrial Revolution). However, this has fundamentally changed. Companies are building value these days through services, not through product alone. Service design is key. This model of a purpose-built machine is not a good model for building a service organization. Let’s look at a different model.

What we really want is to find new ways to bring value to our customers through service and stay ahead of our competitors. To do that, we need to listen to our customers. To be able to react and fulfill their needs, we need to deal with the fact that customers don’t know what they want. We need to explicitly deal with variety (law of requisite variety) and variability. We need to build “slack” into our systems to account for the “I don’t know” factor. In many ways, innovation is about admitting “I don’t know” and continually figuring out the right set of experiments to ask the right questions and then learn from the outcomes. Since software has eaten the world, this translates into building systems that have variability, feedback loops, and speed of change built into them. This looks very different from a machine. The model I like to use is a city.

Cities have many types of systems that co-exist, co-evolve, and exhibit a lot of the same behaviors we want: emergent innovation through experimentation. There is no top-down central control and planning, and no highly optimized silos. We have lots of independent, autonomous “agents” (people, families, businesses, etc.) and the operating environment (laws, physical geography, weather, and basic city services like roads, power, water, waste disposal, etc.). These agents interact in cooperative and also competitive ways. They interact with each other by asking each other to do things or responding to events to which they’re exposed. These agents are driven by purpose (survival, personal/spiritual/monetary fulfillment, curiosity, etc.), and what emerges through these simple elements is an amazingly rich, resilient, and innovative ecosystem. Cities scale amazingly (see NYC). They innovate (see San Francisco, Seattle, etc.). There are no single points of failure. They recover from catastrophic failures (see natural or human-made catastrophes). And out of all of this, there is no single authoritative figure, or set of figures, dictating how any of it happens.

This model of a city fits our microservices description a little better. Now let’s start to translate it to distributed systems. First, each agent (service) has its own understanding of the world. It has a history (a series of events) that gives it the current frame of reference from which it makes decisions. In a service, this frame of reference is potentially implemented in its own database. Services interact by asking each other to do something (commands) or responding to some given fact (events). A service may observe an event and update its current understanding of the world (its history or its state). In this world, time and unreliability should be modeled explicitly.
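
To make this concrete, here is a minimal sketch in plain Java of a service that builds its frame of reference by applying the events it observes and then makes decisions from that local state. The OrderPlaced event and all the names here are hypothetical illustrations, not from any particular framework:

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of a service that builds its own frame of reference by
// applying the events it observes. Event and field names are
// hypothetical, purely for illustration.
public class OrderHistoryService {

    // An observed fact: some customer placed an order for some amount.
    record OrderPlaced(String customerId, double amount) {}

    // This service's private view of the world. In a real service this
    // would live in its own database, shared with no one else.
    private final Map<String, Double> totalSpendByCustomer = new HashMap<>();

    // Observing an event updates local state. No other service is
    // consulted; the event itself carries the fact.
    public void on(OrderPlaced event) {
        totalSpendByCustomer.merge(event.customerId(), event.amount(), Double::sum);
    }

    // Decisions are made from the local frame of reference.
    public boolean isHighValueCustomer(String customerId) {
        return totalSpendByCustomer.getOrDefault(customerId, 0.0) > 1000.0;
    }

    public static void main(String[] args) {
        OrderHistoryService service = new OrderHistoryService();
        service.on(new OrderPlaced("alice", 700.0));
        service.on(new OrderPlaced("alice", 450.0));
        System.out.println(service.isHighValueCustomer("alice")); // true
    }
}
```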

A good way to do this is by passing messages. Things may not show up on time. Things might get lost. You may not be available to take commands. Other services may not be around to help you. You may favor autonomy (doing things yourself with the state you have) over authority (asking someone else who may have the authoritative answer). To implement this, you may have to distribute knowledge about events and state through replication (in cities, we do this with broadcast mechanisms like newspapers, the internet, social networks, etc.).
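
As a sketch of what favoring autonomy can look like (again plain Java; the names are hypothetical and the remote call is simulated), a service can answer from state it has replicated locally via events and treat the authoritative service as a fallback that may simply not be there:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// A sketch of favoring autonomy over authority. All names here are
// hypothetical, and the remote call is simulated for illustration.
public class PricingView {

    // Local replica of another service's data, kept current by
    // consuming the price-change events it broadcasts.
    private final Map<String, Double> replicatedPrices = new ConcurrentHashMap<>();

    // Event handler: replication is what makes autonomy possible.
    public void onPriceChanged(String sku, double price) {
        replicatedPrices.put(sku, price);
    }

    // Simulated call to the authoritative pricing service. Messages get
    // lost and services go away, so model unavailability explicitly.
    private Optional<Double> askAuthority(String sku) {
        return Optional.empty(); // pretend the authority is unreachable
    }

    public Optional<Double> priceFor(String sku) {
        // Favor autonomy: decide with the (possibly stale) state we have...
        Double local = replicatedPrices.get(sku);
        if (local != null) {
            return Optional.of(local);
        }
        // ...and only then fall back to asking the authority.
        return askAuthority(sku);
    }

    public static void main(String[] args) {
        PricingView view = new PricingView();
        view.onPriceChanged("widget-1", 9.99);
        System.out.println(view.priceFor("widget-1")); // Optional[9.99]
    }
}
```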

When you do this, you may have to consider different state consistency models. Starbucks doesn’t do two-phase commits. Not everything can or should expect two-phase-commit, consensus-approved consistency. You may need something more relaxed, like sequential, causal, or eventual consistency. You’ve got to make decisions with the state you have and potentially be in charge of resolving conflicts. You’ve got to equip your service to deal with these scenarios, and when it cannot deal with them, you need to learn from them. This requires changing behavior. In a microservices world, this translates to quickly making changes to your service and getting them back out there via a CI/CD pipeline.
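
One relaxed approach, sketched below in plain Java with hypothetical names, is last-writer-wins conflict resolution over replicated state: updates may arrive late or out of order, and the service resolves the conflict itself with the state it has. This is just one assumed option; causal ordering or compensating actions (the Starbucks approach) are equally legitimate:

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// A sketch of one relaxed consistency strategy: last-writer-wins
// resolution over replicated state. The key/value shape is hypothetical.
public class LastWriterWinsStore {

    record VersionedValue(String value, Instant writtenAt) {}

    private final Map<String, VersionedValue> store = new HashMap<>();

    // Updates may arrive late or out of order; the service resolves the
    // conflict itself, keeping the write with the newest timestamp.
    public void apply(String key, String value, Instant writtenAt) {
        store.merge(key, new VersionedValue(value, writtenAt),
                (current, incoming) ->
                        incoming.writtenAt().isAfter(current.writtenAt())
                                ? incoming : current);
    }

    public String read(String key) {
        VersionedValue v = store.get(key);
        return v == null ? null : v.value();
    }

    public static void main(String[] args) {
        LastWriterWinsStore addresses = new LastWriterWinsStore();
        Instant t0 = Instant.parse("2017-01-01T10:00:00Z");
        addresses.apply("alice", "updated street", t0.plusSeconds(60));
        addresses.apply("alice", "original street", t0); // stale, arrives late
        System.out.println(addresses.read("alice")); // "updated street"
    }
}
```

Wall-clock timestamps keep the sketch simple but are fragile across machines; version numbers or vector clocks are sturdier choices when clocks can’t be trusted.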

Microservices architectures are a type of complex-adaptive system, and we need new models to reason about them. Thankfully, a lot of distributed-systems theory, research, and practice has been around for 40+ years and innately takes this type of system into account (a distributed system is itself a complex-adaptive system!). We don’t have to reinvent the wheel to do microservices; we just need to re-orient our mental model and be a catalyst for change throughout our organizations.
