Several years ago, I was a developer at a travel reservation aggregator. Our mission was to pull flight and hotel data from a bunch of cryptic reservation platforms and provide it to other companies via an API library – for a fee. That was before companies like Expedia standardized such things.
We started with simple methods like getFlightLeg() or addPassengerName(), each performing a small, well-understood function. But our customers wanted bigger, more encompassing services that would “do it all.” Soon, we’d “evolved” into a handful of über services, black boxes like createBookingFromScratch (not a real name). In one call, it could create an account, all the passengers, reserve multiple flight legs, seats, hotel, you name it. It even submitted payment.
Sounds great, right? When they worked, yes. But often the caller got a return code like “-181” with no real way of knowing:
- What went wrong – the input, a back-end service, and if so, which one
- What the exception code even meant
- Whether any part of the transaction had gone through
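A minimal sketch of the problem, with all names invented (loosely modeled on the createBookingFromScratch story): the mega-service collapses every failure into one opaque numeric code, while a fine-grained service raises an error that names exactly what went wrong.

```python
# Hypothetical sketch (all service and helper names invented):
# one opaque return code vs. a specific, self-describing error.

def _create_account(info):
    if not info.get("email"):
        raise ValueError("account needs an email")
    return {"email": info["email"]}

def _submit_payment(account, card):
    if len(card.get("number", "")) != 16:
        raise ValueError("malformed card number")

def create_booking_from_scratch(request):
    """Mega-service: account, payment, everything in one call."""
    try:
        account = _create_account(request.get("account", {}))
        _submit_payment(account, request.get("card", {}))
    except Exception:
        return -181  # which step failed? did anything commit? the caller can't tell
    return 0

class MalformedCardError(Exception):
    pass

def submit_payment(account_id, card):
    """Fine-grained alternative: one small task, and the failure names itself."""
    if len(card.get("number", "")) != 16:
        raise MalformedCardError(f"card for account {account_id!r} is malformed")
    return {"account": account_id, "status": "charged"}
```

Either way the bad card is rejected; the difference is that the microservice's caller learns which object was at fault and why, instead of a bare "-181".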
The same thing has happened to web services. What started off as createCustomer now processes an entire order. Inside are so many points of failure that the consumer has to write code to handle them all: customer already exists, invalid address, credit card cancelled, item out of stock, and a hundred others – not to mention server and database errors.
Now we’re going back to smaller, more-transparent methods that each perform a single function and perform it well. Microservices.
But how does this architecture affect the various teams in the delivery cycle?
What changes for your team when implementing microservices, when a “macroservices” approach (so to speak) is so ingrained? There are three key audiences to consider; here’s how a microservices architecture affects each.
Architects, of all people, understand the evolution of software design. That perspective helps them see why an old approach sometimes makes sense again – and what moving to a microservices structure implies.
Think of microservices as analogs to the getters and setters in object-oriented programming languages. Unlike their procedural predecessors, those methods each performed a tiny task on an object. Using them, developers controlled the object’s behavior at a fine-grained level. If an error or exception occurred, a simple return code revealed the problem.
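As a quick illustration (the class and its fields are hypothetical), a fine-grained setter touches exactly one attribute, so any failure can only mean one thing: that setter's input.

```python
# Illustrative sketch: fine-grained accessors localize failures.
class FlightLeg:
    def __init__(self, origin, destination):
        self._origin = origin
        self._destination = destination

    def get_origin(self):
        """Tiny task: read one attribute."""
        return self._origin

    def set_origin(self, airport_code):
        """Tiny task: a rejected value can only mean this one input was bad."""
        if len(airport_code) != 3 or not airport_code.isalpha():
            raise ValueError(f"invalid airport code: {airport_code!r}")
        self._origin = airport_code.upper()
```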
Later we used those methods to build up more complex objects and APIs, to make the end user’s life easier. Before long, though, those mega-method APIs became the new black boxes, hiding all the details. If an error occurred deep down, the exception code was almost meaningless to the caller – or the tester.
That’s what our web APIs are like today. Here’s why this approach matters to the architect:
The input object is the primary object acted upon. For example, createCustomer should act on a new Customer object, not on an Order or PaymentHistory. Those items might be related to a Customer, but they are separate from the Customer.
The service does what it says, not 100 other actions the consumer has no idea about. A microservice can use other APIs, of course. It just shouldn’t stray into performing operations not directly relevant to the task at hand. (Again, createCustomer shouldn’t go off and create an Order or make a Payment… it should create a Customer.)
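A sketch of that boundary, with illustrative names only: this create_customer validates and stores a Customer and nothing else – orders and payments would live behind their own services.

```python
# Illustrative sketch: a microservice scoped to exactly one object.
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: str
    email: str

_customers = {}  # stand-in for a real data store

def create_customer(customer_id, email):
    """Acts on the Customer only -- no Order, no Payment side effects."""
    if customer_id in _customers:
        raise ValueError(f"customer {customer_id} already exists")
    if "@" not in email:
        raise ValueError(f"invalid email address: {email!r}")
    customer = Customer(customer_id, email)
    _customers[customer_id] = customer
    return customer
```

Note that the two failure modes here – duplicate customer, invalid address – are exactly the kind of short, enumerable error list the article argues for.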
Risk isn’t linear with respect to code size: it’s exponential. The less code executed in each testable unit, the easier it is to isolate bugs when they happen.
Any architect who has documented a system design with too few entry points can appreciate these benefits.
Many of the same benefits also apply to developers. But most programmers are more pragmatic. Or they’re just set in their ways and want to know what’s in it for them.
That’s normal. We all do that when comparing “the way we’ve always done it” with something new – especially if we think it will create more work. In some respects, embracing microservices does appear to create more work. Smaller chunks of functionality mean more methods to create and test.
More methods don’t always mean more time and effort, though. Here are some ways that microservices can benefit the developer:
Most software groups today use – or wish they used – an agile methodology. Whether it’s Scrum, XP or another variant, developers like the agile approach. It breaks the work into small, focused and manageable chunks, delivering real value in short, iterative cycles. That’s just what microservices do. They provide small, well-defined pieces of functionality instead of monolithic kitchen-sink APIs. Microservices are a perfect fit for agile sprints.
Each microservice is independent of others, so development can proceed in parallel. Programmers don’t have to sit around waiting for someone else to finish, or waste time creating stubs. This means less unproductive time – which can often shorten a schedule rather than lengthen it.
If the testers are testing huge black boxes, they’ll simply report that “it failed.” To the developer, “it” might be thousands of lines of code, doing dozens of barely-related functions. With microservices, the testers can report 99% worked fine, but this one microservice failed. Because the method is small, the developer knows exactly where to look and can fix it with ease.
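That reporting idea can be sketched as a tiny per-service suite (service names hypothetical, echoing the getFlightLeg / addPassengerName examples from earlier): each check runs independently, so a red result points at one short function rather than a thousand-line black box.

```python
# Hypothetical sketch: per-service checks pinpoint the one failing unit.

def get_flight_leg(legs, index):
    """Tiny service: look up one leg of an itinerary."""
    return legs[index]

def add_passenger_name(passengers, name):
    """Tiny service: append one validated passenger name."""
    if not name.strip():
        raise ValueError("passenger name is empty")
    return passengers + [name]

def run_suite(checks):
    """Run each check independently and report pass/fail per microservice."""
    results = {}
    for name, check in checks.items():
        try:
            results[name] = "pass" if check() else "fail"
        except Exception:
            results[name] = "fail"
    return results

checks = {
    "get_flight_leg": lambda: get_flight_leg(["SFO-JFK"], 0) == "SFO-JFK",
    "add_passenger_name": lambda: add_passenger_name([], "Ada") == ["Ada"],
    # deliberately bad input: the report names exactly which call broke
    "add_passenger_name_blank": lambda: add_passenger_name([], "   ") == [],
}
report = run_suite(checks)
```

The report reads like the article's ideal: most services pass, and the one failure is tied to a method small enough to debug on sight.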
Most developers will agree that agile sprints and parallel development are key to quality, on-time delivery. But since microservices also make debugging easier, you should get a thumbs-up.
The first complaint you’ll hear from testers about implementing a microservices architecture is: “There are so many more methods to test! More work for me.”
True, there are more methods. Instead of one service to create-a-user-account-profile-accept-and-validate-new-payment-method-check-inventory-and-place-order, they’ll have to test a couple dozen smaller services. That’s going to mean a lot more time and effort from a team already strapped for resources, right?
Not always. Here are some ways microservices benefit the tester.
While you might have fewer methods to test for a mega API, the number of exceptions you have to test for is huge. Remember that a mega-service can call dozens of other services, and failure can happen anywhere. You’d better account for every possible error condition on every object touched in the API – if you even know what they are. (And good luck interpreting the exception codes.)
There may be more APIs to test, but you needn’t worry about trapping all 157 “known” error codes as you would with a mega-service. (And what about the undocumented ones?) Each microservice has only a few failure points and a limited set of error codes for you to test.
As mentioned before, risk grows exponentially with the size of the code being tested. Would you rather have one out of four monoliths fail after an upgrade – with no way to isolate why – or one out of 95 microservices?
One more thing for test teams to remember: once these extra tests are set up, they can be reused without modification. Further, using an automated tool to create and execute the tests means the actual testing needn’t take more people or time at all.
Now that everyone’s happy – who gets this task?
The architects, developers and testers should agree with you now: Microservices are the way to go! In the end, they can make life easier for everyone – including the consumers. The one remaining question is: Who orchestrates the bigger picture?
With a monolithic API, the designer and developer orchestrate all the actions that happen within it. The API consumer doesn’t need to worry about it – until it fails.
With microservices, the caller consumes each one as part of a set of services that collectively perform a larger task. That means someone has to plan, test and document the recommended sequences – to guide end users in their own orchestration. (This subject is large enough that we’ll save it for a later article or two.)
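A sketch of that caller-side orchestration (step names hypothetical): the consumer sequences the small services, and when a step fails it can report exactly which one, why, and what had already completed – the very questions the "-181" return code could never answer.

```python
# Hypothetical sketch: the consumer orchestrates a documented sequence of
# small services, handling each step's failure by name.

def orchestrate_order(steps):
    """Run (name, callable) steps in order; report exactly where a failure hit."""
    completed = []
    for name, step in steps:
        try:
            step()
        except Exception as exc:
            return {"ok": False, "failed_step": name,
                    "reason": str(exc), "completed": completed}
        completed.append(name)
    return {"ok": True, "completed": completed}
```

For example, a caller might chain a create-customer step and a submit-payment step; if the payment service raises, the result names "submit_payment" as the failed step and shows that the customer step had already gone through.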
In the end, though, understanding API results is half the battle. Neither testers nor developers can easily interpret a failure within a monolithic service. And the API consumers shouldn’t have to. Microservices is an old concept applied to the new world of web services APIs. It addresses the drawbacks of unwieldy black-box services.
Adopting microservices presents a few challenges, but there’s no doubt that in this case, what’s old is new again. And that’s a good thing for everyone.