From Request/Response to Events
Learn how to transition from the request/response model to event-based flows in your microservices.
How to Transition From One to the Other Within Your Microservice Architecture
Generally, the internet is built on the traditional request/response (r/r) pattern and, for the most part, this is fine. However, deep within a microservice architecture you may want a message-based system to ensure proper autonomy, isolate dependencies, and handle failures gracefully.
On the other hand, if you take the message model all the way to the edge, the layer that serves the front end, you end up forcing the front end, and your gateway, to adopt something like WebSockets to push information back to the client. An architecture that demands this kind of coupling is not healthy.
In most cases the traditional request/response model works just fine for the front end when it requests pages, assets, or data, so adopting WebSockets is not a natural strategy. However, some activity, especially when many microservices are involved, risks timing out in the browser, and even if it doesn't (because we force the connection to stay open), the wait becomes unnaturally long and the user gives up.
First, Think Differently
Maybe you think this is a cop-out, but it is important that we shift our focus from the application to the user. In the r/r model, the front end needed the response in order to complete the request and close the connection. In more fluid user experiences, however, we are concerned with notifying the user, not the application: we are responding to the user's request.
Imagine a workflow where a user applies for an insurance product and the application has to go through identity verification, fraud detection, policy analysis, and so on. The user could sit in front of the screen waiting for the spinner to disappear, or we could tell them that the application is being processed and that we will let them know as soon as processing completes, within the next few minutes.
The user can get up and continue with their day, then receive an email or, if we have an app, a push notification. The goal is to let the user know in whichever way seems appropriate; forcing the user to be notified only via the browser on the front end we developed is limiting.
Now, Let's Go Through the Flow
For simple services, such as data lookups, we should continue with the traditional r/r workflow. There is no need to complicate matters here (although you may want to look at Digital Osmosis).
For complicated, multi-step flows, the first step should respond immediately after placing the request into a message queue and storing a notification identifier in a database. The initial request completes, and both the front-end application and the user are freed.
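As a sketch, that accepting step might look like the following. An in-process `queue.Queue` and a plain dict stand in for a real message broker and database, and all names (`accept_application`, `notification_db`, and so on) are illustrative assumptions, not from any specific framework:

```python
import queue
import uuid

# Stand-ins for a real broker (e.g. RabbitMQ/Kafka) and datastore.
# In production these would be external, durable systems.
request_queue = queue.Queue()
notification_db = {}

def accept_application(payload, user_contact):
    """Edge endpoint: enqueue the work and return immediately."""
    notification_id = str(uuid.uuid4())
    # Remember how to reach the user once processing completes.
    notification_db[notification_id] = user_contact
    # Hand the payload off to the asynchronous flow.
    request_queue.put({"notification_id": notification_id, "payload": payload})
    # Respond right away: the request is accepted, not yet completed.
    return {"status": "accepted", "notification_id": notification_id}

result = accept_application({"product": "home-insurance"}, "user@example.com")
```

In an HTTP setting this would typically map to returning a 202 Accepted, so the connection closes without waiting for downstream services.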
Once this disconnect happens we can design our internal architecture to fit our expected needs and freely choose between event-based flows and direct request/response flows.
Once the incoming payload has been fully processed, we place a message in a final message queue, which is then picked up by a notification service. With the help of the ID we stored in the database, the notification service figures out how to notify the user.
This can still be via a WebSocket (but now our API gateway doesn't need to support WebSockets), or via email or a device push notification for processes that take more than a minute.
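A minimal sketch of that final notification step, again with in-process stand-ins for the queue and database; the names and the channel-selection rule (push if a token exists, otherwise email) are assumptions for illustration:

```python
import queue

done_queue = queue.Queue()
# Stand-in for the DB row written when the request was first accepted
# (notification_id -> how to reach the user).
notification_db = {"abc-123": {"email": "user@example.com", "push_token": None}}

def notify_user(message):
    """Map the stored notification ID back to a delivery channel."""
    contact = notification_db[message["notification_id"]]
    if contact.get("push_token"):
        return ("push", contact["push_token"], message["result"])
    # Fall back to email for longer-running processes.
    return ("email", contact["email"], message["result"])

# A worker would normally block on the queue in a loop; one iteration shown.
done_queue.put({"notification_id": "abc-123", "result": "policy approved"})
channel, address, body = notify_user(done_queue.get())
```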
Why Even Bother?
Now, naturally, this complicates matters, and if you are a good engineer, you want to keep complexity down. But message-based workloads have advantages.
When we have multi-step processes, which in turn may depend on external processes, we can see deep variations in completion times. Even while our system is experiencing regular load, a downstream system may be experiencing high load, slowing down our services.
As response times increase, we risk timeouts, and also user uncertainty. When we are doing sensitive things, like opening a bank account, we don't want the user to interpret a slow service, or even a timeout error, as an invitation to hit submit again. The user should be told: 1) your request has been received, 2) we are processing it, 3) we'll let you know when we are done.
With sensitive requests that need to hop through many services, including services we don't control, we want to ensure that graceful recovery is possible. If a step in the middle of a multi-step process fails, we want graceful ways of trying again, 1) without losing the initial request payload, and 2) without further overburdening exhausted services and potentially bringing them down (think circuit breakers).
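The circuit-breaker idea can be sketched as follows. The queue already protects the payload (an unacknowledged message is redelivered); the breaker protects the struggling downstream service. This is a deliberately minimal illustration, not a production implementation; the class name, thresholds, and simplified half-open behavior are my own:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: stop calling a struggling downstream
    service after repeated failures instead of piling on more load."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: not calling downstream")
            self.opened_at = None  # timeout elapsed: allow a trial call

        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the count
        return result

breaker = CircuitBreaker(failure_threshold=2, reset_timeout=60.0)

def flaky():
    raise ValueError("downstream unavailable")

errors = []
for _ in range(3):
    try:
        breaker.call(flaky)
    except Exception as exc:
        errors.append(type(exc).__name__)
# The first two calls fail against the real service; the third is
# rejected immediately by the open circuit without touching it.
```

Libraries such as resilience4j (JVM) or pybreaker (Python) provide hardened versions of this pattern.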
The message-based flow, while more complex, allows for a more graceful way of handling complicated workloads. On the other hand, forcing message-based flows all the way to the front end introduces architectural constraints that may not be ideal, even unnatural. To have the best of both worlds, we need a way to hand over from one to the other as needed. But for this to be possible, we need to shift from thinking that we are answering our front-end app to understanding that we are looking for the most natural way to communicate with the user.
Published at DZone with permission of Gratus Devanesan, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.