
Cloud-Based Microservices Discovery


Learn how to plan an effective approach to inter-service communication and service discovery in cloud-based microservices.



“Microservices” and “cloud” are two buzzwords in today’s technology landscape. In the current agile era, microservices is an architectural style that structures an application as a collection of loosely coupled services. The foundational concept of the cloud, on the other hand, is to provide enterprise-standard IT infrastructure easily and seamlessly with a few mouse clicks (IaaS), and Amazon is one of the pole stars in this expanse.

To implement an enterprise-scale application, there is an inescapable need for communication between multiple services. In this article, I will share my thoughts on service discovery and inter-service communication for microservice implementations on the AWS cloud.

Problem Statement

A monolithic application runs on a single process and building blocks communicate internally by invoking language-level methods or functions. On the contrary, a microservices-based application is a distributed system, running on multiple processes, even across multiple servers, and the main challenge here is interaction between microservices to perform a business use case.

Solution Overview

As there are numerous technologies to realize microservice architecture, plenty of solutions address the problem of inter-service communication and service discovery. We can implement asynchronous or synchronous communication between microservices using any inter-processes communication protocol, such as HTTP, AMQP, and TCP depending on the nature of the service. The most popular communication protocol in the microservice community is REST over HTTP or HTTPS. Using the REST protocol, a developer can easily decouple services in terms of business as well as implementation and obey the philosophy of “smart endpoints and dumb pipes.”
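To make the "smart endpoints and dumb pipes" idea concrete, here is a minimal sketch of synchronous REST communication between two services using only JDK classes (Java 11+). The "order service" endpoint, port, and payload are hypothetical, chosen purely for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestCallDemo {
    // A stand-in "order service": a tiny HTTP endpoint returning a JSON document.
    static HttpServer startOrderService(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/orders/42", exchange -> {
            byte[] body = "{\"orderId\":42,\"status\":\"SHIPPED\"}".getBytes("UTF-8");
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    // A consuming service calling the order service synchronously over REST.
    static String fetchOrder(int port) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + port + "/orders/42"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = startOrderService(8089);
        try {
            System.out.println(fetchOrder(8089));
        } finally {
            server.stop(0);
        }
    }
}
```

The endpoint is "dumb" in the sense that it only moves a JSON document over HTTP; all business intelligence lives at the two ends, which keeps the services decoupled in both business and implementation terms.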

As there are plenty of solutions and the scope of the topic is huge, I will discuss some approaches to inter-service communication and service discovery for cloud-based microservices. All the approaches are realized using AWS Cloud.

Solution Approaches

Amazon provides multiple AWS services with which we can implement microservices.

The most direct one uses EC2 Container Service (ECS). An Amazon ECS cluster is a logical grouping of container instances on which we can place tasks. The center of Amazon ECS is the Cluster Management Engine, a back-end service that uses optimistic, shared state scheduling to execute processes on EC2 instances using Docker containers. ECS coordinates the cluster through the Amazon ECS Container Agent running on each EC2 instance in the cluster.

Later, Amazon introduced Fargate to accomplish long-running processes without managing a cluster of EC2 instances. We no longer have to pick the instance types, manage cluster scheduling, or optimize cluster utilization; Fargate takes care of all of this.

Lambda is a serverless technology suitable for small computations (execution duration should be less than or equal to 300 seconds). As microservice architecture demands a segregation of tasks in a flock of services, we can easily implement AWS-based microservice architecture using Lambda.

Each of the above technologies has its own technical approach to establish communication amongst services. Hybrid architecture (e.g. ECS with Lambda implementation) also has its own inter-service communication challenges and solutions. In this article, I will touch on an inter-service communication and/or service discovery approach for ECS Cluster and Fargate-based microservice architecture and will try to go deeper for Lambda-based implementation.

ECS Cluster or Fargate

Service discovery for Amazon ECS can be done via the Domain Name System (DNS). We can implement an ELB (Elastic Load Balancer)-based service discovery solution; AWS Labs already provides a reference architecture for it.

[Figure: ELB-based service discovery reference architecture]

In the reference architecture, tasks are created within ECS and placed behind an ELB. A new task generates an AWS CloudTrail event, which is picked up by Amazon CloudWatch Events and triggers a Lambda function. This Lambda function registers the service in an Amazon Route 53 private hosted zone; the resulting CNAME record then points to the appropriate ELB.
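As a sketch of what such a registering Lambda might do with the AWS SDK for Java, the snippet below upserts a CNAME record into a private hosted zone. The hosted zone ID, service DNS name, and ELB DNS name are placeholders, not values from the reference architecture's actual code:

```java
import com.amazonaws.services.route53.AmazonRoute53;
import com.amazonaws.services.route53.AmazonRoute53ClientBuilder;
import com.amazonaws.services.route53.model.*;

public class ServiceRegistrar {
    // Hypothetical helper: upsert a CNAME for a service into a private hosted zone.
    public static void register(String hostedZoneId, String serviceDnsName, String elbDnsName) {
        AmazonRoute53 route53 = AmazonRoute53ClientBuilder.defaultClient();
        ResourceRecordSet recordSet = new ResourceRecordSet()
                .withName(serviceDnsName)   // e.g. orders.service.internal (placeholder)
                .withType(RRType.CNAME)
                .withTTL(60L)
                .withResourceRecords(new ResourceRecord().withValue(elbDnsName));
        ChangeBatch batch = new ChangeBatch()
                .withChanges(new Change()
                        .withAction(ChangeAction.UPSERT)
                        .withResourceRecordSet(recordSet));
        // UPSERT creates the record if absent and updates it if it already exists.
        route53.changeResourceRecordSets(new ChangeResourceRecordSetsRequest()
                .withHostedZoneId(hostedZoneId)
                .withChangeBatch(batch));
    }
}
```

Running this requires AWS credentials and an existing private hosted zone, so treat it as an illustration of the registration step rather than a drop-in implementation.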

There is a possible use case where a load balancer is not required. An approach without ELB is proposed on Chris Barclay’s blog. Here, AWS CloudTrail, Amazon CloudWatch Events, and the Lambda function are replaced with a simple agent, ecssd_agent.go. This agent runs on each ECS container instance and is responsible for managing and maintaining the service discovery component. The agent receives all Docker events natively and registers services in a Route 53 private hosted zone.

Amazon ECS now includes integrated service discovery. ECS creates and manages a registry of service names using the Route 53 Auto Naming API. The Amazon ECS Service Discovery blog gives an awesome overview of ECS integrated service discovery.

Instead of a private hosted zone, public namespaces are also supported, but we must have an existing public hosted zone registered with Route 53 before creating the service discovery service.

Automatic service discovery using the Amazon Route 53 auto naming API has a pricing impact. Customers using Amazon ECS service discovery are charged for the usage of Route 53 auto naming APIs. This involves costs for creating the hosted zones and queries to the service registry.
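For a feel of what the auto naming API involves, the sketch below creates a private DNS namespace and a discoverable service with the AWS SDK for Java. The namespace name, VPC ID, and namespace ID are placeholders (the namespace ID is only available once the asynchronous namespace-creation operation completes), so this is an outline of the calls rather than a ready-to-run setup script:

```java
import com.amazonaws.services.servicediscovery.AWSServiceDiscovery;
import com.amazonaws.services.servicediscovery.AWSServiceDiscoveryClientBuilder;
import com.amazonaws.services.servicediscovery.model.*;

public class NamespaceSetup {
    public static void main(String[] args) {
        AWSServiceDiscovery serviceDiscovery = AWSServiceDiscoveryClientBuilder.defaultClient();

        // Create a private DNS namespace backed by a Route 53 private hosted zone.
        serviceDiscovery.createPrivateDnsNamespace(new CreatePrivateDnsNamespaceRequest()
                .withName("service.internal")        // placeholder namespace name
                .withVpc("vpc-0123456789abcdef0"));  // placeholder VPC ID

        // Create a discoverable service; ECS registers an A record per running task.
        serviceDiscovery.createService(new CreateServiceRequest()
                .withName("orders")
                .withDnsConfig(new DnsConfig()
                        .withNamespaceId("<NAMESPACE_ID>") // returned by the namespace operation
                        .withDnsRecords(new DnsRecord()
                                .withType(RecordType.A)
                                .withTTL(60L))));
    }
}
```

With this in place, ECS keeps the registry current as tasks start and stop, and consumers resolve `orders.service.internal` through ordinary DNS queries.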

Lambda (Serverless) Implementation

Lambda is a service that allows us to run functions in the AWS cloud in an entirely serverless fashion, eliminating operational complexity. We can easily design a microservice architecture using Lambda and API Gateway.

Lambda can be triggered by several AWS services. It can be invoked by another Lambda function, and we can orchestrate and invoke Lambda functions using AWS Step Functions as well. Below are the relevant AWS services for realizing a microservice architecture.

[Figure: AWS services that can trigger Lambda]


Each of the triggering points has its own arrangement for triggering Lambda. For example, API Gateway exposes a REST API for the end user to invoke Lambda function(s). Amazon API Gateway and AWS Lambda integration can work in the following ways.

  • Push-event model – Amazon API Gateway invokes the Lambda function by passing the data in the request body as a parameter to the Lambda function.
  • Synchronous invocation – Amazon API Gateway can invoke the Lambda function and get a response back in real time by specifying RequestResponse as the invocation type.
  • Event structure – The event your Lambda function receives is the body of the HTTPS request that Amazon API Gateway receives, and your Lambda function is the custom code written to process that specific event type.

Here, we mainly discuss inter-Lambda communication rather than triggering approaches. Using the AWS SDK, we can invoke one Lambda from another, with either a synchronous or an asynchronous method-calling approach.


We did our PoC in Java. LambdaSyncHandler is itself a Lambda, exposed as a service and callable from API Gateway via a REST endpoint. This Lambda invokes TaskBLambda internally in a synchronous fashion. The code snippet is given below.

public class LambdaSyncHandler implements RequestStreamHandler {
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
        ObjectMapper objectMapper = new ObjectMapper();
        // Build a proxy for the target Lambda from the TaskBLambda interface.
        final TaskBLambda taskBLambda = LambdaInvokerFactory.builder()
                .lambdaClient(AWSLambdaClientBuilder.defaultClient())
                .build(TaskBLambda.class);
        TaskBRequest input = new TaskBRequest();
        TaskBResponse taskBResponse = taskBLambda.getTaskBResponse(input);

        String output = objectMapper.writeValueAsString(taskBResponse);
        try (OutputStreamWriter writer = new OutputStreamWriter(outputStream, "UTF-8")) {
            writer.write(output);
        }
    }
}

To integrate with the actual task B Lambda, we need to define an interface for task B Lambda, as below.

public interface TaskBLambda {
    @LambdaFunction(functionName = "pocTaskBLambda")
    TaskBResponse getTaskBResponse(TaskBRequest input);
}

The asynchronous method of invocation in Java returns a Future object. Future.isDone() returns true if the task is completed, and Future.get() waits if necessary for the computation to complete and then retrieves its result. Below is the code block for an asynchronous Lambda invocation with a callback approach using the AsyncHandler interface.

private void invokeLambdaAsync() {
    String lambdaName = "<LAMBDA_NAME_OR_ARN>";
    String inputJSON = "{\"firstName\":\"Jackie\",\"lastName\": \"Chan\"}";

    InvokeRequest lmbRequest = new InvokeRequest()
            .withFunctionName(lambdaName)
            .withPayload(inputJSON);
    AWSLambdaAsync asyncLambda = AWSLambdaAsyncClientBuilder.defaultClient();

    final CountDownLatch latch = new CountDownLatch(1);
    final Future<InvokeResult> future = asyncLambda.invokeAsync(lmbRequest,
            new AsyncHandler<InvokeRequest, InvokeResult>() {
                public void onSuccess(InvokeRequest req, InvokeResult res) {
                    ByteBuffer payload = res.getPayload();
                    System.out.println(new String(payload.array()));
                    latch.countDown(); // release the waiting thread
                }

                public void onError(Exception e) {
                    latch.countDown(); // release the waiting thread
                }
            });

    try {
        latch.await();          // wait for the callback to fire
        asyncLambda.shutdown(); // shut down the client's ExecutorService
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}

Using Step Functions

Using AWS Step Functions, we can easily orchestrate different Lambda functions, where some of the functions are exposed as REST services and others act as purely internal functions realizing business logic. I will discuss Step Functions in detail in my next article.
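As a small taste of that orchestration, the snippet below starts a state machine execution with the AWS SDK for Java; the state machine ARN and input JSON are placeholders for illustration:

```java
import com.amazonaws.services.stepfunctions.AWSStepFunctions;
import com.amazonaws.services.stepfunctions.AWSStepFunctionsClientBuilder;
import com.amazonaws.services.stepfunctions.model.StartExecutionRequest;
import com.amazonaws.services.stepfunctions.model.StartExecutionResult;

public class OrchestrationStarter {
    public static void main(String[] args) {
        AWSStepFunctions stepFunctions = AWSStepFunctionsClientBuilder.defaultClient();
        // Start a state machine that chains several Lambda functions together.
        StartExecutionResult result = stepFunctions.startExecution(new StartExecutionRequest()
                .withStateMachineArn("<STATE_MACHINE_ARN>") // placeholder ARN
                .withInput("{\"firstName\":\"Jackie\",\"lastName\":\"Chan\"}"));
        System.out.println(result.getExecutionArn());
    }
}
```

The state machine itself (defined in Amazon States Language) decides which Lambda runs next, so the functions never need to know about each other; this requires valid AWS credentials and an existing state machine to actually run.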


The main advantage of a serverless technology like Lambda is auto-scalability. It scales automatically based on incoming traffic, so capacity planning is not required. For Lambda, the unit of scale is a concurrent execution. By default, AWS Lambda limits the total concurrent executions across all functions within a given region to 1,000. Hence, if we are able to implement a microservice architecture purely using Lambda, we do not have to consider load balancing explicitly during service discovery and inter-service communication.


Opinions expressed by DZone contributors are their own.
