
Distributed Systems and Sidecar Pattern, Part 2


In the last post of this series, we looked at a basic sidecar pattern to wrap around a service and direct traffic to it, whilst augmenting it with some additional functionality.

In this part, we'll use a similar pattern, but this time to translate a binary communication protocol into a web-friendly one: Google's gRPC into a RESTful web endpoint.

The application structure is very similar to our last example:

We have an application, in this case a gRPC service, which exposes a single method that returns a greeting when a client calls it with a name.

Let's take a look at our application code:

// app/main.go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"os"

	pb "github.com/EwanValentine/distributed-patterns/sidecar-http-grpc/app/transport"
	grpc "google.golang.org/grpc"
)

type server struct{}

func (s *server) FetchGreeting(ctx context.Context, req *pb.Request) (*pb.Response, error) {
	name := req.Name
	response := &pb.Response{
		Reply: fmt.Sprintf("Hello there, %s", name),
	}
	return response, nil
}

func main() {
	port := os.Getenv("PORT")
	lis, err := net.Listen("tcp", fmt.Sprintf(":%s", port))
	if err != nil {
		log.Fatal(err)
	}

	grpcServer := grpc.NewServer()
	pb.RegisterApplicationServer(grpcServer, &server{})

	// start the server
	if err := grpcServer.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %s", err)
	}
}

We read a port from the environment, create a TCP listener for our gRPC handler, and create a gRPC server. We also define a server struct, with a method that satisfies our gRPC service interface. This method takes a name, wraps it in a longer greeting message, and returns it. Pretty simple!
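The greeting logic itself can be exercised in isolation. In this sketch, `Request` and `Response` are hypothetical stand-ins for the generated `pb.Request` and `pb.Response` types, so the handler's core behaviour runs without any gRPC machinery:

```go
package main

import "fmt"

// Request and Response are stand-ins for the protobuf-generated types,
// carrying only the fields the handler actually uses.
type Request struct{ Name string }
type Response struct{ Reply string }

// fetchGreeting mirrors the server's FetchGreeting method: it takes a
// name and wraps it in a longer greeting message.
func fetchGreeting(req *Request) *Response {
	return &Response{Reply: fmt.Sprintf("Hello there, %s", req.Name)}
}

func main() {
	res := fetchGreeting(&Request{Name: "Ewan"})
	fmt.Println(res.Reply) // Hello there, Ewan
}
```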

We also make use of multi-stage Docker build images for our two services:

FROM golang:alpine as builder

RUN apk update && apk upgrade && \
  apk add --no-cache bash git openssh gcc musl-dev

RUN mkdir -p /usr/distributed-patterns/sidecar-http-grpc/app

WORKDIR /usr/distributed-patterns/sidecar-http-grpc/app

COPY . .

RUN go get
RUN CGO_ENABLED=0 GOOS=linux go build main.go


FROM alpine

RUN mkdir -p /usr/app

WORKDIR /usr/app

COPY --from=builder /usr/distributed-patterns/sidecar-http-grpc/app/main .

ENV PORT 80

CMD ["./main"]

The first 'stage' in the image takes care of building the binary itself, so it needs Go installed; hence we use the golang base image (the Alpine variant, of course). Once that stage has built the binary, the runtime stage copies it into a stripped-down Alpine image, without the Go toolchain, with just enough functionality to run our service binary. Pretty neat!

We also have a protobuf definition in /app/transport:

syntax = "proto3";
package transport;

service Application {
  rpc FetchGreeting(Request) returns (Response) {}
}

message Request {
  string name = 1;
}

message Response {
  string reply = 1;
}

This is used to generate the gRPC service code to allow our client to communicate with it.

Now let's take a look at our sidecar application:

package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"os"

	pb "github.com/EwanValentine/distributed-patterns/sidecar-http-grpc/app/transport"
	grpc "google.golang.org/grpc"

	"github.com/gorilla/mux"
)

// API -
type API struct {
	service pb.ApplicationClient
}

// Greet a user
func (api *API) Greet(w http.ResponseWriter, r *http.Request) {
	params := mux.Vars(r)
	request := &pb.Request{
		Name: params["name"],
	}
	response, err := api.service.FetchGreeting(context.Background(), request)
	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		fmt.Fprint(w, "error")
		return
	}

	// Use Fprint, not Fprintf: the reply is data, not a format string.
	fmt.Fprint(w, response.Reply)
}

func main() {
	port := os.Getenv("PORT")
	router := mux.NewRouter().StrictSlash(true)

	// Read the target port from the TARGET environment variable,
	// as set in the deployment config.
	target := os.Getenv("TARGET")
	addr := fmt.Sprintf("localhost:%s", target)
	conn, err := grpc.Dial(addr, grpc.WithInsecure())
	if err != nil {
		log.Fatal("Connection failed:", err)
	}
	defer conn.Close()
	client := pb.NewApplicationClient(conn)
	api := API{service: client}

	router.HandleFunc("/user/{name}", api.Greet).Methods("GET")
	log.Fatal(http.ListenAndServe(":"+port, router))
}

We're using Gorilla Mux to create a simple web service, with a GET route at /user/{name}. The name parameter is then used when calling our gRPC service.

We have a Makefile to take care of some of the build tasks, such as creating our docker images and our protobuf code:

build-proto:
	protoc -I=app/transport --go_out=plugins=grpc:app/transport/ app/transport/app.proto

build-images:
	docker build -t sidecar2-application:v1 app/
	docker build -t sidecar2-sidecar:v1 sidecar/

Now, we have a deployment file in our Kubernetes configs:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sidecar-service-two
  labels:
    app: application
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sidecar-service-two
  template:
    metadata:
      labels:
        app: sidecar-service-two
    spec:
      containers:
        - name: sidecar-app
          image: sidecar2-application:v1
          imagePullPolicy: Never
          ports:
            - containerPort: 6000
          env:
            - name: PORT
              value: "6000"

        - name: sidecar
          image: sidecar2-sidecar:v1
          imagePullPolicy: Never
          ports:
            - containerPort: 80
          env:
            - name: PORT
              value: "80"
            - name: TARGET
              value: "6000"

Similar to our last example, we define two containers in our pod: our application and our sidecar. We use environment variables to configure the port the sidecar should connect to.

Now our service file, service.yml:

kind: Service
apiVersion: v1
metadata:
  name: sidecar-service-two
spec:
  selector:
    app: sidecar-service-two
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

As with our last service, this load balances requests to our pod, pointing at our sidecar container.

Let's deploy our app! $ kubectl create -f ./deployment.yml,./service.yml.

Now if you navigate to http://localhost/user/Your%20Name%20here you should see a response from your gRPC service.

This is another fairly trivial example but, hopefully, you can see how you can inject a generic, reusable sidecar container to augment or expose functionality within your architecture.

Repository found here!

