Distributed Systems and Sidecar Pattern, Part 2

In this part, we'll use a similar pattern, but this time to translate a binary communication protocol, Google's gRPC, into a web-friendly, RESTful endpoint.

By Ewan Valentine · Mar. 05, 19 · Tutorial

In the last post of this series, we looked at a basic sidecar pattern to wrap around a service and direct traffic to it, whilst augmenting it with some additional functionality.

In this part, we'll use a similar pattern, but this time to translate a binary communication protocol, Google's gRPC, into a web-friendly, RESTful endpoint.

The application structure is very similar to our last example.

We have an application, in this case a gRPC service, which exposes a single method that returns a greeting when a client calls it with a name.

Let's take a look at our application code:

// app/main.go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"os"

	pb "github.com/EwanValentine/distributed-patterns/sidecar-http-grpc/app/transport"
	grpc "google.golang.org/grpc"
)

// server implements the generated Application service interface
type server struct{}

// FetchGreeting takes a name and returns a greeting message
func (s *server) FetchGreeting(ctx context.Context, req *pb.Request) (*pb.Response, error) {
	name := req.Name
	response := &pb.Response{
		Reply: fmt.Sprintf("Hello there, %s", name),
	}
	return response, nil
}

func main() {
	// listen on the port given by the PORT environment variable
	port := os.Getenv("PORT")
	lis, err := net.Listen("tcp", fmt.Sprintf(":%s", port))
	if err != nil {
		log.Fatal(err)
	}

	// create the gRPC server and register our implementation
	grpcServer := grpc.NewServer()
	pb.RegisterApplicationServer(grpcServer, &server{})

	// start the server
	if err := grpcServer.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %s", err)
	}
}

We read a port from the environment, create a TCP listener for our gRPC handler, and create a gRPC server. We also define a server struct with a method that satisfies our gRPC service interface. This method takes a name, builds a longer greeting message, and returns it. Pretty simple!
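
Before adding the sidecar, it can be handy to sanity-check the service on its own. Here's a minimal client sketch (not part of the original repository) that assumes the service is running locally with PORT=6000:

// client/main.go (hypothetical test client, not in the repo)
package main

import (
	"context"
	"log"

	pb "github.com/EwanValentine/distributed-patterns/sidecar-http-grpc/app/transport"
	grpc "google.golang.org/grpc"
)

func main() {
	// dial the gRPC service directly, without TLS, on the assumed local port
	conn, err := grpc.Dial("localhost:6000", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// call FetchGreeting and print the reply ("Hello there, Ewan")
	client := pb.NewApplicationClient(conn)
	res, err := client.FetchGreeting(context.Background(), &pb.Request{Name: "Ewan"})
	if err != nil {
		log.Fatal(err)
	}
	log.Println(res.Reply)
}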

We also make use of multi-stage Docker builds for our two services:

FROM golang:alpine as builder

RUN apk update && apk upgrade && \
  apk add --no-cache bash git openssh gcc musl-dev

RUN mkdir -p /usr/distributed-patterns/sidecar-http-grpc/app

WORKDIR /usr/distributed-patterns/sidecar-http-grpc/app

COPY . .

RUN go get
RUN CGO_ENABLED=0 GOOS=linux go build main.go


FROM alpine

RUN mkdir -p /usr/app

WORKDIR /usr/app

COPY --from=builder /usr/distributed-patterns/sidecar-http-grpc/app/main .

ENV PORT 80

CMD ["./main"]

The first 'stage' in the image takes care of building the binary itself, so it needs Go installed; for that we're using the official Golang base image (the Alpine variant again, of course). Once that stage has built the binary, the runtime stage picks it out into a stripped-down Alpine image, without the Go toolchain, with just enough in it to run our service binary. Pretty neat!
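
The sidecar has a Dockerfile of its own in its directory (not shown here); the Makefile further down builds an image for each of the two services.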

We also have a protobuf definition in /app/transport:

syntax = "proto3";
package transport;

service Application {
  rpc FetchGreeting(Request) returns (Response) {}
}

message Request {
  string name = 1;
}

message Response {
  string reply = 2;
}

This is used to generate the gRPC server and client code, which allows our sidecar to communicate with the service.
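
For reference, the code protoc generates from this definition includes roughly the following interfaces (the exact output depends on the protoc-gen-go version); the application implements ApplicationServer, and the sidecar consumes ApplicationClient:

// rough sketch of the generated interfaces, for illustration only
type ApplicationServer interface {
	FetchGreeting(context.Context, *Request) (*Response, error)
}

type ApplicationClient interface {
	FetchGreeting(ctx context.Context, in *Request, opts ...grpc.CallOption) (*Response, error)
}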

Now let's take a look at our sidecar application:

package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"os"

	pb "github.com/EwanValentine/distributed-patterns/sidecar-http-grpc/app/transport"
	grpc "google.golang.org/grpc"

	"github.com/gorilla/mux"
)

// API holds the gRPC client used by our HTTP handlers
type API struct {
	service pb.ApplicationClient
}

// Greet a user
func (api *API) Greet(w http.ResponseWriter, r *http.Request) {
	params := mux.Vars(r)
	request := &pb.Request{
		Name: params["name"],
	}
	response, err := api.service.FetchGreeting(context.Background(), request)
	if err != nil {
		w.WriteHeader(500)
		fmt.Fprintf(w, "error")
		return
	}

	fmt.Fprint(w, response.Reply)
}

func main() {
	port := os.Getenv("PORT")
	router := mux.NewRouter().StrictSlash(true)

	// dial the gRPC application container, which listens on port 6000
	addr := fmt.Sprintf("0.0.0.0:%d", 6000)
	conn, err := grpc.Dial(addr, grpc.WithInsecure())
	if err != nil {
		log.Fatal("Connection failed:", err)
	}
	defer conn.Close()
	client := pb.NewApplicationClient(conn)
	api := API{service: client}

	router.HandleFunc("/user/{name}", api.Greet).Methods("GET")
	log.Fatal(http.ListenAndServe(":"+port, router))
}

We're using Gorilla Mux to create a simple web service with a single GET route at /user/{name}. The name path parameter is then used when calling our gRPC service.

We have a Makefile to take care of some of the build tasks, such as building our Docker images and generating our protobuf code:

build-proto:
	protoc -I=app/transport --go_out=plugins=grpc:app/transport/ app/transport/app.proto

build-images:
	docker build -t sidecar2-application:v1 app/
	docker build -t sidecar2-sidecar:v1 sidecar/
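
Note that the build-proto target assumes you have protoc and the Go protobuf/gRPC plugin (protoc-gen-go) installed locally.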

Now, we have a deployment file in our Kubernetes configs:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sidecar-service-two
  labels:
    app: application
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sidecar-service-two
  template:
    metadata:
      labels:
        app: sidecar-service-two
    spec:
      containers:
        - name: sidecar-app
          image: sidecar2-application:v1
          imagePullPolicy: Never
          ports:
            - containerPort: 6000
          env:
            - name: PORT
              value: "6000"

        - name: sidecar
          image: sidecar2-sidecar:v1
          imagePullPolicy: Never
          ports:
            - containerPort: 80
          env:
            - name: PORT
              value: "80"
            - name: TARGET
              value: "6000"

Similar to our last example, we define two containers in our pod: our application and our sidecar. We use environment variables to configure which port the sidecar should connect to.
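
One thing to note: the sidecar listing above dials port 6000 directly, while the deployment also passes a TARGET variable. If you want the target port to be driven by that variable rather than hard-coded, a small (hypothetical) tweak to the sidecar's main function would do it:

// hypothetical tweak: read the gRPC target port from the TARGET
// environment variable, falling back to 6000 if it isn't set
target := os.Getenv("TARGET")
if target == "" {
	target = "6000"
}
addr := fmt.Sprintf("0.0.0.0:%s", target)
conn, err := grpc.Dial(addr, grpc.WithInsecure())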

Now our service file, service.yml:

kind: Service
apiVersion: v1
metadata:
  name: sidecar-service-two
spec:
  selector:
    app: sidecar-service-two
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

As with our last service, this load balances requests to our pods, pointing at the sidecar container on port 80.

Let's deploy our app! $ kubectl create -f ./deployment.yml,./service.yml.

Now, if you navigate to http://localhost/user/Your%20Name%20here, you should see a response from your gRPC service.
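
If your cluster's load balancer isn't exposed on localhost (on Minikube, for example), kubectl get service sidecar-service-two will show you where the service is actually reachable; either way, the response should be a plain greeting along the lines of "Hello there, Ewan".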

This is another fairly trivial example, but hopefully you can see how you can inject generic, reusable sidecar containers to augment or expose functionality within your architecture.

Repository found here!


Published at DZone with permission of Ewan Valentine, DZone MVB. See the original article here.
