
Integrating Istio With TIBCO BusinessWorks Container Edition (BWCE) Applications


See how to integrate Istio with TIBCO.


Introduction

Service mesh is one of the “greatest new things” in our PaaS environments. Whether you’re working with K8S, Docker Swarm, or going pure cloud with EKS on AWS, you’ve heard about it and probably tried to figure out how to use it, because it has a big advantage: it gives you a lot of options for handling communication between components without touching the logic of those components. And if you’ve heard of service mesh, you’ve heard of Istio as well, because it is the “flagship option” at the moment. Even though alternatives like Linkerd or AWS App Mesh are also great options, Istio is the most widely used service mesh right now.

You've probably seen some examples of how to integrate Istio with your open source-based developments, but what happens if you have a lot of BWCE or BusinessWorks applications? Can you use all this power, or are you going to be banned from this new world?

Do not panic! This article is going to show you how you can easily use Istio with your BWCE application inside a K8S cluster. So...

Scenario

The scenario that we’re going to test is quite simple: a basic consumer-provider setup. We’re going to use a simple SOAP/HTTP web service exposed by a backend to show that this works not only with a fancy REST API but with any HTTP traffic that we could generate at the BWCE application level.

So, we are going to invoke a service that requests a response from its provider and returns that response to us. That’s pretty easy to set up using pure BWCE without anything else.

All code related to this example is available for you in the following GitHub repo: Go get the code!

Steps

Step 1. Install Istio inside your Kubernetes cluster

In my case, I’m using the Kubernetes cluster included in my Docker Desktop installation, but you can do the same or use a real Kubernetes cluster; that’s up to you. The first step is to install Istio. To do that, there's nothing better than following the steps given in the istio-workshop, which you can find here: https://polarsquad.github.io/istio-workshop/install-istio/ (just that step).

Once you’ve finished, you should have the following setup in your Kubernetes cluster, so please check that the result is the same using the following commands:

kubectl get pods -n istio-system

You should see that all the pods are in the Running state, as shown in the picture below:

kubectl -n istio-system get deployment -l istio=sidecar-injector

You should see that there is one instance (CURRENT = 1) available.

kubectl get namespace -L istio-injection

You should see that ISTIO-INJECTION is enabled for the default namespace, as the image shows below:
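If it is not enabled, you can turn on automatic sidecar injection yourself (assuming, as in this example, that everything is deployed to the default namespace):

kubectl label namespace default istio-injection=enabled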

Step 2. Build BWCE applications

Now that we have all the infrastructure needed at the Istio level, we can start building our applications. To do that, we don’t have to do anything different in our BWCE applications. In the end, they’re just going to be two applications that talk to each other over HTTP, so nothing specific.

This is important because when we talk about service mesh and Istio with customers, the same questions always arise: Is Istio supported in BWCE? Can we use Istio as a protocol to communicate between our BWCE applications? They expect that there should be some palette or custom plugin to install to support Istio. But none of that is needed at the application level. And that applies not only to BWCE but also to any other technology, like Flogo or even open-source technologies, because in the end, Istio (and Envoy, which is the other piece of this technology that we usually avoid mentioning when we talk about Istio) works in a proxy mode using one of the most common container patterns: the “sidecar pattern.”

So, the technology that exposes and implements the service, or consumes it, knows nothing about all this “magic” being executed in the middle of the communication process.

We’re going to define the following properties as environment variables, just as we would do if we were not using Istio:

Provider application:

  • PROVIDER_PORT → Port where the provider is going to listen for incoming requests.

Consumer application:

  • PROVIDER_PORT → Port the provider host will be listening on.
  • PROVIDER_HOST → Host or FQDN (aka the K8S service name) where the provider service will be exposed.
  • CONSUMER_PORT → Port where the consumer service is going to listen for incoming requests.

So, as you can see, if you check the code of the BWCE application, we don’t need to do anything special to support Istio in our BWCE applications.
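For reference, those properties simply end up as plain environment variables in the consumer's Deployment spec, roughly like this (the port values and the provider service name here are assumptions; the real values are in the kube/consumer.yaml file of the repo):

env:
  - name: CONSUMER_PORT
    value: "8080"
  - name: PROVIDER_HOST
    value: "provider"
  - name: PROVIDER_PORT
    value: "8080"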

NOTE: This is an important point that is not strictly related to the Istio integration: BWCE populates the property BW.CLOUD.HOST, and it is never resolved to the loopback interface or to 0.0.0.0. So, it’s better to replace that variable with a custom one, or to use localhost or 0.0.0.0 so the application listens on the loopback interface, because that is where the Istio proxy is going to send the requests.

After that, we’re going to create our Dockerfiles for these services, with nothing out of the ordinary; something similar to what you can see here:

NOTE: As a prerequisite, we’re using the BWCE base Docker image named bwce_base.2.4.3, which corresponds to version 2.4.3 of BusinessWorks Container Edition.
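As a reference, a minimal Dockerfile for the provider could look like the sketch below. The EAR file name, the exposed port, and the exact base image tag (written here as bwce_base:2.4.3) are assumptions; adapt them to your own build:

FROM bwce_base:2.4.3
ADD provider.ear /
EXPOSE 8080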

And now we build our Docker images in our repository, as you can see in the following picture:
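The commands are the standard docker build ones; the tags and paths below are assumptions, just keep the tags consistent with the Kubernetes manifests:

docker build -t provider:v1 ./provider

docker build -t consumer:v1 ./consumer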

Step 3. Deploy the BWCE applications

Now that all the images have been created, we need to generate the artifacts needed to deploy these applications in our cluster. As you can see in the picture below, we’re going to define a K8S service and a K8S deployment based on the image we’ve created in the previous step:
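In essence, the provider manifest is a standard Service plus a Deployment. The sketch below highlights the parts that matter for Istio, especially the app and version labels that we will rely on later for routing (names, ports, and image tags are assumptions; the real file is kube/provider.yaml in the repo):

apiVersion: v1
kind: Service
metadata:
  name: provider
  labels:
    app: provider
spec:
  ports:
    - port: 8080
      name: http
  selector:
    app: provider
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provider-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: provider
      version: v1
  template:
    metadata:
      labels:
        app: provider
        version: v1
    spec:
      containers:
        - name: provider
          image: provider:v1
          ports:
            - containerPort: 8080
          env:
            - name: PROVIDER_PORT
              value: "8080"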

A similar thing happens with the consumer deployment as well, as you can see in the image below:

We can deploy them in our K8S cluster with the following commands:

kubectl apply -f kube/provider.yaml

kubectl apply -f kube/consumer.yaml
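A quick way to confirm that the sidecar injection is working is to list the pods: each application pod should show two containers in the READY column (the BWCE application plus the istio-proxy sidecar). The label used in the second command is an assumption based on the manifests above:

kubectl get pods

kubectl get pod -l app=consumer -o jsonpath='{.items[0].spec.containers[*].name}'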

Now you should see the following components deployed. To complete the setup, we’re going to create an ingress to make it possible to send requests from outside the cluster to those components. To do that, we’re going to use the following YAML file:
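In essence, it is an ingress bound to the Istio ingress controller that routes external HTTP traffic to the consumer service. A minimal sketch could look like the one below (host, path, port, and service name are assumptions; depending on the Istio version, the repo may use an Istio Gateway plus VirtualService instead, so check the real file):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: consumer-ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
    - http:
        paths:
          - path: /.*
            backend:
              serviceName: consumer
              servicePort: 8080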

kubectl apply -f kube/ingress.yaml

After doing that, we’re going to invoke the service inside our SOAPUI project, and we should get the following response:

Step 4. Recap what we’ve just done

Ok, it’s working, and you're thinking, "Hmmmm can I get this working without Istio? I don’t know if Istio is still doing anything or not…"

Ok, you’re right, this isn't as great as you were expecting, but trust me, we’re just going step by step. Let’s see what’s really happening. Instead of a simple request going from outside the cluster to the consumer service and being forwarded to the backend, what’s happening is a little bit more complex. Let’s take a look at the image below:

The incoming request is handled by an Envoy ingress controller that evaluates all the defined rules to choose which service should handle the request. In our case, the only candidate is the consumer-v1 component, and the same thing happens in the communication between the consumer and the provider.

So, we have interceptors in the middle that COULD apply logic to route traffic between our components, just by deploying rules at runtime and without changing the applications, and that is the MAGIC.

Step 5. Implement Canary release

Ok, now let’s apply some of this magic to our case. One of the patterns that we usually apply when we’re rolling out an update to one of our services is the canary approach. Here's a quick explanation of what this is:

Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody.

If you want to read more about this, you can take a look at the full article in Martin Fowler’s blog.

So, now we’re going to make a small change in our provider application that changes the response, so we can be sure that we’re hitting version two, as you can see in the image below:

Now we are going to build this application and generate the new image called provider:v2.
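The build itself is the same as before, just with the new tag (the path is an assumption):

docker build -t provider:v2 ./provider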

But before we deploy it using the YAML file called provider-v2.yaml, we’re going to set a rule in our Istio service mesh so that all traffic is targeted at v1, even when other versions are deployed. To do that, we’re going to deploy the file called default.yaml, which has the following content:
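The essence of default.yaml is a routing rule that pins 100% of the traffic for the provider service to the v1 subset. Expressed with the Istio networking API, it looks roughly like the sketch below (the API version and names are assumptions; older Istio releases use RouteRule instead, so check the actual file in the repo):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: provider
spec:
  host: provider
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: provider
spec:
  hosts:
    - provider
  http:
    - route:
        - destination:
            host: provider
            subset: v1
          weight: 100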

So, in this case, what we’re telling Istio is that even if there are other components registered behind the service, it should always answer with v1. That means we can now deploy v2 without any issues, because it is not going to receive any traffic until we decide so. We can deploy v2 with the following command:

kubectl apply -f provider-v2.yaml

And when we execute the SOAPUI request, we’re still getting a v1 reply, even though the K8S service configuration shows that v2 is also bound to that service.
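You can verify that with a couple of standard commands (the service name and label are assumptions based on the manifests above); the endpoints list and the pod labels show both versions behind the same service:

kubectl get endpoints provider

kubectl get pods -l app=provider --show-labels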

Ok, now we’re going to start the rollout, sending 10% of the requests to the new version and 90% to the old one. To do that, we’re going to deploy the rule in canary.yaml using the following command:

kubectl apply -f canary.yaml

Where canary.yaml has the content shown below:
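As a reference, a 90/10 weighted rule expressed with the Istio networking API looks roughly like this (same assumptions as with default.yaml; the actual file in the repo may differ slightly):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: provider
spec:
  hosts:
    - provider
  http:
    - route:
        - destination:
            host: provider
            subset: v1
          weight: 90
        - destination:
            host: provider
            subset: v2
          weight: 10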

And now, if we send enough requests, we’ll see that most of them (approximately 90%) are answered by v1, and only about 10% are answered by the new version:

Now we can monitor how v2 is performing without affecting all customers, and if everything goes as expected, we can keep increasing that percentage until all customers are served by v2.
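Promoting the release is just a matter of editing the weights in canary.yaml (for example 50/50, and finally 0/100) and re-applying the same file:

kubectl apply -f canary.yaml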

Thanks for reading!

