Building Hybrid Multi-Cloud Event Mesh With Apache Camel and Kubernetes

A full installation guide for building the event mesh with Apache Camel. We will be using microservice, function, and connector for the connector node in the mesh.

By Christina Lin · DZone Core · May 01, 2021 · Tutorial

Part 1 || Part 2 || Part 3

This blog is part two of my three-part series on how to build a hybrid multi-cloud event mesh with Camel. In this part, I go over the more technical aspects and walk through how I set up the event mesh demo.

Recap: The Demo

The demo starts by collecting incident tasks from ServiceNow and streaming the events to the Kafka cluster in the mesh. Egress connectors then route the events to AWS, Azure, or GCP, depending on the request type. Ingress connectors are also created for task result updates. I have also set up a couple of serverless connectors in the mesh that handle notifications to Telegram. For more on the architectural design, please refer to my previous blog.

Prerequisites

For Operations, you will need to have the following ready:

  • OpenShift platform 4.7 (Red Hat's Kubernetes platform) with administrator rights - We need it to set up the operators that can further assist the self-service and life cycle of the functions running on top.
  • OpenShift CLI tool - Manage and install components on OpenShift Container Platform projects from the local terminal.

For Developers, you will need to have the following ready:

  • OpenShift CLI tool - Manage and install components on OpenShift Container Platform projects from the local terminal.
  • IDE - VS Code is my choice here.
  • Camel K CLI tool (kamel) - Manage and run the functions and connectors on OpenShift.
  • Maven - For building the Camel Quarkus microservice applications.
  • Java 11 - For building the Camel Quarkus microservice applications.

Environment Setup (Operations)

Log in to your cluster with the CLI:

Plain Text

oc login


Here is how you get the login tokens:

Login Tokens GIF

Create a project named 'demo':

Plain Text

oc new-project demo



You also need to grant the developer account the appropriate roles in the namespace.
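As a minimal sketch (assuming a developer account named developer; substitute your own user), granting the edit role on the project could look like this:

Plain Text

oc adm policy add-role-to-user edit developer -n demo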

Install the following cluster-wide operators:

  • Camel K Operator
  • Kafka Operator
  • Serverless Operator

Install the following operator in the namespace:

  • Grafana Operator

Configure and Set Up Serverless

Set up Knative Serving in the knative-serving namespace:

YAML

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec: {}



Set up Knative Eventing in the knative-eventing namespace:

YAML

apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec: {}



Set up cluster monitoring and add monitoring to all camel-app workloads (labeled app-with-metrics):

Plain Text

git clone https://github.com/weimeilin79/cameleventmesh.git
cd monitoring
oc apply -f cluster-monitoring-config.yaml -n openshift-monitoring
oc apply -f service-monitor.yaml
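For reference, cluster-monitoring-config.yaml enables OpenShift's user workload monitoring. I have not copied the file from the repository here, but a minimal sketch of such a config typically looks like this:

YAML

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true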



You can check if user monitoring is turned on by using:

Plain Text

oc -n openshift-user-workload-monitoring get pod



And that is all the work that is needed from the Operations team.

Building the Mesh with Connectors (Developers)

Switch to the project where you will be building the mesh. A streaming cluster is the foundation of the mesh, and developers can start by creating their own Kafka instance and topics. There is no need for the Ops team to help with the setup; developers can easily self-service.

In the OpenShift console developer view,

  • Create a default Kafka cluster.
  • Create two topics: incident-all and gcp-result (an equivalent YAML custom resource is sketched below).
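If you prefer YAML over the console wizard, the topics can also be declared as custom resources handled by the Kafka operator. This is only a rough sketch, assuming the cluster is named my-cluster and that your operator release serves this API version:

YAML

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: incident-all
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1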

Log in to your cluster with the CLI:

Plain Text

oc login


Login Tokens GIF

Switch to the namespace where you will be building the mesh:

Plain Text

oc project demo



Clone the connector code from the GitHub repository:

Plain Text

git clone https://github.com/weimeilin79/cameleventmesh.git



Connector to ServiceNow

If you are not familiar with ServiceNow, check out this document to obtain the credentials for the connector.

Now, let's get started. In the folder you cloned, replace the placeholders in the following commands with your ServiceNow credentials and run them. The first command creates the secret configuration for you on the OpenShift platform (Kubernetes), and the second sets up the destination location for the streaming foundation.

Plain Text

oc create secret generic servicenow-credentials \
  --from-literal=SERVICENOW_INSTANCE=REPLACE_ME \
  --from-literal=SERVICENOW_OAUTH2_CLIENT_ID=REPLACE_ME \
  --from-literal=SERVICENOW_OAUTH2_CLIENT_SECRET=REPLACE_ME \
  --from-literal=SERVICENOW_PASSWORD=REPLACE_ME \
  --from-literal=SERVICENOW_USERNAME=REPLACE_ME

oc create -f kafka-config.yaml
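kafka-config.yaml is the piece that points the connector at the streaming foundation. I am not reproducing the repository file here; purely as a hypothetical sketch, a ConfigMap carrying the Kafka bootstrap address could look like this:

YAML

apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-config
data:
  camel.component.kafka.brokers: my-cluster-kafka-bootstrap.demo.svc:9092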



The connector to ServiceNow will use what we have just configured (meaning you can change these settings according to your environment, such as staging, UAT, or production). Now it's time to deploy the connector.

Plain Text

cd servicenow
mvn clean package -Dquarkus.kubernetes.deploy=true -Dquarkus.openshift.expose=true -Dquarkus.openshift.labels.app-with-metrics=camel-app



You should be able to see an application deployed in the Developer’s topology view.

Developer's Topology View: Application Deployed

Connector to Google Cloud Platform (GCP)

Obtain your Google Cloud service account key from the GCP console. Create a topic named gcp-topic under Google Pub/Sub and make sure to grant permissions to your service account. Download your service account key into the gcp folder and name it google-service-acc-key.json.

In our mesh, we need to add the Google key to the platform so the connector can use it to authenticate with GCP. Then we can deploy the connector to OpenShift and have it push events to GCP.

Plain Text

cd gcp
oc create configmap gcp-configmap --from-file=google-service-acc-key.json
mvn clean package -Dquarkus.kubernetes.deploy=true -Dquarkus.openshift.expose=true -Dquarkus.openshift.labels.app-with-metrics=camel-app



OPTIONAL: You can create a Google Cloud Function that is triggered by gcp-topic and does whatever you want it to do; in my case, it just does some simple logging.

Next up, we will use Camel K to listen for result events from GCP. Events are streamed back to the Kafka topic gcp-result. (The patch below also enables Prometheus monitoring for all Camel K routes.)

Plain Text

cd gcp/camelk
kamel run Gcpreader.java
oc patch ip camel-k --type=merge -p '{"spec":{"traits":{"prometheus":{"configuration":{"enabled":true}}}}}'
oc create -f kafka-source.kamelet.yaml



Two instances should appear in the developer topology:

2 Instances in Developer Topology
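If you are curious what a Camel K route like Gcpreader.java might look like, here is a rough, hypothetical sketch (the actual file in the repository may differ). It consumes from a Pub/Sub subscription and republishes to the gcp-result topic; the project ID and subscription name are placeholders, and the route assumes the service account key from the ConfigMap is made available to the integration.

Java

// Hypothetical sketch of a GCP ingress route for the mesh; not the exact
// Gcpreader.java from the demo repository.
import org.apache.camel.builder.RouteBuilder;

public class Gcpreader extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Consume result events from a Pub/Sub subscription (placeholder
        // project ID and subscription name) and stream them back into the
        // mesh's gcp-result Kafka topic.
        from("google-pubsub:my-gcp-project:gcp-result-sub")
            .log("Result event from GCP: ${body}")
            .to("kafka:gcp-result?brokers=my-cluster-kafka-bootstrap.demo.svc:9092");
    }
}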

Serverless Connector to Telegram

I also want to introduce Kamelets, which can be used as Kafka connectors through a GUI on OpenShift. Here, a Kamelet creates a connector from Kafka to a serverless Knative channel. (If you are not familiar with serverless architecture, check out this blog.)

Let's create a channel that takes in CloudEvents containing the notifications for Telegram. In the developer console, click Add +, and then click Channel.

Create Channel
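Behind the console form, this simply creates a Knative Channel resource. A minimal sketch of the equivalent YAML (assuming the default channel implementation and the channel name notify used below) would be:

YAML

apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
  name: notify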

Once the channel is available, you will see it appear in the topology. Next, we will create the connector from Kafka to the serverless channel. Select the Kafka source and set up the source and sink of the connector.

Plain Text

kind: Channel
name: notify
brokers: my-cluster-kafka-bootstrap.demo.svc:9092
topic: gcp-result
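Under the hood, the console binds the kafka-source Kamelet applied earlier to the channel. A rough sketch of what such a binding could look like as a KameletBinding resource (names and property keys assumed to match the form values above) is:

YAML

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-source-to-notify
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-source
    properties:
      brokers: my-cluster-kafka-bootstrap.demo.svc:9092
      topic: gcp-result
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: notify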


You will see the connector appear in the developer topology.

Connector in Developer Topology

Connector to Azure

The next part of the mesh is connecting to Azure. In Azure, set up your access control (IAM) with the appropriate roles and access.

In Azure, go to Service Bus and create a queue called azure-bus; in Event Hubs, create an event hub called azure-eventhub. Obtain the SAS policy connection string (primary or secondary).

OPTIONAL: Create an Azure Function that is triggered by azure-bus and does whatever you want it to do; in my case, it just does some simple logging.

Deploy the egress Azure connector. As before, don't forget to create a secret to store your Azure credentials.
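To give you an idea of what goes into these values, here is a purely hypothetical sketch of the shape of each credential (double-check against your own Azure namespace and SAS policy; the exact format may differ for your setup):

Plain Text

# eventhub.endpoint          -> the Event Hubs connection string obtained above
# quarkus.qpid-jms.url       -> amqps://YOUR-NAMESPACE.servicebus.windows.net
# quarkus.qpid-jms.username  -> your SAS policy name
# quarkus.qpid-jms.password  -> your SAS policy key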

Plain Text

cd azure
oc create secret generic azure-credentials \
  --from-literal=eventhub.endpoint=REPLACE_ME \
  --from-literal=quarkus.qpid-jms.password=REPLACE_ME \
  --from-literal=quarkus.qpid-jms.url=REPLACE_ME \
  --from-literal=quarkus.qpid-jms.username=REPLACE_ME
mvn clean package -Dquarkus.kubernetes.deploy=true -Dquarkus.openshift.expose=true -Dquarkus.openshift.labels.app-with-metrics=camel-app


Deploy the ingress Azure connector.

Plain Text

cd azure/camelk
kamel run Azurereader.java



Ingress Connector in Developer Topology

Connector to AWS

For the last part of the mesh, we will set up a connector to AWS. In AWS, set up your user in IAM, grant the permissions, and set up the policies accordingly. Under SNS, create a topic named sns-topic and set up the access policy for your IAM user. Under SQS, create a queue named sqs-queue, set up its access policy for your IAM user, and subscribe it to the sns-topic we created earlier.

OPTIONAL: Create a Lambda function that subscribes to sns-topic and does whatever you want it to do; in my case, it just does some simple logging.

In aws.properties, replace the placeholders with your AWS credentials:

Properties

camel.component.kafka.brokers=my-cluster-kafka-bootstrap.demo.svc:9092
accessKey=RAW(REPLACE_ME)
secretKey=RAW(REPLACE_ME)
region=REPLACE_ME



To show you how Camel can keep your event mesh agile and lightweight, I am going to use just Camel K and a Kamelet.
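Before running it, here is a rough, hypothetical sketch of what a route like AwsCamel.java could look like (the actual file in the repository may differ). It picks incident events off the mesh's Kafka topic and forwards them to sns-topic, assuming the values in aws.properties are made available to the integration; the Kafka broker address comes from the camel.component.kafka.brokers property.

Java

// Hypothetical sketch of an AWS egress route; not the exact AwsCamel.java
// from the demo repository.
import org.apache.camel.builder.RouteBuilder;

public class AwsCamel extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Forward incident events from the mesh to the AWS SNS topic,
        // using the accessKey/secretKey/region values from aws.properties.
        from("kafka:incident-all")
            .log("Forwarding incident to AWS SNS: ${body}")
            .to("aws2-sns://sns-topic?accessKey={{accessKey}}&secretKey={{secretKey}}&region={{region}}");
    }
}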

Plain Text

cd aws/camelk
kamel run AwsCamel.java
cd aws
oc create -f https://raw.githubusercontent.com/apache/camel-kamelets/main/aws-sqs-source.kamelet.yaml



Next, create the serverless connector from AWS SQS to the serverless channel. In the developer console, select the SQS source and set up the source and sink of the connector.

Set up AWS SQS to Serverless Channel

AWS Connector in Developer Topology

Bonus: Adding an API Endpoint for Incident Ticket Creation

Sometimes, we want to allow automation in the enterprise to avoid human errors. Having an API endpoint makes it very easy for other systems to connect to the event mesh.
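As a rough, hypothetical sketch (the actual IncidentApi.java in the repository may differ), such an API route could expose an HTTP endpoint with Camel's platform-http component and publish incoming payloads onto the incident-all topic. The endpoint path is a placeholder, and the Kafka broker address is assumed to come from the mounted incidentapi.properties.

Java

// Hypothetical sketch of an HTTP ingress route for incident creation; not
// the exact IncidentApi.java from the demo repository.
import org.apache.camel.builder.RouteBuilder;

public class IncidentApi extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Accept incident payloads over HTTP (placeholder path) and publish
        // them onto the mesh's incident-all Kafka topic.
        from("platform-http:/incident")
            .log("New incident received via API: ${body}")
            .to("kafka:incident-all");
    }
}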

Plain Text

cd serverless/api
oc create secret generic kafka-credential --from-file=incidentapi.properties
kamel run IncidentApi.java



Once you see the new serverless API pod start up, you will be able to access it via its route.

Monitor Event Mesh in Grafana

Knowing the status of the mesh and making sure its events are streaming smoothly is important; we want to make sure there is no clog in the mesh. The following steps create a dashboard in Grafana that monitors all the connectors in the mesh, plus memory and CPU performance, so any irregular behavior can be caught quickly.

Plain Text

oc create -f grafana.yaml
oc adm policy add-cluster-role-to-user cluster-monitoring-view -z grafana-serviceaccount
oc serviceaccounts get-token grafana-serviceaccount
sed "s/REPLACEME/$(oc serviceaccounts get-token grafana-serviceaccount)/" grafana-datasource.yaml.bak > grafana-datasource.yaml
oc create -f grafana-datasource.yaml



After it is installed, log in to the Grafana dashboard with the ID/password root/secret and import grafana-dashboard.json.

Grafana Dashboard Location in Developer Topology

Grafana Dashboard

And congratulations! You have successfully set up the mesh and are ready to go!


Opinions expressed by DZone contributors are their own.
