


Deploy Apache Pulsar With MetalLB on K3s

In this blog, I would like to try another way of installing Pulsar in a containerized environment: on K3s.

By Sherlock Xu · Updated Oct. 10, 22 · Tutorial · 4.8K Views

Apache Pulsar is a distributed messaging and streaming system. Its architecture, which separates compute from storage, makes it a natural fit for cloud-native environments. Before running Pulsar inside containers in production, we might want to run some tests first. This is why I installed Pulsar on Kubernetes through the KubeSphere App Store in a previous blog. In this blog, I would like to try another way of installing Pulsar in a containerized environment: on K3s.

What Is K3s?

K3s is a lightweight Kubernetes distribution developed by Rancher. It removes a number of dependencies and optional components, so it can run on smaller machines than upstream Kubernetes. This also makes it very easy to use for development and testing purposes (you can also scale it for production). If you are new to Kubernetes, or frustrated by its complexity, K3s may be a great alternative to get started with.

Installing K3s

I will be using KubeKey, a lightweight installer for Kubernetes and cloud-native add-ons, to install K3s. Compared with the official K3s installation script, KubeKey installs only the minimal components K3s needs, such as the CNI plugin and CoreDNS. Additionally, it deploys Helm on your machine automatically, which we will use to install Pulsar later. If you are unfamiliar with KubeKey, see one of my previous blogs to learn more about installing Kubernetes with KubeKey.

Let’s get started with the installation.

  1. Download KubeKey and make it executable.

     
    curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -


     
    chmod +x kk


  2. Run the following command to create a K3s cluster quickly.

     
    ./kk create cluster --with-kubernetes v1.21.4-k3s


    The command above creates a single-node K3s cluster. If you want to include multiple nodes in your cluster, create a configuration file with the ./kk create config command, edit it, and pass it to ./kk create cluster with the -f flag. You can also add the --with-kubesphere flag to install KubeSphere together; KubeSphere is a container platform that runs on top of Kubernetes and also supports deployment on K3s.

  3. After the installation is complete, check all Pods in the cluster. As you can see below, KubeKey installs only the required components.

     
    # kubectl get pods -A
    
    NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
    kube-system   calico-kube-controllers-846b5f484d-6dnl7   1/1     Running   0          40s
    kube-system   calico-node-slmhn                          1/1     Running   0          40s
    kube-system   coredns-7448499f4d-gpjfp                   1/1     Running   0          40s


  4. Helm is installed on the machine as well. This is the Helm version I use for this demo:

     
    # helm version
    
    version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
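For reference, the multi-node configuration file mentioned in step 2 might look roughly like the sketch below. This is illustrative only: the node names, addresses, user, and password are placeholders, and the exact schema depends on your KubeKey version.

```yaml
# Illustrative KubeKey cluster config; replace hosts and credentials with your own.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
    - {name: node1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: root, password: "YourPassword"}
    - {name: node2, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: "YourPassword"}
  roleGroups:
    etcd:
      - node1
    control-plane:
      - node1
    worker:
      - node1
      - node2
  kubernetes:
    version: v1.21.4-k3s
  network:
    plugin: calico
```

You would then create the cluster with ./kk create cluster -f <your-config>.yaml.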


Installing Apache Pulsar

The official Pulsar documentation does not cover deployment on K3s, but I found it to be the same as deployment on Kubernetes. Therefore, I won't explain every detail in this part; you can refer to the documentation.

  1. Add the Pulsar Helm chart repository and update it.

     
    helm repo add apache https://pulsar.apache.org/charts
    helm repo update


  2. Clone the GitHub repository of the Pulsar Helm chart to your local machine.

     
    git clone https://github.com/apache/pulsar-helm-chart
    cd pulsar-helm-chart


  3. The Pulsar community provides a simple script to help you quickly create necessary Secrets for Pulsar. Run the script in the repository.

     
    ./scripts/pulsar/prepare_helm_release.sh \
        -n pulsar \
        -k pulsar-k3s \
        -c


  4. Use the Pulsar Helm chart to install a Pulsar cluster on K3s.

     
    helm install \
        --values examples/values-minikube.yaml \
        --set initialize=true \
        --namespace pulsar \
        pulsar-k3s apache/pulsar


  5. Check the status of all Pods in the pulsar namespace.

     
    # kubectl get pods -n pulsar
    
    NAMESPACE        NAME                                         READY   STATUS      RESTARTS   AGE
    pulsar           pulsar-k3s-bookie-0                          1/1     Running     0          4m22s
    pulsar           pulsar-k3s-bookie-init-2f2sl                 0/1     Completed   0          4m22s
    pulsar           pulsar-k3s-broker-0                          1/1     Running     0          4m22s
    pulsar           pulsar-k3s-grafana-85fff46957-r2fww          1/1     Running     0          4m22s
    pulsar           pulsar-k3s-prometheus-5bc79fb8f8-2chzd       1/1     Running     0          4m22s
    pulsar           pulsar-k3s-proxy-0                           1/1     Running     0          4m22s
    pulsar           pulsar-k3s-pulsar-init-6kzvc                 0/1     Completed   0          4m22s
    pulsar           pulsar-k3s-pulsar-manager-7f6c88bc85-xltns   1/1     Running     0          4m22s
    pulsar           pulsar-k3s-toolset-0                         1/1     Running     0          4m22s
    pulsar           pulsar-k3s-zookeeper-0                       1/1     Running     0          4m22s


  6. Check all Services in the pulsar namespace.

     
    # kubectl get services -n pulsar
    
    NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                               AGE
    pulsar-k3s-bookie           ClusterIP      None            <none>        3181/TCP,8000/TCP                     5m3s
    pulsar-k3s-broker           ClusterIP      None            <none>        8080/TCP,6650/TCP                     5m3s
    pulsar-k3s-grafana          LoadBalancer   10.233.18.223   <pending>     3000:31587/TCP                        5m2s
    pulsar-k3s-prometheus       ClusterIP      None            <none>        9090/TCP                              5m3s
    pulsar-k3s-proxy            LoadBalancer   10.233.44.34    <pending>     80:30676/TCP,6650:30121/TCP           5m2s
    pulsar-k3s-pulsar-manager   LoadBalancer   10.233.38.123   <pending>     9527:30528/TCP                        5m2s
    pulsar-k3s-toolset          ClusterIP      None            <none>        <none>                                5m3s
    pulsar-k3s-zookeeper        ClusterIP      None            <none>        8000/TCP,2888/TCP,3888/TCP,2181/TCP   5m3s


We can see that the type of the proxy Service (pulsar-k3s-proxy) is LoadBalancer, but no external IP address has been assigned to it. This is normal, because KubeKey does not install a load balancer implementation on K3s by default.

Keep in mind that Kubernetes provides the implementation of load balancers with the help of cloud providers (for example, GCP, AWS, and Azure). They dynamically provision managed load balancers for LoadBalancer Services. If you are using a local cluster or bare metal machines, the Service external IP address will remain in the <pending> state.

Before we publish messages to the Pulsar cluster, we must obtain the proxy IP address so that our client can connect to it. The Pulsar proxy serves as a gateway and is typically used when clients cannot connect directly to brokers. Pulsar's service discovery mechanism makes sure the connection succeeds: all client connections first go to the proxy, which determines which broker should handle each request.

To expose the external IP address of the proxy, I will install MetalLB in the next section.

Note: If you deploy K3s through the official installation script, note that by default, it installs the built-in load balancer of K3s. Initially, I tried this way, but it seemed that the load balancer had some problem exposing the external IP address of the Pulsar proxy Service. Therefore, I switched to KubeKey and used MetalLB to expose the proxy Service.

Installing MetalLB

MetalLB is a load balancer implementation for bare metal Kubernetes clusters using standard routing protocols. You can configure the implementation to use the Layer 2 mode or the BGP mode to announce service IP addresses. It has two key components:

  • Controller (Deployment): Monitors the creation of LoadBalancer Services and assigns IP addresses to them.
  • Speaker (DaemonSet): Manages the advertisement of the IP addresses to make the Services reachable.

For details about MetalLB, see the MetalLB documentation.

Perform the following steps to install MetalLB.

  1. Apply the following YAML files.

     
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml


  2. Use the following manifest to create a ConfigMap with the layer2 protocol. It allows MetalLB to allocate IP addresses to Services.

     
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 10.138.0.5-10.138.0.10 # Available IP addresses for MetalLB.


  3. Make sure the Controller Deployment and Speaker DaemonSet are up and running.

     
    # kubectl get pods -n metallb-system
    
    NAMESPACE        NAME                                READY   STATUS      RESTARTS   AGE
    metallb-system   controller-66445f859d-cl268         1/1     Running     0          30s
    metallb-system   speaker-77jtt                       1/1     Running     0          30s


  4. Recheck the status of Services. You should be able to see that the external IP address of the proxy Service is exposed.

     
    $ kubectl get svc -n pulsar
    
    NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                               AGE
    pulsar-k3s-bookie           ClusterIP      None            <none>        3181/TCP,8000/TCP                     35m
    pulsar-k3s-broker           ClusterIP      None            <none>        8080/TCP,6650/TCP                     35m
    pulsar-k3s-grafana          LoadBalancer   10.233.18.223   10.138.0.6    3000:31587/TCP                        35m
    pulsar-k3s-prometheus       ClusterIP      None            <none>        9090/TCP                              35m
    pulsar-k3s-proxy            LoadBalancer   10.233.44.34    10.138.0.5    80:30676/TCP,6650:30121/TCP           35m
    pulsar-k3s-pulsar-manager   LoadBalancer   10.233.38.123   10.138.0.7    9527:30528/TCP                        35m
    pulsar-k3s-toolset          ClusterIP      None            <none>        <none>                                35m
    pulsar-k3s-zookeeper        ClusterIP      None            <none>        8000/TCP,2888/TCP,3888/TCP,2181/TCP   35m


  5. Record the proxy IP address (10.138.0.5 in the above output). We will use it to let our client connect to the Pulsar cluster later.
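Rather than reading the IP address off the table, you can also extract it programmatically with kubectl's jsonpath output. The sketch below shows the equivalent extraction with sed on a canned JSON snippet, so it can run without a live cluster:

```shell
# With a live cluster, the one-liner would be:
#   PROXY_IP=$(kubectl get svc pulsar-k3s-proxy -n pulsar \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Here we demonstrate the same extraction on canned Service JSON with sed.
SVC_JSON='{"status":{"loadBalancer":{"ingress":[{"ip":"10.138.0.5"}]}}}'
PROXY_IP=$(printf '%s' "$SVC_JSON" | sed -n 's/.*"ip":"\([^"]*\)".*/\1/p')
echo "$PROXY_IP"   # 10.138.0.5
```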

Creating Tenants, Namespaces, and Topics

Let’s create some resources in the Pulsar cluster first.

  1. Access the container in the pulsar-k3s-toolset-0 Pod.

     
    kubectl -n pulsar exec -it pulsar-k3s-toolset-0 -- /bin/bash


  2. Create a tenant called apache.

     
    ./bin/pulsar-admin tenants create apache


  3. Create a namespace called pulsar in the apache tenant.

     
    ./bin/pulsar-admin namespaces create apache/pulsar


  4. Create a topic test-topic with 4 partitions in the namespace apache/pulsar.

     
    ./bin/pulsar-admin topics create-partitioned-topic apache/pulsar/test-topic -p 4


  5. Check the result.

     
    ./bin/pulsar-admin topics list-partitioned-topics apache/pulsar


  6. Expected output:

     
    "persistent://apache/pulsar/test-topic"
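One detail worth knowing about step 4: a partitioned topic is backed by a set of internal topics, one per partition, named with a -partition-<N> suffix. The loop below simply prints the names under which the four partitions are individually addressable:

```shell
# Each partition of apache/pulsar/test-topic is itself a topic named
# persistent://apache/pulsar/test-topic-partition-<N>.
for i in 0 1 2 3; do
  echo "persistent://apache/pulsar/test-topic-partition-$i"
done
```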


Producing and Consuming Messages

I will use the Pulsar client provided by the community for the demo.

  1. Download the Apache Pulsar package to your machine and decompress it.

  2. Go to the root directory of the package and edit the conf/client.conf file to replace the service URLs with the exposed IP address of the proxy (10.138.0.5 in this example).

     
    # Replace the service URL with your own IP address.
    webServiceUrl=http://10.138.0.5:8080
    brokerServiceUrl=pulsar://10.138.0.5:6650
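If you'd rather script the edit, a sed substitution can patch both URLs. The sketch below first writes a sample client.conf into a scratch directory so it is self-contained; in practice you would run only the sed command against conf/client.conf in the package root, and 10.138.0.5 is this example's proxy IP.

```shell
# Create a scratch copy of client.conf with default localhost URLs.
mkdir -p demo-pkg/conf
printf 'webServiceUrl=http://localhost:8080\nbrokerServiceUrl=pulsar://localhost:6650\n' \
  > demo-pkg/conf/client.conf

# Patch both service URLs to point at the proxy's external IP.
PROXY_IP=10.138.0.5   # replace with your own proxy IP
sed -i \
  -e "s|^webServiceUrl=.*|webServiceUrl=http://${PROXY_IP}:8080|" \
  -e "s|^brokerServiceUrl=.*|brokerServiceUrl=pulsar://${PROXY_IP}:6650|" \
  demo-pkg/conf/client.conf

cat demo-pkg/conf/client.conf
```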


  3. Use the pulsar-client tool to create a subscription to consume messages from the topic apache/pulsar/test-topic.

     
    ./bin/pulsar-client consume -s sub apache/pulsar/test-topic -n 0


  4. Open a new terminal, and create a producer to send 10 messages to the test-topic topic.

     
    ./bin/pulsar-client produce apache/pulsar/test-topic -m "hello k3s and pulsar" -n 10


  5. Expected result: in the consumer terminal, you should see the message content hello k3s and pulsar printed 10 times.

Conclusion

As the steps above show, deploying Pulsar on K3s is generally the same as deploying it on Kubernetes. That said, K3s gives you more flexibility when choosing a test machine because it is more lightweight. The key is to have a load balancer implementation available to expose the proxy IP address of your Pulsar cluster.


Opinions expressed by DZone contributors are their own.
