
Related

  • Establishing a Highly Available Kubernetes Cluster on AWS With Kops
  • Automate Cluster Autoscaler in EKS
  • Automate Application Load Balancers With AWS Load Balancer Controller and Ingress
  • Distributed Cloud Architecture for Resilient Systems: Rethink Your Approach To Resilient Cloud Services


Implementing EKS Multi-Tenancy Using Capsule (Part 3)

Understand how to configure namespace options, resource quotas, limit ranges, and network policies for tenants, and verify cross-tenant and cross-cluster communication.

By Phani Krishna Kollapur Gandla · May. 04, 24 · Tutorial


In the previous articles of this series (Part 1 and Part 2), we covered what multi-tenancy is, the different tenant isolation models, the challenges with Kubernetes-native services, installing the Capsule framework on AWS EKS, and creating one or more tenants on an EKS cluster with one or more AWS IAM users as tenant owners.

In this part, we will explore how to configure namespace options, resource quotas, and limit ranges, and assign network policies for the tenants using the Capsule framework.

Configure Namespace Options and Verify the Namespace Quota for the Tenant

The cluster admin can control how many namespaces a tenant can create by setting a quota in the tenant manifest spec.namespaceOptions.quota.

PowerShell
 
kubectl apply -f oil-ns-options.yaml


YAML
 
# oil-ns-options.yaml is as below
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: shiva
    kind: User
  namespaceOptions:
    quota: 3
PowerShell
 
# get the description of the tenant created
kubectl get tenant oil -o yaml


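The output should include a status section similar to the sketch below (abbreviated; exact field names may vary by Capsule version):

```yaml
status:
  namespaces: []   # namespaces currently owned by the tenant
  size: 0          # current namespace count
  state: Active
```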

The current namespace count is 0, while the allotted quota is 3. The tenant owner can create namespaces up to the quota:

PowerShell
 
# login as shiva and create multiple namespaces until it exceeds quota
aws eks --region us-east-1 update-kubeconfig --name eks-cluster1 --profile shiva
kubectl create ns oil-development
kubectl create ns oil-production
kubectl create ns oil-qa
kubectl create ns oil-testing


Once the namespace quota assigned to the tenant has been reached, the tenant owner cannot create further namespaces. The enforcement of the maximum number of namespaces per Tenant is the responsibility of the Capsule controller via its Dynamic Admission Webhook capability.
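The enforcement amounts to a simple count check, sketched here in Python (illustrative pseudologic, not Capsule's actual source):

```python
# Illustrative sketch of the decision Capsule's admission webhook makes
# when a tenant owner requests a new namespace (not actual Capsule code).
def admit_namespace(current_count: int, quota: int) -> bool:
    """Allow a new namespace only while the tenant is below its quota."""
    return current_count < quota

# Tenant oil has quota 3: the first three creates succeed, the fourth is denied.
print(admit_namespace(2, 3))  # True
print(admit_namespace(3, 3))  # False
```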

As a cluster administrator, verify the tenant oil description.

PowerShell
 
kubectl get tenant oil -o yaml

Configure Resource Quotas and Verify Cross-Namespace Inheritance for the Tenant

ResourceQuota is a Kubernetes object that enables administrators to restrict cluster tenants' resource usage per namespace. Namespaces create virtual clusters within a physical Kubernetes cluster to help users avoid resource naming conflicts and manage capacity, among other things.

As a cluster administrator, apply the below YAML configuration.

YAML
 
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: shiva
    kind: User
  namespaceOptions:
    quota: 3
  resourceQuotas:
    scope: Tenant
    items:
    - hard:
        limits.cpu: "8"
        limits.memory: 16Gi
        requests.cpu: "8"
        requests.memory: 16Gi
    - hard:
        pods: "10"


We have defined two hard limits: one for CPU and memory limits and requests, and another for the total number of pods in the tenant oil. When the aggregate usage across all namespaces crosses a hard quota, the native ResourceQuota admission controller in Kubernetes denies the tenant owner's request to create resources exceeding the quota.

As a tenant owner, verify that the resource quotas above are inherited by all the namespaces.

PowerShell
 
kubectl -n oil-production describe quota


PowerShell
 
kubectl -n oil-development describe quota


As a tenant owner, create an nginx deployment by applying the below YAML in the oil-development namespace.

PowerShell
 
kubectl -n oil-development apply -f oil-nginx-deploy-quota.yaml

# Get the pods created in the namespace development of tenant oil
kubectl -n oil-development get pods


Now, change the replica count from 3 to 10 in oil-nginx-deploy-quota.yaml and apply it in the ‘oil-production’ namespace.

PowerShell
 
kubectl -n oil-production apply -f oil-nginx-deploy-quota.yaml


YAML
 
# oil-nginx-deploy-quota.yaml is as below 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
            limits:
              memory: 200Mi
              cpu: 1
            requests:
              memory: 100Mi
              cpu: 100m
        ports:
        - containerPort: 80
PowerShell
 
kubectl -n oil-production get pods


Though the replica count is 10, only 7 pods are created as the total hard limit specified is 10, and 3 are already created in the 'oil-development' namespace.
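The arithmetic behind the 7 scheduled pods can be sketched as follows (an illustrative calculation, not Kubernetes code):

```python
# Sketch of the quota arithmetic above: a tenant-scoped hard pod limit
# is shared by every namespace in the tenant.
hard_limit_pods = 10              # from the tenant's resourceQuotas items
pods_in_oil_development = 3       # replicas already running in oil-development
requested_in_oil_production = 10  # replicas requested by the new deployment

remaining = hard_limit_pods - pods_in_oil_development
created = min(requested_in_oil_production, remaining)
print(created)  # 7
```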

Configure Pod and Container Limits and Verify Cross-Namespace Inheritance for the Tenant

A limit range specifies the minimum and maximum CPU and memory for all containers in a pod within a given namespace. To create a container in the namespace, the container's CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object.
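How the default, minimum, and maximum values interact can be sketched in Python (a hypothetical helper in millicore units, not the actual admission plugin):

```python
# Illustrative sketch of LimitRange behavior for a container's CPU request
# (hypothetical helper, not kube-apiserver code). Values are millicores.
def effective_cpu_request(requested, default_request, min_cpu, max_cpu):
    """Return the admitted CPU request, or None if the LimitRange rejects it."""
    cpu = default_request if requested is None else requested
    return cpu if min_cpu <= cpu <= max_cpu else None

# Values mirror the tenant manifest above: defaultRequest 100m, min 50m, max 1 CPU.
print(effective_cpu_request(None, 100, 50, 1000))  # 100 (default applied)
print(effective_cpu_request(25, 100, 50, 1000))    # None (below the 50m minimum)
```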

As a cluster admin, apply the below YAML configuration.

YAML
 
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
...
  limitRanges:
    items:
      - limits:
          - type: Pod
            min:
              cpu: "50m"
              memory: "5Mi"
            max:
              cpu: "1"
              memory: "1Gi"
      - limits:
          - type: Container
            defaultRequest:
              cpu: "100m"
              memory: "10Mi"
            default:
              cpu: "200m"
              memory: "100Mi"
            min:
              cpu: "50m"
              memory: "5Mi"
            max:
              cpu: "1"
              memory: "1Gi"
      - limits:
          - type: PersistentVolumeClaim
            min:
              storage: "1Gi"
            max:
              storage: "10Gi"


When the tenant owner creates any new namespace, these limits are inherited by all the namespaces created.

Assign Network Policies

Kubernetes network policies control network traffic between namespaces and between pods in the same namespace. A cluster admin can enforce network traffic isolation between different tenants while leaving the tenant owner the freedom to set isolation between namespaces in the same tenant, or even between pods in the same namespace.

Create a Tenant-Level Network Policy

As a cluster administrator, create a new tenant ‘oil’ with IAM user ‘shiva’ as the tenant owner having ingress and egress network policies as shown in the below YAML file:

PowerShell
 
kubectl apply -f oil-tenant-net.yaml
YAML
 
## oil-tenant-net.yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: shiva
    kind: User
  networkPolicies:
    items:
    - policyTypes:
      - Ingress
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 192.168.0.0/16 
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              capsule.clastix.io/tenant: oil
        - podSelector: {}
        - ipBlock:
            cidr: 192.168.0.0/16
      podSelector: {}


Verify Network Policy Applied Across All the Namespaces in the Tenant

As a tenant owner, create two new namespaces ‘oil-development’ and ‘oil-production’. Get the network policies of the namespaces. Tenant policies have to be applied across all the namespaces.

PowerShell
 
# Get network policies of the namespace
kubectl -n oil-development get networkpolicies
kubectl -n oil-production get networkpolicies


PowerShell
 
kubectl -n oil-production describe networkpolicy capsule-oil-0


Verify if the Tenant Owner Can Create Additional Network Policies in the Namespace

As the tenant owner, create a network policy “production-network-policy" in the namespace ‘oil-production’ by applying the below YAML file.

YAML
 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: production-network-policy
  namespace: oil-production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
PowerShell
 
kubectl -n oil-production get networkpolicies


Verify if the Tenant Owner Can Delete the Network Policies

Shiva can create, patch, and delete additional network policies within her namespaces.

PowerShell
 
kubectl -n oil-production auth can-i get networkpolicies
yes

kubectl -n oil-production auth can-i delete networkpolicies
yes

kubectl -n oil-production auth can-i patch networkpolicies
yes

kubectl -n oil-production delete networkpolicy production-network-policy


Any attempt by the tenant owner to delete a tenant-level network policy defined in the tenant manifest is denied by the validating webhook that enforces it.

Expose Services Across Tenant Namespaces and Among Clusters in the Kubernetes Environment

NodePort and LoadBalancer are both service types used in Kubernetes to expose deployments or pods to external traffic. They differ in how they achieve this and the level of control they offer.

NodePort services expose pods internally the same way a ClusterIP service does. However, NodePort exposes a service on a predefined port range (30000-32767) on all worker nodes in the Kubernetes cluster.  

NodePort services are useful for exposing pods to external traffic where clients have network access to the Kubernetes nodes. For example, if your nodes have the hostnames node1 and node2 and a service uses nodePort 30007, clients can reach it at node1:30007 or node2:30007. It doesn't matter which node the external client connects to, as Kubernetes configures the network routing to direct all traffic arriving on port 30007 on any node to the appropriate pods.
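As a sketch, a NodePort service matching that port-30007 example could look like the following (service and label names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service   # illustrative name
spec:
  type: NodePort
  selector:
    app: my-app               # illustrative label; must match the target pods
  ports:
  - port: 80                  # ClusterIP port inside the cluster
    targetPort: 80            # container port on the pods
    nodePort: 30007           # opened on every node; must fall in 30000-32767
```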

But NodePort has several disadvantages:

  • Only one service can use a given port.
  • Only ports 30000-32767 can be used.
  • If a node/VM IP address changes, it has to be handled manually.

For these reasons, we will expose the service as a load balancer service. 

Below are the steps we have used for exposing nginx with a load balancer for communicating within or outside the cluster and verifying the use cases below.

  • We will create two tenants: “oil” with tenant owner “shiva” and “gas” with tenant owner “ganesha.” Refer to the previous sections for the creation process.
  • As tenant owner Shiva, create an nginx deployment in the oil-production namespace and expose it as a load balancer service.
PowerShell
 
kubectl -n oil-production create deployment lb-nginx --image=nginx
kubectl -n oil-production create service loadbalancer lb-nginx --tcp=80:80


  • As tenant owner Shiva, get the services in the namespace to obtain the external IP address.
PowerShell
 
kubectl -n oil-production get svc


From the result, note the external address (EXTERNAL-IP) of the lb-nginx service.
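The two imperative commands above are roughly equivalent to this declarative manifest (a sketch; `kubectl create deployment`/`create service loadbalancer` apply the `app: lb-nginx` label by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lb-nginx
  namespace: oil-production
spec:
  type: LoadBalancer    # provisions an AWS load balancer for the service
  selector:
    app: lb-nginx       # matches the pods created by the lb-nginx deployment
  ports:
  - port: 80
    targetPort: 80
```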

Expose Service and Verify Cross Tenant Communication

As tenant owner Ganesha, create a test pod in the gas-production namespace of tenant ‘gas’.

PowerShell
 
aws eks --region us-east-1 update-kubeconfig --name eks-cluster1 --profile ganesha
kubectl -n gas-production run webserver --image=nginx


Log into the pod and try to access/curl the external IP address of lb-nginx service created by owner Shiva in the namespace “oil-production” of tenant ‘oil’. The “Welcome to Nginx” page is displayed.

PowerShell
 
kubectl -n gas-production get pods
kubectl -n gas-production exec -it nginx-6bb49bc94b-g52qw sh
curl a55013c0596be488a9ee2a63254bd021-1734400256.us-east-1.elb.amazonaws.com


Expose Service and Verify Cross-Cluster Communication

  • Login into the AWS console and create one more EKS cluster eks-cluster2.
  • From the local machine, open PowerShell.
  • As a cluster administrator, in "eks-cluster2" cluster, create a new namespace, called test.
PowerShell
 
aws eks --region us-east-1 update-kubeconfig --name eks-cluster2
kubectl create ns test
kubectl get ns


Create a test pod in the namespace and try to access/curl the external IP address of lb-nginx service created by owner Shiva in the namespace “oil-production” of tenant ‘oil’. 

PowerShell
 
kubectl -n test run nginx --image=nginx
kubectl -n test exec -it nginx sh
curl a55013c0596be488a9ee2a63254bd021-1734400256.us-east-1.elb.amazonaws.com


“Welcome to Nginx” page is displayed.

As the cluster's nodes are in a private subnet and are reachable only inside or through a VPC, the services will only be accessible within the organization and not from the internet.

Summary

In this part, we have seen how to configure namespace options, resource quotas, limit ranges, and network policies for tenants. We have also exposed services and verified cross-tenant and cross-cluster communication. In the next article, we will dive deep into security policy verification use cases by configuring policy engines such as Kyverno or OPA Gatekeeper.


Opinions expressed by DZone contributors are their own.

