Developing Applications on Multi-tenant Clusters With Flux and Kustomize

Take a look at how multiple teams can use the resources of a single cluster to develop an application.

By Stefan Prodan · Jul. 29, 19 · Tutorial · 12.2K Views


To maximize resources and minimize costs when developing applications on Kubernetes, you may opt to use a single multi-tenant cluster split into namespaces. Namespaces allow your teams to develop, and even QA, applications on a single cluster. Running fewer clusters helps keep costs from spiralling out of control and, more importantly, reduces the security risk posed by multiple unattended clusters running throughout your organization.

In this tutorial, Stefan Prodan (@stefanprodan) describes how to manage deployments with GitOps using Flux and Kustomize on a multi-tenant cluster split into namespaces. But before we dive into the details, let’s define some of the terms and tools that we’ll be using in this tutorial:

  • Flux — Flux is a GitOps operator for continuous delivery that automatically ensures the state of a cluster matches the config in Git. It uses an operator in the cluster to trigger deployments inside Kubernetes, which means you don't need a separate CD tool. Flux monitors all relevant image repositories, detects new images, triggers deployments, and updates the desired running configuration accordingly (subject to a configurable policy).
  • Namespaces and Multi-tenant clusters — Kubernetes challenges the way we have traditionally thought about development environments. Kubernetes clusters have a built-in feature that allows you to share a cluster among different environments and between different projects on separate teams using namespaces. A namespace is a way to divide the resources of one cluster amongst teams.
  • Kustomize — Kustomize lets you customize raw, template-free YAML files and use them for multiple purposes, while at the same time leaving the original YAML untouched and usable as is.
  • Flagger — Flagger is an open-source tool that automates progressive delivery strategies like canary, A/B and other more complex deployments.

1. Create the fluxcd-multi-tenancy Repository

This initial repository serves as a starting point for a multi-tenant cluster managed with Git, Flux, and Kustomize.

I'm assuming that a multi-tenant cluster is shared by multiple teams. Cluster-wide operations are performed by the cluster administrators, while namespace-scoped operations are performed by the various teams, each with their own Git repository. This means a team member who is not a cluster admin can't create namespaces or custom resource definitions, or change anything in another team's namespace.

[Image: flux-multi-tenancy.png]

2. Create Two Git Repositories: One for Cluster Admins and Another for Teams

Create the two Git repositories as follows:

  • Clone the fluxcd-multi-tenancy repository for the cluster admins. This repo will be referred to as org/dev-cluster.
  • Clone the fluxcd-multi-tenancy-team1 repository for dev team1. This repo will be referred to as org/dev-team1.
Team        Namespace   Git Repository     Flux RBAC
ADMIN       all         org/dev-cluster    Cluster wide, e.g. namespaces, CRDs, Flux controllers
DEV-TEAM1   team1       org/dev-team1      Namespace scoped, e.g. deployments, custom resources
DEV-TEAM2   team2       org/dev-team2      Namespace scoped, e.g. ingress, services, network policies


Cluster Admin repository structure:

[Image: cluster-admin-yaml.png]
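The layout shown above can be sketched roughly as follows (inferred from the paths used in the later steps; exact file names in your fork may differ):

```
dev-cluster/
├── .flux.yaml            # tells the Flux daemon how to generate manifests
├── install/              # Flux deployment spec plus kustomize patches
│   ├── kustomization.yaml
│   └── flux-patch.yaml
├── base/                 # shared Flux base used by all instances
└── cluster/
    ├── kustomization.yaml
    ├── common/           # CRDs, cluster roles, other cluster-wide objects
    └── team1/
        ├── flux-patch.yaml
        └── psp.yaml
```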

The base folder holds the deployment spec used for installing Flux in the flux-system namespace and in the teams' namespaces. All Flux instances share the same Memcached server, deployed at install time in the flux-system namespace.

With .flux.yaml, configure Flux to run the Kustomize build on the cluster dir and deploy the generated manifests:

[Image: kustomize-build-cluster-admin.png]
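A minimal .flux.yaml along these lines tells the Flux daemon to run kustomize build and apply the output (a sketch based on Flux's manifest-generation config format; verify against your Flux version):

```yaml
# .flux.yaml in the org/dev-cluster repo root (sketch)
version: 1
commandUpdated:
  generators:
    # Flux runs this in the configured git path and applies the output
    - command: kustomize build .
```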

Repository Structure for development team1:

[Image: team1-yaml.png]

The workloads folder contains the desired state of the team1 namespace and the flux-patch.yaml contains the Flux annotations that define how the container images should be updated.

With .flux.yaml, configure Flux to run the Kustomize build, apply the container update policies, and deploy the generated manifests:

[Image: flux-patch-yaml.png]
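For the team repository, .flux.yaml additionally points Flux at the patch file holding the image update annotations. A sketch of what this might contain:

```yaml
# .flux.yaml in the org/dev-team1 repo root (sketch)
version: 1
patchUpdated:
  generators:
    - command: kustomize build .
  # Flux writes automated image updates back into this patch file
  patchFile: flux-patch.yaml
```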

3. Install the Cluster Admin Flux Agent

In the dev-cluster repo, change the git URL to point to your fork:

vim ./install/flux-patch.yaml
--git-url=git@github.com:org/dev-cluster


Install the cluster-wide Flux with kubectl kustomize:

kubectl apply -k ./install/


Get the public SSH key with:

fluxctl --k8s-fwd-ns=flux-system identity


Add the public key to the github.com:org/dev-cluster repository deploy keys with write access.

The cluster-wide Flux does the following:

  • Creates the cluster objects from cluster/common directory (CRDs, cluster roles, etc.)
  • Creates the team1 namespace and deploys a Flux instance with restricted access to that namespace

4. Install a Flux Agent per Team

Change the dev team1 git URL:

vim ./cluster/team1/flux-patch.yaml
--git-url=git@github.com:org/dev-team1


After committing your changes, the system Flux configures team1's Flux to sync with the org/dev-team1 repository.

Get the public SSH key for team1 with:

fluxctl --k8s-fwd-ns=team1 identity


Add the public key to the github.com:org/dev-team1 deploy keys with write access. Team1's Flux applies the manifests from the org/dev-team1 repository only in the team1 namespace; this is enforced with RBAC and role bindings.

If team1 needs to deploy a controller that depends on a CRD or a cluster role, they'll have to open a PR in the org/dev-cluster repository and add those cluster-wide objects in the cluster/common directory.

The team1's Flux instance can be customized with different options than the system Flux using the cluster/team1/flux-patch.yaml.

[Image: flux-patch-yaml.png]

The k8s-allow-namespace flag restricts the Flux discovery mechanism to a single namespace.
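As a sketch, the team1 patch might look like the following (the deployment name and exact args are assumptions based on standard Flux flags):

```yaml
# cluster/team1/flux-patch.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
  namespace: team1
spec:
  template:
    spec:
      containers:
        - name: flux
          args:
            - --git-url=git@github.com:org/dev-team1
            - --git-branch=master
            # restrict discovery and apply to the team1 namespace
            - --k8s-allow-namespace=team1
```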

5. Install Flagger

Flagger is a progressive delivery Kubernetes operator that can be used to automate canary, A/B testing, and Blue/Green deployments. You can deploy Flagger by including its manifests in the cluster/kustomization.yaml file:

[Image: flux-flagger.png]

Commit the changes to git and wait for system Flux to install Flagger and Prometheus:

fluxctl --k8s-fwd-ns=flux-system sync
kubectl -n flagger-system get po
NAME                                  READY   STATUS
flagger-64c6945d5b-4zgvh              1/1     Running
flagger-prometheus-6f6b558b7c-22kw5   1/1     Running

A team member can now push canary objects to the org/dev-team1 repository and Flagger will automate the deployment process. Flagger can notify your teams when a canary deployment has been initialized, when a new revision has been detected, and whether the canary analysis failed or succeeded.
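For reference, a minimal canary object for a podinfo deployment might look like this (the apiVersion and analysis field names vary across Flagger releases, so treat this as a sketch rather than a drop-in manifest):

```yaml
# canary.yaml in org/dev-team1 (sketch; verify against your Flagger version)
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: team1
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 1m       # how often the canary metrics are checked
    threshold: 5       # failed checks before rollback
    maxWeight: 50      # max traffic routed to the canary
    stepWeight: 10     # traffic increase per step
```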

Enable Slack notifications by editing the cluster/flagger/flagger-patch.yaml file:

[Image: flagger-patch.png]
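The patch sets Flagger's Slack flags on its deployment; a sketch (the webhook URL and channel below are placeholders, not real values):

```yaml
# cluster/flagger/flagger-patch.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagger
  namespace: flagger-system
spec:
  template:
    spec:
      containers:
        - name: flagger
          args:
            - -slack-user=flagger
            - -slack-channel=general
            # placeholder: substitute your own incoming webhook URL
            - -slack-url=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
```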


6. Configuring Pod Security Policies per Team

With pod security policies, a cluster admin can define a set of conditions that a pod must meet in order to be accepted into the cluster. For example, you can forbid a team from creating privileged containers or from using the host network.

Edit the team1 pod security policy in cluster/team1/psp.yaml:

[Image: psp-yaml.png]

Set privileged, hostIPC, hostNetwork, and hostPID to false and commit the change to git. From this moment on, team1 will not be able to run containers with an elevated security context under the default service account.
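The resulting policy could look roughly like this (a sketch; PodSecurityPolicy lived at policy/v1beta1 when this was written, and the rule fields below are assumptions):

```yaml
# cluster/team1/psp.yaml (sketch)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: team1
spec:
  privileged: false    # no privileged containers
  hostIPC: false
  hostNetwork: false
  hostPID: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
```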

If a team member adds a privileged container definition in the org/dev-team1 repository, Kubernetes will deny it:

kubectl -n team1 describe replicasets podinfo-5d7d9fc9d5
Error creating: pods "podinfo-5d7d9fc9d5-" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]

7. Enforcing Custom Policies per Team

Gatekeeper is a validating webhook that enforces CRD-based policies executed by Open Policy Agent.

[Image: flux-open-policy-agent-gatekeeper.png]

You can deploy Gatekeeper by including its manifests in the cluster/kustomization.yaml file:

[Image: gatekeeper-kustomize.png]

Inside the gatekeeper dir there is a constraint template that instructs OPA to reject Kubernetes deployments if no container resources are specified.

Enable the constraint for team1 by editing the cluster/gatekeeper/constraints.yaml:

[Image: gatekeeper-constraints.png]
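Assuming the constraint template defines a kind such as ContainerResources (the kind name here is an assumption inferred from the error output below), the constraint scoped to team1 might look like:

```yaml
# cluster/gatekeeper/constraints.yaml (sketch)
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: ContainerResources
metadata:
  name: containerresources
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]
    # only enforce the policy in the team1 namespace
    namespaces:
      - team1
```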

Commit the changes to git and wait for system Flux to install Gatekeeper and apply the constraints:

fluxctl --k8s-fwd-ns=flux-system sync
kubectl -n gatekeeper-system get po 

If a team member adds a deployment without CPU or memory resources in the org/dev-team1 repository, Gatekeeper will deny it:

kubectl -n flux-system logs deploy/flux
admission webhook "validation.gatekeeper.sh" denied the request:
[denied by containerresources] container <podinfo> has no memory requests
[denied by containerresources] container <sidecar> has no memory limits


8. Adding a New Team, Namespace and Repository

If you want to add another team to the cluster, first create a Git repository named github.com:org/dev-team2.

Run the create team script:

./scripts/create-team.sh team2
team2 created at cluster/team2/
team2 added to cluster/kustomization.yaml


Change the git URL in "cluster/team2" dir:

vim ./cluster/team2/flux-patch.yaml
--git-url=git@github.com:org/dev-team2

Push the changes to the master branch of "org/dev-cluster" and sync with the cluster:

fluxctl --k8s-fwd-ns=flux-system sync

Get the team2 public SSH key with:

fluxctl --k8s-fwd-ns=team2 identity 


Add the public key to the github.com:org/dev-team2 repository deploy keys with write access. Team2's Flux applies the manifests from the org/dev-team2 repository only in the team2 namespace.

9. Isolating Tenants

With this setup, Flux prevents a team member from altering cluster-level objects or other teams' workloads.

To harden tenant isolation, the cluster admin should consider using:

  • Resource quotas (limit the compute resources that can be requested by a team)
  • Network policies (restrict cross namespace traffic)
  • Pod security policies (prevent running privileged containers or host network and filesystem usage)
  • Open Policy Agent admission controller (enforce custom policies on Kubernetes objects)

Published at DZone with permission of Stefan Prodan, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
