
Kubernetes Namespaces Explained


Learn to use Kubernetes namespaces to define the objects within for better cluster management across development teams.

· Cloud Zone ·

Why Namespaces?

Kubernetes Namespaces can be used to divide and manage a cluster, segregating it for use by different dev teams or for different purposes (e.g. dev and prod). Namespaces can be a focus for role-based access control, with the option to define roles that apply across a namespace. Resource quotas can be set per namespace, allowing specific namespaces to be given higher resource allowances. Sometimes one will choose distinct clusters for different purposes, but if the purposes are related then namespaces can be a more convenient or appropriate option.

To understand namespaces, we need to understand that they provide a scope for Kubernetes names. Within a Namespace, an Object can be referred to by a short name like 'Captain', and the Namespace adds further scope to identify the object, e.g. which ship the Captain is Captain of. Within a Namespace the name must be unique (there can be only one Captain per Namespace), but across Namespaces only the combination of Name and Namespace needs to be unique. This simplifies naming within Namespaces and makes it possible to refer to objects across Namespace boundaries. Let's explore this by example.
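For instance, the same short name can be declared in two different Namespaces without conflict (the namespace names here are just placeholders):

```yaml
# Sketch: the same short name in two different Namespaces.
# Only the (name, namespace) pair must be unique cluster-wide.
apiVersion: v1
kind: Service
metadata:
  name: captain        # short name, unique within its Namespace
  namespace: ship-a
spec:
  selector:
    app: captain
  ports:
    - port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: captain        # same short name, different Namespace - no conflict
  namespace: ship-b
spec:
  selector:
    app: captain
  ports:
    - port: 8080
```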

Namespaces Demo App

Our demo app consists of two small Spring Boot web apps with simple UIs. Each app is a microservice that can be set to play the role of 'captain', 'bridge' or 'science-officer', depending upon the value set for the spring.application.name property. We'll build one instance of each role per project. The two projects represent different 'ships' and are the same except that the UIs use different images to distinguish the ships.

Within each microservice, we can make a call to another, such as:

@GetMapping(value = "bridge")
public String callBridge(){
    return restTemplate.postForEntity(bridgeUrl, appName, String.class).getBody();
}

The bridgeUrl is configured via a Spring Boot property (bridge.call.url). The appName is the calling service's spring.application.name, so that the responding service can acknowledge who it is replying to. Each service needs to be able to reply to such a call:

@PostMapping("call")
public String respond(@RequestBody String caller){
    return String.format("yes %1$s, %2$s here",caller,appName);
}

So if the captain calls the bridge, the bridge responds with "yes captain, bridge here".
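For local development, the corresponding configuration might look something like this in application.properties (the port shown is a placeholder; in the Kubernetes deployment these values are supplied as environment variables instead):

```properties
# Hypothetical local values; in the cluster these are injected
# as SPRING_APPLICATION_NAME and BRIDGE_CALL_URL environment variables.
spring.application.name=captain
bridge.call.url=http://localhost:8081/call
```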

Each service contains a UI that retrieves the spring.application.name from the backend’s ‘app-name’ endpoint and uses it to set the title and decide which image to show:

$.get( "app-name", function( data ) {
    document.title=data;
    $('#pageTitle').text(data);
    $('#picture').attr('src',data + '.jpg');
    $('#call-'+data).hide();
}).fail(function(error) {
    alert('cannot retrieve app config: '+error.responseText);
});

So the image could be bridge.jpg, captain.jpg or science-officer.jpg. The .hide() call here ensures we don't have a button for the captain to call himself, for example.

The embedded UI just provides buttons to call its backend to communicate across the ship.


Demo App in Minikube

The two apps are both set up with the Fabric8 Maven plugin so that we can build Docker images for each — the first is called ‘startrek/tos’ and the second is called ‘startrek/tng’.

To deploy the apps to minikube we have a Kubernetes deployment descriptor for each. The descriptors are almost the same. For the tos descriptor we first create a Namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: tos
  labels:
    name: tos

Then a ConfigMap for configurations that we will apply to all of the services in the descriptor:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tos-config
  namespace: tos
data:
  JAVA_OPTS: -Xmx64m -Xms64m
  BRIDGE_CALL_URL: "http://bridge:8080/call"
  CAPTAIN_CALL_URL: "http://captain:8080/call"
  SCIENCEOFFICER_CALL_URL: "http://science-officer:8080/call"

So we'll call the captain using the name 'captain'. There will be a captain available at that name because the descriptor includes a Kubernetes Service for it:

apiVersion: v1
kind: Service
metadata:
  name: captain
  namespace: tos
spec:
  selector:
    serviceType: captain
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30081
  type: NodePort

The Service looks for Pods labeled with 'serviceType: captain'. There will be Pods matching this because the descriptor includes a Deployment to create them:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: captain
  namespace: tos
  labels:
    serviceType: captain
spec:
  replicas: 1
  template:
    metadata:
      name: captain
      labels:
        serviceType: captain
    spec:
      containers:
        - name: captain
          image: startrek/tos:latest
          imagePullPolicy: Never
          ports:
          - containerPort: 8080
          env:
          - name: SPRING_APPLICATION_NAME
            value: "captain"
          envFrom:
          - configMapRef:
              name: tos-config

These Pods use the startrek/tos Docker image and the tos-config ConfigMap. The setup is much the same for the bridge and science-officer, except that different external ports are used on the Services. The tng descriptor uses the startrek/tng Docker image instead.
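For example, the bridge Service differs from the captain's only in its name, selector and external port (the exact nodePort here is an assumption, based on the range of ports opened later):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bridge
  namespace: tos
spec:
  selector:
    serviceType: bridge
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080   # assumed; each Service needs its own distinct nodePort
  type: NodePort
```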

With these descriptors, we can deploy to minikube. First, start minikube with:

minikube start --memory 4000 --cpus 3

Point our terminal at minikube's Docker daemon with:

eval $(minikube docker-env)

And then build each application by running the following from its directory:

mvn clean install

And deploy with:

kubectl create --save-config -f ./tos/k8sdescriptor.yaml 
kubectl create --save-config -f ./tng/k8sdescriptor.yaml 

Then see all the services with:

open http://$(minikube ip):30080 
open http://$(minikube ip):30081 
open http://$(minikube ip):30082 
open http://$(minikube ip):30083 
open http://$(minikube ip):30084 
open http://$(minikube ip):30085


Crossing the Namespace Divide

We can switch the tng bridge and captain to call the tos science-officer. To do this, we go to the k8sdescriptor for tng and replace the value of SCIENCEOFFICER_CALL_URL (http://science-officer:8080/call) with http://science-officer.tos:8080/call. (The fully qualified name http://science-officer.tos.svc.cluster.local:8080/call would also work.) Then we save the change and run:

kubectl apply -f ./tng/k8sdescriptor.yaml 
kubectl delete --all pods --namespace=tng 

This applies the change and deletes the tng Pods; the Deployment will automatically create new ones using the new config. (The ConfigMap change doesn't trigger an update automatically. On newer Kubernetes versions, kubectl rollout restart deployment --namespace=tng achieves the same restart without deleting Pods by hand.)

The difference isn't very apparent, as we just see the same reply we did before ('yes captain, science-officer here'). If we want to be sure that the tng services really are calling the tos science-officer, we can delete the tos namespace with 'kubectl delete namespace tos' and see that the call now fails and that the longer name appears in the error.


Now that we know that we can call across Namespaces by giving the Namespace name in the call, we might consider just including an extra name (in this case 'tos' or 'tng') in all of our Kubernetes Object names, so that instead of calling the 'captain' we call 'tos-captain'. If we want to do that, Helm will do it for us, and more. But before we look at that, we should clean up everything we've created:

kubectl delete namespaces tos 
kubectl delete namespaces tng

The Objects in the namespaces will be removed automatically.

Sharing a Namespace and Helm

Namespaces provide distinct logical spaces for which we can control permissions or resources. But if the logical separation we're looking for is not about resources or permissions, and is more about packaging and deployment, then we might look at using a single namespace and packaging with Helm.

Helm helps us package up our Kubernetes applications so that they can be more easily shared and re-used. To achieve this we create a Helm chart. A Helm chart is like a parameterized deployment descriptor: in fact, it is a template that is used to generate deployment descriptors for each install.
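As a sketch of what "parameterized" means here (the field names below are illustrative, not necessarily those of the actual chart), a template in the chart substitutes values supplied at install time:

```yaml
# templates/deployment.yaml (illustrative fragment, not the real chart)
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  # The release name and values are injected when 'helm install' runs
  name: {{ .Release.Name }}-captain
spec:
  replicas: {{ .Values.replicas }}
  template:
    spec:
      containers:
        - name: captain
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Defaults come from the chart's values.yaml and can be overridden per install with --set, which is how the install commands later in this article select the startrek/tng image.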

The chart for this application in GitHub was created in two stages. First, a chart representing a single microservice was created with 'helm create startrekcharacter', and its configuration was set up to provide the environment variables for calling other services and to default to the startrek/tos image. Then an umbrella chart called 'startrek' was created, and the first chart was moved to be a subchart of it. The umbrella chart re-uses the subchart multiple times so as to include a bridge, captain and scienceofficer, with the option to skip any of these. The umbrella chart allows the image to be set for all of the subcharts by treating that variable as a global. It also allows the URL for another service (e.g. the bridge) to be overridden, to cover the case where a component is skipped (meaning the user chooses to point to an existing instance of the bridge rather than installing a bridge together with the captain and scienceofficer).

This means that we can install the components for tos into the default namespace with the command:

helm install --name=tos ./charts/startrek/

Then install the tng captain and scienceofficer (no tng bridge) in the same namespace, pointing to the tos bridge, with:

helm install --name=tng --set global.image.repository=startrek/tng,bridge.enabled=false,global.bridge.call.url=http://tos-bridge:8080/call ./charts/startrek/

Test them with:

minikube service tos-bridge
minikube service tos-captain
minikube service tos-science-officer
minikube service tng-captain
minikube service tng-science-officer

And delete the tos release with:

helm del --purge tos

Again we can confirm that the tng services were pointing to the (now removed) tos bridge by seeing the call fail.


And we can remove tng with:

helm del --purge tng



