
Protecting Hosts in a Kubernetes Cluster


In this article, we will learn how to use Calico CNI network policies to build firewall functionality that protects the hosts in a Kubernetes cluster.


The administrator of a Kubernetes cluster wants to secure it against incoming traffic from outside the cluster. Calico is a Container Network Interface (CNI) plugin that, in addition to its CNI capabilities, provides network policies to control traffic between pods as well as firewall functionality to secure nodes.

To use Calico's GlobalNetworkPolicy as a firewall that secures a node, a HostEndpoint object needs to be created for each network interface on the node (or a single wildcard HostEndpoint covering all interfaces, as we do below). This is a one-off job that could be automated within the installer. But since nodes are ephemeral and policies can be dynamic, we need a way to manage HostEndpoint objects on each host even after installation.

There are several ways to accomplish this with native Kubernetes constructs, for example:

  1. A DaemonSet that runs a container on every node and creates the needed artifacts
  2. A static Pod that runs on each node and creates the needed artifacts
  3. A Kubernetes operator that ensures a HostEndpoint object is created for every node in the cluster

Outside of Kubernetes, traditional approaches to endpoint protection involve installing an agent on the host and enforcing policies through this agent.

We will go with the first option, a DaemonSet. Unlike DaemonSet Pods, static Pods cannot be managed with kubectl or other Kubernetes API clients. A DaemonSet ensures that a copy of a Pod always runs on all (or certain) hosts, and it starts before other Pods.

Solution Overview

The proposed solution consists of creating a DaemonSet that launches a Pod on every host. The Pod runs an application that creates the HostEndpoint object for that host, if it does not exist yet.

As an example, we will enforce the following sample policy using HostEndpoint objects:

  • Allow any egress traffic from the nodes.
  • Allow ingress traffic to all nodes from specific trusted IP ranges.
  • Deny any other traffic.

This results in the following sequence of steps:

  1. Create the application
  2. Create a Docker image
  3. Create network policies
  4. Create a DaemonSet

Create the Application

We will use a shell script as our application. The script loops forever and checks whether a HostEndpoint object exists for the host it is running on. If not, it uses the kubectl client to create a HostEndpoint for the host that applies to all of the host's interfaces. If the HostEndpoint already exists, the script sleeps for 10 seconds before continuing. Notice that the name of the node is injected into the script via an environment variable; it is obtained through the Downward API, which allows containers to consume information about themselves or the cluster.

Shell

#!/bin/sh

while true; do

  echo $NODE_NAME

  kubectl get hostendpoint $NODE_NAME

  if [ $? -eq 0 ]; then
    echo "Found hep for node $NODE_NAME"
    sleep 10
    continue
  fi

  echo "Creating hep for node $NODE_NAME"

  kubectl apply -f - <<EOF
apiVersion: crd.projectcalico.org/v1
kind: HostEndpoint
metadata:
  name: $NODE_NAME
  labels:
    host-endpoint: ingress
spec:
  interfaceName: "*"
  node: "$NODE_NAME"
EOF

done


The network policies apply to any host whose HostEndpoint carries the host-endpoint label, which we attach here; the GlobalNetworkPolicy objects created later select on this label.
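
Once the DaemonSet (created later in this article) is running, you can confirm the label is present on a node's HostEndpoint; a quick check, with k8s-node-1 standing in for one of your node names:

Shell

kubectl get hostendpoint k8s-node-1 --show-labels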

Create a Docker Image

To deploy the app to Kubernetes, we first have to containerize it. To do so, create the following Dockerfile in the same directory as the script, which is saved as run.sh:

Dockerfile

FROM alpine

WORKDIR /app

ADD https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl /usr/local/bin

ADD run.sh /app

RUN chmod +x /usr/local/bin/kubectl
RUN chmod +x /app/run.sh
CMD [ "/app/run.sh" ]


We are using Alpine as our base image, as it is a minimal Linux distribution that can run a shell script. To talk to the Kubernetes API server, we include kubectl in the image and add the script created in the last step. When the container starts, the script is executed. Running inside a Pod, kubectl automatically authenticates against the API server with the Pod's service account token.

Build the Docker image and push it to a Docker registry that is accessible from all the nodes in the Kubernetes cluster.

Shell

docker build -t randhirkumars/hepinstall:v1 .
docker push randhirkumars/hepinstall:v1


We would need to log in to the Docker registry first if it requires credentials to push an image.
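
For example, for Docker Hub (you will be prompted for a username and password):

Shell

docker login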

Create Network Policy

Calico's GlobalNetworkPolicy and HostEndpoint objects are available as custom resources, so we need to create the corresponding CustomResourceDefinitions (CRDs) first. Create the CRD for GlobalNetworkPolicy:

YAML

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy


and the CRD for HostEndpoint:

YAML

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint


Save both definitions to crds.yaml and create them:

Shell

kubectl create -f crds.yaml
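
You can verify that the CRDs are registered before moving on:

Shell

kubectl get crd | grep projectcalico.org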


Next, create policies to:

  • Allow any egress traffic from the nodes.
YAML

apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
  name: allow-outbound-external
spec:
  order: 10
  egress:
    - action: Allow
  selector: has(host-endpoint)


  • Allow ingress traffic to all nodes from specific trusted IP ranges. Here, ingress traffic from the CIDRs 10.240.0.0/16 and 192.168.0.0/16 is allowed.
YAML

apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
  name: allow-cluster-internal-ingress
spec:
  order: 10
  preDNAT: true
  applyOnForward: true
  ingress:
    - action: Allow
      source:
        nets: [10.240.0.0/16, 192.168.0.0/16]
  selector: has(host-endpoint)


  • Deny any other traffic.
YAML

apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
  name: drop-other-ingress
spec:
  order: 20
  preDNAT: true
  applyOnForward: true
  ingress:
    - action: Deny
  selector: has(host-endpoint)


The order field is important here. Calico evaluates policies in ascending order, so drop-other-ingress, with its higher order value (20), applies after allow-cluster-internal-ingress (10); traffic not matched by the allow rule is then denied.

Save the three policies to policy.yaml and create them:

Shell

kubectl create -f policy.yaml
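
To confirm that the policies exist, you can list them through the CRD:

Shell

kubectl get globalnetworkpolicies.crd.projectcalico.org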


Create a DaemonSet

Apart from the Docker image, we need a few more artifacts to deploy our application on the Kubernetes cluster:

  • A Pod that runs the image in a container
  • A control plane object that watches over the Pod, a DaemonSet in our case
  • A service account with which the Pod runs
  • A cluster role that allows the Pod to interact with the API server for resources
  • A cluster role binding to bind the cluster role to the service account

Service Account

This is the service account that the Pod uses.

YAML

apiVersion: v1
kind: ServiceAccount
metadata:
  name: hep-sa


Cluster Role

We need RBAC permissions to call the API server for HostEndpoint objects. The cluster role below requests the verbs the script needs on HostEndpoint objects from the appropriate API group.

YAML

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hep-cr
rules:
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - hostendpoints
    verbs:
      - create
      - get
      - list
      - update
      - watch


Cluster Role Binding

Next, we need to bind the cluster role to the service account, thereby granting the permissions to the Pod.

YAML

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hep-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hep-cr
subjects:
- kind: ServiceAccount
  name: hep-sa
  namespace: default
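
As a quick sanity check, kubectl auth can-i can confirm the binding works; this assumes the service account lives in the default namespace, as above, and should answer yes:

Shell

kubectl auth can-i create hostendpoints.crd.projectcalico.org --as=system:serviceaccount:default:hep-sa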


Pod and DaemonSet

Finally, create a DaemonSet object that will run a Pod with the desired service account on every node. The node name is injected into the container as an environment variable. It is good practice not to run containers with root privileges, so we use a non-root user to run the container.

YAML

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hep-ds
spec:
  selector:
    matchLabels:
      name: hep-ds
  template:
    metadata:
      labels:
        name: hep-ds
    spec:
      serviceAccountName: hep-sa
      containers:
      - image: randhirkumars/hepinstall:v1
        imagePullPolicy: Always
        name: hep-install
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 1337


Create all the objects; here, hep.yaml contains the service account, cluster role, cluster role binding, and DaemonSet manifests from above.

Shell

> kubectl apply -f hep.yaml
serviceaccount/hep-sa created
clusterrole.rbac.authorization.k8s.io/hep-cr created
clusterrolebinding.rbac.authorization.k8s.io/hep-crb created
daemonset.apps/hep-ds created

> kubectl get po
NAME                                     READY   STATUS    RESTARTS   AGE
hep-ds-9jjtq                             1/1     Running   0          2s
hep-ds-c97jz                             1/1     Running   0          2s
hep-ds-fbghm                             1/1     Running   0          2s
hep-ds-nbllb                             1/1     Running   0          2s


Check the logs of one of the Pods to ensure it creates the HostEndpoint for its node.

Shell

> kubectl logs hep-ds-9jjtq
k8s-node-2
Error from server (NotFound): hostendpoints.crd.projectcalico.org "k8s-node-2" not found
Creating hep for node k8s-node-2
hostendpoint.crd.projectcalico.org/k8s-node-2 created
k8s-node-2
NAME         AGE
k8s-node-2   0s
Found hep for node k8s-node-2
k8s-node-2
NAME         AGE
k8s-node-2   8s
Found hep for node k8s-node-2


Verify that a HostEndpoint has been created for each node.

Shell

> kubectl get hostendpoint
NAME              AGE
k8s-master-nf-1   36s
k8s-master-nf-2   39s
k8s-master-nf-3   36s
k8s-node-1        38s
k8s-node-2        36s


Conclusion

Kubernetes does not provide firewall functionality to protect the hosts in a cluster. We used Calico for that purpose: in addition to its CNI capabilities, it provides network policies to control traffic between pods as well as firewall functionality to secure nodes. The solution is Kubernetes-native and does not require installing any external software on the hosts.
