
Azure Kubernetes Service (AKS) Security Features


Want to learn more about how to use AKS to secure your applications?


Today, we are deploying a Kubernetes cluster for our application. Azure Kubernetes Service (AKS) has an advantage over similar Kubernetes platforms: the user does not pay for the master VMs or their maintenance. An Azure subscriber pays only for the worker VMs. However, AKS out of the box is not a production-ready product. The following are the steps we took to get almost production-ready.

In this article, we are going to discuss the following topics:

  1. AKS with Role-Based Access Control (RBAC)
  2. AKS secrets
  3. Ingress Controller and Ingresses
  4. Network policies
  5. Image Hardening
  6. Azure API management

AKS With Role-Based Access Control (RBAC)

Create an AKS cluster and make sure RBAC is enabled at creation time (the default comes with no RBAC, which means all-or-nothing permissions on your cluster). Integrate the cluster with Azure Active Directory (AAD) as the following guide shows.

https://docs.microsoft.com/en-us/azure/aks/aad-integration

Create groups in AAD, and note their object IDs from the Azure portal. Use ClusterRoleBinding and RoleBinding to attach the AAD groups to default cluster roles such as cluster-admin, edit, and view.

Example for ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
 name: <Name-of-the-ClusterRoleBinding>
roleRef:
 apiGroup: rbac.authorization.k8s.io
 kind: ClusterRole
 name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: <Object-ID-of-the-AAD-group>


Create additional groups in AAD and map them to RoleBindings so you can isolate one namespace from another, in case you are using multiple namespaces for different environments.

Examples for RoleBinding and Roles Applied to a Namespace:

Role:

Roles are used to grant permissions within a namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-dev-role
  namespace: <namespace>
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
  - delete
  - create
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - watch
  - delete
  - create


RoleBinding:

Once roles are defined to grant permissions to resources, you assign those Kubernetes RBAC permissions with a RoleBinding.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-dev-rolebinding
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-dev-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: <object ID of AAD group>


AKS Secrets

(The below content is taken from the Microsoft docs.)

A Kubernetes Secret is used to inject sensitive data into pods, such as access credentials or keys. You first create a Secret using the Kubernetes API. When you define your pod or deployment, a specific Secret can be requested. Secrets are only provided to nodes that have a scheduled pod that requires it, and the Secret is stored in tmpfs, not written to a disk. When the last pod on a node that requires a Secret is deleted, the Secret is deleted from the node's tmpfs. Secrets are stored within a given namespace and can only be accessed by pods within the same namespace.

The use of Secrets reduces the sensitive information that is defined in the pod or service YAML manifest. Instead, you request the Secret stored in Kubernetes API Server as part of your YAML manifest. This approach only provides the specific pod access to the Secret.
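One caveat worth keeping in mind: the values stored in a Secret manifest are only base64-encoded, not encrypted, so a manifest that embeds secret data should be treated as sensitive itself. A quick illustration (the password here is a made-up example):

```shell
# Secret data in a Kubernetes manifest is base64-encoded, not encrypted.
# Anyone who can read the manifest can trivially decode the value.
ENCODED=$(printf '%s' 'S3cr3tP@ss' | base64)
echo "encoded: $ENCODED"    # the form that appears in the manifest
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "decoded: $DECODED"    # the original value, recovered
```

This is why access to Secrets should be restricted with the RBAC roles described earlier, and why manifests containing secret data should not be committed to source control.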

In our use case, we used the Kubernetes Secrets feature to store the keys and certificates issued by GoDaddy, which are applied to all the ingresses.

Example: Create the GoDaddy TLS Secret

kubectl -n <namespace> create secret tls <godaddy-secret name> --key <path to the private key\private.key> --cert <path to the certificate\certificate.crt>
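The ingresses below reference this Secret by name via secretName. For completeness, this is roughly how a pod could also consume the same Secret as a mounted volume (the pod and mount names here are illustrative, not from our deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tls-demo                  # illustrative name
  namespace: <namespace>
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: tls-certs
      mountPath: /etc/tls         # tls.crt and tls.key show up here
      readOnly: true
  volumes:
  - name: tls-certs
    secret:
      secretName: <godaddy-secret name>
```

A Secret of type kubernetes.io/tls stores the certificate and key under the fixed keys tls.crt and tls.key, which is what appears under the mount path.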

Ingress Controller and Ingresses

Create an ingress controller so that traffic enters and leaves the cluster through a single entry and exit point. We used the Kubernetes version of the NGINX ingress controller.

https://kubernetes.github.io/ingress-nginx/deploy/#mandatory-command

https://kubernetes.github.io/ingress-nginx/deploy/#generic-deployment

Create ingresses for all your services that need to be accessed from the outside world.

Example for Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: <Name of the Ingress>
  namespace: <Namespace>
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - ui.domainname.com
    - appstream.domainname.com
    - auth0.domainname.com
    - cleanup.domainname.com
    - cleanupresource.domainname.com
    secretName: godaddy-secret
  rules:
  - host: ui.domainname.com
    http:
      paths:
      - backend:
          serviceName: tlui
          servicePort: 80


  - host: appstream.domainname.com
    http:
      paths:
      - backend:
          serviceName: appstream
          servicePort: 7075


  - host: auth0.domainname.com
    http:
      paths:
      - backend:
          serviceName: auth0
          servicePort: 7086


  - host: cleanup.domainname.com
    http:
      paths:
      - backend:
          serviceName: cleanup
          servicePort: 7092


Please follow the link below if you face any issues with ingresses.

https://dzone.com/articles/aks-common-issue-faced-when-internet-traffic-hits

Network Policies

By default, Kubernetes has an open network where every pod can talk to every pod.

Create network policies to control traffic from one pod to another, or from IP addresses outside the cluster.

Kube-router can be deployed as a DaemonSet and offers this functionality. It can be installed on your AKS cluster using the following YAML.

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-router
  namespace: kube-system
  labels:
    k8s-app: kube-router
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-router
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      containers:
      - name: kube-router
        image: cloudnativelabs/kube-router
        args: ["--run-router=false", "--run-firewall=true", "--run-service-proxy=false", "--kubeconfig=/var/lib/kube-router/kubeconfig"]
        securityContext:
          privileged: true
        imagePullPolicy: Always
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        livenessProbe:
          httpGet:
            path: /healthz
            port: 20244
          initialDelaySeconds: 10
          periodSeconds: 3
        volumeMounts:
        - name: kubeconfig
          mountPath: /var/lib/kube-router/kubeconfig
          readOnly: true
        - name: kubecerts
          mountPath: /etc/kubernetes/certs/
          readOnly: true
      nodeSelector:
        beta.kubernetes.io/os: linux
      initContainers:
      - name: install-cni
        image: busybox
        imagePullPolicy: Always
        command:
        - /bin/sh
        - -c
        - set -e -x;
      hostNetwork: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: kubeconfig
        hostPath:
          path: /var/lib/kubelet/kubeconfig
      - name: kubecerts
        hostPath:
          path: /etc/kubernetes/certs/


Start by denying all ingress and egress traffic, then add allowlist rules as needed. Make sure every pod can still egress to the DNS server.

Deny all ingress traffic: Enable ingress isolation on the namespace by deploying the following YAML file.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: <namespace>
  name: deny-ingress
spec:
  podSelector: 
    matchLabels:
      environment: <env-name>
  policyTypes:
  - Ingress


Deny all egress traffic: Enable egress isolation on the namespace by deploying the following YAML file.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: <namespace>
  name: deny-egress
spec:
  podSelector: 
    matchLabels:
      environment: <env-name>
  policyTypes:
  - Egress


Allow egress traffic within the namespace: Create a NetworkPolicy that allows egress traffic from any pod within the namespace by deploying the following YAML file.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: <namespace>
  name: allow-from-same-namespace-and-egress
spec:
  podSelector:
    matchLabels:
      environment: <env-name>
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          tl-int: <namespace>
    - podSelector: {}


Allow ingress traffic within the namespace: Create a NetworkPolicy that allows ingress traffic from any pod within the namespace by deploying the following YAML file.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: <namespace>
  name: allow-from-same-namespace-and-ingress
spec:
  podSelector:
    matchLabels:
      environment: <env-name>
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          tl-int: <namespace>
    - podSelector: {}


Allow DNS egress traffic: Create the label kube-system=kube-system on the kube-system namespace, then create a NetworkPolicy that allows DNS egress traffic from any pod in your namespace to the kube-system namespace.

# Execute the following command before applying this network policy
# kubectl label namespace kube-system kube-system=kube-system
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: <namespace>
  name: allow-dns
spec:
  podSelector: 
    matchLabels:
      environment: <env-name>
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kube-system: kube-system
    - podSelector:  
        matchLabels:
          k8s-app: kube-dns


Allow Internet egress for services that need to reach endpoints outside the cluster, using the following YAML file.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: <namespace>
  name: allow-<servicename>
spec:
  podSelector:
    matchLabels:
      service-name: <servicename>
  egress:
  - {} 
  policyTypes: 
  - Egress


Image Hardening

By default, Docker containers run as the root user. To run containers as a less privileged user, like "testuser" in the following example, modify your Dockerfile.

Note that the FROM image in the following Dockerfile is the existing image currently being used, which runs as root.

FROM <your repo>/<your project>/<your image>:<your tag>
ARG user=testuser
ARG group=testuser
ARG uid=2000
ARG gid=2000
ENV TEST_HOME /home/testuser
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$TEST_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user} 
RUN chown -R ${user} "$TEST_HOME"
#RUN chown -R ${user} "/usr/src"
USER ${user}
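
In addition to the Dockerfile change, Kubernetes can enforce the same constraint at deploy time through the pod securityContext. A minimal sketch (the pod name is illustrative; the uid matches the testuser created above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo              # illustrative name
spec:
  securityContext:
    runAsNonRoot: true            # kubelet refuses to start a container running as uid 0
    runAsUser: 2000               # same uid assigned to testuser in the Dockerfile
  containers:
  - name: app
    image: <your repo>/<your project>/<your image>:<your tag>
    securityContext:
      allowPrivilegeEscalation: false
```

With runAsNonRoot set, an image that slips back to running as root fails to start instead of running with elevated privileges.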


Azure API management

(The below content is taken from the Microsoft docs)

API Management (APIM) helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. Businesses everywhere are looking to extend their operations as a digital platform, creating new channels, finding new customers, and driving deeper engagement with existing ones. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. You can use Azure API Management to take any backend and launch a full-fledged API program based on it.

Advantages of Using an API Management Platform

  • Secured and protected channel between API gateway and backend
  • Request authentication and authorization, from consumer to API
  • Business and operational insights through reports and dashboards
  • Interactive API documentation
  • Facade layer to decouple internal implementation (extending life of older APIs)
  • Self-service onboarding and efficient offboarding processes
  • Up and running quickly
  • Multi-region deployment makes APIs “local” to all consumers with a global content delivery network, resulting in improved app response time and local cache
  • All APIs accessible from one place, enabling centralized integration, no matter where backend systems reside
  • Elasticity and scalability
  • Superior security as cloud providers are held to higher standards and certifications than corporate data centers
  • Turn-key solution out of the box (no salespeople or lengthy negotiations when API management is a cloud service)

In our use case, we imported the Swagger endpoints of our API services into API Management.

For more information, please visit the following link.

Kubernetes Dashboard

Kubernetes includes a web dashboard that can be used for basic management operations. This dashboard lets you view basic health status and metrics for your applications, create and deploy services, and edit existing applications.

If your AKS cluster uses RBAC, a ClusterRoleBinding must be created before you can access the dashboard correctly. Use the command below to create the ClusterRoleBinding.

kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=view --serviceaccount=kube-system:kubernetes-dashboard


You can check the permissions of the "view" ClusterRole by using the following command.

kubectl get clusterroles view -o yaml


You can now access the Kubernetes dashboard in your RBAC-enabled cluster. To start the Kubernetes dashboard, use the command below.

az aks browse --resource-group <RGName> --name <ClusterName>


Upgrade an AKS Cluster

Make sure you are running the latest version of AKS, which includes the most recent security fixes.

az aks upgrade --name <ClusterName> --resource-group <RGName> --kubernetes-version <version>


For more information, see the AKS upgrade documentation in the reference links below.

Upcoming Publications

1. Limit range

2. Resource quota

3. HPA (Horizontal Pod Autoscaling)

4. Cluster Autoscaling

Reference Links

https://docs.microsoft.com/en-us/azure/aks/aad-integration
https://kubernetes.github.io/ingress-nginx/deploy/#mandatory-command
https://kubernetes.github.io/ingress-nginx/deploy/#generic-deployment
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
https://docs.microsoft.com/en-us/azure/aks/upgrade-cluster


Opinions expressed by DZone contributors are their own.
