
Locking Down Application Access in AKS


Learn how to lock down application access in AKS.

· Security Zone ·

When you deploy a service of type LoadBalancer in AKS, or create an Ingress, AKS will automatically create an NSG rule for you. This rule is applied to the NSG attached to your nodes' NICs and opens up your service's port to "All." If you're planning to allow your application to be accessed by anyone, then this is fine. However, if you want to lock access down to specific users, it can be a bit of a pain. You will find that if you amend the rules AKS created for you, it will put them back again later, which is annoying. So, how do you lock your application down? Here's how:
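To see what triggers this behavior, here is a minimal Service of type LoadBalancer (the name and ports are illustrative); deploying it will cause AKS to add an allow-all rule for port 80 to the node NSG:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  type: LoadBalancer      # AKS provisions an Azure load balancer for this
  ports:
  - port: 80              # AKS opens this port to "All" in the node NSG
    targetPort: 8080
  selector:
    app: my-app
```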

Ingress

To get traffic into your cluster, you have a choice of using a service that is exposed via a load balancer or creating an Ingress resource for your cluster and directing all traffic through that. If you are looking to control inbound traffic, I would strongly recommend using an Ingress controller. By funneling all your traffic through the Ingress, you make it much easier to lock this down, as you are only dealing with one entry point and one IP. You can also use some of the features of the Ingress to lock things down.

In my environments I am using an Ingress with an Nginx Ingress controller, so the examples below will focus on this.

Subnet-Level NSG

The simplest solution is to override the NSG that AKS creates for you by creating a subnet-level NSG. For inbound traffic, NSGs are evaluated in order, subnet first, so if you configure your subnet NSG to allow traffic the way you want, it will take precedence. There are a few things to be aware of here:
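As a sketch, creating a subnet-level NSG and attaching it with the Azure CLI might look like this (the resource group, VNet, and subnet names are placeholders for your own environment):

```shell
# Create an NSG to hold our own rules
az network nsg create \
  --resource-group my-aks-rg \
  --name aks-subnet-nsg

# Associate it with the subnet the AKS nodes live in
az network vnet subnet update \
  --resource-group my-aks-rg \
  --vnet-name aks-vnet \
  --name aks-subnet \
  --network-security-group aks-subnet-nsg
```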

Inbound Rules

The only inbound rules you need to create are those for your application's ports. Create these rules as normal and apply any restrictions you require. If you are using an Ingress controller, you can scope your inbound rules to a destination IP of just your Ingress's external IP; you do not need to use any internal IPs.
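For example, an inbound rule that only allows a specific client range to reach the Ingress's external IP on port 443 could be created like this (all IPs, names, and the priority are illustrative):

```shell
# Allow HTTPS from a known client IP to the Ingress external IP only
az network nsg rule create \
  --resource-group my-aks-rg \
  --nsg-name aks-subnet-nsg \
  --name AllowHttpsFromOffice \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 201.202.203.10/32 \
  --destination-address-prefixes 40.121.0.10 \
  --destination-port-ranges 443
```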

Outbound Rules

The easiest setup for outbound traffic is to leave the default rules in place, which will allow all traffic outbound to the Internet. However, if you need to lock down outbound access, you can do so, but you need to ensure you allow the following traffic:

  • Outbound port 22 to the AzureCloud service tag
  • If you are using cert-manager alongside Nginx to issue certificates and are using DNS validation, you need to allow outbound traffic on port 53 (DNS)
  • Any external services your application needs to access will need to be allowed. If these are Azure services, you can use service tags to limit the scope (e.g. Sql, ServiceBus, AzureKeyVault)
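The outbound requirements above could be expressed as rules like the following (names and priorities are illustrative; note that service tags such as AzureCloud can be used directly as a destination prefix):

```shell
# Allow outbound port 22 to the AzureCloud service tag
az network nsg rule create \
  --resource-group my-aks-rg \
  --nsg-name aks-subnet-nsg \
  --name AllowAzureCloudSsh \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureCloud \
  --destination-port-ranges 22

# Allow outbound DNS for cert-manager DNS validation
az network nsg rule create \
  --resource-group my-aks-rg \
  --nsg-name aks-subnet-nsg \
  --name AllowDnsOut \
  --priority 110 \
  --direction Outbound \
  --access Allow \
  --protocol '*' \
  --destination-address-prefixes '*' \
  --destination-port-ranges 53
```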

Ingress Whitelisting

Adding an NSG will allow you to lock down access to your cluster; however, these rules will apply to all applications running on your cluster. If you need to limit access to some applications but not others, on the same cluster, then this may not work for you. In particular, if your applications are running on the same port and rely on Ingress routes to differentiate them, then using an NSG won't help. Your NSG will need to be configured for the application with the lowest restriction, which will then apply to all applications. To resolve this, we can look at filtering at the Ingress controller level instead.

Using Nginx as the Ingress controller means we have access to a number of configuration annotations, one of which allows us to configure a list of whitelisted IP addresses. These are applied at the Ingress level, so each separate Ingress can have its own list of allowed IPs. IPs that are not in the allowed list will receive a 403 error when they try to access the application.

Nginx Configuration

Before we can apply the Ingress annotation, we need to make a change to our Nginx setup. If you deploy Nginx in its default configuration (or use the Helm chart with no settings changed), you will find that your whitelist blocks access even for IPs that are on your allowed list. This is because access to the Ingress goes through an Azure load balancer; in the logs you will see all traffic coming from the load balancer IP, not the client IP. To ensure that the source IP is preserved, we need to set the external traffic policy to Local.

If your Nginx controllers were deployed using Helm, you can run Helm upgrade to apply this:

helm upgrade <releaseName> stable/nginx-ingress --set controller.service.externalTrafficPolicy=Local --set controller.service.type=LoadBalancer 


If you have manually deployed Nginx, you need to amend the service to set the externalTrafficPolicy:

  {
      "kind": "Service",
      "apiVersion": "v1",
      "metadata": {
        "name": "nginx"
      },
      "spec": {
        ...
        "type": "LoadBalancer",
        "externalTrafficPolicy": "Local"
      }
    }
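If you prefer not to edit the service definition by hand, the same change can be applied with kubectl patch (the service name and namespace here are assumptions about your deployment):

```shell
# Set externalTrafficPolicy to Local on the existing Nginx service
kubectl patch service nginx \
  --namespace ingress-nginx \
  --patch '{"spec": {"externalTrafficPolicy": "Local"}}'
```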


Whitelist Annotation

Once Nginx is up and running again with the correct external traffic policy, we can go ahead and apply the annotation to our Ingress. The annotation value is an IP range, or a comma-separated list of IP ranges, in CIDR notation. You can apply this using kubectl or using a YAML definition.

Kubectl

kubectl annotate ingress <ingressName> nginx.ingress.kubernetes.io/whitelist-source-range="201.202.203.10/32,187.212.32.12/32"


YAML

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/whitelist-source-range: "201.202.203.10/32,187.212.32.12/32"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 8080


If we now try to hit the app from an IP that is not in the whitelist, we will get a 403 response from Nginx.
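You can verify this from a non-whitelisted machine with curl; the hostname matches the example Ingress above and is illustrative:

```shell
# -I fetches headers only; from an IP outside the whitelist,
# Nginx responds with HTTP/1.1 403 Forbidden
curl -I http://foo.bar.com/foo
```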

[Image: Nginx returning a 403 Forbidden response]

API Lockdown

Just to round things off, a very common question I see is whether the management API for AKS can be locked down to specific IP ranges or use something like service endpoints. While I think this is a very valid question, unfortunately, at the time of writing, it is not possible to restrict which IP ranges can access the Kubernetes management API in AKS. The UserVoice entry for this has a comment from August last year saying this is being worked on, so hopefully, we will see some movement on this in the near future.

Image Attribution

Barrier flickr photo by Nik Stanbridge shared under a Creative Commons (BY-NC-ND) license.


