Locking Down Application Access in AKS

Learn how to lock down application access in AKS.

By Sam Cogan · Apr. 10, 19 · Tutorial
When you deploy a service of type LoadBalancer in AKS, or you create an Ingress, AKS will automatically create an NSG rule for you. This rule is applied to the NSG attached to your nodes' NICs and opens your service's port to "All." If you're planning on allowing your application to be accessed by anyone, then this is fine. However, if you want to lock access to your application down to specific users, then this can be a bit of a pain. You will find that if you amend the rules that AKS created for you, AKS will put them back again later, which is annoying. So, how do you lock your application down? Here's how:

Ingress

To get traffic into your cluster, you have a choice of using a service that is exposed via a load balancer or creating an Ingress resource for your cluster and directing all traffic through that. If you are looking to control inbound traffic, I would strongly recommend using an Ingress controller. By funneling all your traffic through the Ingress, you make it much easier to lock this down, as you are only dealing with one entry point and one IP. You can also use some of the features of the Ingress to lock things down.

In my environments, I am using an Ingress, with an Nginx Ingress controller, and so, the examples below will focus on this.
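If you don't already have an Ingress controller running, a minimal sketch of deploying one with the same stable/nginx-ingress Helm chart used later in this post might look like this (the release and namespace names here are illustrative, not requirements):

```shell
# Sketch: install the Nginx Ingress controller into its own namespace
# (Helm 2 syntax, matching the stable chart used below).
helm install stable/nginx-ingress \
  --name nginx-ingress \
  --namespace ingress \
  --set controller.replicaCount=2

# The external IP assigned to the controller's LoadBalancer service is the
# single entry point you will later lock down with NSG rules.
kubectl get svc --namespace ingress
```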

Subnet-Level NSG

The simplest solution overrides the effect of the NSG that AKS creates for you; you can do this by creating a subnet-level NSG. For inbound traffic, the subnet NSG is evaluated first, before the NIC-level NSG, so if you configure your subnet NSG to allow only the traffic you want, it takes effect regardless of the open rules AKS writes at the NIC level. There are a few things to be aware of here:
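As a sketch, creating an NSG and attaching it to the AKS subnet with the Azure CLI could look like the following (the resource group, VNet, and subnet names are placeholders for your own environment):

```shell
# Create an empty NSG (its default rules deny inbound traffic
# from the Internet).
az network nsg create \
  --resource-group my-aks-rg \
  --name aks-subnet-nsg

# Associate it with the subnet the AKS nodes live in. Because the subnet
# NSG is evaluated before the NIC-level NSG that AKS manages, its rules
# apply regardless of what AKS writes at the NIC level.
az network vnet subnet update \
  --resource-group my-aks-rg \
  --vnet-name aks-vnet \
  --name aks-subnet \
  --network-security-group aks-subnet-nsg
```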

Inbound Rules

The only inbound rules you need to create are those for your application's ports. Create these rules as normal, and apply any restrictions you require. If you are using an Ingress controller, then you can scope your inbound rules to a destination of just your Ingress's external IP; you do not need to use any internal IPs.
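For example, an inbound rule that only allows a specific source range to reach the Ingress's public IP over HTTPS could be sketched like this (all names and addresses below are illustrative):

```shell
# Allow 203.0.113.0/24 to reach the Ingress external IP on 443; everything
# else inbound from the Internet is blocked by the NSG's default rules.
az network nsg rule create \
  --resource-group my-aks-rg \
  --nsg-name aks-subnet-nsg \
  --name allow-office-https \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.0/24 \
  --destination-address-prefixes 40.112.0.10 \
  --destination-port-ranges 443
```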

Outbound Rules

The easiest setup for outbound traffic is to leave the default rules in place, which will allow all traffic outbound to the Internet. However, if you need to lock down outbound access, you can do so, but you need to ensure you allow the following traffic:

  • Outbound port 22 to the AzureCloud service tag
  • If you are using CertManager alongside Nginx to issue certificates and are using DNS validation, you need to allow traffic on port 53 outbound
  • Any external services your application needs to access will need to be allowed. If these are Azure services, you can use service tags to limit the scope (e.g. SQL, ServiceBus, KeyVault)
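Sketching the first of those requirements with the Azure CLI (resource names are placeholders), a rule allowing outbound port 22 to the AzureCloud service tag might look like:

```shell
# Let the nodes reach the AKS control plane tunnel over port 22.
# Service tags such as AzureCloud can be used as destination prefixes.
az network nsg rule create \
  --resource-group my-aks-rg \
  --nsg-name aks-subnet-nsg \
  --name allow-azurecloud-ssh \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureCloud \
  --destination-port-ranges 22
```

Rules for DNS on port 53 and for service tags such as Sql, ServiceBus, or AzureKeyVault follow the same pattern.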

Ingress Whitelisting

Adding an NSG will allow you to lock down access to your cluster; however, these rules will apply to all applications running on your cluster. If you need to limit access to some applications but not others, on the same cluster, then this may not work for you. In particular, if your applications are running on the same port and rely on Ingress routes to differentiate them, then using an NSG won't help. Your NSG will need to be configured for the application with the lowest restriction, which will then apply to all applications. To resolve this, we can look at filtering at the Ingress controller level instead.

Using Nginx as the Ingress controller means we have access to a number of configuration annotations, one of which lets us configure a whitelist of IP addresses that is applied at the Ingress level, so each separate Ingress can have its own list of allowed IPs. IPs that are not in the allowed list will receive a 403 error when they try to access the application.

Nginx Configuration

Before we can apply the Ingress annotation, we need to make a change to our Nginx setup. If you deploy Nginx in its default configuration (or use the Helm chart with no settings changed), you will find that your whitelist blocks access even for IPs that are on your allowed list. This is because access to the Ingress goes through an Azure load balancer: in the logs, all traffic appears to come from the load balancer IP, not the client IP. To ensure that the source IP is preserved, we need to set the service's external traffic policy to Local.

If your Nginx controllers were deployed using Helm, you can run Helm upgrade to apply this:

helm upgrade <releaseName> stable/nginx-ingress --set controller.service.externalTrafficPolicy=local --set controller.service.type=LoadBalancer 


If you have manually deployed Nginx, you need to amend the service to set the externalTrafficPolicy:

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "nginx"
  },
  "spec": {
    ...
    "type": "LoadBalancer",
    "externalTrafficPolicy": "Local"
  }
}
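Rather than editing the full service definition, the same change can be applied in place with kubectl patch (the service name "nginx" here is illustrative; use whatever your controller's service is called):

```shell
# Set externalTrafficPolicy to Local so Nginx sees the real client IP.
kubectl patch svc nginx \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```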


Whitelist Annotation

Once Nginx is up and running again with the correct traffic policy, we can go ahead and apply the annotation to our Ingress resource. The annotation value is a single IP range, or a comma-separated list of IP ranges. You can apply it using kubectl or a YAML definition.

Kubectl

kubectl annotate ingress <ingress-name> nginx.ingress.kubernetes.io/whitelist-source-range="201.202.203.10/32,187.212.32.12/32"


YAML

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/whitelist-source-range: "201.202.203.10/32,187.212.32.12/32"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 8080


If we now try to hit the app from an IP that is not in the whitelist, we will get a 403 response from Nginx.

[Image: Nginx returning a 403 Forbidden response]

API Lockdown

Just to round things off, a very common question I see is whether the management API for AKS can be locked down to specific IP ranges, or use something like service endpoints. While I think this is a very valid question, unfortunately, at the time of writing, it is not possible to restrict which IP ranges can access the Kubernetes management API in AKS. The UserVoice entry for this has a comment from August last year saying this is being worked on, so hopefully we will see some movement on this in the near future.

Image Attribution

Barrier flickr photo by Nik Stanbridge shared under a Creative Commons (BY-NC-ND) license.


Opinions expressed by DZone contributors are their own.
