Kubernetes has several services that make up the control plane. It’s important to address the security of these services at the cluster level and not rely on external security protections like network firewalls, as an attacker may be able to access these services via vulnerabilities in the applications running on your cluster (e.g. via a server-side request forgery attack) in a way that would bypass traditional network firewall protection.
The Kubernetes API server is the main entry point for management of a Kubernetes cluster and runs on the control plane node(s). It will typically listen on
6443/TCP, although it may also be running on
8443/TCP, depending on the cluster configuration. Additionally, on some clusters, the insecure API port may be enabled. This is a legacy option but is still seen on some clusters; typically, this service will be available on 8080/TCP.
There have been a number of cases of unauthorized access to the Kubernetes API and, in particular, the dashboard area leading to system compromise, showing how important it is that this is secured.
Checking that the API server has been configured securely is achieved by reviewing the start-up flags used when launching it. The precise location of these flags will depend on the installation method used. When using kubeadm, these options can be found in
/etc/kubernetes/manifests/kube-apiserver.yaml. The key parameters to check are:
--anonymous-auth: This should be set to “false” explicitly, as the default is to allow some anonymous access to the API server.
--insecure-bind-address: This should not be set, even to the localhost address.
--insecure-port: This should be set to 0 to ensure that it is not configured.
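As a quick sketch of this review, the flags can be pulled out of the manifest with a short shell loop. The inline sample manifest below is hypothetical; on a real kubeadm control plane node, point MANIFEST at /etc/kubernetes/manifests/kube-apiserver.yaml instead.

```shell
# Hypothetical sample of the relevant lines from a kubeadm manifest;
# on a real node, use the actual kube-apiserver.yaml file.
MANIFEST=$(mktemp)
cat > "$MANIFEST" <<'EOF'
    - --anonymous-auth=false
    - --insecure-port=0
EOF

# Report each security-relevant flag, warning when it is absent.
for flag in anonymous-auth insecure-bind-address insecure-port; do
  grep -o -- "--$flag=[^ ]*" "$MANIFEST" \
    || echo "WARNING: --$flag not set explicitly"
done
rm -f "$MANIFEST"
```

Note that the warning for --insecure-bind-address is the desired outcome here, since that flag should not be set at all; the point of the sketch is simply to surface each flag's state for review.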
In addition to the main Kubernetes API, a key network service (and one which is often poorly protected) is the Kubelet, which runs on some or all nodes of the cluster, depending on the deployment mechanism used. This service is responsible for managing the container runtime on each cluster node (e.g., Docker or CRI-O) and as such has a wide range of privileges on the server it’s running on. Unauthenticated Kubelet access is a common problem with older Kubernetes versions, as only recent versions require authentication by default. As with the Kubernetes API, a number of attacks have exploited this service after it was left exposed to the Internet without appropriate protection.
The Kubelet will typically be running on two ports on each node. Port
10250/TCP is the read/write service and port
10255/TCP is the read-only port, which is used to expose information for cluster monitoring services. Unless required, remove the read-only port entirely from your configuration, as it doesn’t have the option to require authentication. All access to the read/write port should require authentication. The Kubelet configuration is managed similarly to that of the Kubernetes API server via start-up parameters. The main ones to look for are:
--anonymous-auth: This should be set to false.
--read-only-port: This should be set to 0.
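The same kind of check applies here. In the sketch below, KUBELET_CMDLINE is a hypothetical sample; on a real node it could be captured from the running process (for example, with tr '\0' ' ' < /proc/$(pgrep -o kubelet)/cmdline).

```shell
# Hypothetical sample of a Kubelet command line; on a real node,
# capture it from the running kubelet process instead.
KUBELET_CMDLINE='/usr/bin/kubelet --anonymous-auth=false --read-only-port=0'

# Report each flag, warning when it has been left at its default.
for flag in anonymous-auth read-only-port; do
  echo "$KUBELET_CMDLINE" | grep -o -- "--$flag=[^ ]*" \
    || echo "WARNING: --$flag left at its default"
done
```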
At least one etcd key/value store is provisioned with almost every Kubernetes cluster to provide persistent storage of cluster configuration information. Unauthorized access to the etcd database can have serious consequences for the cluster’s security, as it contains sensitive information such as cluster secrets.
As with the other key Kubernetes features, there has been evidence of services being left exposed on the Internet without appropriate security measures, so this is an important area to check.
The etcd database will typically listen on port
2379/TCP for client access and
2380/TCP for peer access. All access to it should require authentication by setting the following configuration parameters:
--client-cert-auth: This should be set to true.
--peer-client-cert-auth: This should also be set to true.
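On a kubeadm cluster these flags can be checked in the same way as the API server's, against the etcd static pod manifest. The inline sample below is hypothetical; on a real control plane node, point MANIFEST at /etc/kubernetes/manifests/etcd.yaml instead.

```shell
# Hypothetical sample of the relevant lines from a kubeadm etcd manifest;
# on a real node, use the actual etcd.yaml file.
MANIFEST=$(mktemp)
cat > "$MANIFEST" <<'EOF'
    - --client-cert-auth=true
    - --peer-client-cert-auth=true
EOF

# Both flags should be explicitly set to true.
for flag in client-cert-auth peer-client-cert-auth; do
  grep -o -- "--$flag=true" "$MANIFEST" \
    || echo "WARNING: --$flag is not set to true"
done
rm -f "$MANIFEST"
```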
It’s also worth reviewing your clusters for other instances of the etcd service, as some network plugins make use of separate instances for their own purposes and these may have different security settings than those set on the main database.
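One rough way to look for such stray instances is to probe cluster nodes for the etcd client port. The node addresses below are placeholders for illustration; substitute your own node list.

```shell
# Probe a (hypothetical) list of node addresses for open etcd client
# listeners on 2379/TCP, using bash's /dev/tcp with a short timeout.
for host in 10.0.0.5 10.0.0.6; do
  if timeout 1 bash -c "echo > /dev/tcp/$host/2379" 2>/dev/null; then
    echo "$host: etcd client port open"
  fi
done
echo "scan complete"
```

Any listener found this way that isn't the main cluster database deserves the same authentication review as above.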
Securing the Container Environment
Once you’ve secured the management interfaces against unauthenticated access from outside the cluster, your next step in securing Kubernetes should be to analyze how attackers might compromise a pod and what they might be able to do from there. In addition to the access available to external attackers, access to a single container may provide a number of additional avenues of attack:
- Host filesystem access
- Container network access
- Access to a service token
- Node kernel access
Each of these attack paths can be addressed by Kubernetes security mechanisms.
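As one illustration, a pod specification can close off several of these avenues directly. The pod and image names below are hypothetical placeholders; the fields are standard Kubernetes pod and securityContext settings.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-app                  # hypothetical example pod
spec:
  automountServiceAccountToken: false   # no service token for an attacker to steal
  containers:
  - name: app
    image: example.com/app:1.0          # placeholder image
    securityContext:
      privileged: false                 # no direct access to host devices
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      seccompProfile:
        type: RuntimeDefault            # restrict access to node kernel syscalls
```

Container network access is handled separately, via NetworkPolicy objects rather than the pod spec itself.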
The first step when considering the individual containers in an environment is trying to stop an attacker from compromising them in the first place. An attack could happen via unpatched application software, configuration issues, or errors in custom code that has been deployed into the containers.
As most container images will come from a common distribution base such as Debian or Alpine, reviewing them for missing patches is handled similarly to the process for any other Linux-based system. Custom tooling may be required, as some patch management systems are not container-aware and, as such, won’t effectively scan inside the images.
There are a number of options for this. Aqua Security provides MicroScanner, which scans images for vulnerabilities and malware based on public and proprietary sources, and can be used in conjunction with Aqua’s runtime protection to assess image security and block any suspicious container activity based on container runtime profiles. There are also some standalone container vulnerability-scanning tools that can be useful where convenient cloud access isn’t available; both Clair and Dagda can be used offline.
It’s worth noting how these tools tend to work when reviewing container images for vulnerabilities. Where they’re looking at issues in system software (e.g., a web server), they usually base their analysis on the package manager used by the image (e.g., apt in Debian or Ubuntu, or yum in Fedora). This is important: when images that aren’t based on a common distribution are used, some vulnerability-scanning tools may not be able to detect weaknesses, as there is no central vulnerability database and package metadata to query.
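To make the package-metadata point concrete, here is a toy sketch of the approach. Both the installed-package sample and the "known bad" list are invented for illustration; a real scanner would extract the package list from the image (e.g., via dpkg-query on a Debian-based image) and compare it against a maintained vulnerability database.

```shell
# Invented sample data: roughly what `dpkg-query -W` might report for an
# image, plus a stand-in for a real vulnerability database.
INSTALLED='openssl 1.1.1n-0+deb11u3
bash 5.1-2+deb11u1'
KNOWN_BAD='openssl 1.1.1n-0+deb11u3'

# Flag any installed package/version pair that appears in the bad list.
echo "$INSTALLED" | while read -r pkg ver; do
  if echo "$KNOWN_BAD" | grep -qx "$pkg $ver"; then
    echo "VULNERABLE: $pkg $ver"
  fi
done
```

This also shows why non-distribution base images are a blind spot: with no package database inside the image, there is nothing for this kind of comparison to work from.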