Kubernetes Secrets Management
We take a look at how to effectively configure secrets in Kubernetes so dev teams can add needed security to their K8s instances.
The modularity of Kubernetes—and the container environment—means apps and microservices can be deployed to multiple servers in a seamless way. To take advantage of that modularity, it is necessary to develop your app or web service to be as fluid as the environment in which it is running.
Thankfully, configuration key-value pairs can be used to make web services or apps compatible with different environments through Kube ConfigMaps. Externalizing these values allows developers to transition seamlessly from the development environment to testing and production without changing the code directly.
Kubernetes supports the use of these values in different ways. ConfigMap properties and keys can be injected into containers running inside a pod as environment variables or mounted as configuration files. (Tools such as direnv and autoenv serve a similar purpose for local shell environments.) While this method is effective, it is not always the most efficient way to go.
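As a sketch of the first approach, a ConfigMap's keys can be injected wholesale into a container as environment variables. All names and the image below are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # hypothetical name
data:
  DATABASE_HOST: db.example.internal
  LOG_LEVEL: info
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: example/app:latest # hypothetical image
      envFrom:
        - configMapRef:
            name: app-config    # injects every key as an environment variable
```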
Kubernetes Secrets are in fact a special type of ConfigMap designed to hold sensitive data. If you have many configs, importing them into the container as a single configuration file will save you time, since you avoid declaring one variable per setting. The app then has to be able to read those settings from the config file instead of from environment variables.
So, how can you manage Kubernetes Secrets better?
Why Use a Kubernetes Secret?
A Kubernetes Secret is mainly designed to carry sensitive information that the web service needs to run. This includes information such as username and password, tokens for connecting with other pods, and certificate keys. Putting sensitive information in a Secret object allows for better security and tighter control over those details.
Secrets are also easy to integrate with existing services. You just have to tell the pods to use the custom Secrets you have created alongside the native Secrets created by Kubernetes. This means you can use Secrets to make deploying a web service across multiple clusters easier.
It is also worth noting that Secret values are base64-encoded. You convert strings or values into base64 and decode them back before use. The encoding/decoding process is already built into Kubernetes, eliminating the need for third-party tools, which makes storing sensitive environment variables more seamless. Keep in mind, though, that base64 is a reversible encoding, not encryption.
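A minimal shell sketch of the encode/decode round trip (the value supersecret is only a placeholder):

```shell
# Encode a value for use in a Secret, then decode it again.
# base64 is reversible encoding, not encryption.
encoded=$(printf '%s' 'supersecret' | base64)
echo "$encoded"    # c3VwZXJzZWNyZXQ=

decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"    # supersecret
```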
It’s important not to commit base64-encoded Secrets to version control, as they can be easily decoded by anyone. If you do need to keep Secrets in Git, encrypt them first with a tool such as a Key Management Service (KMS) or Pretty Good Privacy (PGP). Once the content of the Kube Secret is encrypted, it can be safely stored in Git. This will require the application pod, or the deployment pipeline, to decrypt those settings at run-time, so some application changes will be needed.
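As an illustrative sketch of the PGP route using GnuPG with a symmetric passphrase — the file names and passphrase here are hypothetical, and a real pipeline would source the passphrase from a CI secret rather than hard-coding it:

```shell
#!/bin/sh
# Illustrative only: encrypt a Secret manifest before committing it to Git.
command -v gpg >/dev/null 2>&1 || { echo "gpg not installed; skipping demo"; exit 0; }

# A stand-in Secret manifest (never commit this plaintext version).
printf 'password: c3VwZXJzZWNyZXQ=\n' > secret.yaml

# Encrypt: secret.yaml.gpg is what goes into version control.
gpg --batch --yes --pinentry-mode loopback --passphrase 'example-pass' \
    --symmetric --cipher-algo AES256 -o secret.yaml.gpg secret.yaml \
    || { echo "symmetric encryption unavailable here; skipping"; exit 0; }

# Decrypt at deploy time, just before applying the manifest with kubectl.
gpg --batch --yes --pinentry-mode loopback --passphrase 'example-pass' \
    -o roundtrip.yaml --decrypt secret.yaml.gpg

diff secret.yaml roundtrip.yaml && echo "round-trip OK"
```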
If Kube Secrets are created directly in the cluster via the kubectl command line, it’s important to be careful about which cluster users have access to Secrets (this can and should be limited by RBAC policies).
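A sketch of such an RBAC policy, with hypothetical names, granting one user read-only access to Secrets in a single namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader            # hypothetical role name
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: production
subjects:
  - kind: User
    name: ops-admin              # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```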
The Basics of Using Secrets
Creating Secrets and storing values in them is relatively easy. You start by converting each value you want to store to base64 using the command
echo -n '[value]' | base64 and capturing the output. You can then create a YAML file holding your app configurations. A standard Secret file looks something like this:
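A minimal sketch, with hypothetical names and base64-encoded placeholder values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials          # hypothetical name
type: Opaque
data:
  # values are the base64-encoded strings captured with echo -n | base64
  username: YWRtaW4=             # 'admin'
  password: c3VwZXJzZWNyZXQ=     # 'supersecret'
```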
The rest is easy from there. Once you have the Secret’s YAML file, you can create it in the Kube cluster with the command
kubectl create -f [location of your secret] and start using the values you put in it as part of the web service.
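For instance, a pod can pull an individual value out of a Secret and expose it as an environment variable. The Secret name app-credentials and key password below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-service              # hypothetical pod
spec:
  containers:
    - name: web
      image: example/web:latest  # hypothetical image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-credentials   # a hypothetical Secret
              key: password
```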
Of course, this is the most basic way of using Secrets. There are more ways you can manage Kubernetes Secrets as part of a robust environment. You can, for example, use a cloud-managed secret storage.
If you're using Helm, helm-secrets is a great tool for managing secrets and storing them encrypted in version control.
Amazon has AWS Secrets Manager built into its ecosystem. It supports advanced features such as automatic rotation of credentials with a well-defined lifecycle. The AWS Secrets Manager also allows you to manage the entire security aspect of your cluster or environment from a centralized console, with logging and monitoring features also available.
Secrets management features for Azure are available via Key Vault, which acts as the centralized console for Secrets management in your Kubernetes environment.
The real standouts in Secrets management, however, are the open-source solutions developed by the community and third-party developers. Vault by HashiCorp is a popular option to look into. The solution takes security to a whole new level, not only from a technical standpoint but from a user’s perspective too.
Vault is designed with a user-friendly interface. The whole concept behind Vault is giving developers with little experience in server administration access to comprehensive management tools for tokens, passwords, certificates, and encryption keys.
Vault is also compatible with every cloud infrastructure regardless of the way the environment is set up. On top of that, there is support for dynamic Secrets and cross-cluster replication. To further complete the management tools, Vault comes with an API for different functions.
Secrets in the Code
Despite the expansive secrets management tooling, it is still possible to run into issues with deployment, especially when you are deploying to multiple environments—though the use of Helm, helm-secrets, and helmfile will make this process much easier. The need for unique variables that match each environment still requires you to push secrets to their respective clusters. In short, there is still a real possibility of mistakes being made along the way.
Pushing secrets alongside the code and service configurations is certainly easy to do. You can encrypt the secrets and later add a decryption routine to the deployment, or the decryption routine can be built into the application so that it decrypts the secrets at runtime. For example, you can use
aws kms decrypt to recover the encrypted secrets, and then add a
kubectl step to apply those secrets. This is as seamless as it gets, although the method is not compatible with all environments.
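A sketch of such a CI step, assuming the AWS CLI is configured and a ciphertext file secret.enc was produced earlier by aws kms encrypt (the file names are illustrative):

```shell
#!/bin/sh
# Illustrative CI step: decrypt a committed ciphertext and apply the Secret.
# Skips cleanly when there is no ciphertext or no AWS CLI on this machine.
[ -f secret.enc ] && command -v aws >/dev/null 2>&1 \
    || { echo "no secret.enc or aws CLI; skipping"; exit 0; }

# aws kms decrypt returns the plaintext base64-encoded in the Plaintext field.
aws kms decrypt \
    --ciphertext-blob fileb://secret.enc \
    --query Plaintext --output text | base64 --decode > secret.yaml

# Apply the decrypted Secret manifest, then remove the plaintext from the runner.
kubectl apply -f secret.yaml
rm secret.yaml
```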
Nevertheless, Kubernetes Secrets make anything from migrating to the production environment to deploying across multiple clusters easier. Use the feature to help streamline your CI/CD workflow further.
Published at DZone with permission of Juan Ignacio Giro. See the original article here.