
Using Hashicorp Vault on Kubernetes

Learn how to put secret storage using HashiCorp Vault into practice.

By Gabriel Garrido · Updated Dec. 25, 2019 · Tutorial

Introduction

In a previous article, we configured Vault with Consul on our cluster. Now, it’s time to go ahead and use it to provision secrets to our pods/applications. If you don’t remember the post or haven’t configured Vault yet, head to Getting Started with HashiCorp Vault on Kubernetes first.

In this article, we will create an example using mutual TLS and provision some secrets to our app. You can find the files used here in this repo.

Creating a Certificate for Our New Client

As you can see below, we need to enable kv version 1 on /secret for this to work. Then we create a separate client certificate and store it as a Kubernetes secret for our app. Note that the CA was created in the previous article; we rely on those certificates so we can keep building on that setup.

# For this to work, we need to enable the path /secret with kv version 1
$ vault secrets enable -path=secret -version=1 kv

# Then create a separate certificate for our client (Important in case we need or want to revoke it later)
$ consul tls cert create -client -additional-dnsname vault
==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-client-consul-1.pem
==> Saved dc1-client-consul-1-key.pem

# And store the certs as a Kubernetes secret so our pod can use them
$ kubectl create secret generic myapp \
  --from-file=certs/consul-agent-ca.pem \
  --from-file=certs/dc1-client-consul-1.pem \
  --from-file=certs/dc1-client-consul-1-key.pem
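
To double-check that the certificates made it into the cluster before wiring them into a pod, a quick inspection (object name as created above):

# List the keys stored in the myapp secret
$ kubectl get secret myapp -o json | jq '.data | keys'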

A Service Account for Kubernetes

In Kubernetes, a service account provides an identity for processes that run in a pod so that processes can contact the API server.

$ cat vault-auth-service-account.yml
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: vault-auth
    namespace: default
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: role-tokenreview-binding
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:auth-delegator
  subjects:
  - kind: ServiceAccount
    name: vault-auth
    namespace: default

# Create the 'vault-auth' service account
$ kubectl apply --filename vault-auth-service-account.yml
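
A quick sanity check that both objects exist before moving on:

# Verify the service account and the token review binding
$ kubectl get sa vault-auth
$ kubectl get clusterrolebinding role-tokenreview-binding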



Vault Policy

Next, we need to set a read-only policy for our secrets, as we don't want any app to be able to write or overwrite them.


# Create a policy file, myapp-kv-ro.hcl
$ tee myapp-kv-ro.hcl <<EOF
# If working with K/V v1
path "secret/myapp/*" {
    capabilities = ["read", "list"]
}

# If working with K/V v2
path "secret/data/myapp/*" {
    capabilities = ["read", "list"]
}
EOF

# Create a policy named myapp-kv-ro
$ vault policy write myapp-kv-ro myapp-kv-ro.hcl

$ vault kv put secret/myapp/config username='appuser' \
        password='suP3rsec(et!' \
        ttl='30s'
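
To verify both steps, read the policy and the secret back (the kv get requires a token that holds the myapp-kv-ro policy or higher):

# Confirm the policy body
$ vault policy read myapp-kv-ro

# Read the secret back
$ vault kv get secret/myapp/config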



Kubernetes Configuration

Set the environment variables to point to the running Minikube environment, enable the Kubernetes authentication method, and then validate it from a temporary pod.


# Set VAULT_SA_NAME to the service account you created earlier
$ export VAULT_SA_NAME=$(kubectl get sa vault-auth -o jsonpath="{.secrets[*]['name']}")

# Set SA_JWT_TOKEN value to the service account JWT used to access the TokenReview API
$ export SA_JWT_TOKEN=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data.token}" | base64 --decode; echo)

# Set SA_CA_CRT to the PEM encoded CA cert used to talk to Kubernetes API
$ export SA_CA_CRT=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)

# Set K8S_HOST to minikube IP address
$ export K8S_HOST=$(minikube ip)

# Enable the Kubernetes auth method at the default path ("auth/kubernetes")
$ vault auth enable kubernetes

# Tell Vault how to communicate with the Kubernetes (Minikube) cluster
$ vault write auth/kubernetes/config \
        token_reviewer_jwt="$SA_JWT_TOKEN" \
        kubernetes_host="https://$K8S_HOST:8443" \
        kubernetes_ca_cert="$SA_CA_CRT"

# Create a role named, 'example' to map Kubernetes Service Account to
# Vault policies and default token TTL
$ vault write auth/kubernetes/role/example \
        bound_service_account_names=vault-auth \
        bound_service_account_namespaces=default \
        policies=myapp-kv-ro \
        ttl=24h

# Run a temp pod to test that we can reach vault
$ kubectl run --generator=run-pod/v1 tmp --rm -i --tty --serviceaccount=vault-auth --image alpine:3.7
$ apk add curl jq
$ curl -k https://vault/v1/sys/health | jq
{
  "initialized": true,
  "sealed": false,
  "standby": false,
  "performance_standby": false,
  "replication_performance_mode": "disabled",
  "replication_dr_mode": "disabled",
  "server_time_utc": 1556488210,
  "version": "1.1.1",
  "cluster_name": "vault-cluster-1677ba10",
  "cluster_id": "fa706969-085b-91ac-36de-de6fcf2328c5"
}

# Then we can test the login. Inside the pod, KUBE_TOKEN is the service account JWT
$ KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl --request POST \
        --data '{"jwt": "'"$KUBE_TOKEN"'", "role": "example"}' \
        https://vault:8200/v1/auth/kubernetes/login | jq
{
  ...
  "auth": {
    "client_token": "s.7cH83AFIdmXXYKsPsSbeESpp",
    "accessor": "8bmYWFW5HtwDHLAoxSiuMZRh",
    "policies": [
      "default",
      "myapp-kv-ro"
    ],
    "token_policies": [
      "default",
      "myapp-kv-ro"
    ],
    "metadata": {
      "role": "example",
      "service_account_name": "vault-auth",
      "service_account_namespace": "default",
      "service_account_secret_name": "vault-auth-token-vqqlp",
      "service_account_uid": "adaca842-f2a7-11e8-831e-080027b85b6a"
    },
    "lease_duration": 86400,
    "renewable": true,
    "entity_id": "2c4624f1-29d6-972a-fb27-729b50dd05e2",
    "token_type": "service"
  }
}
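
If the login fails, it helps to confirm what Vault has stored for the auth method and the role:

# Inspect the Kubernetes auth configuration and the example role
$ vault read auth/kubernetes/config
$ vault read auth/kubernetes/role/example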


Deployment and the Consul-Template Configuration

In the pod spec below, note the volume mounts and secrets: we load the certificates we created initially and use them to authenticate to Vault over mutual TLS when fetching secret data.

---
apiVersion: v1
kind: Pod
metadata:
  name: vault-agent-example
spec:
  serviceAccountName: vault-auth

  restartPolicy: Never

  volumes:
    - name: vault-token
      emptyDir:
        medium: Memory
    - name: vault-tls
      secret:
        secretName: myapp

    - name: config
      configMap:
        name: example-vault-agent-config
        items:
          - key: vault-agent-config.hcl
            path: vault-agent-config.hcl

          - key: consul-template-config.hcl
            path: consul-template-config.hcl


    - name: shared-data
      emptyDir: {}

  initContainers:
    # Vault container
    - name: vault-agent-auth
      image: vault

      volumeMounts:
        - name: config
          mountPath: /etc/vault
        - name: vault-token
          mountPath: /home/vault
        - name: vault-tls
          mountPath: /etc/tls

      # This assumes Vault running on a pod in the K8s cluster and that the service name is vault
      env:
        - name: VAULT_ADDR
          value: https://vault:8200
        - name: VAULT_CACERT
          value: /etc/tls/consul-agent-ca.pem
        - name: VAULT_CLIENT_CERT
          value: /etc/tls/dc1-client-consul-1.pem
        - name: VAULT_CLIENT_KEY
          value: /etc/tls/dc1-client-consul-1-key.pem
        - name: VAULT_TLS_SERVER_NAME
          value: client.dc1.consul

      # Run the Vault agent
      args:
        [
          "agent",
          "-config=/etc/vault/vault-agent-config.hcl",
          #"-log-level=debug",
        ]

  containers:
    # Consul Template container
    - name: consul-template
      image: hashicorp/consul-template:alpine
      imagePullPolicy: Always

      volumeMounts:
        - name: vault-token
          mountPath: /home/vault

        - name: config
          mountPath: /etc/consul-template

        - name: shared-data
          mountPath: /etc/secrets

        - name: vault-tls
          mountPath: /etc/tls

      env:
        - name: HOME
          value: /home/vault

        - name: VAULT_ADDR
          value: https://vault:8200

        - name: VAULT_CACERT
          value: /etc/tls/consul-agent-ca.pem

        - name: VAULT_CLIENT_CERT
          value: /etc/tls/dc1-client-consul-1.pem

        - name: VAULT_CLIENT_KEY
          value: /etc/tls/dc1-client-consul-1-key.pem

        - name: VAULT_TLS_SERVER_NAME
          value: client.dc1.consul

      # Consul-Template looks in $HOME/.vault-token, $VAULT_TOKEN, or -vault-token (via CLI)
      args:
        [
          "-config=/etc/consul-template/consul-template-config.hcl",
          #"-log-level=debug",
        ]

    # Nginx container
    - name: nginx-container
      image: nginx

      ports:
        - containerPort: 80

      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html


This is where the magic happens: the Vault agent authenticates against the Kubernetes auth method using the example role, and the resulting token is written to a sink file that the other containers can read.

# Exit after authenticating once, since the agent runs as an initContainer
exit_after_auth = true
pid_file = "/home/vault/pidfile"

auto_auth {
    method "kubernetes" {
        mount_path = "auth/kubernetes"
        config = {
            role = "example"
        }
    }

    sink "file" {
        config = {
            path = "/home/vault/.vault-token"
        }
    }
}


And last but not least, Consul Template renders the template below into a file that our Nginx container will later serve.

vault {
  renew_token = false
  vault_agent_token_file = "/home/vault/.vault-token"
  retry {
    backoff = "1s"
  }
}

template {
  destination = "/etc/secrets/index.html"
  contents = <<EOH
  <html>
  <body>
  <p>Some secrets:</p>
  {{- with secret "secret/myapp/config" }}
  <ul>
  <li><pre>username: {{ .Data.username }}</pre></li>
  <li><pre>password: {{ .Data.password }}</pre></li>
  </ul>
  {{ end }}
  </body>
  </html>
  EOH
}
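
Both HCL files need to end up in the example-vault-agent-config ConfigMap that the pod spec mounts. Assuming they are saved locally under the same names the volume definition references, a minimal sketch:

# Package both agent configs into the ConfigMap the pod expects
$ kubectl create configmap example-vault-agent-config \
    --from-file=vault-agent-config.hcl \
    --from-file=consul-template-config.hcl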


Let’s Test It

The last step is to test all of the above. After the files are deployed to Kubernetes, we should see something like this:

# Finally let's create our app and see if we can fetch secrets from Vault
$ kubectl apply -f example-k8s-spec.yml

# The init container log should look something like this if everything went well.
$ kubectl logs vault-agent-example -c vault-agent-auth -f
Couldn't start vault with IPC_LOCK. Disabling IPC_LOCK, please use --privileged or --cap-add IPC_LOCK
==> Vault server started! Log data will stream in below:

==> Vault agent configuration:

                     Cgo: disabled
               Log Level: info
                 Version: Vault v1.1.2
             Version Sha: 0082501623c0b704b87b1fbc84c2d725994bac54

2019-04-28T20:37:46.328Z [INFO]  sink.file: creating file sink
2019-04-28T20:37:46.328Z [INFO]  sink.file: file sink configured: path=/home/vault/.vault-token
2019-04-28T20:37:46.329Z [INFO]  auth.handler: starting auth handler
2019-04-28T20:37:46.329Z [INFO]  auth.handler: authenticating
2019-04-28T20:37:46.334Z [INFO]  sink.server: starting sink server
2019-04-28T20:37:46.456Z [INFO]  auth.handler: authentication successful, sending token to sinks
2019-04-28T20:37:46.456Z [INFO]  auth.handler: starting renewal process
2019-04-28T20:37:46.456Z [INFO]  sink.file: token written: path=/home/vault/.vault-token
2019-04-28T20:37:46.456Z [INFO]  sink.server: sink server stopped
2019-04-28T20:37:46.456Z [INFO]  sinks finished, exiting

# Then we use a port-forward to test if the template created the files with our secrets correctly
$ kubectl port-forward pod/vault-agent-example 8080:80

# As we can see here we were able to fetch our secrets
$ curl -v localhost:8080
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.15.12
< Date: Sun, 28 Apr 2019 20:47:02 GMT
< Content-Type: text/html
< Content-Length: 166
< Last-Modified: Sun, 28 Apr 2019 20:37:53 GMT
< Connection: keep-alive
< ETag: "5cc60f21-a6"
< Accept-Ranges: bytes
<
  <html>
  <body>
  <p>Some secrets:</p>
  <ul>
  <li><pre>username: appuser</pre></li>
  <li><pre>password: suP3rsec(et!</pre></li>
  </ul>

  </body>
  </html>
* Connection #0 to host localhost left intact
* Closing connection 0
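
If the page does not render the secrets, the Consul Template container in the same pod is the next place to look:

# Tail the Consul Template container logs
$ kubectl logs vault-agent-example -c consul-template -f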


Closing Notes

This post was heavily inspired by this doc page. The main difference is that we run with mutual Transport Layer Security (TLS) enabled, so the only thing left would be to auto-unseal our Vault. We will leave that for a future article or as an exercise for you, the reader.
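
If you want to attempt the exercise, auto-unseal is configured with a seal stanza in the Vault server configuration. A minimal sketch, assuming an AWS KMS key (the key alias here is hypothetical):

# vault-server.hcl (excerpt): delegate unsealing to AWS KMS
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "alias/vault-unseal-key"
}

With this stanza, Vault unseals itself at startup using the KMS key instead of asking operators for unseal key shares.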

Errata

If you spot any errors or have any suggestions, please send us a message so they can be fixed quickly.

Also, you can check the source code and changes in the generated code and sources here.

Published at DZone with permission of Gabriel Garrido. See the original article here.