Microsoft Azure AKS and SQL Database With Terraform


In the final part of this three-part series, we will learn how to construct, connect to, and tear down Azure AKS and an Azure SQL database with Terraform.


This is the third and last article in the Kubernetes in the Clouds series, in which we will deploy Microsoft Azure AKS and Azure Database for PostgreSQL with Terraform.

In Part I we covered Google GKE and CloudSQL, where you can also find the source code.

In Part II we covered Amazon AWS EKS and RDS, where you can also find the source code.

  • Initial tooling setup for Azure CLI, kubectl, and Terraform
  • Configure Azure CLI
  • Set up the prerequisites for the Terraform account
    • Create a service principal Terraform account
    • Create a resource group
    • Create a storage account for Terraform
    • Create Azure Blob Storage for the remote Terraform state (tfstate)
    • Configure Terraform credentials to allow access to the Azure Terraform service principal
  • Review the code structure
  • Creating a Kubernetes cluster on Azure AKS and PostgreSQL
  • Working with Kubernetes "kubectl" in AKS
  • Destroy the created infrastructure

First, we need to get all the tools needed for our work.

Initial Tooling Setup of Azure CLI, kubectl, and Terraform

Deploying Azure CLI

At the time of this article, we are using the latest available CLI version, 2.0.45. We have a few options to install or run Azure CLI. For additional options and more details, check the official Azure CLI 2.0 documentation.

az cli for OS X

For OS X, we will install az cli using Homebrew.

brew update

brew install azure-cli

# If unable to find Python run below command in the shell that should fix the issue
brew link --overwrite python3

az cli for Linux

As Azure CLI installation depends heavily on the Linux distribution, here is an installation script that should work on all Linux distributions.

curl -L https://aka.ms/InstallAzureCli | bash

# or with full link
curl https://azurecliprod.blob.core.windows.net/install | bash

az cli in Docker

docker run -it microsoft/azure-cli

# you can use SSH keys from your user environment
docker run -it -v ${HOME}/.ssh:/root/.ssh microsoft/azure-cli

# or you can use an alias with the Azure SDK Docker image
alias az='docker run -v ${HOME}:/root -it --rm azuresdk/azure-cli-python az'

az cli Installation Verification

az --version

azure-cli (2.0.45)
acr (2.1.4)
acs (2.3.2)
advisor (0.6.0)
ams (0.2.3)
appservice (0.2.3)
backup (1.2.1)
batch (3.3.3)
batchai (0.4.2)
billing (0.2.0)
botservice (0.1.0)
cdn (0.1.1)
cloud (2.1.0)
cognitiveservices (0.2.1)
command-modules-nspkg (2.0.2)
configure (2.0.18)
consumption (0.4.0)
container (0.3.3)
core (2.0.45)
cosmosdb (0.2.1)
dla (0.2.2)
dls (0.1.1)
dms (0.1.0)
eventgrid (0.2.0)
eventhubs (0.2.3)
extension (0.2.1)
feedback (2.1.4)
find (0.2.12)
interactive (0.3.28)
iot (0.3.1)
iotcentral (0.1.1)
keyvault (2.2.2)
lab (0.1.1)
maps (0.3.2)
monitor (0.2.3)
network (2.2.4)
nspkg (3.0.3)
policyinsights (0.1.0)
profile (2.1.1)
rdbms (0.3.1)
redis (0.3.2)
relay (0.1.1)
reservations (0.3.2)
resource (2.1.3)
role (2.1.4)
search (0.1.1)
servicebus (0.2.2)
servicefabric (0.1.2)
sql (2.1.3)
storage (2.2.1)
telemetry (1.0.0)
vm (2.2.2)

Python location '/usr/local/bin/python'
Extensions directory '/root/.azure/cliextensions'

Python (Linux) 3.6.4 (default, Jan 10 2018, 05:20:21)
[GCC 6.4.0]

Legal docs and information: aka.ms/AzureCliLegal

Deploying Terraform

We will install the latest version of Terraform, 0.11.8.

Terraform for OS X

curl -o terraform_0.11.8_darwin_amd64.zip \
https://releases.hashicorp.com/terraform/0.11.8/terraform_0.11.8_darwin_amd64.zip

unzip terraform_0.11.8_darwin_amd64.zip -d /usr/local/bin/

Terraform for Linux

curl https://releases.hashicorp.com/terraform/0.11.8/terraform_0.11.8_linux_amd64.zip \
> terraform_0.11.8_linux_amd64.zip

unzip terraform_0.11.8_linux_amd64.zip -d /usr/local/bin/

Terraform Verification

Verify the Terraform version:

terraform version

Terraform v0.11.8

Deploying kubectl

kubectl for OS X

We will install the latest kubectl version to date, 1.11.2.

curl -o kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.11.2/bin/darwin/amd64/kubectl
chmod +x kubectl

sudo mv kubectl /usr/local/bin/

kubectl for Linux

wget \
https://storage.googleapis.com/kubernetes-release/release/v1.11.2/bin/linux/amd64/kubectl

chmod +x kubectl

sudo mv kubectl /usr/local/bin/

kubectl verification

kubectl version --client

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-07-26T20:40:11Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Configure Azure CLI

Once we have the Azure CLI installed, we will need to configure az cli to allow the CLI to access Azure Cloud services.

Associate Azure CLI with your Microsoft Azure cloud account:

az login

You are then prompted to open a web browser at a specific URL (seen in the response to your login command). Open a browser from any computer to that URL and enter the code you see specified. This associates your CLI instance with your Azure Cloud Account and completes the login process.

You can log in using additional options by following the linked documentation.

You may want to set additional configurations; for example, adding color, JSON output, etc.:

az configure

List Azure accounts associated with CLI:

az account list

For more information, you can follow Azure CLI Official documentation.

Setup Prerequisites for Terraform Account

In order to configure Terraform, we will need IDs from the Azure account, such as the tenantId and subscriptionId, as well as the clientId of the newly created Terraform technical account.

Get a list of subscriptionId and tenantId values:

az account show --query "{subscriptionId:id, tenantId:tenantId}"

Create an Azure Service Principal Account

Service principals are separate identities that can be associated with an account. Service principals are useful for working with applications and tasks that can be automated.

export AZ_SUBSCRIPTION_ID=$(az account show --query id --out tsv)

az ad sp create-for-rbac --name terraform --role="Contributor" --scopes="/subscriptions/$AZ_SUBSCRIPTION_ID"
Note: Please note down the output of the "password" field.

Once the service principal account is created, we will need to export the rest of the required environment variables:

export AZ_CLIENT_ID=$(az ad sp list --query "[?appDisplayName == 'terraform']|[].appId" --out tsv) && \
export AZ_TENANT_ID=$(az ad sp list --display-name terraform --query "[].appOwnerTenantId" --out tsv) && \
export AZ_CLIENT_NAME_ID=$(az ad sp list --query "[?appDisplayName == 'terraform']|[].appId" --out tsv) && \
export AZ_CLIENT_SECRET="PASSWORD-XXXX-XXXX-XXXX-PASSWORD"

printenv | grep AZ
Note: For AZ_CLIENT_SECRET, replace PASSWORD-XXXX-XXXX-XXXX-PASSWORD with the output of the "password" field printed when running az ad sp create-for-rbac ...

Verify by listing the assigned roles:

az role assignment list --assignee $AZ_CLIENT_ID

Show the details of the service principal account:

az ad sp show --id $AZ_CLIENT_NAME_ID

Optionally, you can test the account by signing in using the service principal terraform account:

az login --service-principal --username $AZ_CLIENT_NAME_ID --password $AZ_CLIENT_SECRET --tenant $AZ_TENANT_ID

Then sign back in with your Azure user account:

az login -u your@email -p your_password
Note: Replace your@email and your_password with your Azure login credentials.

Optional: If you want, you can reset the service principal terraform account's password:

az ad sp credential reset --name $AZ_CLIENT_NAME_ID --password NEW_PASSWORD

Create Resource Group

Before deploying any resources to your Azure Cloud subscription, you must create a resource group that will host the provisioned resources. The newly created resource group will be used by the Terraform service principal account to host Azure Blob Storage for tfstate files.

List the available locations where we can create the resource group:

az account list-locations --query "[].{displayName:displayName, name:name}" --out table

Create resource group

In the example below, we will use the Southeast Asia region (Singapore).

az group create --name Terraform --location "Southeast Asia"

If you need to retrieve the resource group later, use the following command:

az group show --name Terraform

To get all the resource groups in your subscription, use:

az group list

Create Storage Account for tfstate file

An Azure storage account provides a unique namespace to store and access your Azure Storage data objects.

az storage account create -n terraformeks -g Terraform -l southeastasia --sku Standard_LRS

Retrieve storage account resource information:

az storage account show --name terraformeks --resource-group Terraform

Assign tags to the storage account resource:

az resource tag --tags Environment=Test Resource=tfstate -g Terraform -n terraformeks --resource-type "Microsoft.Storage/storageAccounts"

Create a Container in Your Azure Storage Account

In order to create a new storage container, we will need to find the account key:

az storage account keys list -g Terraform -n terraformeks --query [0].value -o tsv

Export account key into an env variable:

export ACCOUNT_KEY="$(az storage account keys list -g Terraform -n terraformeks --query [0].value -o tsv)"

Create container for terraform tfstate files:

az storage container create -n tfstate --account-name terraformeks --account-key $ACCOUNT_KEY

Verify container creation:

az storage container list --account-name terraformeks

Optional: Retrieve Information on Newly Created Resources

Get resource by name:

az resource list -n terraformeks

Get all the resources in a resource group:

az resource list --resource-group Terraform

Query resources with a particular resource type:

az resource list --resource-type "Microsoft.Storage/storageAccounts"

Get all the resources with a tag name and value:

az resource list --tag Environment=Test

Configure Terraform Credentials to Allow Access to the Azure Terraform Service Principal Account

We will create two tfvars files and populate them with credentials.

backend.tfvars will be used to create the tfstate file in the tfstate container of the terraformeks Azure storage account.

terraform.tfvars will be used to provision Azure infrastructure.

Create and populate terraform.tfvars file:

subscription_id = "$AZ_SUBSCRIPTION_ID"
client_id       = "$AZ_CLIENT_NAME_ID"
client_secret   = "$AZ_CLIENT_SECRET"
tenant_id       = "$AZ_TENANT_ID"
pgsql_password  = "$YOUR_DB_PASSWORD"

Create and populate backend.tfvars file:

resource_group_name   = "Terraform"
storage_account_name  = "terraformeks"
container_name        = "tfstate"
access_key            = "$ACCOUNT_KEY"
key                   = "terraform.tfstate"
Note: Replace $AZ_SUBSCRIPTION_ID, $AZ_CLIENT_NAME_ID, $AZ_CLIENT_SECRET, $AZ_TENANT_ID, and $ACCOUNT_KEY with the values of the environment variables exported earlier (printenv | grep AZ), and $YOUR_DB_PASSWORD with a database password of your choice.
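For terraform.tfvars to be accepted, the root module must declare matching input variables. A minimal sketch of what the corresponding declarations in variables.tf could look like (an assumption about the repository's layout; the actual file may add defaults and descriptions):

```hcl
# Credentials for the azurerm provider, supplied via terraform.tfvars
variable "subscription_id" {}
variable "client_id" {}
variable "client_secret" {}
variable "tenant_id" {}

# Admin password for the Azure Database for PostgreSQL server
variable "pgsql_password" {}
```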

Review code structure

As in the previous articles for Google Cloud and AWS, we will make use of workspaces and modules, which give us better Terraform code management.

Code structure:

├── aks_cluster
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── az_psql
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── backend.tf
├── backend.tfvars
├── base
│   ├── sec_group
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── subnet
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── vpc
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── main.tf
├── outputs.tf
├── README.md
├── resource_group
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── terraform.tfvars
├── variables.tf

First we will set up our backend:

terraform {
  backend "azurerm" {}
}

Initiate terraform plugin download:

terraform init -backend-config=backend.tfvars

Add terraform workspace:

terraform workspace new dev

List the terraform workspaces:

terraform workspace list

Check out the main.tf file, which pulls the modules from the different folders:

provider "azurerm" {
  subscription_id = "${var.subscription_id}"
  client_id       = "${var.client_id}"
  client_secret   = "${var.client_secret}"
  tenant_id       = "${var.tenant_id}"
}

module "res_group" {
  source   = "./resource_group"
  location = "${var.location}"
}

module "vpc" {
  source         = "./base/vpc"
  address_space  = "${var.address_space}"
  location       = "${var.location}"
  res_group_name = "${module.res_group.res_group_name}"
}

module "sec_group" {
  source         = "./base/sec_group"
  location       = "${var.location}"
  res_group_name = "${module.res_group.res_group_name}"
}

module "subnet" {
  source           = "./base/subnet"
  res_group_name   = "${module.res_group.res_group_name}"
  net_sec_group_id = "${module.sec_group.net_sec_group_id}"
  vnet_name        = "${module.vpc.vnet_name}"
  subnet_prefixes  = "${var.subnet_prefixes}"
}

module "eks_cluster" {
  source         = "./aks_cluster"
  res_group_name = "${module.res_group.res_group_name}"
  subnet_id      = "${module.subnet.subnet_id}"
  location       = "${var.location}"
  ssh_public_key = "${var.ssh_public_key}"
  agent_count    = "${var.agent_count}"
  client_id      = "${var.client_id}"
  client_secret  = "${var.client_secret}"
}

module "az_psql" {
  source                 = "./az_psql"
  location               = "${var.location}"
  res_group_name         = "${module.res_group.res_group_name}"
  pgsql_capacity         = "${var.pgsql_capacity}"
  pgsql_tier             = "${var.pgsql_tier}"
  pgsql_storage          = "${var.pgsql_storage}"
  pgsql_backup           = "${var.pgsql_backup}"
  pgsql_redundant_backup = "${var.pgsql_redundant_backup}"
  pgsql_password         = "${var.pgsql_password}"
}

After providing the Azure credentials, we will create a resource group where our Kubernetes cluster will reside:

resource "azurerm_resource_group" "res_group" {
  name     = "aks-${terraform.workspace}"
  location = "${var.location}"

  tags {
    environment = "${terraform.workspace}"
  }
}
Note: The resource group name will differ between dev and prod, depending on the workspace you are in.

The next step creates a new VPC (virtual network):

resource "azurerm_virtual_network" "vpc" {
  name          = "vpc-${terraform.workspace}"
  address_space = ["${lookup(var.address_space, terraform.workspace)}"]

  location            = "${var.location}"
  resource_group_name = "${var.res_group_name}"

  tags {
    environment = "${terraform.workspace}"
  }
}
Note: VPC name and CIDR are dependent on the workspace.
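The lookup above implies that address_space is declared as a map keyed by workspace name. A sketch of such a declaration in Terraform 0.11 syntax, with placeholder CIDR ranges (not necessarily the ranges used in the actual repository):

```hcl
variable "address_space" {
  type = "map"

  default = {
    # one CIDR block per workspace; these values are illustrative
    dev  = "10.0.0.0/16"
    prod = "10.1.0.0/16"
  }
}
```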

Once the VPC is created, we can create the subnet:

resource "azurerm_subnet" "subnet" {
  name                      = "akc-${terraform.workspace}-subnet"
  resource_group_name       = "${var.res_group_name}"
  network_security_group_id = "${var.net_sec_group_id}"
  virtual_network_name      = "${var.vnet_name}"
  address_prefix            = "${var.subnet_prefixes[terraform.workspace]}"
}
Note: The subnet name and CIDR are dependent on the workspace used.
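Since subnet_prefixes is indexed directly by workspace, it is likewise a map variable. A sketch with illustrative values (the real prefixes must fall inside the VPC address space of the same workspace):

```hcl
variable "subnet_prefixes" {
  type = "map"

  default = {
    # subnets carved out of each workspace's VPC range; values are illustrative
    dev  = "10.0.1.0/24"
    prod = "10.1.1.0/24"
  }
}
```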

And as the last part of the networking, we need to create a security group:

resource "azurerm_network_security_group" "net_sec_group" {
  name                = "akc-${terraform.workspace}-nsg"
  location            = "${var.location}"
  resource_group_name = "${var.res_group_name}"

  tags {
    environment = "${terraform.workspace}"
  }
}

Now it is time for the AKS cluster:

resource "azurerm_kubernetes_cluster" "k8s" {
  name                = "k8s-${terraform.workspace}"
  location            = "${var.location}"
  resource_group_name = "${var.res_group_name}"
  dns_prefix          = "k8s-${terraform.workspace}"

  linux_profile {
    admin_username = "ubuntu"

    ssh_key {
      key_data = "${file("${var.ssh_public_key}")}"
    }
  }

  agent_pool_profile {
    name            = "agentpool"
    count           = "${var.agent_count[terraform.workspace]}"
    vm_size         = "Standard_DS2_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
    vnet_subnet_id  = "${var.subnet_id}"
  }

  service_principal {
    client_id     = "${var.client_id}"
    client_secret = "${var.client_secret}"
  }

  network_profile {
    network_plugin = "azure"
  }

  tags {
    Environment = "${terraform.workspace}"
  }
}
Note: The number of nodes depends on which workspace you are in. To access the Kubernetes nodes, you need an SSH public key in ~/.ssh/id_rsa.pub.
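Because agent_count is indexed by workspace, it is also a map variable. A sketch with assumed node counts (the repository's actual values may differ):

```hcl
variable "agent_count" {
  type = "map"

  default = {
    # fewer worker nodes in dev than prod; counts here are illustrative
    dev  = 1
    prod = 3
  }
}
```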

And lastly, we will deploy the Azure Database for PostgreSQL service:

resource "azurerm_postgresql_server" "az_psql" {
  name                = "az-${terraform.workspace}-psql"
  location            = "${var.location}"
  resource_group_name = "${var.res_group_name}"

  sku {
    name     = "B_Gen5_2"
    capacity = "${var.pgsql_capacity[terraform.workspace]}"
    tier     = "${var.pgsql_tier[terraform.workspace]}"
    family   = "Gen5"
  }

  storage_profile {
    storage_mb            = "${var.pgsql_storage[terraform.workspace]}"
    backup_retention_days = "${var.pgsql_backup[terraform.workspace]}"
    geo_redundant_backup  = "${var.pgsql_redundant_backup[terraform.workspace]}"
  }

  administrator_login          = "psqladmin"
  administrator_login_password = "${var.pgsql_password}"
  version                      = "9.6"
  ssl_enforcement              = "Enabled"

  tags {
    Environment = "${terraform.workspace}"
  }
}
Note: As in the previous examples, the capacity and node type depend on the workspace.
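The per-workspace storage_profile values referenced above would again be map variables. A sketch with assumed values (the numbers are illustrative, not taken from the repository; backup retention must stay within Azure's 7-35 day range):

```hcl
variable "pgsql_storage" {
  type = "map"

  default = {
    # storage in MB; values are illustrative
    dev  = 5120
    prod = 51200
  }
}

variable "pgsql_backup" {
  type = "map"

  default = {
    # backup retention in days; values are illustrative
    dev  = 7
    prod = 35
  }
}
```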

Creating a Kubernetes Cluster on Azure AKS and PostgreSQL

It's time to deploy our infrastructure.

View terraform plan:

terraform plan

Deploy infrastructure with terraform:

terraform apply

Alternatively, we can export our plan and apply the exported plan:

terraform plan -out=my.plan

terraform show my.plan

terraform apply my.plan

Working With Kubernetes "kubectl" in AKS

Connect kubectl to the new cluster by writing the kubeconfig from the Terraform output:

export KUBECONFIG=~/.kube/azurek8s

echo "$(terraform output ekc_kube_config)" > ~/.kube/azurek8s

Now we should be able to access the Kubernetes API with kubectl:

kubectl get nodes

kubectl get namespaces

kubectl get services

Now you can play around, deploy your application into the Azure AKS cluster, and make use of the PostgreSQL DB.

Destroy Created Infrastructure

Destroy infrastructure created with terraform:

terraform destroy -auto-approve

Remove All Resources Created With az cli

Delete the terraform service principal:

az ad sp list --query "[?appDisplayName == 'terraform']|[].appId"

az ad sp delete --id $AZ_CLIENT_ID

Delete the storage account:

az storage account list

az storage account delete -y -n terraformeks -g Terraform

Delete a resource group and all its resources:

az group list

az group delete -y -n Terraform

The Terraform sources can be found on GitHub.


Published at DZone with permission of Ion Mudreac . See the original article here.

Opinions expressed by DZone contributors are their own.
