

Amazon AWS EKS and RDS PostgreSQL With Terraform

This tutorial gives you an in-depth look at how to set up AWS EKS and RDS with Terraform, from initial installation and configuration to rollback.

By Ion Mudreac

This is the second part of a three-part series on using Terraform to deploy on cloud providers' managed Kubernetes offerings. In the previous article, I showed how to deploy a complete Kubernetes setup on Google Cloud GKE, with PostgreSQL on Google Cloud SQL. In this article, I will show how to deploy Amazon AWS EKS and RDS with Terraform. EKS is Amazon's managed Kubernetes service; be aware of the additional cost of $0.20 per hour for the EKS control plane (the "Kubernetes master"), on top of the usual EC2, EBS, etc. prices for resources that run in your account. Compared to GKE, EKS is not as straightforward to deploy: configuration requires more moving pieces, such as an AWS launch configuration, an AWS autoscaling group, and IAM roles and policies that allow AWS to manage EKS.

NOTE: This setup is not secured and is not production-ready.

This article is structured in five parts:

  • Initial tooling setup: aws-cli, kubectl, and terraform
  • Creating a terraform IAM account with access keys and an access policy
  • Creating back-end storage for the tfstate file in AWS S3
  • Creating a Kubernetes cluster on AWS EKS and RDS PostgreSQL
  • Working with Kubernetes ("kubectl") in EKS

Initial Tooling: Set Up aws-cli, kubectl, Terraform, and aws-iam-authenticator

Assuming you already have an AWS account, with the AWS CLI installed and configured for your user account, we still need additional binaries for Terraform, kubectl, and aws-iam-authenticator.

Deploying Terraform

Terraform for OS X

curl -o terraform_0.11.7_darwin_amd64.zip \
https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_darwin_amd64.zip

unzip terraform_0.11.7_darwin_amd64.zip -d /usr/local/bin/


Terraform for Linux

curl https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip \
> terraform_0.11.7_linux_amd64.zip

unzip terraform_0.11.7_linux_amd64.zip -d /usr/local/bin/


Terraform Installation Verification

Verify that Terraform version 0.11.7 or higher is installed:

terraform version


Deploying kubectl

kubectl for OS X

curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.11.0/bin/darwin/amd64/kubectl

chmod +x kubectl

sudo mv kubectl /usr/local/bin/


kubectl for Linux

wget https://storage.googleapis.com/kubernetes-release/release/v1.11.0/bin/linux/amd64/kubectl

chmod +x kubectl

sudo mv kubectl /usr/local/bin/


kubectl Installation Verification

kubectl version --client


Deploying aws-iam-authenticator

aws-iam-authenticator is a tool developed by the Heptio team that lets kubectl authenticate to EKS using AWS IAM credentials.

aws-iam-authenticator for OS X

curl -o aws-iam-authenticator \
https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/darwin/amd64/aws-iam-authenticator

chmod +x ./aws-iam-authenticator

cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH


aws-iam-authenticator for Linux

curl -o aws-iam-authenticator \
https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator

chmod +x ./aws-iam-authenticator

cp ./aws-iam-authenticator $HOME/.local/bin/aws-iam-authenticator && export PATH=$HOME/.local/bin:$PATH


aws-iam-authenticator Installation Verification

aws-iam-authenticator help


Authenticate to AWS

Before configuring the AWS CLI, note that at this time EKS is only available in US East (N. Virginia) and US West (Oregon). The example below uses US West (Oregon), "us-west-2":

aws configure

Creating a Terraform IAM Account with Access Keys and Access Policy

The first step is to set up a Terraform admin account in AWS IAM.

Create IAM terraform User

aws iam create-user --user-name terraform


Attach the IAM Admin Policy to the Newly Created terraform User

Note: For a production or even a proper testing account, you should tighten up and restrict access for the Terraform IAM user.

aws iam attach-user-policy --user-name \
terraform --policy-arn arn:aws:iam::aws:policy/AdministratorAccess


Create Access Keys for the User

Note: This access key and secret access key will be used by Terraform to manage infrastructure deployment.

aws iam create-access-key --user-name terraform


Update the terraform.tfvars File with the Access and Secret Keys for the Newly Created terraform IAM Account

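For reference, the relevant terraform.tfvars entries look like the sketch below (placeholder values; the variable names come from the provider block in main.tf, and this file should be kept out of version control):

# terraform.tfvars (sketch): fill in the values returned by create-access-key
access_key = "AKIA................"
secret_key = "........................................"
aws_region = "us-west-2"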

Creating Backend Storage For tfstate File in AWS S3

Once the Terraform IAM account is created, we can proceed to the next step: creating a dedicated bucket to keep the Terraform state files.

Create a Terraform State Bucket

Note: Change the name of the bucket, as bucket names must be unique across all of AWS S3:

aws s3 mb s3://terra-state-bucket --region us-west-2


Enable Versioning on the Newly Created Bucket

aws s3api put-bucket-versioning --bucket \
terra-state-bucket --versioning-configuration Status=Enabled

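The state bucket is wired into Terraform through backend.tf (visible in the project tree below). A minimal sketch, assuming the bucket name and region used above; backend blocks cannot interpolate variables, so the values are hard-coded:

# backend.tf (sketch): store tfstate in the S3 bucket created above
terraform {
  backend "s3" {
    bucket = "terra-state-bucket"
    key    = "terraform.tfstate"
    region = "us-west-2"
  }
}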

Creating a Kubernetes Cluster on AWS EKS and RDS PostgreSQL

Now we can move into creating new infrastructure, using EKS and RDS with Terraform.

    .
    ├── backend.tf
    ├── eks
    │   ├── eks_cluster
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   ├── eks_iam_roles
    │   │   ├── main.tf
    │   │   └── outputs.tf
    │   ├── eks_node
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   ├── userdata.tpl
    │   │   └── variables.tf
    │   └── eks_sec_group
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    ├── main.tf
    ├── network
    │   ├── route
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   ├── sec_group
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   ├── subnets
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   └── vpc
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    ├── outputs.tf
    ├── rds
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    ├── README.md
    ├── terraform.tfvars
    ├── variables.tf
    └── yaml
        ├── eks-admin-cluster-role-binding.yaml
        └── eks-admin-service-account.yaml


We will use Terraform modules to keep the code clean and organized. Terraform will run two separate environments, dev and prod, using the same sources; the only difference in this case is the number of Kubernetes worker nodes. The top-level main.tf wires the modules together:

# Specify the provider and access details
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.aws_region}"
}

## Network
# Create VPC
module "vpc" {
  source           = "./network/vpc"
  eks_cluster_name = "${var.eks_cluster_name}"
  cidr_block       = "${var.cidr_block}"
}

# Create Subnets
module "subnets" {
  source           = "./network/subnets"
  eks_cluster_name = "${var.eks_cluster_name}"
  vpc_id           = "${module.vpc.vpc_id}"
  vpc_cidr_block   = "${module.vpc.vpc_cidr_block}"
}

# Configure Routes
module "route" {
  source              = "./network/route"
  main_route_table_id = "${module.vpc.main_route_table_id}"
  gw_id               = "${module.vpc.gw_id}"

  subnets = [
    "${module.subnets.subnets}",
  ]
}

module "eks_iam_roles" {
  source = "./eks/eks_iam_roles"
}

module "eks_sec_group" {
  source           = "./eks/eks_sec_group"
  eks_cluster_name = "${var.eks_cluster_name}"
  vpc_id           = "${module.vpc.vpc_id}"
}

module "eks_cluster" {
  source           = "./eks/eks_cluster"
  eks_cluster_name = "${var.eks_cluster_name}"
  iam_cluster_arn  = "${module.eks_iam_roles.iam_cluster_arn}"
  iam_node_arn     = "${module.eks_iam_roles.iam_node_arn}"

  subnets = [
    "${module.subnets.subnets}",
  ]

  security_group_cluster = "${module.eks_sec_group.security_group_cluster}"
}

module "eks_node" {
  source                    = "./eks/eks_node"
  eks_cluster_name          = "${var.eks_cluster_name}"
  eks_certificate_authority = "${module.eks_cluster.eks_certificate_authority}"
  eks_endpoint              = "${module.eks_cluster.eks_endpoint}"
  iam_instance_profile      = "${module.eks_iam_roles.iam_instance_profile}"
  security_group_node       = "${module.eks_sec_group.security_group_node}"

  subnets = [
    "${module.subnets.subnets}",
  ]
}

module "sec_group_rds" {
  source         = "./network/sec_group"
  vpc_id         = "${module.vpc.vpc_id}"
  vpc_cidr_block = "${module.vpc.vpc_cidr_block}"
} 


module "rds" {
  source = "./rds"

  subnets = [
    "${module.subnets.subnets}",
  ]

  sec_grp_rds       = "${module.sec_group_rds.sec_grp_rds}"
  identifier        = "${var.identifier}"
  storage_type      = "${var.storage_type}"
  allocated_storage = "${var.allocated_storage}"
  db_engine         = "${var.db_engine}"
  engine_version    = "${var.engine_version}"
  instance_class    = "${var.instance_class}"
  db_username       = "${var.db_username}"
  db_password       = "${var.db_password}"
}
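
The rds module itself boils down to a DB subnet group plus an aws_db_instance. A sketch of rds/main.tf, inferred from the module inputs above (skip_final_snapshot is an assumption that allows an unattended destroy later):

# rds/main.tf (sketch)
resource "aws_db_subnet_group" "rds_subnet_group" {
  name       = "rds-subnet-group"
  subnet_ids = ["${var.subnets}"]
}

resource "aws_db_instance" "rds" {
  identifier             = "${var.identifier}"
  storage_type           = "${var.storage_type}"
  allocated_storage      = "${var.allocated_storage}"
  engine                 = "${var.db_engine}"
  engine_version         = "${var.engine_version}"
  instance_class         = "${var.instance_class}"
  username               = "${var.db_username}"
  password               = "${var.db_password}"
  db_subnet_group_name   = "${aws_db_subnet_group.rds_subnet_group.name}"
  vpc_security_group_ids = ["${var.sec_grp_rds}"]
  skip_final_snapshot    = true # assumption: allow a clean "terraform destroy"
}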


These Terraform modules will create:

  • A VPC
  • Subnets
  • Routes
  • IAM roles for the master and the nodes
  • Security groups ("firewalls") to allow the master and nodes to communicate
  • The EKS cluster
  • An autoscaling group that creates the nodes to be added to the cluster
  • A security group for RDS
  • RDS with PostgreSQL

It is very important to keep the tags: if the tags are not specified, the nodes will not be able to join the cluster.
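
Concretely, EKS identifies which resources belong to the cluster through a kubernetes.io/cluster/<cluster-name> tag. A sketch of how the VPC module might apply it (the resource name is illustrative; the map() form is how Terraform 0.11 handles interpolated tag keys):

# network/vpc/main.tf (illustrative): the tag EKS uses to discover cluster resources
resource "aws_vpc" "eks_vpc" {
  cidr_block = "${var.cidr_block}"

  tags = "${map(
    "Name", "${var.eks_cluster_name}-vpc",
    "kubernetes.io/cluster/${var.eks_cluster_name}", "shared"
  )}"
}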

Initial Setup: Creating a New Workspace for Terraform

Initialize and Pull Terraform Cloud-Specific Dependencies

terraform init


Create Dev Workspace

terraform workspace new dev


List Available Workspaces

terraform workspace list


Select Dev Workspace

terraform workspace select dev
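
As noted earlier, the dev and prod workspaces differ only in the number of worker nodes. One way to express that is to key a map off terraform.workspace (a sketch; the variable name and counts are hypothetical):

# eks/eks_node (illustrative): choose the node count from the active workspace
variable "node_count" {
  type = "map"

  default = {
    dev  = "1"
    prod = "3"
  }
}

# Then, inside the autoscaling group:
#   desired_capacity = "${lookup(var.node_count, terraform.workspace)}"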


Before we can start, we need to update the variables and add the database password to terraform.tfvars.

echo 'db_password = "Your_DB_Passwd."' >> terraform.tfvars
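
For this to work, db_password must be declared in the root variables.tf; a minimal sketch, deliberately without a default so the password only ever lives in terraform.tfvars:

# variables.tf (sketch)
variable "db_password" {
  description = "Master password for the RDS PostgreSQL instance"
}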


It's a Good Idea to Sync Terraform Modules

terraform get -update



View Terraform Plan

terraform plan


Apply Terraform Plan

Building the complete infrastructure may take more than 10 minutes.

terraform apply



Verify Instance Creation

aws ec2 describe-instances --output table


We are not done yet!

Create a New AWS CLI Profile

In order to use kubectl with EKS, we need to set up a new AWS CLI profile. You will need the secret and access keys from terraform.tfvars.

cat terraform.tfvars

aws configure --profile terraform

export AWS_PROFILE=terraform


Configure kubectl to Connect to the EKS Cluster

In the Terraform configuration, we output a configuration file for kubectl:

terraform output kubeconfig
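
This works because the configuration renders a kubeconfig that delegates authentication to aws-iam-authenticator. A sketch of the output definition, assuming it lives in the root outputs.tf (the exact template in the repository may differ):

# outputs.tf (sketch): kubeconfig that authenticates through aws-iam-authenticator
output "kubeconfig" {
  value = <<KUBECONFIG
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: ${module.eks_cluster.eks_endpoint}
    certificate-authority-data: ${module.eks_cluster.eks_certificate_authority}
  name: eks
contexts:
- context:
    cluster: eks
    user: aws
  name: eks
current-context: eks
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args: ["token", "-i", "${var.eks_cluster_name}"]
KUBECONFIG
}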


Add Output of "terraform output kubeconfig" to ~/.kube/config-devel

terraform output kubeconfig > ~/.kube/config-devel

export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel


Verify kubectl Connectivity

kubectl get namespaces

kubectl get services


Allow EKS to Add Nodes by Applying the aws-auth ConfigMap

terraform output config_map_aws_auth > yaml/config_map_aws_auth.yaml

kubectl apply -f yaml/config_map_aws_auth.yaml
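
Behind the scenes, config_map_aws_auth is the standard aws-auth ConfigMap that maps the worker nodes' IAM role into Kubernetes RBAC, rendered as a Terraform output. A sketch of that output (var.iam_node_arn is an assumed input carrying the node role ARN):

# (sketch) the aws-auth ConfigMap rendered as a Terraform output
output "config_map_aws_auth" {
  value = <<CONFIGMAPAWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${var.iam_node_arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
CONFIGMAPAWSAUTH
}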

Now You Should Be Able to See Nodes

kubectl get nodes



Working with kubectl in EKS

Deploy the Kubernetes Dashboard

kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml


Deploy Heapster to Enable Container Cluster Monitoring and Performance Analysis on Your Cluster

kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml


Deploy the InfluxDB Backend for Heapster to Your Cluster

kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml

Create the Heapster Cluster Role Binding for the Dashboard

kubectl apply -f \
https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml


Create an eks-admin Service Account and Cluster Role Binding

Apply the Service Account to Your Cluster

kubectl apply -f yaml/eks-admin-service-account.yaml


Apply the Cluster Role Binding to Your Cluster

kubectl apply -f yaml/eks-admin-cluster-role-binding.yaml


Connect to the Dashboard

kubectl -n kube-system describe secret \
$(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')

kubectl proxy

Once kubectl proxy is running, open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ in a web browser to access the dashboard endpoint. Choose Token and paste the output from the previous command into the Token field.


Rolling Back All Changes

Destroy All Terraform Created Infrastructure

terraform destroy -auto-approve


Removing the S3 Bucket, IAM Roles, and the Terraform Account

export AWS_PROFILE=default

aws s3 rm s3://terra-state-bucket --recursive

aws s3api put-bucket-versioning --bucket terra-state-bucket \
--versioning-configuration Status=Suspended

aws s3api delete-objects --bucket terra-state-bucket --delete \
"$(aws s3api list-object-versions --bucket terra-state-bucket | \
jq '{Objects: [.Versions[] | {Key:.Key, VersionId : .VersionId}], Quiet: false}')"

aws s3 rb s3://terra-state-bucket --force

aws iam detach-user-policy --user-name terraform --policy-arn \
arn:aws:iam::aws:policy/AdministratorAccess

aws iam list-access-keys --user-name terraform  --query \
'AccessKeyMetadata[*].{ID:AccessKeyId}' --output text

aws iam delete-access-key --user-name terraform --access-key-id OUT_KEY # replace OUT_KEY with an ID returned by the previous command

aws iam delete-user --user-name terraform


The Terraform and Kubernetes sources for this article can be found on GitHub.

AWS Terraform (software) Kubernetes cluster PostgreSQL

Published at DZone with permission of Ion Mudreac.
