Deploy Elasticsearch with Kubernetes on AWS in 10 Steps
Deploying Elasticsearch on AWS doesn't get any easier than with this detailed, step-by-step tutorial that shows you all the ropes.
Kubernetes, aka "K8s", is an open-source system for automating deployment, scaling and management of containerized applications. In this tutorial, I will show how to set up a Kubernetes cluster and deploy an Elasticsearch cluster on it in AWS. A similar setup should also work for GCE and Azure.
Configuring Kubernetes on AWS
Before we get started, you should have administrative access to the following AWS services: S3, EC2, Route53, IAM, and VPC.
1. Installation: I will be showing the installation steps for a Linux-based CLI. If you are on a different machine, follow the tool links below to get installation instructions for your OS.
First, we will install the AWS CLI so we can access AWS from the command line. If you already have Python and pip on your system, you can run this command:
pip install awscli --upgrade --user
Next, we will install Kops, a command-line tool that offers an opinionated approach to setting up a production-grade K8S cluster. We will install the Kops binary directly from GitHub.
wget -O kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
And finally, we will install kubectl, the CLI for managing a K8S cluster (if you have used Docker, it plays a similar role to the docker CLI). We can install the latest stable release with the following command:
wget -O kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Note: You can also run the Kubernetes cluster and the setup we are going through in this tutorial on your own machine with minikube.
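If you want to try the same flow locally before touching AWS, a minimal sketch with minikube (assuming minikube and kubectl are already installed) looks like this:
# Start a single-node local Kubernetes cluster
minikube start
# Verify kubectl can talk to the local cluster
kubectl get nodes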
2. IAM User Creation: In order to build clusters within AWS, we'll create a dedicated IAM user for kops. This user requires API credentials in order to use kops. You can create the user and assign credentials using the AWS console UI. The kops user will require the following IAM permissions:
- AmazonEC2FullAccess
- AmazonRoute53FullAccess
- AmazonS3FullAccess
- IAMFullAccess
- AmazonVPCFullAccess

Alternatively, you can do the same from the CLI by applying the commands below.
aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops
Note the SecretAccessKey and AccessKeyID of kops.
Now, let's configure the AWS CLI to use these credentials with aws configure.
Confirm that the user you just created is in the list with aws iam list-users.
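As a quick sketch, that credential setup and check might look like the following (the prompts are what the AWS CLI prints; the values you enter are the kops keys noted above, shown here only as placeholders):
aws configure
# AWS Access Key ID [None]: <AccessKeyID of kops>
# AWS Secret Access Key [None]: <SecretAccessKey of kops>
# Default region name [None]: us-east-2
# Default output format [None]: json

# Confirm the kops user shows up
aws iam list-users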
We will also need to export the AWS credentials as the following environment variables for the kops CLI to be able to use these.
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
If you are using Kops 1.6.2 or later, then DNS configuration is optional. Instead, a gossip-based cluster can be easily created. The only requirement to trigger this is to have the cluster name end with .k8s.local.
Configuring DNS
If you have already hosted your domain via AWS and plan to use it, you do not need to do anything. Alternatively, if you plan to use a sub-domain of your hosted domain, you will have to create a second public hosted zone for that sub-domain. For this tutorial, we are going with a private hosted zone. You need to set up a private hosted zone with any name of your choice and use that name when creating the Kubernetes cluster. For details on configuring DNS, check this out.
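If you prefer to stay on the CLI for this step too, a private hosted zone can be created with aws route53 create-hosted-zone. Here is a minimal sketch (the zone name and VPC ID are placeholders you would replace with your own; the --vpc switch is what makes the zone private):
aws route53 create-hosted-zone \
    --name <your-private-zone-name> \
    --vpc VPCRegion=us-east-2,VPCId=<your-vpc-id> \
    --caller-reference my-private-zone-$(date +%s)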
3. Create S3 bucket: In order to store the state and representation of our K8S cluster, we need to create a dedicated S3 bucket for kops to use. This bucket will become the source of truth for our cluster configuration.
aws s3api create-bucket \
    --bucket <your-unique-bucket-name> \
    --region us-east-1
Note: If you are provisioning your bucket in a region other than us-east-1, in addition to setting the --region switch to your desired region, you will need to add a LocationConstraint for the same region. As an example, the following is the command for creating the bucket in the us-west-1 region.
aws s3api create-bucket \
    --bucket <your-unique-bucket-name> \
    --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1
You might also want to enable versioning on your S3 bucket with the following command, in case you ever need to revert to or recover a previous state store.
aws s3api put-bucket-versioning \
    --bucket <your-unique-bucket-name> \
    --versioning-configuration Status=Enabled
4. Creating our first Kubernetes cluster: Now we're ready to start creating our first cluster! Let's first set up a few environment variables to make this process easier. In case you skipped the DNS configuration (after step 2), suffix .k8s.local to the value of NAME.
export NAME=myfirstcluster.example.com
export KOPS_STATE_STORE=s3://your-bucket-name
We will also need to note which availability zones are available to us. In this example, we will be deploying our cluster to the us-east-2 region.
aws ec2 describe-availability-zones --region us-east-2
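Kops will also pick up an SSH public key (by default ~/.ssh/id_rsa.pub, as you can see in the setup log further below) so you can SSH into the cluster nodes later. If you don't have one yet, a quick sketch to generate it:
# Create a default RSA key pair with an empty passphrase
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""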
If you are using a public hosted zone, create the cluster with the following command:
kops create cluster \
    --zones us-east-2c \
    --node-count 3 \
    ${NAME}
If you are using a private hosted zone, add the --dns private flag:
kops create cluster \
    --dns private \
    --zones us-east-2c \
    --node-count 3 \
    ${NAME}
This command should print the setup log for your K8S cluster. Give the cluster some time to come up, as it creates new EC2 machines for the master and minion nodes.
[ec2-user@ip-172-31-35-145 test]$ kops create cluster \
> --dns private \
> --zones us-east-2c \
> --node-count 3 \
> ${NAME} --yes
I0306 09:45:29.636834 20628 create_cluster.go:439] Inferred --cloud=aws from zone "us-east-2c"
I0306 09:45:29.637030 20628 create_cluster.go:971] Using SSH public key: /home/ec2-user/.ssh/id_rsa.pub
I0306 09:45:29.850021 20628 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet us-east-2c
I0306 09:45:31.118837 20628 dns.go:92] Private DNS: skipping DNS validation
I0306 09:45:46.986963 20628 executor.go:91] Tasks: 73 done / 73 total; 0 can run
I0306 09:45:46.987101 20628 dns.go:153] Pre-creating DNS records
I0306 09:45:47.668392 20628 update_cluster.go:248] Exporting kubecfg for cluster
kops has set your kubectl context to k8s.appbase
Cluster is starting. It should be ready in a few minutes.
Voila! Our K8s cluster should now be up and running.
5. Cluster Validation: All instances created by kops will be built within ASGs (Auto Scaling Groups). ASG instances are automatically monitored and rebuilt if they suffer a failure.
If you want to edit any of the cluster's configurations, you can run the following command:
kops edit cluster ${NAME}
Every time you make a change to the cluster configuration, you will also need to apply it to the cluster by running the following command:
kops update cluster ${NAME} --yes
You should see something similar to this.
[ec2-user@ip-172-31-35-145 examples]$ kops update cluster --yes
Using cluster from kubectl context: k8s.appbase

I0216 05:09:06.074467 2158 dns.go:92] Private DNS: skipping DNS validation
I0216 05:09:07.699380 2158 executor.go:91] Tasks: 73 done / 73 total; 0 can run
I0216 05:09:07.699486 2158 dns.go:153] Pre-creating DNS records
I0216 05:09:07.961703 2158 update_cluster.go:248] Exporting kubecfg for cluster
kops has set your kubectl context to k8s.appbase
Cluster changes have been applied to the cloud.
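As a concrete example of this edit-then-update flow, you could resize the worker instance group; this is a sketch, assuming the instance group is named nodes, as in the validation output below:
# Open the instance group spec in your editor and adjust minSize/maxSize
kops edit ig nodes --name ${NAME}
# Push the change out to the cloud resources
kops update cluster ${NAME} --yes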
Let's validate our cluster to make sure that everything is working correctly.
kops validate cluster
You should see that your cluster is up and ready.
Using cluster from kubectl context: k8s.appbase
Validating cluster k8s.appbase
INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-east-2c  Master  t2.large     1    1    us-east-2c
nodes              Node    t2.medium    3    3    us-east-2c

NODE STATUS
NAME                                        ROLE    READY
ip-172-20-44-33.us-east-2.compute.internal  master  True
ip-172-20-52-48.us-east-2.compute.internal  node    True
ip-172-20-62-30.us-east-2.compute.internal  node    True
ip-172-20-64-53.us-east-2.compute.internal  node    True
Your cluster k8s.appbase is ready
Go check out your new K8s!
A simple Kubernetes API call can be used to check if the API is online and listening. Let's use kubectl to check the nodes.
kubectl get nodes
This will give the details of your nodes and their current status.
[ec2-user@ip-172-31-35-145 elasticsearch]$ kubectl get nodes
NAME                                        STATUS  ROLES   AGE  VERSION
ip-172-20-44-33.us-east-2.compute.internal  Ready   master  1m   v1.8.6
ip-172-20-52-48.us-east-2.compute.internal  Ready   node    3m   v1.8.6
ip-172-20-62-30.us-east-2.compute.internal  Ready   node    2m   v1.8.6
ip-172-20-64-53.us-east-2.compute.internal  Ready   node    4m   v1.8.6
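You can also confirm that the API server itself is reachable; a quick sketch (the printed endpoint URLs will differ for your cluster):
# Print the addresses of the API server and core cluster services
kubectl cluster-info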
A Pod in Kubernetes is an abstraction that represents a group of one or more application containers (such as Docker), and some shared resources for those containers. A pod is deployed on a node. As you need to scale out your application, you can add nodes to your K8S deployment.
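To make the Pod abstraction concrete, here is a minimal sketch of a single-container Pod manifest applied straight from stdin (the hello-pod name and nginx image are illustrative placeholders, not part of this Elasticsearch setup):
# Apply a minimal one-container Pod spec
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello
    image: nginx:alpine
    ports:
    - containerPort: 80
EOF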
To get the available pods, run the following command:
kubectl get pods
This command will list the available pods in our cluster.
[ec2-user@ip-172-31-35-145 ~]$ kubectl get pods
NAME                 READY  STATUS   RESTARTS  AGE
es-5967f5d99c-5vcpb  1/1    Running  0         3h
es-5967f5d99c-cqk88  1/1    Running  0         3h
es-5967f5d99c-lp789  1/1    Running  0         3h
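To dig into any one of these pods, the usual next steps are kubectl describe and kubectl logs; a quick sketch using the first pod from the listing above:
# Show scheduling details, container state, and recent events for the pod
kubectl describe pod es-5967f5d99c-5vcpb
# Print the pod's container logs
kubectl logs es-5967f5d99c-5vcpb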