A Cloud-Native Starter Kit
For those thinking of starting with cloud native, here's a quick guide to get you going.
I have drawn my inspiration from articles published on this subject. While I was learning, I had to move around a bit to get the setup, the steps, and the small workarounds right. This article summarizes all of that in one place and shows the steps with screenshots and console output. I hope this will be of some help!
Setup
Install Minikube locally. Details are available at https://kubernetes.io/docs/tasks/tools/install-minikube/
I downloaded the Minikube installer exe and ran it locally on Windows 10 Home; I also installed kubectl 1.18.0 and enabled Hyper-V
Important commands:
- ‘kubectl version --client’
- ‘minikube start --driver=hyperv’ (subsequent starts can simply use ‘minikube start’)
When we run the command for the first time, we will see the VMs being created (one for the control plane and one worker node, with a memory allocation of 2 GB each and a virtual disk space of 20 GB)
‘minikube status’ - output below
C:\windows\system32>minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
‘minikube dashboard’ - output below
Opens the dashboard with default views of Nodes, Services, Storage, Cluster Roles, and Namespaces
Install Docker Desktop for Windows 10 (requires Windows 10 Professional or Enterprise). Do not try the hacks for making it work on Windows 10 Home, as the results may be unpredictable. Instead, install Docker Toolbox, which works with Windows 10 Home (https://docs.docker.com/toolbox/toolbox_install_windows/); download the latest version, 19.03.1. After installing Docker Toolbox, right-click the desktop icon and choose ‘Run as Administrator’. It opens the terminal, runs the initial setup, and leaves you at the default command prompt where you can run Docker commands
At startup, you might get an error saying that a VirtualBox check failed. All you need to do is the following:
- Open the file C:\Program Files\Docker Toolbox\start and look for STEP="Checking if machine $VM exists"
- In the last line of that IF block, where you see the line below, add the --virtualbox-no-vtx-check flag (the highlighting from the original is lost, but the flag is shown in the command). This means that the VT-x check will be bypassed, so the VM can start even with Hyper-V enabled
"${DOCKER_MACHINE}" create -d virtualbox --virtualbox-no-vtx-check $PROXY_ENV "${VM}"
If there is a problem at startup (the next time you open it) with a message about a certificate error or being unable to reach the server, you can do the following:
- Run "docker-machine ls" to list the machines
- Run "docker-machine rm -f default" to remove the machine
- Recreate it with "docker-machine create -d virtualbox default"
- Close the session and re-open
Important Commands to Check the Installation
- ‘docker-machine version’
- ‘docker run hello-world’ (to test if the docker toolbox is installed properly)
- ‘docker-machine ls’ (lists the VirtualBox VM that is running Docker).
- $ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.102:2376 v19.03.12
Code Setup and Testing the Above Configuration:
You will find details at the link https://nodejs.org/de/docs/guides/nodejs-docker-webapp/
Write a simple Node.js program:
var express = require("express");
//var cors = require("cors");
var app = express();
//app.use(cors())

// CORS middleware: add the headers before any route handlers
app.use(function (req, res, next) {
    // Website you wish to allow to connect
    res.setHeader('Access-Control-Allow-Origin', '*');
    // Request methods you wish to allow
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE');
    // Request headers you wish to allow
    res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With,content-type');
    // Set to true if you need the website to include cookies in the requests sent
    // to the API (e.g. in case you use sessions)
    res.setHeader('Access-Control-Allow-Credentials', true);
    // Pass to next layer of middleware
    next();
});

app.get("/", function (req, res) {
    res.send("Rudranil's first Node.js application is working well and still strongerrrr");
});

var server = app.listen(8081, function () {
    var host = server.address().address;
    var port = server.address().port;
    // Log where the app is listening (the original computed these but never used them)
    console.log("App listening at http://%s:%s", host, port);
});
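The Dockerfile in the next step copies package*.json into the image before running ‘npm install’. A minimal package.json for this app might look like the sketch below (the name, version, and express version range are assumptions, not from the original):

```json
{
  "name": "node-js-sample1",
  "version": "1.0.0",
  "description": "Sample Node.js app for the cloud-native starter kit",
  "main": "app.js",
  "dependencies": {
    "express": "^4.17.1"
  }
}
```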
In the folder structure you created for your Node program, create a Dockerfile.
Dockerfile: (the usual steps you would run locally: download the Node runtime, install the npm package dependencies, copy the main JS file, expose the listening port, and run ‘node app.js’)
FROM node:12
# Create app directory
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8081
CMD [ "node", "app.js" ]
You will also add the folders/files that you don’t want included in the final build to .dockerignore (e.g., node_modules, as we run ‘npm install’ to download the dependencies anyway)
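For example, a minimal .dockerignore for this project could contain just the locally generated artifacts:

```
node_modules
npm-debug.log
```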
Run the docker commands for Build and Run now
docker build -t rudra1/node-js-sample1 .
If you run ‘docker images’, you will see the new image.
docker run -p <machine-port>:<container_port> -d rudra1/node-js-sample1
So we now have host port 40962 mapped to container port 8081. Next, we have to get the IP address of the Docker Toolbox virtual machine.
Run the curl command to test it or through your browser.
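Using the Toolbox VM IP and the mapped host port from this walkthrough (192.168.99.102 and 40962 — substitute your own values), the check might look like:

```shell
# Print the IP of the Docker Toolbox VM (matches the docker-machine ls output earlier)
docker-machine ip default

# Hit the host port that docker run mapped to the container's 8081
curl http://192.168.99.102:40962
```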
Now that we have tested the Node program, let’s see how it can be called from an Angular app (to give you the end-to-end full-stack flow).
We already have the URL to the application in the container: http://192.168.99.102:40962
Angular application: I am not going into the details of Angular, as you can learn from the tutorials as I did. The HTTP call is made in the getResponseFromServer() method below
import { Component, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { MyserviceService } from './../myservice.service';
import { Observable } from 'rxjs';
import { FormGroup, FormControl } from '@angular/forms';

@Component({
  selector: 'app-new-component',
  templateUrl: './new-component.component.html',
  styleUrls: ['./new-component.component.css']
})
export class NewComponentComponent implements OnInit {
  newComponent = "Rudranil's new component!"
  months = ["Jan", "Feb"];
  todaydate;
  httpdata;
  formdata;
  emailid;
  isavailable = true;

  constructor(private myservice: MyserviceService, private http: HttpClient) { console.log("sss"); }

  ngOnInit(): void {
    console.log("Hello");
    this.todaydate = this.myservice.showTodayDate();
    console.log(this.todaydate);
    this.formdata = new FormGroup({
      emailid: new FormControl("default value")
    });
    this.getResponseFromServer().subscribe(data => {
      this.httpdata = data;
    });
  }

  // The HTTP call to the Node application running in the container
  getResponseFromServer(): Observable<any> {
    //return this.http.get("http://localhost:8081/", { responseType: 'text' });
    return this.http.get("http://192.168.99.102:40962", { responseType: 'text' });
  }

  showData(data) {
    console.log("Hi");
    console.log(data);
    //this.httpdata = JSON.stringify(data);
    this.httpdata = data;
  }

  onClickSubmit(data) {
    alert("Form data" + data.emailid);
    this.emailid = data.emailid;
  }

  myclickfunct(event) {
    alert("Button is clicked for Rudranil")
  }

  changeevent(event) {
    alert("Changed")
  }
}
After you start the Angular dev server and open the URL http://localhost:4200/, you will see the response output of the Node application. Enable the Developer Tools console for debugging.
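If you generated the app with the Angular CLI (an assumption; the original does not say how the project was created), starting the dev server is typically:

```shell
# Serve the Angular app on the default port 4200
ng serve
```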
Publishing the Image to The Minikube One Node Cluster
The ‘docker images’ command shows the images from the local Docker repository/environment. But when we run ‘kubectl create deployment node-sample1 --image=<image_tag>’, the image is pulled from the Minikube repository, not from the local Docker repo. So the command, when executed, will show an error on the Minikube dashboard saying the repository for the image cannot be found
You can look at https://medium.com/bb-tutorials-and-thoughts/how-to-use-own-local-doker-images-with-minikube-2c1ed0b0968 to see how Minikube can work with local Docker images without always pulling from a central registry
a) Run ‘minikube docker-env’; it gives the output below
C:\windows\system32>minikube docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://172.17.246.60:2376
SET DOCKER_CERT_PATH=C:\Users\91983\.minikube\certs
SET MINIKUBE_ACTIVE_DOCKERD=minikube
REM To point your shell to minikube's docker-daemon, run:
REM @FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env') DO @%i
b) Execute the last command to set the environment
@FOR /f "tokens=*" %i IN ('minikube -p minikube docker-env') DO @%i
c) Typing the command ‘docker images’ now will show the images from the Docker daemon in the Minikube environment
d) Run the command ‘docker build -t rudra2/node-sample-2 .’. The image will be created in the Minikube environment. If you run the ‘docker images’ command now, you will see the newly created image
e) Run the command ‘kubectl create deployment node-sample1 --image=rudra2/node-sample-2’. The pod will be visible on the dashboard (but we will have to change the imagePullPolicy)
f) Edit the YAML of the deployment from the dashboard. Change ‘imagePullPolicy: Always’ to ‘imagePullPolicy: Never’. This means Kubernetes will look in the local repo and not a remote repo. The deployment will change from RED to GREEN.
g) Create a service to expose the pod outside the cluster. The service will have a cluster IP, and the pods also have unique IPs of their own. After running the expose command, your new service will be listed
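The original showed the exact commands in a screenshot that is not reproduced here; a likely equivalent, assuming the deployment name node-sample1 and the container port 8081 from the earlier steps, is:

```shell
# Expose the deployment as a NodePort service so it is reachable from outside the cluster
kubectl expose deployment node-sample1 --type=NodePort --port=8081

# List services; the new service appears with a cluster IP and an assigned node port
kubectl get services
```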
h) Run the command ‘minikube service node-sample1’. This will open a browser and show the output
i) OR you can type the URL http://172.17.246.60:31058. The host/cluster server IP 172.17.246.60 is visible in the C:\Users\91983\.kube\config file. The IP can change when you stop/start your apiserver, but it can always be found in the kube\config file and on the dashboard under Nodes -> Internal IP Address.
This is, as they say, the primitive way of making a service accessible to the outside world. If the node IP changes, a new port will be assigned, and every calling application will have to be updated with it. This approach cannot be used in production; for learning/POC/demo purposes, it is fine.
A load balancer or ingress controller is the standard way of exposing a service
- Run the command
C:\windows\system32>minikube addons enable ingress
- Verify the status by running the command
kubectl get pods -n kube-system
- Create the Ingress resource (YAML file)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: node-sample1
          servicePort: 8081
- Run the command “kubectl apply -f sample-ingress.yaml” to create the Ingress resource (set of Rules)
- “minikube ip” will give the IP address to access.
- http://<IP_Address_Above>/ will invoke the service
- We can get public static IPs for long-running applications. Public cloud providers can create public static IPs for use with the Ingress resource
- Multiple rules can be set up on the ingress resource (pointing to multiple services) and served through the same load balancer
j) Scaling the application: you can see the internal structure (YAML) of Deployments, Pods, and Services from the dashboard. Every such object type has a SPEC and a STATUS
Scaling is primarily done by changing the number of replicas.
‘kubectl get rs’ shows the replica sets.
New replicas can be configured either by running the kubectl command or by editing the YAML file directly.
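For example, scaling to two replicas might look like this (the deployment name node-sample1 is taken from the earlier create step):

```shell
# Scale the deployment to 2 replicas
kubectl scale deployment node-sample1 --replicas=2

# Verify: the replica set should now show a desired/current count of 2
kubectl get rs
kubectl get pods
```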
On the Dashboard, we will see the 2 pods.
Running the image on a Kubernetes cluster on a hosted service (Azure cloud)
The basic steps are the following. To start with, you can get the free trial account activated
a) Setting up the remote registry on Azure. We need a place to store the Docker image so that AKS can access it.
I had some problems running this from Docker Toolbox, but it can be run from Docker. I installed the Azure CLI for Windows, opened a command prompt, and executed the following commands:
- “az login” opened a browser and logged me in successfully, but gave the error “No subscriptions found” on the command prompt
You can go for the free trial and give your card number for verification. You won’t be charged unless you upgrade; the amount paid via the merchant site for the free trial is INR 2
When you type “az login” now, the subscription information appears in your command prompt
Now if you go to the HOME link and navigate to Subscriptions, you will see the default subscription
az group create --name dbiTestGroup --location southindia
Navigating to Home-Resource Groups will show the new entry
az acr create --resource-group dbiTestGroup --name dbiTestACR1 --sku Basic
If you navigate to Home-All Resources, you will see the new entry
b) Import the image into the Azure container registry
Login to the acr
az acr login --name dbiTestACR1
TAG a container image for ACR. The image needs to be tagged with the login server of the registry
az acr list --resource-group dbiTestGroup --query "[].{acrLoginServer:loginServer}" --output table
TAG the local image to ACR
docker tag rudra2/node-sample-2 dbitestacr1.azurecr.io/node-sample-2
View the Image in the ACR
“docker images” will show the new tag (third entry from the top)
Before pushing the image, we have to enable the Access Key; otherwise, the ‘docker push’ command will throw an Unauthorized error at the end
Push the image to the Azure Container Registry
docker push dbitestacr1.azurecr.io/node-sample-2
List the images in the Azure Container Registry
az acr repository list --name dbiTestACR1 --output table
c) Build the AKS cluster and set up the application container
az aks create --resource-group dbiTestGroup --name dbiTestCluster --node-count 2 --generate-ssh-keys --attach-acr dbiTestACR1
You might get an error here.
To fix this, we have to add the aks-preview extension (as the capability in question was provided in preview)
az extension add --name aks-preview
Run the command again to create the cluster and you should get a JSON output showing the cluster
az aks create --resource-group dbiTestGroup --name dbiTestCluster --node-count 2 --generate-ssh-keys --attach-acr dbiTestACR1
To connect kubectl to the Kubernetes cluster
az aks get-credentials --resource-group dbiTestGroup --name dbiTestCluster
To verify that kubectl is connected to the right AKS cluster, run the command below; you will see the details of the nodes on AKS.
kubectl get nodes
Create a YAML file with the following mappings/details to map the container. The full YAML file is bigger than this and will also contain the Deployment details.
containers:
- name: node-sample-2
  image: dbitestacr1.azurecr.io/node-sample-2
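A fuller manifest might look like the sketch below. This is a sketch only, along the lines of the AKS tutorial linked at the end; the labels, replica count, and LoadBalancer service type are assumptions, not taken from the original:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-sample-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-sample-2
  template:
    metadata:
      labels:
        app: node-sample-2
    spec:
      containers:
      - name: node-sample-2
        image: dbitestacr1.azurecr.io/node-sample-2
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: node-sample-2
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8081
  selector:
    app: node-sample-2
```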
Run the command “kubectl apply -f <above_YAML_File>”
You can run “kubectl get service node-sample-2” for the IP
Details can be found at https://docs.microsoft.com/en-in/azure/aks/tutorial-kubernetes-deploy-application