Alexa and Kubernetes: Kubernetes Objects of the Alexa Skill (IV)
In this fourth installment, we will teach you how to create the Kubernetes objects you need to run the Alexa Skill in a cluster.
After the previous steps, we have our Alexa Skill properly dockerized. As we are not going to package all the software components (Alexa Skill + MongoDB) together yet, in this fourth step we will set up all the Kubernetes objects of our Alexa Skill using MongoDB Atlas.
Prerequisites
Here, you have the technologies used in this project:
- Node.js v12.x
- Visual Studio Code
- Docker 19.x
- Kubectl CLI
- MongoDB Atlas Account
- Kind
- go >=1.11
Creating the Kubernetes Cluster With Kind
The first thing we need to do is install a Kubernetes cluster on our local machine in order to create all the Kubernetes objects.
For that, we are going to use Kind. Kind is a tool for running local Kubernetes clusters using Docker container “nodes.” Kind was primarily designed for testing Kubernetes itself but may be used for local development or CI.
With Kind, you can create clusters using a specification YAML file.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 3008
    protocol: TCP
  - containerPort: 443
    hostPort: 3009
    protocol: TCP
We can leverage Kind’s extraPortMapping configuration option when creating a cluster to forward ports from the host to an ingress controller running on a node. We will map port 80 of the Kind cluster to port 3008 of our local machine, and port 443 to port 3009. This YAML file, called cluster.yaml, is located in the root folder.
You can create the cluster by running the following commands:
kind create cluster --config cluster.yaml
#Get the Kubernetes context to execute kubectl commands
kubectl cluster-info --context kind-kind
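Once the cluster is created, you can check that the control-plane node is ready:
kubectl get nodes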
If you want to destroy the cluster, just run the following command:
kind delete cluster
Creating the Kubernetes Objects
To run our Alexa Skill on Kubernetes, we are going to create the following Kubernetes objects:
1. Deployment: The deployment is one of the most important Kubernetes objects. A Deployment provides declarative updates for Pods. A Pod (as in a pod of whales or pea pod) is a group of one or more containers with shared storage and network resources and a specification for how to run the containers.
2. Service: An abstract way to expose an application running on a set of Pods as a network service.
3. Ingress: An API object that manages external access to the services in a cluster. It is a layer created on top of the Kubernetes Services, and it requires an Ingress Controller to manage all the incoming requests.
Our Alexa Skill will run as a container managed by our Deployment. We will expose the port of the Alexa Skill through the Kubernetes Service object, and finally we will access the Alexa Skill on the exposed port through the Kubernetes Ingress object.
Deployment
You describe the desired state in a Deployment, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new Pods or to remove existing Deployments and adopt all their resources with new Deployments.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alexa-skill
  labels:
    helm.sh/chart: alexa-skill-1.0.0
    app.kubernetes.io/name: alexa-skill
    app.kubernetes.io/instance: alexa-skill
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: alexa-skill
      app.kubernetes.io/instance: alexa-skill
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alexa-skill
        app.kubernetes.io/instance: alexa-skill
    spec:
      containers:
        - name: alexa-skill
          image: "xavidop/alexa-skill-nodejs-express:latest"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          resources:
            limits:
              cpu: 50m
              memory: 128Mi
            requests:
              cpu: 50m
              memory: 128Mi
          env:
            - name: DB_TYPE
              value: atlas
            - name: DB_HOST
              value: cluster0.qlqga.mongodb.net
            - name: DB_PORT
              value: "27017"
            - name: DB_USER
              value: root
            - name: DB_PASSWORD
              value: root
            - name: DB_DATABASE
              value: alexa
As you can see, our Deployment will have 1 replica. If our Alexa Skill receives a lot of requests, we can increase the number of replicas or create a HorizontalPodAutoscaler Kubernetes object, but this is optional.
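If you want to try autoscaling, here is a minimal HorizontalPodAutoscaler sketch; the replica bounds and CPU threshold are illustrative assumptions, not values from the project:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: alexa-skill # hypothetical name, matching the Deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: alexa-skill # the Deployment defined above
  minReplicas: 1
  maxReplicas: 5 # illustrative upper bound
  targetCPUUtilizationPercentage: 80 # illustrative CPU threshold
Note that CPU-based autoscaling needs a metrics source, such as metrics-server, running in the cluster.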
Let’s walk through the specification of our Deployment:
- The first thing you will notice is that in the containers list we only have one container defined, which is our Alexa Skill.
- The image built in the previous step is set in the containers.image property.
- We expose the container port 3000, which will receive the HTTP requests. The protocol we have to set is TCP.
- We have set the memory of our pod to 128Mi, which means 128 mebibytes. In terms of CPU, we have set it to 50m, measured in millicores.
- In the env object, we set all the environment variables and their values that our container will use.
With that, we have our deployment ready. This deployment will create a pod with a container.
Finally, we have to modify our Node.js Alexa Skill app to read all the information from the environment variables that we have set in the Deployment specification:
// Build the MongoDB connection options from environment variables, falling back to defaults
const connOpts = {
host: process.env.DB_HOST ? process.env.DB_HOST : 'cluster0.qlqga.mongodb.net',
user: process.env.DB_USER ? process.env.DB_USER : 'root',
port: process.env.DB_PORT ? process.env.DB_PORT : '27017',
password: process.env.DB_PASSWORD ? process.env.DB_PASSWORD : 'root',
database: process.env.DB_DATABASE ? '/' + process.env.DB_DATABASE : '',
};
let uri = '';
if (process.env.DB_TYPE === 'atlas'){
uri = `mongodb+srv://${connOpts.user}:${connOpts.password}@${connOpts.host}${connOpts.database}`;
} else {
// eslint-disable-next-line max-len
uri = `mongodb://${connOpts.user}:${connOpts.password}@${connOpts.host}:${connOpts.port}${connOpts.database}`;
}
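As a usage sketch only: assuming the Skill connects with the official MongoDB Node.js driver (the project may use a different client), the resulting URI could be consumed like this:
// Hypothetical sketch: connect using the official MongoDB Node.js driver (3.x API).
// This only illustrates consuming `uri`; it is not the project's actual code.
const { MongoClient } = require('mongodb');

async function connect() {
  const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });
  await client.connect(); // establishes the connection pool
  return client;
}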
With all the changes done, we only have to deploy our Deployment by running the following commands:
#Create the namespace
kubectl create namespace alexa-skill
#Create the Deployment in that namespace
kubectl apply -f deployment.yaml -n alexa-skill
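You can check that the Pod is up and running:
kubectl get pods -n alexa-skill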
Service
The next step is to create the Kubernetes Service. With Kubernetes, you don’t need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods and can load-balance across them.
apiVersion: v1
kind: Service
metadata:
  name: alexa-skill
  labels:
    helm.sh/chart: alexa-skill-1.0.0
    app.kubernetes.io/name: alexa-skill
    app.kubernetes.io/instance: alexa-skill
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
spec:
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
  selector:
    app.kubernetes.io/name: alexa-skill
    app.kubernetes.io/instance: alexa-skill
The important things here are the port, targetPort, and protocol specifications:
1. Port: The port that the Kubernetes Service itself exposes.
2. TargetPort: The port of the container. This property links the container port running in the Pod with the Service.
3. Protocol: The protocol of the port. We will use TCP because this Service will receive HTTP requests.
By default, the type of a Service is ClusterIP, which exposes the Service on a cluster-internal IP. Choosing this value makes the Service reachable only from within the cluster.
With all the changes done, we only have to deploy our Service by running the following command:
kubectl apply -f service.yaml -n alexa-skill
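If you want to test the Service before the Ingress is in place, you can forward its port to your local machine (a quick check, assuming the alexa-skill namespace created earlier):
kubectl port-forward service/alexa-skill 3000:3000 -n alexa-skill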
Ingress
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress requires an Ingress controller. As we can see in the schema at the top of this blog post, we will use the Nginx Ingress Controller.
To install this Ingress Controller on our Kind Cluster, we have to run the following command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
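You can wait until the controller Pods are ready with the kubectl wait command suggested in the Kind ingress guide:
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s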
Once the pods of the Nginx Ingress Controller are up and running, we can create our Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: alexa-skill-ingress
  labels:
    helm.sh/chart: alexa-skill-1.0.0
    app.kubernetes.io/name: alexa-skill
    app.kubernetes.io/instance: alexa-skill
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    # Target URI where the traffic must be redirected
    # More info: https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    # Uncomment the below to only allow traffic from this domain and route based on it
    # - host: my-host # your domain name with an A record pointing to the nginx-ingress-controller IP
    - http:
        paths:
          - path: / # Everything on this path will be redirected to the rewrite-target
            backend:
              serviceName: alexa-skill # the exposed svc name and port
              servicePort: 3000
We can see in the Ingress specification that all the requests to the URI / will be managed by this Ingress. The Ingress is linked to the Kubernetes Service through the serviceName and servicePort properties.
With all the changes done, we only have to deploy our Ingress by running the following command:
kubectl apply -f ingress.yaml -n alexa-skill
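You can verify that the Ingress has been created:
kubectl get ingress -n alexa-skill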
Testing Requests Locally
Postman probably needs no introduction. REST APIs have become the standard way to provide a public and secure interface for your service, but even though REST is ubiquitous, it is not always easy to test. Postman makes it easier to test and manage HTTP REST APIs, and it gives us multiple features to import, test, and share APIs, which will help you and your team be more productive in the long run.
After running your application, you will have an endpoint available at http://localhost:3008. With Postman, you can emulate any Alexa Request.
For example, you can test a LaunchRequest:
{
"version": "1.0",
"session": {
"new": true,
"sessionId": "amzn1.echo-api.session.[unique-value-here]",
"application": {
"applicationId": "amzn1.ask.skill.[unique-value-here]"
},
"user": {
"userId": "amzn1.ask.account.[unique-value-here]"
},
"attributes": {}
},
"context": {
"AudioPlayer": {
"playerActivity": "IDLE"
},
"System": {
"application": {
"applicationId": "amzn1.ask.skill.[unique-value-here]"
},
"user": {
"userId": "amzn1.ask.account.[unique-value-here]"
},
"device": {
"supportedInterfaces": {
"AudioPlayer": {}
}
}
}
},
"request": {
"type": "LaunchRequest",
"requestId": "amzn1.echo-api.request.[unique-value-here]",
"timestamp": "2020-03-22T17:24:44Z",
"locale": "en-US"
}
}
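If you prefer the command line, you can send the same request with curl; the file name here is just an example, assuming you saved the JSON above as launch-request.json:
curl -X POST http://localhost:3008/ \
  -H "Content-Type: application/json" \
  -d @launch-request.json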
Resources
- The Official Node.js SDK Documentation
- Official Alexa Skills Kit Documentation
- Express Adapter Documentation
- Kind Documentation
- Kubernetes Documentation
Conclusion
We now have our Alexa Skill running in a local Kubernetes cluster and using MongoDB Atlas in the cloud. The next step is to package all the Kubernetes objects using Helm.
I hope this example project is useful to you.
You can find the code here.
That’s all folks! Happy coding!