
Building Highly-Available Apps With IBM Container Service, Kubernetes, and Rancher 2.0


IBM Cloud is a must-watch cloud provider. Let's learn how to build highly-available microservices on IBM Cloud with Kubernetes and Rancher.



This is the first in a four-part series on "Building microservice data lakes with IBM Cloud." In this series, we'll explore how you can use IBM Cloud to deploy microservice applications, store data generated by those microservices in IBM Cloud Object Storage, and then query across that data using IBM Cloud SQL Query.

Building an application in the cloud has never been easier...or harder...

Since the beginning of time (which according to Unix is Jan 1, 1970), server-side developers have been searching for the holy grail of server-side development: a software architecture that scales infinitely, heals automatically, and is always available.

Today, those goals are largely achievable through the use of Docker containers and Kubernetes clusters. Developers can create applications that work together as an ecosystem of microservices, which each perform a specific function. These containers can be turned on and off to accommodate unexpected spikes in server-side traffic, and new servers can be added quickly and easily when the existing cluster runs out of resources. The containers all connect to each other using a VPN that can span data centers across multiple regions all over the world, and containers can even be set up to automatically start up on new servers when they're added.

Of course, this all sounds great, but how does a developer interested in leveraging this technology actually use it?

In this article, we'll dip our toes in the water by creating an IBM-managed Kubernetes cluster and taking a look at how we can deploy applications into that cluster using Rancher 2.0.

Let's get started...

Creating a Kubernetes Cluster on IBM Cloud

Building a highly-available Kubernetes cluster is a challenge, even with the many tools available to developers for creating them. Fortunately for developers working in the IBM Cloud, the IBM Cloud Kubernetes Service provides a tool for creating a working highly-available Kubernetes cluster with just a few clicks.

Start by logging into the IBM Cloud dashboard at https://console.bluemix.net with your IBM Cloud account. If you don't already have one, you can sign up at https://ibm.com/cloud.

You'll create a new Kubernetes cluster by opening up the Catalog from the top navigation menu and typing kubernetes into the search box.

Select the Containers in Kubernetes Clusters tile and you'll be prompted to create your cluster. Click on the Create button to set up a new cluster.

From the Create page, you can either try things out with the Free plan, or you can spin up a production-ready cluster. From here you'll also choose the data center region you'd like to use and, if you're deploying a production-ready cluster, how many worker nodes you'd like to have. When you're happy with the results, give your cluster a name and click Create Cluster to start the servers.

Once you click Create Cluster, it will take a few moments for your cluster to finish deploying; in the meantime, you'll be redirected to the cluster's deployment page. The Access tab there contains a set of commands that you can use to interact with your new Kubernetes cluster. Follow the instructions on the Access tab to set up the IBM Cloud and Kubernetes command-line tools (you'll need them in the next section) and to test the connection to your new cluster.

Once the kubectl get nodes command returns a response listing your worker nodes, your cluster is ready for you to deploy containers into.
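
The exact commands to run are listed on your cluster's Access tab, but as a rough sketch (assuming the IBM Cloud CLI with the Kubernetes Service plugin is installed, and using my-cluster as a stand-in for your cluster's name; the exact subcommands vary slightly between CLI versions):

# Log in to IBM Cloud, then point kubectl at your new cluster
ibmcloud login
ibmcloud ks cluster config --cluster my-cluster

# Verify the connection; you should see your worker nodes listed
kubectl get nodes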

Installing Rancher 2.0

Now that you have a working Kubernetes cluster, it's time to start deploying applications into your cluster. While you could use the kubectl command to accomplish this, tools like Rancher can make managing the operations of a Kubernetes cluster much easier.

Rancher itself is deployed as a Docker container. Run it by executing the following docker command:

docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher

This will start up Rancher and allow connections on ports 80 and 443 of your host machine to pass through to the Rancher container. Once the container has started and the services are active, you can log in by navigating to https://localhost in your web browser.
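
If you want to confirm that the container came up cleanly before moving on, the standard Docker commands work fine here (the container ID below is a placeholder):

docker ps --filter ancestor=rancher/rancher
docker logs <container-id>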

Your first order of business is to create an administrative user and to give that user a strong password.

Next, you'll need to configure the URL that other servers in your cluster will use to access your Rancher instance. It's important that this URL is accessible by all machines in the cluster.

Once you've saved the URL, you'll be taken to the Clusters page. Rancher allows you to manage multiple Kubernetes clusters, so this page will serve as the starting point for all of the clusters you deploy and manage with Rancher.

Rancher 2.0 acts as the master of its own Kubernetes cluster, but since we already have a Kubernetes cluster running in IBM Cloud, let's take a look at how we can add this existing cluster to Rancher.

Importing Your IBM Cloud Kubernetes Service Cluster With Rancher

Now that we have Rancher running and a Kubernetes cluster set up in the IBM Cloud, it's time to "connect the dots" and import our existing Kubernetes setup into Rancher. From the Clusters page, click on the Add Cluster button:

You'll see several options for cloud providers. These options allow you to create a new cluster directly within the Rancher environment. Since we've already created our cluster, let's choose the IMPORT option and bring our existing cluster into the Rancher tool. Give the cluster a memorable name and click on the Create button.

Importing the cluster itself is a simple matter of applying a Kubernetes configuration to your IBM Cloud Kubernetes Service cluster. Run the given kubectl command on your IBM Cloud Kubernetes Service cluster using the command line tools you installed earlier.
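
Rancher generates the exact command for you on the import screen; it generally looks something like the following (the hostname and token here are placeholders, not real values):

kubectl apply -f https://<your-rancher-host>/v3/import/<token>.yaml

If your Rancher instance uses a self-signed certificate, Rancher also offers a variant that skips certificate verification:

curl --insecure -sfL https://<your-rancher-host>/v3/import/<token>.yaml | kubectl apply -f -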

NOTE: At this stage, it's important that your Rancher installation is reachable from the cluster you're importing. Since your cluster is running in IBM Cloud, that generally means your Rancher instance needs to be accessible from the public Internet; if it's only reachable on localhost, you likely won't be able to import your cluster.

You'll know that the configuration has been applied successfully when the Clusters page shows the health of the nodes in your cluster.

Now that we have a connection to our Kubernetes cluster, let's create our first container using Rancher.

Projects, Namespaces, and Pods: Oh My!

Before we go much further, we should take a moment to talk about the basic vocabulary of Kubernetes. If you're already familiar with these concepts, you can skip ahead to the Rancher Projects section.

Docker containers running in Kubernetes are grouped together using a few different layers. In this section, we'll explore those groupings and the terminology we'll need to understand the process of running a container.

Nodes

A Node in Kubernetes is typically a single server resource or compute unit. It represents the physical or virtual server that will be running all of your containers and is used to help visualize and manage the resources that your entire cluster is consuming. Multiple nodes are usually run together in a high-availability environment, with master or API nodes providing access to and management of multiple worker nodes. The worker nodes use all of their resources to run the applications you deploy into your cluster.

Pods

Containers are run in a single logical unit called a Pod. Pods are the smallest unit of deployment and typically run only a single container (although containers that are VERY TIGHTLY COUPLED may be run in the same pod together). Pods are used to abstract out a single component of your application into something that can be deployed, managed, and run in the Kubernetes cluster.

Namespaces

Pods that are related to each other can be grouped together into Namespaces. These namespaces are more than just a logical grouping - they actually affect how containers are networked together, with pods in the same namespace able to reach each other's services by short name rather than a fully-qualified domain name.

Services

By default, pods don't have a stable, discoverable address on the network - a pod's IP can change every time it's rescheduled. To give other pods a reliable way to communicate with a pod, you create a Service for it. Services expose a pod to the cluster network via a local domain name with the format <service_name>.<namespace_name>.svc.cluster.local, and pods within the same namespace can access each other by the service name alone.
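
As a purely illustrative sketch, here's how you might create a namespace, run an application in it, and expose it as a service with kubectl (the names demo and web are assumptions for this example, not anything created earlier in this article):

kubectl create namespace demo
kubectl -n demo create deployment web --image=nginx
kubectl -n demo expose deployment web --port=80

Other pods in the demo namespace can now reach the application at http://web, while pods in other namespaces would use http://web.demo.svc.cluster.local.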

Ingresses

Much like Services expose pods to the local network, Ingresses expose a service to the general Internet. Ingresses are the entry point of the outside world into the VPN that containers use to communicate with each other.
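
As a minimal sketch (assuming the web service from the example above, an ingress controller already running in the cluster, and web.example.com standing in for a real hostname; older clusters use a different apiVersion), an Ingress can be created like this:

kubectl -n demo apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
EOF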

Rancher Projects

The last unit of organization we'll talk about is a Project, which is NOT part of Kubernetes but is a feature of Rancher. Rancher projects are used to control access to resources within Rancher, so any Kubernetes resources that you'd like to manage in Rancher need to be part of a project. For example, if you'd like to manage applications in the kube-system namespace, you'll need to move them into a project to manage those resources. You can also use projects if you have teams of developers and need to finely control who is able to update and manage the resources for a particular project.

Deploying Your First Container With Rancher

Now that you have a sense of the structure of applications in Rancher, it's time to launch your first container. We'll want to launch our container into a Project within our Rancher instance, and a namespace within our Kubernetes cluster. When we created our cluster in Rancher, a new project called "default" was created. Kubernetes also has a default namespace that we can get started with right away. To deploy into our new namespace, select default from the cluster drop-down menu.

This will drop you into the default project, where you'll see a number of menu options. To deploy our application, we'll want to create a workload, which is Rancher's term for a deployed application. When you select the default project, you land on the Workloads tab; you can also click Workloads to navigate to this interface.

From here, you can either import an existing container application using a Kubernetes YAML file (by clicking on the Import YAML button) or use the Rancher container deployment wizard by clicking the Deploy button. Let's try the latter. Click on the Deploy button and you should be taken to the deployment wizard.

Here, you can adjust the settings of your workload before launching it. The image is the Docker image that will be deployed (the image will automatically be pulled from DockerHub if it isn't available locally). There are many settings here you can adjust that are outside of the scope of this article, but for now, we'll leave this as a simple busybox deployment with a name of "test". Click the Launch button and your container will be deployed onto your cluster.
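
If you're curious what the wizard does behind the scenes, a roughly equivalent workload can be created straight from the command line (a sketch only, assuming a reasonably recent kubectl; busybox exits immediately unless you give it something to do, so we tell it to sleep):

kubectl -n default create deployment test --image=busybox -- sleep 3600
kubectl -n default get pods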

Once your container is launched and active, you can perform actions on it by selecting the context menu on the right-hand side of the deployment.

Clicking on Execute Shell will drop you into a shell inside the container and allow you to interact directly with the container's underlying application.

You can also toggle the scaling submenu using the arrow on the right of your deployment, which provides options for scaling your deployment up or down and allows you to monitor or control the individual pods in your deployment.
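
The same scaling can be done with kubectl if you prefer the command line (again just a sketch, reusing the hypothetical test deployment from above; kubectl create deployment labels its pods with app=test):

kubectl -n default scale deployment test --replicas=3
kubectl -n default get pods -l app=test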

Wrapping It Up

In this article, we took a first look at deploying applications into the IBM Cloud Kubernetes Service using Rancher. This should give you a good foundation for getting started with running applications within the IBM Cloud. In the next articles in this series, we'll take a deeper look at how our microservice applications can interact natively with other IBM Cloud services. We'll also learn how to deploy microservice applications within our IBM Cloud Kubernetes Service, deploy a microservice service mesh, store data using IBM Cloud Object Storage, query that data using IBM Cloud SQL Query, execute IBM Cloud Functions from our Kubernetes applications, and connect all of the pieces together seamlessly using Kubernetes-native services.

