OpenShift Quick Start
Getting microservices onto a service like OpenShift can make life a bit easier. Let's take a look at how to get your microservices solution up and going in OpenShift.
Applying a microservices-style architecture is a popular option for achieving scalability, agility, and longevity. As experienced practitioners will tell you, one of the hardest and most important elements for success is implementing a platform that automates the acquisition of computing resources, deployment, scaling, and recovery (i.e., DevOps).
Conceptually, the target platform should resemble the cloud-based platforms already on the market. These offerings differ slightly in their technology features, but the bottom line is that they all provide a way to manage computing resources, builds, and deployments in an automated fashion. They can all be categorized as PaaS (Platform as a Service) solutions.
Our previous blog in the series introduced Red Hat’s OpenShift solution, which provides a way for enterprise teams to implement their own PaaS. Essentially, it sits atop the Docker-based Kubernetes platform to provide a ready-to-use DevOps platform.
This blog introduces two hands-on exercises (taken from our OpenShift Course) that walk you through the following tasks:
- Installing OpenShift locally
- Adding a Container with an API service to a Pod
Unfortunately, it will take more than this quick-start blog to get OpenShift installed and enabled across an enterprise. That said, developers, system admins, and anyone working on or responsible for the platform will benefit from understanding how to get OpenShift up and running on a local machine, as shown in this blog.
Here are some basic elements you should know:
- Cluster – Represents and manages Nodes. Nodes are physical or virtual server(s).
- Node – A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster.
- Pod – A Pod is the basic building block of Kubernetes, the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.
- Service – A Kubernetes Service is an abstraction that defines a logical set of Pods and a policy by which to access them, which is sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector.
- Controllers – Higher-level abstractions (Deployments, Jobs, etc.) that create and manage the basic elements above.
Detailed Kubernetes Concept Documentation – https://kubernetes.io/docs/concepts
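The concepts above can be made concrete with a minimal manifest sketch. The names, labels, image, and ports below are illustrative placeholders, not artifacts from this exercise:

```yaml
# A single-container Pod; the image name is illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: khs-api-pod
  labels:
    app: khs-api          # the Service below selects on this label
spec:
  containers:
    - name: api
      image: node:6
      ports:
        - containerPort: 8080
---
# A Service exposing any Pod carrying the app=khs-api label.
apiVersion: v1
kind: Service
metadata:
  name: khs-api-service
spec:
  selector:
    app: khs-api
  ports:
    - port: 80
      targetPort: 8080
```

The Service does not reference the Pod by name; it matches on the label selector, which is what lets Pods be replaced or scaled without reconfiguring their consumers.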
1. Install OpenShift Origin locally, following the installation instructions for your operating system.
2. Start the cluster from a command window with the `oc cluster up` command.
3. Access the web admin console from a browser at https://127.0.0.1:8443. You can accept the unsecured HTTPS connection warning and proceed.
4. The Admin Console will appear:
You are now ready to add, build, and run a container on the platform. You can upload a Docker/OpenShift-configured image (which will be the subject of our next blog), or use one of the predefined Source-to-Image (STI) templates.
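For context on the upload-an-image route, a minimal Dockerfile for a Node.js service might look like the following sketch. The file names are placeholders, and the OpenShift-specific image configuration is covered in the next blog:

```dockerfile
# Illustrative Dockerfile for a small Node.js API; server.js is a placeholder.
FROM node:6

WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .

# OpenShift runs containers as a random non-root UID, so avoid
# writing to root-owned paths and listen on a non-privileged port.
EXPOSE 8080
CMD ["node", "server.js"]
```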
The next step will walk through how a Node.js API can be added and managed by the platform.
Spinning Up an API Service Project in a Pod
The following steps create an OpenShift project from a predefined Node.js Source-to-Image template. In this exercise, you create a project connected to a Git repository; OpenShift will then build and deploy an image using the predefined Node-based source-to-image mechanism.
1. With the web admin console open, select the My Project project and click Add to Project.
2. Select the Node JS option, choose “V6 – Latest” from the drop-down, and click Next.
3. Input the name khs-example-node-api and the repo URL (https://github.com/in-the-keyhole/khs-example-node-api), then click Create and go back to the overview.
4. From the Overview tab, expand the deployment to display the Pod’s status. It might take a bit for the build to complete. When the blue status circle appears, the container instance is up and ready for requests.
5. Exercise the deployed container instance by opening its route URL in a browser.
You now have an OpenShift platform installed locally. Play around with the admin console, scale Pods up and down, and peruse the console’s capabilities.
We will continue this OpenShift training blog series next with the following blogs:
- Managing Docker Containers with OpenShift and Kubernetes
- Scaling Pods and Managing Cluster with the Command Line Interface
- Continuous Build and Deploy with Jenkins 2 Pipelines
- Using an STI (Source to Image) Utility to Create and Deploy Spring Boot Java Image
Published at DZone with permission of David Pitt , DZone MVB. See the original article here.