How I Switched My Blog From OVH to Google Container Engine
Follow along one man's journey as he attempts to move his blog to GCE and simplify his deployments with Docker and Kubernetes.
In this short story, I will relate how I migrated my personal website from a classic VM instance to Google Cloud using Kubernetes, Docker, and Nginx.
One of my personal goals was also to have a cloud-deployed website while spending the minimum amount of money.
Long story short, I had been using Docker on several projects for a year and had progressively grown accustomed to the ease of deployment it provides. The issue? I launched my blog in February 2017, and for time and cost reasons, I picked a VPS instance from OVH.
Why OVH? Clearly, it is one of the cheapest IaaS providers out there and quite popular here in France. I have been using it for several projects without any major issues.
OVH has a public cloud offering, OVH Public Cloud. However, it looked immature at the time, both in documentation and in reviews. Another reason I rejected OVH is cloud provider adoption: many experts have turned toward Google Cloud and AWS, and investing my effort in OVH would not give me enough visibility in my job in the short term.
To better help my colleagues and customers adopt the cloud, I decided to eat my own dog food. Among my personal projects, I decided to migrate my blog first, switching it from OVH to Google Cloud (Container Engine).
Here are some interesting articles about the pricing and functionality of the major cloud providers:
My blog was hosted on a VPS (a shared instance at OVH). I had also installed Apache 2, some monitoring and security tools, and Let’s Encrypt to obtain a free SSL certificate.
My blog is not using the classical WordPress. Instead, I am quite fond of static website generators and, more recently, flat/headless CMS.
I am using HexoJS as my static site generator. The main feature I like: when I write an article in Markdown, the blog is regenerated to produce static, quite optimized pages.
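For readers unfamiliar with Hexo, here is a minimal sketch of that workflow, assuming a standard Hexo setup with `hexo-cli` installed (the article name is illustrative). The generated `public/` folder is what later gets baked into the Docker image.

```shell
# Create a new post; Hexo scaffolds a Markdown file under source/_posts/.
hexo new "my-article"
# ... edit the Markdown file ...
# Regenerate the whole static site into ./public.
hexo generate
```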
Switching From a Legacy Deployment to the Cloud
Here is how I proceeded to migrate my website.
Create My Google Cloud Account
Yes, we have to start from the beginning, so I created a new Google Cloud account. Creating one is rather easy, but something surprised me: it was impossible for me to pick an individual account.
It’s even in the Google FAQ.
I’m located in Europe and would like to try out Google Cloud Platform. Why can’t I select an Individual account when registering?
The reason (thanks, EU) is dumb: In the European Union, Google Cloud Platform services can be used for business purposes only.
For your information, in Switzerland, the limit is lifted.
Interestingly enough, the free trial on Google Cloud has been expanded to $300 for one year.
Discover Google Cloud
Well, the UI is easy to manipulate, even with this nagging collapsing menu on the side.
The documentation is quite abundant, but I found a few major issues:
- Lack of pictures and schema: Most concepts are described only in text. Fortunately, some very kind people made great presentations (here and here).
- Copy/paste from the Kubernetes website: Yeah, most of the documentation can be found on the Kubernetes site, logically enough.
- Lack of information and use cases for some examples, such as using this damn Ingress. Why are people not providing Gists?
I created a cluster with two VM instances, 0.6GB of RAM, and 1 core. Indeed, I wanted to play with the load balancing features of Kubernetes.
Replicate My Server Configuration as a Docker Container
The easiest and most fun part has been reproducing my server configuration with Docker and including an evolution. For example, I wanted to switch from Apache 2 to Nginx.
For the first solution I created, I used a ready-made (and optimized) container image for Nginx and modified my build script to generate the Docker image. The generated website is already integrated into the Docker image.
FROM bringnow/nginx-letsencrypt:latest
RUN mkdir -p /data/nginx/cache
COPY docker/nginx/nginx.conf /etc/nginx/nginx.conf
COPY docker/letsencrypt /etc/letsencrypt
COPY docker/nginx/dhparam /etc/nginx/dhparam
COPY public /etc/nginx/html
I made several tests using the docker run command to check the configuration on my own machine.
docker run --rm -i -t --name nginx us.gcr.io/sylvainleroy-blog/blog:latest
Hosting My Docker Image
My second question was where to store my Docker image.
Creating my own registry? Using a cloud registry?
I used two different container registries in my tests.
First is Docker Hub.
What I appreciate most about Docker Hub is that I can delegate the creation of my Docker images to the Hub by triggering a build from GitHub. The mechanism is quite simple to enable and really convenient. Each modification of my Dockerfile triggers a build that automatically creates my Docker image!
Here is a small drawing to explain it:
And some part of the configuration can be found here: Docker Builds Configuration
However, Google Cloud also offers its own container registry (Google Container Registry), which makes Docker Hub somewhat redundant. I kept the Google registry to use with CircleCI.
Therefore, for the time being, I am storing my Docker image on Google Cloud...
With this command...
gcloud docker -- push us.gcr.io/sylvainleroy-blog/blog:0.1
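For completeness, the push is only the last step. A sketch of the full local build-and-push, assuming the Dockerfile sits at the repository root (the project name is illustrative):

```shell
# Build the image with the registry path as its tag, then push it to GCR.
docker build -t us.gcr.io/sylvainleroy-blog/blog:0.1 .
gcloud docker -- push us.gcr.io/sylvainleroy-blog/blog:0.1
```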
The Cloud Migration
I have only used the GCloud CLI to perform the migration operations.
Install Google SDK
Everything goes smoothly, but don’t forget to install Kubernetes CLI.
gcloud components install kubectl
I had a problem with the CLI: it could not see my new projects (only some of them), and I had to authenticate again.
gcloud auth login
And perform a new login to see the update.
Don’t forget to also add your cluster credentials using the GUI instructions (button connect near each cluster).
gcloud container clusters get-credentials --zone us-central1-a blog
Understanding the Concepts of Pod and Deployment
It took me time to understand what deployments and pods were. Coming from Docker and Docker Compose, I could not grasp the concepts at first.
That is one of my concerns with Kubernetes: some of the technical terms are poorly chosen and do not really help you understand what is behind them.
Well, I finally created a deployment to run replicated instances of my Docker image. The deployment file basically declares that it requires my previous Docker image and how many copies I want. The selector and label mechanism are quite handy.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: blog-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
        role: master
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: us.gcr.io/blog/blog:0.9
        ports:
        - containerPort: 80
          name: http
        - containerPort: 443
          name: https
I use this command to create it:
kubectl create -f pod-blog.yml
Automating the Generation, Docker Image Building, and Deployment
I have automated the full cycle of my site generation, Docker building, and container registry and pod reload using CircleCI.
And the good thing is that all these things are free.
After playing with it in my spare time over two weeks, I have the following feedback.
The deployment mechanism and the way rolling updates are performed are impressive time-savers. Some banks still deploy their software manually or semi-automatically with tools like Ansible, and rolling updates are performed awkwardly. Here, Kubernetes deploys the new version in the background, checks its state (roughly), and, if the conditions are met, switches from the old version to the new one. I am using this mechanism to bench my Docker images and push new versions.
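A rolling update boils down to very little at the CLI. A sketch, assuming the deployment and container names from the manifest above (the new tag is illustrative):

```shell
# Point the deployment's "nginx" container at a new image tag;
# Kubernetes rolls pods over one by one.
kubectl set image deployment/blog-deployment nginx=us.gcr.io/blog/blog:0.10
# Watch the rollout until all replicas run the new version.
kubectl rollout status deployment/blog-deployment
# If something goes wrong, roll back to the previous revision.
kubectl rollout undo deployment/blog-deployment
```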
Load Balancing Mess
I had to struggle a lot to set up my load balancer. Well, not at the beginning: Kubernetes and GCloud describe precisely how to set up a Level-4 load balancer. It takes a few lines of YAML, and it was fine. However, I had huge difficulties when I decided to switch to TLS and set up my HTTPS connection with Let’s Encrypt.
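For reference, those few lines of YAML for a Level-4 load balancer are simply a Service of type LoadBalancer. A sketch matching the labels of the deployment above (the service name is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: blog-service
spec:
  type: LoadBalancer   # GKE provisions a Google network (L4) load balancer
  selector:
    app: nginx
    tier: frontend
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```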
I ran into several difficulties:
- How do I register my SSL certificate on a Docker container that is not yet deployed?
- What is a NodePort? And how does it differ from a ClusterIP, a load balancer, and an Ingress?
- Where should I store my certificate? In the GCloud configuration or in my NGINX?
- Why is Ingress not working with multiple routes?
To address these issues, I found the following temporary solutions:
- I am using Certbot/Let’s Encrypt certification using DNS. That way, I can generate my certificates “offline”.
- I am not sure about the definition of NodePort. Either I need a load balancer for a single container in my pod, or I simply open the firewall. These concepts, introduced with Kubernetes, are still obscure for me, even after several readings.
- I made the decision to implement my HTTPS load balancing by modifying my NGINX configuration to store the certificate and rely on a Level 4 load balancer to dispatch the flow.
- I tried really hard to make Ingress work (the Level-7 LB), but even the examples were not working for me ("impossible to map the port number 0" errors), and it was really badly documented.
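The NGINX-side TLS termination behind a Level-4 load balancer can be sketched like this; the server name and certificate paths are illustrative, and the paths assume the layout of the Dockerfile shown earlier:

```nginx
# Minimal sketch: TLS is terminated inside NGINX itself, so the L4 load
# balancer only has to forward TCP traffic on port 443.
server {
    listen 443 ssl;
    server_name blog.example.com;

    # Certificates generated "offline" with Certbot and baked into the image.
    ssl_certificate     /etc/letsencrypt/live/blog.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;
    ssl_dhparam         /etc/nginx/dhparam;

    root /etc/nginx/html;
}
```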
The documentation about persistent volumes is imprecise in both Kubernetes and GCloud, and it shows important differences even between versions.
You have many possibilities:
- Use a PersistentVolume and a PersistentVolumeClaim and attach them to your containers
- Generate a volume directly from your deployment file
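For the first option, a minimal sketch of a claim and of how a deployment would mount it; the names and sizes are assumptions:

```yaml
# A 10Gi claim; on GKE a persistent disk is provisioned to back it
# (assuming a default storage class is configured).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

# In the deployment's pod spec, the claim is referenced and mounted:
#   volumes:
#   - name: blog-data
#     persistentVolumeClaim:
#       claimName: blog-data
#   containers:
#   - name: nginx
#     volumeMounts:
#     - name: blog-data
#       mountPath: /data
```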
Another issue I ran into was that my Docker container was failing (and the pod with it) because the persistent volume that gets created is never formatted.
But why?
Indeed, your deployment file has properties to set the required partition format, but no formatting is ever performed.
And therefore I had the following issues:
- How to mount something unformatted?
- How to mount something unformatted in a container of the pod without using the deployment?
- Why is there so little documentation in Google Container Engine (in comparison with Google Compute Engine)?
The recommended solution is to create a VM instance by hand using Google Compute Engine, attach the disk to the instance, mount it manually, and trigger the formatting. WTF.
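A sketch of that "by hand" workaround; the disk name and zone are illustrative, and it assumes a temporary VM (here `temp-vm`) already exists in the same zone:

```shell
# Create the persistent disk and attach it to a throwaway VM.
gcloud compute disks create blog-disk --size 10GB --zone us-central1-a
gcloud compute instances attach-disk temp-vm --disk blog-disk --zone us-central1-a
# Format it once over SSH (attached disks show up under /dev/disk/by-id/google-<name>).
gcloud compute ssh temp-vm --zone us-central1-a \
  --command 'sudo mkfs.ext4 -F /dev/disk/by-id/google-blog-disk'
# Detach it; the disk can now be mounted by a pod.
gcloud compute instances detach-disk temp-vm --disk blog-disk --zone us-central1-a
```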
If you have a better way to handle the issue, I am really interested!
After a month of deployment, I haven’t spent a buck. My page response time decreased from 3.4s to 2.56s, and I am no longer waking up during the night, my eyes full of horror, thinking about how to reinstall the site. I only have a container to push.
I am not yet using the Kubernetes UI and I don’t yet see the necessity. The CLI offers almost everything.
Cleaning up a cluster, its pods, and its deployments requires several steps and could perhaps be simplified.
One very important aspect of my project was also to decrease my bill to host the site.
Currently, here is my bill for 1,600 visits per month:
- I have a private GitHub repository (~$7/month).
- I am using the free tier of CircleCI, which covers my private GitHub repository and enough builds.
- Docker Hub is free for any number of public repositories and 1 private Docker repository.
- I am using the free tier of Google. I spent $1 in one month and the bill is shared between my blog and my other projects.
- I have a cluster of 2 VMs for my blog
Compare that to the 79€/year for my VPS.
Published at DZone with permission of Sylvain Leroy. See the original article here.