Hosting Static Blog on Kubernetes

In my last three blog posts, we focused on creating a Kubernetes cluster you can use for your own personal computing needs. But what good is a Kubernetes cluster if we’re not using it to run applications? Spoiler alert: not much.

Let’s make your Kubernetes cluster worth the cash you’re paying and get some applications running on it. In this post, we’ll walk through deploying your first application to Kubernetes: a static blog.

A static blog makes for a perfect first application because it is dead simple. We can focus entirely on writing Kubernetes configuration and won’t need to worry about application logic.

A static website like a blog also has commonly accessible metrics for measuring success: request response time and uptime. In a later blog post, we’ll establish an SLO for these metrics and add monitoring to ensure we’re meeting our SLO. We’ll also measure how any changes to this application’s code or Kubernetes configuration impact these metrics.

Finally, it’s easy to customize a static blog for your own use case. While I doubt you’ll want to host this specific blog on your Kubernetes cluster (unless you want to help me out with hosting :P), it is trivial to swap out a container image serving my blog with a container image serving your blog (or any other static website). The Kubernetes configuration will stay almost exactly the same.


Some Background

With our goal in place, we can start thinking about what Kubernetes components we must define to robustly serve our static website. Our initial use case requires only two high-level Kubernetes components: a deployment and a service. A deployment is the recommended method for declaring a desired collection of pods. A pod is the smallest deployable unit of computing that Kubernetes creates and manages; it’s a group of one or more containers with shared storage and networking. For this application, our pods consist of a single nginx container responsible for serving our static site on port 80.

A service is an abstraction through which Kubernetes provides a consistent way to access a set of pods. A service is necessary because pods are ephemeral: you should never communicate with a specific pod, because it could be restarted, moved, or killed at any moment.

If you’re completely new to Kubernetes, see these great tutorials from Katacoda for some foundational hands-on experience.

Let’s Do It!


We can now start writing our Kubernetes configuration files. There are many fancy ways to generate Kubernetes configuration, but for now, we’ll keep everything simple with plain YAML. Create a file called bundle.yaml and add the following:
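A minimal deployment along these lines might look like the sketch below, assuming the stable apps/v1 Deployment API. As described next, replace YOUR_IMAGE_HERE with your own container image:

```yaml
# Deployment: run two replicas of a pod serving our static site.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  labels:
    app: blog
spec:
  replicas: 2
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          # Replace with a container image that serves your static
          # site on port 80 (e.g. an nginx-based image).
          image: YOUR_IMAGE_HERE
          ports:
            - containerPort: 80
```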

This code block tells Kubernetes there should be a deployment named blog, consisting of two pods with the label app: blog. The deployment constructs these pods from a template, which specifies that each pod contains an instance of the YOUR_IMAGE_HERE container image and exposes port 80. Be sure to replace YOUR_IMAGE_HERE with the name of your container image; for this example to work properly, a container running your image should serve your static site on port 80. This blog uses the docker.io/mattjmcnaughton/blog container image. Here’s the Dockerfile. Please leave a comment if you would like a tutorial on constructing the container image.

We’ve configured the deployment. Now we just need to add the following section to configure the service:
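A matching service definition, appended to bundle.yaml after a `---` document separator so both objects live in one file, might look like this sketch:

```yaml
---
# Service: expose the blog pods through a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: blog
spec:
  type: LoadBalancer
  # Route traffic to all pods carrying the label app: blog.
  selector:
    app: blog
  ports:
    # Forward port 80 on the load balancer to port 80 on the pod.
    - port: 80
      targetPort: 80
```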

This section of the code tells Kubernetes there should be a service named blog. The service has the type LoadBalancer, which means Kubernetes allows access to the specified set of pods via an external load balancer provided by our cloud provider. On AWS, Kubernetes will create an ELB and route all inbound traffic hitting the ELB to the pods matched by our service’s selector; our service selects all pods with the label app: blog. Finally, we instruct our service to forward port 80 on the load balancer to port 80 on the pod.

Finally, run kubectl apply -f bundle.yaml, which instructs Kubernetes to perform the actions necessary such that the objects defined in bundle.yaml exist. Congrats, you’re running an application on Kubernetes!


After waiting a minute or two, run kubectl get deployments. You should see a deployment named blog with a DESIRED value of 2, and a CURRENT value between 0 and 2, depending on how many pods the deployment has finished creating from the template. To see these pods, run kubectl get pods. You should see two pods named blog-SOME_UID with a STATUS of Running.

Next, run kubectl get svc. You should see a service named blog with an EXTERNAL-IP. If you navigate to that EXTERNAL-IP, you should see your website. You’ll likely want to set up some additional DNS to direct requests for YOUR_HOSTNAME to this external IP address. For now, we’ll manage this outside of Kubernetes, so I’ll leave it to you to implement this configuration however works best for you. I use an A record with an Alias Target in Route 53.

Looking to the Future

Again, congratulations! You now have a “production” application running via Kubernetes. Right now, it’s very simple and fails to take advantage of much of what makes Kubernetes and Cloud Native interesting and special. But that’s just for now. In future blog posts, I hope to explore a whole bunch of extensions and enhancements to this simple project. Just off the top of my head, I have the following ideas:

  1. Create a Helm chart for this application, and use Helm for deployment. This change allows you all to reuse this work without having to manually edit YAML files with your personal values. It also supports a more robust deploy process than updating a static file and running kubectl apply.
  2. Define an SLO for our static site based on our twin objectives of low response time and high uptime. Add application monitoring to ensure our static site continuously meets said SLO.
  3. Properly configure RBAC for this application, ensuring it has the least possible permissions necessary to function. Verify our success with a tool like kube-hunter.
  4. Add centralized logging to provide easy visibility into any potential issues.
  5. Enable SSL connections to our static site via Let’s Encrypt.
  6. Create a CI/CD pipeline for testing (i.e. container image vulnerability scanning, static analysis, etc.), and deploying our application.

While we’ll most likely use this static site as the basis for these explorations, the majority of the learnings will be applicable to any application you deploy on Kubernetes.

If there are any additional topics you’d like me to cover, please leave a comment! Additionally, any feedback on my existing Kubernetes blog posts would be greatly appreciated, specifically around the level of specificity. Am I explaining too much? Too little? Looking forward to hearing from you, and to continuing to experiment with Kubernetes and Cloud Native together.
