(Part 0) Reducing the Cost of Running a Personal k8s Cluster: Introduction

For the last couple of months, I’ve spent the majority of my non-work coding time creating a Kubernetes cluster of my own. My central thesis for this work is that Kubernetes is one of the best platforms for individual developers who want to self-host multiple applications with “production” performance needs (i.e. hosting a blog, a private Gitlab, a NextCloud, etc.). Supporting this thesis requires multiple forms of evidence.

First, we need to show that deploying and maintaining multiple different applications with Kubernetes is doable and enjoyable without quitting our jobs and becoming full-time sysadmins for personal projects. Our previous blog posts on deploying this blog via Kubernetes and setting up monitoring and alerting, as well as much of the work outlined in my personal k8s roadmap, focus on deploying and maintaining multiple different applications via Kubernetes, so I hope that we’ve demonstrated, and will continue to demonstrate, Kubernetes’ power and ease.

Equally importantly, but so far unexplored on this blog, we need to demonstrate that we can run a personal Kubernetes cluster without it breaking the bank. If it cost, for example, $500 a month to run a personal Kubernetes cluster, we’d drastically reduce the segment of developers for whom Kubernetes is an appealing tool. At a minimum, we want to show that choosing Kubernetes is a cost-neutral decision, and at best, we’d like to show there are economic advantages to choosing Kubernetes over other comparable methods of deploying applications.

As such, this blog series focuses on exploring Kubernetes’ cost effectiveness.1

Where are we at?

In part 2 of my series on creating a Kubernetes cluster for personal usage, I outline the steps for creating a Kubernetes cluster using Kops. The steps outlined in that blog post do not deviate from Kops defaults, and as a result we create a Kubernetes cluster with one m3.medium master and two t2.medium nodes. The master has an attached 64GB gp2 EBS volume, and each node has an attached 128GB gp2 EBS volume. Finally, the first application we hosted on Kubernetes was our blog, which included a LoadBalancer service, which in turn created an ELB.2 Fortunately, almost all the AWS resources created by Kubernetes/Kops have a KubernetesCluster tag, which makes it relatively easy to use AWS Cost Explorer to examine all Kubernetes/Kops-created resources.

In total, we have the following costs:

master ec2 (1 m3.medium): (1 * 0.067 $/hour * 24 * 30) = $48.24
nodes ec2 (2 t2.medium): (2 * 0.0464 $/hour * 24 * 30) = $66.82
master ebs (1 64GB gp2): (1 * 64 GB * 0.10 $/GB-month) = $6.40
nodes ebs (2 128GB gp2): (2 * 128 GB * 0.10 $/GB-month) = $25.60
elb (1 classic): (1 * 0.025 $/hour * 24 * 30) = $18.00
total: $165.06
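The rough math above can be sketched as a small cost model. The hourly rates and volume sizes are the figures quoted in this post (current AWS pricing may differ by region and over time):

```python
# Rough monthly cost model for the cluster described above.
# Rates are the figures from this post, not live AWS pricing.
HOURS_PER_MONTH = 24 * 30

ec2_master = 1 * 0.067 * HOURS_PER_MONTH   # 1 m3.medium at $0.067/hour
ec2_nodes = 2 * 0.0464 * HOURS_PER_MONTH   # 2 t2.medium at $0.0464/hour
ebs_master = 1 * 64 * 0.10                 # 64 GB gp2 at $0.10/GB-month
ebs_nodes = 2 * 128 * 0.10                 # 2 x 128 GB gp2
elb = 1 * 0.025 * HOURS_PER_MONTH          # 1 classic ELB at $0.025/hour

total = ec2_master + ec2_nodes + ebs_master + ebs_nodes + elb
print(round(total, 2))  # → 165.06
```

Note that EC2 alone (about $115 of the $165) dominates the bill, which is why it’s the first target for optimization.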

From this rough math, we’re spending around $165 a month to run a personal Kubernetes cluster. If we run this cluster for 12 months, we will end up spending around $2,000. That’s a lot of money!

Fortunately, we can take a number of cost-optimization steps to decrease our AWS bill. We identified three main types of AWS resources on which we’re spending a significant amount of money. In order of total cost, they are EC2 instances, EBS volumes, and ELB load balancers. We will optimize each resource type independently, beginning with our largest expense, EC2 instances, in the next blog post in the series. I can’t wait to explore these cost-reduction solutions together, to make running a personal k8s cluster accessible to as many developers as possible.


1. As a note, some of the cost-reduction ideas proposed by this series are enabled by our choice of AWS as our cloud vendor. It’s possible both that some tips we offer won’t be applicable to other vendors, and also that other vendors may have tips which aren’t applicable to AWS.

2. I’m making the assumption that a personal Kubernetes cluster will have at least one publicly available service.
