Since I last posted, I’ve been focusing almost all of my “fun coding” time on contributing to the Kubernetes code base. You can see my contributions on my GitHub. I’ve been particularly focusing on the components of the code base owned by sig/node. It’s been rewarding and a great learning experience… but it does mean I haven’t been focusing on adding features to my personal Kubernetes cluster. And since that’s what I mainly blogged about, it also means I’ve been blogging less.
Regularly updating our k8s cluster, and the applications running on it, is one of the most powerful tools we have for ensuring our cluster functions securely and reliably. Staying vigilant about applying updates is particularly important for a fast-moving project like Kubernetes, which releases a new minor version each quarter. This blog post outlines the process we’re proposing for ensuring our cluster, and the applications running on it, remain up to date.
Kubernetes is an incredibly exciting and fast-moving project. Contributing to these types of projects, while quite rewarding, can have a bit of a startup cost. I experienced that startup cost myself after returning to contributing to Kubernetes following a couple of months spent running my own Kubernetes cluster, as opposed to contributing source code. So this post is partially for y’all and partially for future me :)
We’re excited to announce all connections to mattjmcnaughton.com, and its subdomains (i.e. blog.mattjmcnaughton.com, etc.), are now able, and in fact forced, to use HTTPS. After reading this post, we hope you’ll be convinced of the merits of using HTTPS for public-internet facing services, and also have the knowledge to easily modify your services to start supporting HTTPS connections. Why do we care about HTTPS? Offering, and defaulting to, HTTPS is good web hygiene.
So far, most of our posts on this blog have been about the exciting parts of running our own Kubernetes cluster. But, running a Kubernetes cluster isn’t all having fun deploying applications. We also have to be responsible for system maintenance. That need became particularly apparent recently with the release of CVE-2019-5736 on 2/11/19. What is CVE-2019-5736? CVE-2019-5736 is the CVE for a container escape vulnerability discovered in runc.
In our last blog post, we increased the stability of our Kubernetes cluster and also increased its available resources. With these improvements in place, we can tackle deploying our most complex application yet: Nextcloud. By the end of this blog post, you’ll have insight into the major architecture decisions we made when deploying Nextcloud to Kubernetes. As always, we’ll link the full source code should you want to dive deeper.
In our blog series on decreasing the cost of our Kubernetes cluster, we suggested replacing on-demand EC2 instances with spot instances for Kubernetes’ nodes. When we first introduced this idea, we mentioned that this strategy could have negative impacts on both our applications’ availability and our ability to monitor our applications’ availability. At the time, we still converted to spot instances because we believed the savings benefits were worth the decrease in reliability.
In the first blog post in this series, we examined how our previous deployment strategy of running kubectl apply -f on static manifests did not meet our increasingly complex requirements for our strategy/system for deploying Kubernetes’ applications. In the second, and final, post in this mini-series, we’ll outline the new deployment strategy and how it fulfills our requirements. The new system Our new deployment strategy makes use of Helm, a popular tool in the Kubernetes ecosystem which describes itself as “a tool that streamlines installing and managing Kubernetes applications.”
Over the holiday break, I spent a lot of my leisure coding time rethinking the way we deploy applications to Kubernetes. The blog series this post kicks off will explore how we migrated from an overly simplistic deploy strategy to one giving us the flexibility we need to deploy more complex applications. To ensure a solid foundation, in this first post, we’ll define our requirements for deploying Kubernetes’ applications and evaluate whether our previous strategy met these requirements (spoiler alert… it didn’t).
Overall impact In parts one and two of this series, we sought to reduce our AWS costs by optimizing our computing, networking, and storage expenditures. Since this post is the final one in the series, let’s consider how we did in aggregate. Before any resource optimizations, we had the following bill:
master ec2 (1 m3.medium): (1 * 0.067 $/hour * 24 * 30) = 48.24
nodes ec2 (2 t2.medium): (2 * 0.
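The per-instance arithmetic above (count × hourly rate × hours per month) can be sketched in a few lines of Python. This is only an illustration; the `monthly_cost` helper and `HOURS_PER_MONTH` constant are hypothetical names, and the $0.067/hour m3.medium rate is the one quoted in the bill above.

```python
# Approximate a month as 24 hours * 30 days, matching the bill above.
HOURS_PER_MONTH = 24 * 30

def monthly_cost(instance_count: int, hourly_rate: float) -> float:
    """Monthly on-demand cost for a group of identical EC2 instances."""
    return instance_count * hourly_rate * HOURS_PER_MONTH

# 1 m3.medium master at $0.067/hour.
master_cost = monthly_cost(1, 0.067)
print(f"master ec2: ${master_cost:.2f}")  # $48.24, matching the bill above
```

The same helper covers each line of the bill, so the aggregate is just a sum of `monthly_cost` calls with the relevant instance counts and rates.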