My last two blog posts enumerated this blog’s SLO and error budget. Our next logical step is adding the monitoring and alerting infrastructure that will take our SLO from theoretical to practical. Like creating a Kubernetes of One’s Own, this project involves multiple steps, which we’ll explore over a series of blog posts. While the series focuses on this blog’s specific SLO, the techniques are applicable to many scenarios.
In my last blog post, I publicized an SLO for this blog. I also mentioned that, in the future, I’d couple the SLO with an error budget and error budget policy. Well, the future is today, because this post will define error budgets and error budget policies and their benefits, before proposing a specific error budget and error budget policy to accompany our previously defined SLO.
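As a preview of the arithmetic involved, here is a toy calculation of my own (the numbers are placeholders, not the budget proposed in the post), assuming a hypothetical 99.9% availability SLO over a 30-day window:

```python
# An error budget is the allowed unreliability implied by an SLO:
# budget = 1 - SLO target.
slo_target = 0.999             # hypothetical 99.9% availability SLO
window_minutes = 30 * 24 * 60  # a 30-day rolling window, in minutes

error_budget = 1 - slo_target
allowed_downtime = error_budget * window_minutes

print(f"Error budget: {error_budget:.1%}")
print(f"Allowed downtime: {allowed_downtime:.1f} minutes per 30 days")  # 43.2
```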
What are Error Budgets and Error Budget Policies?
Background

I recently started reading The Site Reliability Workbook, which is the companion book to the excellent Site Reliability Engineering: How Google Runs Production Systems.
These books devote considerable attention to Service Level Objectives (SLOs), which are a way of defining a given level of service that users can expect. More technically, an SLO is a collection of Service Level Indicators (SLIs), metrics that measure whether our service is providing value, together with their acceptable ranges.
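To make the definitions concrete, here is a minimal sketch of my own (not from the books): an availability SLI, measured as the fraction of successful requests, checked against a hypothetical 99.9% SLO target:

```python
# A toy availability SLI: the fraction of requests served successfully.
successful_requests = 99_950
total_requests = 100_000

sli = successful_requests / total_requests  # 0.9995

# The SLO pairs the SLI with an acceptable range, here
# "at least 99.9% of requests succeed".
slo_target = 0.999

print(f"SLI: {sli:.4%}, SLO met: {sli >= slo_target}")
```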
In my last three blog posts, we focused on creating a Kubernetes cluster you can use for your own personal computing needs. But what good is a Kubernetes cluster if we’re not using it to run applications? Spoiler alert: not much.
Let’s make your Kubernetes cluster worth the cash you’re paying and get some applications running on it. In this post, we’ll walk through deploying your first application to Kubernetes: a static blog.
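The post walks through the actual steps; as a rough sketch of what the end state amounts to, here is a static blog expressed as a Deployment using the official Kubernetes Python client (my illustration, not the post’s method; the image, labels, and namespace are placeholders):

```python
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

# Two replicas of a container serving the static site.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="blog"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "blog"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "blog"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="blog",
                        image="nginx:1.15",  # placeholder image for the static site
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```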
In my last blog post, we outlined the different methods of creating and maintaining a Kubernetes cluster, before deciding on Kops. In this blog post, we’ll actually create the cluster using Kops. I’ll provide source code and instructions, so by the end of this post, you can have your own Kubernetes cluster!
This tutorial is strongly based on the Kops AWS tutorial, although it’s even simpler because I’ve written some generic Terraform configurations that handle the initial AWS setup.
In my last blog post, I hope I convinced you why you should be creating your own Kubernetes cluster for personal usage. Now, we can tackle the fun part of creating the cluster.
What is a Kubernetes cluster?
We can start by answering the most important question: what is a Kubernetes cluster? A Kubernetes cluster is a collection of physical resources on which we run the Kubernetes container management software.
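You can see that collection of resources directly: every machine in the cluster registers itself as a node. Here is a quick sketch of my own (assuming a configured kubeconfig) using the official Kubernetes Python client:

```python
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config

# Each node is one machine (VM or bare metal) running the kubelet.
for node in client.CoreV1Api().list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```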
From my last blog post, you know Kubernetes manages the blog you are reading right now. But I violated pretty much all the rules of good blogging by only briefly discussing why I wanted to start running my own production Kubernetes cluster in the first place, and how exactly I made that happen. I think I was just excited it was working and I wanted to share…
Now, I’m writing to remedy my excited self’s mistakes.
I’ve avidly followed the Kubernetes project since it was the basis of my undergraduate thesis in 2015. But despite all my reading and minikube experimentation, I felt I was missing out on the important lessons you can only learn from using a technology to run real applications in production. I felt this pain acutely when contributing code to the Kubernetes ecosystem: I could fix bugs, but I lacked knowledge of, and empathy for, the production user’s experience.
After a Saturday of disk partitioning, a trip to CVS for an emergency thumb drive, and much Googling, I’m writing this blog post from a MacBook Air running Ubuntu 17.10.
I won’t write an entire tutorial on the installation, as some great ones already exist (in particular, I benefited greatly from this tutorial from Caltech and this tutorial from Christopher Berner). Instead, I’ll share why I installed Ubuntu 17.10 on my MacBook Air, some tips I found helpful during the installation, and the remaining shortcomings I hope to address in the coming weeks.
For the past couple of months, I’ve been working on Sheepdoge, a tool for managing your personal Unix machines with Ansible. It’s like boxen, but for Ansible.
One new Sheepdoge feature I’m particularly excited about is the use of concurrent.futures during sheepdoge install. concurrent.futures provides a high-level API for executing code asynchronously, making it trivial to add thread- or process-based concurrency. It has been part of the Python standard library since 3.2, and a backport is available for Python 2.
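To illustrate the pattern (the pup names and install step below are placeholders of mine, not Sheepdoge’s actual code), here is how concurrent.futures can fan out several I/O-bound install tasks across threads:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def install_pup(name):
    """Stand-in for one install task (e.g. fetching and running a playbook)."""
    time.sleep(1)  # simulate I/O-bound work
    return f"{name} installed"

pups = ["pup-python", "pup-vim", "pup-git"]  # hypothetical pup names

# Threads suit I/O-bound work; all pups install concurrently
# rather than one after another.
with ThreadPoolExecutor(max_workers=4) as executor:
    futures = {executor.submit(install_pup, pup): pup for pup in pups}
    for future in as_completed(futures):
        print(future.result())
```

Because installs spend most of their time waiting on the network and disk, threads are the natural choice here; for CPU-bound work, ProcessPoolExecutor offers the same API.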