It's been a long time since I wrote anything remotely useful for others. I thought I'd change that by writing about a tool I have been using a lot lately: Kubernetes, an awesome container orchestration tool.
I have been using Kubernetes at work on AWS for the past few months, and I've learnt a few things about Kubernetes the hard way. One of the most common complaints I keep reading about Kubernetes is that the documentation is bad. I only half agree with that: the documentation isn't bad, it's just not greatly organized. There is a lot of information available in their GitHub account, just waiting to be found and understood. This is a small attempt to connect a few of those dots. This is by no means a guide, tutorial, or walkthrough. It is purely about my journey in getting Kubernetes to work the way I wanted.
Before you continue and waste more of your time reading this, you should certainly read this if you don't know what Kubernetes is. Also, this post doesn't cover any code or configurations, just hints and links to guides/tools that helped me.
My expectations of the k8s (Kubernetes) deployment/cluster I wanted to build were simple:
- Highly available components
- Stable enough to use in production
- Self-healing when something goes wrong
- Easy to deploy the cluster, repeatedly
Before going into details, these are the types of nodes you would have in a k8s cluster:
- A Master is a server that manages the cluster. There can be more than one of these machines.
- A Minion is a server that runs your containers. There will always be one or more of these machines.
These are the roles servers play in a Kubernetes cluster. A node can play both roles if needed. Typically, in a dev environment you can have one server playing both roles (a.k.a. a 1-node k8s cluster), while in a production environment you would want multiple servers for each role.
As my target was to deploy this on AWS, the obvious place to start my journey was the AWS getting started guide by the Kubernetes team. It took less than 2 minutes to realize that the guide wasn't very helpful to me and didn't meet my requirement #1 (highly available components).
It shows how to install Kubernetes in a single AWS availability zone, with a single master. This means:
- When the master goes down, the cluster cannot be controlled
- When the AZ goes down, the entire cluster is down
And then I went to read their guide on creating a High-Availability Cluster.
I realized that a good start would be to create a
Stability is something I enjoyed with Kubernetes without putting in any effort. It just worked out of the box. Kudos to the Kubernetes team.
I wanted my cluster to heal itself if one of the minions goes down, or even when a master goes down. I achieved this easily using AWS Auto Scaling groups. My process was something like this:
- Create an ASG for masters, and put them behind an internal ELB (load balancer)
- Create an ASG for minions, and make them talk to the masters' ELB
Now, when a node goes down, the AWS Auto Scaling group will automatically replace it. But for this to work, the new node being added by the Auto Scaling group must come up with its configuration at boot time. I got this idea from a tool called kube-aws from the CoreOS team. I managed to do this pre-configuration using cloud-init.
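To make this concrete, here is a rough sketch of what the minion side of that setup can look like in Terraform. Everything in it is an illustrative assumption (the resource names, the variable names, the file path, and the kubelet environment variable are mine, not the actual config from my cluster); the point is just that the user data embeds a cloud-config which points the node at the masters' ELB, so a replacement node configures itself at boot:

```hcl
# Sketch only -- names, variables, and kubelet flags are assumptions,
# not the exact configuration used in the post.
resource "aws_launch_configuration" "minion" {
  name_prefix   = "k8s-minion-"
  image_id      = var.minion_ami   # hypothetical variable holding the node AMI
  instance_type = "t2.medium"

  # cloud-init user data: rendered once, baked into the launch configuration,
  # so every node the ASG launches comes up pre-configured.
  user_data = <<-EOF
    #cloud-config
    write_files:
      - path: /etc/kubernetes/kubelet.env
        content: |
          # Point the kubelet at the masters' internal ELB,
          # so it never depends on any single master instance.
          KUBELET_API_SERVERS=https://${aws_elb.k8s_masters.dns_name}
    runcmd:
      - systemctl enable --now kubelet
  EOF

  lifecycle {
    create_before_destroy = true
  }
}
```

The key design point is that nothing in the user data refers to a specific master: nodes only ever know the ELB's DNS name, which survives any individual master being replaced.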
Easy to deploy, repeatedly
There is a tool I really like for managing my infrastructure components. It's a very nice tool that lets you deal with your infrastructure as if it were code: Terraform.
Now that I already knew what I needed to create, using Terraform I simply created:
- Different Launch Configurations (LCs) for the master ASG and the minion ASG
- An internal ELB for the masters
- A cloud-init config for the masters
- A cloud-init config for the minions, pointing at the internal master ELB
- An LC for masters using the masters' cloud-init, and an LC for minions using theirs
- ASGs using the appropriate LCs
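The master side of that list can be sketched roughly like this in Terraform. Again, every name here (the resources, the subnet variable, the instance counts, the listener port) is an assumption made for illustration, not the exact configuration behind the post:

```hcl
# Sketch only -- resource names, subnets, and counts are assumptions.
# Internal ELB fronting the masters, reachable only inside the VPC.
resource "aws_elb" "k8s_masters" {
  name     = "k8s-masters"
  internal = true
  subnets  = var.private_subnet_ids  # hypothetical: one private subnet per AZ

  listener {
    instance_port     = 443
    instance_protocol = "tcp"
    lb_port           = 443
    lb_protocol       = "tcp"
  }
}

# Master ASG spanning the three AZs via the subnet list; the ASG replaces
# a failed master, and the ELB picks the replacement up automatically.
resource "aws_autoscaling_group" "masters" {
  name                 = "k8s-masters"
  launch_configuration = aws_launch_configuration.master.name
  vpc_zone_identifier  = var.private_subnet_ids
  min_size             = 3
  max_size             = 3
  load_balancers       = [aws_elb.k8s_masters.name]
}
```

The minion ASG looks the same minus the `load_balancers` attachment; its cloud-init instead points at the ELB's DNS name.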
The end result was a k8s cluster spanning three AZs, with multiple masters and multiple minions.
In the end, I managed to meet my requirements fairly easily, thanks to the vast Kubernetes community and the articles/tools they wrote. Kubernetes can look daunting when you first see the documentation, but it's easy to set up and lovely to use once you understand its components.