Custom Kubernetes bare metal deployment

Published:  14/01/2019 12:13

Introduction

Kubernetes (often abbreviated k8s) has only been around for a few years. Considering how slowly hosting and operations technologies are usually adopted, the general enthusiasm around Kubernetes and the speed at which most, if not all, of the cloud giants have embraced it speak for themselves.

Anyone working in operations will have heard of it by now, and while container infrastructures are not the best choice for everyone, they have certainly proven themselves over the years.

The k8s documentation can be confusing at first, especially if you want to deploy your cluster on your own terms, outside of Google Cloud or one of the other cloud providers that offer automated ways to deploy k8s clusters.

Net7 values controlling its processes and having clearly defined safeguard and backup procedures. More importantly, we value human-to-human interaction and understanding the needs of our partners in order to deliver a tailor-made solution.

In this article we attempt to summarize how we would proceed to deploy a k8s cluster on our virtualization infrastructure. In an upcoming article, we'll describe an easy way to cleanly expose HTTP services to the outside world with automatic SSL certificate generation, as well as a quick and effective option for persistent storage.

Even though k8s is considered mature and is used through an API that retains compatibility with previous versions, the project is moving fast and some of the information below may become obsolete at some point.

Software versions used for the article:

  • Debian: 9 amd64
  • Docker: 18.06
  • Kubernetes: 1.13.1

Do not hesitate to contact us at support [at] net7.be if you're interested in getting a quote for your own k8s cluster hosted on the Net7 infrastructure.

Brief recap about containers

The Docker project has been around for about 5 years now.

The present article assumes you're already familiar with containers, and especially Docker containers; Docker is the default container engine used by Kubernetes. However, while we're at it, we might as well provide a quick explanation of what containers are. Skip the current section if you're already familiar with the technology.

Containers build on the ability of modern Linux kernels (namespaces and control groups) to isolate collections of processes.

They basically allow running a fully jailed Linux operating system on the host kernel, but with its own isolated filesystem AND processes.

This is often confused with virtualization, but there is no hypervisor here, just a plain regular Linux kernel doing its normal process scheduling. It's just that some processes run in isolation from others. As a consequence, all containers run on a Linux kernel.

No networking is involved at this level; container engines have to provide some kind of virtual networking themselves.

Diagram: Linux host & container processes
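
To get a feel for what the kernel itself provides, here is a minimal illustration using the unshare tool from util-linux (not a container engine, just the raw namespace mechanism):

# Start a shell in new PID and mount namespaces, with /proc remounted
sudo unshare --fork --pid --mount-proc bash
# Inside that shell, only the processes of the new namespace are visible
ps aux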

Containers, like virtual machines, can be made easy to distribute as they're just a jailed filesystem that you can move around, attached to container-engine-specific network and process bootstrapping configuration.

Development teams, and more generally DevOps people, appreciate the easy bridge you get from a container-based development environment to the actual production environment: it pretty much comes down to using the same container with a different network configuration, possibly with some scaling involved as well.

You might be wondering why all that wouldn't be possible with virtual machines. Actually it is possible, as projects like Vagrant have shown. You can share a Vagrantfile with your dev team and have the same environment built for production.

The main advantages of containers over such a solution are the lower overhead and the possibility to run many containers on a single virtual machine rented from a cloud provider, rather than having to rent multiple virtual machines.

Containers are composable: you can build your own images and push them to the Docker Hub or your own private registry, which allows you to version production-ready environments for your apps that can easily be tested and distributed.

This makes containers a prime choice if your development team uses continuous integration, as an automated process could build and test production images at every change of the codebase.
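
As a sketch of what that looks like in practice (the registry address and image name below are made up for the example):

# Build an image from the Dockerfile in the current directory and tag it
docker build -t registry.example.com/myteam/myapp:1.0.0 .
# Push it to the (hypothetical) private registry so CI or other hosts can pull it
docker push registry.example.com/myteam/myapp:1.0.0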

Containers are not without disadvantages. They tend to consume a fair amount of extra disk space, CPU and memory if you follow the general guideline of running everything inside containers.

Container images are incremental and built upon a full Linux userspace (only the kernel is borrowed from the host), and as such they tend to use a considerable amount of extra disk space, which in turn pushes DevOps people to optimize images towards smaller Linux footprints (like the Alpine distribution).

A Docker image registry, by definition, is something that will grow indefinitely. You need to keep that in mind if you want to run your own on-premises image registry for continuous integration and want to project long-term storage costs.

People also tend to forget that a container system by itself adds complexity. Containers have their own isolated filesystem which comes with a "storage driver" alongside the actual container runtime.

On the other hand, the gain in consistency between production and development environments is supposed to offset the possible issues due to the added complexity.

Docker has two important characteristics in how it runs containers:

  • Container storage is non-persistent by default and is destroyed when the container is shut down. When managing application runtimes rather than actual data backends, this can be argued to be an advantage.
  • The life of the container is bound to a specific process that runs inside it and is configured when creating the container. If that process dies, so does the container. That concept is very different from what you would expect of a virtual machine.

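Both points are easy to see with plain Docker commands (the volume name below is just an example):

# The container exists only as long as its main process; this one exits immediately
docker run --rm debian:9 echo "done"
# Files written inside a container's filesystem are gone once it is removed;
# a named volume survives across container runs
docker run --rm -v appdata:/data debian:9 sh -c 'echo hello > /data/file'
docker run --rm -v appdata:/data debian:9 cat /data/file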

Alternative ways to install Kubernetes

We're going to use kubeadm to create a cluster from scratch, but there are other ways to deploy Kubernetes clusters.

A few command line tools apply only to big cloud providers and facilitate deploying a cluster over their cloud infrastructure. The goal of this article is not to use any of the big cloud providers, so these won't be discussed.

If you're just interested in testing Kubernetes locally you have a few quick options:

  • Minikube is a single-node Kubernetes cluster running in a virtual machine. It will allow you to test pretty much anything on the latest version (see the quick example after this list).
  • Docker Desktop now includes Kubernetes, although you might have to use "docker stack" commands instead of "kubectl" (the normal Kubernetes client).
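
For instance, a local test cluster is only a couple of commands away (assuming Minikube and kubectl are already installed):

# Start a local single-node cluster in a virtual machine
minikube start
# The regular Kubernetes client then points to it
kubectl get nodes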

For cluster deployment, a special mention has to go to the Rancher project.

Rancher abstracts all the cluster setup for you and sits on top of it all, allowing you to even register multiple clusters under the same Rancher umbrella.

It's an awesome solution to get started quickly on bare metal. We'd also recommend it for a lab. However, we at Net7 like to control everything and, if possible, avoid stacking too many layers, frameworks and dependency management systems. This article will thus focus on a lower-level solution.

Setting up the nodes

A Kubernetes cluster is a group of Linux host systems.

The hosts do not need to be perfectly homogeneous; they just need a kernel with the process isolation capabilities required by Docker.

In our case all nodes will be virtual machines, but you could have physical machines in the cluster as well.

It's technically possible to mix CPU architectures, but we advise against it, and this guide relies on a cluster creation method that requires the same CPU architecture on all nodes. Note that only the architecture has to be the same; the CPUs themselves can be completely different (e.g. one node using AMD hardware while another runs an Intel CPU).

Recommended resource values for all nodes:

  • 2 vCPU
  • 4 GB RAM
  • 20 GB disk

Nodes with a single vCPU are fine, but you'll want 2 for the master node so it can comfortably handle the system containers it runs.

General node setup

We use our typical Debian 9 virtual machine deployment for all nodes.

All our nodes will use the Net7 internal network 192.168.77.0/24 to communicate with each other, they do not have public IP addresses. We use a NAT gateway to have them access the internet. Containers will gain internet access through their node (can be restricted by configuration).

The tool we're going to use to deploy the cluster is kubeadm.

Since we're running the following operations on all nodes, you will either want to use them to create a virtual machine template, or create something like an Ansible playbook to run the operations across all your nodes.

Install Docker

The easiest way to install Docker CE is to follow the official instructions.

We're basically adding a custom Debian package repository and installing the docker-ce package from there.

If you want to avoid any warning about supported Docker versions, you can search for the release notes of the current Kubernetes version and look at which version of Docker is officially supported.

Then you can look at the available versions by running:

apt-get update && apt-cache madison docker-ce

Copy the version string you need and install that specific version using:

apt-get install docker-ce=<VERSION_STRING>

Where you paste-in your version string from before.

That being said, there is a good chance that any recognized Docker version will work with no issues whatsoever.
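
For example, on Debian 9 a pinned install might look like the following (the exact version string must come from the apt-cache madison output above; the one below is only illustrative), and you can hold the package to avoid accidental upgrades:

# Install a specific Docker CE version and prevent apt from upgrading it
apt-get install docker-ce=18.06.1~ce~3-0~debian
apt-mark hold docker-ce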

Remove the swap

If your node candidate has a swap space, it needs to be removed.

Edit /etc/fstab, find the line for the swap entry and comment it out.

You can now either reboot or use the following command to immediately disable the current swap spaces:

swapoff -a
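
If you prefer a one-liner over editing the file by hand, something like this should do (a rough sketch; double-check /etc/fstab afterwards):

# Comment out any swap entry in /etc/fstab, then confirm no swap is active
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
swapon --show
free -h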

Enable IP forwarding & bridging

We're going to need the br_netfilter kernel module so you'll have to add a line with br_netfilter in /etc/modules, and either reboot or use:

modprobe br_netfilter

Now open /etc/sysctl.conf; we're going to need these two lines:

net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1

Either reboot or apply the sysctl change:

sysctl -p /etc/sysctl.conf
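
You can quickly verify that the module is loaded and the settings are in effect:

# br_netfilter should show up in the module list
lsmod | grep br_netfilter
# Both values should read 1
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables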

Install kubeadm, kubelet and kubectl

We're following the instructions from this page.

For Debian:

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

Create the master node

The master node will run a few components that are central to Kubernetes.

By default the master node, even though it's just like any other node, is marked with a taint that prevents user containers from being scheduled on it.

To create our cluster, we're going to use a kubeadm command such as:

kubeadm init --apiserver-advertise-address=<MASTER_IP_ADDRESS> --pod-network-cidr=10.244.0.0/16

Where <MASTER_IP_ADDRESS> is the IP address we're going to use for the Kubernetes API server. All the nodes have to be able to connect to that IP address on the network. To make things easier to administrate from your desktop machine later on, we're using an IP address that is accessible from the Net7 VPN.

The pod-network-cidr argument should be left unchanged as it's used by the network plugin we're going to setup later on.

If you already have a network attached to your virtual machine that is exactly 10.244.0.0/16 or is a subnetwork of that, you might run into issues and will need to modify the pod network configured in the network plugin configuration, a case which won't be explained in this article.

The pod networks are internal to nodes and although nodes are able to route from and to each other's internal networks, these won't be reachable from outside.

The kubeadm command should tell you the master node was initialized successfully and display a kubeadm join command to copy-paste on future member nodes. It should look something like:

kubeadm join <MASTER_IP_ADDRESS>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<CERTIFICATE_HASH>

The token was created to allow connecting to the Kubernetes API on the master node. It has a default validity of 24 hours, so if you want to add new nodes after that period, you can still use the same command but you will have to request a new token.

Provided you have access to the master node and a working kubectl client somewhere (we'll set that up later on), you can request new tokens like so:

kubeadm token create
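
kubeadm can also print a complete, ready-to-paste join command along with the fresh token:

# Generate a new token and print the full kubeadm join command for it
kubeadm token create --print-join-command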

Setting up the network plugin

It's already possible to add member nodes at this point, but they would not appear as ready until we have a working network plugin.

Kubernetes offers a choice between several options; some have very advanced routing capabilities, automatic dynamic DNS, etc.

The easiest network plugin we found is Flannel.

You should then be able to use kubectl from an SSH connection on the master node, provided you have run these few commands (they require privileged mode):

mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

This allows using the Kubernetes client on the master node when connected over SSH as the current user. Keep in mind that this basically makes that SSH user a global cluster administrator.

If you've used the same pod-network-cidr we did in the kubeadm command that created the master node, then you can just apply the default Flannel configuration:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

This will start the networking pods on all nodes.
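
You can watch the Flannel pods come up in the kube-system namespace and the nodes switch to the Ready state:

# The kube-flannel pods should reach the Running state on every node
kubectl get pods -n kube-system -o wide
kubectl get nodes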

Adding the member nodes

Just SSH to each node and enter the kubeadm join command we copied from the previous step.

That's it. If the network plugin has been installed successfully the nodes will come online and in ready state.

Using Kubernetes

The goal of this article is not to be a tutorial about actually using Kubernetes, so we won't go into much detail, although there will be more information in the upcoming blog articles on the subject.

If you're completely new to Kubernetes now is a good time to watch a few videos and/or read about the main k8s concepts such as pods, services and deployments, as well as basic uses of the kubectl client.

At this point we should be able to see the output of a quick kubectl query such as:

kubectl get nodes

And all your nodes should be in the Ready state.

Since the master node is on the Net7 private network we can also use one of the Net7 VPN systems (OpenVPN or PPTP) to access it from any desktop machine with VPN access.

The installation instructions for kubectl can be found on this page.

For Windows, the easiest option is probably to download the exe file directly (all the Kubernetes binaries are written in Go and as such are usually standalone executables) and put it in the PATH for your operating system.

To actually connect to the cluster API server, we need to provide a client configuration to kubectl.

If the master node IP address you gave to the initial kubeadm command is reachable by VPN, you can just copy the configuration file that is on the master node and located at /etc/kubernetes/admin.conf.

You need to create a ".kube" folder in your user directory and put the content of the configuration file in a new file named "config".
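
On Linux or macOS, a minimal way to do that over SSH could look like this (assuming root SSH access to the master node; adjust the user and address to your setup):

# Copy the admin configuration from the master node to the local kubectl config
mkdir -p ~/.kube
scp root@<MASTER_IP_ADDRESS>:/etc/kubernetes/admin.conf ~/.kube/config
# Quick sanity check
kubectl get nodes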

On Windows you could do this from the command line (make sure you're already on the physical drive where your user profile resides):

cd %USERPROFILE%
mkdir ".kube"
cd .kube
notepad config

Make sure to name the file "config" and not "config.txt".

Keep in mind that the config file identifies you as an administrator, so it should be secured properly. There are more complex authentication schemes you might want to look into if you have a somewhat large organization. To keep security tighter, you might want to only use kubectl over SSH on the master node and keep your copies of the bearer token encrypted.

In any case, if the master node IP address you chose when creating the cluster is not reachable by VPN, you will not be able to use kubectl outside of the master node network unless you recreate the API certificates; and since all nodes check these certificates themselves, this is not an operation you can run on a live cluster, as far as I know.

You could still use the Kubernetes Web UI and reach it through SSH: run kubectl proxy on the master node, then create an SSH tunnel from your desktop machine to the master node's proxy port, as sketched below.
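
A rough sketch of that setup (the SSH user is a placeholder, and the exact dashboard URL depends on the dashboard version you deploy):

# On the master node: expose the API server on a local port
kubectl proxy --port=8001
# From your desktop: forward that port over SSH
ssh -N -L 8001:127.0.0.1:8001 <USER>@<MASTER_IP_ADDRESS>
# Then browse to http://localhost:8001/ and your dashboard's proxy URL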

You could also go as far as to create your own VPN server to expose the master IP. I would suggest doing such a thing if you run your own bare metal k8s cluster outside of Net7 or any cloud provider. The VPN server could even run on the master node.

Conclusion and what's next

You now have a good baseline Kubernetes cluster.

There's a lot more we can do now: customized DNS, finer-grained access control and general access security, monitoring and statistics, complex multi-site networking and managing multiple clusters, complex resource planning and scaling, running your own private Docker image registry, and many more refinements.

We also briefly touched on the Web UI but have not deployed it in our cluster yet.

The next article about Kubernetes on this blog will be about exposing services in HTTP/HTTPS with automatic SSL certificate installation, alongside a quick introduction to the Kubernetes application deployment concepts themselves.

The last scheduled article will be about some practical solutions for persistent storage, which ties in to what we propose here at Net7 for your Kubernetes storage needs.
