Install Kubernetes On-Prem With Ubuntu

By | December 6, 2019

Containerization has become a major topic in the application and IT world as of late.  We could write an entire post on the merits, pros, and cons of containerization and application microservices; however, I will take the leap of faith that you're here to set up an environment to test and run yourself, and to form your own opinion on the subject.

But Bob, there are already several articles out there on this process!

Yes, yes there are.  Some of them are pretty well written, but in a vacuum and not meant for a production environment (or on-prem).  Others are out of date and contain bad information.  The aim of this guide is to end with a working Kubernetes cluster that can be used to fully deploy applications, along with the edits, changes, and comments I have made to the environment over the last few years as I have migrated more of T3stN3t's services to containers hosted through K8s.  Don't worry, I'm sure in 12 months this guide too will be out of date.

For this guide we will be creating a three-node cluster, one Kubernetes master and two worker nodes, built on Ubuntu 18.04, as this is what we use for all service hosting (sans the stuff we can't force off Windows.  I'm staring at you, SE).


Installing Docker

Docker is the heart of the containerization platform.  It handles application isolation and virtualization, making microservices a possibility.  Comparing this to yesteryear's virtualization, it would be the ESXi host.

Ubuntu 18.04 ships with Docker installed as a snap package.  This works on its own, and if you are dipping your toe into building Dockerfiles and Docker images it's a great place to start.  When we want to scale this into Kubernetes, or expand to image repos that are not part of the same space as the host OS, we will want a controlled Docker install.

Remove Ubuntu’s Docker

sudo snap remove docker

Since this cluster is destined to be used in some fashion beyond launching a single nginx instance, we should assume that the end consumer will make… decisions of the not-best sort.  By default Docker puts all of its images into /var/lib/docker.  If you are hosting this off VMs and have 1TB VMDKs you probably don't care (though we should talk about your deployment habits).  If this is being placed on bare metal with 20GB SD cards, capacity is a concern.

First real-world lesson I discovered:  You can quickly run out of space in your environment with poorly built Docker images.

To resolve this, you can install Docker and then edit its configs to redirect storage to other mount points.  Docker no longer supports having this hosted on NFS, which leaves us with a block solution.  I use iSCSI for our environment, but FC or another VMDK/VHD would also work.
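For reference, the redirect itself is a single key in /etc/docker/daemon.json.  A minimal sketch, assuming a hypothetical mount at /mnt/docker (the steps below instead mount the disk directly at the default /var/lib/docker, so no config edit is needed there):

```json
{
  "data-root": "/mnt/docker"
}
```

Restart the Docker service after editing for the new path to take effect.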

Regardless of the path you choose for where the capacity resides, the steps will be:

Note: If your OS partition is large you could skip this step.  I would still recommend splitting out docker from your OS.

Confirm path doesn’t already exist

ll /var/lib/docker
ls: cannot access '/var/lib/docker': No such file or directory

Create directory

sudo mkdir /var/lib/docker

Mount a new disk.  A quick script I use to find new disks:

sudo /bobk/build/
	Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
	Units: sectors of 1 * 512 = 512 bytes
	Sector size (logical/physical): 512 bytes / 512 bytes
	I/O size (minimum/optimal): 512 bytes / 512 bytes
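That path is a personal helper script, so here is a stand-in (an assumption, not the author's actual script): filter `fdisk -l`-style output down to the per-disk summary lines.  A captured sample is piped in so the snippet runs anywhere; in real use, pipe in `sudo fdisk -l` instead.

```shell
# Filter fdisk-style output down to the "Disk /dev/..." summary lines.
sample='Disk /dev/sda: 40 GiB, 42949672960 bytes, 83886080 sectors
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors'
printf '%s\n' "$sample" | awk '/^Disk \/dev\//'
```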

Create a partition and filesystem with your utility of choice. 

sudo mkfs.ext4 /dev/sdb1

Add it to your /etc/fstab

/dev/sdb1       /var/lib/docker ext4 defaults 0 0
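A slightly more robust variant of that entry uses the filesystem UUID (printed by `sudo blkid /dev/sdb1`) so the mount survives device renumbering; the UUID below is a placeholder:

```
UUID=0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9  /var/lib/docker  ext4  defaults  0  0
```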

Kubernetes does not like to have swap enabled on any of its nodes.  In fact, it will refuse to start and bury a generic message.  This is part of trying to get 100% utilization out of a worker node given the services it needs to host, and it expects to make dynamic adjustments as apps come and go.  Since we are already in the fstab, now is a good time to comment out swap.

#/swap.img      none    swap    sw      0       0

Disable swap

sudo swapoff -a
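If you would rather script the fstab edit than open an editor, the comment-out is one sed substitution.  A sketch against a scratch copy, so nothing real is touched until you are happy with it:

```shell
# Demo: comment out the swap entry in a scratch copy of fstab.
printf '/swap.img none swap sw 0 0\n' > /tmp/fstab.demo
sed -i 's|^/swap|#/swap|' /tmp/fstab.demo
cat /tmp/fstab.demo   # -> #/swap.img none swap sw 0 0
```

Point the sed at /etc/fstab (after taking a backup) to apply it for real.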

Reboot the node so the snap removal, swap off, and new mount take effect.

sudo reboot

Once the node is back up, confirm Docker is gone

docker version

	Command 'docker' not found, but can be installed with:

	sudo snap install docker     # version 18.09.9, or
	sudo apt install docker.io

	See 'snap info docker' for additional versions.

Install Docker

sudo apt install docker.io

Check it installed correctly (Note: version will change based on when you are doing this)

sudo docker version
	Client:
	 Version:           18.09.7
	 API version:       1.39
	 Go version:        go1.10.1
	 Git commit:        2d0083d
	 Built:             Fri Aug 16 14:20:06 2019
	 OS/Arch:           linux/amd64
	 Experimental:      false

	Server:
	 Engine:
	  Version:          18.09.7
	  API version:      1.39 (minimum version 1.12)
	  Go version:       go1.10.1
	  Git commit:       2d0083d
	  Built:            Wed Aug 14 19:41:23 2019
	  OS/Arch:          linux/amd64
	  Experimental:     false

Enable the service to ensure it runs at boot

sudo systemctl enable docker

Congrats we now have docker running… again!

Run through this process on all worker nodes.  Don’t worry, I’ll wait here and sip on my coffee.

Installing Kubernetes

Now that Docker is installed, it's time to move on to Kubernetes itself.  If Docker is the heart of the containerization cluster, Kubernetes is the brain or soul.  It's what elevates the technology from "spiffy and shiny" into enterprise-production viable.  In our comparison to the last generation of virtualization, if Docker is ESXi then Kubernetes is vCenter.

Install the Kubernetes packages (kubeadm, kubectl, kubelet)

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get update
sudo apt-get install kubeadm kubelet kubectl

One of the reasons I fell in love with Ubuntu is the simplification of package management.  That simplification occasionally works against me, though, when the system decides to update a package without my consent and in doing so blows up the interoperability between Kubernetes services.  This leads us to…

Second real-world lesson:  Don’t allow automatic updates on your container environment to avoid breaking service interactions OR your containers.

sudo apt-mark hold kubeadm kubectl kubelet

Security should be somewhere in your thought process when building any environment.  Even if you are running a firewall at the front of your environment, you should still use a host firewall to prevent inter-network sniffing.  I use ufw, which ships with Ubuntu and manages the underlying iptables rules.  You will need to open some ports so Kubernetes can communicate: etcd on 2379-2380, the API server on 6443, and the kubelet services on 10250-10255.  Failure to do so will leave you with a generic, unhelpful error.

sudo ufw allow 2379:2380,6443,10250:10255/tcp

Check all the elements installed correctly

kubeadm version
	kubeadm version: &version.Info{Major:"1", Minor:"16", 
GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:20:25Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
kubectl version
	Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
	The connection to the server localhost:8080 was refused - did you specify the right host or port?

Hey, what's that error on localhost all about?!

You can safely ignore this for now.  The cluster doesn't exist yet, so kubectl can't talk to a non-existent cluster on the node.

Run through this process on all worker nodes.  Don’t worry, I’ll sit here again and now sip on another cup of coffee.

Deploy Kubernetes

Now that all of the parts are installed, let's deploy the cluster.  The first step is to initialize it.  Kubernetes creates its own virtualized network and has several options.  The first decision is which segment you will use for this traffic.  If you're in a large network, ask your network admins which segments aren't routed (maybe 2% of environments?).  If this is small and self-contained (the other 98%), you can use what I use: flannel's default, 10.244.0.0/16.  If you want to learn more about pod networks and the black magic occurring here, read up on cluster networking in the Kubernetes documentation:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16



To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

	You should now deploy a pod network to the cluster.
	Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	https://kubernetes.io/docs/concepts/cluster-administration/addons/

	Then you can join any number of worker nodes by running the following on each as root:

	kubeadm join --token g8oq2c.x972sqtucl5ulc3g \
		--discovery-token-ca-cert-hash sha256:7e6f2cdcfe66f334b907277c4e61af4e51365934108c1687bfdbbd2cbccfc2cc

Kubernetes does a good job of making the cluster easy to expand after the initial init.  We can break this wall of text down into three tasks that need to happen.

Task 1. Add kube config to master node.

Remember that error about not being able to talk to yourself?  Let's fix that with the first section that kubeadm init called out.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check that kubectl can see itself

kubectl get nodes
	NAME            STATUS     ROLES    AGE     VERSION
	ubu18k8sdemo0   NotReady   master   2m51s   v1.16.3


Task 2.  Deploy a pod network

You went and read the earlier section on pod networking, made a sound decision on the network segments to use, and didn't blindly follow something on the internet.  You can now make a well-informed decision about which pod network service to use for your infrastructure.


You can use flannel like most environments out there and just trust that the black magic works.  Note that kubectl should run as your regular user (the one that owns ~/.kube/config), so no sudo here.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

You should now be able to see the kubernetes network services running in your environment.

kubectl get pods --all-namespaces
	NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
	kube-system   coredns-5644d7b6d9-6vvcj                1/1     Running   0          5m27s
	kube-system   coredns-5644d7b6d9-s5qvk                1/1     Running   0          5m27s
	kube-system   etcd-ubu18k8sdemo0                      1/1     Running   0          4m35s
	kube-system   kube-apiserver-ubu18k8sdemo0            1/1     Running   0          4m40s
	kube-system   kube-controller-manager-ubu18k8sdemo0   1/1     Running   0          4m45s
	kube-system   kube-flannel-ds-amd64-n5c2z             1/1     Running   0          94s
	kube-system   kube-proxy-df6lp                        1/1     Running   0          5m27s
	kube-system   kube-scheduler-ubu18k8sdemo0            1/1     Running   0          4m35s

Confirm we can see the master node AND that it's responding

kubectl get nodes
	NAME            STATUS   ROLES    AGE     VERSION
	ubu18k8sdemo0   Ready    master   6m27s   v1.16.3

Task 3.  Add worker nodes

The Kubernetes master node on its own does not service container hosting requests directly (ignore that Kubernetes itself runs in containers on this host).  To deploy consumer applications, we will need to add worker nodes.

Note:  This section will change based on your environment.  The token and IP to use depend on your deployment and can be found in the output of the earlier kubeadm init.

Run kubeadm join on each worker node

sudo kubeadm join --token g8oq2c.x972sqtucl5ulc3g --discovery-token-ca-cert-hash sha256:7e6f2cdcfe66f334b907277c4e61af4e51365934108c1687bfdbbd2cbccfc2cc

Confirm we have a functioning cluster

kubectl get nodes
	NAME            STATUS   ROLES    AGE     VERSION
	ubu18k8sdemo0   Ready    master   15m     v1.16.3
	ubu18k8sdemo1   Ready    <none>   5m58s   v1.16.3
	ubu18k8sdemo2   Ready    <none>   104s    v1.16.3

Now is a good time to read up on Kubernetes command syntax, as you will be using kubectl for most tasks from this point forward.

Congrats, you have a functioning Kubernetes cluster!

You could walk away now and have something you could deploy to.  When I built my first cluster, I was ecstatic when I made it this far.  This is, however, not truly a fully functioning cluster.  Yes, you could deploy pods and services to it; you just won't have an easy time reaching any of them.

Connecting to Services

Segue time!  One of the biggest changes in thinking with containerized services is how you connect to them.  In a traditional environment you can throw out an application and give it a range of ports to send and receive on.  The only concern with that methodology is firewalls and which ports need to be opened for inbound requests.

Containers do not expose any open ports by default.  This means when we launch a container it has no access until we create a mapping.  In Docker we do this as part of docker run.


docker run -it -p 42433-42434:42433-42434/udp -p 42435:42435/tcp -v /app/docker/sandstorm:/sandstorm \
-e INSTANCE_NAME="T3stN3t Pew Bang Blorb" \
--name sandstorm \

-p 42433-42434:42433-42434/udp -p 42435:42435/tcp means we are taking container ports 42433-42434 and binding them to the node's ports 42433-42434, which will forward requests to the container.  This works on a single Docker instance and is manageable.  What happens when we scale to a second node?  Now we have two challenges: which node is the service residing on, and how do we ensure traffic is routed there?  Kubernetes supports a NodePort in the service definition, but that only solves one issue; you're still left with how to route traffic to services as they move around the cluster.  For the longest time this was handled by building your own dynamic system that would monitor services and then update iptables and route tables to adjust for services floating around.  Workable, but not exactly scalable or enterprise.
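For contrast, the NodePort flavor of that Docker mapping in Kubernetes might look like the sketch below (the service name, labels, and node port are assumptions based on the sandstorm example above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sandstorm
spec:
  type: NodePort
  selector:
    app: sandstorm        # assumed label on the sandstorm pods
  ports:
  - name: game-tcp
    protocol: TCP
    port: 42435           # service port inside the cluster
    targetPort: 42435     # container port
    nodePort: 30435       # exposed on every node; default range is 30000-32767
```

Every node then answers on 30435, which is exactly the routing problem described above: you still have to get the traffic to some node.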

Kubernetes solves this challenge with load balancers; however, Kubernetes started its life in the cloud, and load balancers were designed with that in mind.  Great when your services will never need to be on-prem, less so when you will never be cloud bound.

There’s finally a solution to this: MetalLB.  It takes the task of creating dynamic route tables in your cluster and automates the process.

TL;DR – if you want your Kubernetes cluster to be usable past a single node on-prem, you will need a load balancer; we will be installing MetalLB.

Install MetalLB

Installing MetalLB is straightforward: apply its installation manifest (linked from the MetalLB docs).

kubectl apply -f

Confirm it installed correctly

kubectl get pods --all-namespaces
	NAMESPACE        NAME                                    READY   STATUS    RESTARTS   AGE
	kube-system      coredns-5644d7b6d9-6vvcj                1/1     Running   0          19m
	kube-system      coredns-5644d7b6d9-s5qvk                1/1     Running   0          19m
	kube-system      etcd-ubu18k8sdemo0                      1/1     Running   0          18m
	kube-system      kube-apiserver-ubu18k8sdemo0            1/1     Running   0          18m
	kube-system      kube-controller-manager-ubu18k8sdemo0   1/1     Running   0          18m
	kube-system      kube-flannel-ds-amd64-c4dmm             1/1     Running   0          5m58s
	kube-system      kube-flannel-ds-amd64-n5c2z             1/1     Running   0          15m
	kube-system      kube-flannel-ds-amd64-plfz6             1/1     Running   1          10m
	kube-system      kube-proxy-6pfs5                        1/1     Running   0          5m58s
	kube-system      kube-proxy-df6lp                        1/1     Running   0          19m
	kube-system      kube-proxy-znprk                        1/1     Running   0          10m
	kube-system      kube-scheduler-ubu18k8sdemo0            1/1     Running   0          18m
	metallb-system   controller-65895b47d4-2nhjb             1/1     Running   0          117s
	metallb-system   speaker-dbh96                           1/1     Running   0          117s
	metallb-system   speaker-j6qlv                           1/1     Running   0          117s
	metallb-system   speaker-vxbk7                           1/1     Running   0          117s

Now that the service is in place, we need to give it some intelligence for routing.  It can handle routing in two fashions: Layer 2, or as a BGP router.  For most small environments, or when you're not having in-depth conversations with a network admin, a Layer 2 config will work.

You can find a sample configmap here

You will want to edit the last section to work in your environment.  Some notes: the IPs need to be available, and they need to be routable to the worker nodes.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: ip-240-242
      protocol: layer2
      addresses:
      - 192.168.0.240-192.168.0.242   # example range; use addresses routable in your network

Apply the config.  Note that you will want to use your config and not mine, unless your network looks like ours.

kubectl apply -f

SUCCESS! We can now route traffic from the outside to services inside of the cluster without figuring out node mappings!

Automation and Maintenance

You can now deploy services, route traffic, and not worry (as much) about running out of capacity from poorly thought out Docker builds.  Technically you could stop here and enjoy your new and improved virtualization platform, but we're better than average and want to head off the headache of version control (stop using :latest).  Enter Helm:

Helm is a package manager that helps alleviate some of the issues around more complex deploys, or deploys in which versions are rapidly changing.  All of this can be handled by hand with good notes on your deployments, OR we can install Helm and have Helm charts handle it.

Third real-world lesson:  Never use :latest in your deployments.  If you're going to ignore this, then at least use Helm charts to keep package updates in sync and in clean order.
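Pinning versions is just naming an explicit tag in the pod spec; a minimal illustration (the image and tag are placeholders):

```yaml
# Fragment of a Deployment pod spec: pin an explicit version, never :latest.
containers:
- name: web
  image: nginx:1.17.6   # explicit tag; a rebuild of :latest can't surprise you
```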

Deploy Helm

It's a straightforward process, and you should be seeing the value of containerized deployments by now.

The following should be done on the Kubernetes master node. 

Download the deployment script

curl -L https://raw.githubusercontent.com/helm/helm/master/scripts/get | sudo bash

Create an account for Helm's Tiller to use, and give it rights to deploy (without the role binding, Tiller can authenticate but not create resources).

kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

Initialize helm

helm init --service-account tiller --wait

Verify setup

helm version
	Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
	Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}

Test Deploy

Time to wrap the entire environment up in a bow and take it for a test drive.  We're in IT; it was bound to happen.  We have to Hello World.

I have a sample deployment set up that you can pull down to edit.  Note you will want to change the IP in the loadBalancer section, unless your network looks like ours.
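Since the sample file lives off-page, here is a minimal sketch of its shape, matching the names in the output below; the container image and the loadBalancerIP are assumptions you should replace:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: nginxdemos/hello:0.2   # placeholder image; use your own
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.0.240   # must sit inside your MetalLB address pool
  selector:
    app: helloworld
  ports:
  - port: 80
    targetPort: 80
```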

kubectl apply -f

Check that it deployed pods and services

kubectl get services
	NAME         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
	helloworld   LoadBalancer   80:32327/TCP   5m2s
	kubernetes   ClusterIP        <none>          443/TCP        69m

	kubectl get pods -o wide
	NAME                                     READY   STATUS    RESTARTS   AGE     IP           NODE            NOMINATED NODE   READINESS GATES
	helloworld-deployment-54c7b8796c-4mbxv   1/1     Running   0          3m28s   ubu18k8sdemo1   <none>           <none>

This deploys a small web server that collects information about the cluster at start time.  If your networking is correct, you should be able to open a web browser of your choice and enter the load-balanced IP (or, if you're very forward thinking, the address you loaded into your DNS) to see the Hello World page.


You now have a fully functional Kubernetes environment.  Go forth and containerize like a pro, or at least enjoy an environment that is easier to use and manage.  A few final words of advice.

When setting up the environment, always use a fresh OS install.  It can be tempting to set up a test or proof of concept on a system that's already in place, BUT it's a terrible idea.  You will inevitably run into conflicts with other services to sort out.

If you run into an issue, do not invest too much time in troubleshooting.  Some issues throw very generic errors, and chasing the cause can be time consuming.  Instead, dispose of the environment and start again.  That's right: throw away the guest and start the process over.  You will save yourself several headaches and aggravations in the long run, and let's face it, you're probably doing this on a virtual guest.

Life Made Easier

Bob!  There are so many words here and I don't want to read them all.  Do you have something that's easier than a TL;DR?

Yes, yes I do.  It comes with a warranty of “No warranty of any sort provided.”

This pulls down two scripts to help automate Kubernetes deployments.  Edit the network segments as required by your environment.
