I installed Tanzu Kubernetes Grid (TKG) this weekend!

Good morning/afternoon/evening! These days have been hectic; for good or for bad, during this pandemic there’s a lot of work. Days full of Zoom calls are good for keeping that contact with our customers, but bad for blogging. 🙂

Nevertheless, I couldn’t wait to share my experience installing VMware Tanzu Kubernetes Grid in my home lab; it has been GA since last Thursday, 4/9/2020. I have to admit that my first tries were manual installs, without Cluster API / Cluster API Provider vSphere (CAPI/CAPV), and I was unsuccessful; with the TKG CLI, though, it’s a breeze.

I’m using my vSphere environment running vSphere 6.7U3. The first step is to download the Tanzu Kubernetes Grid CLI along with the Kubernetes and HAProxy OVAs; the Kubernetes OVA will be used for both the control plane and worker nodes, and the HAProxy OVA will be used to create a load balancer.

You need to import both OVAs into the vSphere environment and convert them to templates.
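
You can do this through the vSphere Client, or, if you prefer the command line, govc works too. A minimal sketch, assuming the govc connection variables are already set and using illustrative file and template names (use the OVAs you actually downloaded):

# Assumes GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD are exported
# Import each OVA, then turn the resulting VM into a template
govc import.ova -name photon-3-kube ./photon-3-kube.ova
govc vm.markastemplate photon-3-kube
govc import.ova -name photon-3-haproxy ./photon-3-haproxy.ova
govc vm.markastemplate photon-3-haproxy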

The first step is to create the management cluster, which will be CAPI/CAPV enabled. For that, we will use the TKG CLI tool, which is supported on Linux and macOS. I don’t have any Mac, so I downloaded the Linux version to my Ubuntu Desktop VM, gunzipped it, and moved it to /usr/bin (see the sketch below). After that, you can run the initialization process with the --ui flag, which serves the installation UI on port 8080.
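
For reference, the install steps looked roughly like this (the archive name is illustrative and depends on the version you download):

gunzip tkg-linux-amd64-v1.0.0_vmware.1.gz
chmod +x tkg-linux-amd64-v1.0.0_vmware.1
sudo mv tkg-linux-amd64-v1.0.0_vmware.1 /usr/bin/tkg
tkg version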

jfcoronado@desktop:~$ sudo tkg init --ui
Logs of the command execution can also be found at: /tmp/tkg-20200415T101007207375470.log

Validating the pre-requisites...
Serving kickstart UI at http://127.0.0.1:8080
unable to open browser: exit status 3

It was unable to open the browser automatically, but by opening http://127.0.0.1:8080 in a browser yourself you get the installer.

TKG CLI can deploy the management cluster to both vSphere and AWS. We select vSphere and get a wizard to fill in the details of our environment. The first step is to enter the vCenter Server, username, and password, and connect. Then you need to select the Datacenter and paste in an SSH public key.

This is how I generated the public key:

jfcoronado@desktop:~# ssh-keygen -t rsa -b 4096 -C "jfcoronado@dell.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/home/jfcoronado/.ssh/id_rsa): 
Created directory '/home/jfcoronado/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:LD6WpsV+DRiRBWdG45qjmNIH5SXAYWrd7O1IPHAuVtI jfcoronado@dell.com
The key's randomart image is:
+---[RSA 4096]----+
| .+.  .=B        |
| oo.+ o= .       |
|.. +oE...        |
|.  oOoo+         |
|  .o.*=+S        |
| ..+o+=+.        |
|. + o.O. o       |
| . . * .. .      |
|    . ..         |
+----[SHA256]-----+
jfcoronado@desktop:~# ssh-add ~/.ssh/id_rsa
Enter passphrase for .ssh/id_rsa: 
Identity added: .ssh/id_rsa (.ssh/id_rsa)

Open the file ~/.ssh/id_rsa.pub in a text editor so that you can easily copy and paste it into the wizard.
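
Or simply cat it and copy the output straight from the terminal:

cat ~/.ssh/id_rsa.pub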

Then you need to select the type of control plane configuration. TKG supports two kinds of deployment: Development, which deploys a single control plane (master) node, and Production, which deploys three (3) control plane nodes for availability. There are also multiple instance sizes (types). As I’m running everything on my laptop, I’m using the smallest, with 1 vCPU, 2 GB RAM, and 20 GB of disk.

You can optionally set a name for the cluster, and you need to select the template that you previously imported for the load balancer VM (the wizard automatically detected the template from the inventory).

For step 3, you select the resource pool, the VM folder, and the datastore that will host both the management cluster and the workload clusters you will create later.

Next, you set up the VM network that will be used for the nodes, as well as the service and pod CIDRs that will be used by kube-proxy and the CNI (TKG uses Calico).

The last step of the wizard is to select the OS template for the control plane; again, it automatically showed the options from the previously imported templates.

After you select Next and then Review Configuration, the wizard shows a summary of all the settings. When you are ready, you can proceed to deploy the cluster.
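
As a side note, the wizard saves your answers to the TKG CLI configuration file (~/.tkg/config.yaml on my machine), so you can reuse them for a later deployment without the UI; something along these lines, where the plan and name are illustrative:

tkg init --infrastructure=vsphere --plan=dev --name=tkg-mgmt-vsphere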

The process of deployment took, in my case, around half an hour. When it finishes, you can close the browser.

If you look into your vSphere environment, you will find that 3 virtual machines were deployed. Because I didn’t change the name, they start with the default prefix “tkg-mgmt-vsphere-”, followed by control-plane for the master node, tkg-system-lb for the load balancer, and md-X for the worker node.

It also configured the kubeconfig file on the machine that you used to do the deployment, so now you can use kubectl commands to browse your newly created cluster.

jfcoronado@desktop:~/.kube$ kubectl get nodes
NAME                                                   STATUS   ROLES    AGE    VERSION
tkg-mgmt-vsphere-20200411112051-control-plane-vxxfk    Ready    master   4d2h   v1.17.3+vmware.2
tkg-mgmt-vsphere-20200411112051-md-0-cb4dbc69c-tjgjc   Ready    <none>   4d2h   v1.17.3+vmware.2
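
Since the management cluster is what runs the Cluster API machinery, you can also peek at its pods; in my deployment the CAPI and CAPV controllers live in namespaces like capi-system and capv-system:

kubectl get pods --all-namespaces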

Now that you have deployed your management cluster, you can start deploying workload clusters. For these clusters we have the same plan options, Development = dev and Production = prod, and the instance size will be the same one you selected for the management cluster. There are multiple options that we may discuss in future posts, but as I just wanted to quickly deploy a cluster, I created my test cluster, called my-cluster, with the following command:

tkg create cluster my-cluster --plan=dev
Logs of the command execution can also be found at: /tmp/tkg-20200415T200627730822906.log
Creating workload cluster 'my-cluster'...

Context set for workload cluster my-cluster as my-cluster-admin@my-cluster

Waiting for cluster nodes to be available...
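
While the command runs, you can watch Cluster API create the machines from the management cluster context (-A just saves guessing the namespace):

kubectl get machines -A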

After a couple of minutes, you will have your new cluster created. TKG CLI also set up the kubeconfig file with a new context for it, so you can now switch between the management cluster and your workload cluster.
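
Switching is plain kubectl context juggling; the workload cluster context name comes from the create output above:

kubectl config get-contexts
kubectl config use-context my-cluster-admin@my-cluster
kubectl get nodes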

And that’s it. TKG CLI really simplifies the deployment of your TKG clusters. You can spin up new clusters in minutes and also scale them up and down as you need.
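
For example, scaling the worker count of my test cluster looks roughly like this (check tkg scale cluster --help for the exact flag names in your version; this is what I recall from mine):

tkg scale cluster my-cluster --worker-machine-count 3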
