Deploying Tanzu Application Service for Kubernetes on Kind

Tanzu Application Service for Kubernetes brings the development experience of Tanzu Application Service (formerly called Pivotal Application Service, and before that Pivotal CloudFoundry) to the Kubernetes container orchestrator. It includes the most important features of Tanzu Application Service, such as cf push for buildpack-based apps, while running both the system components and the application instances as Kubernetes workloads.

TAS for Kubernetes is something I had been trying to test for a while. It moved from alpha to beta on April 14th, so it has been open for testing for more than a month now.

I started, like a good boy, by following the official guide to deploy it, and found that it was validated to run on top of Enterprise PKS (now Tanzu Kubernetes Grid Integrated, or TKGI).

Because I don’t have a lab environment big enough to run PKS on-premises (primarily because of NSX), I decided the best approach was to use the public cloud. I had my $100 of Azure credits that came with my MSDN Subscription, so I decided to use them. I was able to deploy PKS successfully, but I ran into some issues with the Azure load balancers. To keep the story short: I spent all my credit troubleshooting, and my lab was never finished. =( Well, that was strike one!

Then, a couple of days later, I saw that Keith had gotten TAS for Kubernetes running on TKG.

So, I thought: I can run this thing on any Kubernetes deployment, and I already have a TKG deployment in my lab, so let’s try it. Unfortunately, this was strike two! My TKG deployment, running in my nested vSphere environment, was already using a lot of my machine’s resources; the load was too big, the pods started complaining, and my laptop was about to explode!

After that, I decided to move on to other things until I could get my hands on more resources.

Last week, I attended the VMware Empower event, and one of the tracks was about TAS. It ended with a demo running it on top of Kind. I left the session so pumped that I had to try again. Luckily, this time it worked: third time’s the charm.

Environment

The first thing I did was create an Ubuntu Desktop 18.04 LTS virtual machine in VMware Workstation, with 4 vCPUs and 16GB of RAM (I think it could work with less RAM). The CPU sat at 100% the whole time during the deployment, though things settled down once everything was up.

First, we need to install Docker and configure it to use the systemd cgroup driver:

sudo apt-get install -y docker.io

sudo -i 
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl restart docker
sudo systemctl enable docker
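
To verify that Docker restarted with the systemd cgroup driver, you can check the daemon info:

sudo docker info | grep -i cgroup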

And then, install kubectl and Kind:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.8.1/kind-$(uname)-amd64
chmod +x ./kind
sudo mv ./kind /usr/bin/kind
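
The commands above cover Kind only; kubectl can be installed the same way. A minimal sketch, pinning the same Kubernetes version that Kind v0.8.1 ships in its node image (v1.18.2):

curl -Lo ./kubectl https://storage.googleapis.com/kubernetes-release/release/v1.18.2/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/bin/kubectl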

I created a cluster with three worker nodes, using the following Kind configuration file:

# kind-cluster.yaml cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker

And created my cluster:

sudo kind create cluster --name tas --config kind-cluster.yaml 
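
Because the cluster was created with sudo, the kubeconfig ends up under root’s home directory, so either run kubectl with sudo or copy the config over to your user. A quick check that the cluster came up (Kind names the context kind-<cluster name>):

sudo kubectl cluster-info --context kind-tas
sudo kubectl get nodes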

Something very handy about using Kind is that the TAS deployment requires a StorageClass for automatic provisioning of PersistentVolumes, and that comes out of the box with Kind.
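
You can confirm the default StorageClass is present (in Kind it is backed by the local-path provisioner):

kubectl get storageclass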

Prerequisites

Preparing the Installation Resources

First, you need to download TAS from the VMware Tanzu Network:

  1. Log into VMware Tanzu Network.
  2. Navigate to the VMware Tanzu Application Service for Kubernetes product listing.
  3. Download the “Tanzu Application Service” file to a local machine.
  4. Create a directory named tanzu-application-service next to the Tanzu Application Service file.
  5. Extract the contents of tanzu-application-service.tar into the tanzu-application-service directory:
tar xvf tanzu-application-service.tar -C tanzu-application-service

Container Registry for App Images

Tanzu Application Service for Kubernetes depends on a container image registry to store images for buildpack-based applications. Supported registries include Harbor, Google Container Registry (GCR), and Docker Hub.

For simplicity, I decided to use my personal Docker Hub account.

System Domain Name

You need a DNS domain name for your Tanzu Application Service system domain. I created the domain tas.vxcoronado.net on my DNS server.

Command-Line Tools

The installation procedure for Tanzu Application Service for Kubernetes requires the following command-line tools to be installed:

  • The BOSH CLI, bosh.
  • The kapp utility.
  • The kbld utility.
  • The ytt utility.
  • The cf CLI, cf.

You can download the binaries from the links above and install them this way:

  1. Rename the downloaded binary to the corresponding simple utility name: bosh, kapp, kbld, ytt, or cf.
  2. On the command line, run the following command to make the binary executable:
$ chmod +x <binary name>
  3. Move the binary into your PATH. For example:
$ mv bosh /usr/bin/bosh
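
Once all five are in place, a quick sanity check that each binary is on the PATH:

bosh --version
kapp version
kbld version
ytt version
cf --version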

Configuring Installation Values

Before installing, you need to configure some values that will be used during the installation. You need to:

  • Generate internal configuration values using a helper script,
  • Configure values for the system component registry, and
  • Configure values for the application image registry.

Generate Configuration Values

Tanzu Application Service for Kubernetes requires a small set of credentials and certificates to coordinate its components, along with a certificate to terminate TLS at the ingress gateway. A helper script in the installation resources generates these credentials and certificates to be supplied to the installation script.

  1. Create a directory named configuration-values in the same directory as the tanzu-application-service directory.
  2. Change to the tanzu-application-service directory in your terminal.
  3. Run the value-generation script with the system domain you selected previously and store its output in the configuration-values directory:
./bin/generate-values.sh -d "tas.vxcoronado.net" > ../configuration-values/deployment-values.yml

Configure System Registry Values

Tanzu Application Service for Kubernetes pulls the container images for system components from the Tanzu Network container registry, using the credentials from a user registered with Tanzu Network.

To configure Tanzu Application Service for Kubernetes with these values:

  1. Change directory to the configuration-values directory you created earlier.
  2. Create a file named system-registry-values.yml in that directory with the contents below, replacing the placeholder values with the Tanzu Network user credentials you wish to use:
#@data/values
---
system_registry:
  hostname: registry.pivotal.io
  username: "PLACEHOLDER-TANZU-NETWORK-REGISTRY-USERNAME"
  password: "PLACEHOLDER-TANZU-NETWORK-REGISTRY-PASSWORD"

Configure App Registry Values

We are going to use Docker Hub as the app registry. To configure Tanzu Application Service for Kubernetes with these values for the application image registry:

  1. In the same configuration-values directory you created earlier, create a file named app-registry-values.yml with the contents shown below, replacing the placeholder values with the username and password for your Docker Hub account. Note that for Docker Hub, the repository is the same as the username.
#@data/values
---
app_registry:
  hostname: https://index.docker.io/v1/
  repository: "PLACEHOLDER-USERNAME"
  username: "PLACEHOLDER-USERNAME"
  password: "PLACEHOLDER-PASSWORD"

In the end, you will have three configuration files, and you are ready to proceed with the installation:

jfcoronado@ubuntu:~/Downloads/configuration-values$ ls
app-registry-values.yml  deployment-values.yml  system-registry-values.yml

Installing TAS for Kubernetes

  1. Make sure you are working with the correct Kubernetes cluster. You can check with kubectl config current-context.
  2. Also, make sure you are in the tanzu-application-service directory.
  3. Run the following command:
./bin/install-tas.sh ../configuration-values

This will take a while, depending on your computer. I had some issues pulling images from the Pivotal registry during my deployment, and the install script timed out. I had to monitor the pods until all of them were running.
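
If it times out for you too, you can keep an eye on the pods while the images finish pulling:

watch kubectl get pods --all-namespaces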

jfcoronado@ubuntu:~/Downloads/configuration-values$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                                           READY   STATUS      RESTARTS   AGE
build-service          webhook-server-575ddbd566-8drlf                                1/1     Running     3          2d5h
cf-blobstore           cf-blobstore-minio-8685d7d7d9-j4n6s                            2/2     Running     6          2d5h
cf-db                  cf-db-postgresql-0                                             2/2     Running     7          2d5h
cf-system              capi-api-server-c597bb988-mln4x                                4/4     Running     14         2d5h
cf-system              capi-api-server-c597bb988-xk4x2                                4/4     Running     14         2d5h
cf-system              capi-clock-6765c6fb8c-hgsc4                                    2/2     Running     17         2d5h
cf-system              capi-deployment-updater-6b6dd8bd5b-9btrk                       2/2     Running     17         2d5h
cf-system              capi-kpack-watcher-8fc6b7d5f-t2rpb                             2/2     Running     6          2d5h
cf-system              capi-worker-7fbb849b48-f5m7r                                   2/2     Running     15         2d5h
cf-system              cfroutesync-8496f675b7-hq6tl                                   2/2     Running     6          2d5h
cf-system              eirini-76758c69c7-5k967                                        2/2     Running     6          2d5h
cf-system              fluentd-d2slq                                                  2/2     Running     19         2d5h
cf-system              fluentd-dcmcs                                                  2/2     Running     17         2d5h
cf-system              fluentd-r52f8                                                  2/2     Running     17         2d5h
cf-system              fluentd-xdlwz                                                  2/2     Running     23         2d5h
cf-system              log-cache-64c6c8bdf-dbbq6                                      5/5     Running     34         2d5h
cf-system              uaa-5945d89776-r4dfs                                           2/2     Running     16         2d5h
cf-workloads-staging   169b5d9f-897f-45e5-a30a-29b96c9d06c6-build-1-qc8bv-build-pod   0/1     Completed   0          2d1h
cf-workloads           test-app-test-space-e60fceafd2-0                               2/2     Running     10         2d1h
istio-system           istio-citadel-85bbf5b99c-gsdzc                                 1/1     Running     7          2d5h
istio-system           istio-galley-7fbc5dff58-z24pd                                  2/2     Running     7          2d5h
istio-system           istio-ingressgateway-kkx2c                                     2/2     Running     6          2d5h
istio-system           istio-ingressgateway-p58sb                                     2/2     Running     6          2d5h
istio-system           istio-ingressgateway-thq59                                     2/2     Running     6          2d5h
istio-system           istio-pilot-54d8b5bc95-jfd9w                                   2/2     Running     6          2d5h
istio-system           istio-policy-7974c56cbb-wrd2p                                  2/2     Running     16         2d5h
istio-system           istio-sidecar-injector-74fdbb8f4-6hgmz                         1/1     Running     7          2d5h
istio-system           istio-telemetry-684ff85959-zmw4d                               2/2     Running     15         2d5h
kpack                  kpack-controller-6699f56645-5n28g                              1/1     Running     3          2d5h
kpack                  kpack-webhook-7dd74d7695-sxmht                                 1/1     Running     7          2d5h
kube-system            coredns-66bff467f8-q7fwx                                       1/1     Running     3          2d6h
kube-system            coredns-66bff467f8-sh5c7                                       1/1     Running     4          2d6h
kube-system            etcd-tas-control-plane                                         1/1     Running     0          3h28m
kube-system            kindnet-4qqzs                                                  1/1     Running     5          2d6h
kube-system            kindnet-7qpts                                                  1/1     Running     8          2d6h
kube-system            kindnet-8rbw4                                                  1/1     Running     5          2d6h
kube-system            kindnet-bhvkp                                                  1/1     Running     7          2d6h
kube-system            kube-apiserver-tas-control-plane                               1/1     Running     0          3h28m
kube-system            kube-controller-manager-tas-control-plane                      1/1     Running     13         2d6h
kube-system            kube-proxy-44ntr                                               1/1     Running     3          2d6h
kube-system            kube-proxy-gd72p                                               1/1     Running     3          2d6h
kube-system            kube-proxy-k6x6w                                               1/1     Running     3          2d6h
kube-system            kube-proxy-wx7s2                                               1/1     Running     3          2d6h
kube-system            kube-scheduler-tas-control-plane                               1/1     Running     7          2d6h
local-path-storage     local-path-provisioner-bd4bb6b75-mcq5b                         1/1     Running     7          2d6h
metacontroller         metacontroller-0                                               2/2     Running     6          2d5h

Then, running the installation script again produced a successful outcome.

Changes

Namespace             Name                               Kind                   Conds.  Age  Op      Wait to    Rs  Ri  
build-service         webhook-server                     Deployment             2/2 t   2d   update  reconcile  ok  -  
cf-blobstore          cf-blobstore-minio                 PersistentVolumeClaim  -       2d   update  reconcile  ok  -  
cf-workloads-staging  cc-kpack-registry-service-account  ServiceAccount         -       2d   update  reconcile  ok  -  

Op:      0 create, 0 delete, 3 update, 0 noop
Wait to: 3 reconcile, 0 delete, 0 noop

2:28:57PM: ---- applying 1 changes [0/3 done] ----
2:28:57PM: update serviceaccount/cc-kpack-registry-service-account (v1) namespace: cf-workloads-staging
2:28:57PM: ---- waiting on 1 changes [0/3 done] ----
2:28:57PM: ok: reconcile serviceaccount/cc-kpack-registry-service-account (v1) namespace: cf-workloads-staging
2:28:57PM: ---- applying 2 changes [1/3 done] ----
2:28:57PM: update persistentvolumeclaim/cf-blobstore-minio (v1) namespace: cf-blobstore
2:28:57PM: update deployment/webhook-server (apps/v1) namespace: build-service
2:28:57PM: ---- waiting on 2 changes [1/3 done] ----
2:28:58PM: ok: reconcile persistentvolumeclaim/cf-blobstore-minio (v1) namespace: cf-blobstore
2:28:58PM: ok: reconcile deployment/webhook-server (apps/v1) namespace: build-service
2:28:58PM: ---- applying complete [3/3 done] ----
2:28:58PM: ---- waiting complete [3/3 done] ----

Succeeded

The last step was to set up my DNS. Since I’m not using a LoadBalancer resource, I had to point the record to the IP addresses of all the worker nodes:

jfcoronado@ubuntu:~/Downloads/tanzu-application-service$ kubectl get nodes -o wide
NAME                STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION       CONTAINER-RUNTIME
tas-control-plane   Ready    master   2d6h   v1.18.2   172.18.0.4    <none>        Ubuntu 19.10   4.15.0-101-generic   containerd://1.3.3-14-g449e9269
tas-worker          Ready    <none>   2d6h   v1.18.2   172.18.0.2    <none>        Ubuntu 19.10   4.15.0-101-generic   containerd://1.3.3-14-g449e9269
tas-worker2         Ready    <none>   2d6h   v1.18.2   172.18.0.5    <none>        Ubuntu 19.10   4.15.0-101-generic   containerd://1.3.3-14-g449e9269
tas-worker3         Ready    <none>   2d6h   v1.18.2   172.18.0.3    <none>        Ubuntu 19.10   4.15.0-101-generic   containerd://1.3.3-14-g449e9269

So I pointed *.vxcoronado.net to 172.18.0.2, 172.18.0.5, and 172.18.0.3.
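
How you create the records depends on your DNS server. Just as an illustration (my setup uses a different DNS server), with dnsmasq the equivalent wildcard entries would look something like this:

# dnsmasq: resolve the domain and all of its subdomains to the three worker IPs
address=/vxcoronado.net/172.18.0.2
address=/vxcoronado.net/172.18.0.5
address=/vxcoronado.net/172.18.0.3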

Post-Installation System Configuration

There are some initial configuration steps required before you can start pushing code to TAS:

  1. Use the cf CLI to target the installation at the api subdomain of the system domain:
$ cf api api.tas.vxcoronado.net --skip-ssl-validation
  2. In your terminal, change to the directory containing the tanzu-application-service and configuration-values directories.
  3. Set the CF_ADMIN_PASSWORD environment variable to the CF administrative password, stored under the cf_admin_password key in configuration-values/deployment-values.yml:
$ CF_ADMIN_PASSWORD="$(bosh interpolate configuration-values/deployment-values.yml --path /cf_admin_password)"
  4. Log into the installation as the admin user:
$ cf auth admin "$CF_ADMIN_PASSWORD"
  5. Enable the diego_docker feature flag so that buildpack-based apps can run on the Kubernetes cluster:
$ cf enable-feature-flag diego_docker
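
If everything went well, cf target should now show the API endpoint and the admin user:

cf target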

Post-Installation Validation

1. Create and target an organization and space for the verification application:

$ cf create-org test-org 
$ cf create-space -o test-org test-space 
$ cf target -o test-org -s test-space

2. Clone the Cloud Foundry test application from GitHub to your workstation and change to the resulting test-app directory.
3. Push the test app to the installation:
$ cf push test-app --hostname test-app
4. After the cf push command succeeds, you can check your application at test-app.tas.vxcoronado.net.
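
From the VM itself, you can run a quick smoke test against the app (the -k flag skips certificate validation, since the certificate generated earlier is self-signed):

curl -k https://test-app.tas.vxcoronado.net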

Well, I finally have an environment to play with this tool. I will review it more in depth as I work with it, and will probably write another post about my experience.