VMware Tanzu MySQL Operator on vSphere with Tanzu

This week I decided to play a little with VMware Tanzu MySQL, which is still in beta, by deploying it to my vSphere with Tanzu Basic infrastructure. I used the official documentation as a guide, so if you want more details you can go there. In this blog I’m documenting the specifics of my testing, so if you want to follow along you will need the following prerequisites:

Prerequisites:

  • Access to Tanzu MySQL for Kubernetes in the VMware Tanzu Network Registry.
  • Helm installed on your machine.
  • kubectl installed on your machine.
  • A Tanzu Kubernetes Grid Service cluster; in my case, vSphere with Tanzu Basic (without NSX-T).

Installing Operator

Deploying the solution as a Kubernetes Operator allows us to create MySQL clusters on demand using the Kubernetes API and YAML manifests.

First we need to log in to our Tanzu Kubernetes Grid Service cluster and switch context:

kubectl vsphere login --server=192.168.68.33 -u administrator@vsphere.local --insecure-skip-tls-verify --tanzu-kubernetes-cluster-name tkc-01 --tanzu-kubernetes-cluster-namespace playground
kubectl config use-context tkc-01

I’m using MetalLB as the load balancer. It’s pretty easy to deploy: you just need to apply the following manifests, which will create the namespace and resources:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml 
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml 
# On first install only 
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
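
The manifests above deploy MetalLB itself but don’t give it any addresses to hand out; without an address pool, LoadBalancer services stay in Pending. MetalLB v0.9.x reads its pools from a ConfigMap in the metallb-system namespace. Here’s a minimal layer2 sketch; the address range is an assumption, so pick free IPs on your own network:

```shell
# Sketch: minimal MetalLB layer2 address pool for v0.9.x.
# The address range below is an assumption -- use unused IPs from your network.
cat > metallb-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.68.35-192.168.68.50
EOF
```

Apply it with kubectl apply -f metallb-config.yaml.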

We are going to deploy the operator in its own namespace, named tanzu-mysql-for-kubernetes-system, and set it as the default in our current context:

kubectl create ns tanzu-mysql-for-kubernetes-system
kubectl config set-context --current --namespace tanzu-mysql-for-kubernetes-system

Because we are using a vSphere with Tanzu TKG cluster, Pod Security Policies are enabled by default, so we need to create a rolebinding for the service account to be able to create pods. We can do that with this command:

kubectl create rolebinding psp:default --clusterrole=psp:vmware-system-privileged --serviceaccount=tanzu-mysql-for-kubernetes-system:default

We also need to create a secret to store our credentials for the VMware Tanzu Network Registry. You can try logging in with docker first, and if that works you can create the secret in Kubernetes:

docker login -u jose_coronado@dell.com registry.pivotal.io
Password: <TANZU-NETWORK-PASS>
Login Succeeded

kubectl create secret docker-registry tanzu-mysql-image-registry --docker-server=https://registry.pivotal.io/ --docker-username=jose_coronado@dell.com --docker-password=<TANZU-NETWORK-PASS>

Now let’s download the Helm chart to our local filesystem using the following commands, which will create a tanzu-mysql-operator folder in the local path.

helm chart pull registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator-chart:0.2.0
helm chart export registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator-chart:0.2.0

You need to modify the values.yaml inside that folder to your preference. In my case I’m using the default settings, which look like this:

 imagePullSecret: tanzu-mysql-image-registry
 operatorImage: "registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-operator:0.2.0"
 tanzuMySQLImage: "registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-instance:0.2.0"
 resources:
   limits:
     cpu: 100m
     memory: 30Mi
   requests:
     cpu: 100m
     memory: 20Mi

After that, we are ready to deploy the operator:

helm install tanzu-mysql-operator ./tanzu-mysql-operator/
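
If you’d rather not edit values.yaml, Helm can also override individual values at install time with --set. A sketch, where the key names come from the values.yaml shown above and the memory value is just an example:

```shell
# Same effect as editing values.yaml before installing; key names taken
# from the chart's values.yaml, the 64Mi limit is an example value.
helm install tanzu-mysql-operator ./tanzu-mysql-operator/ \
  --set imagePullSecret=tanzu-mysql-image-registry \
  --set resources.limits.memory=64Mi
```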

Let’s check if all the components are running:

❯ kubectl get all -n tanzu-mysql-for-kubernetes-system
 NAME                                                          READY   STATUS    RESTARTS   AGE
 pod/mysql-for-kubernetes-controller-manager-578998ff9-l8q2b   1/1     Running   5          2d
 NAME                                                      READY   UP-TO-DATE   AVAILABLE   AGE
 deployment.apps/mysql-for-kubernetes-controller-manager   1/1     1            1           2d
 NAME                                                                DESIRED   CURRENT   READY   AGE
 replicaset.apps/mysql-for-kubernetes-controller-manager-578998ff9   1         1         1       2d

We can see that our mysql-for-kubernetes-controller-manager pod is in the Running state. We can now go ahead and deploy our first MySQL cluster.

Deploying a Database

The first thing we will do is create a new namespace for our MySQL cluster. I’m going to name it mysql-prod, and the same way we did previously we will set it as the default namespace for our current context and create rolebindings to give the service accounts permission to create pods. I know in advance that we are going to create a database named tanzu-mysql, and when we create it the operator will also create two service accounts named <NAME-OF-DATABASE> and <NAME-OF-DATABASE>-backup-cron, so we create rolebindings for the default service account and for these other two as well.

kubectl create ns mysql-prod
kubectl config set-context --current --namespace=mysql-prod
kubectl create rolebinding psp:default --clusterrole=psp:vmware-system-privileged --serviceaccount=mysql-prod:default
kubectl create rolebinding psp:tanzu-mysql --clusterrole=psp:vmware-system-privileged --serviceaccount=mysql-prod:tanzu-mysql
kubectl create rolebinding psp:tanzu-mysql-backup-cron --clusterrole=psp:vmware-system-privileged --serviceaccount=mysql-prod:tanzu-mysql-backup-cron

We also need to create a docker-registry secret in this namespace to access the Tanzu Network Registry with our credentials:

kubectl create secret docker-registry tanzu-mysql-image-registry --docker-server=https://registry.pivotal.io --docker-username=<TANZU-NETWORK-USER> --docker-password=<TANZU-NETWORK-PASS>

Now we are ready to create our first test database. I’m using the default manifest from the documentation, setting the correct storage class and changing the service type to LoadBalancer. At this time I didn’t set up TLS or resource allocation.

apiVersion: mysql.tanzu.vmware.com/v1alpha1
kind: TanzuMySQL
metadata:
  name: tanzumysql-sample
spec:
  storageSize: 1Gi
  imagePullSecret: tanzu-mysql-image-registry

#### Set the storage class name to change storage class of the PVC associated with this resource
  storageClassName: tanzu-gold-storage-policy

#### Set the type of Service used to provide access to the MySQL database.
  serviceType: LoadBalancer

#### Set the name of the Secret used for TLS
#  tls:
#    secret:
#      name: mysql-tls-secret

#### Examples to set resource limit/request for mysql/backups containers.

#  resources:

#### This is the container running the mysql server.
#    mysql:
#      limits:
#        cpu: 3
#        memory: 800Mi
#      requests:
#        cpu: 2
#        memory: 500Mi
#### This is the sidecar container that takes a backup and streams to the storage backend.
#    backups:
#      limits:
#        cpu: 2
#        memory: 500Mi
#      requests:
#        cpu: 1
#        memory: 200Mi

We can now apply our manifest:

kubectl apply -f tanzumysql.yaml
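
While the instance comes up we can keep an eye on it. A sketch, assuming the resource kind and names match the manifest above:

```shell
# Check the custom resource created by the manifest above.
kubectl get tanzumysql tanzumysql-sample

# Block until the instance pod reports Ready (or the timeout expires).
kubectl wait pod/tanzumysql-sample-0 --for=condition=Ready --timeout=300s
```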

And we can check that our database pod is now running, controlled by a StatefulSet:

❯ kubectl get all
 NAME                      READY   STATUS    RESTARTS   AGE
 pod/tanzumysql-sample-0   2/2     Running   6          47h
 NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                          AGE
 service/tanzumysql-sample   LoadBalancer   195.60.53.179   192.168.68.35   3306:31504/TCP,33060:30268/TCP   47h
 NAME                                 READY   AGE
 statefulset.apps/tanzumysql-sample   1/1     47h

We can check that it has two containers, the main database and the backup sidecar container:

❯ kubectl describe pod tanzumysql-sample-0
 Name:         tanzumysql-sample-0
 Namespace:    mysql-prod
 Priority:     0
 Node:         tkc-01-workers-ps7ph-c5746456d-rsxmd/192.168.68.54
 Start Time:   Tue, 16 Feb 2021 10:07:35 -0500
 Labels:       app.kubernetes.io/instance=tanzumysql-sample
               app.kubernetes.io/name=mysql
               controller-revision-hash=tanzumysql-sample-78f687ff78
               statefulset.kubernetes.io/pod-name=tanzumysql-sample-0
 Annotations:  kubernetes.io/psp: vmware-system-privileged
 Status:       Running
 IP:           193.0.1.4
 IPs:
   IP:           193.0.1.4
 Controlled By:  StatefulSet/tanzumysql-sample
 Containers:
   mysql:
     Container ID:  containerd://e165f98d88396fe6014270377e2229f9f4c4678db67114316f9634a96822e78c
     Image:         registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-instance:0.2.0
     Image ID:      registry.pivotal.io/tanzu-mysql-for-kubernetes/tanzu-mysql-instance@sha256:07d09ff7afb6e86dc6d6b8485a490b95aa2543ff57ead52ef7c59d67ab056f35

.
.
.

It’s using a persistent volume created with the specified storage class:

❯ kubectl get pvc
 NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
 mysql-data-tanzumysql-sample-0   Bound    pvc-ce135b0e-6e35-4cea-81cf-9dc8ead63134   1Gi        RWO            tanzu-gold-storage-policy   4d1h

❯ kubectl get pv
 NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS                REASON   AGE
 pvc-ce135b0e-6e35-4cea-81cf-9dc8ead63134   1Gi        RWO            Delete           Bound    mysql-prod/mysql-data-tanzumysql-sample-0   tanzu-gold-storage-policy            4d1h

So, if we want to test our database, first we need to get the root credentials. They are stored in a secret named <DATABASE-NAME>-credentials. We can extract the password this way:

kubectl get secret tanzumysql-sample-credentials -o go-template='{{.data.rootPassword | base64decode}}'
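
Rather than printing the password to the terminal, we can capture it in a shell variable for later use; a small sketch using the same secret:

```shell
# Capture the root password in a variable instead of echoing it.
MYSQL_ROOT_PASSWORD=$(kubectl get secret tanzumysql-sample-credentials \
  -o go-template='{{.data.rootPassword | base64decode}}')
```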

And we can check the IP of our Load Balancer service:

❯ kubectl get svc
 NAME                TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                          AGE
 tanzumysql-sample   LoadBalancer   195.60.53.179   192.168.68.35   3306:30562/TCP,33060:32596/TCP   4d

And now we can connect to our database:

mysql -uroot -p<ROOT-PASSWORD> -h 192.168.68.35
mysql: [Warning] Using a password on the command line interface can be insecure.
 Welcome to the MySQL monitor.  Commands end with ; or \g.
 Your MySQL connection id is 30750
 Server version: 8.0.22-13 Percona Server (GPL), Release '13', Revision '6f7822f'
 Copyright (c) 2000, 2021, Oracle and/or its affiliates.
 Oracle is a registered trademark of Oracle Corporation and/or its
 affiliates. Other names may be trademarks of their respective
 owners.
 Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 mysql>                                                                                  
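
From here we can run a quick smoke test against the instance; the demo database and table below are hypothetical names, and <ROOT-PASSWORD> is the password extracted earlier:

```shell
# Quick smoke test -- the demo database and notes table are made-up names.
mysql -uroot -p<ROOT-PASSWORD> -h 192.168.68.35 <<'SQL'
CREATE DATABASE IF NOT EXISTS demo;
CREATE TABLE IF NOT EXISTS demo.notes (id INT PRIMARY KEY, body VARCHAR(255));
INSERT INTO demo.notes VALUES (1, 'hello from tanzu');
SELECT * FROM demo.notes;
SQL
```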

So that’s it for now. In the next post we are going to use the backup sidecar and the operator’s CRDs to create on-demand and scheduled backups to an S3 bucket; in this case we will be using Dell EMC ECS.