CI/CD Pipeline with Jenkins running on Kubernetes and Deploying to Kubernetes.

Jenkins is the way

This week I had a call with a customer to discuss how we can help them with their DevOps initiatives. I introduced them to the concepts of microservices, containers and Kubernetes, but at the end they were still struggling to understand how all the different technologies fit together.

I proposed to do a workshop and show them the whole process of placing an app in a container and running it in Kubernetes. But hey, that’s not the way they should be doing it in the end, right? The whole idea of DevOps is to have an automated pipeline: to go from code, to container, to production automatically, using the concept of CI/CD – Continuous Integration / Continuous Deployment.

Continuous Integration / Continuous Deployment

I decided to run Jenkins, considering it’s OSS and a very popular CI/CD tool with a lot of possibilities. I wanted Jenkins running on top of Kubernetes, so it could scale and deploy to Tanzu Kubernetes Grid, and I could show the customer the possibilities.

I’m going to assume that you have a Kubernetes cluster ready to work with; if not, you can follow my previous guides for deploying Kubernetes or deploying TKG.
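
Either way, before starting, it’s worth a quick check that kubectl on your workstation can actually reach the cluster you plan to use:

❯ kubectl get nodes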

Infrastructure used for the lab

I’m using my well-known corporate laptop (Latitude 7840) running Windows 10 with VMware Workstation, and on top of that I’m running the lab.

In the end, what I wanted to create was the following pipeline with Jenkins:

Pipeline we will be creating
  1. Jenkins pulls instructions from SCM (GitHub).
  2. Jenkins builds a Docker image based on the source code in the code repo (Apache web app).
  3. Jenkins pushes the Docker image to the container registry (Docker Hub).
  4. Jenkins posts the Kubernetes manifest (YAML) to the TKG API server.
  5. TKG creates the app resources and pulls the image from Docker Hub.

That being said, let’s begin…

1. Deploy Jenkins

The easiest way I found to deploy Jenkins, which also aligns with the fact that I’m studying this tool right now, was to use Helm.

Helm has repos with charts to deploy a lot of different OSS projects to Kubernetes. The first thing you need to do is install Helm on your workstation. There are multiple ways depending on your OS. Since I’m using Ubuntu 18.04 WSL on Windows 10, I’m using the script method:

❯ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
❯ chmod 700 get_helm.sh
❯ ./get_helm.sh
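
Once the script finishes, you can quickly confirm the client is installed and working (the version reported will depend on when you run this):

❯ helm version --short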

After that, we add the Helm Stable Chart Repo:

❯ helm repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories

Now, if you do a search in the repo you will find the Jenkins chart:

❯ helm search repo jenkins
NAME             CHART VERSION   APP VERSION   DESCRIPTION
stable/jenkins   1.18.1          lts           Open source continuous integration server. It s…

Helm will deploy to the Kubernetes context currently in use on your workstation, so you need to make sure you are in the right one.

❯ kubectl config current-context
kubernetes-admin@kubernetes

Then, you can install the chart this way:

❯ helm install jenkins stable/jenkins
NAME: jenkins
LAST DEPLOYED: Sun May 10 11:16:08 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:
  printf $(kubectl get secret --namespace default jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=jenkins" -o jsonpath="{.items[0].metadata.name}")
  echo http://127.0.0.1:8080
  kubectl --namespace default port-forward $POD_NAME 8080:8080

3. Login with the password from step 1 and the username: admin
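
As a side note, the chart is quite configurable. You can inspect all of the available values with helm show values; if I recall correctly there is a value along the lines of master.serviceType (treat that exact name as an assumption and confirm it in the values output) that would let you set the service type at install time instead of editing it afterwards, as we do below:

❯ helm show values stable/jenkins | less
❯ helm install jenkins stable/jenkins --set master.serviceType=NodePort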

Now you can get the admin password by running the command:

❯ printf $(kubectl get secret --namespace default jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
5TsWc6tuGG

Helm installs Jenkins with a ClusterIP service. I prefer to use a NodePort so I can access it directly without port-forwarding, so let’s change that:

❯ kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
jenkins         ClusterIP   100.71.179.76   <none>        8080/TCP    2m31s
jenkins-agent   ClusterIP   100.65.224.56   <none>        50000/TCP   2m31s
kubernetes      ClusterIP   100.64.0.1      <none>        443/TCP     5d15h
❯ kubectl get svc jenkins -o yaml > jenkins.yaml
❯ kubectl delete svc jenkins
❯ vi jenkins.yaml

Change the type to NodePort and add a nodePort to the service spec.

spec:
  clusterIP: 100.71.179.76
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 31000
  selector:
    app.kubernetes.io/component: jenkins-master
    app.kubernetes.io/instance: jenkins
  sessionAffinity: None
  type: NodePort
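
As an aside, instead of deleting and recreating the service you could patch it in place with a single command. A sketch, assuming the same port 8080 and nodePort 31000 (strategic merge patches on the ports list use the port number as the merge key, so this should only touch the 8080 entry):

❯ kubectl patch svc jenkins -p '{"spec":{"type":"NodePort","ports":[{"port":8080,"nodePort":31000}]}}'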

Now you can apply the new service manifest and check the IP of your worker nodes.

❯ kubectl create -f jenkins.yaml
service/jenkins created
❯ kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
jenkins         NodePort    100.71.179.76   <none>        8080:31000/TCP   5s
jenkins-agent   ClusterIP   100.65.224.56   <none>        50000/TCP        10m
kubernetes      ClusterIP   100.64.0.1      <none>        443/TCP          5d15h
❯ kubectl get nodes -o wide
NAME            STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-master      Ready    master   49d   v1.17.4   192.168.231.60   <none>        Ubuntu 18.04.4 LTS   4.15.0-99-generic   docker://19.3.6
k8s-worker-01   Ready    <none>   49d   v1.17.4   192.168.231.61   <none>        Ubuntu 18.04.4 LTS   4.15.0-99-generic   docker://19.3.6

Open your browser and navigate to any worker node IP on port 31000 (http://192.168.231.61:31000), or whatever port you selected, and you will get to your Jenkins instance.

You can log in with the user admin and the password you got in the previous steps.

Jenkins Login Page

2. Configure Jenkins

CONFIGURE KUBERNETES PLUGIN

It’s time now to configure Jenkins for our environment. One important thing to note is that the Jenkins architecture is fundamentally “Master + Agent”: the master does the coordination and provides the GUI and API endpoints, while the agents perform the work. The Helm chart we deployed already comes with the Kubernetes plugin, which handles the creation of the slave agents as pods in the cluster.

The issue we have is that in this pipeline we want to build a Docker image, so the Jenkins slave pod needs the docker executable and access to a Docker runtime to create the image. The default image being used doesn’t have Docker installed, so we need to provide a customized one that does. To do that, go to Manage Jenkins -> Configure System, scroll to the end and you will find the Cloud options (in the latest Jenkins versions you now need to open a separate configuration page).

You will find the Configure Cloud page there, showing the Kubernetes cloud details. You can review the configuration that’s already in place; we will focus on the Pod Template.

If you open that and go to the Pod Template details, you will be able to change the Docker image the pod will be using. We need to change it to joao29a/jnlp-slave-alpine-docker:latest.

Depending on your environment, you may also want to decrease the CPU and memory requests for the pod. You can simply go to Advanced… and decrease them. I’m using CPU: 200m and memory: 256Mi, and I’m leaving the limits the same.

In this example, instead of using a Docker runtime inside the container, we will be using the one from the Kubernetes worker. For that, we need to map the host’s /var/run/docker.sock into the container as a volume.
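
In this lab we stick with the GUI-configured pod template and agent any in the Jenkinsfile, but for reference, the Kubernetes plugin also lets you declare an equivalent pod template directly in a declarative pipeline. A minimal sketch, assuming the same custom image and the host socket mount (the volume name and the example stage are just illustrative):

pipeline {
  agent {
    kubernetes {
      // Inline pod template mirroring the GUI settings described above
      yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: joao29a/jnlp-slave-alpine-docker:latest
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
'''
    }
  }
  stages {
    stage('Build image') {
      steps {
        script {
          // With the host Docker socket mounted, docker.build works inside this pod
          docker.build("jfcoronado/myweb:${env.BUILD_NUMBER}")
        }
      }
    }
  }
}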

For this trick to work, we need to modify the default permissions of that socket file. We can change them by SSHing into the worker node(s) and making the following change:

jfcoronado@k8s-worker-01:~$ ls -la /var/run/docker.sock
srw-rw---- 1 root docker 0 May 10 21:44 /var/run/docker.sock
jfcoronado@k8s-worker-01:~$ sudo chmod 666 /var/run/docker.sock
[sudo] password for jfcoronado:
jfcoronado@k8s-worker-01:~$ ls -la /var/run/docker.sock
srw-rw-rw- 1 root docker 0 May 10 21:44 /var/run/docker.sock

INSTALL AND CONFIGURE KUBERNETES CONTINUOUS DEPLOY PLUGIN

Now we will be installing and configuring the Kubernetes Continuous Deploy Plugin.

You need to go to Manage Jenkins -> Manage Plugins -> Available, and look for the right plugin:

Then select it and click the Install without restart button.

Next, we need to configure access to the Kubernetes cluster; the plugin uses a kubeconfig file for this.

To provide the kubeconfig file we need to create a credential, so from the dashboard go to Credentials -> System -> Global Credentials -> Add Credentials.

In Kind, choose Kubernetes configuration (kubeconfig). You can leave Scope as Global (Jenkins, nodes, items, all child items, etc). In ID, put a name you can remember, because it’s the one we will use in our pipeline; in this example I used mykubeconfig. In Description you can put anything you want.

The Kubeconfig field gives you the option to enter the file content directly, fetch a file from the Jenkins master, or fetch a file from the Kubernetes master node. Since our Jenkins master is a pod, and pulling the file from the master node would require an SSH connection, the easiest way is to simply paste the content of the kubeconfig file from your workstation.
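
Assuming your kubeconfig lives in the default location on your workstation, you can just print it and copy the output into the credential:

❯ cat ~/.kube/config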

CREATE DOCKER HUB CREDENTIALS

We will be pushing our custom container image to Docker Hub, so we need to create credentials for our account. This is similar to what we did for the kubeconfig file: go again to Credentials -> System -> Global Credentials -> Add Credentials.

This time we will be creating a credential of kind Username with password. We are using ‘dockerhub’ as the ID; take note of this name for your pipeline.

This will be the last configuration required. Now we are ready to start creating our pipeline.

3. Configure the pipeline

All configuration files will be stored in the GitHub repo. My test repo is public for reference, https://github.com/jfcoronado/playjenkins, and it’s actually a fork of Venkat Nagappan’s repo, with my modifications.

Let’s look at the files…

FROM httpd:2.4
COPY ./index.html /usr/local/apache2/htdocs/

This is a simple Dockerfile. We are using the Apache web server image (httpd) as the base, and then copying our custom index.html file to the default website root.
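
If you want to sanity-check the image locally before wiring it into the pipeline (assuming you have Docker available on your workstation; the myweb:test tag is just for this quick test), something like this works:

❯ docker build -t myweb:test .
❯ docker run --rm -d -p 8080:80 --name myweb-test myweb:test
❯ curl http://localhost:8080
❯ docker rm -f myweb-test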

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mywebclaim
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: space-efficient
  volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      volumes:
        - name: myvolume
          persistentVolumeClaim:
            claimName: mywebclaim
      containers:
      - image: jfcoronado/myweb:1
        imagePullPolicy: Always
        name: myweb
        volumeMounts:
        - mountPath: "/mnt/volume"
          name: myvolume
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  ports:
  - nodePort: 32223
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myweb
  type: NodePort

The YAML file is our manifest to deploy the app to Kubernetes. I wanted to showcase the Cloud Native Storage functionality of vSphere 6.7U3, which comes preconfigured with Tanzu Kubernetes Grid, so I’m including a persistent volume claim even though the app doesn’t strictly need one, then a deployment that uses the custom image our pipeline will be pushing to Docker Hub, and finally the service for our web application, of type NodePort listening on port 32223.
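
One thing to double-check before running the pipeline: the claim references the space-efficient storage class from my TKG setup. Make sure your target cluster actually has a storage class with that name (or change it in the manifest):

❯ kubectl get storageclass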

pipeline {

  environment {
    registry = "jfcoronado/myweb"
    dockerImage = ""
  }

  agent any

  stages {

    stage('Checkout Source') {
      steps {
        git 'https://github.com/jfcoronado/playjenkins.git'
      }
    }

    stage('Build image') {
      steps{
        script {
          dockerImage = docker.build registry + ":$BUILD_NUMBER"
        }
      }
    }

    stage('Push Image') {
      environment {
               registryCredential = 'dockerhub'
           }
      steps{
        script {
          docker.withRegistry( "", registryCredential ) {
            dockerImage.push()
          }
        }
      }
    }

    stage('Deploy App') {
      steps {
        script {
          kubernetesDeploy(configs: "myweb.yaml", kubeconfigId: "mykubeconfig")
        }
      }
    }
  }
}

The Jenkinsfile is where we really define our pipeline. It has 4 stages:

  • Checkout Source: clones our GitHub repo to the Jenkins agent pod.
  • Build Image: builds our Docker image based on the Dockerfile in the repo.
  • Push Image: pushes the image to our Docker Hub account, using the ‘dockerhub’ credentials (see the tagging note after this list).
  • Deploy App: applies ‘myweb.yaml’ to the Kubernetes cluster, using the ‘mykubeconfig’ kubeconfig credential.
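
One detail worth calling out: the pipeline tags the image with the Jenkins build number, while the manifest above pins jfcoronado/myweb:1, so later builds won’t be picked up unless you update the manifest or also push a floating tag. If you want the latter, the Docker Pipeline plugin can push the same image under an additional tag; a sketch of the Push Image stage with that one extra line (the only change from the Jenkinsfile above):

    stage('Push Image') {
      environment {
        registryCredential = 'dockerhub'
      }
      steps {
        script {
          docker.withRegistry('', registryCredential) {
            dockerImage.push()          // pushes jfcoronado/myweb:<build number>
            dockerImage.push('latest')  // additionally pushes the :latest tag
          }
        }
      }
    }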

Now it’s just a matter of creating the pipeline in Jenkins. We are going to use the classic pipeline job. For that, go to the dashboard and select ‘New Item’.

You name your item, select the Pipeline type, and click OK.

In the next screen the only part you need to fill out is the Pipeline section. Here you need to select:

  • Definition: Pipeline script from SCM (Source Control Management, in this case GitHub).
  • SCM: Git
  • Repository URL: your repo, in this example https://github.com/jfcoronado/playjenkins.git
  • Credentials: can be left blank if your repo is public. If you are using a private repo, you need to create the right credentials and use them here.
  • Branches to build: the default is master; change it if you are using a different branch. In our case I’m using master, so we are OK.

That’s it! Now you can save your pipeline and execute it by selecting Build Now.

In the Build History you will see the first build starting to run. If you go to your Jenkins Kubernetes cluster, you will also see a new pod for the Jenkins slave starting to run.

❯ kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
default-5tbzj                         1/1     Running   0          73s
jenkins-1588304811-694744ffc9-877rp   1/1     Running   7          4d6h

You can monitor the pipeline execution in the dashboard:

And when it finishes, you will see that your application has been deployed.

❯ kubectl get all
NAME                         READY   STATUS    RESTARTS   AGE
pod/myweb-6f69678d78-qj7kl   1/1     Running   0          3m20s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   100.64.0.1      <none>        443/TCP        7d2h
service/myweb        NodePort    100.66.246.95   <none>        80:32223/TCP   3m20s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/myweb   1/1     1            1           3m20s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/myweb-6f69678d78   1         1         1       3m20s
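
Since the service is a NodePort on 32223, you can hit the web app on any worker node IP; for example, from my workstation:

❯ curl http://192.168.231.61:32223

You should get the custom index.html from the repo back.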

Well… this was a long post… the longest one I have written so far… but I hope it’s helpful for somebody. Until the next one 🙂