Deploy Dell EMC Isilon CSI Driver for Kubernetes

In this post I’m going to explain how I configured my homelab to integrate a Kubernetes cluster with the Dell EMC Isilon CSI driver. I had to create a new cluster as the plugin has specific requirements:

  • Kubernetes 1.14
  • Red Hat Enterprise Linux 7.6 as the host operating system
  • Isilon OneFS versions 8.1 and 8.2

So, let’s start with Isilon… if you don’t have a physical system, which was my case, you can download the Dell EMC Isilon Simulator from https://www.delltechnologies.com/en-us/products-solutions/trial-software-download/isilon.htm and deploy it in your virtual environment following the installation guide:

https://www.dellemc.com/en-us/collaterals/unauth/technical-guides-support-information/products/storage/isilon-onefs-simulator-install-guide.pdf

I created a single-node cluster.

After that, you need to install your hosts. I created a two-node cluster with one master node and one worker node.

  • k8s-master-isi – 192.168.231.111
  • k8s-worker-isi – 192.168.231.112

You need a subscription to download the RHEL installation media, and you also need to register your systems for yum to work and download the required packages. If you don’t have a commercial subscription, you can create an account and request a developer subscription, which is what I did, at https://access.redhat.com.

After you install the OS normally and set up the network personality and hostname, you need to register the system; it will ask for your credentials, and then you are ready to install Kubernetes.

subscription-manager register
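
If yum still can’t see the repositories after registering, you may also need to attach an entitlement; auto-attach usually takes care of it:

subscription-manager attach --auto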

Install Kubernetes

First we need to install Docker… these instructions are from the official Kubernetes documentation, with some tweaks I had to make, and they need to be followed on both the master and the worker node.

# Install Docker CE
## Set up the repository
### Install required packages.
yum install yum-utils device-mapper-persistent-data lvm2

### Add Docker repository.
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

## Modify notify_only setting from 1 to 0
vi /etc/yum/pluginconf.d/search-disabled-repos.conf

## Install Docker CE.
yum update && yum install docker-ce-18.06.2.ce

## Create /etc/docker directory.
mkdir /etc/docker

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl enable docker.service
systemctl restart docker
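
You can verify that Docker is up and picked up the systemd cgroup driver before moving on:

docker info | grep -i 'cgroup driver'
## Expected output: Cgroup Driver: systemd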

Disable System Swap

swapoff -a
## Comment the swap line in /etc/fstab
vi /etc/fstab
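
If you prefer a one-liner to editing the file by hand, something like this should comment out a standard swap entry (double-check /etc/fstab afterwards):

sed -i '/\sswap\s/ s/^/#/' /etc/fstab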

Installing kubeadm, kubelet and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

## Pin the package versions so they match the release supported by the driver (v1.14.7)
yum install -y kubelet-1.14.7 kubeadm-1.14.7 kubectl-1.14.7 --disableexcludes=kubernetes

systemctl enable --now kubelet

Disable Firewall Daemon

systemctl disable firewalld
systemctl stop firewalld
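
If you’d rather keep firewalld running, an alternative is to open the ports Kubernetes needs instead of disabling the firewall entirely. This sketch is based on the port list in the Kubernetes documentation; your CNI may need extra ports too (Weave Net, for example, uses TCP 6783 and UDP 6783-6784):

## On the master node
firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250-10252/tcp
## On the worker node
firewall-cmd --permanent --add-port=10250/tcp --add-port=30000-32767/tcp
## On both nodes, reload to apply
firewall-cmd --reload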

Create a kubeadm config file to install the supported Kubernetes version (v1.14.7):

##kubeadm_config.yaml##
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.7

And then bootstrap the master and worker nodes. We will run kubeadm init on the master, use the resulting kubeadm join command to join the worker node, and set up the kubeconfig file so kubectl works.

kubeadm init --config kubeadm_config.yaml 
.......

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.231.60:6443 --token 4zgqvs.ne34lhz9qp2t8wbv \
    --discovery-token-ca-cert-hash sha256:557448ff3ad340c109bf0755dc412c5132d36e98ebbed21dc878ad80fac51758

Then you need to install the CNI plugin of your liking to finalize the installation; I installed Weave Net.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

After this you will have a working Kubernetes cluster running v1.14.7, so we are ready to start configuring the Isilon CSI driver.

[root@k8s-master-isi ~]# kubectl get nodes
NAME             STATUS   ROLES    AGE     VERSION
k8s-master-isi   Ready    master   3d13h   v1.14.7
k8s-worker-isi   Ready    <none>   3d13h   v1.14.7

Install the CSI driver for Dell EMC Isilon

Before we install the CSI driver for Isilon, we must complete the following tasks:

  • Enable Kubernetes feature gates
  • Configure Docker service
  • Install the Helm and Tiller package manager

You can see the full guide here: https://github.com/dell/csi-isilon/blob/master/CSI%20Driver%20for%20Dell%20EMC%20Isilon%20Product%20Guide.pdf. I’m including just the steps I personally followed to install the driver.

Enable Kubernetes feature gates

Edit /var/lib/kubelet/config.yaml on all nodes and add the following feature-gate settings at the end (they go under the featureGates key):

featureGates:
  VolumeSnapshotDataSource: true
  KubeletPluginsWatcher: true
  CSINodeInfo: true
  CSIDriverRegistry: true

On the master node, set the feature-gate settings in the static pod manifests kube-apiserver.yaml, kube-controller-manager.yaml, and kube-scheduler.yaml under /etc/kubernetes/manifests as follows:

--feature-gates=VolumeSnapshotDataSource=true,KubeletPluginsWatcher=true,CSINodeInfo=true,CSIDriverRegistry=true

For example, in kube-scheduler.yaml:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --feature-gates=VolumeSnapshotDataSource=true,KubeletPluginsWatcher=true,CSINodeInfo=true,CSIDriverRegistry=true
....

On both the master and worker node, edit the KUBELET_KUBECONFIG_ARGS variable in the /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf file, then reload systemd and restart the kubelet:

Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --allow-privileged=true --feature-gates=VolumeSnapshotDataSource=true,KubeletPluginsWatcher=true,CSINodeInfo=true,CSIDriverRegistry=true"
systemctl daemon-reload
systemctl restart kubelet
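
A quick sanity check that the restarted kubelet is running with the feature-gate flag:

ps -ef | grep kubelet | grep -o 'feature-gates=[^ ]*'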

Configure the Docker service

You need to configure mount propagation in Docker on both Kubernetes nodes before installing the CSI driver. Edit the [Service] section of the /etc/systemd/system/multi-user.target.wants/docker.service file, add the following line, then reload systemd and restart Docker:

[Service]
…
MountFlags=shared

systemctl daemon-reload
systemctl restart docker

Install Helm and Tiller

You need to install the Helm and Tiller package manager on the master node. I had to add /usr/local/bin to the PATH and then do the following:

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
helm init

Then you need to create a service account for Tiller and assign it the cluster-admin role. You can do this by creating the following YAML file (rbac-config.yaml):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Apply the YAML file and then configure Helm to use the tiller service account:

kubectl create -f rbac-config.yaml
helm init --upgrade --service-account tiller
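
You can check that the Tiller pod is running and that Helm can talk to it before moving on (in Helm v2 the Tiller pod carries the name=tiller label):

kubectl get pods -n kube-system -l name=tiller
helm version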

Install the CSI driver

  1. Run git clone https://github.com/dell/csi-isilon.git to clone the git repository on the master node.
  2. Run cd csi-isilon/helm && cp csi-isilon/values.yaml ./myvalues.yaml to change to the top-level helm directory and copy the values file from the driver configuration.
  3. Edit the file with vi myvalues.yaml and configure the Isilon endpoint. You can change multiple settings here; I just changed isiIP: to the right IP address. I also had to create the isiPath directory (/ifs/data/csi) on the Isilon under File System > File System Explorer > Create Directory.
  4. You will need a secret with the Isilon credentials; there’s a secret.yaml file in the top directory that you need to edit (see the sketch after this list). You can base64-encode the credentials yourself or write them in plain text; they will be encoded anyway.
  5. Before creating the secret, create a namespace for Isilon by running kubectl create namespace isilon.
  6. Run kubectl create -f secret.yaml to create the secret.
  7. Run sh install.isilon to install the driver.
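
For reference, here’s a minimal sketch of what the secret might look like; the isilon-creds name and isilon namespace are the defaults I saw, but check the secret.yaml shipped in the repository:

apiVersion: v1
kind: Secret
metadata:
  name: isilon-creds
  namespace: isilon
type: Opaque
data:
  # base64-encoded credentials, e.g. generated with: echo -n 'admin' | base64
  username: YWRtaW4=
  password: cGFzc3dvcmQ=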

The installation guide mentions that the installation may fail depending on the network speed: the delays in the script may be too short for the images of the pods being created to be downloaded. It happened to me, so I had to run sh uninstall.isilon and then sh install.isilon again. Since the images were by then in the local Docker cache, the installation was successful.

One important thing that is not mentioned in the guide, but is common sense, is that since the driver uses NFS, the nodes need to be able to mount NFS volumes. My lab installation didn’t have this installed, so I lost some time troubleshooting until I figured it out. You just need to install nfs-utils with the following command:

yum install nfs-utils
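
With nfs-utils in place, you can also sanity-check that a node can see the Isilon NFS exports; replace the placeholder with your Isilon cluster IP or SmartConnect name:

showmount -e <isilon-ip>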

After that, you are ready to start using the driver. The driver folder has some example Helm packages that you can run. But in a nutshell, what you need to do is create PersistentVolumeClaims using storageClassName: isilon, like this:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Mi
  storageClassName: isilon

Save it as test_pvc.yaml and create it:

[root@k8s-master-isi ~]# kubectl create -f test_pvc.yaml
persistentvolumeclaim/test created
[root@k8s-master-isi ~]# kubectl get pvc
NAME   STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test   Bound    k8s-995fce4974   100Mi      RWO            isilon         10s
[root@k8s-master-isi ~]# kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   REASON   AGE
k8s-995fce4974   100Mi      RWO            Delete           Bound    default/test   isilon                  11s
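
To actually consume the volume, reference the claim from a pod. Here’s a minimal sketch; the busybox image and mount path are just examples:

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: default
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data       # anything written here lands on the Isilon export
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test        # the PVC created above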

That easy! You can auto-magically get persistent volumes from the Isilon =)