Global Storage Forum

Connect, collaborate, and stay informed with insights from across Storage

Setting up a local Kubernetes environment

By Randhir Singh posted Sun September 08, 2024 02:16 AM

There are many ways to provision a Kubernetes (K8s) cluster:

  • Use a managed Kubernetes service like IBM Kubernetes Service or Amazon Elastic Kubernetes Service
  • Use kubeadm to install Kubernetes on an on-premises VM
  • Use tools like Minikube to create a cluster on your local machine

Installing Kubernetes locally gives us full control over the cluster configuration and infrastructure. In this article, we'll use the last two options to create a Kubernetes cluster in a production-like setup that we can use for local development and testing.

Create a K8s cluster with Minikube on Mac

There are a number of tools for creating a K8s cluster on your local machine, such as Minikube, MicroK8s, or k3s. Minikube is a good choice because it comes prepackaged with the most common components we might need.

The prerequisites for setting up and using Minikube are:

  • Docker or Podman as the container engine
  • kubectl to interact with the K8s cluster

Installing Podman

You can install Podman for Mac with:

brew install podman
podman machine init
podman machine start

Switch Podman to the rootful machine connection so it can run containers as root:

podman system connection default podman-machine-default-root

Test that your Podman installation was successful.

% podman run hello-world
Resolved "hello-world" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull quay.io/podman/hello:latest...
Getting image source signatures
Copying blob sha256:1ff9adeff4443b503b304e7aa4c37bb90762947125f4a522b370162a7492ff47
Copying config sha256:83fc7ce1224f5ed3885f6aaec0bb001c0bbb2a308e3250d7408804a720c72a32
Writing manifest to image destination
!... Hello Podman World ...!

         .--"--.           
       / -     - \         
      / (O)   (O) \        
   ~~~| -=(,Y,)=- |         
    .---. /`  \   |~~      
 ~/  o  o \~~~~.----. ~~   
  | =(X)= |~  / (O (O) \   
   ~~~~~~~  ~| =(Y_)=-  |   
  ~~~~    ~~~|   U      |~~ 

Project:   https://github.com/containers/podman
Website:   https://podman.io
Desktop:   https://podman-desktop.io
Documents: https://docs.podman.io
YouTube:   https://youtube.com/@Podman
X/Twitter: @Podman_io
Mastodon:  @Podman_io@fosstodon.org

Installing minikube

Install Minikube with brew:

brew install minikube

Test the minikube installation with:

% minikube start --driver=podman                             
πŸ˜„  minikube v1.33.1 on Darwin 14.4 (arm64)
✨  Using the podman (experimental) driver based on existing profile
πŸ‘  Starting "minikube" primary control-plane node in "minikube" cluster
πŸƒ  Updating the running podman "minikube" container ...
πŸ”Ž  Verifying Kubernetes components...
    β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Install kubectl to interact with the minikube cluster.

brew install kubectl

Check that your minikube cluster is up and running.

% kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:38641
CoreDNS is running at https://127.0.0.1:38641/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

% kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION            CONTAINER-RUNTIME
minikube   Ready    control-plane   2d10h   v1.30.0   192.168.49.2   <none>        Ubuntu 22.04.4 LTS   6.9.12-200.fc40.aarch64   docker://26.1.1

The Kubernetes cluster is now ready on your machine.

You can deploy an application, e.g., PodInfo, on your cluster.

% kubectl run podinfo --restart=Never --image=ghcr.io/stefanprodan/podinfo:6.2.2 --port=9898
pod/podinfo created

Check that the cluster has a new Pod.

% kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
podinfo   1/1     Running   0          21s
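To reach the application, you could port-forward directly (kubectl port-forward pod/podinfo 9898:9898) or front the Pod with a Service. A minimal Service sketch, relying on the run: podinfo label that kubectl run applies by default (names and ports match the command above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo
spec:
  selector:
    run: podinfo        # kubectl run labels the Pod run=<name> by default
  ports:
    - port: 9898
      targetPort: 9898
```

Apply it with kubectl apply -f, then test with kubectl port-forward svc/podinfo 9898:9898 and curl localhost:9898.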

Create a K8s cluster with kubeadm on Ubuntu

kubeadm is a tool for creating a production-like Kubernetes setup. We'll create a single-node K8s cluster using kubeadm on Ubuntu 22.04.

Start by setting up the terminal with shortcuts (e.g., k for kubectl) and command completion (e.g., k <Tab>) for kubectl commands.

### setup terminal
apt-get install -y bash-completion binutils
echo 'colorscheme ron' >> ~/.vimrc
echo 'set tabstop=2' >> ~/.vimrc
echo 'set shiftwidth=2' >> ~/.vimrc
echo 'set expandtab' >> ~/.vimrc
echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'alias c=clear' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc
sed -i '1s/^/force_color_prompt=yes\n/' ~/.bashrc

Uninstall previously installed Kubernetes components and install essential packages that will be used later.

apt-get remove -y docker.io kubelet kubeadm kubectl kubernetes-cni
apt-get autoremove -y
apt-get install -y etcd-client vim build-essential

At this point, it helps to take a brief detour into Kubernetes architecture so that we understand the different components we'll install.

A Kubernetes cluster consists of a control plane plus a set of worker machines, called nodes, that run containerized applications. Here we'll install all the components on a single node. 

The Kubernetes components can be divided into the control plane and the data plane.

  • The control plane consists of the API server, controller manager, etcd, and scheduler. Each of these runs in a container inside a pod.
  • The application pods make up the data plane. To run an application pod on a node, two Kubernetes agents must be running on that node: kube-proxy and the kubelet.

When a Pod is created on a node, the kubelet delegates:

  • Creating the containers to the container runtime, via the Container Runtime Interface (CRI)
  • Attaching the containers to the network, via the Container Network Interface (CNI)
  • Mounting volumes, via the Container Storage Interface (CSI)

This is depicted below.

Based on the above understanding of the different Kubernetes components, we'll bring them up in the following sequence:
  1. Install the CRI. We'll use containerd as the container runtime; this is required to run containers.
  2. Install the tools: kubeadm, kubelet, and kubectl.
  3. Install the Kubernetes control-plane. We'll use the kubeadm tool to bring it up.
  4. Install the CNI. We'll use Weave Net to enable pod networking.
  5. Install the CSI. We'll install OpenEBS as the CSI driver, which will provision volumes for any application that requests them.

Disable swap, as the kubelet will fail to start if it is enabled.

swapoff -a
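Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, also comment out the swap entry in /etc/fstab (the exact line varies by system; the entry below is just an illustration):

```
# /etc/fstab: comment out any swap line, for example:
#/swapfile    none    swap    sw    0    0
```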

Install CRI

Step 1: Installing containerd

% curl -LO https://github.com/containerd/containerd/releases/download/v1.7.21/containerd-1.7.21-linux-amd64.tar.gz
% tar Cxzvf /usr/local  containerd-1.7.21-linux-amd64.tar.gz
bin/
bin/containerd-shim-runc-v2
bin/containerd-shim
bin/ctr
bin/containerd-shim-runc-v1
bin/containerd
bin/containerd-stress

% curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
% mkdir -p /usr/local/lib/systemd/system/
% cp containerd.service /usr/local/lib/systemd/system/
% systemctl daemon-reload
% systemctl enable --now containerd
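One caveat worth checking: containerd's CRI plugin defaults to the cgroupfs cgroup driver, while kubeadm configures the kubelet for the systemd driver, and a mismatch can cause pods to restart unexpectedly. A common adjustment is to generate a default config (containerd config default > /etc/containerd/config.toml) and enable the systemd cgroup driver; the relevant fragment of containerd 1.7's config looks like:

```toml
# /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

Restart containerd (systemctl restart containerd) after editing the file.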

Step 2: Installing runc

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.14/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc

Step 3: Installing CNI plugins

% curl -LO https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz
% mkdir -p /opt/cni/bin
% tar Cxzvf /opt/cni/bin  cni-plugins-linux-amd64-v1.5.1.tgz
./
./macvlan
./static
./vlan
./portmap
./host-local
./vrf
./bridge
./tuning
./firewall
./host-device
./sbr
./loopback
./dhcp
./ptp
./ipvlan
./bandwidth

Install tools: kubeadm, kubelet, kubectl

Note that we're installing Kubernetes v1.31, as the package repository URL below shows.

sudo systemctl restart containerd
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
systemctl enable --now kubelet

Install the Kubernetes control-plane

kubeadm init --ignore-preflight-errors=NumCPU --skip-token-print
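As an aside, much of this can also be expressed in a kubeadm configuration file, which is easier to version-control than flags. A sketch, assuming the v1beta4 kubeadm API that ships with v1.31 (the file name kubeadm-config.yaml is arbitrary):

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  ignorePreflightErrors:
    - NumCPU           # same effect as --ignore-preflight-errors=NumCPU
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.31.0
```

You would then run kubeadm init --config kubeadm-config.yaml --skip-token-print.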

On successful completion, run these commands to make kubectl work for your non-root user (alternatively, as root, you can export KUBECONFIG=/etc/kubernetes/admin.conf):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install CNI

Configure pod networking by installing the CNI plugin Weave Net. Note that the manifest for K8s version 1.31 is used.

kubectl apply -f https://reweave.azurewebsites.net/k8s/v1.31/net.yaml

Because this is a single-node Kubernetes cluster, we want to be able to schedule Pods on the control-plane node, so we have to remove its taint:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-

At this point, all the pods should be running:

% k get po -A
NAMESPACE     NAME                                           READY   STATUS    RESTARTS        AGE
kube-system   coredns-74cbf5bcdb-84qd2                       1/1     Running   0               5h33m
kube-system   coredns-74cbf5bcdb-pjg9p                       1/1     Running   0               5h33m
kube-system   etcd-sips                                      1/1     Running   0               6h12m
kube-system   kube-apiserver-sips                            1/1     Running   0               6h12m
kube-system   kube-controller-manager-sips                   1/1     Running   0               6h12m
kube-system   kube-proxy-69h6v                               1/1     Running   0               6h12m
kube-system   kube-scheduler-sips                            1/1     Running   0               6h12m
kube-system   weave-net-2qrc7                                2/2     Running   1 (5h53m ago)   5h53m

We'll use Helm v3 to install the remaining packages, so install it next.

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

Install CSI

We need to install a Container Storage Interface (CSI) driver for persistent storage to work. We'll install OpenEBS.

# Add openebs repo to helm
helm repo add openebs https://openebs.github.io/charts

kubectl create namespace openebs

helm --namespace=openebs install openebs openebs/openebs

Check that all the pods are running.

% k get po -A
NAMESPACE     NAME                                           READY   STATUS    RESTARTS        AGE
kube-system   coredns-74cbf5bcdb-84qd2                       1/1     Running   0               5h33m
kube-system   coredns-74cbf5bcdb-pjg9p                       1/1     Running   0               5h33m
kube-system   etcd-sips                                      1/1     Running   0               6h12m
kube-system   kube-apiserver-sips                            1/1     Running   0               6h12m
kube-system   kube-controller-manager-sips                   1/1     Running   0               6h12m
kube-system   kube-proxy-69h6v                               1/1     Running   0               6h12m
kube-system   kube-scheduler-sips                            1/1     Running   0               6h12m
kube-system   weave-net-2qrc7                                2/2     Running   1 (5h53m ago)   5h53m
openebs       openebs-localpv-provisioner-74b58b5c5c-nv8lr   1/1     Running   0               5h28m
openebs       openebs-ndm-jcz4t                              1/1     Running   0               5h28m
openebs       openebs-ndm-operator-7dc87c8874-65v8q          1/1     Running   0               5h28m
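Before deploying a full application, you can verify dynamic provisioning with a small PersistentVolumeClaim against the openebs-hostpath storage class that the chart installs. A minimal sketch (the claim name demo-pvc is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl apply -f and check it with kubectl get pvc demo-pvc. Note that the hostpath class typically uses WaitForFirstConsumer binding, so the claim may stay Pending until a Pod actually mounts it.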

To test the cluster, you can deploy WordPress. Note that we need to specify the storage class provided by our CSI driver.

# Add bitnami repo to helm
helm repo add bitnami https://charts.bitnami.com/bitnami

helm install wordpress bitnami/wordpress \
  --set=global.storageClass=openebs-hostpath

We have successfully created a single-node Kubernetes cluster (v1.31) using kubeadm on Ubuntu 22.04, and the cluster has everything we need to install applications.
