Containers, Kubernetes, OpenShift on Power

Create a kubernetes cluster with kubeadm on Power RHEL/CentOS operating system

By Manjunath Kumatagi posted Wed November 09, 2022 04:33 AM

  

There are many ways to create a Kubernetes cluster; this blog walks through creating one with the kubeadm tool.

Kubeadm is a tool built to provide best-practice "fast paths" for creating Kubernetes clusters. It performs the actions necessary to get a minimum viable, secure cluster up and running in a user-friendly way.

Prerequisite

  • One or more ppc64le machines running RHEL or CentOS Stream.
  • A valid subscription if using RHEL.
  • 4 GiB or more of RAM per machine; any less leaves little room for your apps.
  • At least 2 CPUs on the machine you use as a control-plane node.
  • Full network connectivity among all machines in the cluster. You can use either a public or a private network.
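As a quick sanity check, the prerequisites above can be verified with a short script (a hypothetical preflight check; the thresholds mirror the list above):

```shell
# Hypothetical preflight check; thresholds mirror the prerequisites above
ARCH=$(uname -m)
CPUS=$(nproc)
MEM_GIB=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
echo "arch=$ARCH cpus=$CPUS mem=${MEM_GIB}GiB"
[ "$ARCH" = "ppc64le" ] || echo "warning: not a ppc64le machine"
[ "$CPUS" -ge 2 ]       || echo "warning: control-plane nodes need at least 2 CPUs"
[ "$MEM_GIB" -ge 4 ]    || echo "warning: less than 4 GiB of RAM"
```

Run it on every machine before continuing; the warnings point at which prerequisite is not met.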

Instructions

Prepare the machines

Make sure all the machines are up to date:

# Caution: this will reboot the machine
$ sudo yum -y update && sudo systemctl reboot

If SELinux is in enforcing mode, turn it off or switch it to permissive mode:

$ sudo setenforce 0 
$ sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config


Turn off swap

$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
$ sudo swapoff -a

Forwarding IPv4 and letting iptables see bridged traffic

Verify that the br_netfilter module is loaded by running lsmod | grep br_netfilter.

To load it explicitly, run sudo modprobe br_netfilter.

In order for a Linux node's iptables to correctly view bridged traffic, verify that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config. For example:

$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
$ sudo modprobe overlay
$ sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
$ sudo sysctl --system
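You can confirm the settings took effect with a couple of read-only checks against /proc (no root required):

```shell
# Read-only checks: confirm the modules are loaded and the sysctls are set
grep -E 'overlay|br_netfilter' /proc/modules || echo "modules not loaded yet"
for f in /proc/sys/net/ipv4/ip_forward \
         /proc/sys/net/bridge/bridge-nf-call-iptables \
         /proc/sys/net/bridge/bridge-nf-call-ip6tables; do
  if [ -r "$f" ]; then echo "$f = $(cat "$f")"; else echo "$f not available"; fi
done
```

Each of the three sysctl files should print 1; the bridge entries only exist once br_netfilter is loaded.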

Installing container runtime

You need to install a container runtime on each node in the cluster so that Pods can run there. This section covers installing and configuring containerd as the runtime.

Download the containerd-<VERSION>-linux-ppc64le.tar.gz archive from https://github.com/containerd/containerd/releases, verify its sha256sum, and extract it under /usr/local:

Note: Release tar.gz archives are shipped from version 1.6.7 onwards.
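The download-and-checksum step can be sketched like this (the version matches the one extracted below; containerd publishes a .sha256sum file next to each release archive):

```shell
# Illustrative download + checksum verification for containerd 1.6.9
VERSION=1.6.9
curl -LO "https://github.com/containerd/containerd/releases/download/v${VERSION}/containerd-${VERSION}-linux-ppc64le.tar.gz"
curl -LO "https://github.com/containerd/containerd/releases/download/v${VERSION}/containerd-${VERSION}-linux-ppc64le.tar.gz.sha256sum"
sha256sum -c "containerd-${VERSION}-linux-ppc64le.tar.gz.sha256sum"
```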

$ sudo tar Cxzvf /usr/local containerd-1.6.9-linux-ppc64le.tar.gz
bin/
bin/ctr
bin/containerd
bin/containerd-shim
bin/containerd-stress
bin/containerd-shim-runc-v2
bin/containerd-shim-runc-v1

Generate the containerd config file

$ sudo mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml

systemd service for containerd

If you intend to start containerd via systemd, also download the containerd.service unit file and start the service:

$ sudo mkdir -p /usr/local/lib/systemd/system/
$ sudo curl -L -o /usr/local/lib/systemd/system/containerd.service https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now containerd

Install runc

Download the runc.<ARCH> binary from https://github.com/opencontainers/runc/releases, verify its sha256sum, and install it as /usr/local/sbin/runc:
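A hedged sketch of the download-and-verify step (the version is illustrative; runc ships a single runc.sha256sum file covering all architectures):

```shell
# Illustrative download + checksum verification for runc (version assumed)
RUNC_VERSION=1.1.4
curl -LO "https://github.com/opencontainers/runc/releases/download/v${RUNC_VERSION}/runc.ppc64le"
curl -LO "https://github.com/opencontainers/runc/releases/download/v${RUNC_VERSION}/runc.sha256sum"
# --ignore-missing skips checksum entries for the other architectures
sha256sum --ignore-missing -c runc.sha256sum
```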

$ sudo install -m 755 runc.ppc64le /usr/local/sbin/runc


Using the systemd cgroup driver

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
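If you prefer not to hand-edit the file, the change can be applied with sed (assuming the default generated config, where the line reads SystemdCgroup = false):

```shell
# Flip SystemdCgroup from false to true in the generated default config
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Confirm the change
grep 'SystemdCgroup' /etc/containerd/config.toml
```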

Update the sandbox (pause) image in the sandbox_image field of the following section:

[plugins]
...
 [plugins."io.containerd.grpc.v1.cri"]
...
   sandbox_image = "registry.k8s.io/pause:3.9"
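The same kind of in-place edit works for the sandbox image (assuming a single sandbox_image line in the generated config):

```shell
# Point the sandbox (pause) image at registry.k8s.io/pause:3.9
sudo sed -i 's|sandbox_image = ".*"|sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
# Confirm the change
grep 'sandbox_image' /etc/containerd/config.toml
```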

Restart containerd to pick up the configuration changes:

$ sudo systemctl restart containerd

Install/Configure Kubernetes


Install the Kubernetes packages on all the machines from the Kubernetes yum repository.

Note: Change the version in the section below to the appropriate one (e.g. 1.30).

$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl
EOF

$ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

$ sudo systemctl enable --now kubelet


Initialize the control plane node

$ sudo kubeadm init
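kubeadm init also accepts flags to customize the cluster. For example (values are illustrative; 192.168.0.0/16 is the default IP pool used by Calico, which is installed later in this post):

```shell
# Optional, illustrative flags: pin the Kubernetes version and set the pod
# network CIDR to match Calico's default IP pool
sudo kubeadm init --kubernetes-version=v1.30.0 --pod-network-cidr=192.168.0.0/16
```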


To start using your cluster, you need to run the following as a regular user:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the CNI plugin


A CNI plugin is required for pod networking (pod-to-pod communication). This blog installs the Calico CNI:

$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

The node will become Ready after this step, which you can confirm with:

$ kubectl get nodes
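If you prefer a command that blocks until the node is actually Ready (handy in scripts), kubectl wait can do it:

```shell
# Block until every node reports Ready, or fail after five minutes
kubectl wait --for=condition=Ready node --all --timeout=300s
```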


Joining your nodes

To add new nodes to your Kubernetes cluster, do the following for each machine:

  • SSH to the machine
  • Prepare the machine
  • Install the runtime
  • Run the command that was output by kubeadm init. For example:
$ sudo kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

If you do not have the token, you can create one by running the following command on the control-plane node:

$ kubeadm token create --print-join-command

References:

  • https://github.com/containerd/containerd/blob/main/docs/getting-started.md
  • https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/