Installing Kubernetes 1.12 on SUSE Linux using kubeadm

By Michael Diederich posted Fri March 01, 2019 01:33 PM

  
This is a Storage post: while working on Storage for Containers (the IBM Storage Enabler for Containers), we ran into significant difficulty just setting up a decent Kubernetes cluster.

We wanted:


    Kubernetes (a decent version of it, i.e. K8S 1.12)
    on SLES 12 SP3
    reproducible and not too complicated
    (on s390x)



SUSE seems to be stuck with K8S 1.6
https://software.opensuse.org/download.html?project=Virtualization%3Acontainers&package=kubernetes

There is an article on the Open Mainframe Project -
it uses K8S 1.9 and does everything by hand (an interesting read, but not practical).

Google maintains a nice repository with all the versions you would want,
for example:
https://packages.cloud.google.com/yum/repos
or
https://packages.cloud.google.com/apt/

- but these exist only for Red Hat and Ubuntu!



Installing Kubernetes in 6 simple steps - for SLES 12 SP3






General preparation


IMPORTANT


SLES uses btrfs by default. The Docker "overlay" driver is not supported on this file system, so it is sensible to use ext4 for /var/lib/docker.


The Kubernetes kubeadm installer will stop if it finds btrfs.



Note:
you can tell Docker to use the btrfs storage driver:
https://docs.docker.com/storage/storagedriver/btrfs-driver/
Kubeadm will still complain, so for the purpose of this exercise we assume ext4!
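

If your root file system is btrfs, one option is to put /var/lib/docker on a dedicated ext4 volume before installing Docker. A minimal sketch - the device name /dev/sdb1 is only an assumption for illustration, adjust it to your system:


# ASSUMPTION: /dev/sdb1 is a spare partition on your system
mkfs.ext4 /dev/sdb1
mkdir -p /var/lib/docker
echo "/dev/sdb1 /var/lib/docker ext4 defaults 0 2" >> /etc/fstab
mount /var/lib/docker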



Install Docker (from the SLES repo)


Instructions: https://www.suse.com/documentation/sles-12/singlehtml/book_sles_docker/book_sles_docker.html#cha.docker.installation

You need an internet connection for the steps that follow.
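

Following that documentation, the install typically boils down to something like this (a sketch; on a registered SLES system the Containers Module may need to be enabled first):


# Install Docker from the SLES repository and check which storage driver it picked up
zypper install docker
systemctl start docker.service
docker info | grep -i "storage driver"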


Set


net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
net.bridge.bridge-nf-call-iptables=1

in



/etc/sysctl.conf

and activate these settings for the running system:



sysctl -p
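
If sysctl -p complains about an unknown key net.bridge.bridge-nf-call-iptables, the br_netfilter kernel module is probably not loaded yet. A minimal sketch to load it now and on every boot:


# Load the bridge netfilter module so the net.bridge.* sysctl keys exist
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p
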

Prepare system to access the Google Cloud repository


We need "zypper" to access the Google Cloud Repository that has all the Kubernetes packages.



zypper addrepo --type yum --gpgcheck-strict --refresh https://packages.cloud.google.com/yum/repos/kubernetes-el7-s390x google-k8s

Google seems to provide the signing keys in a way zypper cannot handle (zypper throws an error when importing them). To avoid warnings or errors when working with the Google Cloud Repository, we import the keys manually with rpm before using the repository. This allows even strict GPG checking (repo and rpm check) without errors.




rpm --import https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
rpm --import https://packages.cloud.google.com/yum/doc/yum-key.gpg

rpm -q gpg-pubkey --qf '%{name}-%{version}-%{release} --> %{summary}\n'
gpg-pubkey-39db7c82-5847eb1f --> gpg(SuSE Package Signing Key <build@suse.de>)
gpg-pubkey-50a3dd1c-50f35137 --> gpg(SuSE Package Signing Key (reserve key) <build@suse.de>)
gpg-pubkey-3e1ba8d5-558ab6a8 --> gpg(Google Cloud Packages RPM Signing Key <gc-team@google.com>) <<< the new signing keys
gpg-pubkey-ba07f4fb-5ac168db --> gpg(Google Cloud Packages Automatic Signing Key <gc-team@google.com>) <<<


The Google Kubernetes repository can now be used like a regular SLES repository:


zypper refresh google-k8s
Retrieving repository 'google-k8s' metadata ..................................................................................................................................[done]
Building repository 'google-k8s' cache .......................................................................................................................................[done]
Specified repositories have been refreshed.


We should see the different versions available:


zypper packages --repo google-k8s
Loading repository data...
Reading installed packages...
S | Repository | Name | Version | Arch
--+------------+----------------+----------------+------
| google-k8s | cri-tools | 1.12.0-0 | s390x
| google-k8s | cri-tools | 1.11.1-0 | s390x
| google-k8s | cri-tools | 1.11.0-0 | s390x
| google-k8s | cri-tools | 1.0.0_beta.1-0 | s390x
| google-k8s | kubeadm | 1.13.2-0 | s390x
| google-k8s | kubeadm | 1.13.1-0 | s390x
| google-k8s | kubeadm | 1.13.0-0 | s390x
| google-k8s | kubeadm | 1.12.5-0 | s390x
...

Install packages required to run Kubernetes on SLES


Install the packages using zypper, ignoring the missing dependency on "conntrack" (solution 2 in the prompt below).


E.g., for 1.12.5:


zypper install kubelet-1.12.5-0.s390x kubernetes-cni-0.6.0-0.s390x kubeadm-1.12.5-0.s390x cri-tools-1.12.0-0 kubectl-1.12.5-0.s390x
Loading repository data...
Reading installed packages...
Resolving package dependencies...

Problem: nothing provides conntrack needed by kubelet-1.12.5-0.s390x
Solution 1: do not install kubelet-1.12.5-0.s390x
Solution 2: break kubelet-1.12.5-0.s390x by ignoring some of its dependencies

Choose from above solutions by number or cancel [1/2/c] (c): 2
Resolving dependencies...
Resolving package dependencies...

The following 7 NEW packages are going to be installed:
cri-tools ebtables kubeadm kubectl kubelet kubernetes-cni socat

The following 5 packages have no support information from their vendor:
cri-tools kubeadm kubectl kubelet kubernetes-cni

7 new packages to install.
Overall download size: 60.6 MiB. Already cached: 0 B. After the operation, additional 346.9 MiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package socat-1.7.2.4-3.1.s390x (1/7), 194.3 KiB (623.6 KiB unpacked)
Retrieving: socat-1.7.2.4-3.1.s390x.rpm ......................................................................................................................................[done]
Retrieving package ebtables-2.0.10.4-13.3.1.s390x (2/7), 96.0 KiB (341.0 KiB unpacked)
Retrieving: ebtables-2.0.10.4-13.3.1.s390x.rpm ...............................................................................................................................[done]
Retrieving package cri-tools-1.12.0-0.s390x (3/7), 4.2 MiB ( 18.4 MiB unpacked)
Retrieving: bb34adae6727984066fd59accb9c7b47f4d69d93ac675e9cbaa106bf53172b0d-cri-tools-1.12.0-0.s390x.rpm ..........................................................[done (921 B/s)]
Retrieving package kubectl-1.12.5-0.s390x (4/7), 9.9 MiB ( 56.3 MiB unpacked)
Retrieving: ea57e78e50b2effc18a5c7453d9ebc5a17c01bdb1f1c3251d53456ce62f24744-kubectl-1.12.5-0.s390x.rpm ..........................................................[done (4.0 MiB/s)]
Retrieving package kubernetes-cni-0.6.0-0.s390x (5/7), 12.0 MiB ( 46.4 MiB unpacked)
Retrieving: 6498221e993d80cc4b297364682c42ffbe559c8c69f30f2b9a0582f89552a573-kubernetes-cni-0.6.0-0.s390x.rpm ....................................................[done (6.0 MiB/s)]
Retrieving package kubelet-1.12.5-0.s390x (6/7), 25.0 MiB (172.0 MiB unpacked)
Retrieving: 11bda8f79e78312ebf913f630e276dc5fcc48e56d16a9278f7e8afa299d74e2b-kubelet-1.12.5-0.s390x.rpm ..........................................................[done (8.0 MiB/s)]
Retrieving package kubeadm-1.12.5-0.s390x (7/7), 9.3 MiB ( 52.9 MiB unpacked)
Retrieving: ad31b6cc58ff1680a756d3e255e396d932e01ba491f869b4c3a94de70aac0390-kubeadm-1.12.5-0.s390x.rpm ..........................................................[done (2.0 MiB/s)]
Checking for file conflicts: .................................................................................................................................................[done]
(1/7) Installing: socat-1.7.2.4-3.1.s390x ....................................................................................................................................[done]
(2/7) Installing: ebtables-2.0.10.4-13.3.1.s390x .............................................................................................................................[done]
(3/7) Installing: cri-tools-1.12.0-0.s390x ...................................................................................................................................[done]
(4/7) Installing: kubectl-1.12.5-0.s390x .....................................................................................................................................[done]
(5/7) Installing: kubernetes-cni-0.6.0-0.s390x ...............................................................................................................................[done]
(6/7) Installing: kubelet-1.12.5-0.s390x .....................................................................................................................................[done]
(7/7) Installing: kubeadm-1.12.5-0.s390x .....................................................................................................................................[done]


Enable the kubelet and docker services, and start docker:



systemctl start docker.service
systemctl enable docker.service
systemctl enable kubelet.service



Run kubeadm


Some options are required for "kubeadm init"


Command options:


The "CoreDNS" component has a crash issue with some "/etc/resolv.conf" entries - so did the machine used here. It is a known issue in Kubernetes before 1.13, it can be worked around by replacing "CoreDNS" with "KubeDNS", by using the "feature-gate" switch :


 --feature-gates=CoreDNS=false

Select the pod network CIDR - select exactly this one! Others seem to cause issues:


--pod-network-cidr=10.244.0.0/16

Even though this matches the default expected by flannel, the option should not be omitted.
The network-layer pods may end up in "CrashLoopBackOff" state without it (see the check below).
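

A quick way to double-check why this CIDR matters (assuming curl is available): the flannel manifest applied later in "Add network layer" hard-codes the same network in its net-conf.json, and the two must match.


# Show the pod network the flannel manifest expects - it should equal --pod-network-cidr
curl -s https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml | grep -A 2 '"Network"'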



Select the version for the Kubernetes control plane (this can
differ from the kubelet / worker node components - with limitations):


--kubernetes-version=1.12.3

Run kubeadm :


kubeadm init --feature-gates=CoreDNS=false --pod-network-cidr=10.244.0.0/16 --kubernetes-version=1.12.3
[init] using Kubernetes version: v1.12.3
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Using the existing sa key.
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 16.011787 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node vlxssl10 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node vlxssl10 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "vlxssl10" as an annotation
[bootstraptoken] using token: 58trj5.4o0c36w2az5rzq1z
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 172.20.145.21:6443 --token 58trj5.4o0c36w2az5rzq1z --discovery-token-ca-cert-hash sha256:ef1ac8ef130b70f48437e4a746a01074dd2d3a51a6e5fa51d5fdaf3410675700

Save the above "join" command for later use!
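

If you lose it, a new join command can be generated later on the master (the bootstrap token shown above expires after 24 hours by default):


# Print a fresh join command with a newly created token
kubeadm token create --print-join-command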


Make kubectl usable as root:


export KUBECONFIG=/etc/kubernetes/admin.conf

Your cluster should now come up.


Check using:


kubectl version
kubectl get cs
kubectl get no
kubectl get pods -n kube-system
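
Until the pod network is added in the next step, it is normal for the node to report "NotReady" and for the DNS pods to stay in "Pending", for example:


# The node stays "NotReady" until the flannel network is applied in the next step
kubectl get no
# The DNS pods stay "Pending" for the same reason
kubectl get pods -n kube-system
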

Add network layer


We are using flannel, so we can add the network component by simply running


kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
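
To verify the network layer, check that the flannel pod for your architecture is running and that the node switches to "Ready" (pod names will differ):


# The s390x flannel DaemonSet pod should reach "Running"
kubectl get pods -n kube-system -o wide | grep flannel
# The node should now report "Ready"
kubectl get no
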

Add nodes


Single-node cluster


If you intend to use this as a single-node cluster, enable scheduling of pods on the master node:


kubectl taint nodes --all node-role.kubernetes.io/master-
node/vlxssl10 untainted

More nodes


On the additional nodes, install the same Kubernetes RPMs.


Add the repo as described above.


Then run:


 kubeadm join 172.20.145.21:6443 --token 58trj5.4o0c36w2az5rzq1z --discovery-token-ca-cert-hash sha256:ef1ac8ef130b70f48437e4a746a01074dd2d3a51a6e5fa51d5fdaf3410675700

using the join command shown in the kubeadm init output.
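

Back on the master, the joined node should appear and become "Ready" once flannel is running on it:


# Run on the master - the new node shows up and turns "Ready" after flannel starts on it
kubectl get no -o wide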


Test DNS


Every worker node should be able to talk DNS to the kube-dns service listening at 10.96.0.10, e.g.:


vlxssl10:~ # nslookup kubernetes.default.svc.cluster.local 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10#53

Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1

Note that 10.96.0.10 is the default address of the kube-dns service - it belongs to the service network, not the pod network!
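

To confirm the actual service address on your cluster:


# The CLUSTER-IP of this service is the address the worker nodes should be able to query
kubectl get svc -n kube-system kube-dns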


In addition, test the default nameserver from inside a pod (this only validates the network of the worker node hosting the pod):


kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
#Wait for the pod to start
kubectl exec -ti busybox -- nslookup kubernetes.default.svc.cluster.local
# This should result in returning the Kubernetes cluster IP


Summary



This short procedure allows you to consistently build a Kubernetes cluster on SUSE Linux (SLES).
The actual architecture - System Z (s390x) - does not matter at all; the same steps apply to x86_64 or PPC.

From here on the real fun can begin - by adding the IBM Storage Enabler for Containers with IBM Spectrum Connect or Spectrum Scale.
See
https://www.ibm.com/blogs/systems/ibm-storage-systems-go-cloud-native/

IBM Storage Enabler for Containers is available as a free software solution for IBM storage system customers.

Can the install be even simpler? Try out IBM Cloud Private CE
(CE as in "Community Edition"):
https://www.ibm.com/blogs/bluemix/2019/02/ibm-cloud-private-version-3-1-2-is-now-available/


Please leave a comment if you like this post!

Michael


PS -
Kubernetes 1.13.4 seems to be the latest stable version as of this writing.
The same instructions work with 1.13.4 -
with newer packages in zypper:

kubelet-1.13.4-0.s390x kubeadm-1.13.4-0.s390x


The new version will also allow you - in fact, force you - to use the default DNS setup:

kubeadm init --feature-gates=CoreDNS=false --pod-network-cidr=10.244.0.0/16 --kubernetes-version=1.12.3

becomes

kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=1.13.4













