MicroShift with KubeVirt and Kata Containers on Raspberry Pi 4 with Rocky Linux 9
Introduction
MicroShift is a Red Hat-led open-source community project that explores how the OpenShift OKD Kubernetes distribution can be optimized for small form-factor devices and edge computing. Red Hat Device Edge delivers an enterprise-ready and supported distribution of MicroShift. Red Hat Device Edge is planned as a developer preview early next year and is expected to be generally available with full support later in 2023.
Over the last 27 parts, we have worked with MicroShift on multiple distros of Linux on the Raspberry Pi 4 and Jetson Nano. Specifically, we have used up to the 4.8.0-microshift-2022-04-20-141053 branch of MicroShift in this blog series. In Part 14, we worked with MicroShift on Rocky Linux 8.5. In this Part 28, we will work with MicroShift on Rocky Linux 9. We will run an object detection sample and send messages to Node Red installed on MicroShift. Further, we will set up KubeVirt and the Containerized Data Importer (CDI), the OKD Web Console, and run Virtual Machine Instances in MicroShift. We will also use .NET to drive a Raspberry Pi Sense HAT. We will build and run a Python Operator using kopf to connect to MongoDB. Finally, we will set up MicroShift with the Kata Containers runtime.
Rocky Linux is a community-maintained and freely available enterprise Linux distribution. It is managed by the Rocky Enterprise Software Foundation (RESF), a Public Benefit Corporation (PBC). Red Hat discontinued development of CentOS, which was a downstream rebuild of Red Hat Enterprise Linux, in favor of a newer upstream development variant of that operating system known as "CentOS Stream". Rocky Linux is intended to be a downstream, fully binary-compatible release built from the Red Hat Enterprise Linux operating system source code.
Setting up the Raspberry Pi 4 with Rocky Linux
Run the following steps to download the Rocky Linux image and set up the Raspberry Pi 4.
- Download the Rocky Linux image.
- Write the image to a MicroSDXC card using balenaEtcher or the Raspberry Pi Imager
- Optionally, have a keyboard and monitor connected to the Raspberry Pi 4
- Insert the MicroSDXC card into the Raspberry Pi 4 and power it on
- Find the ethernet DHCP IP address of your Raspberry Pi 4 by running nmap on your MacBook with your subnet
$ sudo nmap -sn 192.168.1.0/24
Nmap scan report for 192.168.1.209
Host is up (0.0043s latency).
MAC Address: E4:5F:01:2E:D8:95 (Raspberry Pi Trading)
- Log in using rocky/rockylinux. You may add your public ssh key to log in without a password.
$ ssh rocky@$ipaddress
[rocky@localhost ~]# mkdir ~/.ssh
[rocky@localhost ~]# vi ~/.ssh/authorized_keys
[rocky@localhost ~]# chmod 700 ~/.ssh
[rocky@localhost ~]# chmod 600 ~/.ssh/authorized_keys
[rocky@localhost ~]# # ls -lZ ~/.ssh;chcon -R -v system_u:object_r:usr_t:s0 ~/.ssh/;ls -lZ ~/.ssh
[rocky@localhost ~]# # ls -lZ ~/.ssh;chcon -R -v system_u:object_r:ssh_home_t:s0 ~/.ssh/;ls -lZ ~/.ssh
Check that your key is RSA 2048 bits or larger. RSA 1024 keys will not work after updates.
ssh-keygen -l -v -f ~/.ssh/id_rsa.pub
If it is 1024, you will get the error “Invalid key length” instead of “Accepted publickey”
[rocky@localhost ~]# cat /var/log/secure | grep RSA
Dec 3 19:09:20 rocky sshd[407]: refusing RSA key: Invalid key length [preauth]
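If your key is too short, you can generate a new, larger key on your laptop and copy it over. A minimal sketch (the file name id_rsa_rocky is just an example):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_rocky   # generate a 4096-bit RSA key
ssh-copy-id -i ~/.ssh/id_rsa_rocky.pub rocky@$ipaddress   # install it in authorized_keys on the Pi
ssh -i ~/.ssh/id_rsa_rocky rocky@$ipaddress   # verify passwordless login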
- Extend the disk
rootfs-expand
Output:
[rocky@localhost ~]$ sudo su -
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for rocky:
[root@localhost ~]#
/dev/mmcblk0p3 /dev/mmcblk0 3
Extending partition 3 to max size ....
CHANGED: partition=3 start=1593344 old: size=5469157 end=7062501 new: size=120545247 end=122138591
Resizing ext4 filesystem ...
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/mmcblk0p3 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 8
The filesystem on /dev/mmcblk0p3 is now 15068155 (4k) blocks long.
Done.
- Optionally, enable wifi
nmcli device wifi list # Note your ssid
nmcli device wifi connect $ssid --ask
- Check the release
cat /etc/os-release
NAME="Rocky Linux"
VERSION="9.0 (Blue Onyx)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.0"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Rocky Linux 9.0 (Blue Onyx)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
ROCKY_SUPPORT_PRODUCT_VERSION="9.0"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.0"
- Set the hostname with a domain and add the ipv4 address to /etc/hosts
hostnamectl set-hostname rocky.example.com
echo "$ipaddress rocky rocky.example.com" >> /etc/hosts
- Update the kernel parameters - Concatenate the following onto the end of the existing line (do not add a new line) in /boot/cmdline.txt
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. Cgroups are a key component of containers because there are often multiple processes running in a container that you need to control together. In MicroShift, cgroups are used to implement resource requests and limits and the corresponding QoS classes at the pod level.
Install the updates and reboot
dnf -y update
reboot
Verify
ssh rocky@$ipaddress
sudo su -
mount | grep cgroup
cat /proc/cgroups | column -t # Check that memory and cpuset are present
Output (hugetlb is not present):
[root@rocky neofetch-master]# mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)
[root@rocky neofetch-master]# cat /proc/cgroups | column -t # Check that memory and cpuset are present
#subsys_name hierarchy num_cgroups enabled
cpuset 0 65 1
cpu 0 65 1
cpuacct 0 65 1
blkio 0 65 1
memory 0 65 1
devices 0 65 1
freezer 0 65 1
net_cls 0 65 1
perf_event 0 65 1
net_prio 0 65 1
pids 0 65 1
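On a cgroup v2 host such as Rocky Linux 9, you can also confirm the controllers enabled at the root of the unified hierarchy (a quick extra check; the path assumes the default systemd mount shown above):
cat /sys/fs/cgroup/cgroup.controllers   # should list cpuset, cpu, memory among others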
Optionally, install neofetch to display information about your Raspberry Pi and add a script to watch the cpu temperature
sudo dnf config-manager --set-enabled crb
sudo dnf -y install epel-release
sudo dnf -y install neofetch bc
cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
# Update the /etc/sysconfig/cpupower to use ondemand
systemctl enable cpupower; systemctl start cpupower
cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
Output:
[rocky@rocky ~]$ cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
powersave
[root@rocky neofetch-master]# cat /etc/sysconfig/cpupower
# See 'cpupower help' and cpupower(1) for more info
CPUPOWER_START_OPTS="frequency-set -g performance"
CPUPOWER_STOP_OPTS="frequency-set -g ondemand"
[root@rocky neofetch-master]# systemctl enable cpupower; systemctl start cpupower
Created symlink /etc/systemd/system/multi-user.target.wants/cpupower.service → /usr/lib/systemd/system/cpupower.service.
[root@rocky neofetch-master]# cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
performance
You may keep a watch on the temperature of the Raspberry Pi using the following script rpi_temp.sh
#!/bin/bash
cpu=$(</sys/class/thermal/thermal_zone0/temp)
echo "$(bc <<< "${cpu} / 1000") C ($(bc <<< "${cpu} / 1000 * 1.8 + 32") F)"
Install sense_hat and RTIMULib on Rocky Linux 9
The Sense HAT is an add-on board for the Raspberry Pi. The Sense HAT has an 8x8 RGB LED matrix, a five-button joystick, and includes the following sensors: Inertial Measurement Unit (accelerometer, gyroscope, magnetometer), temperature, barometric pressure, and humidity. If you have the Sense HAT attached, install the libraries.
Install sensehat
dnf -y install zlib zlib-devel libjpeg-devel gcc gcc-c++ i2c-tools python3-devel python3 python3-pip cmake
pip3 install Cython Pillow numpy sense_hat smbus
Install RTIMULib
cd
dnf -y install git
git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig
# Optional test the sensors
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11 # Ctrl-C to break
cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10 # Ctrl-C to break
# Optional
dnf -y install qt5-qtbase-devel
cd /root/RTIMULib/Linux/RTIMULibDemoGL
qmake-qt5
make -j4
make install
Check the Sense Hat with i2cdetect
i2cdetect -y 1
[root@rocky ~]# i2cdetect -y 1
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --
Test the SenseHat samples for the Sense Hat's LED matrix and sensors.
cd
git clone https://github.com/thinkahead/microshift.git
cd microshift
cd raspberry-pi/sensehat-fedora-iot
# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt
# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt
# The first time you run temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py
# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt
# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py
# Find Magnetic North
python3 compass.py
Install MicroShift on the Raspberry Pi 4 Rocky Linux host
Set up crio and MicroShift Nightly CentOS Stream 9 aarch64
rpm -qi selinux-policy # selinux-policy-34.1.43
dnf -y install 'dnf-command(copr)'
curl https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift-nightly/repo/centos-stream-9/group_redhat-et-microshift-nightly-centos-stream-9.repo -o /etc/yum.repos.d/microshift-nightly-centos-stream-9.repo
cat /etc/yum.repos.d/microshift-nightly-centos-stream-9.repo
VERSION=1.24 # Using 1.21 or 1.24 from CentOS_8_Stream.
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Fedora_36/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:${VERSION}/CentOS_8_Stream/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo
cat /etc/yum.repos.d/devel\:kubic\:libcontainers\:stable\:cri-o\:${VERSION}.repo
dnf -y install firewalld cri-o cri-tools microshift containernetworking-plugins # Be patient, this takes a few minutes
# You can alternatively use the 1.25 from CentOS_9_Stream
# If you want to install a different version of crio, you can remove the previous version and then reinstall
# dnf -y remove cri-o cri-tools microshift
# VERSION=1.25 # Using 1.25 from CentOS_9_Stream.
# curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:${VERSION}/CentOS_9_Stream/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo
# dnf -y install cri-o cri-tools microshift
Install KVM on the host and validate the Host Virtualization Setup. The virt-host-validate command validates that the host is configured in a suitable way to run libvirt hypervisor driver qemu.
sudo dnf -y install libvirt-client libvirt-nss qemu-kvm virt-manager virt-install virt-viewer
systemctl enable --now libvirtd
virt-host-validate qemu
Check that cni plugins are present and start MicroShift
ls /opt/cni/bin/ # empty
ls /usr/libexec/cni # cni plugins
We will have systemd start and manage MicroShift on this rpm-based host. Refer to the microshift service for the three approaches.
systemctl enable --now crio microshift
You may read about selecting zones for your interfaces.
sudo systemctl enable firewalld --now
sudo firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=5353/udp --permanent
sudo firewall-cmd --reload
Additional ports may need to be opened. For external access to run kubectl or oc commands against MicroShift, add the 6443 port:
sudo firewall-cmd --zone=public --permanent --add-port=6443/tcp
For access to services through NodePort, add the port range 30000-32767:
sudo firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --reload
firewall-cmd --list-all --zone=public
firewall-cmd --get-default-zone
#firewall-cmd --set-default-zone=public
#firewall-cmd --get-active-zones
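With port 6443 open, you can also run oc or kubectl from your laptop against MicroShift. A hedged sketch (the kubeconfig path is the one used later in this post; the file is root-owned, so make a readable copy first, and adjust the server address in the copied file to https://$ipaddress:6443 if it does not already point at the Pi):
# On the Raspberry Pi (as root): make a copy the rocky user can read
cp /var/lib/microshift/resources/kubeadmin/kubeconfig /home/rocky/kubeconfig && chown rocky /home/rocky/kubeconfig
# On the laptop: fetch it and point oc at it
scp rocky@$ipaddress:/home/rocky/kubeconfig ~/microshift-kubeconfig
oc --kubeconfig ~/microshift-kubeconfig get nodes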
Check the microshift and crio logs
journalctl -u microshift -f
journalctl -u crio -f
The microshift service references the microshift binary in the /usr/bin directory
[root@rocky ~]# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service
[Service]
WorkingDirectory=/usr/bin/
ExecStart=/usr/bin/microshift run
Restart=always
User=root
[Install]
WantedBy=multi-user.target
Install the kubectl and the openshift client
ARCH=arm64
cd /tmp
export OCP_VERSION=4.9.11 && \
curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
tar -xzvf oc.tar.gz && \
rm -f oc.tar.gz && \
install -t /usr/local/bin {kubectl,oc} && \
rm -f {README.md,kubectl,oc}
It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
#watch "kubectl get nodes;kubectl get pods -A;crictl pods;crictl images"
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"
Samples to run on MicroShift
We will run samples that show the use of dynamic persistent volumes, the Sense HAT, and the USB camera.
1. InfluxDB/Telegraf/Grafana
The source code for this influxdb sample is available on GitHub.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb
Replace the coreos node name in the persistent volume claims with rocky.example.com (our current node name)
sed -i "s|coreos|rocky.example.com|" influxdb-data-dynamic.yaml
sed -i "s|coreos|rocky.example.com|" grafana/grafana-data-dynamic.yaml
This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PV.
annotations:
kubevirt.io/provisionOnNode: rocky.example.com
spec:
storageClassName: kubevirt-hostpath-provisioner
We create and push the “measure:latest” image using the Dockerfile. If you want to run all the steps in a single command, just execute the runall-balena-dynamic.sh.
./runall-balena-dynamic.sh
The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.
Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You may change the password on first login or click on Skip. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Go back and open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.
Finally, after you are done working with this sample, you can run the deleteall-balena-dynamic.sh
./deleteall-balena-dynamic.sh
Deleting the persistent volume claims automatically deletes the persistent volumes.
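You can confirm that the dynamically provisioned volumes are gone (persistent volumes are cluster-scoped, so no namespace is needed):
oc get pv   # should no longer list the influxdb and grafana volumes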
2. Node Red live data dashboard with SenseHat sensor charts
We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered
Build and push the arm64v8 image “karve/nodered:arm64”
cd docker-custom/
# Replace docker with podman in docker-debian.sh and run it
./docker-debian.sh
podman push karve/nodered:arm64
cd ..
Deploy Node Red with a persistent volume for /data within the Node Red container
mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml
oc get routes
oc -n nodered wait deployment nodered-deployment --for condition=Available --timeout=300s
oc logs deployment/nodered-deployment -f
Add the IP address of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your laptop and browse to http://nodered-svc-nodered.cluster.local/
The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, and node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 or Flow 2 has already been imported from the nodered sample. This import into Node Red can also be done manually under “Import Nodes”, followed by clicking “Deploy”.
Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see the Home by Default. You can see the state of the Joystick Up, Down, Left, Right or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT. If you selected the Flow 1, you could click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED. You can see the screenshots for these dashboards in previous blogs.
We can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:
cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml -n nodered
oc project default
oc delete project nodered
3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red
This example requires the same Node Red setup as in the previous Sample 2.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection
We will build the image for object detection that sends pictures and WebSocket chat messages to Node Red when a person is detected, using a pod in MicroShift.
podman build -f Dockerfile -t docker.io/karve/object-detection-raspberrypi4 .
podman push docker.io/karve/object-detection-raspberrypi4:latest
Update the env values WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection.yaml to point to your Raspberry Pi 4's IP address (192.168.1.209 shown below).
env:
- name: WebSocketURL
value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
- name: ImageUploadURL
value: http://nodered-svc-nodered.cluster.local/upload
hostAliases:
- hostnames:
- nodered-svc-nodered.cluster.local
ip: 192.168.1.209
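If you prefer not to edit the file by hand, a hedged one-liner can patch the address (assuming your Raspberry Pi's IP address is in $ipaddress and the file still contains the 192.168.1.209 placeholder):
sed -i "s|192.168.1.209|$ipaddress|" object-detection.yaml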
Create the deployment
oc project default
oc apply -f object-detection.yaml
oc -n default wait deployment object-detection-deployment --for condition=Available --timeout=300s
We will see pictures being sent to Node Red when a person is detected. Chat messages are sent to http://nodered-svc-nodered.cluster.local/chat
When we are done testing, we can delete the deployment
oc delete -f object-detection.yaml
4. Running a Virtual Machine Instance on MicroShift
Install the latest version of the KubeVirt Operator.
LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt
We can build the OKD Web Console (codename “bridge”) from source as mentioned in Part 9. Here, we will run “bridge” as a container image within MicroShift.
cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}' | grep console-token
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}' | grep console-token
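If you want to inspect the bearer token stored in one of those secrets, an optional check (replace console-token-xxxxx, a placeholder here, with a secret name printed above):
oc get secret console-token-xxxxx -n kube-system -o jsonpath='{.data.token}' | base64 -d; echo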
In okd-web-console-install.yaml, replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value https://192.168.1.209:6443 with your Raspberry Pi 4's IP address, and set the secretRef name for BRIDGE_K8S_AUTH_BEARER_TOKEN to one of the console-token-* secret names from the two commands above. Then apply/create the okd-web-console-install.yaml.
vi okd-web-console-install.yaml
oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc -n kube-system wait deployment console-deployment --for condition=Available --timeout=300s
oc logs deployment/console-deployment -f -n kube-system
Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/
We can optionally preload the Fedora image into CRI-O (if using the all-in-one containerized approach, this needs to be run within the microshift pod running in podman)
crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.
cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods
The output for the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”. Note down the IP address of the vmi-fedora Virtual Machine Instance. Connect directly to the VMI from the Raspberry Pi 4 with fedora as the password.
Output:
[root@rocky vmi]# ssh fedora@`oc get vmi vmi-fedora -o jsonpath="{ .status.interfaces[0].ipAddress }"` 'bash -c "ping -c 2 google.com"'
The authenticity of host '10.42.0.17 (10.42.0.17)' can't be established.
ED25519 key fingerprint is SHA256:DgowLpqM+4pb2oJ2hasn2VZlqPqCenhdlMOAINmaSac.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.17' (ED25519) to the list of known hosts.
fedora@10.42.0.17's password:
PING google.com (142.251.35.174) 56(84) bytes of data.
64 bytes from lga25s78-in-f14.1e100.net (142.251.35.174): icmp_seq=1 ttl=118 time=4.48 ms
64 bytes from lga25s78-in-f14.1e100.net (142.251.35.174): icmp_seq=2 ttl=118 time=4.39 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 4.390/4.432/4.475/0.042 ms
A second way is to create a pod to run the ssh client and connect to the Fedora VM from that pod. Let's create the openssh-client pod:
oc run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client
or
oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
#oc attach sshclient -c sshclient -i -t
Then, ssh to the Fedora VMI from this openssh-client container.
Output:
[root@rocky vmi]# oc get vmi vmi-fedora -o jsonpath='{ .status.interfaces[0].ipAddress }{"\n"}'
10.42.0.17
[root@rocky vmi]# oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ssh fedora@10.42.0.17 "bash -c \"ping -c 2 google.com\""
The authenticity of host '10.42.0.17 (10.42.0.17)' can't be established.
ED25519 key fingerprint is SHA256:DgowLpqM+4pb2oJ2hasn2VZlqPqCenhdlMOAINmaSac.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.17' (ED25519) to the list of known hosts.
fedora@10.42.0.17's password:
PING google.com (142.251.40.110) 56(84) bytes of data.
64 bytes from lga25s79-in-f14.1e100.net (142.251.40.110): icmp_seq=1 ttl=117 time=3.63 ms
64 bytes from lga25s79-in-f14.1e100.net (142.251.40.110): icmp_seq=2 ttl=117 time=4.14 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.632/3.887/4.143/0.255 ms
/ # exit
Session ended, resume using 'oc attach sshclient -c sshclient -i -t' command when the pod is running
pod "sshclient" deleted
A third way to connect to the VM is to use the virtctl console. You can compile your own virtctl as described in Part 9. To simplify, we copy the virtctl arm64 binary from a prebuilt container image to /usr/local/bin on the Raspberry Pi 4.
dnf -y install podman
id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id
virtctl console vmi-fedora
Output:
[root@rocky vmi]# id=$(podman create docker.io/karve/kubevirt:arm64)
Trying to pull docker.io/karve/kubevirt:arm64...
Getting image source signatures
Copying blob 7065f6098427 done
Copying config 1c7a5aa443 done
Writing manifest to image destination
Storing signatures
[root@rocky vmi]# podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
[root@rocky vmi]# podman rm -v $id
59b4f00b2dde80c4e0fc9ce2b11d246f782337494c3f6fca263fee14806792cb
[root@rocky vmi]# virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]
fedora
Password:
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.81.238) 56(84) bytes of data.
64 bytes from lga25s74-in-f14.1e100.net (142.250.81.238): icmp_seq=1 ttl=118 time=5.35 ms
64 bytes from lga25s74-in-f14.1e100.net (142.250.81.238): icmp_seq=2 ttl=118 time=4.38 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.384/4.867/5.351/0.483 ms
[fedora@vmi-fedora ~]$ # Ctrl ] to exit
[root@rocky vmi]#
We can also create and start the CentOS 9 Stream VM on the Raspberry Pi 4 using the image we created in Part 26.
crictl pull docker.io/karve/centos-stream-genericcloud-9-20221206:arm64
cd /root/microshift/raspberry-pi/vmi
Update the vm-centos9.yaml with your public key in ssh_authorized_keys so you can ssh
vi vm-centos9.yaml
oc apply -f vm-centos9.yaml
virtctl start vm-centos9
We can ssh to the CentOS VM with user cloud-user and password centos. Then stop the VM:
virtctl stop vm-centos9
If done with the Fedora VM and CentOS VM, we can delete the VMI and VM
oc delete -f vmi-fedora.yaml -f vm-centos9.yaml
You may continue to the next sample 5 that will use the kubevirt operator and the OKD console. You can run other VM and VMI samples for alpine, cirros and fedora images as in Part 9.
When done with KubeVirt, you may delete the KubeVirt operator.
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
When done with the OKD Web Console, delete it:
cd /root/microshift/raspberry-pi/console
oc delete -f okd-web-console-install.yaml
5. Containerized Data Importer (CDI)
CDI is a utility designed to import Virtual Machine images for use with KubeVirt. At a high level, a PersistentVolumeClaim (PVC) is created. A custom controller watches for importer-specific claims and, when one is discovered, starts an import process that writes a raw image with the desired content into the associated PVC.
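As a minimal illustration of that flow, once the CDI operator and CR below have been installed, a PVC annotated with an import endpoint is enough to trigger the importer. This is a hedged sketch, not one of the files used later in this post, and the endpoint URL is a placeholder:
cat << EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: imported-disk-example
  annotations:
    # CDI watches for this annotation and starts an importer pod for the claim
    cdi.kubevirt.io/storage.import.endpoint: "https://example.com/disk.qcow2"
spec:
  storageClassName: kubevirt-hostpath-provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF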
When we use the default microshift binary, the cdi-apiserver logs show a “no valid subject specified” error. We fix this by setting requestheader-allowed-names to empty in ~/microshift/pkg/controllers/kube-apiserver.go and recompiling the microshift binary. This blank option indicates to an extension apiserver that any CN is acceptable.
"--requestheader-allowed-names=",
We will use the prebuilt binary with the microshift image containing the above fix.
[root@rocky microshift]# systemctl stop microshift
[root@rocky microshift]# id=$(podman create docker.io/karve/microshift:arm64)
Trying to pull docker.io/karve/microshift:arm64...
Getting image source signatures
Copying blob 3747df4fb07d done
Copying blob 5d1d750e1695 done
Copying blob 7d5d04141937 done
Copying blob 92cc81bd9f3b done
Copying blob 209d1a1affea done
Copying config b21a0a79b2 done
Writing manifest to image destination
Storing signatures
[root@rocky microshift]# podman cp $id:/usr/bin/microshift /usr/bin/microshift
[root@rocky microshift]# restorecon /usr/bin/microshift
[root@rocky microshift]# # Run cleanup.sh and start microshift
[root@rocky microshift]# systemctl start microshift
If you want to build the microshift binary yourself, use the following steps with golang 1.17.2 as shown in Part 26.
[root@rocky microshift]# systemctl stop microshift
[root@rocky microshift]# # Update "--requestheader-allowed-names=",
[root@rocky microshift]# vi pkg/controllers/kube-apiserver.go
[root@rocky microshift]# make
[root@rocky microshift]# cp microshift /usr/bin/microshift
cp: overwrite '/usr/bin/microshift'? y
[root@rocky microshift]# restorecon -R -v /usr/bin/microshift
[root@rocky microshift]# # Run cleanup.sh and start microshift
[root@rocky microshift]# systemctl start microshift
Check the latest arm64 version at https://quay.io/repository/kubevirt/cdi-operator?tab=tags&tag=latest
VERSION=v1.55.2
ARM_VERSION=20221210_a6ebd75e-arm64 # Use the arm64 tag from https://quay.io/repository/kubevirt/cdi-operator?tab=tags&tag=latest
# The version does not work with arm64 images
# oc apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
# So we use the ARM_VERSION
curl -sL https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml | sed "s/$VERSION/$ARM_VERSION/g" | oc apply -f -
# Wait for cdi-operator to start
# Next create the cdi-cr that will create the apiserver, deployment and uploadproxy
oc apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
oc get apiservices
oc api-resources --api-group=cdi.kubevirt.io
In this section, we show the concise instructions for creating the CentOS 9 Stream and Ubuntu Jammy VMs. Have a look at uploading CentOS and Ubuntu images using Containerized Data Importer (CDI) and creating VMs for other distros in Part 26 for detailed instructions.
CentOS 9 Stream VM using URL as datavolume source
cd ~/microshift/raspberry-pi/vmi
vi centos9-dv.yaml # Update the annotation kubevirt.io/provisionOnNode with your microshift node name rocky.example.com
oc apply -f centos9-dv.yaml # Create a persistent volume by downloading the CentOS image
# The cloning when creating the VM below will wait for the above import to be completed
vi vm-centos9-datavolume.yaml # Update the annotation kubevirt.io/provisionOnNode with your microshift node name and the ssh_authorized_keys
oc apply -f vm-centos9-datavolume.yaml # Create a CentOS VM by cloning the persistent volume with the above CentOS image
# The centos9instance1 dv will stay in CloneScheduled state until the centos9-dv import is completed
watch oc get dv,pv,pvc,pods,vm,vmi
Ubuntu Jammy VM using the virtctl image-upload
vi example-upload-dv.yaml # Update the annotation kubevirt.io/provisionOnNode with your microshift node name
oc apply -f example-upload-dv.yaml
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-arm64.img
Add the “$ipaddress cdi-uploadproxy-cdi.cluster.local” to /etc/hosts where the $ipaddress is your Raspberry Pi’s IP address
virtctl image-upload dv example-upload-dv --namespace default --size 10Gi --image-path jammy-server-cloudimg-arm64.img --wait-secs 1200 --no-create --uploadproxy-url=https://cdi-uploadproxy-cdi.cluster.local --insecure
The --insecure flag in the command above is required to avoid the following error:
Post "https://cdi-uploadproxy-cdi.cluster.local/v1alpha1/upload": x509: certificate is valid for router-internal-default.openshift-ingress.svc, router-internal-default.openshift-ingress.svc.cluster.local, not cdi-uploadproxy-cdi.cluster.local
Wait for the jammy image to be uploaded and processed. You can log in to the VMI after it is started with user ubuntu and password ubuntu, set with cloud-init in vm-ubuntujammy-uploadvolume.yaml. Note that it takes a few minutes after the VM is started for the password to be applied.
vi vm-ubuntujammy-uploadvolume.yaml # Update the annotation kubevirt.io/provisionOnNode with your microshift node name and the ssh_authorized_keys
oc apply -f vm-ubuntujammy-uploadvolume.yaml
watch oc get dv,pv,pvc,pods,vm,vmi
You may see the source-pod show CreateContainerConfigError; just wait a few seconds and it will go to the Running state.
NAME PHASE PROGRESS RESTARTS AGE
datavolume.cdi.kubevirt.io/ubuntujammy1 CloneInProgress 3.11% 40s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-5afb5eb5-2775-4461-acba-f34db7895063 56Gi RWO Delete Bound default/ubuntujammy1 kubevirt-hostpath-provisioner 40s
persistentvolume/pvc-a1e5125d-0c7d-459e-8163-bfe9dec202ad 56Gi RWO Delete Bound default/centos9-dv kubevirt-hostpath-provisioner 67m
persistentvolume/pvc-a579276e-0e25-42c1-b8f8-ebcac5ec1022 56Gi RWO Delete Bound default/centos9instance1 kubevirt-hostpath-provisioner 59m
persistentvolume/pvc-fbdb7478-fe28-47d5-b1cd-9984f4d30291 56Gi RWO Delete Bound default/example-upload-dv kubevirt-hostpath-provisioner 58m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/centos9-dv Bound pvc-a1e5125d-0c7d-459e-8163-bfe9dec202ad 56Gi RWO kubevirt-hostpath-provisioner 67m
persistentvolumeclaim/centos9instance1 Bound pvc-a579276e-0e25-42c1-b8f8-ebcac5ec1022 56Gi RWO kubevirt-hostpath-provisioner 59m
persistentvolumeclaim/example-upload-dv Bound pvc-fbdb7478-fe28-47d5-b1cd-9984f4d30291 56Gi RWO kubevirt-hostpath-provisioner 58m
persistentvolumeclaim/ubuntujammy1 Bound pvc-5afb5eb5-2775-4461-acba-f34db7895063 56Gi RWO kubevirt-hostpath-provisioner 40s
NAME READY STATUS RESTARTS AGE
pod/5afb5eb5-2775-4461-acba-f34db7895063-source-pod 1/1 Running 0 34s
pod/cdi-upload-ubuntujammy1 1/1 Running 0 39s
pod/virt-launcher-centos9instance1-m4658 1/1 Running 0 52m
NAME AGE STATUS READY
virtualmachine.kubevirt.io/centos9instance1 59m Running True
virtualmachine.kubevirt.io/ubuntujammy1 41s Provisioning False
NAME AGE PHASE IP NODENAME READY
virtualmachineinstance.kubevirt.io/centos9instance1 52m Running 10.42.0.40 rocky.example.com True
You can check the OKD console for these Virtual Machines.
If done, you can delete the VMs, VMIs, and the source PVCs created for the CDI imports
[root@rocky console]# oc delete vm --all
virtualmachine.kubevirt.io "centos9instance1" deleted
virtualmachine.kubevirt.io "ubuntujammy1" deleted
[root@rocky console]# oc delete vmi --all
No resources found
[root@rocky console]# oc delete pvc --all
persistentvolumeclaim "centos9-dv" deleted
persistentvolumeclaim "example-upload-dv" deleted
Finally, you can delete the CDI resource and Operator as follows:
VERSION=v1.55.2
oc delete -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
oc delete -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
6. Use .NET to drive a Raspberry Pi Sense HAT
We will run the .NET sample to retrieve sensor values from the Sense HAT, respond to joystick input, and drive the LED matrix. The source code is on GitHub.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/dotnet
You may build the image using the Dockerfile, which uses sensehat-quickstart-1.sh to install .NET and build the SenseHat.Quickstart sample, and test it directly using podman as shown in Part 25. Now, let's run the sample in MicroShift using the prebuilt arm64v8 image “docker.io/karve/sensehat-dotnet”.
oc new-project dotnet
oc apply -f dotnet.yaml
oc -n dotnet wait deployment dotnet-deployment --for condition=Available --timeout=300s
oc logs deployment/dotnet-deployment -f
We can observe the console log output as sensor data is displayed. The LED matrix displays a yellow pixel on a field of blue. Holding the joystick in any direction moves the yellow pixel in that direction. Clicking the center joystick button causes the background to switch from blue to red.
Temperature Sensor 1: 38.2°C
Temperature Sensor 2: 37.4°C
Pressure: 1004.04 hPa
Altitude: 83.29 m
Acceleration: <-0.024108887, -0.015258789, 0.97961426> g
Angular rate: <2.8270676, 0.075187966, 0.30827066> DPS
Magnetic induction: <-0.15710449, 0.3963623, -0.51342773> gauss
Relative humidity: 38.6%
Heat index: 43.2°C
Dew point: 21.5°C
…
When we are done, we can delete the deployment
oc delete -f dotnet.yaml
7. MongoDB and Python Operator using kopf
The Kubernetes Operator Pythonic Framework (kopf) is part of the zalando-incubator GitHub repository. The project is well documented at https://kopf.readthedocs.io
We will deploy and use the MongoDB database using the image docker.io/arm64v8/mongo:4.4.18. Do not use the latest tag for the image; it will result in "WARNING: MongoDB 5.0+ requires ARMv8.2-A or higher, and your current system does not appear to implement any of the common features for that!" and fail to start. The Raspberry Pi 4 uses an ARM Cortex-A72, which is ARMv8-A.
A new PersistentVolumeClaim mongodb will use the storageClassName: kubevirt-hostpath-provisioner for the Persistent Volume. The mongodb-root-username uses the root user, with the mongodb-root-password set to a default of mongodb-password. Remember to update the selected node in mongodb-pv.yaml
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/mongodb
oc project default
vi mongodb-pv.yaml # Update the node name in the annotation
oc apply -f .
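If you want to verify the password stored in the secret created above, an optional check (assuming the secret name mongodb-secret and the key mongodb-root-password from this sample's yaml files):
oc get secret mongodb-secret -o jsonpath='{.data.mongodb-root-password}' | base64 -d; echo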
Now we will build and install the Operator, and test it
cd ~/microshift/raspberry-pi/python-mongodb-writer-operator
# Build the operator image
podman build -t docker.io/karve/mongodb-writer:latest .
# Optionally push the image to registry
podman login docker.io
podman push docker.io/karve/mongodb-writer:latest
# Install the Operator
oc apply -f kubernetes/
# Create sample entries
oc apply -f sample.yaml -f sample2.yaml
# Modify
cat sample2.yaml | sed "s/age: 24/age: 26/g" | sed "s/country: .*/country: germany/g" | oc apply -f -
# Delete
oc delete -f sample.yaml -f sample2.yaml
The session below shows the output for creating two students, updating a student, and finally deleting both students.
Client - Install the operator
[root@rocky python-mongodb-writer-operator]# oc apply -f kubernetes/
clusterrolebinding.rbac.authorization.k8s.io/admin-access created
customresourcedefinition.apiextensions.k8s.io/mongodb-writers.demo.karve.com created
MongoDB - Log in to mongodb; the school db does not exist yet
[root@rocky lvm-operator]# oc exec -it statefulset/mongodb -- bash
groups: cannot find name for group ID 1001
1001@mongodb-0:/$ mongo admin --host mongodb.default.svc.cluster.local:27017 --authenticationDatabase admin -u root -p mongodb-password
MongoDB shell version v4.4.18
connecting to: mongodb://mongodb.default.svc.cluster.local:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("a44fa5e8-054d-4687-a5ff-803df70a4814") }
MongoDB server version: 4.4.18
Welcome to the MongoDB shell.
…
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
Operator Logs - Operator is started
[root@rocky python-mongodb-writer-operator]# oc logs deployment/mongodb-writer-operator -f
/usr/local/lib/python3.9/site-packages/kopf/_core/reactor/running.py:170: FutureWarning: Absence of either namespaces or cluster-wide flag will become an error soon. For now, switching to the cluster-wide mode for backward compatibility.
warnings.warn("Absence of either namespaces or cluster-wide flag will become an error soon."
[2022-12-15 17:42:57,294] kopf._core.engines.a [INFO ] Initial authentication has been initiated.
[2022-12-15 17:42:57,396] kopf.activities.auth [INFO ] Activity 'login_via_client' succeeded.
[2022-12-15 17:42:57,449] kopf._core.engines.a [INFO ] Initial authentication has finished.
Client - Add students using the two sample yaml files
[root@rocky python-mongodb-writer-operator]# oc apply -f sample.yaml -f sample2.yaml
mongodbwriter.demo.karve.com/sample-student created
mongodbwriter.demo.karve.com/sample-student2 created
Operator Logs - Shows two creates
[2022-12-15 17:48:54,092] kopf.objects [INFO ] [default/sample-student2] Handler 'create_fn' succeeded.
[2022-12-15 17:48:54,100] kopf.objects [INFO ] [default/sample-student2] Creation is processed: 1 succeeded; 0 failed.
[2022-12-15 17:48:54,317] kopf.objects [INFO ] [default/sample-student] Handler 'create_fn' succeeded.
[2022-12-15 17:48:54,320] kopf.objects [INFO ] [default/sample-student] Creation is processed: 1 succeeded; 0 failed.
MongoDB - Shows the school db is created and the two students that were added
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
school 0.000GB
> use school
switched to db school
> db.students.find()
{ "_id" : ObjectId("639b5e05a3641c32f6568702"), "id" : "default/sample-student2", "name" : "alex2", "age" : 24, "country" : "usa" }
{ "_id" : ObjectId("639b5e05a3641c32f6568701"), "id" : "default/sample-student", "name" : "alex", "age" : 23, "country" : "canada" }
Client - Update the sample-student2
[root@rocky python-mongodb-writer-operator]# cat sample2.yaml | sed "s/age: 24/age: 26/g" | sed "s/country: .*/country: germany/g" | oc apply -f -
mongodbwriter.demo.karve.com/sample-student2 configured
Operator Logs - Shows that an update was processed
[2022-12-15 18:05:41,939] kopf.objects [INFO ] [default/sample-student2] Handler 'update_fn' succeeded.
[2022-12-15 18:05:41,945] kopf.objects [INFO ] [default/sample-student2] Updating is processed: 1 succeeded; 0 failed.
MongoDB - sample-student2 age and country is updated
> db.students.find()
{ "_id" : ObjectId("639b5e05a3641c32f6568702"), "id" : "default/sample-student2", "name" : "alex2", "age" : 26, "country" : "germany" }
{ "_id" : ObjectId("639b5e05a3641c32f6568701"), "id" : "default/sample-student", "name" : "alex", "age" : 23, "country" : "canada" }
Client - Delete the students using the two yaml files
[root@rocky python-mongodb-writer-operator]# oc delete -f sample.yaml -f sample2.yaml
mongodbwriter.demo.karve.com "sample-student" deleted
mongodbwriter.demo.karve.com "sample-student2" deleted
Operator Logs - Shows the two students successfully deleted
[2022-12-15 18:12:05,496] kopf.objects [INFO ] [default/sample-student] Handler 'delete_fn' succeeded.
[2022-12-15 18:12:05,499] kopf.objects [INFO ] [default/sample-student] Deletion is processed: 1 succeeded; 0 failed.
[2022-12-15 18:12:05,539] kopf.objects [INFO ] [default/sample-student2] Handler 'delete_fn' succeeded.
[2022-12-15 18:12:05,541] kopf.objects [INFO ] [default/sample-student2] Deletion is processed: 1 succeeded; 0 failed.
MongoDB - No students left
> db.students.find()
>
Client - Finally, delete the Operator and MongoDB
[root@rocky python-mongodb-writer-operator]# oc delete -f kubernetes/
clusterrolebinding.rbac.authorization.k8s.io "admin-access" deleted
customresourcedefinition.apiextensions.k8s.io "mongodb-writers.demo.karve.com" deleted
deployment.apps "mongodb-writer-operator" deleted
serviceaccount "mongodb-writer" deleted
[root@rocky python-mongodb-writer-operator]# cd ../mongodb
[root@rocky mongodb]# oc delete -f .
service "mongodb" deleted
secret "mongodb-secret" deleted
statefulset.apps "mongodb" deleted
persistentvolumeclaim "mongodb" deleted
serviceaccount "mongodb" deleted
Cleanup MicroShift
We can use the cleanup.sh script available on github to cleanup the pods and images. If you already cloned the microshift repo from github, you have the script in the ~/microshift/hack directory.
cd ~/microshift/hack
./cleanup.sh
Containerized MicroShift on Rocky Linux 9
We can run MicroShift within containers in two ways:
- MicroShift Containerized – The MicroShift binary runs in a Podman container, the CRI-O systemd service runs directly on the host, and data is stored in a podman volume (we can instead store it in /var/lib/microshift and /var/lib/kubelet on the host as shown in previous blogs).
- MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only.
MicroShift Containerized
If you did not already install podman, you can do it now.
dnf -y install podman
We will use a new microshift.service that runs MicroShift in a pod using the prebuilt image and a podman volume. The rest of the pods run using CRI-O on the host.
cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target crio.service
After=network-online.target crio.service
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--replace \
--sdnotify=container \
--label io.containers.autoupdate=registry \
--network=host \
--privileged \
-d \
--name microshift \
-v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
-v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
-v microshift-data:/var/lib/microshift:rw,rshared \
-v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
-v /var/log:/var/log \
-v /etc:/etc quay.io/microshift/microshift:latest
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=multi-user.target default.target
EOF
systemctl daemon-reload
systemctl enable --now crio microshift
podman ps -a
podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"
Output:
[root@rocky hack]# systemctl daemon-reload
[root@rocky hack]# systemctl enable --now crio microshift
Created symlink /etc/systemd/system/multi-user.target.wants/microshift.service → /usr/lib/systemd/system/microshift.service.
Created symlink /etc/systemd/system/default.target.wants/microshift.service → /usr/lib/systemd/system/microshift.service.
[root@rocky hack]# podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cd18c43e8239 quay.io/microshift/microshift:latest run 37 seconds ago Up 37 seconds ago microshift
[root@rocky hack]# podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
[
{
"Name": "microshift-data",
"Driver": "local",
"Mountpoint": "/var/lib/containers/storage/volumes/microshift-data/_data",
"CreatedAt": "2022-12-03T21:18:01.620435919Z",
"Labels": {},
"Scope": "local",
"Options": {},
"MountCount": 0,
"NeedsCopyUp": true
}
]
Now that microshift is started, we can run the samples shown earlier.
After we are done, we can delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.
podman stop microshift && podman volume rm microshift-data
After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.
MicroShift Containerized All-In-One
Let's stop CRI-O on the host; we will be creating an all-in-one container in podman that runs CRI-O within the container.
systemctl stop crio
systemctl disable crio
mkdir /var/hpvolumes
We will run the all-in-one microshift in podman using prebuilt images (replace the image in the podman run command below with the latest image). Use a different name for the microshift all-in-one pod (with the -h parameter for podman below) than the hostname for the Raspberry Pi 4.
sudo setsebool -P container_manage_cgroup true
podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64
Now that you know the podman command to start the microshift all-in-one, you may alternatively use the following microshift service.
cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift all-in-one
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm --replace -d --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -v /lib/modules:/lib/modules --label io.containers.autoupdate=registry -p 6443:6443 -p 80:80 quay.io/microshift/microshift-aio:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=multi-user.target default.target
EOF
systemctl daemon-reload
systemctl start microshift
Note that if port 80 is in use by haproxy from the previous run, just restart the Raspberry Pi 4. Then delete and recreate the microshift pod. We can inspect the microshift-data volume to find the path for kubeconfig.
podman volume inspect microshift-data
On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above.
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman exec -it microshift crictl ps -a"
The crio service is stopped on the Raspberry Pi, so the crictl command will not work directly on the Pi. The crictl commands will work within the microshift container in podman, as shown in the watch command above.
To run the Virtual Machine examples in the all-in-one MicroShift, we need to execute the mount with --make-shared as follows in the microshift container to prevent the “Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount” event from virt-handler.
podman exec -it microshift mount --make-shared /
We may also preload the virtual machine images using "crictl pull".
podman exec -it microshift crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
Now, we can run the samples shown earlier.
For the Virtual Machine Instance from Sample 4, after it is started, we can connect to vmi-fedora by exposing the ssh port of the Virtual Machine Instance as a NodePort service. This NodePort is within the all-in-one pod running in podman. The IP address of the all-in-one microshift podman container is 10.88.0.2. We expose target port 22 on the VM as a service on port 22, which in turn is exposed on the microshift container with the allocated NodePort 32601 as seen below. We then run and exec into a new pod called ssh-proxy, install the openssh-client on it, and ssh to port 32601 on the all-in-one microshift container. This takes us to port 22 of the VMI as shown below:
[root@rocky vmi]# virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Service vmi-fedora-ssh successfully exposed for vmi vmi-fedora
[root@rocky vmi]# oc get svc vmi-fedora-ssh
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vmi-fedora-ssh NodePort 10.43.178.223 <none> 22:32601/TCP 16s
[root@rocky vmi]# podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift
10.88.0.2
[root@rocky vmi]# oc run -i --tty ssh-proxy --rm --image=ubuntu --restart=Never -- /bin/sh -c "apt-get update;apt-get -y install openssh-client;ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@10.88.0.2 -p 32601"
If you don't see a command prompt, try pressing enter. …
Warning: Permanently added '[10.88.0.2]:32601' (ED25519) to the list of known hosts.
fedora@10.88.0.2's password:
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.80.78) 56(84) bytes of data.
64 bytes from lga34s35-in-f14.1e100.net (142.250.80.78): icmp_seq=1 ttl=117 time=5.55 ms
64 bytes from lga34s35-in-f14.1e100.net (142.250.80.78): icmp_seq=2 ttl=117 time=4.75 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.747/5.148/5.549/0.401 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.88.0.2 closed.
pod "ssh-proxy" deleted
[root@rocky vmi]#
We can install the QEMU guest agent, a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks.
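Inside the Fedora VMI, the agent can be installed and enabled like this (a hedged sketch, run from within the guest after logging in):
sudo dnf -y install qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent   # start the agent so the host can query guest info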
After you are done, you can delete the VMI, kubevirt operator and console as shown previously. Finally delete the all-in-one microshift container.
#podman stop -t 120 microshift
podman rm -f microshift && podman volume rm microshift-data
or if started using systemd, then
systemctl stop microshift
podman volume rm microshift-data
Kata Containers
Let’s install Kata.
dnf -y install wget pkg-config
# Install golang
wget https://golang.org/dl/go1.19.3.linux-arm64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.19.3.linux-arm64.tar.gz
rm -f go1.19.3.linux-arm64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
cat << 'EOF' >> /root/.bashrc
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
EOF
mkdir $GOPATH
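Verify that the Go toolchain is on the PATH before building:
go version   # expect go1.19.3 linux/arm64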
We can build and install Kata from source as shown in Part 25; however, that takes a long time. Instead, I have created a container image with precompiled binaries and kernel that we can copy from.
cd /root
id=$(podman create docker.io/karve/kata-go-directory:arm64 -- ls)
podman cp $id:kata-go-directory.tgz kata-go-directory.tgz
tar -zxvf kata-go-directory.tgz && rm -f kata-go-directory.tgz
podman rm $id
For reference, I used the following Dockerfile to create this image after I built the binaries. You can skip directly to the Install kata-runtime section below to install without building from source.
Dockerfile
FROM scratch
WORKDIR /
COPY kata-go-directory.tgz kata-go-directory.tgz
Build the kata-go-directory:arm64 image
cd /root
tar -czf kata-go-directory.tgz go
podman build -t docker.io/karve/kata-go-directory:arm64 .
podman push docker.io/karve/kata-go-directory:arm64
Install kata-runtime
cd /root/go/src/github.com/kata-containers/kata-containers/src/runtime/
make install
Output:
[root@rocky runtime]# make install
kata-runtime - version 3.1.0-alpha0 (commit 9bde32daa102368b9dbc27a6c03ed2e3e87d65e1)
• architecture:
Host:
golang:
Build: arm64
• golang:
go version go1.19.3 linux/arm64
• hypervisors:
Default: qemu
Known: acrn cloud-hypervisor firecracker qemu
Available for this architecture: cloud-hypervisor firecracker qemu
• Summary:
destination install path (DESTDIR) : /
binary installation path (BINDIR) : /usr/local/bin
binaries to install :
- /usr/local/bin/kata-runtime
- /usr/local/bin/containerd-shim-kata-v2
- /usr/local/bin/kata-monitor
- /usr/local/bin/data/kata-collect-data.sh
configs to install (CONFIGS) :
- config/configuration-clh.toml
- config/configuration-fc.toml
- config/configuration-qemu.toml
install paths (CONFIG_PATHS) :
- /usr/share/defaults/kata-containers/configuration-clh.toml
- /usr/share/defaults/kata-containers/configuration-fc.toml
- /usr/share/defaults/kata-containers/configuration-qemu.toml
alternate config paths (SYSCONFIG_PATHS) :
- /etc/kata-containers/configuration-clh.toml
- /etc/kata-containers/configuration-fc.toml
- /etc/kata-containers/configuration-qemu.toml
default install path for qemu (CONFIG_PATH) : /usr/share/defaults/kata-containers/configuration.toml
default alternate config path (SYSCONFIG) : /etc/kata-containers/configuration.toml
qemu hypervisor path (QEMUPATH) : /usr/bin/qemu-system-aarch64
cloud-hypervisor hypervisor path (CLHPATH) : /usr/bin/cloud-hypervisor
firecracker hypervisor path (FCPATH) : /usr/bin/firecracker
assets path (PKGDATADIR) : /usr/share/kata-containers
shim path (PKGLIBEXECDIR) : /usr/libexec/kata-containers
INSTALL install-scripts
INSTALL install-completions
INSTALL install-configs
INSTALL install-configs
INSTALL install-bin
INSTALL install-containerd-shim-v2
INSTALL install-monitor
Check hardware requirements
kata-runtime check --verbose # This will return error because vmlinux.container does not exist yet
which kata-runtime
kata-runtime --version
containerd-shim-kata-v2 --version
Output:
[root@rocky runtime]# kata-runtime check --verbose # This will return error because vmlinux.container does not exist yet
ERRO[0000] /usr/share/defaults/kata-containers/configuration-qemu.toml: file /usr/bin/qemu-system-aarch64 does not exist arch=arm64 name=kata-runtime pid=108172 source=runtime
/usr/share/defaults/kata-containers/configuration-qemu.toml: file /usr/bin/qemu-system-aarch64 does not exist
[root@rocky runtime]# which kata-runtime
/usr/local/bin/kata-runtime
[root@rocky runtime]# kata-runtime --version
kata-runtime : 3.1.0-alpha0
commit : 9bde32daa102368b9dbc27a6c03ed2e3e87d65e1
OCI specs: 1.0.2-dev
[root@rocky runtime]# containerd-shim-kata-v2 --version
Kata Containers containerd shim: id: "io.containerd.kata.v2", version: 3.1.0-alpha0, commit: 9bde32daa102368b9dbc27a6c03ed2e3e87d65e1
Configure to use initrd image
Since Kata Containers can run with either an initrd image or a rootfs image, we will install both images but initially use the initrd. We will switch to the rootfs image in a later section. Make sure you add initrd = /usr/share/kata-containers/kata-containers-initrd.img in the configuration file /usr/share/defaults/kata-containers/configuration.toml and comment out the default image line with the following:
sudo mkdir -p /etc/kata-containers/
sudo install -o root -g root -m 0640 /usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers
sudo sed -i 's/^\(image =.*\)/# \1/g' /etc/kata-containers/configuration.toml
sudo sed -i 's/^# \(initrd =.*\)/\1/g' /etc/kata-containers/configuration.toml
The /etc/kata-containers/configuration.toml now looks as follows:
# image = "/usr/share/kata-containers/kata-containers.img"
initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
Exactly one of the initrd and image options must be set in the Kata runtime config file, not both. The main difference between the options is that the initrd (10MB+) is significantly smaller than the rootfs image (100MB+).
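You can confirm that only one of the two options is active after the sed edits above; at this point only the initrd line should be printed:
grep -E "^(image|initrd) =" /etc/kata-containers/configuration.toml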
Install the initrd image
cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/initrd-builder
commit=$(git log --format=%h -1 HEAD)
date=$(date +%Y-%m-%d-%T.%N%z)
image="kata-containers-initrd-${date}-${commit}"
sudo install -o root -g root -m 0640 -D kata-containers-initrd.img "/usr/share/kata-containers/${image}"
(cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers-initrd.img)
Install the rootfs image (we will use this later)
cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/image-builder
commit=$(git log --format=%h -1 HEAD)
date=$(date +%Y-%m-%d-%T.%N%z)
image="kata-containers-${date}-${commit}"
sudo install -o root -g root -m 0640 -D kata-containers.img "/usr/share/kata-containers/${image}"
(cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers.img)
Install Kata Containers Kernel
yum -y install flex bison bc
go env -w GO111MODULE=auto
cd $GOPATH/src/github.com/kata-containers/packaging/kernel
Install the kernel to the default Kata containers path (/usr/share/kata-containers/)
./build-kernel.sh install
Output:
[root@rocky image-builder]# go env -w GO111MODULE=auto
[root@rocky image-builder]# cd $GOPATH/src/github.com/kata-containers/packaging/kernel
[root@rocky kernel]# ./build-kernel.sh install
package github.com/kata-containers/tests: no Go files in /root/go/src/github.com/kata-containers/tests
~/go/src/github.com/kata-containers/tests ~/go/src/github.com/kata-containers/packaging/kernel
~/go/src/github.com/kata-containers/packaging/kernel
INFO: Config version: 92
INFO: Kernel version: 5.4.60
HOSTCC scripts/dtc/dtc.o
HOSTCC scripts/dtc/flattree.o
HOSTCC scripts/dtc/fstree.o
HOSTCC scripts/dtc/data.o
HOSTCC scripts/dtc/livetree.o
HOSTCC scripts/dtc/treesource.o
HOSTCC scripts/dtc/srcpos.o
HOSTCC scripts/dtc/checks.o
HOSTCC scripts/dtc/util.o
HOSTCC scripts/dtc/dtc-lexer.lex.o
HOSTCC scripts/dtc/dtc-parser.tab.o
HOSTLD scripts/dtc/dtc
HOSTCC scripts/kallsyms
HOSTCC scripts/mod/modpost.o
HOSTCC scripts/mod/sumversion.o
HOSTLD scripts/mod/modpost
CALL scripts/atomic/check-atomics.sh
CALL scripts/checksyscalls.sh
HOSTCC usr/gen_init_cpio
CHK include/generated/compile.h
UPD include/generated/compile.h
CC init/version.o
GEN usr/initramfs_data.cpio
AS usr/initramfs_data.o
AR init/built-in.a
AR usr/built-in.a
GEN .version
CHK include/generated/compile.h
UPD include/generated/compile.h
CC init/version.o
AR init/built-in.a
LD vmlinux.o
MODPOST vmlinux.o
MODINFO modules.builtin.modinfo
LD .tmp_vmlinux.kallsyms1
KSYM .tmp_vmlinux.kallsyms1.o
LD .tmp_vmlinux.kallsyms2
KSYM .tmp_vmlinux.kallsyms2.o
LD vmlinux
SORTEX vmlinux
SYSMAP System.map
OBJCOPY arch/arm64/boot/Image
GZIP arch/arm64/boot/Image.gz
lrwxrwxrwx. 1 root root 17 Dec 3 22:19 /usr/share/kata-containers/vmlinux.container -> vmlinux-5.4.60-92
lrwxrwxrwx. 1 root root 17 Dec 3 22:19 /usr/share/kata-containers/vmlinuz.container -> vmlinuz-5.4.60-92
The /etc/kata-containers/configuration.toml has the following:
# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/libexec/virtiofsd"
Output:
[root@rocky kernel]# cat /etc/kata-containers/configuration.toml | grep virtio_fs_daemon
virtio_fs_daemon = "/usr/libexec/virtiofsd"
valid_virtio_fs_daemon_paths = ["/usr/libexec/virtiofsd"]
Check the output of kata-runtime; it gives an error:
[root@rocky kernel]# kata-runtime check --verbose
ERRO[0000] /etc/kata-containers/configuration.toml: file /usr/bin/qemu-system-aarch64 does not exist arch=arm64 name=kata-runtime pid=113821 source=runtime
/etc/kata-containers/configuration.toml: file /usr/bin/qemu-system-aarch64 does not exist
Let’s fix this with:
ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-aarch64
Output:
[root@rocky kernel]# ls -las /usr/libexec/qemu-kvm /usr/bin/qemu-system-aarch64
0 lrwxrwxrwx. 1 root root 21 Dec 3 22:21 /usr/bin/qemu-system-aarch64 -> /usr/libexec/qemu-kvm
12068 -rwxr-xr-x. 1 root root 12354504 Nov 16 20:17 /usr/libexec/qemu-kvm
Check the hypervisor.qemu section in configuration.toml:
[root@rocky kernel]# cat /etc/kata-containers/configuration.toml | awk -v RS= '/\[hypervisor.qemu\]/'
[hypervisor.qemu]
path = "/usr/bin/qemu-system-aarch64"
kernel = "/usr/share/kata-containers/vmlinux.container"
# image = "/usr/share/kata-containers/kata-containers.img"
initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
machine_type = "virt"
Check the initrd image (kata-containers-initrd.img), the rootfs image (kata-containers.img), and the kernel in the /usr/share/kata-containers directory:
[root@rocky kernel]# ls -las /usr/share/kata-containers
total 171696
4 drwxr-xr-x. 2 root root 4096 Dec 3 22:19 .
4 drwxr-xr-x. 131 root root 4096 Dec 3 22:18 ..
68 -rw-r--r--. 1 root root 68536 Dec 3 22:19 config-5.4.60
131072 -rw-r-----. 1 root root 134217728 Dec 3 22:17 kata-containers-2022-12-03-22:17:33.941445390+0000-9bde32daa
4 lrwxrwxrwx. 1 root root 60 Dec 3 22:17 kata-containers.img -> kata-containers-2022-12-03-22:17:33.941445390+0000-9bde32daa
26144 -rw-r-----. 1 root root 26770481 Dec 3 22:17 kata-containers-initrd-2022-12-03-22:17:16.299283236+0000-9bde32daa
4 lrwxrwxrwx. 1 root root 67 Dec 3 22:17 kata-containers-initrd.img -> kata-containers-initrd-2022-12-03-22:17:16.299283236+0000-9bde32daa
9820 -rw-r--r--. 1 root root 10246656 Dec 3 22:19 vmlinux-5.4.60-92
0 lrwxrwxrwx. 1 root root 17 Dec 3 22:19 vmlinux.container -> vmlinux-5.4.60-92
4576 -rw-r--r--. 1 root root 4684125 Dec 3 22:19 vmlinuz-5.4.60-92
0 lrwxrwxrwx. 1 root root 17 Dec 3 22:19 vmlinuz.container -> vmlinuz-5.4.60-92
Create the file /etc/crio/crio.conf.d/50-kata
cat > /etc/crio/crio.conf.d/50-kata << EOF
[crio.runtime.runtimes.kata]
runtime_path = "/usr/local/bin/containerd-shim-kata-v2"
runtime_root = "/run/vc"
runtime_type = "vm"
privileged_without_host_devices = true
EOF
We will run Kata using the non-containerized approach for MicroShift. Let's clean up.
cd ~/microshift/hack
./cleanup.sh
Replace microshift.service to allow non-containerized MicroShift. Restart crio and start microshift.
cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service
[Service]
WorkingDirectory=/usr/bin/
ExecStart=/usr/bin/microshift run
Restart=always
User=root
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl restart crio
systemctl start microshift
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
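Wait for the node to become Ready and the base pods to come up before applying workloads:
watch "oc get nodes;oc get pods -A"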
Running some Kata samples
After MicroShift is started, you can apply the kata runtimeclass and run the samples.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/kata/
oc apply -f kata-runtimeclass.yaml
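The kata-runtimeclass.yaml maps pods that request runtimeClassName: kata to the kata handler defined in the CRI-O drop-in above. If you prefer not to clone the repository, an equivalent RuntimeClass can be applied inline; this is a minimal sketch assuming the handler name kata:
cat << EOF | oc apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
EOF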
We execute the runall-balena-dynamic.sh for Rocky Linux 9 after updating the deployment yamls to use runtimeClassName: kata.
cd ~/microshift/raspberry-pi/influxdb/
Update influxdb-deployment.yaml, telegraf-deployment.yaml, and grafana/grafana-deployment.yaml to use runtimeClassName: kata. With Kata Containers, we do not get direct access to host devices, so we run the measure container as a runc pod. In runc, '--privileged' for a container means all the /dev/* block devices from the host are mounted into the container, which allows the privileged container to mount any block device from the host.
sed -i '/^ spec:/a \ \ \ \ \ \ runtimeClassName: kata' influxdb-deployment.yaml telegraf-deployment.yaml grafana/grafana-deployment.yaml
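Verify that the runtimeClassName was added under the pod template spec in each file:
grep -n runtimeClassName influxdb-deployment.yaml telegraf-deployment.yaml grafana/grafana-deployment.yaml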
Now, get the nodename
[root@rocky influxdb]# oc get nodes
NAME STATUS ROLES AGE VERSION
rocky.example.com Ready <none> 2m51s v1.21.0
Replace the annotation kubevirt.io/provisionOnNode with the above nodename rocky.example.com and execute the runall-balena-dynamic.sh. This will create a new project influxdb.
nodename=rocky.example.com
sed -i "s|kubevirt.io/provisionOnNode:.*| kubevirt.io/provisionOnNode: $nodename|" influxdb-data-dynamic.yaml
sed -i "s| kubevirt.io/provisionOnNode:.*| kubevirt.io/provisionOnNode: $nodename|" grafana/grafana-data-dynamic.yaml
./runall-balena-dynamic.sh
Let’s watch the stats (CPU%, Memory, Disk and Inodes) of the kata container pods:
watch "oc get nodes;oc get pods -A;crictl stats -a"
Output:
NAME STATUS ROLES AGE VERSION
rocky.example.com Ready <none> 11m v1.21.0
NAMESPACE NAME READY STATUS RESTARTS AGE
influxdb grafana-855ffb48d8-f98m4 1/1 Running 0 6m1s
influxdb influxdb-deployment-6d898b7b7b-97kvs 1/1 Running 0 7m33s
influxdb measure-deployment-58cddb5745-kjllf 1/1 Running 0 7m
influxdb telegraf-deployment-d746f5c6-4f5p6 1/1 Running 0 6m25s
kube-system kube-flannel-ds-cqt7s 1/1 Running 0 11m
kubevirt-hostpath-provisioner kubevirt-hostpath-provisioner-79gcp 1/1 Running 0 11m
openshift-dns dns-default-pnkj9 2/2 Running 0 11m
openshift-dns node-resolver-zvxc9 1/1 Running 0 11m
openshift-ingress router-default-85bcfdd948-82c2n 1/1 Running 0 11m
openshift-service-ca service-ca-7764c85869-6hwqs 1/1 Running 0 11m
CONTAINER CPU % MEM DISK INODES
024876ab7fd0c 0.00 0B 12B 18
07aa8185bd777 1.12 18.01MB 265B 11
369f7f8b0bb83 0.00 0B 6.969kB 11
4d7af43f87952 0.00 0B 12.1kB 13
60d06eb552b6f 0.00 0B 138B 15
67c7504802208 1.11 11.64MB 186kB 11
92cdfaed40a59 0.00 0B 0B 3
a4f374ac2ef7a 0.00 0B 13.29kB 22
baa28133512f4 0.00 0B 12B 19
dd96b71045094 0.02 24.35MB 4.026MB 70
ffab19b66893b 0.00 0B 0B 4
We can look at the RUNTIME_CLASS using custom columns:
oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName,IP:.status.podIP,IMAGE:.status.containerStatuses[].image -A
[root@rocky influxdb]# oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName,IP:.status.podIP,IMAGE:.status.containerStatuses[].image -A
NAME STATUS RUNTIME_CLASS IP IMAGE
grafana-855ffb48d8-f98m4 Running kata 10.42.0.42 docker.io/grafana/grafana:5.4.3
influxdb-deployment-6d898b7b7b-97kvs Running kata 10.42.0.39 docker.io/library/influxdb:1.7.4
measure-deployment-58cddb5745-kjllf Running <none> 10.42.0.40 docker.io/karve/measure:latest
telegraf-deployment-d746f5c6-4f5p6 Running kata 10.42.0.41 docker.io/library/telegraf:1.10.0
kube-flannel-ds-cqt7s Running <none> 192.168.1.209 quay.io/microshift/flannel:4.8.0-0.okd-2021-10-10-030117
kubevirt-hostpath-provisioner-79gcp Running <none> 10.42.0.37 quay.io/microshift/hostpath-provisioner:4.8.0-0.okd-2021-10-10-030117
dns-default-pnkj9 Running <none> 10.42.0.38 quay.io/microshift/coredns:4.8.0-0.okd-2021-10-10-030117
node-resolver-zvxc9 Running <none> 192.168.1.209 quay.io/microshift/cli:4.8.0-0.okd-2021-10-10-030117
router-default-85bcfdd948-82c2n Running <none> 192.168.1.209 quay.io/microshift/haproxy-router:4.8.0-0.okd-2021-10-10-030117
service-ca-7764c85869-6hwqs Running <none> 10.42.0.36 quay.io/microshift/service-ca-operator:4.8.0-0.okd-2021-10-10-030117
Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You may change the password on first login or click skip. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.
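For example, on your laptop, with the Raspberry Pi address used earlier in this article:
echo "192.168.1.209 grafana-service-influxdb.cluster.local" | sudo tee -a /etc/hosts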
Finally, after you are done working with this sample, you can run the deleteall-balena-dynamic.sh
cd ~/microshift/raspberry-pi/influxdb/
./deleteall-balena-dynamic.sh
Deleting the persistent volume claims automatically deletes the persistent volumes.
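You can confirm that the claims and volumes are gone; the influxdb persistent volumes should no longer be listed:
oc get pvc -n influxdb
oc get pv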
Configure to use the rootfs image
We have been using the initrd image when running the samples above. Now let's switch to the rootfs image by changing the following lines in /etc/kata-containers/configuration.toml:
vi /etc/kata-containers/configuration.toml
Replace as shown below:
image = "/usr/share/kata-containers/kata-containers.img"
#initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
Also disable the image nvdimm by setting the following:
disable_image_nvdimm = true # Default is false
Restart crio and test with the kata-alpine sample
systemctl restart crio
cd ~/microshift/raspberry-pi/kata/
oc project default
oc apply -f kata-alpine.yaml
Output:
[root@rocky kata]# oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName,IP:.status.podIP,IMAGE:.status.containerStatuses[].image -n default
NAME STATUS RUNTIME_CLASS IP IMAGE
kata-alpine Running kata 10.42.0.43 docker.io/karve/alpine-sshclient:arm64
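For reference, the pod spec in kata-alpine.yaml is essentially a single container pinned to the kata runtime class. A minimal equivalent is sketched below, with the pod name and image taken from the output above; the sleep command is an assumption, so check the file in the repository for the exact spec.
apiVersion: v1
kind: Pod
metadata:
  name: kata-alpine
spec:
  runtimeClassName: kata
  containers:
  - name: kata-alpine
    image: docker.io/karve/alpine-sshclient:arm64
    command: ["sleep", "infinity"]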
When done, you can delete the pod
oc delete -f kata-alpine.yaml
We can also run MicroShift Containerized as shown in Part 18 and execute the Jupyter Notebook samples for Digit Recognition, Object Detection and License Plate Recognition with Kata containers as shown in Part 23.
Error Messages in MicroShift Logs
Message: failed to fetch hugetlb info
The following journalctl command will continuously show warning messages containing "failed to fetch hugetlb info". The default kernel for Rocky Linux on the Raspberry Pi 4 does not support HugeTLB hugepages.
journalctl -u microshift -f
Output:
...
Dec 16 15:41:52 rocky.example.com microshift[45148]: W1216 15:41:52.314425 45148 container.go:586] Failed to update stats for container "/system.slice/crio.service": error while statting cgroup v2: [open /sys/kernel/mm/hugepages: no such file or directory
Dec 16 15:41:52 rocky.example.com microshift[45148]: failed to fetch hugetlb info
Dec 16 15:41:52 rocky.example.com microshift[45148]: github.com/opencontainers/runc/libcontainer/cgroups/fs2.statHugeTlb
Dec 16 15:41:52 rocky.example.com microshift[45148]: /builddir/build/BUILD/microshift-fa4bc871c7a0e20d5011fbf84cda21e5dbfad11f/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs2/hugetlb.go:35
...
To remove these messages, we can recompile the microshift binary using the changes from hugetlb.go.
dnf -y install wget pkg-config
# Install golang 1.17.2 (Do not use 1.18.x or 1.19.x)
wget https://golang.org/dl/go1.17.2.linux-arm64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.17.2.linux-arm64.tar.gz
rm -f go1.17.2.linux-arm64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
cat << EOF >> /root/.bashrc
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
EOF
mkdir $GOPATH
git clone https://github.com/thinkahead/microshift.git
cd microshift
Edit the file vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs2/hugetlb.go and remove the return on err.
func statHugeTlb(dirPath string, stats *cgroups.Stats) error {
hugePageSizes, _ := cgroups.GetHugePageSize()
//hugePageSizes, err := cgroups.GetHugePageSize()
//if err != nil {
// return errors.Wrap(err, "failed to fetch hugetlb info")
//}
hugetlbStats := cgroups.HugetlbStats{}
Build and replace the microshift binary. Restart MicroShift.
make
# Check which binary the /usr/lib/systemd/system/microshift.service points to
mv microshift /usr/bin/.
#mv microshift /usr/local/bin/.
restorecon -R -v /usr/bin/microshift # See below
systemctl restart microshift
Alternatively, you can use the prebuilt binary from the microshift image that contains the above fix.
[root@rocky hack]# id=$(podman create docker.io/karve/microshift:arm64)
Trying to pull docker.io/karve/microshift:arm64...
Getting image source signatures
Copying blob 7d5d04141937 done
Copying blob 5d1d750e1695 done
Copying blob 3747df4fb07d done
Copying blob 209d1a1affea done
Copying blob 92cc81bd9f3b done
Copying config b21a0a79b2 done
Writing manifest to image destination
Storing signatures
[root@rocky hack]# podman cp $id:/usr/bin/microshift /usr/bin/microshift
[root@rocky hack]# restorecon /usr/bin/microshift
[root@rocky hack]# systemctl start microshift
[root@rocky hack]# podman rm $id
3c3a71f3d3725723678ffb44d0a8b7796b24dad35ed50f0a86cfe4d4c93ede7b
Conclusion
In this Part 28, we saw multiple options to run MicroShift on the Raspberry Pi 4 with Rocky Linux 9 (64 bit). We used dynamic persistent volumes to install InfluxDB/Telegraf/Grafana with a dashboard to show Sense HAT sensor data. We ran samples that used the Sense HAT and USB camera, and worked with a sample that sent pictures and web socket messages to Node Red when a person was detected. We installed the OKD Web Console, saw how to connect to a Virtual Machine Instance using KubeVirt, and used the Containerized Data Importer (CDI) on MicroShift with Rocky Linux. We used .NET to drive a Raspberry Pi Sense HAT. Finally, we installed and configured Kata Containers to run with MicroShift and ran samples that use the Kata runtime.
Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.
References
#MicroShift #Openshift #containers #crio #Edge #node-red #raspberry-pi #centos #mongodb