MicroShift – Part 14: Raspberry Pi 4 with Rocky Linux 8.5

By Alexei Karve posted Wed April 27, 2022 06:04 PM

  

MicroShift and KubeVirt on Raspberry Pi 4 with Rocky Linux 8.5 Green Obsidian (64 bit)

Introduction

MicroShift is a research project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit) and further in Part 9, we looked at Virtualization with MicroShift on Raspberry Pi 4 with Raspberry Pi OS (64 bit). In Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with the CentOS 8 Stream (64 bit). In Part 6, we deployed MicroShift on the Raspberry Pi 4 with Ubuntu 20.04 (64 bit) and in Part 13 with Ubuntu 22.04 (64 bit). In Part 8, we looked at the All-In-One install of MicroShift on balenaOS. In Part 10, Part 11, and Part 12, we deployed MicroShift and KubeVirt on Fedora IoT, Fedora Server and Fedora CoreOS respectively. In this Part 14, we will set up and deploy MicroShift on Rocky Linux. We will run an object detection sample and send messages to Node Red installed on MicroShift. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift.

Rocky Linux is a community-maintained and freely available enterprise Linux distribution. It is managed by the Rocky Enterprise Software Foundation (RESF), a Public Benefit Corporation (PBC). Red Hat discontinued development of CentOS, which was a downstream version of Red Hat Enterprise Linux, in favor of a newer upstream development variant of that operating system known as "CentOS Stream". Rocky Linux is intended to be a downstream, complete binary-compatible release built from the Red Hat Enterprise Linux operating system source code.

Setting up the Raspberry Pi 4 with Rocky Linux 8.5 (64 bit)

Run the following steps to download the Rocky Linux image and set up the Raspberry Pi 4:

  1. Download the Rocky Linux image. You may alternatively use the path “/8.5/rockyrpi/aarch64/images/RockyRpi_8.5_20211116.img.xz” within one of the mirrors.
  2. Write the image to a microSDXC card using balenaEtcher or the Raspberry Pi Imager
  3. Optionally, have a keyboard and monitor connected to the Raspberry Pi 4
  4. Insert the microSDXC card into the Raspberry Pi 4 and power it on
  5. Find the Ethernet DHCP IP address of your Raspberry Pi 4 by running nmap on your MacBook with your subnet
$ sudo nmap -sn 192.168.1.0/24
Nmap scan report for 192.168.1.209
Host is up (0.0043s latency).
MAC Address: E4:5F:01:2E:D8:95 (Raspberry Pi Trading)
  6. Log in using the keyboard attached to the Raspberry Pi 4, or ssh to the Ethernet IP address above with rocky/rockylinux.
$ ssh rocky@192.168.1.209
The authenticity of host '192.168.1.209 (192.168.1.209)' can't be established.
ED25519 key fingerprint is SHA256:JSBaBmEwJ0wuuBRqp/90o+0XjrZSbROkewjK/kl+cO4.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.1.209' (ED25519) to the list of known hosts.
rocky@192.168.1.209's password:
[rocky@localhost ~]$ sudo su -

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for rocky:
[root@localhost ~]#
  7. Extend the disk
rootfs-expand
  8. Optionally, enable Wi-Fi
nmcli device wifi list # Note your ssid
nmcli device wifi connect $ssid --ask
  9. Check the release
cat /etc/os-release

[root@rocky ~]# cat /etc/os-release
NAME="Rocky Linux"
VERSION="8.5 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.5"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.5 (Green Obsidian)"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:rocky:rocky:8:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky Linux"
ROCKY_SUPPORT_PRODUCT_VERSION="8"
  10. Set the hostname with a domain and add the IPv4 address to /etc/hosts
hostnamectl set-hostname rocky.example.com
echo $ipaddress rocky.example.com rocky >> /etc/hosts
  11. Update the cgroup kernel parameters - concatenate the following onto the end of the existing line (do not add a new line) in /boot/cmdline.txt
 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. Cgroups are a key component of containers because there are often multiple processes running in a container that you need to control together. In MicroShift, cgroups are used to implement resource requests and limits and corresponding QoS classes at the pod level.
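For instance, once MicroShift is running (with KUBECONFIG exported as shown later in this blog), you can see how a pod’s resource limits surface as cgroup settings. The following is a minimal illustrative sketch (not one of the samples used below); a CPU limit of 500m translates to cpu.cfs_quota_us=50000 against the default cpu.cfs_period_us=100000, and requests lower than limits give the pod the Burstable QoS class.

cat << EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cgroup-demo
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:latest
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
      limits:
        cpu: 500m     # enforced via the cpu cgroup quota
        memory: 128Mi # enforced via the memory cgroup limit
EOF
oc get pod cgroup-demo -o jsonpath='{.status.qosClass}{"\n"}' # Burstable
oc delete pod cgroup-demo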

Install the updates and reboot
dnf -y update
reboot
Verify
ssh rocky@$ipaddress
sudo su -
mount | grep cgroup
cat /proc/cgroups | column -t # Check that memory and cpuset are present
ls -l /sys/fs/cgroup/cpu/cpu.cfs_quota_us # This needs to be present for MicroShift to work
Output:
[root@rocky ~]# mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
[root@rocky ~]# cat /proc/cgroups | column -t # Check that memory and cpuset are present
#subsys_name  hierarchy  num_cgroups  enabled
cpuset        8          1            1
cpu           7          1            1
cpuacct       7          1            1
blkio         9          61           1
memory        10         91           1
devices       2          61           1
freezer       5          1            1
net_cls       6          1            1
perf_event    3          1            1
net_prio      6          1            1
pids          4          67           1
[root@rocky ~]# ls -l /sys/fs/cgroup/cpu/cpu.cfs_quota_us # This needs to be present
-rw-r--r--. 1 root root 0 Apr 26 09:24 /sys/fs/cgroup/cpu/cpu.cfs_quota_us

Install sense_hat and RTIMULib on Rocky Linux 8.5

The Sense HAT is an add-on board for the Raspberry Pi. The Sense HAT has an 8×8 RGB LED matrix, a five-button joystick and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, Humidity. If you have the Sense HAT attached, install the libraries.

Install sensehat

dnf -y install zlib zlib-devel libjpeg-devel gcc gcc-c++ i2c-tools python3-devel python3 python3-pip cmake
pip3 install Cython Pillow numpy sense_hat

Install the RTIMULib. This is required to use the SenseHat.

dnf -y install git
git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig

# Optional test the sensors
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11 # Ctrl-C to break
cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10 # Ctrl-C to break

# Optional
dnf -y install qt5-qtbase-devel
cd /root/RTIMULib/Linux/RTIMULibDemoGL
qmake-qt5
make -j4
make install

Check the Sense Hat with i2cdetect

i2cdetect -y 1
[root@rocky ~]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Test the SenseHat samples for the Sense Hat's LED matrix and sensors.

cd
git clone https://github.com/thinkahead/microshift.git
cd microshift
cd raspberry-pi/sensehat-fedora-iot

# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt

# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt

# First time you run the temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py 

# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py

# Find Magnetic North
python3 compass.py
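If you just want a quick one-off reading without the sample scripts, a minimal sketch using the sense_hat library installed earlier (assumes the Sense HAT is attached):

python3 -c 'from sense_hat import SenseHat; s = SenseHat(); print("Temperature:", s.get_temperature(), "Humidity:", s.get_humidity(), "Pressure:", s.get_pressure())'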

Install MicroShift on the Raspberry Pi 4 Rocky Linux host

Set up CRI-O and MicroShift Nightly CentOS Stream 8 aarch64

rpm -qi selinux-policy # selinux-policy-3.14.3-80.el8_5.2
dnf -y install 'dnf-command(copr)'
curl https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift-nightly/repo/epel-8/group_redhat-et-microshift-nightly-epel-8.repo -o /etc/yum.repos.d/microshift-nightly-epel-8.repo
cat /etc/yum.repos.d/microshift-nightly-epel-8.repo

VERSION=1.22
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_8/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:${VERSION}/CentOS_8/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo
cat /etc/yum.repos.d/devel\:kubic\:libcontainers\:stable\:cri-o\:${VERSION}.repo

dnf -y install cri-o cri-tools microshift

Install KVM on the host and validate the Host Virtualization Setup. The virt-host-validate command validates that the host is configured in a suitable way to run the libvirt hypervisor driver qemu.

sudo dnf -y install libvirt-client libvirt-nss qemu-kvm virt-manager virt-install virt-viewer
# Works with nftables
# vi /etc/firewalld/firewalld.conf # FirewallBackend=iptables
systemctl enable --now libvirtd
virt-host-validate qemu

Check that the CNI plugins are present and start MicroShift

ls /opt/cni/bin/ # empty
ls /usr/libexec/cni # cni plugins

We will have systemd start and manage MicroShift on this rpm-based host. Refer to the microshift service for the three approaches. To check the microshift systemd service, look at the file /lib/systemd/system/microshift.service. It shows that the microshift binary is in the /usr/local/bin/ directory.

root@rocky:# cat /lib/systemd/system/microshift.service 
[Unit]
Description=MicroShift
After=crio.service

[Service]
WorkingDirectory=/usr/local/bin/
ExecStart=microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

Start microshift

systemctl enable --now crio microshift

You may read about selecting zones for your interfaces.

sudo systemctl enable firewalld --now
sudo firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=5353/udp --permanent
sudo firewall-cmd --reload
sudo systemctl enable microshift --now

Additional ports may need to be opened. For external access to run kubectl or oc commands against MicroShift, add the 6443 port:

sudo firewall-cmd --zone=public --permanent --add-port=6443/tcp

For access to services through NodePort, add the port range 30000-32767:

sudo firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp

sudo firewall-cmd --reload
firewall-cmd --list-all --zone=public
firewall-cmd --get-default-zone
#firewall-cmd --set-default-zone=public
#firewall-cmd --get-active-zones

Check the microshift and crio logs

journalctl -u microshift -f
journalctl -u crio -f

Install the oc and kubectl client

ARCH=arm64
cd /tmp
export OCP_VERSION=4.9.11 && \
    curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf oc.tar.gz && \
    rm -f oc.tar.gz && \
    install -t /usr/local/bin {kubectl,oc} && \
    rm -f {README.md,kubectl,oc}

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Samples to run on MicroShift

We will run samples that will show the use of dynamic persistent volumes, the SenseHat, and the USB camera.

1. InfluxDB/Telegraf/Grafana

The source code for this InfluxDB sample is available in github.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd microshift/raspberry-pi/influxdb

Replace the coreos node name in the persistent volume claims with rocky.example.com (our current node name)

sed -i "s|coreos|rocky.example.com|" influxdb-data-dynamic.yaml
sed -i "s|coreos|rocky.example.com|" grafana/grafana-data-dynamic.yaml

This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PV.

  annotations:
    kubevirt.io/provisionOnNode: rocky.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner 

We create and push the “measure:latest” image using the Dockerfile. If you want to run all the steps in a single command, just execute the runall-balena-dynamic.sh.

./runall-balena-dynamic.sh

The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.

Finally, after you are done working with this sample, you can run the deleteall-balena-dynamic.sh

./deleteall-balena-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes.
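You can confirm this by listing the claims and volumes after the delete script completes; a quick check (assumes the influxdb project was the only consumer of these volumes):

oc get pvc -n influxdb
oc get pv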

2. Node Red live data dashboard with SenseHat sensor charts

We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image “karve/nodered:arm64”

cd docker-custom/
# Replace docker with podman in docker-debian.sh and run it
./docker-debian.sh
podman push karve/nodered:arm64
cd ..

Deploy Node Red with a persistent volume for /data within the Node Red container

mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml
oc get routes
oc logs deployment/nodered-deployment -f

Add the IP address of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your laptop and browse to http://nodered-svc-nodered.cluster.local/
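For example, on a macOS or Linux laptop the entry can be appended as follows (192.168.1.209 is the Raspberry Pi address used in this blog; replace it with yours):

echo "192.168.1.209 nodered-svc-nodered.cluster.local" | sudo tee -a /etc/hosts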

The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, and node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 and Flow 2 have already been imported from the nodered sample. This import into Node Red can also be done manually under “Import Nodes”, followed by clicking “Deploy”.

Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see the Home dashboard by default. You can see the state of the Joystick: Up, Down, Left, Right or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT. If you selected Flow 1, you can click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED. The screenshots for these dashboards are similar to those shown in previous blogs.

SenseHat dashboard on Node Red


We can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:

cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml -n nodered
oc project default
oc delete project nodered

3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 2.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection. A pod in MicroShift will use this image to send pictures and web socket chat messages to Node Red when a person is detected.

podman build -f Dockerfile -t docker.io/karve/object-detection-raspberrypi4 .
podman push docker.io/karve/object-detection-raspberrypi4:latest

Update the env: WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection.yaml to point to your Raspberry Pi 4 IP address (192.168.1.209 shown below).

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.209

Create the deployment

oc project default
oc apply -f object-detection.yaml

We will see pictures being sent to Node Red when a person is detected and chat messages as follows at http://nodered-svc-nodered.cluster.local/chat

Pictures sent to Node Red with Chat messages


You can see the potted plant, person and book being detected in the image above. The chat messages show the bounding boxes for the objects. They also show the temperature from the SenseHat. When we are done testing, we can delete the deployment

oc delete -f object-detection.yaml

4. Running a Virtual Machine Instance on MicroShift

Find the latest version of the KubeVirt Operator.

LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST

Note that the LATEST version gave me errors. You may need to use a different version if the LATEST version continues to give the message "Still missing PID for" in the virt-handler logs when starting the VMI. I used the following version:

LATEST=20220331 # If the latest version does not work
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt

We can build the OKD Web Console (Codename: “bridge”) from the source as mentioned in Part 9. Here, we will run the “bridge” as a container image within MicroShift.

cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'
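If you want to script the lookup of the console-token-* secret instead of copying the names by hand, a hedged sketch (assumes one of the two secrets listed by the commands above is the token secret):

SECRET=$(oc get serviceaccount console -n kube-system -o jsonpath='{.secrets[*].name}' | tr ' ' '\n' | grep console-token | head -n1)
echo $SECRET
# Print the decoded bearer token if you prefer to paste the token value itself
oc get secret $SECRET -n kube-system -o jsonpath='{.data.token}' | base64 -d; echo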

Replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value https://192.168.1.209:6443 with your Raspberry Pi 4's IP address, and set the secretRef token for BRIDGE_K8S_AUTH_BEARER_TOKEN in okd-web-console-install.yaml to the console-token-* name from the two secret names above. Then apply/create the okd-web-console-install.yaml.

oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc logs deployment/console-deployment -f -n kube-system

Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/

We can optionally preload the Fedora image into CRI-O

crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.

cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods

The output for the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”. Note down the IP address of the vmi-fedora Virtual Machine Instance. Directly connect to the VMI from the Raspberry Pi 4 using fedora as the password.

Output:

[root@rocky vmi]# oc get vmi -o wide
NAME         AGE     PHASE     IP           NODENAME            READY   LIVE-MIGRATABLE   PAUSED
vmi-fedora   3m42s   Running   10.42.0.19   rocky.example.com   True    False 
[root@rocky vmi]# ssh fedora@10.42.0.19 "bash -c \"ping -c 2 google.com\""
fedora@10.42.0.19's password:
PING google.com (142.250.80.46) 56(84) bytes of data.
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=1 ttl=116 time=3.06 ms
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=2 ttl=116 time=3.09 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.063/3.075/3.087/0.012 ms
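If the VMI never reaches Running (for example, the "Still missing PID for" issue mentioned earlier), a hedged way to check the virt-handler and virt-launcher logs (the label selector is an assumption based on upstream KubeVirt conventions):

oc logs -n kubevirt -l kubevirt.io=virt-handler --tail=50
oc logs $(oc get pods -o name | grep virt-launcher-vmi-fedora) --tail=50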

A second way is to create a Pod to run the ssh client and connect to the Fedora VM from this pod. Let’s create that openssh-client pod:

oc run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client

or

oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
#oc attach sshclient -c sshclient -i -t

Then, ssh to the Fedora VMI from this openssh-client container.

Output:

[root@rocky vmi]# oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ #
/ # ssh fedora@10.42.0.19 "bash -c \"ping -c 2 google.com\""
The authenticity of host '10.42.0.19 (10.42.0.19)' can't be established.
ED25519 key fingerprint is SHA256:Yv9FBVLoOSm1r+DXzJAj9d9RRP1IeDdba5BJLs47JRI.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.19' (ED25519) to the list of known hosts.
fedora@10.42.0.19's password:
PING google.com (142.251.40.206) 56(84) bytes of data.
64 bytes from lga34s38-in-f14.1e100.net (142.251.40.206): icmp_seq=1 ttl=116 time=4.57 ms
64 bytes from lga34s38-in-f14.1e100.net (142.251.40.206): icmp_seq=2 ttl=116 time=4.27 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.271/4.421/4.571/0.150 ms
/ # exit
Session ended, resume using 'oc attach sshclient -c sshclient -i -t' command when the pod is running
pod "sshclient" deleted

A third way to connect to the VM is to use the virtctl console. You can compile your own virtctl as was described in Part 9. To simplify, we copy the virtctl arm64 binary from a prebuilt container image to /usr/local/bin on the Raspberry Pi 4.

id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id

Output:

[root@rocky vmi]# id=$(podman create docker.io/karve/kubevirt:arm64)
Trying to pull docker.io/karve/kubevirt:arm64...
Getting image source signatures
Copying blob 7065f6098427 done
Copying config 1c7a5aa443 done
Writing manifest to image destination
Storing signatures
[root@rocky vmi]# podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
[root@rocky vmi]# podman rm -v $id
df97dfdc7424b4ee01fe81d6c64e4d39bafbc1ec1d8ce87c664a55ffa7fa1ea6
[root@rocky vmi]# virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]

vmi-fedora login: fedora
Password:
Last login: Tue Apr 26 10:57:19 from 10.42.0.1
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.80.78) 56(84) bytes of data.
64 bytes from lga34s35-in-f14.1e100.net (142.250.80.78): icmp_seq=1 ttl=116 time=5.33 ms
64 bytes from lga34s35-in-f14.1e100.net (142.250.80.78): icmp_seq=2 ttl=116 time=5.02 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 5.016/5.170/5.325/0.154 ms
[fedora@vmi-fedora ~]$ # Ctrl ] to exit
[root@rocky vmi]#

When done, we can delete the VMI

oc delete -f vmi-fedora.yaml

Also delete kubevirt operator

oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml

5. Install Metrics Server

This will enable us to run the “kubectl top” and “oc adm top” commands.

dnf -y install wget jq
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml
kubectl apply -f metrics-server-components.yaml

# Wait for the metrics-server to start in the kube-system namespace
kubectl get deployment metrics-server -n kube-system
kubectl get events -n kube-system
# Wait for a couple of minutes for metrics to be collected
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
kubectl get --raw /api/v1/nodes/$(kubectl get nodes -o json | jq -r '.items[0].metadata.name')/proxy/stats/summary

watch "kubectl top nodes;kubectl top pods -A"
watch "oc adm top nodes;oc adm top pods -A"

Output:

NAME                CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
rocky.example.com   721m         18%    2589Mi          33%
NAMESPACE                       NAME                                  CPU(cores)   MEMORY(bytes)
kube-system                     console-deployment-bf5bd8498-cr4jb    1m           8Mi
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-m662s   1m           6Mi
openshift-dns                   dns-default-jxq9w                     5m           19Mi
openshift-ingress               router-default-85bcfdd948-g9jqm       3m           27Mi
openshift-service-ca            service-ca-7764c85869-l2m4t           13m          39Mi

We can delete the metrics server using

oc delete -f metrics-server-components.yaml

6. Handwritten Digit Recognition Jupyter Notebook

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook

We will create a password for accessing the jupyter notebook.

export JUPYTER_PASSWORD=mysecretpassword
pip3 install ipython_genutils
python3 jupyterpass.py

Output:

[root@rocky tensorflow-notebook]# export JUPYTER_PASSWORD=mysecretpassword
[root@rocky tensorflow-notebook]# pip3 install ipython_genutils
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Requirement already satisfied: ipython_genutils in /usr/local/lib/python3.6/site-packages
[root@rocky tensorflow-notebook]# python3 jupyterpass.py
sha1:1892355e7f9d:b32195b75cb32b0c164e9a6fcf8543e73fa80692
True

We create a pod using digit-recognition.yaml with the image jupyter/scipy-notebook, which is available for arm64. You may update the password for the notebook in the digit-recognition.yaml args using the password sha1 that was generated above.

vi digit-recognition.yaml # Update the password
oc apply -f digit-recognition.yaml

Output:

[root@rocky tensorflow-notebook]# oc apply -f digit-recognition.yaml
pod/digit-recognition created
service/digit-recognition-svc created
route.route.openshift.io/digit-recognition-route created
[root@rocky tensorflow-notebook]# oc get pods
NAME                READY   STATUS    RESTARTS   AGE
digit-recognition   1/1     Running   0          102m 
[root@rocky tensorflow-notebook]# oc get routes
NAME                      HOST/PORT                                       PATH   SERVICES                PORT   TERMINATION   WILDCARD
digit-recognition-route   digit-recognition-route-default.cluster.local          digit-recognition-svc   5001                 None

This digit-recognition pod downloads the handwritten-digits sample notebook in the initContainer, then runs the “jupyter notebook” command in the Container.
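The actual manifest is the digit-recognition.yaml from the repo above; purely to illustrate the initContainer pattern it describes, here is a minimal hedged sketch (the busybox image, the notebook URL, and the paths are illustrative, not the real manifest):

cat << EOF > init-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: fetch-notebook
    image: docker.io/library/busybox:latest
    # Download the notebook into a shared emptyDir before the main container starts
    command: ["wget", "-O", "/work/digits.ipynb", "https://raw.githubusercontent.com/thinkahead/DeveloperRecipes/master/Notebooks/digits.ipynb"]
    volumeMounts:
    - name: work
      mountPath: /work
  containers:
  - name: notebook
    image: docker.io/jupyter/scipy-notebook:latest
    # The image's default entrypoint starts the notebook server
    volumeMounts:
    - name: work
      mountPath: /home/jovyan/work
  volumes:
  - name: work
    emptyDir: {}
EOF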

Add the IP address of the Raspberry Pi 4 device for digit-recognition-route-default.cluster.local to /etc/hosts on your laptop. When the pod status shows Running, browse to http://digit-recognition-route-default.cluster.local/notebooks/work/digits.ipynb. The default password is mysecretpassword. We can run the cells in the notebook. The notebook loads a simple dataset of 8×8 gray level images of handwritten digits. We visualize the dataset in 2D and 3D using Principal Component Analysis and show that PCA can also be used as a filtering approach for noisy data.

3D PCA


Then, we train a Support Vector Machine on the digits dataset. Finally, we use cross validation to repeat the train/test split several times to get a more accurate estimate of the real test score by averaging the values found on the individual runs. You may add custom notebooks. For example, click on File->Open from URL and use https://raw.githubusercontent.com/thinkahead/DeveloperRecipes/master/Notebooks/digits.ipynb

When we are done working with the digit recognition sample notebook, we can delete it as follows:

oc delete -f digit-recognition.yaml 

Output:

[root@rocky tensorflow-notebook]# oc delete -f digit-recognition.yaml
pod "digit-recognition" deleted
service "digit-recognition-svc" deleted
route.route.openshift.io "digit-recognition-route" deleted

Cleanup MicroShift

We can use the cleanup.sh script available on github to clean up the pods and images. If you already cloned the microshift repo from github, you have the script in the ~/microshift/hack directory.

cd ~/microshift/hack
./cleanup.sh

Containerized MicroShift on Rocky Linux 8.5 (64 bit)

We can run MicroShift within containers in two ways:

  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, the CRI-O systemd service runs directly on the host, and data is stored in a podman volume (we can store it in /var/lib/microshift and /var/lib/kubelet on the host VM as shown in previous blogs).
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only.

MicroShift Containerized

If you did not already install podman, you can do it now.

dnf -y install podman

We will use a new microshift.service that runs microshift in a pod using the prebuilt image and uses a podman volume. The rest of the pods run using CRI-O on the host.

cat << EOF > /etc/systemd/system/microshift.service
[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target crio.service
After=network-online.target crio.service
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
  --cidfile=%t/%n.ctr-id \
  --cgroups=no-conmon \
  --rm \
  --replace \
  --sdnotify=container \
  --label io.containers.autoupdate=registry \
  --network=host \
  --privileged \
  -d \
  --name microshift \
  -v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
  -v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
  -v microshift-data:/var/lib/microshift:rw,rshared \
  -v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
  -v /var/log:/var/log \
  -v /etc:/etc quay.io/microshift/microshift:latest
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target
EOF


systemctl daemon-reload
systemctl enable --now crio microshift
podman ps -a
podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Output:

[root@rocky hack]# systemctl daemon-reload
[root@rocky hack]# systemctl enable --now crio microshift
Created symlink /etc/systemd/system/multi-user.target.wants/microshift.service → /usr/lib/systemd/system/microshift.service.
Created symlink /etc/systemd/system/default.target.wants/microshift.service → /usr/lib/systemd/system/microshift.service.
[root@rocky hack]# podman ps -a
CONTAINER ID  IMAGE                                 COMMAND     CREATED        STATUS            PORTS       NAMES
7049e61473ab  quay.io/microshift/microshift:latest  run         2 minutes ago  Up 2 minutes ago              microshift
[root@rocky hack]# podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
[
    {
        "Name": "microshift-data",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/microshift-data/_data",
        "CreatedAt": "2022-04-26T13:44:46.036869085Z",
        "Labels": {},
        "Scope": "local",
        "Options": {}
    }
]

Now that MicroShift is started, we can run the samples shown earlier.

After we are done, we can delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.

podman stop microshift && podman volume rm microshift-data

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.

MicroShift Containerized All-In-One

Let’s stop CRI-O on the host; we will be creating an all-in-one container in podman that runs CRI-O within the container.

systemctl stop crio
systemctl disable crio

We will run the all-in-one microshift in podman using prebuilt images (replace the image in the podman run command below with the latest image). Normally you would run the following to start the all-in-one microshift, but it does not work.

setsebool -P container_manage_cgroup true 
podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64

The containers will give errors when starting within the microshift pod. Since the “sudo setsebool -P container_manage_cgroup true” does not work, we mount the /sys/fs/cgroup into the container using -v /sys/fs/cgroup:/sys/fs/cgroup:ro. This will volume mount /sys/fs/cgroup into the container as read-only, but the subdir/mount points will be mounted in as read/write.

podman run -d --rm --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64

Now that you know the podman command to start the microshift all-in-one, you may alternatively use the following microshift service.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift all-in-one
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm --replace -d --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -v /lib/modules:/lib/modules --label io.containers.autoupdate=registry -p 6443:6443 -p 80:80 quay.io/microshift/microshift-aio:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target 
EOF

systemctl daemon-reload
systemctl start microshift

Note that if port 80 is in use by haproxy from the previous run, just restart the Raspberry Pi 4. Then delete and recreate the microshift pod.
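To check what is holding port 80 before restarting, you can use ss (a quick check; haproxy from the previous crio-based run is the usual culprit here):

ss -tlnp | grep ':80 '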

We can inspect the microshift-data volume to find the path

podman volume inspect microshift-data

On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above.

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman exec -it microshift crictl ps -a"

The crio service is stopped on the Raspberry Pi, so the crictl command will not work directly on the Pi. The crictl commands will work within the microshift container in podman, as shown in the watch command above.

To run the Virtual Machine examples in the all-in-one MicroShift, we need to execute the mount with --make-shared as follows in the microshift container to prevent the “Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount” event from virt-handler.

podman exec -it microshift mount --make-shared /

We may also preload the virtual machine images using "crictl pull".

podman exec -it microshift crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now, we can run the samples shown earlier.

For the Virtual Machine Instance Sample 4, after it is started, we can connect to the vmi-fedora by exposing the ssh port for the Virtual Machine Instance as a NodePort Service. This NodePort is within the all-in-one pod that is running in podman. The IP address of the all-in-one microshift podman container is 10.88.0.4. We expose the target port 22 on the VM as a service on port 22 that is in turn exposed on the microshift container with allocated port 32422 as seen below. We run and exec into a new pod called ssh-proxy, install the openssh-client in it, and ssh to port 32422 on the all-in-one microshift container. This takes us to the VMI's port 22, as shown below and in the sketch after the output:

[root@rocky vmi]# virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Service vmi-fedora-ssh successfully exposed for vmi vmi-fedora
[root@rocky vmi]# oc get svc vmi-fedora-ssh
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
vmi-fedora-ssh   NodePort   10.43.33.195   
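A hedged sketch of the ssh-proxy steps described above (the NodePort 32422 and the container IP 10.88.0.4 are the values from this run; check yours with oc get svc vmi-fedora-ssh and podman inspect microshift):

oc run ssh-proxy --privileged --rm -ti --image=alpine -- /bin/sh
# Inside the ssh-proxy pod:
apk update && apk add --no-cache openssh-client
ssh -p 32422 fedora@10.88.0.4 # password: fedora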

After we are done, we can delete the microshift container.

podman rm -f microshift && podman volume rm microshift-data

Conclusion

In this Part 14, we saw multiple options to run MicroShift on the Raspberry Pi 4 with Rocky Linux 8.5 (64 bit). We used dynamic persistent volumes to install InfluxDB/Telegraf/Grafana with a dashboard to show SenseHat sensor data. We ran samples that used the Sense Hat/USB camera and worked with a sample that sent pictures and web socket messages to Node Red when a person was detected. We installed the OKD Web Console and saw how to connect to a Virtual Machine Instance using KubeVirt on MicroShift with Rocky Linux. Finally, we ran a Jupyter notebook in a pod. We will look into MicroShift with KubeVirt and Kata Containers on Raspberry Pi 4 with Rocky Linux 9 in Part 28. In the next Part 15, we will run MicroShift on openSUSE.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.
