MicroShift – Part 29: Raspberry Pi 4 with CentOS Stream 9

By Alexei Karve posted Sun December 18, 2022 06:18 PM

  

MicroShift with KubeVirt and Kata Containers on Raspberry Pi 4 with CentOS Stream 9

Introduction

MicroShift is a Red Hat-led open-source community project that is exploring how OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. Red Hat Device Edge delivers an enterprise-ready and supported distribution of MicroShift. Red Hat Device Edge is planned as a developer preview early next year and expected to be generally available with full support later in 2023.

Over the last 28 parts, we have worked with MicroShift on multiple distros of Linux on the Raspberry Pi 4 and the Jetson Nano. Specifically, we have used up to the 4.8.0-microshift-2022-04-20-141053 branch of MicroShift in this blog series. In Part 5, we worked with MicroShift on CentOS Stream 8. In this Part 29, we will work with MicroShift on CentOS Stream 9. We will run an object detection sample and send messages to Node Red installed on MicroShift. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift. We will also use .NET to drive a Raspberry Pi Sense HAT and run a sample Python Operator with kopf. Finally, we will set up MicroShift with the Kata Containers runtime.

CentOS Stream sits as the development platform between the Fedora Project’s leading edge operating system innovation and RHEL’s production stability. Fedora Linux is where Red Hat and the larger community do the work of fast-paced operating system innovation. That work accrues to CentOS Stream and ultimately to RHEL.

Setting up the Raspberry Pi 4 with CentOS Stream 9

Run the following steps to download the CentOS Stream 9 image and setup the Raspberry Pi 4 

  1. Download the CentOS Stream 9 image from https://people.centos.org/pgreco/CentOS-Userland-9-stream-aarch64-RaspberryPI-Minimal-4/. The “Minimal-4” in the name indicates that the image has support for the Raspberry Pi 4’s hardware.
  2. Write the image to a MicroSDXC card, insert the card into the Raspberry Pi 4, and power it on
  3. Find the Ethernet DHCP IP address of your Raspberry Pi 4 by running nmap on your MacBook with your subnet
    $ sudo nmap -sn 192.168.1.0/24
    
    Nmap scan report for 192.168.1.209
    Host is up (0.0043s latency).
    MAC Address: E4:5F:01:2E:D8:95 (Raspberry Pi Trading)
    
  4. Login using centos/centos
    ssh centos@$ipaddress
    sudo su -
    
  5. Extend the disk
    sudo growpart /dev/mmcblk0 3
    sudo lsblk
    sudo resize2fs /dev/mmcblk0p3
    

    Output

    [root@centos9stream ~]# sudo growpart /dev/mmcblk0 3
    CHANGED: partition=3 start=1593344 old: size=4882432 end=6475775 new: size=120545247 end=122138590 
    [root@centos9stream ~]# lsblk
    NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
    mmcblk0     179:0    0 58.2G  0 disk
    ├─mmcblk0p1 179:1    0  286M  0 part /boot
    ├─mmcblk0p2 179:2    0  488M  0 part [SWAP]
    └─mmcblk0p3 179:3    0 57.5G  0 part /
    [root@centos9stream ~]# sudo resize2fs /dev/mmcblk0p3
    resize2fs 1.46.5 (30-Dec-2021)
    Filesystem at /dev/mmcblk0p3 is mounted on /; on-line resizing required
    old_desc_blocks = 1, new_desc_blocks = 8
    The filesystem on /dev/mmcblk0p3 is now 15068155 (4k) blocks long.
    
  6. Add your public key, enable wifi
    ssh centos@$ipaddress
    mkdir ~/.ssh
    vi ~/.ssh/authorized_keys
    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys
    

    Check that your key is RSA 2048 bits or larger. An RSA 1024 key will not work after updates.

    ssh-keygen -l -v -f ~/.ssh/id_rsa.pub
    

    If it is 1024, you will get the error “Invalid key length” instead of “Accepted publickey”

    [centos@centos9stream ~]$ sudo cat /var/log/secure | grep RSA
    Dec  4 18:40:46 centos9stream sshd[759]: refusing RSA key: Invalid key length [preauth]
    
    sudo su -
    nmcli device wifi list # Note your ssid
    nmcli device wifi connect $ssid -ask
    
  7. Set the hostname with a domain, add it to /etc/hosts, and set the locale
    hostnamectl set-hostname centos9stream.example.com
    echo $ipaddress centos9stream centos9stream.example.com >> /etc/hosts
    
    dnf -y install glibc-locale-source langpacks-en glibc-all-langpacks
    localedef -c -f UTF-8 -i en_US en_US.UTF-8
    # export LC_ALL=en_US.UTF-8
    localectl list-locales
    localectl set-locale LANG=en_US.UTF-8
    localectl status
    
  8. Update kernel parameters: concatenate the following onto the end of the existing line (do not add a new line) in /boot/cmdline.txt
     cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

    A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. Cgroups are a key component of containers because there are often multiple processes running in a container that you need to control together. In MicroShift, cgroups are used to implement resource requests and limits and the corresponding QoS classes at the pod level.
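
    If you prefer to script the edit, here is a minimal sketch (it assumes /boot/cmdline.txt still contains a single line; back the file up and verify the result before rebooting):

     # Back up the existing kernel command line, then append the cgroup parameters to the single line
     cp /boot/cmdline.txt /boot/cmdline.txt.bak
     sed -i '1 s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
     cat /boot/cmdline.txt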

    Create /etc/yum.repos.d/pgrepo.repo

    [pgrepo]
    name=New Kernel
    type=rpm-md
    baseurl=https://people.centos.org/pgreco/rpi_aarch64_el9_5.15/
    gpgcheck=0
    enabled=1
    

    Update and reboot

     dnf -y update
     reboot
    

    Verify

    mount | grep cgroup
    cat /proc/cgroups | column -t # Check that memory and cpuset are present
    

    Output (hugetlb is not present):

    [root@centos9stream ~]# mount | grep cgroup
    cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)
    [root@centos9stream ~]# cat /proc/cgroups | column -t # Check that memory and cpuset are present
    #subsys_name  hierarchy  num_cgroups  enabled
    cpuset        0          62           1
    cpu           0          62           1
    cpuacct       0          62           1
    blkio         0          62           1
    memory        0          62           1
    devices       0          62           1
    freezer       0          62           1
    net_cls       0          62           1
    perf_event    0          62           1
    net_prio      0          62           1
    pids          0          62           1
    
  9. Check the release
    cat /etc/os-release
    
    [root@centos9stream ~]# cat /etc/os-release
    NAME="CentOS Stream"
    VERSION="9"
    ID="centos"
    ID_LIKE="rhel fedora"
    VERSION_ID="9"
    PLATFORM_ID="platform:el9"
    PRETTY_NAME="CentOS Stream 9"
    ANSI_COLOR="0;31"
    LOGO="fedora-logo-icon"
    CPE_NAME="cpe:/o:centos:centos:9"
    HOME_URL="https://centos.org/"
    BUG_REPORT_URL="https://bugzilla.redhat.com/"
    REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
    REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
    
  10. Optionally, install neofetch to display information about your Raspberry Pi and add a script to watch the cpu temperature
    sudo dnf config-manager --set-enabled crb
    sudo dnf -y install epel-release
    sudo dnf -y install neofetch bc
    cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
    # Optionally update /etc/sysconfig/cpupower if you want a different governor such as ondemand (the shipped default uses performance)
    systemctl enable cpupower; systemctl start cpupower
    cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
    

    Output:

    [root@centos9stream ~]$ cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
    powersave
    [root@centos9stream ~]$ cat /etc/sysconfig/cpupower
    # See 'cpupower help' and cpupower(1) for more info
    CPUPOWER_START_OPTS="frequency-set -g performance"
    CPUPOWER_STOP_OPTS="frequency-set -g ondemand"
    [root@centos9stream ~]$ systemctl enable cpupower; systemctl start cpupower
    Created symlink /etc/systemd/system/multi-user.target.wants/cpupower.service → /usr/lib/systemd/system/cpupower.service.
    [root@centos9stream ~]$ cat /sys/devices/system/cpu/cpufreq/policy0/scaling_governor
    performance
    

    You may keep a watch on the temperature of the Raspberry Pi using the following script rpi_temp.sh

     #!/bin/bash
     # Read the CPU temperature (reported in millidegrees Celsius) and print it in Celsius and Fahrenheit
     cpu=$(</sys/class/thermal/thermal_zone0/temp)
     echo "$(bc <<< "scale=1; ${cpu} / 1000") C  ($(bc <<< "scale=1; ${cpu} / 1000 * 1.8 + 32") F)"
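
    For a continuous readout, one simple option (assuming you saved the script above as rpi_temp.sh in the current directory):

     chmod +x rpi_temp.sh
     watch -n 5 ./rpi_temp.sh # refresh every 5 seconds; Ctrl-C to stop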
    

Install the sense_hat and RTIMULib on CentOS 9

The Sense HAT is an add-on board for the Raspberry Pi. The Sense HAT has an 8 × 8 RGB LED matrix, a five-button joystick and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, Humidity. If you have the Sense HAT attached, install the libraries.

Install sensehat

dnf -y install zlib zlib-devel libjpeg-devel gcc gcc-c++ i2c-tools python3-devel python3 python3-pip cmake git
pip3 install Cython Pillow numpy sense_hat smbus

Install RTIMULib

cd
git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig

# Optional test the sensors
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11

cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10

# Optional
yum -y install qt5-qtbase-devel
cd /root/RTIMULib/Linux/RTIMULibDemoGL
qmake-qt5
make -j4
make install

Check the Sense Hat with i2cdetect

[root@centos9stream /]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Test the SenseHat samples for the Sense Hat's LED matrix and sensors.

cd
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/sensehat-fedora-iot

# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt

# The first time you run temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py 

# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt

# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py

# Find Magnetic North
python3 compass.py

# Test the USB camera
pip3 install pygame --upgrade
python3 testcam.py # It will create a file 101.bmp

Install MicroShift on the Raspberry Pi 4 with CentOS Stream 9

Set up CRI-O and the MicroShift Nightly for CentOS Stream 9 aarch64

rpm -qi selinux-policy # selinux-policy-38.1.1
dnf -y install 'dnf-command(copr)'
curl https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift-nightly/repo/centos-stream-9/group_redhat-et-microshift-nightly-centos-stream-9.repo -o /etc/yum.repos.d/microshift-nightly-centos-stream-9.repo
cat /etc/yum.repos.d/microshift-nightly-centos-stream-9.repo

VERSION=1.25
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Fedora_36/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:${VERSION}/CentOS_9_Stream/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo
cat /etc/yum.repos.d/devel\:kubic\:libcontainers\:stable\:cri-o\:${VERSION}.repo

dnf -y install firewalld cri-o cri-tools microshift containernetworking-plugins # Be patient, this takes a few minutes

Install KVM on the host and validate the host virtualization setup. The virt-host-validate command validates that the host is configured in a suitable way to run the libvirt hypervisor driver for QEMU.

sudo dnf -y install libvirt-client libvirt-nss qemu-kvm virt-manager virt-install virt-viewer
systemctl enable --now libvirtd
virt-host-validate qemu

Output

[root@centos9stream sensehat-fedora-iot]# systemctl enable --now libvirtd
Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /usr/lib/systemd/system/libvirtd.service.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd.socket → /usr/lib/systemd/system/libvirtd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd-ro.socket → /usr/lib/systemd/system/libvirtd-ro.socket.
[root@centos9stream sensehat-fedora-iot]# virt-host-validate qemu
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (Unknown if this platform has IOMMU support)
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)

Check that cni plugins are present and start MicroShift

ls /opt/cni/bin/ # empty
ls /usr/libexec/cni # cni plugins

Output:

[root@centos9stream sensehat-fedora-iot]# ls /opt/cni/bin/ # empty
ls: cannot access '/opt/cni/bin/': No such file or directory
[root@centos9stream sensehat-fedora-iot]# ls /usr/libexec/cni # cni plugins
bandwidth  bridge  dhcp  firewall  host-device  host-local  ipvlan  loopback  macvlan  portmap  ptp  sample  sbr  static  tuning  vlan  vrf

We will have systemd start and manage MicroShift on this rpm-based host. Refer to the microshift service for the three approaches.

systemctl enable --now crio microshift

You may read about selecting zones for your interfaces.

sudo systemctl enable firewalld --now
sudo firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=5353/udp --permanent
sudo firewall-cmd --reload

Additional ports may need to be opened. For external access to run kubectl or oc commands against MicroShift, add the 6443 port:

sudo firewall-cmd --zone=public --permanent --add-port=6443/tcp

For access to services through NodePort, add the port range 30000-32767:

sudo firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp

sudo firewall-cmd --reload
firewall-cmd --list-all --zone=public
firewall-cmd --get-default-zone
#firewall-cmd --set-default-zone=public
#firewall-cmd --get-active-zones

Check the microshift and crio logs

journalctl -u microshift -f
journalctl -u crio -f

The microshift service references the microshift binary in the /usr/bin directory

[root@centos9stream sensehat-fedora-iot]# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

Install the kubectl and the openshift client

ARCH=arm64
cd /tmp
export OCP_VERSION=4.9.11 && \
    curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf oc.tar.gz && \
    rm -f oc.tar.gz && \
    install -t /usr/local/bin {kubectl,oc} && \
    rm -f {README.md,kubectl,oc}

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
#watch "kubectl get nodes;kubectl get pods -A;crictl pods;crictl images"
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Samples to run on MicroShift

We will run samples that will show the use of dynamic persistent volume, SenseHat and the USB camera.

1. InfluxDB/Telegraf/Grafana

The source code is available for this influxdb sample in github.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb

Replace the coreos node name in the persistent volume claims with centos9stream.example.com (our current node name)

sed -i "s|coreos|centos9stream.example.com|" influxdb-data-dynamic.yaml grafana/grafana-data-dynamic.yaml

This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PV.

  annotations:
    kubevirt.io/provisionOnNode: centos9stream.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner 

We create and push the “measure:latest” image using the Dockerfile. If you want to run all the steps in a single command, just execute the runall-balena-dynamic.sh script.

./runall-balena-dynamic.sh

The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.
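
If you want to check the measurements yourself, a minimal sketch (the deployment name influxdb-deployment, the telegraf database name, and the presence of the influx 1.x CLI in the container are assumptions based on the sample's defaults):

oc exec -n influxdb deployment/influxdb-deployment -- influx -database telegraf -execute 'SHOW MEASUREMENTS'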

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" entry to /etc/hosts on your laptop and log in to http://grafana-service-influxdb.cluster.local/login using admin/admin. You may change the password on first login or click Skip. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Go back and open the Balena Sense dashboard to show the temperature, pressure, and humidity from the SenseHat.
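
For reference, the /etc/hosts entry on the laptop can be added like this (substitute your Raspberry Pi's address for 192.168.1.209):

echo "192.168.1.209 grafana-service-influxdb.cluster.local" | sudo tee -a /etc/hosts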

Finally, after you are done working with this sample, you can run the deleteall-balena-dynamic.sh

./deleteall-balena-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes.
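
You can verify the cleanup (a quick check; the dynamically provisioned volumes for this sample should no longer be listed):

oc get pvc -n influxdb
oc get pv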

2. Node Red live data dashboard with SenseHat sensor charts

We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image “karve/nodered:arm64”

cd docker-custom/
# Replace docker with podman in docker-debian.sh and run it
./docker-debian.sh
podman push karve/nodered:arm64
cd ..

Deploy Node Red with persistent volume for /data within the node red container

mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml
oc get routes
oc -n nodered wait deployment nodered-deployment --for condition=Available --timeout=300s
oc logs deployment/nodered-deployment -f

Add the IP address of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your laptop and browse to http://nodered-svc-nodered.cluster.local/

The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, and node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 or Flow 2 has already been imported from the nodered sample. This import into Node Red can be done manually under “Import Nodes”, followed by clicking “Deploy”.

Double-click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see the Home page by default. You can see the state of the joystick: Up, Down, Left, Right, or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT. If you selected Flow 1, you can click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED. You can see the screenshots for these dashboards in previous blogs.

We can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:

cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml -n nodered
oc project default
oc delete project nodered

3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 2.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection that sends pictures and WebSocket chat messages to Node Red when a person is detected, using a pod in MicroShift.

podman build -f Dockerfile -t docker.io/karve/object-detection-raspberrypi4 .
podman push docker.io/karve/object-detection-raspberrypi4:latest

Update the env values WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection.yaml to point to your Raspberry Pi 4 IP address (192.168.1.209 shown below).

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.209

Create the deployment

oc project default
oc apply -f object-detection.yaml
oc -n default wait deployment object-detection-deployment --for condition=Available --timeout=300s

We will see pictures being sent to Node Red when a person is detected. Chat messages are sent to http://nodered-svc-nodered.cluster.local/chat

If instead you see the following error in the logs, it means you are using wss:// instead of ws:// for the local Node Red. Change it to ws:// and replace the deployment. The wss:// for WebSocketURL and the https:// for ImageUploadURL can be used to connect to a Node Red deployment on IBM Cloud.

[root@centos9stream object-detection]# oc logs deployment/object-detection-deployment -f
Traceback (most recent call last):
  File "//detect.py", line 18, in <module>
    ws.connect(webSocketURL)
  File "/usr/lib/python3/dist-packages/websocket/_core.py", line 222, in connect
    self.sock, addrs = connect(url, self.sock_opt, proxy_info(**options),
  File "/usr/lib/python3/dist-packages/websocket/_http.py", line 127, in connect
    sock = _ssl_socket(sock, options.sslopt, hostname)
  File "/usr/lib/python3/dist-packages/websocket/_http.py", line 264, in _ssl_socket
    sock = _wrap_sni_socket(sock, sslopt, hostname, check_hostname)
  File "/usr/lib/python3/dist-packages/websocket/_http.py", line 239, in _wrap_sni_socket
    return context.wrap_socket(
  File "/usr/lib/python3.9/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/lib/python3.9/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/usr/lib/python3.9/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)
[root@centos9stream object-detection]# vi object-detection.yaml
[root@centos9stream object-detection]# oc replace --force -f object-detection.yaml
deployment.apps "object-detection-deployment" deleted
deployment.apps/object-detection-deployment replaced

When we are done testing, we can delete the deployment

cd ~/microshift/raspberry-pi/object-detection
oc delete -f object-detection.yaml

4. Running a Virtual Machine Instance on MicroShift

Find the latest version of the KubeVirt Operator and install it.

LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator

# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt

We can build the OKD Web Console (codename “bridge”) from source as mentioned in Part 9. Here, we will run “bridge” as a container image within MicroShift.

cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'

In okd-web-console-install.yaml, replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value https://192.168.1.209:6443 with your Raspberry Pi 4's IP address, and set the secretRef token for BRIDGE_K8S_AUTH_BEARER_TOKEN to the console-token-* name from the two secret names printed above. Then apply/create the okd-web-console-install.yaml.

vi okd-web-console-install.yaml
oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc -n kube-system wait deployment console-deployment --for condition=Available --timeout=300s
oc logs deployment/console-deployment -f -n kube-system

Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/

We can optionally preload the fedora image into crio (if using the all-in-one containerized approach, this needs to be run within the microshift pod running in podman)

crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.

cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods

The output for the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”.

Note down the IP address of the vmi-fedora Virtual Machine Instance. You can directly connect to the VMI from the Raspberry Pi 4 with fedora as the password.
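
The IP address can be captured with the same jsonpath query that is used inline in the output below:

oc get vmi vmi-fedora -o jsonpath='{ .status.interfaces[0].ipAddress }{"\n"}'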

Output:

[root@centos9stream vmi]# ssh fedora@`oc get vmi vmi-fedora -o jsonpath="{ .status.interfaces[0].ipAddress }"` 'bash -c "ping -c 2 google.com"'
The authenticity of host '10.42.0.17 (10.42.0.17)' can't be established.
ED25519 key fingerprint is SHA256:DgowLpqM+4pb2oJ2hasn2VZlqPqCenhdlMOAINmaSac.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.17' (ED25519) to the list of known hosts.
fedora@10.42.0.17's password:
PING google.com (142.251.35.174) 56(84) bytes of data.
64 bytes from lga25s78-in-f14.1e100.net (142.251.35.174): icmp_seq=1 ttl=118 time=4.48 ms
64 bytes from lga25s78-in-f14.1e100.net (142.251.35.174): icmp_seq=2 ttl=118 time=4.39 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 4.390/4.432/4.475/0.042 ms

Alternatively, a second way is to create a Pod to run the ssh client and connect to the Fedora VM from this pod. Let’s create that openssh-client pod:

oc run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client

or

oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
#oc attach sshclient -c sshclient -i -t

Then, ssh to the Fedora VMI from this openssh-client container.

Output:

[root@centos9stream vmi]# oc get vmi vmi-fedora -o jsonpath='{ .status.interfaces[0].ipAddress }{"\n"}'
10.42.0.17
[root@centos9stream vmi]# oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ssh fedora@10.42.0.17 "bash -c \"ping -c 2 google.com\""
The authenticity of host '10.42.0.17 (10.42.0.17)' can't be established.
ED25519 key fingerprint is SHA256:DgowLpqM+4pb2oJ2hasn2VZlqPqCenhdlMOAINmaSac.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.17' (ED25519) to the list of known hosts.
fedora@10.42.0.17's password:
PING google.com (142.251.40.110) 56(84) bytes of data.
64 bytes from lga25s79-in-f14.1e100.net (142.251.40.110): icmp_seq=1 ttl=117 time=3.63 ms
64 bytes from lga25s79-in-f14.1e100.net (142.251.40.110): icmp_seq=2 ttl=117 time=4.14 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.632/3.887/4.143/0.255 ms
/ # exit
Session ended, resume using 'oc attach sshclient -c sshclient -i -t' command when the pod is running
pod "sshclient" deleted

A third way to connect to the VM is to use the virtctl console. You can compile your own virtctl as described in Part 9. To simplify, we copy the prebuilt virtctl arm64 binary from a container image to /usr/local/bin on the Raspberry Pi 4.

dnf -y install podman
id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id
virtctl console vmi-fedora

Output:

[root@centos9stream vmi]# id=$(podman create docker.io/karve/kubevirt:arm64)
Trying to pull docker.io/karve/kubevirt:arm64...
Getting image source signatures
Copying blob 7065f6098427 done
Copying config 1c7a5aa443 done
Writing manifest to image destination
Storing signatures
[root@centos9stream vmi]# podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
[root@centos9stream vmi]# podman rm -v $id
59b4f00b2dde80c4e0fc9ce2b11d246f782337494c3f6fca263fee14806792cb
[root@centos9stream vmi]# virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]
                                                                       fedora
Password:
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.81.238) 56(84) bytes of data.
64 bytes from lga25s74-in-f14.1e100.net (142.250.81.238): icmp_seq=1 ttl=118 time=5.35 ms
64 bytes from lga25s74-in-f14.1e100.net (142.250.81.238): icmp_seq=2 ttl=118 time=4.38 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.384/4.867/5.351/0.483 ms
[fedora@vmi-fedora ~]$ # Ctrl ] to exit
[root@centos9stream vmi]#

When done, we can delete the VMI

oc delete -f vmi-fedora.yaml

You can run CentOS Stream 9 or Ubuntu VMs using the Containerized Data Importer (CDI) as shown in Part 28.

Also delete the kubevirt operator

oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml

Delete the console

cd /root/microshift/raspberry-pi/console
oc delete -f okd-web-console-install.yaml

5. Use .NET to drive a Raspberry Pi Sense HAT

We will run the .NET sample to retrieve sensor values from the Sense HAT, respond to joystick input, and drive the LED matrix. The source code is in github.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/dotnet

You may build the image using the Dockerfile that uses sensehat-quickstart-1.sh to install .NET and build the SenseHat.Quickstart sample, and test it directly using podman as shown in Part 25. Now, let’s run the sample in MicroShift using the prebuilt arm64v8 image “docker.io/karve/sensehat-dotnet”.
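
If you prefer to build the image yourself instead of pulling the prebuilt one, a rough sketch (the :latest tag is an assumption; push to your own registry account and update dotnet.yaml accordingly):

cd ~/microshift/raspberry-pi/dotnet
podman build -t docker.io/karve/sensehat-dotnet:latest .
# podman login docker.io
podman push docker.io/karve/sensehat-dotnet:latest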

oc new-project dotnet
oc apply -f dotnet.yaml
oc -n dotnet wait deployment dotnet-deployment --for condition=Available --timeout=300s
oc logs deployment/dotnet-deployment -f

We can observe the console log output as sensor data is displayed. The LED matrix displays a yellow pixel on a field of blue. Holding the joystick in any direction moves the yellow pixel in that direction. Clicking the center joystick button causes the background to switch from blue to red.

Temperature Sensor 1: 38.2°C
Temperature Sensor 2: 37.4°C
Pressure: 1004.04 hPa
Altitude: 83.29 m
Acceleration: <-0.024108887, -0.015258789, 0.97961426> g
Angular rate: <2.8270676, 0.075187966, 0.30827066> DPS
Magnetic induction: <-0.15710449, 0.3963623, -0.51342773> gauss
Relative humidity: 38.6%
Heat index: 43.2°C
Dew point: 21.5°C
…

When we are done, we can delete the deployment

oc delete -f dotnet.yaml

6. Postgresql database server

The source code is in github. We will deploy PostgreSQL and use this instance in the next Sample 7.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/pg

Create a new project pg. Create the configmap, pv, pvc and deployment for PostgreSQL

oc new-project pg
mkdir -p /var/hpvolumes/pg

If you have SELinux set to Enforcing, run:

restorecon -R -v "/var/hpvolumes/*"

oc apply -f hostpathpvc.yaml -f hostpathpv.yaml -f pg-configmap.yaml -f pg.yaml
oc get configmap
oc get svc pg-svc
oc get all -lapp=pg
oc -n pg wait deployment pg-deployment --for condition=Available --timeout=300s
oc logs deployment/pg-deployment -f

Output:

[root@centos9stream pg]# oc get svc pg-svc
NAME     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
pg-svc   NodePort   10.43.231.247   <none>        5432:30080/TCP   80m

Install the postgresql client

dnf -y install postgresql

Connect to the database

psql --host localhost --port 30080 --user postgresadmin --dbname postgresdb # test123 as password

Create a TABLE cities and insert a couple of rows

CREATE TABLE cities (name varchar(80), location point);
\t
INSERT INTO cities VALUES ('Madison', '(89.40, 43.07)'),('San Francisco', '(-122.43,37.78)');
SELECT * from cities;
\d
\q

Output:

[root@centos9stream pg]# psql --host localhost --port 30080 --user postgresadmin --dbname postgresdb # test123 as password
Password for user postgresadmin:
psql (13.7, server 9.6.24)
Type "help" for help.

postgresdb=# \t
Tuples only is on.
postgresdb=# \d
Did not find any relations.
postgresdb=# CREATE TABLE cities (name varchar(80), location point);
CREATE TABLE
postgresdb=# \t
Tuples only is off.
postgresdb=# INSERT INTO cities VALUES ('Madison', '(89.40, 43.07)'),('San Francisco', '(-122.43,37.78)');
INSERT 0 2
postgresdb=# SELECT * from cities;
     name      |    location
---------------+-----------------
 Madison       | (89.4,43.07)
 San Francisco | (-122.43,37.78)
(2 rows)

postgresdb=# \d
            List of relations
 Schema |  Name  | Type  |     Owner
--------+--------+-------+---------------
 public | cities | table | postgresadmin
(1 row)

postgresdb=# \q

We can continue to the next sample where we will use this postgresql deployment to demonstrate a python operator.

Instead, to delete the deployment and project, run:

cd ~/microshift/raspberry-pi/pg
oc delete -f hostpathpvc.yaml -f hostpathpv.yaml -f pg-configmap.yaml -f pg.yaml
oc delete project pg

7. Running a Python Operator using kopf (Postgresql)

We will run the Operator that is explained in the YouTube video. In this sample, we will create and delete a student record in PostgreSQL using a Python Operator “postgres-writer-operator” with the Custom Resource Definition postgres-writers.demo.yash.com. A sample Custom Resource sample-student is created in the default namespace. The operator sees this and inserts an entry into the students table. When the resource is deleted, the Operator deletes this entry from the students table.
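
For context, the sample-student custom resource applied later is roughly of the following shape. This is a purely illustrative reconstruction (the apiVersion and the spec field names are assumptions inferred from the operator output); use sample.yaml from the repository rather than this sketch.

# Illustrative only - see sample.yaml in the python-postgres-writer-operator repository
apiVersion: demo.yash.com/v1
kind: PostgresWriter
metadata:
  name: sample-student
  namespace: default
spec:
  name: alex
  age: 23
  country: canada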

Connect to postgresql server

psql --host localhost --port 30080 --user postgresadmin --dbname postgresdb # test123 as password

Create the table students

create table students(id varchar(50) primary key, name varchar(20), age integer, country varchar(20));

Output:

[root@centos9stream pycon-india-2021-talk]# psql --host localhost --port 30080 --user postgresadmin --dbname postgresdb # test123 as password
Password for user postgresadmin:
psql (13.7, server 9.6.24)
Type "help" for help.

postgresdb=# create table students(id varchar(50) primary key, name varchar(20), age integer, country varchar(20));
CREATE TABLE
postgresdb=# \q

Now we build and push the image:

cd
git clone https://github.com/thinkahead/python-postgres-writer-operator
cd ~/python-postgres-writer-operator
podman build -t docker.io/karve/postgres-writer:latest .
# podman login docker.io
podman push docker.io/karve/postgres-writer:latest

Run the Operator

oc apply -f kubernetes/
oc wait deployment postgres-writer-operator --for condition=Available --timeout=300s

Create the sample student resource

oc apply -f sample.yaml
oc get psw -A

Output:

[root@centos9stream pycon-india-2021-talk]# oc apply -f sample.yaml
postgreswriter.demo.yash.com/sample-student created
[root@centos9stream pycon-india-2021-talk]# oc get psw -A
NAMESPACE   NAME             AGE
default     sample-student   5s
[root@centos9stream pycon-india-2021-talk]# psql --host localhost --port 30080 --user postgresadmin --dbname postgresdb # test123 as password
Password for user postgresadmin:
psql (13.7, server 9.6.24)
Type "help" for help.

postgresdb=# select * from students;
           id           | name | age | country
------------------------+------+-----+---------
 default/sample-student | alex |  23 | canada
(1 row)

postgresdb=# \q

Delete the sample student resource

oc delete -f sample.yaml
oc get psw -A

Output:

[root@centos9stream pycon-india-2021-talk]# oc delete -f sample.yaml
postgreswriter.demo.yash.com "sample-student" deleted
[root@centos9stream pycon-india-2021-talk]# psql --host localhost --port 30080 --user postgresadmin --dbname postgresdb # test123 as password
Password for user postgresadmin:
psql (13.7, server 9.6.24)
Type "help" for help.

postgresdb=# select * from students;
 id | name | age | country
----+------+-----+---------
(0 rows)

postgresdb=# \q
[root@centos9stream pycon-india-2021-talk]# oc logs deployment/postgres-writer-operator -f
/usr/local/lib/python3.9/site-packages/kopf/_core/reactor/running.py:170: FutureWarning: Absence of either namespaces or cluster-wide flag will become an error soon. For now, switching to the cluster-wide mode for backward compatibility.
  warnings.warn("Absence of either namespaces or cluster-wide flag will become an error soon."
[2022-12-04 21:49:03,227] kopf._core.engines.a [INFO    ] Initial authentication has been initiated.
[2022-12-04 21:49:03,237] kopf.activities.auth [INFO    ] Activity 'login_via_client' succeeded.
[2022-12-04 21:49:03,240] kopf._core.engines.a [INFO    ] Initial authentication has finished.
[2022-12-04 21:49:14,210] kopf.objects         [INFO    ] [default/sample-student] Handler 'create_fn' succeeded.
[2022-12-04 21:49:14,213] kopf.objects         [INFO    ] [default/sample-student] Creation is processed: 1 succeeded; 0 failed.
[2022-12-04 21:53:34,224] kopf.objects         [INFO    ] [default/sample-student] Handler 'delete_fn' succeeded.
[2022-12-04 21:53:34,225] kopf.objects         [INFO    ] [default/sample-student] Deletion is processed: 1 succeeded; 0 failed.

Note that to avoid status patching errors such as the following, we added “x-kubernetes-preserve-unknown-fields: true” to the Custom Resource Definition.

[2022-12-04 21:53:34,357] kopf.objects         [WARNING ] [default/sample-student] Patching failed with inconsistencies: (('remove', ('status',), {'delete_fn': 'Successfully delete data corresponding to id: default/sample-student'}, None),)

The logs show the sample student “default/sample-student” being inserted into and deleted from the database. When we are done, we can delete the Python Operator.

cd ~/python-postgres-writer-operator
oc delete -f kubernetes/

8. Running a Python Operator using kopf (MongoDB)


The Kubernetes Operator Pythonic Framework (kopf) is part of the Zalando-incubator github repository. This project is well documented at https://kopf.readthedocs.io

We will deploy and use the mongodb database using the image docker.io/arm64v8/mongo:4.4.18. Do not use the latest tag for the image. It will result in "WARNING: MongoDB 5.0+ requires ARMv8.2-A or higher, and your current system does not appear to implement any of the common features for that!" and fail to start. The Raspberry Pi 4 uses an ARM Cortex-A72, which is ARMv8-A.
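
You can confirm the CPU model and feature flags on the Pi before choosing an image tag:

lscpu | grep -E "Architecture|Model name|Flags"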

A new PersistentVolumeClaim mongodb will use the storageClassName kubevirt-hostpath-provisioner for the Persistent Volume. The mongodb-root-username is the root user, with the mongodb-root-password set to a default of mongodb-password. Remember to update the selected-node annotation in mongodb-pv.yaml.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/mongodb
oc project default
vi mongodb-pv.yaml # Update the node name in the annotation
oc apply -f .
oc exec -it statefulset/mongodb -- bash
mongo admin --host mongodb.default.svc.cluster.local:27017 --authenticationDatabase admin -u root -p mongodb-password

Now we will run the Operator python code in Development mode

cd ~/microshift/raspberry-pi/python-mongodb-writer-operator
pip install pymongo kopf kubernetes
oc apply -f kubernetes/crd.yaml
export KUBE_CONFIG=$KUBECONFIG
#export KUBE_CONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
export DEV=true
oc get pods -o wide
vi runme.sh # Update the MONGODB_HOST to the ip address of mongodb-0 pod
. runme.sh # source the environment variables
kopf run operator.py
oc apply -f sample.yaml
oc apply -f sample2.yaml
#Optionally update the sample2
#cat sample2.yaml | sed "s/age: 24/age: 26/g" | sed "s/country: .*/country: germany/g" | oc apply -f -
oc delete -f sample.yaml
oc delete -f sample2.yaml

Operator log

[root@centos9stream python-postgres-writer-operator]# kopf run operator.py
Loading from local kube config
/var/lib/microshift/resources/kubeadmin/kubeconfig
/usr/local/lib/python3.9/site-packages/kopf/_core/reactor/running.py:176: FutureWarning: Absence of either namespaces or cluster-wide flag will become an error soon. For now, switching to the cluster-wide mode for backward compatibility.
  warnings.warn("Absence of either namespaces or cluster-wide flag will become an error soon."
[2022-12-15 11:08:12,301] kopf._core.engines.a [INFO    ] Initial authentication has been initiated.
[2022-12-15 11:08:12,359] kopf.activities.auth [INFO    ] Activity 'login_via_client' succeeded.
[2022-12-15 11:08:12,360] kopf._core.engines.a [INFO    ] Initial authentication has finished.
[2022-12-15 11:09:29,106] kopf.objects         [INFO    ] [default/sample-student] Handler 'create_fn' succeeded.
[2022-12-15 11:09:29,107] kopf.objects         [INFO    ] [default/sample-student] Creation is processed: 1 succeeded; 0 failed.
[2022-12-15 11:10:06,328] kopf.objects         [INFO    ] [default/sample-student2] Handler 'create_fn' succeeded.
[2022-12-15 11:10:06,329] kopf.objects         [INFO    ] [default/sample-student2] Creation is processed: 1 succeeded; 0 failed.
[2022-12-15 11:10:36,320] kopf.objects         [INFO    ] [default/sample-student] Handler 'delete_fn' succeeded.
[2022-12-15 11:10:36,321] kopf.objects         [INFO    ] [default/sample-student] Deletion is processed: 1 succeeded; 0 failed.
[2022-12-15 11:10:51,802] kopf.objects         [INFO    ] [default/sample-student2] Handler 'delete_fn' succeeded.
[2022-12-15 11:10:51,803] kopf.objects         [INFO    ] [default/sample-student2] Deletion is processed: 1 succeeded; 0 failed.

# Ctrl-C to exit

Client log

[root@centos9stream python-postgres-writer-operator]# oc apply -f sample.yaml
mongodbwriter.demo.karve.com/sample-student created
[root@centos9stream python-postgres-writer-operator]# oc apply -f sample2.yaml
mongodbwriter.demo.karve.com/sample-student2 created
[root@centos9stream python-postgres-writer-operator]# oc delete -f sample.yaml
mongodbwriter.demo.karve.com "sample-student" deleted
[root@centos9stream python-postgres-writer-operator]# oc delete -f sample2.yaml
mongodbwriter.demo.karve.com "sample-student2" deleted 

MongoDB logs

[root@centos9stream python-postgres-writer-operator]# oc exec -it statefulset/mongodb -- bash
groups: cannot find name for group ID 1001
1001@mongodb-0:/$ mongo admin --host mongodb.default.svc.cluster.local:27017 --authenticationDatabase admin -u root -p mongodb-password
MongoDB shell version v4.4.18
connecting to: mongodb://mongodb.default.svc.cluster.local:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("843b622b-5596-469f-a220-40be5ebe9cd0") }
MongoDB server version: 4.4.18
Welcome to the MongoDB shell.
> show dbs;
admin   0.000GB
config  0.000GB
local   0.000GB
> show dbs;
admin   0.000GB
config  0.000GB
local   0.000GB
school  0.000GB
> use school
switched to db school
> db.students.find()
{ "_id" : ObjectId("639b006940cd080f1c0b42ba"), "id" : "default/sample-student", "name" : "alex", "age" : 23, "country" : "canada" }
> db.students.find()
{ "_id" : ObjectId("639b006940cd080f1c0b42ba"), "id" : "default/sample-student", "name" : "alex", "age" : 23, "country" : "canada" }
{ "_id" : ObjectId("639b008e40cd080f1c0b42bb"), "id" : "default/sample-student2", "name" : "alex2", "age" : 24, "country" : "usa" }
> db.students.find()
{ "_id" : ObjectId("639b008e40cd080f1c0b42bb"), "id" : "default/sample-student2", "name" : "alex2", "age" : 24, "country" : "usa" }
> db.students.find()

We ran the operator above in local development mode. Now we will package it and run it the way we would in production.

cd ~/microshift/raspberry-pi/python-mongodb-writer-operator
podman build -t docker.io/karve/mongodb-writer:latest .
podman login docker.io
# Optionally push the image to registry
podman push docker.io/karve/mongodb-writer:latest
oc apply -f kubernetes/

Now we can apply, update and delete the sample.yaml and sample2.yaml as shown in DEV mode. When done, we can delete the operator.

oc delete -f kubernetes/

Cleanup MicroShift

We can use the script available on github to remove the pods and images. If you already cloned the microshift repo from github, you have the script in the ~/microshift/hack directory.

wget https://raw.githubusercontent.com/thinkahead/microshift/main/hack/cleanup.sh
#cd ~/microshift/hack
./cleanup.sh

Containerized MicroShift on CentOS 9 Stream

We can run MicroShift within containers in two ways:

  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, CRI-O Systemd service runs directly on the host and data is stored in a podman volume (we can store it in /var/lib/microshift and /var/lib/kubelet on the host as shown in previous blogs).
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only.

MicroShift Containerized

Run the cleanup shown above in preparation for running MicroShift Containerized. If you did not already install podman, you can do it now.

dnf -y install podman

We will use a new microshift.service that runs microshift in a pod using the prebuilt image and uses a podman volume. The rest of the pods run using crio on the host.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target crio.service
After=network-online.target crio.service
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
  --cidfile=%t/%n.ctr-id \
  --cgroups=no-conmon \
  --rm \
  --replace \
  --sdnotify=container \
  --label io.containers.autoupdate=registry \
  --network=host \
  --privileged \
  -d \
  --name microshift \
  -v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
  -v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
  -v microshift-data:/var/lib/microshift:rw,rshared \
  -v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
  -v /var/log:/var/log \
  -v /etc:/etc quay.io/microshift/microshift:latest
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target
EOF


systemctl daemon-reload
systemctl enable --now crio microshift
podman ps -a
podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Check that MicroShift is started

watch "podman ps -a;oc get nodes;oc get pods -A;crictl pods;crictl images"

CONTAINER ID  IMAGE                                 COMMAND     CREATED        STATUS            PORTS       NAMES
50e637aa61a5  quay.io/microshift/microshift:latest  run         5 minutes ago  Up 5 minutes ago              microshift
NAME                        STATUS   ROLES    AGE     VERSION
centos9stream.example.com   Ready    <none>   4m11s   v1.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-lm572                 1/1     Running   0          4m11s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-br86b   1/1     Running   0          4m1s
openshift-dns                   dns-default-rwz99                     2/2     Running   0          4m11s
openshift-dns                   node-resolver-dqvk5                   1/1     Running   0          4m11s
openshift-ingress               router-default-85bcfdd948-x8c2c       1/1     Running   0          4m15s
openshift-service-ca            service-ca-7764c85869-nbh44           1/1     Running   0          4m16s
POD ID              CREATED             STATE               NAME                                  NAMESPACE                       ATTEMPT             RUNTIME
578f4bac5ef1f       3 minutes ago	Ready               dns-default-rwz99                     openshift-dns                   0                   (default)
a891193d6cce4       3 minutes ago	Ready               router-default-85bcfdd948-x8c2c	  openshift-ingress               0                   (default)
28ca852ff2a8e       3 minutes ago	Ready               service-ca-7764c85869-nbh44           openshift-service-ca            0                   (default)
804659e06889d       3 minutes ago	Ready               kubevirt-hostpath-provisioner-br86b   kubevirt-hostpath-provisioner   0                   (default)
ba71ef493cf4a       4 minutes ago	Ready               node-resolver-dqvk5                   openshift-dns                   0                   (default)
b5dd07414aff9       4 minutes ago	Ready               kube-flannel-ds-lm572                 kube-system                     0                   (default)
IMAGE                                     TAG                             IMAGE ID            SIZE
quay.io/microshift/cli                    4.8.0-0.okd-2021-10-10-030117   33a276ba2a973       205MB
quay.io/microshift/coredns                4.8.0-0.okd-2021-10-10-030117   67a95c8f15902       265MB
quay.io/microshift/flannel-cni            4.8.0-0.okd-2021-10-10-030117   0e66d6f50c694       8.78MB
quay.io/microshift/flannel                4.8.0-0.okd-2021-10-10-030117   85fc911ceba5a       68.1MB
quay.io/microshift/haproxy-router         4.8.0-0.okd-2021-10-10-030117   37292c44812e7       225MB
quay.io/microshift/hostpath-provisioner   4.8.0-0.okd-2021-10-10-030117   fdef3dc1264ad       39.3MB
quay.io/microshift/kube-rbac-proxy        4.8.0-0.okd-2021-10-10-030117   7f149e453e908       41.5MB
quay.io/microshift/microshift             latest                          bdccb7de6c282       406MB
quay.io/microshift/service-ca-operator    4.8.0-0.okd-2021-10-10-030117   0d3ab44356260       276MB
registry.k8s.io/pause                     3.6                             7d46a07936af9       492kB

Now that microshift is started, we can run the samples shown earlier.

After we are done, we can delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.

podman stop microshift && podman volume rm microshift-data

or

systemctl stop microshift && podman volume rm microshift-data

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.
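
As before, the cleanup script is in the hack directory of the cloned repository:

cd ~/microshift/hack
./cleanup.sh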

MicroShift Containerized All-In-One

Run the cleanup shown above in preparation for running MicroShift Containerized All-In-One. Let’s stop crio on the host; we will be creating an all-in-one container in podman that will run crio within the container.

systemctl stop crio
systemctl disable crio
mkdir /var/hpvolumes

We will run the all-in-one microshift in podman using prebuilt images (replace the image in the podman run command below with the latest image). Use a different name for the microshift all-in-one pod (with the -h parameter for podman below) than the hostname for the Raspberry Pi 4.

sudo setsebool -P container_manage_cgroup true 
podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 -p 30080:30080 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64

Now that you know the podman command to start the microshift all-in-one, you may alternatively use the following microshift service.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift all-in-one
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm --replace -d --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -v /lib/modules:/lib/modules --label io.containers.autoupdate=registry -p 6443:6443 -p 80:80 -p 30080:30080 quay.io/microshift/microshift-aio:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target 
EOF

systemctl daemon-reload
systemctl start microshift

Note that if port 80 is in use by haproxy from the previous run, just restart the Raspberry Pi 4. Then delete and recreate the microshift pod. We can inspect the microshift-data volume to find the path for kubeconfig.

podman volume inspect microshift-data

On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above.

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman exec -it microshift crictl ps -a"

The crio service is stopped on the Raspberry Pi, so crictl command will not work directly on the Pi. The crictl commands will work within the microshift container in podman as shown in the watch command above.

To run the Virtual Machine examples in the all-in-one MicroShift, we need to execute the mount with --make-shared as follows in the microshift container to prevent the “Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount” event from virt-handler.

podman exec -it microshift mount --make-shared /

We may also preload the virtual machine images using "crictl pull".

podman exec -it microshift crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now, we can run the samples shown earlier.

For the Virtual Machine Instance Sample 4, after the VMI is started, we can connect to vmi-fedora by exposing its ssh port as a NodePort Service. This NodePort is within the all-in-one pod running in podman. The IP address of the all-in-one microshift podman container is 10.88.0.2. We expose target port 22 on the VM as a service on port 22, which is in turn exposed on the microshift container with the allocated NodePort 31436 as seen below. We then run and exec into a new pod called ssh-proxy, install the openssh-client in it, and ssh to port 31436 on the all-in-one microshift container, which takes us to port 22 of the VMI as shown below. First, check the VM by connecting with the virtctl console.

# Wait until the console shows that the VM is started
# virtctl console vmi-fedora # ^] to exit the console

Output:

Fedora 32 (Cloud Edition)
Kernel 5.6.6-300.fc32.aarch64 on an aarch64 (ttyAMA0)

SSH host key: SHA256:EOQXlwFw6uwp57bB7ON+BxiaXW5ZzoeUdhAWD6oproQ (RSA)
SSH host key: SHA256:S4taIVbmn4eCs1v6iZ1TgllsiC7evzB50I/7POGJ2rE (ECDSA)
SSH host key: SHA256:XnVcDxbm1uGQYP2ZSzQthVW94jMAIK3CH00SE5iHgxE (ED25519)
eth0: 10.42.0.12 fe80::ec3f:99ff:fe50:5395
vmi-fedora login:

Now, connect by exposing the ssh port:

[root@centos9stream vmi]# virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Service vmi-fedora-ssh successfully exposed for vmi vmi-fedora
[root@centos9stream vmi]# oc get svc vmi-fedora-ssh
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
vmi-fedora-ssh   NodePort   10.43.17.201   <none>        22:31436/TCP   57s
[root@centos9stream vmi]# podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift
10.88.0.2
[root@centos9stream vmi]# oc run -i --tty ssh-proxy --rm --image=ubuntu --restart=Never -- /bin/sh -c "apt-get update;apt-get -y install openssh-client;ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@10.88.0.2 -p 31436"
If you don't see a command prompt, try pressing enter.
…
Warning: Permanently added '[10.88.0.2]:31436' (ED25519) to the list of known hosts.
fedora@10.88.0.2's password:
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.80.78) 56(84) bytes of data.
64 bytes from lga34s35-in-f14.1e100.net (142.250.80.78): icmp_seq=1 ttl=117 time=3.85 ms
64 bytes from lga34s35-in-f14.1e100.net (142.250.80.78): icmp_seq=2 ttl=117 time=3.61 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 3.609/3.730/3.851/0.121 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.88.0.2 closed.
pod "ssh-proxy" deleted

We can install the QEMU guest agent, a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks.
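A minimal sketch of installing the agent inside the Fedora VMI over the ssh session opened above (the package and service names below are the usual Fedora ones; adjust them for other guest images):

# inside the vmi-fedora guest
sudo dnf -y install qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent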

After you are done, you can delete the VMI, kubevirt operator and console as shown previously. Finally, delete the all-in-one microshift container.

#podman stop -t 120 microshift
podman rm -f microshift && podman volume rm microshift-data

or if started using systemd, then

systemctl stop microshift
podman volume rm microshift-data

Kata Containers

Let’s install Kata.

dnf -y install wget pkg-config

# Install golang
wget https://golang.org/dl/go1.19.3.linux-arm64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.19.3.linux-arm64.tar.gz
rm -f go1.19.3.linux-arm64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
cat << EOF >> /root/.bashrc
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
EOF
mkdir $GOPATH
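A quick sanity check that the Go toolchain is on the path and matches the tarball we downloaded:

go version
# go version go1.19.3 linux/arm64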

We can build and install Kata from source as shown in Part 25; however, that takes a long time. Instead, I have created a docker image with precompiled binaries and kernel that we can copy.

cd /root
id=$(podman create docker.io/karve/kata-go-directory:arm64 -- ls)
podman cp $id:kata-go-directory.tgz kata-go-directory.tgz
tar -zxvf kata-go-directory.tgz && rm -f kata-go-directory.tgz
podman rm $id

For reference, I used the following Dockerfile to create this image after I built the binaries. We can skip directly to the Install kata-runtime section below to install without building from source.

Dockerfile

FROM scratch
WORKDIR /
COPY kata-go-directory.tgz kata-go-directory.tgz

Build the kata-go-directory:arm64 image

cd /root
tar -czf kata-go-directory.tgz go
podman build -t docker.io/karve/kata-go-directory:arm64 . 
podman push docker.io/karve/kata-go-directory:arm64

Install kata-runtime

cd /root/go/src/github.com/kata-containers/kata-containers/src/runtime/
make install

Output:

[root@centos9stream runtime]# make install
kata-runtime - version 3.1.0-alpha0 (commit 9bde32daa102368b9dbc27a6c03ed2e3e87d65e1)

• architecture:
	Host:
	golang:
	Build: arm64

• golang:
	go version go1.19.3 linux/arm64

• hypervisors:
	Default: qemu
	Known: acrn cloud-hypervisor firecracker qemu
	Available for this architecture: cloud-hypervisor firecracker qemu

• Summary:

	destination install path (DESTDIR) : /
	binary installation path (BINDIR) : /usr/local/bin
	binaries to install :
	 - /usr/local/bin/kata-runtime
	 - /usr/local/bin/containerd-shim-kata-v2
	 - /usr/local/bin/kata-monitor
	 - /usr/local/bin/data/kata-collect-data.sh
	configs to install (CONFIGS) :
	 - config/configuration-clh.toml
 	 - config/configuration-fc.toml
 	 - config/configuration-qemu.toml
	install paths (CONFIG_PATHS) :
	 - /usr/share/defaults/kata-containers/configuration-clh.toml
 	 - /usr/share/defaults/kata-containers/configuration-fc.toml
 	 - /usr/share/defaults/kata-containers/configuration-qemu.toml
	alternate config paths (SYSCONFIG_PATHS) :
	 - /etc/kata-containers/configuration-clh.toml
 	 - /etc/kata-containers/configuration-fc.toml
 	 - /etc/kata-containers/configuration-qemu.toml
	default install path for qemu (CONFIG_PATH) : /usr/share/defaults/kata-containers/configuration.toml
	default alternate config path (SYSCONFIG) : /etc/kata-containers/configuration.toml
	qemu hypervisor path (QEMUPATH) : /usr/bin/qemu-system-aarch64
	cloud-hypervisor hypervisor path (CLHPATH) : /usr/bin/cloud-hypervisor
	firecracker hypervisor path (FCPATH) : /usr/bin/firecracker
	assets path (PKGDATADIR) : /usr/share/kata-containers
	shim path (PKGLIBEXECDIR) : /usr/libexec/kata-containers

     INSTALL  install-scripts
     INSTALL  install-completions
     INSTALL  install-configs
     INSTALL  install-configs
     INSTALL  install-bin
     INSTALL  install-containerd-shim-v2
     INSTALL  install-monitor

Check hardware requirements

kata-runtime check --verbose # This will return error because vmlinux.container does not exist yet
which kata-runtime
kata-runtime --version
containerd-shim-kata-v2 --version

Output:

[root@centos9stream runtime]# kata-runtime check --verbose # This will return error because vmlinux.container does not exist yet
ERRO[0000] /usr/share/defaults/kata-containers/configuration-qemu.toml: file /usr/bin/qemu-system-aarch64 does not exist  arch=arm64 name=kata-runtime pid=6770 source=runtime
/usr/share/defaults/kata-containers/configuration-qemu.toml: file /usr/bin/qemu-system-aarch64 does not exist
[root@centos9stream runtime]# which kata-runtime
/usr/local/bin/kata-runtime
[root@centos9stream runtime]# kata-runtime --version
kata-runtime  : 3.1.0-alpha0
   commit   : 9bde32daa102368b9dbc27a6c03ed2e3e87d65e1
   OCI specs: 1.0.2-dev
[root@centos9stream runtime]# containerd-shim-kata-v2 --version
Kata Containers containerd shim: id: "io.containerd.kata.v2", version: 3.1.0-alpha0, commit: 9bde32daa102368b9dbc27a6c03ed2e3e87d65e1

Configure to use initrd image

Since Kata containers can run with either an initrd image or a rootfs image, we will install both images but initially use the initrd; we will switch to the rootfs image in a later section. Make sure initrd = /usr/share/kata-containers/kata-containers-initrd.img is set in the configuration file and the default image line is commented out. We copy the default /usr/share/defaults/kata-containers/configuration.toml to /etc/kata-containers/configuration.toml and edit that copy with the following:

sudo mkdir -p /etc/kata-containers/
sudo install -o root -g root -m 0640 /usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers
sudo sed -i 's/^\(image =.*\)/# \1/g' /etc/kata-containers/configuration.toml
sudo sed -i 's/^# \(initrd =.*\)/\1/g' /etc/kata-containers/configuration.toml

The /etc/kata-containers/configuration.toml now looks as follows:

# image = "/usr/share/kata-containers/kata-containers.img"
initrd = "/usr/share/kata-containers/kata-containers-initrd.img"

Exactly one of the initrd and image options must be set in the Kata runtime config file, not both. The main practical difference is size: an initrd (10MB+) is significantly smaller than a rootfs image (100MB+).
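A quick check of which of the two is active in our copy of the configuration (only the initrd line should be uncommented at this point):

grep -E '^(image|initrd) *=' /etc/kata-containers/configuration.toml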

Install the initrd image

cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/initrd-builder
commit=$(git log --format=%h -1 HEAD)
date=$(date +%Y-%m-%d-%T.%N%z)
image="kata-containers-initrd-${date}-${commit}"
sudo install -o root -g root -m 0640 -D kata-containers-initrd.img "/usr/share/kata-containers/${image}"
(cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers-initrd.img)

Install the rootfs image (we will use this later)

cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/image-builder
commit=$(git log --format=%h -1 HEAD)
date=$(date +%Y-%m-%d-%T.%N%z)
image="kata-containers-${date}-${commit}"
sudo install -o root -g root -m 0640 -D kata-containers.img "/usr/share/kata-containers/${image}"
(cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers.img)

Install Kata Containers Kernel

yum -y install flex bison bc
go env -w GO111MODULE=auto
cd $GOPATH/src/github.com/kata-containers/packaging/kernel

Install the kernel to the default Kata containers path (/usr/share/kata-containers/)

./build-kernel.sh install

Output:

[root@centos9stream image-builder]# go env -w GO111MODULE=auto
[root@centos9stream image-builder]# cd $GOPATH/src/github.com/kata-containers/packaging/kernel
[root@centos9stream kernel]# ./build-kernel.sh install
package github.com/kata-containers/tests: no Go files in /root/go/src/github.com/kata-containers/tests
~/go/src/github.com/kata-containers/tests ~/go/src/github.com/kata-containers/packaging/kernel
~/go/src/github.com/kata-containers/packaging/kernel
INFO: Config version: 92
INFO: Kernel version: 5.4.60
  HOSTCC  scripts/kconfig/conf.o
  HOSTCC  scripts/kconfig/confdata.o
  HOSTCC  scripts/kconfig/lexer.lex.o
  HOSTCC  scripts/kconfig/expr.o
  HOSTLD  scripts/kconfig/conf
scripts/kconfig/conf  --syncconfig Kconfig
  HOSTCC  scripts/dtc/dtc.o
  HOSTCC  scripts/dtc/flattree.o
  HOSTCC  scripts/dtc/fstree.o
  HOSTCC  scripts/dtc/data.o
  HOSTCC  scripts/dtc/livetree.o
  HOSTCC  scripts/dtc/treesource.o
  HOSTCC  scripts/dtc/srcpos.o
  HOSTCC  scripts/dtc/checks.o
  HOSTCC  scripts/dtc/util.o
  HOSTCC  scripts/dtc/dtc-lexer.lex.o
  HOSTCC  scripts/dtc/dtc-parser.tab.o
  HOSTLD  scripts/dtc/dtc
  HOSTCC  scripts/kallsyms
  HOSTCC  scripts/mod/modpost.o
  HOSTCC  scripts/mod/sumversion.o
  HOSTLD  scripts/mod/modpost
  CALL    scripts/atomic/check-atomics.sh
  CALL    scripts/checksyscalls.sh
  HOSTCC  usr/gen_init_cpio
  CHK     include/generated/compile.h
  UPD     include/generated/compile.h
  CC      init/version.o
  GEN     usr/initramfs_data.cpio
  AR      init/built-in.a
  AS      usr/initramfs_data.o
  AR      usr/built-in.a
  GEN     .version
  CHK     include/generated/compile.h
  UPD     include/generated/compile.h
  CC      init/version.o
  AR      init/built-in.a
  LD      vmlinux.o
  MODPOST vmlinux.o
  MODINFO modules.builtin.modinfo
  LD      .tmp_vmlinux.kallsyms1
  KSYM    .tmp_vmlinux.kallsyms1.o
  LD      .tmp_vmlinux.kallsyms2
  KSYM    .tmp_vmlinux.kallsyms2.o
  LD      vmlinux
  SORTEX  vmlinux
  SYSMAP  System.map
  OBJCOPY arch/arm64/boot/Image
  GZIP    arch/arm64/boot/Image.gz
lrwxrwxrwx. 1 root root 17 Dec  6 15:24 /usr/share/kata-containers/vmlinux.container -> vmlinux-5.4.60-92
lrwxrwxrwx. 1 root root 17 Dec  6 15:24 /usr/share/kata-containers/vmlinuz.container -> vmlinuz-5.4.60-92

The /etc/kata-containers/configuration.toml has the following:

# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/libexec/virtiofsd"

Output:

[root@centos9stream kernel]# cat /etc/kata-containers/configuration.toml | grep virtio_fs_daemon
virtio_fs_daemon = "/usr/libexec/virtiofsd"
valid_virtio_fs_daemon_paths = ["/usr/libexec/virtiofsd"]

Check the output of kata-runtime; it gives an error:

[root@centos9stream kernel]# kata-runtime check --verbose
ERRO[0000] /etc/kata-containers/configuration.toml: file /usr/bin/qemu-system-aarch64 does not exist  arch=arm64 name=kata-runtime pid=11093 source=runtime
/etc/kata-containers/configuration.toml: file /usr/bin/qemu-system-aarch64 does not exist 

Let’s fix this with:

ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-aarch64 

Output:

[root@centos9stream kernel]# ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-aarch64
[root@centos9stream kernel]# ls -las /usr/libexec/qemu-kvm /usr/bin/qemu-system-aarch64
    0 lrwxrwxrwx. 1 root root       21 Dec  6 15:25 /usr/bin/qemu-system-aarch64 -> /usr/libexec/qemu-kvm
14660 -rwxr-xr-x. 1 root root 15010976 Nov 14 10:03 /usr/libexec/qemu-kvm
[root@centos9stream kernel]# kata-runtime check --verbose
WARN[0000] Not running network checks as super user      arch=arm64 name=kata-runtime pid=11234 source=runtime
INFO[0000] Unable to know if the system is running inside a VM  arch=arm64 source=virtcontainers/hypervisor
INFO[0000] kernel property found                         arch=arm64 description="Host kernel accelerator for virtio" name=vhost pid=11234 source=runtime type=module
INFO[0000] kernel property found                         arch=arm64 description="Host kernel accelerator for virtio network" name=vhost_net pid=11234 source=runtime type=module
INFO[0000] kernel property found                         arch=arm64 description="Host Support for Linux VM Sockets" name=vhost_vsock pid=11234 source=runtime type=module
INFO[0000] kernel property found                         arch=arm64 description="Kernel-based Virtual Machine" name=kvm pid=11234 source=runtime type=module
System is capable of running Kata Containers
INFO[0000] device available                              arch=arm64 check-type=full device=/dev/kvm name=kata-runtime pid=11234 source=runtime
INFO[0000] feature available                             arch=arm64 check-type=full feature=create-vm name=kata-runtime pid=11234 source=runtime
INFO[0000] kvm extension is supported                    arch=arm64 description="Maximum IPA shift supported by the host" id=165 name=KVM_CAP_ARM_VM_IPA_SIZE pid=11234 source=runtime type="kvm extension"
INFO[0000] IPA limit size: 44 bits.                      arch=arm64 name=KVM_CAP_ARM_VM_IPA_SIZE pid=11234 source=runtime type="kvm extension"
System can currently create Kata Containers

Check the hypervisor.qemu section in configuration.toml:

cat /etc/kata-containers/configuration.toml | awk -v RS= '/\[hypervisor.qemu\]/'
[root@centos9stream kernel]# cat /etc/kata-containers/configuration.toml | awk -v RS= '/\[hypervisor.qemu\]/'
[hypervisor.qemu]
path = "/usr/bin/qemu-system-aarch64"
kernel = "/usr/share/kata-containers/vmlinux.container"
# image = "/usr/share/kata-containers/kata-containers.img"
initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
machine_type = "virt"

Check the initrd image (kata-containers-initrd.img), the rootfs image (kata-containers.img), and the kernel in the /usr/share/kata-containers directory:

[root@centos9stream kernel]# ls -las /usr/share/kata-containers
total 171696
     4 drwxr-xr-x.   2 root root      4096 Dec  6 15:24 .
     4 drwxr-xr-x. 133 root root      4096 Dec  6 15:23 ..
    68 -rw-r--r--.   1 root root     68536 Dec  6 15:24 config-5.4.60
131072 -rw-r-----.   1 root root 134217728 Dec  6 15:22 kata-containers-2022-12-06-15:22:43.522965837+0000-9bde32daa
     4 lrwxrwxrwx.   1 root root        60 Dec  6 15:22 kata-containers.img -> kata-containers-2022-12-06-15:22:43.522965837+0000-9bde32daa
 26144 -rw-r-----.   1 root root  26770481 Dec  6 15:22 kata-containers-initrd-2022-12-06-15:22:32.644022721+0000-9bde32daa
     4 lrwxrwxrwx.   1 root root        67 Dec  6 15:22 kata-containers-initrd.img -> kata-containers-initrd-2022-12-06-15:22:32.644022721+0000-9bde32daa
  9820 -rw-r--r--.   1 root root  10246656 Dec  6 15:24 vmlinux-5.4.60-92
     0 lrwxrwxrwx.   1 root root        17 Dec  6 15:24 vmlinux.container -> vmlinux-5.4.60-92
  4576 -rw-r--r--.   1 root root   4684135 Dec  6 15:24 vmlinuz-5.4.60-92
     0 lrwxrwxrwx.   1 root root        17 Dec  6 15:24 vmlinuz.container -> vmlinuz-5.4.60-92

Create the file /etc/crio/crio.conf.d/50-kata

cat > /etc/crio/crio.conf.d/50-kata << EOF
[crio.runtime.runtimes.kata]
  runtime_path = "/usr/local/bin/containerd-shim-kata-v2"
  runtime_root = "/run/vc"
  runtime_type = "vm"
  privileged_without_host_devices = true
EOF

If you were using the containerized all-in-one approach, let's switch: we will run Kata with the non-containerized MicroShift. Let's clean up first.

cd ~/microshift/hack
./cleanup.sh

Replace microshift.service to allow non-containerized MicroShift. Restart crio and start microshift.

cat << EOF > /usr/lib/systemd/system/microshift.service 
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=/usr/bin/microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl restart crio

systemctl start microshift
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig

Running some Kata samples

After MicroShift is started, you can apply the kata runtimeclass and run the samples.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/kata/
oc apply -f kata-runtimeclass.yaml
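For reference, the runtime class that these samples rely on is a standard Kubernetes RuntimeClass whose handler matches the kata runtime we configured in crio. A minimal sketch (the kata-runtimeclass.yaml in the repository may differ slightly) is:

cat << 'EOF' | oc apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
EOF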

We execute the runall-balena-dynamic.sh script for CentOS Stream 9 after updating the deployment yamls to use runtimeClassName: kata.

cd ~/microshift/raspberry-pi/influxdb/

Update the influxdb-deployment.yaml, telegraf-deployment.yaml and grafana/grafana-deployment.yaml to use runtimeClassName: kata. With Kata containers, we do not get direct access to the host devices, so we run the measure container as a runc pod. In runc, '--privileged' for a container means all the /dev/* block devices from the host are mounted into the container, which allows the privileged container to mount any block device from the host.

sed -i '/^    spec:/a \ \ \ \ \ \ runtimeClassName: kata' influxdb-deployment.yaml telegraf-deployment.yaml grafana/grafana-deployment.yaml
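A quick spot-check that the sed added the field under each pod template's spec:

grep -n 'runtimeClassName' influxdb-deployment.yaml telegraf-deployment.yaml grafana/grafana-deployment.yaml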

Now, get the nodename

[root@centos9stream influxdb]# oc get nodes
NAME                        STATUS   ROLES    AGE   VERSION
centos9stream.example.com   Ready    <none>   43h   v1.21.0

Replace the annotation kubevirt.io/provisionOnNode with the above nodename centos9stream.example.com and execute the runall-balena-dynamic.sh. This will create a new project influxdb.

nodename=centos9stream.example.com
sed -i "s|kubevirt.io/provisionOnNode:.*| kubevirt.io/provisionOnNode: $nodename|" influxdb-data-dynamic.yaml grafana/grafana-data-dynamic.yaml

./runall-balena-dynamic.sh

Let’s watch the stats (CPU%, Memory, Disk and Inodes) of the kata container pods:

watch "oc get nodes;oc get pods -A;crictl stats -a"

Output:

NAME                        STATUS   ROLES    AGE   VERSION
centos9stream.example.com   Ready    <none>   43h   v1.21.0

NAMESPACE                       NAME                                   READY   STATUS    RESTARTS   AGE
influxdb                        grafana-855ffb48d8-5tkvt               1/1     Running   0          22s
influxdb                        influxdb-deployment-6d898b7b7b-t484h   1/1     Running   0          56s
influxdb                        measure-deployment-58cddb5745-w9nt5    1/1     Running   0          45s
influxdb                        telegraf-deployment-d746f5c6-8b4fv     1/1     Running   0          36s
kube-system                     kube-flannel-ds-8xrfz                  1/1     Running   1          43h
kube-system                     metrics-server-684454657f-vwkwc        1/1     Running   1          42h
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-b52bt    1/1     Running   1          43h
openshift-dns                   dns-default-sf46p                      2/2     Running   2          43h
openshift-dns                   node-resolver-5wlf4                    1/1     Running   1          43h
openshift-ingress               router-default-85bcfdd948-9bg42        1/1     Running   1          43h
openshift-service-ca            service-ca-7764c85869-q5q8r            1/1     Running   1          43h
pg                              pg-deployment-78cbc9cc88-knjwb         1/1     Running   1          42h

CONTAINER           CPU %               MEM                 DISK                INODES
0d27f77d2f921       0.04                9.2MB               186kB               11
2fcd01b8e668c       0.00                0B                  12.1kB              13
3d0fbbd62294f       0.00                0B                  12B                 18
40788c4bb2087       0.02                21.7MB              4.026MB             70
44a2d4b7c6a51       0.00                0B                  75B                 10
473cf0b86b36d       0.00                0B                  138B                15
4b900877139b7       0.00                0B                  6.965kB             11
51b14fd235f17       0.00                0B                  12B                 19
5d6156dd60d75       0.09                15.47MB             265B                11
60f30ba9a894a       0.00                0B                  0B                  3
7a493fdbce367       0.00                0B                  14.18kB             22
7b0d39782543b       0.00                0B                  0B                  4
bab73c0602f08       0.00                0B                  12B                 14

We can look at the RUNTIME_CLASS using custom columns:

oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName,IP:.status.podIP,IMAGE:.status.containerStatuses[].image -A 

[root@centos9stream influxdb]# oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName,IP:.status.podIP,IMAGE:.status.containerStatuses[].image -A
NAME                                   STATUS    RUNTIME_CLASS   IP              IMAGE
grafana-855ffb48d8-5tkvt               Running   kata            10.42.0.10      docker.io/grafana/grafana:5.4.3
influxdb-deployment-6d898b7b7b-t484h   Running   kata            10.42.0.7       docker.io/library/influxdb:1.7.4
measure-deployment-58cddb5745-w9nt5    Running   <none>          10.42.0.8       docker.io/karve/measure:latest
telegraf-deployment-d746f5c6-8b4fv     Running   kata            10.42.0.9       docker.io/library/telegraf:1.10.0
kube-flannel-ds-8xrfz                  Running   <none>          192.168.1.209   quay.io/microshift/flannel:4.8.0-0.okd-2021-10-10-030117
metrics-server-684454657f-vwkwc        Running   <none>          10.42.0.3       k8s.gcr.io/metrics-server/metrics-server:v0.6.2
kubevirt-hostpath-provisioner-b52bt    Running   <none>          10.42.0.4       quay.io/microshift/hostpath-provisioner:4.8.0-0.okd-2021-10-10-030117
dns-default-sf46p                      Running   <none>          10.42.0.6       quay.io/microshift/coredns:4.8.0-0.okd-2021-10-10-030117
node-resolver-5wlf4                    Running   <none>          192.168.1.209   quay.io/microshift/cli:4.8.0-0.okd-2021-10-10-030117
router-default-85bcfdd948-9bg42        Running   <none>          192.168.1.209   quay.io/microshift/haproxy-router:4.8.0-0.okd-2021-10-10-030117
service-ca-7764c85869-q5q8r            Running   <none>          10.42.0.2       quay.io/microshift/service-ca-operator:4.8.0-0.okd-2021-10-10-030117
pg-deployment-78cbc9cc88-knjwb         Running   <none>          10.42.0.5       docker.io/arm64v8/postgres:9-bullseye

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You may change the password on first login or click skip. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.

Finally, after you are done working with this sample, you can run the deleteall-balena-dynamic.sh script:

cd ~/microshift/raspberry-pi/influxdb/
./deleteall-balena-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes.
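To confirm that nothing is left behind, we can list the remaining claims and volumes once the delete script has finished (a quick check; the influxdb and grafana entries should no longer appear):

oc get pvc -n influxdb
oc get pv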

Configure to use the rootfs image

We have been using the initrd image for the samples above; now let's switch to the rootfs image by changing the following lines in /etc/kata-containers/configuration.toml

vi /etc/kata-containers/configuration.toml 

Replace as shown below:

image = "/usr/share/kata-containers/kata-containers.img"
#initrd = "/usr/share/kata-containers/kata-containers-initrd.img"

Also disable the image nvdimm by setting the following:

disable_image_nvdimm = true # Default is false
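Instead of editing the file by hand, the same switch can be scripted with sed, mirroring the earlier commands (a sketch; the last pattern assumes a disable_image_nvdimm line is already present in the [hypervisor.qemu] section):

sudo sed -i 's/^# \(image =.*\)/\1/g' /etc/kata-containers/configuration.toml
sudo sed -i 's/^\(initrd =.*\)/# \1/g' /etc/kata-containers/configuration.toml
sudo sed -i 's/^#*[[:space:]]*disable_image_nvdimm *=.*/disable_image_nvdimm = true/' /etc/kata-containers/configuration.toml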

Restart crio and test with the kata-alpine sample. You may also run the influxdb sample shown earlier.

systemctl restart crio
cd ~/microshift/raspberry-pi/kata/
oc project default
oc apply -f kata-alpine.yaml

Output:

[root@centos9stream kata]# oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName,IP:.status.podIP,IMAGE:.status.containerStatuses[].image -n default
NAME          STATUS    RUNTIME_CLASS   IP           IMAGE
kata-alpine   Running   kata            10.42.0.11   docker.io/karve/alpine-sshclient:arm64
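One quick way to confirm the pod really runs inside a Kata virtual machine is to compare the kernel reported inside the pod with the host kernel (the pod name comes from kata-alpine.yaml):

oc exec kata-alpine -- uname -r   # guest kernel, e.g. the 5.4.60 kernel we built above
uname -r                          # host kernel on the Raspberry Pi for comparison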

When done, you can delete the pod

oc delete -f kata-alpine.yaml

We can also run MicroShift Containerized as shown in Part 18 and execute the Jupyter Notebook samples for Digit Recognition, Object Detection and License Plate Recognition with Kata containers as shown in Part 23.

Conclusion

In this Part 29, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with CentOS Stream 9 (64 bit). We ran samples that used a persistent volume for postgresql, the Sense HAT, and a USB camera. We saw an object detection sample that sent pictures and web socket messages to Node Red on IBM Cloud when a person was detected. We installed the OKD Web Console and saw how to connect to a Virtual Machine Instance using KubeVirt on MicroShift with CentOS Stream 9. We used .NET to drive a Raspberry Pi Sense HAT and ran sample Python Operators with kopf to connect to Postgresql and MongoDB. Finally, we installed and configured Kata Containers to run with MicroShift and ran samples that used the Kata runtime. In the next and final Part 30, we will build and work with our own distribution of Linux for the Raspberry Pi 4 using Yocto langdale.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.
