MicroShift – Part 27: Raspberry Pi 4 with Oracle Linux 9

By Alexei Karve posted Tue December 13, 2022 12:12 PM

  

MicroShift with KubeVirt and Kata Containers on Raspberry Pi 4 with Oracle Linux 9

Introduction

MicroShift is a Red Hat-led open-source community project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. Red Hat Device Edge delivers an enterprise-ready and supported distribution of MicroShift. Red Hat Device Edge is planned as a developer preview early next year and is expected to be generally available with full support later in 2023.

Over the last 26 parts, we have worked with MicroShift on multiple distros of Linux on the Raspberry Pi 4 and Jetson Nano. Specifically, we have used up to the 4.8.0-microshift-2022-04-20-141053 branch of MicroShift in this blog series. In Part 16, we worked with MicroShift on Oracle Linux 8.5. In this Part 27, we will work with MicroShift on Oracle Linux 9. We will run an object detection sample and send messages to Node Red installed on MicroShift, and also work with MongoDB. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift. Finally, we will set up MicroShift with the Kata Containers runtime.

Oracle provides an Oracle Linux (aarch64) installation image as a technology preview for developer use only; it is specifically designed to run on a variety of Raspberry Pi models. The installation image is a default installation of Oracle Linux (aarch64) into a raw disk image that can be cloned, block-by-block, to an SD card for an immediate boot. Btrfs is the default file system used in the image.

Setting up the Raspberry Pi 4 with Oracle Linux

Run the following steps to download the Oracle Linux image and set up the Raspberry Pi 4.

  1. Download the latest Oracle Linux 9 64-bit Arm (aarch64) for use with RPi 4 from https://www.oracle.com/linux/downloads/linux-arm-downloads.html
  2. Write the image to a microSDXC card using balenaEtcher or the Raspberry Pi Imager
  3. Connect a keyboard and monitor to the Raspberry Pi 4
  4. Insert the microSDXC card into the Raspberry Pi 4 and power it on
  5. Find the Ethernet DHCP IP address of your Raspberry Pi 4 by running nmap on your MacBook with your subnet
    $ sudo nmap -sn 192.168.1.0/24
    
    Nmap scan report for 192.168.1.209
    Host is up (0.010s latency).
    MAC Address: E4:5F:01:2E:D8:95 (Raspberry Pi Trading)
    
  6. Log in as the root user using the password oracle. Remote login (by ssh) for the root user does not work yet, so this operation has to be executed locally using the keyboard connected to the Raspberry Pi 4. You will need to change your password on first login by typing the old password again and then the new password. Oracle Linux 9 disallows passwords based on dictionary words.
  7. Edit the sshd configuration file /etc/ssh/sshd_config and add the line
    PermitRootLogin yes

    and run:

    systemctl restart sshd
  8. Now we can ssh again with the new password to the IP address above. You may add your public ssh key to log in without a password.
    $ ssh root@$ipaddress
    
    [root@rpi ~]# mkdir ~/.ssh
    [root@rpi ~]# vi ~/.ssh/authorized_keys
    [root@rpi ~]# chmod 700 ~/.ssh
    [root@rpi ~]# chmod 600 ~/.ssh/authorized_keys
    [root@rpi ~]# # chcon -R -v system_u:object_r:usr_t:s0 ~/.ssh/
    

    Check that your key is RSA 2048 bits or larger; an RSA 1024 key will not work.

    ssh-keygen -l -v -f ~/.ssh/id_rsa.pub
    

    If it is 1024, you will get the error

    [root@rpi ~]# cat /var/log/secure
    Nov 26 16:26:42 rpi sshd[3927]: refusing RSA key: Invalid key length [preauth]
    
  9. Extend the partition to maximize disk usage
    fdisk -lu
    growpart /dev/mmcblk0 3
    btrfs filesystem resize max /
    lsblk
    

    Output:

    [root@rpi ~]# fdisk -lu
    Disk /dev/mmcblk0: 58.24 GiB, 62534975488 bytes, 122138624 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xc1a95370
    
    Device         Boot  Start     End Sectors  Size Id Type
    /dev/mmcblk0p1 *      2048  264191  262144  128M  6 FAT16
    /dev/mmcblk0p2      264192  788479  524288  256M 82 Linux swap / Solaris
    /dev/mmcblk0p3      788480 7604223 6815744  3.3G 83 Linux
    [root@rpi ~]# lsblk
    NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
    mmcblk0     179:0    0 58.2G  0 disk
    ├─mmcblk0p1 179:1    0  128M  0 part /boot/efi
    ├─mmcblk0p2 179:2    0  256M  0 part [SWAP]
    └─mmcblk0p3 179:3    0  3.3G  0 part /boot
                                         /
    [root@rpi ~]# growpart /dev/mmcblk0 3
    CHANGED: partition=3 start=788480 old: size=6815744 end=7604224 new: size=121350111 end=122138591
    [root@rpi ~]# btrfs filesystem resize max /
    Resize device id 1 (/dev/mmcblk0p3) from 3.25GiB to max
    [root@rpi ~]# lsblk
    NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
    mmcblk0     179:0    0 58.2G  0 disk
    ├─mmcblk0p1 179:1    0  128M  0 part /boot/efi
    ├─mmcblk0p2 179:2    0  256M  0 part [SWAP]
    └─mmcblk0p3 179:3    0 57.9G  0 part /boot
                                         /
    [root@rpi ~]# fdisk -lu
    Disk /dev/mmcblk0: 58.24 GiB, 62534975488 bytes, 122138624 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xc1a95370
    
    Device         Boot  Start       End   Sectors  Size Id Type
    /dev/mmcblk0p1 *      2048    264191    262144  128M  6 FAT16
    /dev/mmcblk0p2      264192    788479    524288  256M 82 Linux swap / Solaris
    /dev/mmcblk0p3      788480 122138590 121350111 57.9G 83 Linux
    
  10. Set the hostname with a domain and add the IPv4 address to /etc/hosts (update with the IP address seen earlier)
    dnf update -y
    
    hostnamectl set-hostname rpi.example.com
    echo "$ipaddress rpi rpi.example.com" >> /etc/hosts
    
  11. Optionally, enable wifi

    Note: when WiFi is enabled, it disables eth0. You need to run “rmmod brcmfmac” to restore access via eth0.

    dnf -y install wpa_supplicant
    
    
    # Update with your NETWORK_SSID and NETWORK_PASSWORD/PSK
    cat << EOF > /etc/wpa_supplicant/wpa_supplicant.conf
    ctrl_interface=/var/run/wpa_supplicant
    ctrl_interface_group=wheel
    country=US
    p2p_disabled=1
    
    network={
     scan_ssid=1
     ssid="<NETWORK_SSID>"
     psk="<NETWORK_PASSWORD>"
    }
    EOF
    

    For some reason the modprobe command does not work in the systemd service, so I had to run it manually:

    rmmod brcmfmac
    modprobe -vvv brcmfmac
    

    Output:

    [root@rpi ~]# rmmod brcmfmac
    [root@rpi ~]# modprobe -vvv brcmfmac
    modprobe: INFO: custom logging function 0xaaac75debf80 registered
    insmod /lib/modules/5.4.17-2136.307.3.1.el8uek.aarch64/kernel/drivers/net/wireless/broadcom/brcm80211/brcmfmac/brcmfmac.ko.xz
    modprobe: INFO: context 0xaaaca47904e0 released
    

    Create and start the wlan0 service

    cat << EOF > /etc/systemd/system/network-wireless@.service
    [Unit]
    Description=Wireless network connectivity (%i)
    Wants=network.target
    Before=network.target
    BindsTo=sys-subsystem-net-devices-%i.device
    After=sys-subsystem-net-devices-%i.device
    
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    
    #ExecStart=modprobe -vvv -r brcmfmac
    #ExecStart=modprobe -vvv brcmfmac
    ExecStart=/usr/sbin/ip link set dev %i up
    ExecStart=/usr/sbin/wpa_supplicant -B -i %i -c /etc/wpa_supplicant/wpa_supplicant.conf
    ExecStart=/usr/sbin/dhclient -v %i
    
    ExecStop=/usr/sbin/ip link set dev %i down
    
    [Install]
    WantedBy=multi-user.target
    EOF
    ln -s /etc/systemd/system/network-wireless@.service   /etc/systemd/system/multi-user.target.wants/network-wireless@wlan0.service
    systemctl daemon-reload
    systemctl start network-wireless@wlan0.service
    

    If starting without the service, you can run

    modprobe -vvv -r brcmfmac
    modprobe -vvv brcmfmac
    wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
    #dhclient -v -r wlan0
    dhclient -v wlan0
    

    Output:

    [root@rpi ~]# modprobe -vvv -r brcmfmac
    modprobe: INFO: custom logging function 0xaaac6879bf80 registered
    rmmod brcmfmac
    rmmod cfg80211
    rmmod brcmutil
    modprobe: INFO: context 0xaaac83b104d0 released
    [root@rpi ~]# modprobe -vvv brcmfmac
    modprobe: INFO: custom logging function 0xaaaab377bf80 registered
    insmod /lib/modules/5.4.17-2136.300.7.el8uek.aarch64/kernel/net/wireless/cfg80211.ko.xz
    insmod /lib/modules/5.4.17-2136.300.7.el8uek.aarch64/kernel/drivers/net/wireless/broadcom/brcm80211/brcmutil/brcmutil.ko.xz
    insmod /lib/modules/5.4.17-2136.300.7.el8uek.aarch64/kernel/drivers/net/wireless/broadcom/brcm80211/brcmfmac/brcmfmac.ko.xz
    modprobe: INFO: context 0xaaaace3104d0 released
    [root@rpi ~]# wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
    Successfully initialized wpa_supplicant
    

    If you do not update the /etc/wpa_supplicant/wpa_supplicant.conf with the SSID, you can use wpa_cli to set the ssid manually.

    wpa_cli
    #wpa_cli -i wlan0 status 
    

    We get interfaces p2p-dev-wlan0 and wlan0. Use the wlan0.

    [root@rpi ~]# wpa_cli
    wpa_cli v2.9
    Copyright (c) 2004-2019, Jouni Malinen <j@w1.fi> and contributors
    
    This software may be distributed under the terms of the BSD license.
    See README for more details.
    
    
    Selected interface 'p2p-dev-wlan0'
    
    Interactive mode
    
    > interface
    Available interfaces:
    p2p-dev-wlan0
    wlan0
    > interface wlan0
    Connected to interface 'wlan0'.
    > scan
    OK
    <3>CTRL-EVENT-SCAN-STARTED
    > add_network
    0
    > set_network 0 ssid "<NETWORK_SSID>"
    OK
    > set_network 0 psk "<NETWORK_PASSWORD>"
    OK
    > enable_network 0
    OK
    <3>CTRL-EVENT-SCAN-STARTED
    <3>WPS-AP-AVAILABLE
    …
    > quit
    [root@rpi ~]# # The -v option shows information on screen about the dhcp server and the obtained lease
    [root@rpi ~]# dhclient -v -r wlan0 # Release
    [root@rpi ~]# dhclient -v wlan0 # Get new
    [root@rpi ~]# pgrep -a wpa_supplicant
    998 wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
    
  12. Check the release and file system type. Also observe the ssd mount option of the btrfs filesystem for “/” and “/boot”. Btrfs is SSD-aware and exploits TRIM/Discard to allow the file system to report unused blocks to the storage device for reuse. On SSDs, Btrfs avoids unnecessary seek optimization and aggressively sends writes in clusters, even if they are from unrelated files. This results in larger write operations and faster write throughput, albeit at the expense of more seeks later.
    cat /etc/os-release
    lsblk -f
    df -Th
    

    Output:

    [root@rpi ~]# cat /etc/os-release
    NAME="Oracle Linux Server"
    VERSION="9.1"
    ID="ol"
    ID_LIKE="fedora"
    VARIANT="Server"
    VARIANT_ID="server"
    VERSION_ID="9.1"
    PLATFORM_ID="platform:el9"
    PRETTY_NAME="Oracle Linux Server 9.1"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:oracle:linux:9:1:server"
    HOME_URL="https://linux.oracle.com/"
    BUG_REPORT_URL="https://github.com/oracle/oracle-linux"
    
    ORACLE_BUGZILLA_PRODUCT="Oracle Linux 9"
    ORACLE_BUGZILLA_PRODUCT_VERSION=9.1
    ORACLE_SUPPORT_PRODUCT="Oracle Linux"
    ORACLE_SUPPORT_PRODUCT_VERSION=9.1
    [root@rpi ~]# lsblk -f
    NAME        FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
    mmcblk0
    ├─mmcblk0p1 vfat   FAT16       8466-747D                                98M    23% /boot/efi
    ├─mmcblk0p2 swap   1           796dea42-c560-4023-8bb0-31b97b9108d0                [SWAP]
    └─mmcblk0p3 btrfs        ol    94c941dc-c2d8-486d-8474-f2ce87ae9d04     56G     2% /boot
                                                                                       /
    [root@rpi ~]# df -Th
    Filesystem     Type      Size  Used Avail Use% Mounted on
    devtmpfs       devtmpfs  4.0M     0  4.0M   0% /dev
    tmpfs          tmpfs     3.7G     0  3.7G   0% /dev/shm
    tmpfs          tmpfs     1.5G  8.8M  1.5G   1% /run
    /dev/mmcblk0p3 btrfs      58G  1.5G   56G   3% /
    /dev/mmcblk0p3 btrfs      58G  1.5G   56G   3% /boot
    tmpfs          tmpfs     3.7G     0  3.7G   0% /tmp
    /dev/mmcblk0p1 vfat      128M   30M   99M  24% /boot/efi
    tmpfs          tmpfs     753M     0  753M   0% /run/user/0
    
  13. Check the cgroups - A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. Cgroups are a key component of containers because there are often multiple processes running in a container that you need to control together. In MicroShift, cgroups are used to implement resource requests and limits and corresponding QoS classes at the pod level (see the short example after this list).

    ssh root@$ipaddress
    mount | grep cgroup
    cat /proc/cgroups | column -t # Check that memory, cpuset and hugetlb are present
    

    Output:

    [root@rpi ~]# mount | grep cgroup
    cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
    [root@rpi ~]# cat /proc/cgroups | column -t # Check that memory, cpuset and hugetlb are present
    #subsys_name  hierarchy  num_cgroups  enabled
    cpuset        0          78           1
    cpu           0          78           1
    cpuacct       0          78           1
    blkio         0          78           1
    memory        0          78           1
    devices       0          78           1
    freezer       0          78           1
    net_cls       0          78           1
    perf_event    0          78           1
    net_prio      0          78           1
    hugetlb       0          78           1
    pids          0          78           1
    rdma          0          78           1
    misc          0          78           1
    
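As a quick illustration of how those cgroup controllers come into play, once MicroShift is up (installation follows later in this post) a pod that declares resource requests and limits gets a QoS class and has its limits enforced through the cgroup hierarchy. This is only a minimal sketch; the pod name, image, and values are arbitrary.

cat << EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: qos-demo
    image: docker.io/arm64v8/busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
EOF
# Requests lower than limits put the pod in the Burstable QoS class
oc get pod qos-demo -o jsonpath='{.status.qosClass}{"\n"}'
oc delete pod qos-demo
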

Install sense_hat and RTIMULib on Oracle Linux

The Sense HAT is an add-on board for the Raspberry Pi. The Sense HAT has an 8×8 RGB LED matrix, a five-button joystick and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, Humidity. If you have the Sense HAT attached, install the libraries.

Install sensehat

dnf -y install zlib zlib-devel libjpeg-devel gcc gcc-c++ i2c-tools python3-devel python3 python3-pip cmake
pip3 install Cython Pillow numpy sense_hat

Check the Sense Hat with i2cdetect

i2cdetect -y 1
[root@rpi ~]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Add the i2c-dev line to /etc/modules to load the kernel module automatically on boot.

cat << EOF >> /etc/modules
i2c-dev
EOF

Create the file /etc/udev/rules.d/99-i2c.rules with the following contents:

cat << EOF >> /etc/udev/rules.d/99-i2c.rules
KERNEL=="i2c-[0-7]",MODE="0666"
EOF

The Raspberry Pi build comes with the Industrial I/O modules preloaded. We get initialization errors on some of the sensors because the Industrial I/O modules grab on to the i2c sensors on the Sense HAT and refuse to let them go or allow them to be read correctly. Check this with “lsmod | grep st_”.

[root@rpi ~]# lsmod | grep st_
st_pressure_spi       262144  0
st_magn_spi           262144  0
st_sensors_spi        262144  2 st_pressure_spi,st_magn_spi
st_magn_i2c           262144  0
st_pressure_i2c       262144  0
st_pressure           262144  2 st_pressure_i2c,st_pressure_spi
st_magn               262144  2 st_magn_i2c,st_magn_spi
st_sensors_i2c        262144  2 st_pressure_i2c,st_magn_i2c
st_sensors            327680  6 st_pressure,st_pressure_i2c,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi
industrialio_triggered_buffer   262144  2 st_pressure,st_magn
industrialio          327680  9 st_pressure,industrialio_triggered_buffer,st_sensors,st_pressure_i2c,kfifo_buf,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi

We need to blacklist the modules and reboot for the change to take effect.

cat << EOF > /etc/modprobe.d/blacklist-industialio.conf
blacklist st_magn_spi
blacklist st_pressure_spi
blacklist st_sensors_spi
blacklist st_pressure_i2c
blacklist st_magn_i2c
blacklist st_pressure
blacklist st_magn
blacklist st_sensors_i2c
blacklist st_sensors
blacklist industrialio_triggered_buffer
blacklist industrialio
EOF

reboot

Install RTIMULib

dnf -y install git
git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig

# Optional test the sensors
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11 # Ctrl-C to break

cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10 # Ctrl-C to break

# Optional
dnf -y install qt5-qtbase-devel
cd /root/RTIMULib/Linux/RTIMULibDemoGL
qmake-qt5
make -j4
make install

Check the Sense Hat with i2cdetect and that the i2c sensors are no longer being held.

i2cdetect -y 1
lsmod | grep st_
[root@rpi ~]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --
[root@rpi sensehat-fedora-iot]# lsmod | grep st_
empty

Replace the sense_hat.py with the new file that uses SMBus and test the SenseHat samples for the Sense Hat's LED matrix and sensors.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/sensehat-fedora-iot

pip3 install setuptools smbus

# Update the python package to use the i2cbus
unalias cp
# Adjust the python3.x path below if needed to match the Python version on your system
cp -f sense_hat.py.new /usr/local/lib/python3.6/site-packages/sense_hat/sense_hat.py

# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt

# First time you run the temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py 

# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt

# Show two digits for multiple numbers
python3 digits.py

# Use the new get_state method from sense_hat.py
python3 joystick.py # U=Up D=Down L=Left R=Right M=Press

# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py

# Find Magnetic North
python3 compass.py

Install MicroShift on the Raspberry Pi 4 Oracle Linux 9 host

Setup crio and MicroShift Nightly CentOS Stream 9 aarch64

rpm -qi selinux-policy # selinux-policy-34.1.43
dnf -y install 'dnf-command(copr)'
curl https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift-nightly/repo/centos-stream-9/group_redhat-et-microshift-nightly-centos-stream-9.repo -o /etc/yum.repos.d/microshift-nightly-centos-stream-9.repo
cat /etc/yum.repos.d/microshift-nightly-centos-stream-9.repo

VERSION=1.24 # Using 1.24 from CentOS_8_Stream. You may also use the 1.21 from CentOS_8_Stream or the 1.25 from CentOS_9_Stream
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/Fedora_36/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:${VERSION}/CentOS_8_Stream/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo
cat /etc/yum.repos.d/devel\:kubic\:libcontainers\:stable\:cri-o\:${VERSION}.repo

dnf -y install firewalld cri-o cri-tools microshift containernetworking-plugins # Be patient, this takes a few minutes

Install KVM on the host and validate the host virtualization setup. The virt-host-validate command validates that the host is configured in a suitable way to run the libvirt hypervisor driver qemu.

dnf -y install libvirt-client libvirt-nss qemu-kvm virt-manager virt-install virt-viewer
systemctl enable --now libvirtd
virt-host-validate qemu

Check that cni plugins are present

ls /opt/cni/bin/ # empty
ls /usr/libexec/cni # cni plugins

We will have systemd start and manage MicroShift. Refer to the microshift service for the three approaches.

systemctl enable --now crio microshift

# cp /opt/cni/bin/flannel /usr/libexec/cni/. # Copy flannel

You may read about selecting zones for your interfaces.

systemctl enable firewalld --now
firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=5353/udp --permanent
firewall-cmd --reload

Additional ports may need to be opened. For external access to run kubectl or oc commands against MicroShift, add the 6443 port:

firewall-cmd --zone=public --permanent --add-port=6443/tcp

For access to services through NodePort, add the port range 30000-32767:

firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp

firewall-cmd --reload
firewall-cmd --list-all --zone=public
firewall-cmd --get-default-zone
#firewall-cmd --set-default-zone=public
#firewall-cmd --get-active-zones
firewall-cmd --list-all

Check the microshift and crio logs

journalctl -u microshift -f
journalctl -u crio -f

The microshift service references the microshift binary in the /usr/bin directory

[root@rpi sensehat-fedora-iot]# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=/usr/bin/microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

Install kubectl and the OpenShift oc client

ARCH=arm64
cd /tmp
dnf -y install tar
export OCP_VERSION=4.9.11 && \
    curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf oc.tar.gz && \
    rm -f oc.tar.gz && \
    install -t /usr/local/bin {kubectl,oc} && \
    rm -f {README.md,kubectl,oc}

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
#watch "kubectl get nodes;kubectl get pods -A;crictl pods;crictl images"
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

You may also want to set up kubectl and oc autocompletion. Shell command-line completion allows you to quickly build your command without having to type every character.

dnf -y install bash-completion

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

source <(oc completion bash)
echo "source <(oc completion bash)" >> ~/.bashrc

and use a shorthand alias for kubectl that also works with completion

alias k=kubectl
complete -o default -F __start_kubectl k

Install podman - We will use podman for containerized deployment of MicroShift and building images for the samples.

dnf -y install podman

Samples to run on MicroShift

We will run samples that will show the use of dynamic persistent volume, SenseHat and the USB camera.

1. InfluxDB/Telegraf/Grafana

The source code for this InfluxDB sample is available in github.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb

If you want to run all the steps in a single command, get the nodename.

oc get nodes

Output:

[root@rpi influxdb]# oc get nodes
NAME              STATUS   ROLES    AGE     VERSION
rpi.example.com   Ready    <none>   3m36s   v1.21.0

Replace the annotation kubevirt.io/provisionOnNode with the above nodename and execute the runall-fedora-dynamic.sh. This will create a new project influxdb. Note that the node name is different when running MicroShift with the all-in-one containerized approach; in that case you would use microshift.example.com instead of rpi.example.com.

sed -i "s|coreos|rpi.example.com|" influxdb-data-dynamic.yaml
sed -i "s|coreos|rpi.example.com|" grafana/grafana-data-dynamic.yaml

./runall-fedora-dynamic.sh

We create and push the “measure-fedora:latest” image using the Dockerfile that uses SMBus. The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.

This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PV.

  annotations:
    kubevirt.io/provisionOnNode: rpi.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner

Persistent Volumes and Claims Output:

[root@rpi raspberry-pi]# oc get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS                    REASON   AGE
persistentvolume/pvc-92ab8559-9998-4abf-8e7e-a68833cf3748   57Gi       RWO            Delete           Bound    influxdb/influxdb-data   kubevirt-hostpath-provisioner            13m
persistentvolume/pvc-c8530d30-0282-4898-a963-c01cf589d57d   57Gi       RWO            Delete           Bound    influxdb/grafana-data    kubevirt-hostpath-provisioner            9m57s

NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
persistentvolumeclaim/grafana-data    Bound    pvc-c8530d30-0282-4898-a963-c01cf589d57d   57Gi       RWO            kubevirt-hostpath-provisioner   9m58s
persistentvolumeclaim/influxdb-data   Bound    pvc-92ab8559-9998-4abf-8e7e-a68833cf3748   57Gi       RWO            kubevirt-hostpath-provisioner   13m

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.

Finally, after you are done working with this sample, you can run the deleteall-fedora-dynamic.sh

./deleteall-fedora-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes.
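The persistent volumes were provisioned with the Delete reclaim policy (see the RECLAIM POLICY column in the output above), so removing the claims removes the volumes as well. You can verify that nothing is left behind:

oc get pv,pvc -n influxdb
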

2. Node Red live data dashboard with SenseHat sensor charts

We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image. I create and use the “karve/nodered-fedora:arm64”.

cd docker-custom/
./docker-debianonfedora.sh
podman push docker.io/karve/nodered-fedora:arm64
cd ..

Deploy Node Red with persistent volume for /data within the node red container

mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml
oc get routes
oc -n nodered wait deployment nodered-deployment --for condition=Available --timeout=300s
oc logs deployment/nodered-deployment -f

Add the ipaddress of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your Laptop and browse to http://nodered-svc-nodered.cluster.local/

The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, and node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 and Flow 2 have already been imported from the nodered sample. This import into Node Red can also be done manually under “Import Nodes”, followed by clicking “Deploy”. The node-red-node-pi-sense-hat module requires a change in sensehat.py in order to use the sense_hat.py.new that uses smbus and the new function for the joystick. This change is accomplished by overwriting with the modified sensehat.py in Dockerfile.debianonfedora (docker.io/karve/nodered-fedora:arm64 built using docker-debianonfedora.sh); the file is then copied from the /tmp directory to the correct volume when the pod starts in nodered2.yaml.

Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see the Home by Default. You can see the state of the Joystick Up, Down, Left, Right or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT.

If you selected the Flow 1, you could click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED.

We can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:

cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml -n nodered

3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 2.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection that sends pictures and web socket chat messages to Node Red when a person is detected, using a pod in MicroShift.

cp ../sensehat-fedora-iot/sense_hat.py.new .
podman build -f Dockerfile.fedora -t docker.io/karve/object-detection-raspberrypi4-fedora .
podman push docker.io/karve/object-detection-raspberrypi4-fedora:latest

Update the env WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection-fedora.yaml to point to your Raspberry Pi 4's IP address (192.168.1.209 shown below).

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.209

oc project default
oc apply -f object-detection-fedora.yaml

We will see pictures being sent to Node Red when a person is detected at http://nodered-svc-nodered.cluster.local/#flow/3e30dc50ae28f61f and chat messages at http://nodered-svc-nodered.cluster.local/chat. When we are done testing, we can delete the deployment.

oc delete -f object-detection-fedora.yaml

4. Running a Virtual Machine and Virtual Machine Instance on MicroShift

Find the latest version of the KubeVirt Operator.

LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST

I used the following version:

LATEST=20221126 # If the latest version does not work

oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator

# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt

We can build the OKD Web Console (Codename: “bridge”) from the source as mentioned in Part 9. We will run the “bridge” as a container image that we run within MicroShift.

cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'

Replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value https://192.168.1.209:6443 with your Raspberry Pi 4's IP address, and set the secretRef token for BRIDGE_K8S_AUTH_BEARER_TOKEN in okd-web-console-install.yaml to the console-token-* name from the two secret names above. Then apply/create the okd-web-console-install.yaml.
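If you want to pick out the token secret (or the raw bearer token) from the command line, a small sketch follows; it assumes the token secret name contains “console-token”.

# Grab the name of the console token secret
TOKEN_SECRET=$(oc get secrets -n kube-system -o name | grep console-token | head -1 | cut -d/ -f2)
echo $TOKEN_SECRET
# Optionally decode the bearer token itself
oc get secret $TOKEN_SECRET -n kube-system -o jsonpath='{.data.token}' | base64 -d; echo
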

vi okd-web-console-install.yaml
oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc -n kube-system wait deployment console-deployment --for condition=Available --timeout=300s
oc logs deployment/console-deployment -f -n kube-system

Add the Raspberry Pi IP address to /etc/hosts on your Macbook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your Laptop http://console-np-service-kube-system.cluster.local/

We can optionally preload the fedora image into crio (if using the all-in-one containerized approach, this needs to be run within the microshift pod running in podman)

crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.

cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods

The output for the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”. Note down the IP address of the vmi-fedora Virtual Machine Instance. You can connect directly to the VMI from the Raspberry Pi 4 with fedora as the password. Note that it will take another minute after the VMI goes to the Running state before you can ssh to the instance.

Output:

ssh fedora@`oc get vmi --no-headers | awk '{print $4}'` ping -c 2 google.com

[root@rpi vmi]# oc get vmi
NAME         AGE   PHASE     IP           NODENAME          READY
vmi-fedora   36m   Running   10.42.0.31   rpi.example.com   True
[root@rpi vmi]# ssh fedora@10.42.0.31
The authenticity of host '10.42.0.31 (10.42.0.31)' can't be established.
ECDSA key fingerprint is SHA256:go9hFGRHMLKW/PBgimj93wnRTTkYY4iJpbiexMUIQWc.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.31' (ECDSA) to the list of known hosts.
fedora@10.42.0.31's password:
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.176.206) 56(84) bytes of data.
64 bytes from lga34s37-in-f14.1e100.net (142.250.176.206): icmp_seq=1 ttl=116 time=3.18 ms
64 bytes from lga34s37-in-f14.1e100.net (142.250.176.206): icmp_seq=2 ttl=116 time=2.99 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 2.992/3.088/3.184/0.096 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.42.0.31 closed.

Alternatively, a second way is to create a Pod to run the ssh client and connect to the Fedora VM from this pod. Let’s create that openssh-client pod:

oc run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client

or

oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
#oc attach sshclient -c sshclient -i -t

Then, ssh to the Fedora VMI from this openssh-client container.

Output:

[root@rpi vmi]# oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ssh fedora@10.42.0.31 "bash -c \"ping -c 2 google.com\""
The authenticity of host '10.42.0.31 (10.42.0.31)' can't be established.
ED25519 key fingerprint is SHA256:AXd6hQWbiFrau3pYPQiTRJvLLHCV1BJpU8cgEg2ZMTg.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.31' (ED25519) to the list of known hosts.
fedora@10.42.0.31's password:
PING google.com (142.250.176.206) 56(84) bytes of data.
64 bytes from lga34s37-in-f14.1e100.net (142.250.176.206): icmp_seq=1 ttl=116 time=3.12 ms
64 bytes from lga34s37-in-f14.1e100.net (142.250.176.206): icmp_seq=2 ttl=116 time=3.02 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.017/3.070/3.124/0.053 ms
/ # exit
Session ended, resume using 'oc attach sshclient -c sshclient -i -t' command when the pod is running
pod "sshclient" deleted

A third way to connect to the VM is to use the virtctl console. You can compile your own virtctl as was described in Part 9. To simplify, we copy the virtctl arm64 binary from a prebuilt container image to /usr/local/bin on the Raspberry Pi 4 and connect to the VMI using the “virtctl console” command.

id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id
virtctl console vmi-fedora

Output:

[root@rpi nodered]# id=$(podman create docker.io/karve/kubevirt:arm64)
Trying to pull docker.io/karve/kubevirt:arm64...
Getting image source signatures
Copying blob 7065f6098427 done
Copying config 1c7a5aa443 done
Writing manifest to image destination
Storing signatures
[root@rpi nodered]# podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
[root@rpi nodered]# podman rm -v $id
37acc1660b7684e7d98d0873df70b98fad502056be825238f39335ceea1ed5aa
[root@rpi nodered]# virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]

vmi-fedora login: fedora
Password:
Last login: Fri May 13 16:37:58 from 10.42.0.1
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.176.206) 56(84) bytes of data.
64 bytes from lga34s37-in-f14.1e100.net (142.250.176.206): icmp_seq=1 ttl=116 time=3.28 ms
64 bytes from lga34s37-in-f14.1e100.net (142.250.176.206): icmp_seq=2 ttl=116 time=2.91 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 2.908/3.095/3.282/0.187 ms
[fedora@vmi-fedora ~]$ # Ctrl-] to detach
[root@rpi nodered]#

We can also create and start the CentOS 9 Stream VM on the Raspberry Pi 4 using the image we created in Part 26.

crictl pull docker.io/karve/centos-stream-genericcloud-9-20221206:arm64
cd /root/microshift/raspberry-pi/vmi

Update the vm-centos9.yaml with your public key in ssh_authorized_keys so that you can ssh to the VM.

vi vm-centos9.yaml
oc apply -f vm-centos9.yaml
virtctl start vm-centos9

We can ssh to the centos VM with user:cloud-user and password:centos. Then, stop the VM.

virtctl stop vm-centos9

You may continue to the next sample 5 that will use the kubevirt operator and the OKD console. You can run other VM and VMI samples for alpine, cirros and fedora images as in Part 9.

If done with the Fedora VM and CentOS VM, we can delete the VMIs

oc delete -f vmi-fedora.yaml -f vm-centos9.yaml

When done with KubeVirt, you may delete kubevirt operator.

oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml

When done with the OKD Web Console, delete it:

cd /root/microshift/raspberry-pi/console
oc delete -f okd-web-console-install.yaml

5. Containerized Data Importer (CDI)

CDI is a utility designed to import Virtual Machine images for use with KubeVirt. At a high level, a PersistentVolumeClaim (PVC) is created. A custom controller watches for importer-specific claims, and when discovered, starts an import process to create a raw image with the desired content in the associated PVC.
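To make that flow concrete, the sketch below shows the general shape of a DataVolume that imports a cloud image over HTTP into a PVC backed by the hostpath provisioner. The name, size, URL and annotation placement are illustrative assumptions modeled on the sample yamls used later in this section; apply something like this only after the CDI operator and CR are installed below.

cat << EOF | oc apply -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-dv
  annotations:
    kubevirt.io/provisionOnNode: rpi.example.com
spec:
  source:
    http:
      url: "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-arm64.img"
  pvc:
    storageClassName: kubevirt-hostpath-provisioner
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
EOF
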

When we use the default microshift binary, the cdi-apiserver logs show the “no valid subject specified” error, as shown below. Skip to the paragraph after the logs so we can fix it before running the steps indicated below.

[root@rpi microshift]# oc logs pod/cdi-apiserver-5b77584bf9-kz89g -n cdi
I1212 21:54:49.128552       1 apiserver.go:83] Note: increase the -v level in the api deployment for more detailed logging, eg. -v=2 or -v=3
W1212 21:54:49.130518       1 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1212 21:54:49.453098       1 certwatcher.go:125] Updated current TLS certificate
I1212 21:54:50.509936       1 certwatcher.go:81] Starting certificate watcher
2022/12/12 21:54:55 http: TLS handshake error from 10.42.0.1:60290: no valid subject specified
2022/12/12 21:54:55 http: TLS handshake error from 10.42.0.1:60256: no valid subject specified
…
[root@rpi microshift]# oc get apiservices | grep cdi
v1alpha1.cdi.kubevirt.io                 Local                               True                           10m
v1alpha1.upload.cdi.kubevirt.io          cdi/cdi-api                         False (FailedDiscoveryCheck)   10m
v1beta1.cdi.kubevirt.io                  Local                               True                           10m
v1beta1.upload.cdi.kubevirt.io           cdi/cdi-api                         False (FailedDiscoveryCheck)   10m 
[root@rpi microshift]# kubectl api-resources --api-group=cdi.kubevirt.io
NAME              SHORTNAMES   APIVERSION                NAMESPACED   KIND
cdiconfigs                     cdi.kubevirt.io/v1beta1   false        CDIConfig
cdis              cdi,cdis     cdi.kubevirt.io/v1beta1   false        CDI
dataimportcrons   dic,dics     cdi.kubevirt.io/v1beta1   true         DataImportCron
datasources       das          cdi.kubevirt.io/v1beta1   true         DataSource
datavolumes       dv,dvs       cdi.kubevirt.io/v1beta1   true         DataVolume
objecttransfers   ot,ots       cdi.kubevirt.io/v1beta1   false        ObjectTransfer
storageprofiles                cdi.kubevirt.io/v1beta1   false        StorageProfile
error: unable to retrieve the complete list of server APIs: upload.cdi.kubevirt.io/v1alpha1: the server is currently unable to handle the request, upload.cdi.kubevirt.io/v1beta1: the server is currently unable to handle the request

We fix this issue by setting requestheader-allowed-names="" in ~/microshift/pkg/controllers/kube-apiserver.go; update the requestheader-allowed-names line to be empty and recompile the microshift binary. This blank option indicates to an extension apiserver that any CN is acceptable.

                "--requestheader-allowed-names=",

We will use the prebuilt binary with the microshift image containing the above fix.

[root@rpi hack]# id=$(podman create docker.io/karve/microshift:arm64)
Trying to pull docker.io/karve/microshift:arm64...
Getting image source signatures
Copying blob 92cc81bd9f3b done
Copying blob 5d1d750e1695 done
Copying blob 7d5d04141937 done
Copying blob 3747df4fb07d done
Copying blob 209d1a1affea done
Copying config b21a0a79b2 done
Writing manifest to image destination
Storing signatures
[root@rpi hack]# podman cp $id:/usr/bin/microshift /usr/bin/microshift
[root@rpi hack]# restorecon /usr/bin/microshift
[root@rpi hack]# systemctl start microshift

If you want to build the microshift binary yourself, use the following steps with golang 1.17.2 as shown in Part 26.

[root@rpi microshift]# systemctl stop microshift
[root@rpi microshift]# # Update "--requestheader-allowed-names=",
[root@rpi microshift]# vi pkg/controllers/kube-apiserver.go
[root@rpi microshift]# make
[root@rpi microshift]# cp microshift /usr/bin/microshift
cp: overwrite '/usr/bin/microshift'? y
[root@rpi microshift]# restorecon -R -v /usr/bin/microshift
[root@rpi microshift]# systemctl start microshift

Check the latest arm64 version at https://quay.io/repository/kubevirt/cdi-operator?tab=tags&tag=latest

VERSION=v1.55.2
ARM_VERSION=20221210_a6ebd75e-arm64 # Use the arm64 tag from https://quay.io/repository/kubevirt/cdi-operator?tab=tags&tag=latest


# The version does not work with arm64 images
# oc apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml

# So we use the ARM_VERSION
curl -sL https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml | sed "s/$VERSION/$ARM_VERSION/g" | oc apply -f -
# Wait for cdi-operator to start

# Next create the cdi-cr that will create the apiserver, deployment and uploadproxy
oc apply -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml

oc get apiservices
oc api-resources --api-group=cdi.kubevirt.io

In this section, we show the concise instructions for creating the CentOS 9 Stream and Ubuntu Jammy VMs. Have a look at uploading CentOS and Ubuntu images using Containerized Data Importer (CDI) and creating VMs for other distros in Part 26 for detailed instructions.

CentOS 9 Stream VM using URL as datavolume source

cd ~/microshift/raspberry-pi/vmi
vi centos9-dv.yaml # Update the annotation kubevirt.io/provisionOnNode with your microshift node name
oc apply -f centos9-dv.yaml # Create a persistent volume by downloading the CentOS image
# The cloning when creating the VM below will wait for the above import to be completed
vi vm-centos9-datavolume.yaml # Update the annotation kubevirt.io/provisionOnNode with your microshift node name and the ssh_authorized_keys
oc apply -f vm-centos9-datavolume.yaml # Create a CentOS VM by cloning the persistent volume with the above CentOS image
watch oc get dv,pv,pvc,pods,vm,vmi

Ubuntu Jammy VM using the virtctl image-upload

vi example-upload-dv.yaml # Update the annotation kubevirt.io/provisionOnNode with your microshift node name
oc apply -f example-upload-dv.yaml
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-arm64.img

Add the “ipaddress cdi-uploadproxy-cdi.cluster.local” to /etc/hosts where the ipaddress is your Raspberry Pi’s IP address
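For example, with the 192.168.1.209 address used earlier in this post:

echo "192.168.1.209 cdi-uploadproxy-cdi.cluster.local" >> /etc/hosts
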

virtctl image-upload dv example-upload-dv --namespace default --size 10Gi --image-path jammy-server-cloudimg-arm64.img --wait-secs 1200 --no-create --uploadproxy-url=https://cdi-uploadproxy-cdi.cluster.local --insecure

The --insecure flag in the command above is required to avoid the following error:

Post "https://cdi-uploadproxy-cdi.cluster.local/v1alpha1/upload": x509: certificate is valid for router-internal-default.openshift-ingress.svc, router-internal-default.openshift-ingress.svc.cluster.local, not cdi-uploadproxy-cdi.cluster.local

Wait for the jammy image to be uploaded and processed

vi vm-ubuntujammy-uploadvolume.yaml # Update the annotation kubevirt.io/provisionOnNode with your microshift node name and the ssh_authorized_keys
oc apply -f vm-ubuntujammy-uploadvolume.yaml
watch oc get dv,pv,pvc,pods,vm,vmi

You may see the source pod show CreateContainerConfigError; just wait for a few seconds and it will go to the Running state.

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS                    REASON   AGE
persistentvolume/pvc-242655fb-2f15-4a91-9b8d-ad0f80a13bcd   57Gi       RWO            Delete           Bound    default/centos9-dv          kubevirt-hostpath-provisioner            56m
persistentvolume/pvc-ad0eb1d6-c570-45c6-b4eb-346a1bc92af0   57Gi       RWO            Delete           Bound    default/example-upload-dv   kubevirt-hostpath-provisioner            51m
persistentvolume/pvc-e204b42a-8ebc-4f06-94f2-6774fc688bc4   57Gi       RWO            Delete           Bound    default/centos9instance1    kubevirt-hostpath-provisioner            50m
persistentvolume/pvc-e27f5c89-eee7-479f-89b5-91abec7c2328   57Gi       RWO            Delete           Bound    default/ubuntujammy1        kubevirt-hostpath-provisioner            39m

NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
persistentvolumeclaim/centos9-dv          Bound    pvc-242655fb-2f15-4a91-9b8d-ad0f80a13bcd   57Gi       RWO            kubevirt-hostpath-provisioner   56m
persistentvolumeclaim/centos9instance1    Bound    pvc-e204b42a-8ebc-4f06-94f2-6774fc688bc4   57Gi       RWO            kubevirt-hostpath-provisioner   50m
persistentvolumeclaim/example-upload-dv   Bound    pvc-ad0eb1d6-c570-45c6-b4eb-346a1bc92af0   57Gi       RWO            kubevirt-hostpath-provisioner   51m
persistentvolumeclaim/ubuntujammy1        Bound    pvc-e27f5c89-eee7-479f-89b5-91abec7c2328   57Gi       RWO            kubevirt-hostpath-provisioner   39m

NAME                                       READY   STATUS    RESTARTS   AGE
pod/virt-launcher-centos9instance1-w69p9   1/1     Running   0          34m
pod/virt-launcher-ubuntujammy1-5cw29       1/1     Running   0          31m
pod/virt-launcher-vmi-fedora-8s7fm         2/2     Running   0          22m

NAME                                          AGE     STATUS    READY
virtualmachine.kubevirt.io/centos9instance1   50m     Running   True
virtualmachine.kubevirt.io/ubuntujammy1       39m     Running   True
virtualmachine.kubevirt.io/vm-centos9         9m50s   Stopped   False

NAME                                                  AGE   PHASE     IP           NODENAME          READY
virtualmachineinstance.kubevirt.io/centos9instance1   34m   Running   10.42.0.36   rpi.example.com   True
virtualmachineinstance.kubevirt.io/ubuntujammy1       31m   Running   10.42.0.37   rpi.example.com   True
virtualmachineinstance.kubevirt.io/vmi-fedora         22m   Running   10.42.0.39   rpi.example.com   True

You can check the OKD console for these four VMIs.

VMs in OKD Console created using CDI


If done, you can delete the VMs, VMI, and the source PVs created by the CDI

[root@rpi ~]# oc delete vm --all
virtualmachine.kubevirt.io "centos9instance1" deleted
virtualmachine.kubevirt.io "ubuntujammy1" deleted
virtualmachine.kubevirt.io "vm-centos9" deleted
[root@rpi ~]# oc delete vmi --all
virtualmachineinstance.kubevirt.io "vmi-fedora" deleted
[root@rpi ~]# oc delete pvc --all
persistentvolumeclaim "centos9-dv" deleted
persistentvolumeclaim "example-upload-dv" deleted

Finally, you can delete the CDI resource and Operator as follows:

VERSION=v1.55.2
oc delete -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
oc delete -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml

6. Install Metrics Server

This will enable us to run the “kubectl top” and “oc adm top” commands.

dnf install -y wget jq
oc apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Wait for the metrics-server to start in the kube-system namespace
oc get deployment metrics-server -n kube-system
oc get events -n kube-system
# Wait for a couple of minutes for metrics to be collected
oc get --raw /apis/metrics.k8s.io/v1beta1/nodes
oc get --raw /apis/metrics.k8s.io/v1beta1/pods
oc get --raw /api/v1/nodes/$(kubectl get nodes -o json | jq -r '.items[0].metadata.name')/proxy/stats/summary

watch "kubectl top nodes;kubectl top pods -A"
watch "oc adm top nodes;oc adm top pods -A"

Output:

NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
rpi.example.com              865m         21%    1257Mi          16%
NAMESPACE                       NAME                                  CPU(cores)   MEMORY(bytes)
kube-system                     kube-flannel-ds-4mc4p                 9m           16Mi
kube-system                     metrics-server-684454657f-kgkwv       14m          19Mi
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-bpnd8   1m           12Mi
openshift-dns                   dns-default-65cg2                     5m           22Mi
openshift-dns                   node-resolver-d82ck                   0m           1Mi
openshift-ingress               router-default-85bcfdd948-5s28c       3m           27Mi
openshift-service-ca            service-ca-7764c85869-zrnzx           14m          27Mi

We can delete the metrics server using

oc delete -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

7. MongoDB

We will deploy and use the mongodb database using the image: docker.io/arm64v8/mongo:4.4.18. Do not use the latest tag for the image. It will result in "WARNING: MongoDB 5.0+ requires ARMv8.2-A or higher, and your current system does not appear to implement any of the common features for that!" and fail to start. Raspberry Pi 4 uses an ARM Cortex-A72 which is ARM v8-A.
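You can confirm what the CPU reports with lscpu; the exact fields vary with the lscpu version, but the Cortex-A72 identifies as an ARMv8-A part and does not list the v8.2 features that newer MongoDB builds expect.

lscpu | grep -E 'Architecture|Model name|Flags'
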

A new PersistentVolumeClaim mongodb will use the storageClassName: kubevirt-hostpath-provisioner for the Persistent Volume. The mongodb-root-username uses the root user, with the mongodb-root-password set to a default of mongodb-password.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/mongodb
oc project default
vi mongodb-pv.yaml # Update the node name
oc apply -f .
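Before exec-ing into the pod, you can check that the claim bound and read back the configured root password. This is a sketch; it assumes the sample's Secret is named mongodb and stores the password under the mongodb-root-password key.

oc get pvc -n default
oc get secret mongodb -n default -o jsonpath='{.data.mongodb-root-password}' | base64 -d; echo
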

We create the school db and insert 1000 records into the students collection.

Output:

[root@rpi mongodb]# watch "oc get nodes;oc get pods -A;crictl pods;crictl images"
[root@rpi mongodb]# oc exec -it statefulset/mongodb -- bash
groups: cannot find name for group ID 1001
1001@mongodb-0:/$ mongo admin --host mongodb.default.svc.cluster.local:27017 --authenticationDatabase admin -u root -p mongodb-password
MongoDB shell version v4.4.18
connecting to: mongodb://mongodb.default.svc.cluster.local:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("baa50325-95bf-411a-bc01-2d72a5b3c993") }
MongoDB server version: 4.4.18
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
	https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
	https://community.mongodb.com
---
The server generated these startup warnings when booting:
        2022-12-12T21:24:34.484+00:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
---
---
        Enable MongoDB's free cloud-based monitoring service, which will then receive and display
        metrics about your deployment (disk utilization, CPU, operation statistics, etc).

        The monitoring data will be available on a MongoDB website with a unique URL accessible to you
        and anyone you share the URL with. MongoDB may use this information to make product
        improvements and to suggest MongoDB products and deployment options to you.

        To enable free monitoring, run the following command: db.enableFreeMonitoring()
        To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
> use admin
switched to db admin
> show users
{
	"_id" : "admin.root",
	"userId" : UUID("54ff5cd3-b975-4d43-b358-696bbcdbb9cb"),
	"user" : "root",
	"db" : "admin",
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	],
	"mechanisms" : [
		"SCRAM-SHA-1",
		"SCRAM-SHA-256"
	]
}
> use school
switched to db school
> for(var i = 1;i<=1000;i++){ db.students.insert({student_id:i,class:Math.ceil(Math.random()*20),scores:[{type:"exam" , score:Math.ceil(Math.random()*100)},{type:"quiz" , score:Math.ceil(Math.random()*100)},{type:"homework" , score:Math.ceil(Math.random()*100)},{type:"homework" , score:Math.ceil(Math.random()*100)}]}); if(i%100 == 0) {print(i);} }
100
200
300
400
500
600
700
800
900
1000
> db.students.findOne()
{
	"_id" : ObjectId("63979c677f4c0f85dcab5dd5"),
	"student_id" : 1,
	"class" : 20,
	"scores" : [
		{
			"type" : "exam",
			"score" : 19
		},
		{
			"type" : "quiz",
			"score" : 72
		},
		{
			"type" : "homework",
			"score" : 45
		},
		{
			"type" : "homework",
			"score" : 69
		}
	]
}
> exit
bye
1001@mongodb-0:/$ exit
exit
We can run another client pod to connect to the mongodb service and check the number of records in the students collection of the school db we created earlier.
[root@rpi mongodb]# oc run --namespace default mongodb-client --rm --tty -i --restart='Never' --image docker.io/arm64v8/mongo:4.4.18 -- bash
If you don't see a command prompt, try pressing enter.
root@mongodb-client:/# mongo admin --host mongodb.default.svc.cluster.local:27017 --authenticationDatabase admin -u root -p mongodb-password
MongoDB shell version v4.4.18
connecting to: mongodb://mongodb.default.svc.cluster.local:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("7e088bec-e032-4601-9330-8259e6d7e607") }
MongoDB server version: 4.4.18
Welcome to the MongoDB shell.
For interactive help, type "help".
...
> use school
switched to db school
> db.students.count()
1000
> exit
bye
root@mongodb-client:/# exit
exit
pod "mongodb-client" deleted

When we are done, we can delete the mongodb resources:

cd ~/microshift/raspberry-pi/mongodb
oc delete -f .

Alternatively, you can install mongodb using the bitnami helm chart; however, auth will not be set because the chart uses different environment variables for the username and password within the pod.

# Install helm on arm64
curl -o helm-v3.10.2-linux-arm64.tar.gz https://get.helm.sh/helm-v3.10.2-linux-arm64.tar.gz
tar -zxvf helm-v3.10.2-linux-arm64.tar.gz
cp linux-arm64/helm /usr/local/bin
rm -rf linux-arm64
rm -f helm-v3.10.2-linux-arm64.tar.gz

# Add the bitnami repo
chmod 600 /var/lib/microshift/resources/kubeadmin/kubeconfig
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo mongo

helm install mongodb bitnami/mongodb --set image.registry=docker.io --set image.repository=arm64v8/mongo --set image.tag=4.4.18 --set persistence.mountPath=/data/db --set livenessProbe.enabled=false --set readinessProbe.enabled=false --set auth.enabled=false
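Since auth.enabled=false in this helm install, a client pod can connect without credentials. For example, reusing the client image from above (the service is typically named after the helm release; adjust if yours differs):

oc run --namespace default mongodb-client --rm --tty -i --restart='Never' --image docker.io/arm64v8/mongo:4.4.18 -- mongo --host mongodb.default.svc.cluster.local:27017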

When done, you can delete with

helm delete mongodb

Cleanup MicroShift

We can use the cleanup.sh script available on github to cleanup the pods and images. If you already cloned the microshift repo from github, you have the script in the ~/microshift/hack directory.

dnf -y install sudo
cd ~/microshift/hack
./cleanup.sh

Containerized MicroShift on Oracle Linux (64 bit)

We can run MicroShift within containers in two ways:

  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, CRI-O Systemd service runs directly on the host and data is stored in a podman volume
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only

MicroShift Containerized

If you did not already install podman, you can do it now.

dnf install -y podman

We will use a new microshift.service that runs microshift in a pod using the prebuilt image and a podman volume. The rest of the pods run using CRI-O on the host.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target crio.service
After=network-online.target crio.service
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=120
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
  --cidfile=%t/%n.ctr-id \
  --cgroups=no-conmon \
  --rm \
  --replace \
  --sdnotify=container \
  --label io.containers.autoupdate=registry \
  --network=host \
  --privileged \
  -d \
  --name microshift \
  -v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
  -v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
  -v microshift-data:/var/lib/microshift:rw,rshared \
  -v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
  -v /var/log:/var/log \
  -v /etc:/etc quay.io/microshift/microshift:latest
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target
EOF


systemctl daemon-reload
systemctl enable --now crio microshift
podman ps -a
podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Now that microshift is started, we can run the samples shown earlier.

After we are done, we can delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.

podman stop microshift && podman volume rm microshift-data

Alternatively, we can run

systemctl stop microshift

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.
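As before:

cd ~/microshift/hack
./cleanup.sh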

MicroShift Containerized All-In-One

Let’s stop crio on the host; we will be creating an all-in-one container in podman that runs crio within the container.

systemctl stop crio
systemctl disable crio
mkdir /var/hpvolumes

We will run the all-in-one microshift in podman using prebuilt images (replace the image in the podman run command below with the latest image). Use a different name for the microshift all-in-one pod (with the -h parameter for podman below) than the hostname for the Raspberry Pi 4.

setsebool -P container_manage_cgroup true 
podman volume rm microshift-data;podman volume create microshift-data
# Run with microshift.example.com as the pod hostname
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64
# Run with microshift-aio.example.com as the pod hostname
# podman run -d --rm --name microshift -h microshift-aio.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:latest

Now that you know the podman command to start the microshift all-in-one, you may alternatively use the following microshift service.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift all-in-one
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=120
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm --replace -d --name microshift -h microshift-aio.example.com --privileged -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -v /lib/modules:/lib/modules --label io.containers.autoupdate=registry -p 6443:6443 -p 80:80 quay.io/microshift/microshift-aio:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target 
EOF

systemctl daemon-reload
systemctl start microshift

On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above.

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman exec -it microshift crictl ps -a"

The crio service is stopped on the Raspberry Pi, so crictl commands will not work directly on the Pi; they will work within the microshift container in podman, as shown in the watch command above.

To run the Virtual Machine examples in the all-in-one MicroShift, we need to execute the mount with --make-shared as follows in the microshift container to prevent the “Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount” event from virt-handler.

podman exec -it microshift mount --make-shared /

We may also preload the virtual machine images using "crictl pull".

podman exec -it microshift crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now, we can run the samples shown earlier.

For the Virtual Machine Instance Sample 4, we can connect to it using virtctl.

[root@rpi vmi]# id=$(podman create docker.io/karve/kubevirt:arm64)
Trying to pull docker.io/karve/kubevirt:arm64...
Getting image source signatures
Copying blob 7065f6098427 done
Copying config 1c7a5aa443 done
Writing manifest to image destination
Storing signatures
[root@rpi vmi]# podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
[root@rpi vmi]# podman rm -v $id
e67b39982a38c88f5191326c4202bd1b565b62d14f0f85dbb0495f1c2414f7c0
[root@rpi vmi]# virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]

vmi-fedora login: fedora
Password:
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.80.46) 56(84) bytes of data.
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=1 ttl=117 time=4.75 ms
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=2 ttl=117 time=4.14 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.141/4.447/4.753/0.306 ms
[fedora@vmi-fedora ~]$ # ^] to detach
[root@rpi vmi]# 

We can also connect to the vmi-fedora by exposing the ssh port for the Virtual Machine Instance as a NodePort Service after the instance is started. This NodePort is within the all-in-one pod that is running in podman. The ip address of the all-in-one microshift podman container is 10.88.0.6. We expose target port 22 on the VM as a service on port 22, which is in turn exposed on the microshift container on the allocated NodePort 30186, as seen below. We then run and exec into a new pod called ssh-proxy, install the openssh-client in it, and ssh to port 30186 on the all-in-one microshift container, which takes us to port 22 on the VMI:

[root@rpi vmi]# oc get vmi,pods
NAME                                            AGE   PHASE     IP           NODENAME                     READY
virtualmachineinstance.kubevirt.io/vmi-fedora   11m   Running   10.42.0.14   microshift-aio.example.com   True

NAME                                 READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vmi-fedora-ckwbn   2/2     Running   0          11m
[root@rpi vmi]# virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Service vmi-fedora-ssh successfully exposed for vmi vmi-fedora
[root@rpi vmi]# oc get svc vmi-fedora-ssh
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
vmi-fedora-ssh   NodePort   10.43.227.54   <none>        22:30186/TCP   13s
[root@rpi vmi]# podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift
10.88.0.6
[root@rpi vmi]# oc run -i --tty ssh-proxy --rm --image=ubuntu --restart=Never -- /bin/sh -c "apt-get update;apt-get -y install openssh-client;ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@10.88.0.6 -p 30186"
If you don't see a command prompt, try pressing enter.
…
Warning: Permanently added '[10.88.0.6]:30186' (ED25519) to the list of known hosts.
fedora@10.88.0.6's password:
Last login: Sat Nov 26 21:32:38 2022
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.65.174) 56(84) bytes of data.
64 bytes from lga25s71-in-f14.1e100.net (142.250.65.174): icmp_seq=1 ttl=117 time=4.91 ms
64 bytes from lga25s71-in-f14.1e100.net (142.250.65.174): icmp_seq=2 ttl=117 time=4.58 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.578/4.743/4.909/0.165 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.88.0.6 closed.
pod "ssh-proxy" deleted

We can install the QEMU guest agent, a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks.
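A typical way to install and enable the agent from inside the Fedora VMI (assuming the guest has network access; package names may vary on other guest images):

sudo dnf install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent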

After we are done, we can delete the all-in-one microshift container.

#podman stop -t 120 microshift
podman rm -f microshift && podman volume rm microshift-data

or if started using systemd, then

systemctl stop microshift
podman volume rm microshift-data

Run Kata Containers in MicroShift

Let’s install Kata without building from scratch. I have created a docker image with precompiled binaries that we will download. If you want to build from scratch, follow the instructions in Part 26.

dnf -y install wget pkg-config

# Install golang
wget https://golang.org/dl/go1.19.3.linux-arm64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.19.3.linux-arm64.tar.gz
rm -f go1.19.3.linux-arm64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
# Quote EOF so PATH and GOPATH are expanded at login time, not at file-creation time
cat << 'EOF' >> /root/.bashrc
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
EOF
mkdir $GOPATH

cd /root
id=$(podman create docker.io/karve/kata-go-directory:arm64 -- ls)
podman cp $id:kata-go-directory.tgz kata-go-directory.tgz
tar -zxvf kata-go-directory.tgz && rm -f kata-go-directory.tgz
podman rm $id

For reference, I used the following Dockerfile to create this image after I built the binaries. You can skip directly to the "Install kata-runtime" section below to install without building from source.

Dockerfile

FROM scratch
WORKDIR /
COPY kata-go-directory.tgz kata-go-directory.tgz

Build the kata-go-directory:arm64 image

cd /root
tar -czf kata-go-directory.tgz go
podman build -t docker.io/karve/kata-go-directory:arm64 . 
podman push docker.io/karve/kata-go-directory:arm64

Install kata-runtime

cd /root/go/src/github.com/kata-containers/kata-containers/src/runtime/
make install

Output:

[root@rpi runtime]# make install
kata-runtime - version 3.1.0-alpha0 (commit 9bde32daa102368b9dbc27a6c03ed2e3e87d65e1)

• architecture:
	Host:
	golang:
	Build: arm64

• golang:
	go version go1.19.3 linux/arm64

• hypervisors:
	Default: qemu
	Known: acrn cloud-hypervisor firecracker qemu
	Available for this architecture: cloud-hypervisor firecracker qemu

• Summary:

	destination install path (DESTDIR) : /
	binary installation path (BINDIR) : /usr/local/bin
	binaries to install :
	 - /usr/local/bin/kata-runtime
	 - /usr/local/bin/containerd-shim-kata-v2
	 - /usr/local/bin/kata-monitor
	 - /usr/local/bin/data/kata-collect-data.sh
	configs to install (CONFIGS) :
	 - config/configuration-clh.toml
 	 - config/configuration-fc.toml
 	 - config/configuration-qemu.toml
	install paths (CONFIG_PATHS) :
	 - /usr/share/defaults/kata-containers/configuration-clh.toml
 	 - /usr/share/defaults/kata-containers/configuration-fc.toml
 	 - /usr/share/defaults/kata-containers/configuration-qemu.toml
	alternate config paths (SYSCONFIG_PATHS) :
	 - /etc/kata-containers/configuration-clh.toml
 	 - /etc/kata-containers/configuration-fc.toml
 	 - /etc/kata-containers/configuration-qemu.toml
	default install path for qemu (CONFIG_PATH) : /usr/share/defaults/kata-containers/configuration.toml
	default alternate config path (SYSCONFIG) : /etc/kata-containers/configuration.toml
	qemu hypervisor path (QEMUPATH) : /usr/bin/qemu-system-aarch64
	cloud-hypervisor hypervisor path (CLHPATH) : /usr/bin/cloud-hypervisor
	firecracker hypervisor path (FCPATH) : /usr/bin/firecracker
	assets path (PKGDATADIR) : /usr/share/kata-containers
	shim path (PKGLIBEXECDIR) : /usr/libexec/kata-containers

     INSTALL  install-scripts
     INSTALL  install-completions
     INSTALL  install-configs
     INSTALL  install-configs
     INSTALL  install-bin
     INSTALL  install-containerd-shim-v2
     INSTALL  install-monitor

Check hardware requirements

kata-runtime check --verbose # This will return error because vmlinux.container does not exist yet
which kata-runtime
kata-runtime --version
containerd-shim-kata-v2 --version

Output:

[root@rpi runtime]# kata-runtime check --verbose # This will return error because vmlinux.container does not exist yet
ERRO[0000] /usr/share/defaults/kata-containers/configuration-qemu.toml: file /usr/bin/qemu-system-aarch64 does not exist  arch=arm64 name=kata-runtime pid=1487 source=runtime
/usr/share/defaults/kata-containers/configuration-qemu.toml: file /usr/bin/qemu-system-aarch64 does not exist
[root@rpi runtime]# which kata-runtime
/usr/local/bin/kata-runtime
[root@rpi runtime]# kata-runtime --version
kata-runtime  : 3.1.0-alpha0
   commit   : 9bde32daa102368b9dbc27a6c03ed2e3e87d65e1
   OCI specs: 1.0.2-dev
[root@rpi runtime]# containerd-shim-kata-v2 --version
Kata Containers containerd shim: id: "io.containerd.kata.v2", version: 3.1.0-alpha0, commit: 9bde32daa102368b9dbc27a6c03ed2e3e87d65e1

Configure to use the initrd image

Since Kata containers can run with either an initrd image or a rootfs image, we will install both images but initially use the initrd; we will switch to the rootfs image in a later section. Copy the default configuration file /usr/share/defaults/kata-containers/configuration.toml to /etc/kata-containers/, enable the initrd = /usr/share/kata-containers/kata-containers-initrd.img line, and comment out the default image line with the following:

sudo mkdir -p /etc/kata-containers/
sudo install -o root -g root -m 0640 /usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers
sudo sed -i 's/^\(image =.*\)/# \1/g' /etc/kata-containers/configuration.toml
sudo sed -i 's/^# \(initrd =.*\)/\1/g' /etc/kata-containers/configuration.toml

The /etc/kata-containers/configuration.toml now looks as follows:

# image = "/usr/share/kata-containers/kata-containers.img"
initrd = "/usr/share/kata-containers/kata-containers-initrd.img"

Exactly one of the initrd and image options in the Kata runtime config file must be set, but not both. The main difference between the options is that the initrd (10MB+) is significantly smaller than the rootfs image (100MB+).
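You can confirm which of the two is currently active with a quick grep:

grep -E '^(# )?(image|initrd) =' /etc/kata-containers/configuration.toml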

Install the initrd image

cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/initrd-builder
commit=$(git log --format=%h -1 HEAD)
date=$(date +%Y-%m-%d-%T.%N%z)
image="kata-containers-initrd-${date}-${commit}"
sudo install -o root -g root -m 0640 -D kata-containers-initrd.img "/usr/share/kata-containers/${image}"
(cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers-initrd.img)

Install the rootfs image

cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/image-builder
commit=$(git log --format=%h -1 HEAD)
date=$(date +%Y-%m-%d-%T.%N%z)
image="kata-containers-${date}-${commit}"
sudo install -o root -g root -m 0640 -D kata-containers.img "/usr/share/kata-containers/${image}"
(cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers.img)

Install Kata Containers Kernel

yum -y install flex bison bc
go env -w GO111MODULE=auto
cd $GOPATH/src/github.com/kata-containers/packaging/kernel

Install the kernel to the default Kata containers path (/usr/share/kata-containers/)

./build-kernel.sh install

Output:

[root@rpi kernel]# go env -w GO111MODULE=auto
[root@rpi kernel]# ./build-kernel.sh install
package github.com/kata-containers/tests: no Go files in /root/go/src/github.com/kata-containers/tests
~/go/src/github.com/kata-containers/tests ~/go/src/github.com/kata-containers/packaging/kernel
~/go/src/github.com/kata-containers/packaging/kernel
INFO: Config version: 92
INFO: Kernel version: 5.4.60
  CALL    scripts/atomic/check-atomics.sh
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
./scripts/mkcompile_h: line 48: hostname: command not found
  UPD     include/generated/compile.h
  CC      init/version.o
  AR      init/built-in.a
  GEN     .version
  CHK     include/generated/compile.h
./scripts/mkcompile_h: line 48: hostname: command not found
  UPD     include/generated/compile.h
  CC      init/version.o
  AR      init/built-in.a
  LD      vmlinux.o
  MODPOST vmlinux.o
  MODINFO modules.builtin.modinfo
  LD      .tmp_vmlinux.kallsyms1
  KSYM    .tmp_vmlinux.kallsyms1.o
  LD      .tmp_vmlinux.kallsyms2
  KSYM    .tmp_vmlinux.kallsyms2.o
  LD      vmlinux
  SORTEX  vmlinux
  SYSMAP  System.map
  OBJCOPY arch/arm64/boot/Image
  GZIP    arch/arm64/boot/Image.gz
lrwxrwxrwx 1 root root 17 Nov 29 16:27 /usr/share/kata-containers/vmlinux.container -> vmlinux-5.4.60-92
lrwxrwxrwx 1 root root 17 Nov 29 16:27 /usr/share/kata-containers/vmlinuz.container -> vmlinuz-5.4.60-92

The /etc/kata-containers/configuration.toml has the following:

# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/libexec/virtiofsd"

Check the output of kata-runtime (there is still a problem):

[root@microshift kernel]# kata-runtime check --verbose
ERRO[0000] /etc/kata-containers/configuration.toml: file /usr/bin/qemu-system-aarch64 does not exist  arch=arm64 name=kata-runtime pid=205921 source=runtime
/etc/kata-containers/configuration.toml: file /usr/bin/qemu-system-aarch64 does not exist 

Let’s fix this with:

ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-aarch64 

Output:

[root@rpi kernel]# ls -las /usr/libexec/qemu-kvm /usr/bin/qemu-system-aarch64
    4 lrwxrwxrwx 1 root root       21 Nov 29 16:21 /usr/bin/qemu-system-aarch64 -> /usr/libexec/qemu-kvm
12068 -rwxr-xr-x 1 root root 12354904 Nov 16 05:05 /usr/libexec/qemu-kvm

Check the hypervisor.qemu section in configuration.toml:

[root@rpi kernel]# cat /etc/kata-containers/configuration.toml | awk -v RS= '/\[hypervisor.qemu\]/'
[hypervisor.qemu]
path = "/usr/bin/qemu-system-aarch64"
kernel = "/usr/share/kata-containers/vmlinux.container"
# image = "/usr/share/kata-containers/kata-containers.img"
initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
machine_type = "virt"

Check the initrd image (kata-containers-initrd.img), the rootfs image (kata-containers.img), and the kernel in the /usr/share/kata-containers directory:

[root@rpi kernel]# ls -las /usr/share/kata-containers
total 171772
     0 drwxr-xr-x  1 root root       506 Nov 29 16:27 .
     0 drwxr-xr-x. 1 root root      1512 Nov 29 16:25 ..
    68 -rw-r--r--  1 root root     68607 Nov 29 16:27 config-5.4.60
131072 -rw-r-----  1 root root 134217728 Nov 29 16:17 kata-containers-2022-11-29-16:17:18.507476686+0000-9bde32daa
     4 lrwxrwxrwx  1 root root        60 Nov 29 16:17 kata-containers.img -> kata-containers-2022-11-29-16:17:18.507476686+0000-9bde32daa
 26144 -rw-r-----  1 root root  26770481 Nov 29 16:16 kata-containers-initrd-2022-11-29-16:16:01.736902761+0000-9bde32daa
     4 lrwxrwxrwx  1 root root        67 Nov 29 16:16 kata-containers-initrd.img -> kata-containers-initrd-2022-11-29-16:16:01.736902761+0000-9bde32daa
  9896 -rw-r--r--  1 root root  10246656 Nov 29 16:27 vmlinux-5.4.60-92
     4 lrwxrwxrwx  1 root root        17 Nov 29 16:27 vmlinux.container -> vmlinux-5.4.60-92
  4576 -rw-r--r--  1 root root   4684120 Nov 29 16:27 vmlinuz-5.4.60-92
     4 lrwxrwxrwx  1 root root        17 Nov 29 16:27 vmlinuz.container -> vmlinuz-5.4.60-92

Create the file /etc/crio/crio.conf.d/50-kata

cat > /etc/crio/crio.conf.d/50-kata << EOF
[crio.runtime.runtimes.kata]
  runtime_path = "/usr/local/bin/containerd-shim-kata-v2"
  runtime_root = "/run/vc"
  runtime_type = "vm"
  privileged_without_host_devices = true
EOF
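CRI-O only picks up this drop-in after a restart (done below when we switch microshift.service). Once restarted, you can check whether the kata handler is visible; the exact output varies by CRI-O version, so treat this as a rough sanity check:

crictl info | grep -i kata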

We will run Kata using the non-containerized approach for MicroShift. It also works with the “Containerized” approach, but not with “Containerized All-In-One”. Let’s clean up.

cd ~/microshift/hack
./cleanup.sh

Replace microshift.service to allow non-containerized MicroShift. Restart crio and start microshift.

cat << EOF > /usr/lib/systemd/system/microshift.service 
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=/usr/bin/microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl restart crio

systemctl start microshift
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig

Running some Kata samples

After MicroShift is started, you can apply the kata runtimeclass and run the samples.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/kata/
oc apply -f kata-runtimeclass.yaml
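For reference, a RuntimeClass for Kata typically looks like the following minimal sketch; the actual kata-runtimeclass.yaml in the repository may differ, but the important part is that the handler matches the crio runtime name configured earlier:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata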

We execute the runall-fedora-dynamic.sh script for Oracle Linux 9 after updating the deployment yamls to use the runtimeClassName: kata.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb/

Update the influxdb-deployment.yaml, telegraf-deployment.yaml and grafana/grafana-deployment.yaml to use the runtimeClassName: kata. With Kata containers, we do not get direct access to the host devices, so we run the measure container as a runc pod. In runc, '--privileged' for a container means all the /dev/* block devices from the host are mounted into the container, which allows the privileged container to mount any block device from the host.

sed -i '/^    spec:/a \ \ \ \ \ \ runtimeClassName: kata' influxdb-deployment.yaml telegraf-deployment.yaml grafana/grafana-deployment.yaml
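You can confirm the sed took effect before applying the yamls:

grep -n runtimeClassName influxdb-deployment.yaml telegraf-deployment.yaml grafana/grafana-deployment.yaml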

Now, get the nodename

[root@rpi influxdb]# oc get nodes
NAME              STATUS   ROLES    AGE   VERSION
rpi.example.com   Ready    <none>   19h   v1.21.0

Replace the annotation kubevirt.io/provisionOnNode with the above nodename rpi.example.com and execute the runall-fedora-dynamic.sh. This will create a new project influxdb.

nodename=rpi.example.com
sed -i "s|kubevirt.io/provisionOnNode:.*| kubevirt.io/provisionOnNode: $nodename|" influxdb-data-dynamic.yaml
sed -i "s| kubevirt.io/provisionOnNode:.*| kubevirt.io/provisionOnNode: $nodename|" grafana/grafana-data-dynamic.yaml

./runall-fedora-dynamic.sh

Let’s watch the stats (CPU%, Memory, Disk and Inodes) of the kata container pods:

watch "oc get nodes;oc get pods -A;crictl stats -a"

Output:

NAME              STATUS   ROLES    AGE   VERSION
rpi.example.com   Ready    <none>   35m   v1.21.0

NAMESPACE                       NAME                                   READY   STATUS    RESTARTS   AGE
influxdb                        grafana-855ffb48d8-nstwt               1/1     Running   0          2m9s
influxdb                        influxdb-deployment-6d898b7b7b-8gbcv   1/1     Running   0          4m45s
influxdb                        measure-deployment-5557947b8c-8mt42    1/1     Running   0          3m53s
influxdb                        telegraf-deployment-d746f5c6-jf9d6     1/1     Running   0          2m45s
kube-system                     kube-flannel-ds-nsrbf                  1/1     Running   0          35m
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-cpgjz    1/1     Running   0          35m
openshift-dns                   dns-default-2qrqt                      2/2     Running   0          35m
openshift-dns                   node-resolver-472hp                    1/1     Running   0          35m
openshift-ingress               router-default-85bcfdd948-jpbk4        1/1     Running   0          35m
openshift-service-ca            service-ca-7764c85869-5h5hr            1/1     Running   0          36m

CONTAINER           CPU %               MEM                 DISK                INODES
4057ff905750d       0.10                11.87MB             186kB               11
431246eff16ea       0.18                17.15MB             265B                11
53f553a32283f       0.00                0B                  0B                  4
71f6754201723       0.00                0B                  13.28kB             22
8868c33e67ddc       0.00                0B                  12B                 18
8fda5bf99f59f       0.00                0B                  0B                  3
97427d0e37e5b       0.00                0B                  12B                 19
c7b49b2f2a006       0.05                23.3MB              4.026MB             70
d1642b9f5c3e9       0.00                0B                  6.961kB             11
d58ccbfcd9ccf       0.00                0B                  35.41kB             17
ded98933309b1       0.00                0B                  138B                15

We can look at the RUNTIME_CLASS using custom columns:

oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName,IP:.status.podIP,IMAGE:.status.containerStatuses[].image -A 

Output:

[root@rpi influxdb]# oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName,IP:.status.podIP,IMAGE:.status.containerStatuses[].image -A
NAME                                   STATUS    RUNTIME_CLASS   IP              IMAGE
grafana-855ffb48d8-nstwt               Running   kata            10.42.0.11      docker.io/grafana/grafana:5.4.3
influxdb-deployment-6d898b7b7b-8gbcv   Running   kata            10.42.0.8       docker.io/library/influxdb:1.7.4
measure-deployment-5557947b8c-8mt42    Running   <none>          10.42.0.9       docker.io/karve/measure-fedora:latest
telegraf-deployment-d746f5c6-jf9d6     Running   kata            10.42.0.10      docker.io/library/telegraf:1.10.0
kube-flannel-ds-nsrbf                  Running   <none>          192.168.1.209   quay.io/microshift/flannel:4.8.0-0.okd-2021-10-10-030117
kubevirt-hostpath-provisioner-cpgjz    Running   <none>          10.42.0.3       quay.io/microshift/hostpath-provisioner:4.8.0-0.okd-2021-10-10-030117
dns-default-2qrqt                      Running   <none>          10.42.0.4       quay.io/microshift/coredns:4.8.0-0.okd-2021-10-10-030117
node-resolver-472hp                    Running   <none>          192.168.1.209   quay.io/microshift/cli:4.8.0-0.okd-2021-10-10-030117
router-default-85bcfdd948-jpbk4        Running   <none>          192.168.1.209   quay.io/microshift/haproxy-router:4.8.0-0.okd-2021-10-10-030117
service-ca-7764c85869-5h5hr            Running   <none>          10.42.0.2       quay.io/microshift/service-ca-operator:4.8.0-0.okd-2021-10-10-030117

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You may change the password on first login or click skip. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.

Finally, after you are done working with this sample, you can run the deleteall-fedora-dynamic.sh script:

cd ~/microshift/raspberry-pi/influxdb/
./deleteall-fedora-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes.

Configure to use the rootfs image

We have been using the initrd image when running the samples above. Now let’s switch to the rootfs image instead of the initrd by changing the following lines in /etc/kata-containers/configuration.toml:

image = "/usr/share/kata-containers/kata-containers.img"
#initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
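If you prefer to make this switch with sed, mirroring the commands used earlier (verify the file afterwards):

sudo sed -i 's/^# \(image =.*\)/\1/g' /etc/kata-containers/configuration.toml
sudo sed -i 's/^\(initrd =.*\)/# \1/g' /etc/kata-containers/configuration.toml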

Also disable the image nvdimm by setting the following:

disable_image_nvdimm = true # Default is false

Restart crio and test with the kata-alpine sample:

systemctl restart crio
cd ~/microshift/raspberry-pi/kata/
oc project default
oc apply -f kata-alpine.yaml

Output:

[root@rpi kata]# oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName,IP:.status.podIP,IMAGE:.status.containerStatuses[].image -n default
NAME         STATUS    RUNTIME_CLASS   IP           IMAGE
kata-alpine  Running   kata            10.42.0.12   docker.io/karve/alpine-sshclient:arm64

We can also run MicroShift Containerized as shown in Part 18 and execute the Jupyter Notebook samples for Digit Recognition, Object Detection and License Plate Recognition with Kata containers as shown in Part 23.

Conclusion

In this Part 27, we saw multiple options to run MicroShift on the Raspberry Pi 4 with the Oracle Linux 9 (64 bit). We used dynamic persistent volumes to install InfluxDB/Telegraf/Grafana with a dashboard to show SenseHat sensor data. We ran samples that used the Sense Hat/USB camera and worked with a sample that sent the pictures and web socket messages to Node Red when a person was detected. We ran a MongoDB sample. We installed the OKD Web Console and saw how to connect to a Virtual Machine Instance using KubeVirt on MicroShift with Oracle Linux 9. Finally, we installed and configured Kata containers to run with MicroShift and ran samples to use Kata containers. In the next Part 28, we will work with Rocky Linux 9.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.

#MicroShift #Openshift #containers #crio #Edge #raspberry-pi #oraclelinux
