
MicroShift – Part 25: Raspberry Pi 4 with Pop!_OS

By Alexei Karve posted Mon August 29, 2022 09:59 AM

  

MicroShift with KubeVirt and Kata Containers on Raspberry Pi 4 with Pop!_OS 22.04

Introduction

MicroShift is a research project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit) and further in Part 9, we looked at Virtualization with MicroShift on Raspberry Pi 4 with Raspberry Pi OS (64 bit). In Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with the CentOS 8 Stream. In Part 6, we deployed MicroShift on the Raspberry Pi 4 with Ubuntu 20.04. In Part 8, we looked at the All-In-One install of MicroShift on balenaOS. In Part 10, Part 11, and Part 12, we deployed MicroShift and KubeVirt on Fedora IoT, Fedora Server and Fedora CoreOS respectively, Part 13 with Ubuntu 22.04, Part 14 on Rocky Linux, Part 15 on openSUSE, Part 16 on Oracle Linux, Part 17 on AlmaLinux, Part 18 on Manjaro, Part 19 on Kali Linux, Part 20 on Arch Linux, Part 21 on Fedora 36 Silverblue and Part 22 on EndeavourOS. In Part 23 and Part 24, we worked with Kata Containers on Fedora and Manjaro respectively. In this Part 25, we will work with MicroShift on Pop!_OS. We will run an object detection sample and send messages to Node Red installed on MicroShift. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift. We will also use .NET to drive a Raspberry Pi Sense HAT. Finally, we will set up MicroShift with the Kata Containers runtime.

Pop!_OS is a free and open-source Linux distribution based on Ubuntu, featuring the COSMIC GNOME desktop environment. Pop!_OS is largely maintained by System76, with the release version source code posted on GitHub. The latest version of the flagship Linux distro from this US-based computer vendor is based on Ubuntu 22.04 LTS. It inherits much of that release’s foundations, including the lift to GNOME 42, but adds its own ‘Cosmic’ desktop experience and other embellishments. Pop!_OS system requirements mandate a 64-bit processor, plus a minimum of 4GB RAM and 16GB storage. As of now, System76 has made its Pop!_OS distribution available as a technical preview on the Raspberry Pi 4, so you should expect bugs and usability issues if you would like to take it for a test drive with MicroShift.

Setting up the Raspberry Pi 4 with Pop!_OS


We will download the Pop!_OS image (RAS PI4) from System76 and write it to a MicroSDXC card

  1. Download the image from https://iso.pop-os.org/22.04/arm64/raspi/2/pop-os_22.04_arm64_raspi_2.img.xz
  2. Write it to a MicroSDXC card
  3. Have a keyboard, mouse, and monitor connected to the Raspberry Pi
  4. Insert the MicroSDXC card into the Raspberry Pi 4 and power it on
  5. Set the language, WiFi connection, and timezone; click Next for online accounts; type the user rpi and a password; and click on “Start Using Pop!_OS”. We will continue by connecting using ssh. You can read more about using the GUI interface.
  6. Update Pop!_OS - this update process is tricky and may change. I had to reboot three times before reaching the welcome screen that allowed the configuration. Later, I had to use a combination of the commands below to upgrade the OS because of problems with dependencies.

    Update the /etc/apt/sources.list.d/system.sources with the following:

    X-Repolib-Name: Pop_OS System Sources
    Enabled: yes
    Types: deb deb-src
    URIs: http://ports.ubuntu.com/ubuntu-ports/
    Suites: jammy jammy-proposed jammy-updates jammy-backports jammy-security
    Components: main restricted universe multiverse
    X-Repolib-Default-Mirror: http://us.archive.ubuntu.com/ubuntu/
    

    Upgrade the OS - Problem with broken packages

    dpkg --configure -a
    apt-get install -f

    Edit the dpkg status file manually and remove the section on udev

    vi /var/lib/dpkg/status # Remove the section on udev
    
    apt -y install systemd=249.11-0ubuntu3.4 libpam-systemd=249.11-0ubuntu3.4 libnss-systemd=249.11-0ubuntu3.4 libsystemd0=249.11-0ubuntu3.4 systemd-sysv=249.11-0ubuntu3.4 systemd-timesyncd=249.11-0ubuntu3.4 udev=249.11-0ubuntu3.4 libudev1=249.11-0ubuntu3.4 dpkg
    dpkg --configure -a
    apt-get install -f
    apt full-upgrade
    apt autoremove --purge
    #apt -y upgrade --allow-downgrades
    #apt dist-upgrade

    Install openssh-server

    apt -y install openssh-server
    systemctl start sshd # should be already started by the install above
  7. Update the hostname and add the IPv4 address to /etc/hosts
    hostnamectl set-hostname microshift.example.com
    echo "$ipaddress microshift microshift.example.com" >> /etc/hosts
    
  8. Login to the Raspberry Pi 4 using ssh and check the release
    ssh rpi@$ipaddress
    sudo su -
    cat /etc/os-release

    Output:

    root@microshift:~# cat /etc/os-release
    NAME="Pop!_OS"
    VERSION="22.04 LTS"
    ID=pop
    ID_LIKE="ubuntu debian"
    PRETTY_NAME="Pop!_OS 22.04 LTS"
    VERSION_ID="22.04"
    HOME_URL="https://pop.system76.com"
    SUPPORT_URL="https://support.system76.com"
    BUG_REPORT_URL="https://github.com/pop-os/pop/issues"
    PRIVACY_POLICY_URL="https://system76.com/privacy"
    VERSION_CODENAME=jammy
    UBUNTU_CODENAME=jammy
    LOGO=distributor-logo-pop-os
    
  9. Update kernel parameters: concatenate onto the end of the existing line (do not add a new line) in /boot/firmware/cmdline.txt (or /boot/cmdline.txt) and reboot
     cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

    A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. Cgroups are a key component of containers because there are often multiple processes running in a container that you need to control together. In MicroShift, cgroups are used to implement resource requests and limits and the corresponding QoS classes at the pod level.
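
    For illustration, here is a hedged sketch of a pod spec (the name and image are hypothetical, not from this article) that is assigned the Burstable QoS class because its requests are lower than its limits; the kubelet enforces these values through cgroups:

    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo # hypothetical example
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          requests:
            memory: "64Mi"  # scheduler guarantee, enforced via the memory cgroup
            cpu: "250m"     # enforced via cpu shares/quota
          limits:
            memory: "128Mi" # hard cap; exceeding it OOM-kills the container
            cpu: "500m"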

    # After reboot
    root@microshift:~# cat /proc/cmdline
    coherent_pool=1M 8250.nr_uarts=0 snd_bcm2835.enable_compat_alsa=0 snd_bcm2835.enable_hdmi=1 video=HDMI-A-1:1920x1200M@60 smsc95xx.macaddr=E4:5F:01:2E:D8:95 vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000  dwc_otg.lpm_enable=0 console=tty1 root=LABEL=writable rootfstype=ext4 elevator=deadline rootwait fixrtc cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
    
    root@pop-os:~# mount | grep cgroup # Check that memory and cpuset are present
    cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
    
    root@pop-os:~# cat /proc/cgroups | column -t # Check that memory and cpuset are present
    #subsys_name  hierarchy  num_cgroups  enabled
    cpuset        0          242          1
    cpu           0          242          1
    cpuacct       0          242          1
    blkio         0          242          1
    memory        0          242          1
    devices       0          242          1
    freezer       0          242          1
    net_cls       0          242          1
    perf_event    0          242          1
    net_prio      0          242          1
    hugetlb       0          242          1
    pids          0          242          1
    rdma          0          242          1
    misc          0          242          1
    
  10. Install VXLAN support, which is required for flannel networking. References: vxlan required for flannel, vxlan failing to route, linux-modules-extra-raspi

    modprobe vxlan
    

    Output:

    root@microshift:~# modprobe vxlan
    modprobe: FATAL: Module vxlan not found in directory /lib/modules/5.15.0-1011-raspi

    Let’s install the linux-modules-extra-raspi to fix this

    # vxlan modules were moved by upstream Ubuntu 21.10 to this separate package
    apt install -y linux-modules-extra-raspi
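
    After the package is installed, verify that the vxlan module now loads:

    modprobe vxlan
    lsmod | grep vxlan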
    
  11. Enable wifi if not done earlier (Optional)
    root@microshift:~# nmcli dev status
    DEVICE         TYPE      STATE        CONNECTION
    eth0           ethernet  connected    Wired connection 1
    wlan0          wifi      unavailable  --
    p2p-dev-wlan0  wifi-p2p  unavailable  --
    lo             loopback  unmanaged    --
    root@microshift:~# nmcli radio wifi
    disabled
    root@microshift:~# nmcli radio wifi on
    root@microshift:~# nmcli device wifi list # Note your network-ssid
    root@microshift:~# nmcli dev wifi connect network-ssid --ask
    
  12. Resize the ext4 partition to use the full size
    root@microshift:~/microshift# fdisk -lu
    Disk /dev/mmcblk0: 58.24 GiB, 62534975488 bytes, 122138624 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x1bf8c317
    
    Device         Boot  Start      End  Sectors  Size Id Type
    /dev/mmcblk0p1 *      2048   524287   522240  255M  c W95 FAT32 (LBA)
    /dev/mmcblk0p2      524288 16777215 16252928  7.8G 83 Linux
    root@microshift:~/microshift# fdisk /dev/mmcblk0
    
    Welcome to fdisk (util-linux 2.37.2).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    
    This disk is currently in use - repartitioning is probably a bad idea.
    It's recommended to umount all file systems, and swapoff all swap
    partitions on this disk.
    
    
    Command (m for help): d
    Partition number (1,2, default 2): 2
    
    Partition 2 has been deleted.
    
    Command (m for help): n
    Partition type
       p   primary (1 primary, 0 extended, 3 free)
       e   extended (container for logical partitions)
    Select (default p): p
    Partition number (2-4, default 2):
    First sector (524288-122138623, default 524288):
    Last sector, +/-sectors or +/-size{K,M,G,T,P} (524288-122138623, default 122138623):
    
    Created a new partition 2 of type 'Linux' and of size 58 GiB.
    Partition #2 contains a ext4 signature.
    
    Do you want to remove the signature? [Y]es/[N]o: n
    
    Command (m for help): w
    
    The partition table has been altered.
    Syncing disks.
    
    root@microshift:~/microshift# resize2fs /dev/mmcblk0p2
    resize2fs 1.46.5 (30-Dec-2021)
    Filesystem at /dev/mmcblk0p2 is mounted on /; on-line resizing required
    old_desc_blocks = 1, new_desc_blocks = 8
    The filesystem on /dev/mmcblk0p2 is now 15201792 (4k) blocks long.
    
    root@microshift:~/microshift# fdisk -lu
    Disk /dev/mmcblk0: 58.24 GiB, 62534975488 bytes, 122138624 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x1bf8c317
    
    Device         Boot  Start       End   Sectors  Size Id Type
    /dev/mmcblk0p1 *      2048    524287    522240  255M  c W95 FAT32 (LBA)
    /dev/mmcblk0p2      524288 122138623 121614336   58G 83 Linux
    

Install sense_hat and RTIMULib on Pop!_OS

Install the dependencies for SenseHat

apt install -y python3 python3-dev build-essential autoconf libtool pkg-config cmake libssl-dev openssl libcurl4-openssl-dev i2c-tools

The Sense HAT is an add-on board for the Raspberry Pi. It has an 8×8 RGB LED matrix and a five-button joystick, and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, Humidity. If you have the Sense HAT attached, test it with i2cdetect.

root@ubuntu:~# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Add the i2c-dev line to /etc/modules:

cat << EOF >> /etc/modules
i2c-dev
EOF
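
To load the module immediately without rebooting, you can also run:

modprobe i2c-dev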

Create the file /etc/udev/rules.d/99-i2c.rules with the following contents:

cat << EOF >> /etc/udev/rules.d/99-i2c.rules
KERNEL=="i2c-[0-7]",MODE="0666"
EOF

Like the Raspberry Pi build of Ubuntu Server, this image comes with the Industrial I/O modules preloaded. We get initialization errors on some of the sensors because the Industrial I/O modules grab the i2c sensors on the Sense HAT and refuse to let them go or allow them to be read correctly. Check this with “lsmod | grep st_”.

root@microshift:~# lsmod | grep st_
st_magn_spi            16384  0
st_pressure_spi        16384  0
st_sensors_spi         16384  2 st_pressure_spi,st_magn_spi
st_magn_i2c            16384  0
st_magn                20480  2 st_magn_i2c,st_magn_spi
st_pressure_i2c        16384  0
st_pressure            20480  2 st_pressure_i2c,st_pressure_spi
st_sensors_i2c         16384  2 st_pressure_i2c,st_magn_i2c
st_sensors             28672  6 st_pressure,st_pressure_i2c,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi
industrialio_triggered_buffer    16384  2 st_pressure,st_magn
industrialio          102400  9 st_pressure,industrialio_triggered_buffer,st_sensors,st_pressure_i2c,kfifo_buf,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi

We need to blacklist the modules

cat << EOF > /etc/modprobe.d/blacklist-industialio.conf
blacklist st_magn_spi
blacklist st_pressure_spi
blacklist st_sensors_spi
blacklist st_pressure_i2c
blacklist st_magn_i2c
blacklist st_pressure
blacklist st_magn
blacklist st_sensors_i2c
blacklist st_sensors
blacklist industrialio_triggered_buffer
blacklist industrialio
EOF

reboot

Log back in to the Raspberry Pi 4 and check the i2cdetect output

ssh rpi@$ipaddress
sudo su -
cat /boot/firmware/config.txt
i2cdetect -y 1
i2cdetect -F 1

Output:

root@microshift:~# cat /boot/firmware/config.txt
[pi4]
max_framebuffers=2

[all]
kernel=vmlinuz
cmdline=cmdline.txt
initramfs initrd.img followkernel

# Enable the audio output, I2C and SPI interfaces on the GPIO header
dtparam=audio=on
dtparam=i2c_arm=on
dtparam=spi=on

# Enable the KMS ("full" KMS) graphics overlay, and allocate 128Mb to the GPU
# memory. The full KMS overlay is required for X11 application support under
# wayland
dtoverlay=vc4-kms-v3d
gpu_mem=128

# Uncomment the following to enable the Raspberry Pi camera module firmware.
# Be warned that there *may* be incompatibilities with the "full" KMS overlay
#start_x=1

# Comment out the following line if the edges of the desktop appear outside
# the edges of your display
disable_overscan=1

# If you have issues with audio, you may try uncommenting the following line
# which forces the HDMI output into HDMI mode instead of DVI (which doesn't
# support audio output)
#hdmi_drive=2

# If you have a CM4, uncomment the following line to enable the USB2 outputs
# on the IO board (assuming your CM4 is plugged into such a board)
#dtoverlay=dwc2,dr_mode=host

# Config settings specific to arm64
arm_64bit=1
dtoverlay=dwc2

# Enable 4k@60Hz (disable if system overheats)
hdmi_enable_4kp60=1

root@microshift:~# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

root@microshift:~# i2cdetect -F 1
Functionalities implemented by /dev/i2c-1:
I2C                              yes
SMBus Quick Command              yes
SMBus Send Byte                  yes
SMBus Receive Byte               yes
SMBus Write Byte                 yes
SMBus Read Byte                  yes
SMBus Write Word                 yes
SMBus Read Word                  yes
SMBus Process Call               yes
SMBus Block Write                yes
SMBus Block Read                 no
SMBus Block Process Call         no
SMBus PEC                        yes
I2C Block Write                  yes
I2C Block Read                   yes

If you see output as follows below, then shutdown, power off, and power on the Raspberry Pi. The "i2cdetect -y 1" should show the correct output as seen above.

root@microshift:~# i2cdetect -y 1 # i2cdetect 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         08 09 0a 0b 0c 0d 0e 0f
10: 10 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f
20: 20 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f
30: 30 31 32 33 34 35 36 37 38 39 3a 3b 3c 3d 3e 3f
40: 40 41 42 43 44 45 UU 47 48 49 4a 4b 4c 4d 4e 4f
50: 50 51 52 53 54 55 56 57 58 59 5a 5b 5c 5d 5e 5f
60: 60 61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f
70: 70 71 72 73 74 75 76 77

Install the RTIMULib. This is required to use the SenseHat.

git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11 # Ctrl-C to break

cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10 # Ctrl-C to break

# Optional
cd /root/RTIMULib/Linux/RTIMULibDemoGL
apt-get -y install qtbase5-dev qtchooser qt5-qmake qtbase5-dev-tools
qmake
make -j4
make install
RTIMULibDemoGL # Run this on the Graphical User Interface


Install the sense_hat

cd ~
apt -y install python3-pip
pip3 install Cython Pillow numpy sense_hat smbus

Output:

root@microshift:~# pip3 install Cython Pillow numpy sense_hat smbus
Collecting Cython
  Downloading Cython-0.29.32-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl (1.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 6.2 MB/s eta 0:00:00
Collecting Pillow
  Downloading Pillow-9.2.0-cp310-cp310-manylinux_2_28_aarch64.whl (3.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.1/3.1 MB 6.5 MB/s eta 0:00:00
Collecting numpy
  Downloading numpy-1.23.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (13.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.9/13.9 MB 3.3 MB/s eta 0:00:00
Collecting sense_hat
  Downloading sense_hat-2.4.0-py3-none-any.whl (17 kB)
Collecting smbus
  Downloading smbus-1.1.post2.tar.gz (104 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 105.0/105.0 KB 4.1 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: smbus
  Building wheel for smbus (setup.py) ... done
  Created wheel for smbus: filename=smbus-1.1.post2-cp310-cp310-linux_aarch64.whl size=40770 sha256=be2d61090b51e2d380e9168adc5407c4a5653ee74dc385507f8e76f281ceb508
  Stored in directory: /root/.cache/pip/wheels/42/c2/24/5c3e4f44425dfc5482f32d21d1cb894f956a72300367cd3c76
Successfully built smbus
Installing collected packages: smbus, Pillow, numpy, Cython, sense_hat
Successfully installed Cython-0.29.32 Pillow-9.2.0 numpy-1.23.2 sense_hat-2.4.0 smbus-1.1.post2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv

The sense_hat library from 2.3.1 onwards adds support for the new TCS34725, but it shows a warning for the Sense HAT. You may instead install sense_hat==2.2.0, as shown below.
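
For example, to pin the older release:

pip3 install sense_hat==2.2.0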

Test the SenseHat samples for the Sense Hat's LED matrix and sensors.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd microshift
cd raspberry-pi/sensehat-fedora-iot

# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt

# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt

# The first time you run temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py 

# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py

# Find Magnetic North
python3 compass.py

You can also try out the gravity_ball, dice, egg drop, life cycles.

Test the USB camera - Install the latest pygame.

pip3 install pygame --upgrade
python3 testcam.py # It will create a file 101.bmp

You will see the error “ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Connection refused” because Pop!_OS 22.04 uses PipeWire; you cannot install pulseaudio.

Install MicroShift

Clone the microshift repo so we can run the install.sh

sudo su -
git clone https://github.com/thinkahead/microshift.git
cd microshift

Although crio 1.24 is available for arm64 xUbuntu_22.04, setting CRIO_VERSION=1.24 did not work for me. The Node STATUS did not go to Ready, and the following error appeared in the crio logs.

Aug 27 13:31:03 microshift.example.com microshift[60174]: E0827 13:31:03.764662   60174 kubelet.go:2223] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"

Therefore, hardcode the DISTRO=ubuntu (instead of pop) and OS=xUbuntu_20.04. We will use CRIO_VERSION=1.21 in the install.sh for arm64 xUbuntu_20.04.

root@microshift:~/microshift# git diff ./install.sh
diff --git a/install.sh b/install.sh
index 6d3a8b86..e77312e8 100755
--- a/install.sh
+++ b/install.sh
@@ -17,6 +17,7 @@ CONFIG_ENV_ONLY=${CONFIG_ENV_ONLY:=false}
 # Function to get Linux distribution
 get_distro() {
     DISTRO=$(egrep '^(ID)=' /etc/os-release| sed 's/"//g' | cut -f2 -d"=")
+    DISTRO=ubuntu
     if [[ $DISTRO != @(rhel|fedora|centos|ubuntu) ]]
     then
       echo "This Linux distro is not supported by the install script"
@@ -161,6 +162,7 @@ install_crio() {
       ;;
       "ubuntu")
         OS=xUbuntu_$OS_VERSION
+        OS=xUbuntu_20.04
         KEYRINGS_DIR=/usr/share/keyrings

Run the install script

./install.sh

We can get more details about the microshift service with

systemctl show microshift.service

To inspect the microshift systemd service, look at the file /lib/systemd/system/microshift.service. It shows that the microshift binary is in the /usr/local/bin/ directory.

root@microshift:~/microshift# cat /lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
After=crio.service

[Service]
WorkingDirectory=/usr/local/bin/
ExecStart=microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

To start microshift and check the status and logs, you can run

systemctl start microshift
systemctl status microshift
journalctl -u microshift -f

Install the oc and kubectl client

ARCH=arm64
cd /tmp
export OCP_VERSION=4.9.11 && \
    curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf oc.tar.gz && \
    rm -f oc.tar.gz && \
    install -t /usr/local/bin {kubectl,oc} && \
    rm -f {README.md,kubectl,oc}
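
Optionally, verify the installed clients:

oc version --client
kubectl version --client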

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
#watch "kubectl get nodes;kubectl get pods -A;crictl pods;crictl images"
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Output when microshift is started

NAME                     STATUS   ROLES    AGE   VERSION
microshift.example.com   Ready    <none>   28m   v1.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-2nxcj                 1/1     Running   0          28m
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-ntmtv   1/1     Running   0          28m
openshift-dns                   dns-default-dddqr                     2/2     Running   0          28m
openshift-dns                   node-resolver-xgx6z                   1/1     Running   0          28m
openshift-ingress               router-default-85bcfdd948-7cdmf       1/1     Running   0          28m
openshift-service-ca            service-ca-7764c85869-ks92j           1/1     Running   0          28m

Install Podman

Although we do not need podman for microshift, we will build images and test some containers. So, let’s install podman.

apt install -y podman buildah skopeo

Samples to run on MicroShift

We will run samples that show the use of dynamic persistent volumes, the SenseHat, and the USB camera.

1. InfluxDB/Telegraf/Grafana

The source code for this influxdb sample is available on github.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb

Replace the coreos nodename in the persistent volume claims with microshift.example.com (our current nodename)

sed -i "s|coreos|microshift.example.com|" influxdb-data-dynamic.yaml
sed -i "s|coreos|microshift.example.com|" grafana/grafana-data-dynamic.yaml

This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PV.

  annotations:
    kubevirt.io/provisionOnNode: microshift.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner
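
For reference, a complete claim following this pattern looks roughly like the sketch below (the name and size here are illustrative, not copied from the sample):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: influxdb-data # illustrative name
  annotations:
    kubevirt.io/provisionOnNode: microshift.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi # illustrative size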

We create and push the “measure:latest” image using the Dockerfile. If you want to run all the steps in a single command using the prebuilt image, just execute the runall-balena-dynamic.sh.

./runall-balena-dynamic.sh

The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.

Finally, after you are done working with this sample, you can run the deleteall-balena-dynamic.sh

./deleteall-balena-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes.

2. Node Red live data dashboard with SenseHat sensor charts

We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image “karve/nodered:arm64”

cd docker-custom/
# Replace docker with podman in docker-debian.sh and run it
./docker-debian.sh
podman push karve/nodered:arm64
cd ..

Deploy Node Red with a persistent volume for /data within the Node Red container

mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml
oc get routes
oc logs deployment/nodered-deployment -f

Add the IP address of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your laptop and browse to http://nodered-svc-nodered.cluster.local/

The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 or Flow 2 has already been imported from the nodered sample. This import into Node Red can be done manually under “Import Nodes”, followed by a click on “Deploy”.

Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see Home by default. You can see the state of the joystick: Up, Down, Left, Right, or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT. If you selected Flow 1, you can click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED.

You can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:

cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml -n nodered
oc project default
oc delete project nodered

3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 2.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection, which sends pictures and WebSocket chat messages to Node Red when a person is detected, using a pod in MicroShift.

podman build -f Dockerfile -t docker.io/karve/object-detection-raspberrypi4 .
podman push docker.io/karve/object-detection-raspberrypi4:latest

Update the env WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection.yaml to point to your Raspberry Pi 4 IP address.

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.227

oc project default
oc apply -f object-detection.yaml

We will see pictures being sent to Node Red when a person is detected, and chat messages as follows at http://nodered-svc-nodered.cluster.local/chat

When we are done testing, we can delete the deployment

oc delete -f object-detection.yaml

4. Running a Virtual Machine Instance on MicroShift

Install KVM on the host and validate the Host Virtualization Setup. The virt-host-validate command validates that the host is configured in a suitable way to run the libvirt hypervisor driver qemu.

sudo apt install -y virt-manager libvirt0 qemu-system
#vi /etc/firewalld/firewalld.conf # FirewallBackend=iptables
#systemctl restart firewalld
virt-host-validate qemu

Then find the latest KubeVirt Operator.

LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST

LATEST=20220827 # If the latest version does not work
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt

We can build the OKD Web Console (codename “bridge”) from source as mentioned in Part 9. Here we will run “bridge” as a container image within MicroShift.

cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'
vi okd-web-console-install.yaml # Replace token and ip address

Replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value https://192.168.1.209:6443 with your IP address, and set the secretRef token for BRIDGE_K8S_AUTH_BEARER_TOKEN to the console-token-* name from the two secret names above in okd-web-console-install.yaml. Then apply/create the okd-web-console-install.yaml.
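
To pick out the console-token-* name in a single step, here is a sketch that combines the two jsonpath queries above:

oc get serviceaccount console -n kube-system -o jsonpath='{.secrets[*].name}' | tr ' ' '\n' | grep console-token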

oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc logs deployment/console-deployment -f -n kube-system

When the kubevirt and console pods are started, the output is:

root@microshift:~/microshift/raspberry-pi/console# watch "oc get nodes;oc get pods -A"
NAME                     STATUS   ROLES    AGE   VERSION
microshift.example.com   Ready    <none>   82m   v1.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     console-deployment-6885cf69cb-vsm95   1/1     Running   0          50s
kube-system                     kube-flannel-ds-2nxcj                 1/1     Running   0          82m
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-ntmtv   1/1     Running   0          82m
kubevirt                        virt-api-597df68c96-xrbrt             1/1     Running   0          2m47s
kubevirt                        virt-controller-7b9664ccdb-c57ll      1/1     Running   0          2m1s
kubevirt                        virt-controller-7b9664ccdb-rprnx      1/1     Running   0          2m1s
kubevirt                        virt-handler-wg62h                    1/1     Running   0          2m1s
kubevirt                        virt-operator-6855fc4f5b-6lclp        1/1     Running   0          4m28s
kubevirt                        virt-operator-6855fc4f5b-s6kmq        1/1     Running   0          4m28s
openshift-dns                   dns-default-dddqr                     2/2     Running   0          82m
openshift-dns                   node-resolver-xgx6z                   1/1     Running   0          82m
openshift-ingress               router-default-85bcfdd948-7cdmf       1/1     Running   0          82m
openshift-service-ca            service-ca-7764c85869-ks92j           1/1     Running   0          82m

Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/

We can optionally preload the fedora image into crio

crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.

cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods

The output for the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”.

VM Launch Flow

Note down the IP address of the vmi-fedora Virtual Machine Instance. Connect to the VMI directly from the Raspberry Pi 4 with fedora as the password. It may take another minute after the VMI reaches the Running state before you can ssh to the instance.

oc get vmi
ssh -o StrictHostKeyChecking=no fedora@$vmipaddress ping -c 2 google.com
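
To capture the address into the $vmipaddress variable used above, a jsonpath query like the following should work (a sketch; it assumes the first interface in the VMI status carries the pod-network address):

vmipaddress=$(oc get vmi vmi-fedora -o jsonpath='{.status.interfaces[0].ipAddress}')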

Output:

root@microshift:~/microshift/raspberry-pi/vmi# oc get vmi,pods
NAME                                            AGE   PHASE     IP           NODENAME                 READY
virtualmachineinstance.kubevirt.io/vmi-fedora   12m   Running   10.42.0.19   microshift.example.com   True

NAME                                 READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vmi-fedora-t8cz7   2/2     Running   0          12m
root@microshift:~/microshift/raspberry-pi/vmi# ssh fedora@10.42.0.19 ping -c2 google.com
The authenticity of host '10.42.0.19 (10.42.0.19)' can't be established.
ED25519 key fingerprint is SHA256:JGAlJh15te3r1pgqwGk36jidOuD1MewwhseinVqH9kI.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.19' (ED25519) to the list of known hosts.
fedora@10.42.0.19's password:
PING google.com (142.251.40.174) 56(84) bytes of data.
64 bytes from lga25s81-in-f14.1e100.net (142.251.40.174): icmp_seq=1 ttl=58 time=5.28 ms
64 bytes from lga25s81-in-f14.1e100.net (142.251.40.174): icmp_seq=2 ttl=58 time=5.18 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 5.175/5.227/5.280/0.052 ms

A second way is to create a Pod to run the ssh client and connect to the Fedora VM from that pod. Let’s create the openssh-client pod:

oc run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client

or

oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
#oc attach sshclient -c sshclient -i -t

Then, ssh to the Fedora VMI from this openssh-client container.

Output:

root@microshift:~/microshift/raspberry-pi/vmi# oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ssh fedora@10.42.0.19 ping -c2 google.com
The authenticity of host '10.42.0.19 (10.42.0.19)' can't be established.
ED25519 key fingerprint is SHA256:JGAlJh15te3r1pgqwGk36jidOuD1MewwhseinVqH9kI.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.19' (ED25519) to the list of known hosts.
fedora@10.42.0.19's password:
PING google.com (142.251.40.174) 56(84) bytes of data.
64 bytes from lga25s81-in-f14.1e100.net (142.251.40.174): icmp_seq=1 ttl=58 time=6.55 ms
64 bytes from lga25s81-in-f14.1e100.net (142.251.40.174): icmp_seq=2 ttl=58 time=4.83 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.827/5.690/6.554/0.863 ms
/ # exit
Session ended, resume using 'oc attach sshclient -c sshclient -i -t' command when the pod is running
pod "sshclient" deleted

A third way to connect to the VM is to use virtctl. You can compile your own virtctl as described in Part 9. To simplify, we copy the virtctl arm64 binary from a prebuilt container image to /usr/local/bin on the Raspberry Pi 4.

id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id

Output:

root@microshift:~/microshift/raspberry-pi/vmi# id=$(podman create docker.io/karve/kubevirt:arm64)
Trying to pull docker.io/karve/kubevirt:arm64...
Getting image source signatures
Copying blob 7065f6098427 done
Copying config 1c7a5aa443 done
Writing manifest to image destination
Storing signatures
root@microshift:~/microshift/raspberry-pi/vmi# podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
root@microshift:~/microshift/raspberry-pi/vmi# podman rm -v $id
eb080ccbfef6c7ffa3d09c30baa3d55b881e380baf64fb8d38c48e3f99172d29
root@microshift:~/microshift/raspberry-pi/vmi# virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]

vmi-fedora login: fedora
Password:
[fedora@vmi-fedora ~]$ ping -c2 google.com
PING google.com (142.250.80.46) 56(84) bytes of data.
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=1 ttl=116 time=4.18 ms
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=2 ttl=116 time=3.60 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.601/3.891/4.182/0.290 ms
[fedora@vmi-fedora ~]$ # ^] to disconnect
root@microshift:~/microshift/raspberry-pi/vmi#

We can look at the VM in the console:

VM in Console


When done, we can delete the VMI

root@microshift:~/microshift/raspberry-pi/vmi# oc delete -f vmi-fedora.yaml
virtualmachineinstance.kubevirt.io "vmi-fedora" deleted

Also delete the KubeVirt operator

oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml

5. Use .NET to drive a Raspberry Pi Sense HAT

We will run the .NET sample to retrieve sensor values from the Sense HAT, respond to joystick input, and drive the LED matrix. The source code is in github.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/dotnet

Build and push the arm64v8 image “docker.io/karve/sensehat-dotnet”. The Dockerfile uses the sensehat-quickstart-1.sh to install .NET and build the SenseHat.Quickstart sample.

apt -y install buildah
buildah bud -t docker.io/karve/sensehat-dotnet .
podman push docker.io/karve/sensehat-dotnet

You may test the sample directly using podman. The sensehat-quickstart-2.sh uses the dotnet run command to run the sample.

podman run --privileged -d docker.io/karve/sensehat-dotnet

After you are done, stop and remove the container using podman

root@microshift:~/microshift/raspberry-pi/dotnet# podman ps
CONTAINER ID  IMAGE                                   COMMAND               CREATED         STATUS             PORTS       NAMES
a073d27b5063  docker.io/karve/sensehat-dotnet:latest  bash -c . ~/.bash...  11 seconds ago  Up 11 seconds ago              youthful_brattain
root@microshift:~/microshift/raspberry-pi/dotnet# podman stop youthful_brattain
youthful_brattain
root@microshift:~/microshift/raspberry-pi/dotnet# podman rm youthful_brattain
a073d27b50634cf735c1a39c0ece1ad795a4213441a8780b7e0a4845aace4b97

Now let’s run the sample in MicroShift

oc new-project dotnet
oc apply -f dotnet.yaml
oc logs deployment/dotnet-deployment -f

We can observe the console log output as sensor data is displayed. The LED matrix displays a yellow pixel on a field of blue. Holding the joystick in any direction moves the yellow pixel in that direction. Clicking the center joystick button causes the background to switch from blue to red.

Temperature Sensor 1: 38.2°C
Temperature Sensor 2: 37.4°C
Pressure: 1004.04 hPa
Altitude: 83.29 m
Acceleration: <-0.024108887, -0.015258789, 0.97961426> g
Angular rate: <2.8270676, 0.075187966, 0.30827066> DPS
Magnetic induction: <-0.15710449, 0.3963623, -0.51342773> gauss
Relative humidity: 38.6%
Heat index: 43.2°C
Dew point: 21.5°C
…

When we are done, we can delete the deployment

oc delete -f dotnet.yaml

Cleanup MicroShift

We can use the cleanup.sh script available on github to clean up the pods and images. If you already cloned the microshift repo from github, you have the script in the ~/microshift/hack directory.

cd ~/microshift/hack
./cleanup.sh

Containerized MicroShift on Pop!_OS (64 bit)

We can run MicroShift within containers in two ways:
  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, CRI-O Systemd service runs directly on the host and data is stored at /var/lib/microshift and /var/lib/kubelet on the host VM.
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a Podman container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only.

MicroShift Containerized

We will use a new microshift.service that runs microshift in a podman container using the prebuilt image and a podman volume. The rest of the pods run using crio on the host.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target crio.service
After=network-online.target crio.service
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
  --cidfile=%t/%n.ctr-id \
  --cgroups=no-conmon \
  --rm \
  --replace \
  --sdnotify=container \
  --label io.containers.autoupdate=registry \
  --network=host \
  --privileged \
  -d \
  --name microshift \
  -v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
  -v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
  -v microshift-data:/var/lib/microshift:rw,rshared \
  -v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
  -v /var/log:/var/log \
  -v /etc:/etc quay.io/microshift/microshift:latest
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target
EOF


systemctl daemon-reload
systemctl enable --now crio microshift
podman ps -a
podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images;podman ps"

After microshift is started, we can run the samples shown earlier.

After we are done using microshift, we can stop and remove microshift

systemctl stop microshift
podman volume rm microshift-data

Alternatively, delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.

podman stop microshift && podman volume rm microshift-data

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.

MicroShift Containerized All-In-One

Let’s stop crio on the host; we will be creating an all-in-one container in podman that runs crio within the container.

systemctl stop crio
systemctl disable crio
mkdir /var/hpvolumes

We will run the all-in-one microshift in podman using prebuilt images (replace the image in the podman run command below with the latest arm64 image).

podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift-aio.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64

Now that you know the podman command to start the microshift all-in-one, you may alternatively use the following microshift service.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift all-in-one
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm --replace -d --name microshift -h microshift-aio.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -v /lib/modules:/lib/modules --label io.containers.autoupdate=registry -p 6443:6443 -p 80:80 quay.io/microshift/microshift-aio:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target 
EOF

Then run:

systemctl daemon-reload
systemctl start microshift

On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above.

podman volume inspect microshift-data
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman ps;podman exec -it microshift crictl ps -a"

The crio service is stopped on the Raspberry Pi, so the crictl command will not work directly on the Pi. The crictl commands will work within the microshift container in podman, as shown in the watch command above.

To run the Virtual Machine examples in the all-in-one MicroShift, we need to update the AppArmor profile. The virt-handler invokes the QEMU binary at /usr/libexec/qemu-kvm, which gets blocked by the AppArmor profile for libvirtd on Ubuntu-based systems. Also, the qemu-kvm package on Ubuntu installs the binary with a different location and name (e.g., /usr/bin/qemu-system-aarch64) as seen below:

root@microshift:~/microshift/raspberry-pi/vmi# ls -las /usr/bin/kvm* /usr/bin/qemu-system-aarch64
    0 lrwxrwxrwx 1 root root       19 Jul  6 22:52 /usr/bin/kvm -> qemu-system-aarch64
19608 -rwxr-xr-x 1 root root 20075264 Jul  6 22:52 /usr/bin/qemu-system-aarch64

Set the symbolic link to /usr/libexec/qemu-kvm

sudo ln -s /usr/bin/kvm /usr/libexec/qemu-kvm
vi /etc/apparmor.d/usr.sbin.libvirtd

Add the following line in /etc/apparmor.d/usr.sbin.libvirtd and reload the apparmor service.

  /usr/libexec/qemu-kvm PUx,

This is seen in the image below:

Update the AppArmor Profile


Now, we can run the samples shown earlier. To run the Virtual Machine examples in the all-in-one MicroShift, we need to execute the mount with --make-shared as follows in the microshift container to prevent the “Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount” event from virt-handler.

podman exec -it microshift mount --make-shared /

We may also preload the virtual machine images using "crictl pull".

podman exec -it microshift crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

For the Virtual Machine Instance Sample 4, we can connect to the vmi-fedora by exposing the ssh port of the Virtual Machine Instance as a NodePort service after the instance is started. This NodePort is within the all-in-one pod that is running in podman. If, however, the virtualmachineinstance.kubevirt.io/vmi-fedora stays in the Scheduled state, check the events with “oc get events” to find the cause of the problem.

oc get vmi,pods 
virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
oc get svc vmi-fedora-ssh # Get the nodeport
podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift # Get the podman_ip_address
#oc run -i --tty ssh-proxy --rm --image=karve/alpine-sshclient:arm64 --restart=Never -- /bin/sh -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@$podman_ip_address -p $nodeport"
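
To fill in the $nodeport and $podman_ip_address variables used by the commented command above, a sketch:

nodeport=$(oc get svc vmi-fedora-ssh -o jsonpath='{.spec.ports[0].nodePort}')
podman_ip_address=$(podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift)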

The IP address of the vmi-fedora VMI in the output below is 10.42.0.14, and the all-in-one microshift podman container is at 10.88.0.2. We expose target port 22 on the VM as a service on port 22, which is in turn exposed on the microshift container with the allocated NodePort 31531, as seen below. We then run and exec into a new pod called ssh-proxy based on an image with the openssh-client, and ssh to port 31531 on the all-in-one microshift container using the password fedora. This takes us to port 22 on the VMI as shown below:

root@microshift:~/microshift/raspberry-pi/vmi# oc get vmi,pods
NAME                                            AGE    PHASE     IP           NODENAME                     READY
virtualmachineinstance.kubevirt.io/vmi-fedora   115s   Running   10.42.0.14   microshift-aio.example.com   True

NAME                                 READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vmi-fedora-zzrmb   2/2     Running   0          115s
root@microshift:~/microshift/raspberry-pi/vmi# virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Service vmi-fedora-ssh successfully exposed for vmi vmi-fedora
root@microshift:~/microshift/raspberry-pi/vmi# oc get svc vmi-fedora-ssh # Get the nodeport
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
vmi-fedora-ssh   NodePort   10.43.173.106   <none>        22:31531/TCP   0s
root@microshift:~/microshift/raspberry-pi/vmi# podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift # Get the podman_ip_address
10.88.0.2
root@microshift:~/microshift/raspberry-pi/vmi# oc run -i --tty ssh-proxy --rm --image=karve/alpine-sshclient:arm64 --restart=Never -- /bin/sh -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@10.88.0.2 -p 31531"
If you don't see a command prompt, try pressing enter.

Permission denied, please try again.
fedora@10.88.0.2's password:
[fedora@vmi-fedora ~]$ sudo dnf install -y qemu-guest-agent >/dev/null
[fedora@vmi-fedora ~]$ sudo systemctl enable --now qemu-guest-agent
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.251.40.142) 56(84) bytes of data.
64 bytes from lga25s80-in-f14.1e100.net (142.251.40.142): icmp_seq=1 ttl=116 time=4.20 ms
64 bytes from lga25s80-in-f14.1e100.net (142.251.40.142): icmp_seq=2 ttl=116 time=4.22 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.202/4.208/4.215/0.006 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.88.0.2 closed.
pod "ssh-proxy" deleted

The QEMU guest agent that we installed is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks.

When we are done with the VMI sample, we can delete it

oc delete -f vmi-fedora.yaml

After we are done using MicroShift Containerized All-In-One, we can delete the MicroShift container.

podman rm -f microshift && podman volume rm microshift-data

or if started using systemd, then

systemctl stop microshift && podman volume rm microshift-data
rm -f /usr/lib/systemd/system/microshift.service

Kata Containers

We install Kata Containers from source. We start by installing golang and building the Kata Containers runtime, followed by the initrd and rootfs images, and finally the Kata Containers kernel. Make sure you have qemu installed, as shown in Sample 4 earlier (if not already), with:

sudo apt install -y virt-manager libvirt0 qemu-system
virt-host-validate qemu

Install golang

wget https://golang.org/dl/go1.18.3.linux-arm64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.18.3.linux-arm64.tar.gz
rm -f go1.18.3.linux-arm64.tar.gz

export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
cat << EOF >> /root/.bash_profile
export PATH=\$PATH:/usr/local/go/bin
export GOPATH=/root/go
EOF
#export GO111MODULE=off
go env -w GO111MODULE=auto

Build and install the Kata Containers runtime

go get -d -u github.com/kata-containers/kata-containers
cd /root/go/src/github.com/kata-containers/kata-containers/src/runtime/
make
make install

The build creates the following:

  • runtime binary: /usr/local/bin/kata-runtime and /usr/local/bin/containerd-shim-kata-v2
  • configuration file: /usr/share/defaults/kata-containers/configuration.toml

Check requirements

sudo kata-runtime check --verbose # This will return error because vmlinux.container does not exist

kata-runtime --version
containerd-shim-kata-v2 --version

Kata creates a VM in which to run one or more containers by launching a hypervisor. Kata supports multiple hypervisors. We use QEMU. The hypervisor needs two assets for this task: a Linux kernel and a small root filesystem image to boot the VM. The guest kernel is passed to the hypervisor and used to boot the VM. The default kernel provided in Kata Containers is highly optimized for kernel boot time and minimal memory footprint, providing only those services required by a container workload. The hypervisor uses an image file which provides a minimal root filesystem used by the guest kernel to boot the VM and host the Kata Container. Kata Containers supports both initrd and rootfs based minimal guest images. The default packages provide both an image and an initrd, both of which are created using the osbuilder tool.

Since Kata Containers can run with either an initrd image or a rootfs image, we will build both images but initially use the initrd. We will switch to the rootfs later when running the sample.

Configure to use initrd image

Add initrd = /usr/share/kata-containers/kata-containers-initrd.img in the configuration file /usr/share/defaults/kata-containers/configuration.toml and comment out the default image line with the following:

sudo mkdir -p /etc/kata-containers/
sudo install -o root -g root -m 0640 /usr/share/defaults/kata-containers/configuration.toml /etc/kata-containers
sudo sed -i 's/^\(image =.*\)/# \1/g' /etc/kata-containers/configuration.toml

The initrd line is not added by default, so add the initrd line in /etc/kata-containers/configuration.toml so that it looks as follows:

initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
# image = "/usr/share/kata-containers/kata-containers.img"

Next, we create the initrd and rootfs images. Exactly one of the initrd and image options in the Kata runtime config file must be set, but not both. The main difference between the options is that the initrd (10MB+) is significantly smaller than the rootfs image (100MB+).

Initrd image

We will use podman to build the initrd image. This is done in three steps: create the local rootfs for the initrd image, build the initrd image, and install it. There is a problem with runc when using podman, so we need to switch the DOCKER_RUNTIME to crun.

Create the Local rootfs for initrd image

cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder
./rootfs.sh -l
export distro=ubuntu
time script -fec 'sudo -E GOPATH=$GOPATH DOCKER_RUNTIME=crun AGENT_INIT=yes USE_PODMAN=true ./rootfs.sh ${distro}'

Build an initrd image

cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/initrd-builder
script -fec 'sudo -E DOCKER_RUNTIME=crun AGENT_INIT=yes USE_PODMAN=true ./initrd_builder.sh ${ROOTFS_DIR}'

Install the initrd image

commit=$(git log --format=%h -1 HEAD)
date=$(date +%Y-%m-%d-%T.%N%z)
image="kata-containers-initrd-${date}-${commit}"
sudo install -o root -g root -m 0640 -D kata-containers-initrd.img "/usr/share/kata-containers/${image}"
(cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers-initrd.img)

Rootfs image

Now we build the rootfs image in three steps.

Create a local rootfs for rootfs image

cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/rootfs-builder
time script -fec 'sudo -E GOPATH=$GOPATH DOCKER_RUNTIME=crun USE_PODMAN=true ./rootfs.sh ${distro}' 

Build a rootfs image

cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/osbuilder/image-builder
time script -fec 'sudo -E DOCKER_RUNTIME=crun USE_PODMAN=true ./image_builder.sh ${ROOTFS_DIR}'

Install the rootfs image

commit=$(git log --format=%h -1 HEAD)
date=$(date +%Y-%m-%d-%T.%N%z)
image="kata-containers-${date}-${commit}"
sudo install -o root -g root -m 0640 -D kata-containers.img "/usr/share/kata-containers/${image}"
(cd /usr/share/kata-containers && sudo ln -sf "$image" kata-containers.img)
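
As a sanity check, we can compare the sizes of the two guest images we just installed; the initrd should be noticeably smaller than the rootfs image, as noted earlier (-L dereferences the symlinks):

du -Lh /usr/share/kata-containers/kata-containers*.img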

Build Kata Containers Kernel

apt -y install flex bison bc
go env -w GO111MODULE=auto
go get github.com/kata-containers/packaging
cd $GOPATH/src/github.com/kata-containers/packaging/kernel

The script ./build-kernel.sh tries to apply the patches from ${GOPATH}/src/github.com/kata-containers/packaging/kernel/patches/ when it sets up a kernel. If you want to add a source modification, add a patch on this directory. The script also copies or generates a kernel config file from ${GOPATH}/src/github.com/kata-containers/packaging/kernel/configs/ to .config in the kernel source code. You can modify it as needed. We will use the defaults.
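
For instance, if we did need a source modification, we could drop a patch into the version-specific patches subdirectory before running setup. The patch name and version directory below are hypothetical examples:

# Hypothetical example only; match the subdirectory to the kernel version in use
cp 0001-my-local-fix.patch ${GOPATH}/src/github.com/kata-containers/packaging/kernel/patches/5.4.x/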

./build-kernel.sh setup

After the kernel source code is ready, we build the kernel

cp /root/go/src/github.com/kata-containers/packaging/kernel/configs/fragments/arm64/.config kata-linux-5.4.60-92/.config
./build-kernel.sh build

Install the kernel to the default Kata containers path (/usr/share/kata-containers/)

./build-kernel.sh install
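
We can verify the installed kernel; the install step should create the vmlinux.container and vmlinuz.container symlinks that the earlier kata-runtime check complained were missing:

ls -l /usr/share/kata-containers/vmlinu*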

The /etc/kata-containers/configuration.toml has the following:

# Path to vhost-user-fs daemon.
virtio_fs_daemon = "/usr/libexec/virtiofsd"

So, create a symbolic link as follows:

ln -s /usr/lib/qemu/virtiofsd /usr/libexec/virtiofsd

Cgroup v2 on the host is not yet supported, so we need to switch to cgroup v1. Append the following to the end of the existing line (do not add a new line) in /boot/firmware/cmdline.txt (or /boot/cmdline.txt):

systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller

Then, reboot the Raspberry Pi 4 and log back in as root.

Alternatively, instead of appending the kernel arguments with systemd.unified_cgroup_hierarchy=0, you may run the following after every reboot:

mkdir /sys/fs/cgroup/systemd
mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
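
Either way, to confirm which cgroup version the host is running, check the filesystem type mounted at /sys/fs/cgroup; tmpfs indicates cgroup v1, while cgroup2fs indicates cgroup v2:

stat -fc %T /sys/fs/cgroup/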

Check the output of kata-runtime:

sudo kata-runtime check --verbose

Check the hypervisor.qemu section in configuration.toml:

cat /etc/kata-containers/configuration.toml | awk -v RS= '/\[hypervisor.qemu\]/'

Check the initrd image (kata-containers-initrd.img), the rootfs image (kata-containers.img), and the kernel in the /usr/share/kata-containers directory:

ls -las /usr/share/kata-containers

Create the file /etc/crio/crio.conf.d/50-kata to register the kata runtime with CRI-O:

cat > /etc/crio/crio.conf.d/50-kata << EOF
[crio.runtime.runtimes.kata]
  runtime_path = "/usr/local/bin/containerd-shim-kata-v2"
  runtime_root = "/run/vc"
  runtime_type = "vm"
  privileged_without_host_devices = true
EOF

Restart crio and start microshift (if not already started).

systemctl restart crio
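
If MicroShift was set up as a systemd service as shown earlier, start it with:

systemctl start microshift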

After MicroShift is started, you can apply the kata runtimeclass and run the samples.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/kata/
oc apply -f kata-runtimeclass.yaml
# Start three kata pods
oc apply -f kata-nginx.yaml -f kata-alpine.yaml  -f kata-busybox.yaml

watch "oc get nodes;oc get pods -A;crictl stats -a"

InfluxDB sample - We execute the runall-balena-dynamic.sh script after updating the deployment yamls to use the runtimeClassName: kata.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb/

Update the influxdb-deployment.yaml, telegraf-deployment.yaml and grafana/grafana-deployment.yaml to use the runtimeClassName: kata. With Kata Containers, we do not directly get access to host devices, so we run the measure container as a runc pod. In runc, '--privileged' for a container means that all the /dev/* block devices from the host are mounted into the container, which allows the privileged container to mount any block device from the host.

sed -i '/^    spec:/a \ \ \ \ \ \ runtimeClassName: kata' influxdb-deployment.yaml telegraf-deployment.yaml grafana/grafana-deployment.yaml
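
After the sed, each pod template should carry the runtime class directly under spec, roughly like this fragment (containers omitted):

  template:
    spec:
      runtimeClassName: kata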

Replace the annotation kubevirt.io/provisionOnNode with your node name (microshift.example.com below) and execute the runall-balena-dynamic.sh. This will create a new project influxdb.

oc get nodes # Get the nodename and replace in next line
nodename=microshift.example.com
sed -i "s|kubevirt.io/provisionOnNode:.*| kubevirt.io/provisionOnNode: $nodename|" influxdb-data-dynamic.yaml
sed -i "s| kubevirt.io/provisionOnNode:.*| kubevirt.io/provisionOnNode: $nodename|" grafana/grafana-data-dynamic.yaml

./runall-balena-dynamic.sh

Let’s watch the stats (CPU%, Memory, Disk and Inodes) of the kata container pods:

watch "oc get nodes;oc get pods;crictl stats"

We can look at the RUNTIME_CLASS using custom columns:

root@microshift:~/microshift/raspberry-pi/influxdb# watch "oc get nodes;oc get pods;crictl stats"
NAME                     STATUS   ROLES    AGE     VERSION
microshift.example.com   Ready    <none>   3h31m   v1.21.0
NAME                                   READY   STATUS    RESTARTS   AGE
grafana-855ffb48d8-bqht4               1/1     Running   0          3m49s
influxdb-deployment-6d898b7b7b-7x5gt   1/1     Running   0          4m42s
measure-deployment-58cddb5745-4qsfm    1/1     Running   0          4m27s
telegraf-deployment-d746f5c6-6ml6t     1/1     Running   0          4m14s
CONTAINER           CPU %               MEM                 DISK                INODES
7652d50e66232       0.09                11.59MB             214.7kB             11
ab694a8694c6f       0.04                25.87MB             4.132MB             75
d88aa8a3459f2       0.18                24.82MB             28.93kB             11

root@microshift:~/microshift/raspberry-pi/influxdb# oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName,IP:.status.podIP,IMAGE:.status.containerStatuses[].image -A
NAME                                   STATUS    RUNTIME_CLASS   IP              IMAGE
grafana-855ffb48d8-bqht4               Running   kata            10.42.0.8       docker.io/grafana/grafana:5.4.3
influxdb-deployment-6d898b7b7b-7x5gt   Running   kata            10.42.0.5       docker.io/library/influxdb:1.7.4
measure-deployment-58cddb5745-4qsfm    Running   <none>          10.42.0.6       docker.io/karve/measure:latest
telegraf-deployment-d746f5c6-6ml6t     Running   kata            10.42.0.7       docker.io/library/telegraf:1.10.0
kube-flannel-ds-q96gf                  Running   <none>          192.168.1.227   quay.io/microshift/flannel:4.8.0-0.okd-2021-10-10-030117
kubevirt-hostpath-provisioner-82t8j    Running   <none>          10.42.0.4       quay.io/microshift/hostpath-provisioner:4.8.0-0.okd-2021-10-10-030117
dns-default-4pb2f                      Running   <none>          10.42.0.3       quay.io/microshift/coredns:4.8.0-0.okd-2021-10-10-030117
node-resolver-s8892                    Running   <none>          192.168.1.227   quay.io/microshift/cli:4.8.0-0.okd-2021-10-10-030117
router-default-85bcfdd948-dnp7l        Running   <none>          192.168.1.227   quay.io/microshift/haproxy-router:4.8.0-0.okd-2021-10-10-030117
service-ca-7764c85869-7lgld            Running   <none>          10.42.0.2       quay.io/microshift/service-ca-operator:4.8.0-0.okd-2021-10-10-030117

Check the qemu process. Since we used the initrd image, we can see the -initrd flag in the parameters:

ps -ef | grep qemu

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.

Finally, after you are done working with this sample, you can run the deleteall-balena-dynamic.sh

./deleteall-balena-dynamic.sh

Configure to use the rootfs image

We have been using the initrd image when running the Kata container samples above. Now let's switch to the rootfs image by changing the following lines in /etc/kata-containers/configuration.toml:

#initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
image = "/usr/share/kata-containers/kata-containers.img"

Also disable the image nvdimm by setting the following:

disable_image_nvdimm = true # Default is false
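
These edits can also be scripted; a minimal sketch, assuming the initrd and commented image lines appear exactly as we set them earlier (set disable_image_nvdimm by hand if the line is not present):

# Comment out the initrd line and uncomment the image line
sudo sed -i 's|^initrd =|#initrd =|; s|^# image =|image =|' /etc/kata-containers/configuration.toml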

Restart crio and test with the kata-alpine sample

systemctl restart crio
cd ~/microshift/raspberry-pi/kata/
oc apply -f kata-alpine.yaml

Check the qemu process. We see the rootfs image being used: “root=/dev/vda1 rootflags=data=ordered,errors=remount-ro ro rootfstype=ext4”

ps -ef | grep qemu

We can also execute the Jupyter Notebook samples for Digit Recognition, Object Detection and License Plate Recognition with Kata containers as shown in Part 23.

Conclusion

In this Part 25, we saw multiple options to run MicroShift on the Raspberry Pi 4 with the Pop!_OS (64 bit). We used dynamic persistent volumes to install InfluxDB/Telegraf/Grafana with a dashboard to show SenseHat sensor data. We ran samples that used the Sense Hat/USB camera and worked with a sample that sent the pictures and web socket messages to Node Red when a person was detected. We installed the OKD Web Console and saw how to connect to a Virtual Machine Instance using KubeVirt on MicroShift and also ran a sample that used .NET to drive a Raspberry Pi Sense HAT. Finally, we built and configured Kata containers to run with MicroShift and ran samples to use Kata containers.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift and KubeVirt on ARM devices and if you would like to see something covered in more detail.

#MicroShift #Openshift #containers #crio #Edge #raspberry-pi #popos

