MicroShift – Part 20: Raspberry Pi 4 with Arch Linux

By Alexei Karve posted Fri June 03, 2022 07:04 PM

MicroShift and KubeVirt on Raspberry Pi 4 with Arch Linux

Introduction

MicroShift is a research project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit), and further in Part 9, we looked at Virtualization with MicroShift on the Raspberry Pi 4 with Raspberry Pi OS (64 bit). In Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with CentOS 8 Stream. In Part 6, we deployed MicroShift on the Raspberry Pi 4 with Ubuntu 20.04. In Part 8, we looked at the All-In-One install of MicroShift on balenaOS. In Part 10, Part 11, and Part 12, we deployed MicroShift and KubeVirt on Fedora IoT, Fedora Server and Fedora CoreOS respectively, Part 13 with Ubuntu 22.04, Part 14 on Rocky Linux, Part 15 on openSUSE, Part 16 on Oracle Linux, Part 17 on AlmaLinux, Part 18 on Manjaro, and Part 19 on Kali Linux. In this Part 20, we will work with MicroShift on Arch Linux. We will run a Quarkus sample application, work with an object detection sample, and send messages to Node Red installed on MicroShift. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift. We will also run sample notebooks for object detection and license plate detection.

Arch is a “Do It Yourself” operating system: you can customize every intricate detail yourself. Arch follows a rolling release model. Arch is highly technical at its core, making it an apt distro for expert and power users who can utilize it fully. The Arch User Repository (AUR) is a community-driven repository of packages developed by users; it holds a massive library of installation packages for Arch Linux. The pacman package manager is versatile enough to support the installation of binary packages from the Arch repositories as well as packages compiled from source using makepkg. Support for AppArmor, SELinux and Tomoyo was removed in the Arch kernel.
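As a quick illustration of those two install paths, here is a minimal sketch. The package names (htop, yay) are only examples, not packages this article depends on; with DRY_RUN=1 the helpers just print the commands they would run instead of executing them.

```shell
# Two install paths on Arch: a binary package from the repos (pacman -S),
# and an AUR package cloned and built from source with makepkg.
# With DRY_RUN=1 the run() wrapper prints each command instead of executing it.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

install_repo_pkg() {
  run sudo pacman -S --noconfirm "$1"
}

install_aur_pkg() {
  run git clone "https://aur.archlinux.org/$1.git"
  run sh -c "cd '$1' && makepkg -si --noconfirm"
}

DRY_RUN=1
install_repo_pkg htop   # prints: sudo pacman -S --noconfirm htop
install_aur_pkg yay
```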

Setting up the Raspberry Pi 4 with Arch Linux

To run Arch Linux on a Raspberry Pi 4 via U-Boot, the MicroSDXC card needs to be prepared on another system and then moved to the RPi4. We will create a disk image in a Fedora VM on the MacBook Pro, copy the image out from the VM to the MacBook, and write it to the MicroSDXC card. We essentially follow the steps below, with minor changes, to download Arch Linux ARM and create an image file for writing. We could customize the image further by installing the qemu libraries that let you cross-compile, or by setting up the user and wifi. We will however create a barebones bootable image and do the customization after the boot.

1. Create an image file for writing - We reuse the Fedora 35 VM from Part 1, running in VirtualBox using the Vagrantfile on your MacBook Pro, if you have it handy. We do not need to install MicroShift in the VM, so the config.vm.provision section can be removed if you are creating a new VM.

git clone https://github.com/thinkahead/microshift.git
cd microshift/vagrant
vagrant up
vagrant ssh
sudo su -
cd /vagrant/
dd if=/dev/zero of=archlinux.img bs=1024 count=4194304 # fallocate -l 4G "archlinux.img"

losetup -fP archlinux.img # losetup --find --show archlinux.img
losetup --list
# Partition the loop device
parted --script /dev/loop0 mklabel msdos
parted --script /dev/loop0 mkpart primary fat32 0% 200M
parted --script /dev/loop0 mkpart primary ext4 200M 100%
lsblk /dev/loop0
# Create the FAT filesystem
mkfs.vfat -F32 /dev/loop0p1
# Create the ext4 filesystem
sudo mkfs.ext4 -F /dev/loop0p2
mkdir boot
mount /dev/loop0p1 boot
mkdir root
mount /dev/loop0p2 root
# Download and extract the root filesystem
wget http://il.us.mirror.archlinuxarm.org/os/ArchLinuxARM-rpi-aarch64-latest.tar.gz
yum install bsdtar
bsdtar -xpf "ArchLinuxARM-rpi-aarch64-latest.tar.gz" -C root
sync
# tar -xpf "ArchLinuxARM-rpi-aarch64-latest.tar.gz" -C root
# If you use tar instead of bsdtar, ignore the warning: tar: Ignoring unknown extended header keyword 'SCHILY.fflags'
# Move boot files to the first partition
mv root/boot/* boot
vi boot/boot.txt # Replace fdt_addr_r with fdt_addr in the two booti lines

Thus replace,

[…]
booti ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr_r};
[…]
booti ${kernel_addr_r} - ${fdt_addr_r};
[…]

with

[…]
booti ${kernel_addr_r} ${ramdisk_addr_r}:${filesize} ${fdt_addr};
[…]
booti ${kernel_addr_r} - ${fdt_addr};
[…]
# Run the mkscr in the boot directory
yum -y install uboot-tools # the mkscr uses uboot-tools
cd boot
./mkscr 
cd ..
# Unmount the two partitions
umount boot root
rmdir boot root
losetup -d /dev/loop0 # losetup --detach "/dev/loop0"
exit # Back to vagrant user from root
exit # Back to Macbook Pro from vagrant VM

vagrant plugin install vagrant-scp

vagrant scp :/vagrant/archlinux.img . #  Copy the image from VM to MacBook Pro
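As a sanity check on the dd invocation above, the bs and count values multiply out to exactly 4 GiB, matching the fallocate -l 4G alternative shown in the comment:

```shell
# Verify that bs=1024 with count=4194304 writes exactly 4 GiB:
# 2^10 * 2^22 = 2^32 bytes.
bs=1024
count=4194304
bytes=$((bs * count))
echo "image size: ${bytes} bytes ($((bytes / 1073741824)) GiB)"
# -> image size: 4294967296 bytes (4 GiB)
```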

2. Write the archlinux.img to the MicroSDXC card using balenaEtcher or the Raspberry Pi Imager

3. Optionally, connect a keyboard and monitor to the Raspberry Pi 4 to watch the first boot

4. Insert the MicroSDXC card into the Raspberry Pi 4 and power it on

5. Wait for the Raspberry Pi 4 to boot and acquire an ethernet DHCP lease

6. Find the ethernet DHCP IP address of your Raspberry Pi 4 by running nmap on your MacBook with your subnet

$ sudo nmap -sn 192.168.1.0/24
Nmap scan report for alarm.fios-router.home (192.168.1.232)
Host is up (0.071s latency).
MAC Address: E4:5F:01:2E:D8:95 (Raspberry Pi Trading)
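If several hosts answer the scan, the Raspberry Pi can be picked out of the nmap output by its MAC vendor string. A small sketch (find_rpi is a hypothetical helper name, not an nmap feature):

```shell
# Print the IP address of each Raspberry Pi found in `nmap -sn` output.
# nmap reports the host line first, then the MAC vendor line; remember the
# last seen IP and print it when the vendor line says "Raspberry Pi".
find_rpi() {
  awk '/Nmap scan report/ { ip = $NF; gsub(/[()]/, "", ip) }
       /MAC Address:.*Raspberry Pi/ { print ip }'
}
# Usage on the MacBook: sudo nmap -sn 192.168.1.0/24 | find_rpi
```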

7. The default user is alarm with password alarm, and the default root password is root. Login to the IP address of your Raspberry Pi 4 as user alarm; for su, use the root password root.

ssh alarm@$ipaddress
su -
pacman-key --init
pacman-key --populate archlinuxarm
pacman -S --noconfirm sudo
echo '%wheel ALL=(ALL) ALL' >> /etc/sudoers.d/wheel

Output:

$ ssh alarm@192.168.1.232
The authenticity of host '192.168.1.232 (192.168.1.232)' can't be established.
ED25519 key fingerprint is SHA256:zqEjgSuVTM/R5dQ8EG+PSI+N3zJ3/UuTMz88IAHNgIE.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.1.232' (ED25519) to the list of known hosts.
alarm@192.168.1.232's password:
[alarm@alarm ~]$ su -
Password:
[root@alarm ~]# pacman-key --init
gpg: /etc/pacman.d/gnupg/trustdb.gpg: trustdb created
gpg: no ultimately trusted keys found
gpg: starting migration from earlier GnuPG versions
gpg: porting secret keys from '/etc/pacman.d/gnupg/secring.gpg' to gpg-agent
gpg: migration succeeded
==> Generating pacman master key. This may take some time.
gpg: Generating pacman keyring master key...
gpg: key 81F508C6734A9AA2 marked as ultimately trusted
gpg: directory '/etc/pacman.d/gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/etc/pacman.d/gnupg/openpgp-revocs.d/4566CC0F0B53F646461D3D7B81F508C6734A9AA2.rev'
gpg: Done
==> Updating trust database...
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
[root@alarm ~]# pacman-key --populate archlinuxarm
==> Appending keys from archlinuxarm.gpg...
==> Locally signing trusted keys in keyring...
  -> Locally signed 3 keys.
==> Importing owner trust values...
gpg: setting ownertrust to 4
gpg: inserting ownertrust of 4
gpg: setting ownertrust to 4
==> Updating trust database...
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   3  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: depth: 1  valid:   3  signed:   1  trust: 0-, 0q, 0n, 3m, 0f, 0u
gpg: depth: 2  valid:   1  signed:   0  trust: 1-, 0q, 0n, 0m, 0f, 0u
[root@alarm ~]# pacman -S --noconfirm sudo
…
[root@alarm ~]# echo '%wheel ALL=(ALL) ALL' >> /etc/sudoers.d/wheel
[root@alarm ~]# exit
logout
[alarm@alarm ~]$ sudo su -

8. Resize the ext4 partition to use the full size using fdisk and resize2fs

[root@alarm ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
dev             3.8G     0  3.8G   0% /dev
run             3.9G  572K  3.9G   1% /run
/dev/mmcblk0p2  3.7G  1.1G  2.5G  31% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G     0  3.9G   0% /tmp
/dev/mmcblk0p1  188M  122M   66M  65% /boot
tmpfs           780M     0  780M   0% /run/user/1000
[root@alarm ~]# fdisk -lu
Disk /dev/mmcblk0: 58.88 GiB, 63218647040 bytes, 123473920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x38c92572

Device         Boot  Start     End Sectors  Size Id Type
/dev/mmcblk0p1        2048  391167  389120  190M  c W95 FAT32 (LBA)
/dev/mmcblk0p2      391168 8388607 7997440  3.8G 83 Linux
[root@alarm ~]# fdisk /dev/mmcblk0

Welcome to fdisk (util-linux 2.37.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap
partitions on this disk.


Command (m for help): p

Disk /dev/mmcblk0: 58.88 GiB, 63218647040 bytes, 123473920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x38c92572

Device         Boot  Start     End Sectors  Size Id Type
/dev/mmcblk0p1        2048  391167  389120  190M  c W95 FAT32 (LBA)
/dev/mmcblk0p2      391168 8388607 7997440  3.8G 83 Linux

Command (m for help): d
Partition number (1,2, default 2): 2

Partition 2 has been deleted.

Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (2-4, default 2):
First sector (391168-123473919, default 391168):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (391168-123473919, default 123473919):

Created a new partition 2 of type 'Linux' and of size 58.7 GiB.
Partition #2 contains a ext4 signature.

Do you want to remove the signature? [Y]es/[N]o: n

Command (m for help): w

The partition table has been altered.
Syncing disks.

[root@alarm ~]# resize2fs /dev/mmcblk0p2
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/mmcblk0p2 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 8
The filesystem on /dev/mmcblk0p2 is now 15385344 (4k) blocks long.

[root@alarm ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
dev             3.8G     0  3.8G   0% /dev
run             3.9G  572K  3.9G   1% /run
/dev/mmcblk0p2   58G  1.1G   55G   2% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G     0  3.9G   0% /tmp
/dev/mmcblk0p1  188M  122M   66M  65% /boot
tmpfs           780M     0  780M   0% /run/user/1000
[root@alarm ~]# fdisk -lu
Disk /dev/mmcblk0: 58.88 GiB, 63218647040 bytes, 123473920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x38c92572

Device         Boot  Start       End   Sectors  Size Id Type
/dev/mmcblk0p1        2048    391167    389120  190M  c W95 FAT32 (LBA)
/dev/mmcblk0p2      391168 123473919 123082752 58.7G 83 Linux

9. Update the system and add the IPv4 address to /etc/hosts

sudo pacman --noconfirm -Syyu

hostnamectl set-hostname rpi.example.com
echo "$ipaddress rpi rpi.example.com" >> /etc/hosts

10. Optionally, enable wifi using nmcli

pacman -S --noconfirm networkmanager 
systemctl enable --now NetworkManager
sleep 10
nmcli -c no device wifi list # Note your ssid
nmcli device wifi connect $ssid --ask

11. Check the release

cat /etc/os-release
[root@alarm ~]# cat /etc/os-release
NAME="Arch Linux ARM"
PRETTY_NAME="Arch Linux ARM"
ID=archarm
ID_LIKE=arch
BUILD_ID=rolling
ANSI_COLOR="38;2;23;147;209"
HOME_URL="https://archlinuxarm.org/"
DOCUMENTATION_URL="https://archlinuxarm.org/wiki"
SUPPORT_URL="https://archlinuxarm.org/forum"
BUG_REPORT_URL="https://github.com/archlinuxarm/PKGBUILDs/issues"
LOGO=archlinux-logo

12. Update the kernel parameters.

pacman -S --noconfirm vim uboot-tools
vim /boot/boot.txt

Concatenate the following onto the end of the existing setenv bootargs line (do not add a new line) in /boot/boot.txt

 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

Then run,

cd /boot
./mkscr

A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. Cgroups are a key component of containers because there are often multiple processes running in a container that you need to control together. In MicroShift, cgroups are used to implement resource requests and limits and the corresponding QoS classes at the pod level.
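The verification steps after the reboot check /proc/cgroups by eye; the same check can be scripted. A sketch (check_controllers is a hypothetical helper) that reads /proc/cgroups-format input and confirms the two controllers MicroShift needs are enabled:

```shell
# /proc/cgroups columns: subsys_name hierarchy num_cgroups enabled.
# Print "ok" when both the memory and cpuset controllers have enabled=1,
# "missing" otherwise. The header line starts with '#' so it never matches.
check_controllers() {
  awk '($1 == "memory" || $1 == "cpuset") && $4 == 1 { found++ }
       END { if (found == 2) print "ok"; else print "missing" }'
}
# Usage on the Pi after the reboot: check_controllers < /proc/cgroups
```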

reboot

Verify

ssh alarm@$ipaddress
sudo su -
cat /proc/cmdline
mount | grep cgroup # Check that memory and cpuset are present
cat /proc/cgroups | column -t # Check that memory and cpuset are present
top

Output:

[root@rpi ~]# cat /proc/cmdline
console=ttyS1,115200 console=tty0 root=PARTUUID=38c92572-02 rw rootwait smsc95xx.macaddr=e4:5f:01:2e:d8:95 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
[root@rpi ~]# mount | grep cgroup # Check that memory and cpuset are present
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
[root@rpi ~]# cat /proc/cgroups | column -t # Check that memory and cpuset are present
#subsys_name  hierarchy  num_cgroups  enabled
cpuset        0          51           1
cpu           0          51           1
cpuacct       0          51           1
blkio         0          51           1
memory        0          51           1
devices       0          51           1
freezer       0          51           1
net_cls       0          51           1
perf_event    0          51           1
net_prio      0          51           1
hugetlb       0          51           1
pids          0          51           1
rdma          0          51           1

Output of top

Tasks: 123 total,   1 running, 122 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.1 sy,  0.0 ni, 99.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   7796.9 total,   7639.1 free,     71.3 used,     86.6 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   7632.2 avail Mem

Install sense_hat and RTIMULib on Arch Linux

The Sense HAT is an add-on board for the Raspberry Pi. The Sense HAT has an 8×8 RGB LED matrix, a five-button joystick, and includes the following sensors: Inertial Measurement Unit (accelerometer, gyroscope, magnetometer), temperature, barometric pressure, and humidity. If you have the Sense HAT attached, install the libraries.

Install sensehat

pacman -S --noconfirm i2c-tools make cmake gcc python-pip

pip3 install Cython Pillow numpy sense_hat

Load the i2c-dev kernel module and add a config file under /etc/modules-load.d so that it loads automatically on boot.

modprobe i2c-dev
echo "i2c-dev" > /etc/modules-load.d/i2c-dev.conf

Check the Sense Hat with i2cdetect

i2cdetect -y 1

Output

[root@rpi ~]# modprobe i2c-dev
[root@rpi ~]# echo "i2c-dev" > /etc/modules-load.d/i2c-dev.conf
[root@rpi ~]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Create the file /etc/udev/rules.d/99-i2c.rules with the following contents:

cat << EOF > /etc/udev/rules.d/99-i2c.rules
KERNEL=="i2c-[0-7]",MODE="0666"
EOF

The Raspberry Pi build comes with the Industrial I/O modules preloaded. We get initialization errors on some of the sensors because the Industrial I/O modules grab on to the i2c sensors on the Sense HAT and refuse to let them go or allow them to be read correctly. Check this with “lsmod | grep st_”.

[root@rpi ~]# lsmod | grep st_
st_magn_spi            16384  0
st_pressure_spi        16384  0
st_sensors_spi         16384  2 st_pressure_spi,st_magn_spi
st_magn_i2c            16384  0
st_pressure_i2c        16384  0
st_magn                20480  2 st_magn_i2c,st_magn_spi
st_pressure            16384  2 st_pressure_i2c,st_pressure_spi
st_sensors_i2c         16384  2 st_pressure_i2c,st_magn_i2c
st_sensors             28672  6 st_pressure,st_pressure_i2c,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi

We need to blacklist the modules and reboot for the change to take effect.

cat << EOF > /etc/modprobe.d/blacklist-industrialio.conf
blacklist st_magn_spi
blacklist st_pressure_spi
blacklist st_sensors_spi
blacklist st_pressure_i2c
blacklist st_magn_i2c
blacklist st_pressure
blacklist st_magn
blacklist st_sensors_i2c
blacklist st_sensors
blacklist industrialio_triggered_buffer
blacklist industrialio
EOF

reboot

Check the Sense HAT with i2cdetect and verify that the i2c sensors are no longer being held.

ssh alarm@$ipaddress
sudo su -
i2cdetect -y 1
lsmod | grep st_

Output:

[root@rpi ~]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --
[root@rpi ~]# lsmod | grep st_
(no output)
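Rather than scanning the grid by eye, the responding addresses can be extracted from the i2cdetect output. A sketch (i2c_addrs is a hypothetical helper): any cell that is not "--" is either a device that answered (a hex address) or one claimed by a kernel driver (UU).

```shell
# List the responding addresses in an i2cdetect grid, read on stdin.
# Skip the header row (NR == 1) and the row label in column 1, then print
# every cell that is not the empty marker "--".
i2c_addrs() {
  awk 'NR > 1 { for (i = 2; i <= NF; i++) if ($i != "--") print $i }'
}
# Usage: i2cdetect -y 1 | i2c_addrs
```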

Install RTIMULib

pacman -S --noconfirm git
cd ~
git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig

# Optional test the sensors
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11 # Ctrl-C to break

cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10 # Ctrl-C to break

# Optional
pacman -S --noconfirm qt5-base
cd /root/RTIMULib/Linux/RTIMULibDemoGL
qmake-qt5
make -j4
make install

Replace sense_hat.py with the new file that uses SMBus, as shown below, and test the SenseHat samples for the Sense HAT's LED matrix and sensors.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/sensehat-fedora-iot

pip3 install smbus

# Update the python package to use the i2cbus
cp -f sense_hat.py.new /usr/lib/python3.10/site-packages/sense_hat/sense_hat.py

# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt

# First time you run the temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py 

# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt

# Show two digits for multiple numbers
python3 digits.py
sed -i "s/32,32,32/255,255,255/" digits.py # Brighter LEDs
python3 digits.py

# Use the new get_state method from sense_hat.py
python3 joystick.py # U=Up D=Down L=Left R=Right M=Press

# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py

# Find Magnetic North
python3 compass.py

Install MicroShift on the Raspberry Pi 4 Arch Linux host

Install the dependencies and copy the latest MicroShift prebuilt binary. You will also need to set up crio.conf and registries.conf.

pacman -S --noconfirm firewalld cri-o crictl

# Check the registries used in /etc/crio/crio.conf and /etc/containers/registries.conf
echo 'unqualified-search-registries=["docker.io"]' >> /etc/containers/registries.conf

ARCH=arm64
VERSION=$(curl -sL https://api.github.com/repos/redhat-et/microshift/releases | grep tag_name | head -n 1 | cut -d '"' -f 4)
curl -LO https://github.com/redhat-et/microshift/releases/download/$VERSION/microshift-linux-$ARCH
curl -LO https://github.com/redhat-et/microshift/releases/download/$VERSION/release.sha256
BIN_SHA="$(sha256sum microshift-linux-$ARCH | awk '{print $1}')"
KNOWN_SHA="$(grep "microshift-linux-$ARCH" release.sha256 | awk '{print $1}')"
if [[ "$BIN_SHA" != "$KNOWN_SHA" ]]; then
    echo "SHA256 checksum failed" && exit 1
fi
sudo chmod +x microshift-linux-$ARCH
sudo mv microshift-linux-$ARCH /usr/bin/microshift

pacman -S --noconfirm --needed wget
wget https://raw.githubusercontent.com/redhat-et/microshift/main/packaging/systemd/microshift.service -O /usr/lib/systemd/system/microshift.service
systemctl daemon-reload
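The inline checksum check above can be factored into a small reusable helper; a sketch (verify_sha256 is a hypothetical function, not part of the MicroShift tooling) that verifies any downloaded file against a "digest  filename" list such as release.sha256:

```shell
# Verify a file against a sha256 list file ("digest  filename" per line).
# Succeeds only when a digest is recorded for the file and it matches the
# digest computed locally with sha256sum.
verify_sha256() {
  file=$1 list=$2
  got=$(sha256sum "$file" | awk '{print $1}')
  want=$(grep "$(basename "$file")" "$list" | awk '{print $1}')
  [ -n "$want" ] && [ "$got" = "$want" ]
}
# Usage: verify_sha256 microshift-linux-arm64 release.sha256 || echo "SHA256 checksum failed"
```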

Install KVM on the host and validate the Host Virtualization Setup - The virt-host-validate command validates that the host is configured in a suitable way to run the libvirt QEMU hypervisor driver. We need to download some packages from the Arch x86_64 repo and install them using pacman, as mentioned at https://archlinuxarm.org/forum/viewtopic.php?t=16037&p=69506

mkdir packages;cd packages
wget https://archlinux.org/packages/extra/any/edk2-ovmf/download -O edk2-ovmf-202202-2-any.pkg.tar.zst
wget https://archlinux.org/packages/extra/any/seabios/download -O seabios-1.16.0-1-any.pkg.tar.zst
wget http://mirror.archlinuxarm.org/aarch64/extra/qemu-system-x86-7.0.0-10-aarch64.pkg.tar.xz
wget https://archlinux.org/packages/extra/any/edk2-armvirt/download -O edk2-armvirt-202202-2-any.pkg.tar.zst
pacman -U --overwrite \* --noconfirm *

pacman -S --noconfirm jack2 virt-manager virt-viewer virt-install libvirt qemu ebtables dnsmasq bridge-utils qemu-system-aarch64

vim /etc/firewalld/firewalld.conf # FirewallBackend=iptables
systemctl enable --now libvirtd
virt-host-validate qemu

Output:

[root@rpi packages]# systemctl enable --now libvirtd
Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service -> /usr/lib/systemd/system/libvirtd.service.
Created symlink /etc/systemd/system/sockets.target.wants/virtlockd.socket -> /usr/lib/systemd/system/virtlockd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtlogd.socket -> /usr/lib/systemd/system/virtlogd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd.socket -> /usr/lib/systemd/system/libvirtd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd-ro.socket -> /usr/lib/systemd/system/libvirtd-ro.socket.
[root@rpi packages]# virt-host-validate qemu
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (Unknown if this platform has IOMMU support)
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)

Check that cni plugins are present

ls /opt/cni/bin/ # cni plugins
ls /usr/libexec/cni # empty

Output:

[root@rpi packages]# ls /opt/cni/bin/ # cni plugins
bandwidth  bridge  dhcp  firewall  host-device	host-local  ipvlan  loopback  macvlan  portmap	ptp  sbr  static  tuning  vlan	vrf
[root@rpi packages]# ls /usr/libexec/cni # empty
ls: cannot access '/usr/libexec/cni': No such file or directory

We will have systemd start and manage MicroShift. Refer to the microshift service for the three approaches.

systemctl enable --now crio microshift

Output:

[root@rpi packages]# systemctl enable --now crio microshift
Created symlink /etc/systemd/system/cri-o.service -> /usr/lib/systemd/system/crio.service.
Created symlink /etc/systemd/system/multi-user.target.wants/crio.service -> /usr/lib/systemd/system/crio.service.
Created symlink /etc/systemd/system/multi-user.target.wants/microshift.service -> /usr/lib/systemd/system/microshift.service.

You may read about selecting zones for your interfaces. Open the firewall based on the cni subnet in /etc/cni/net.d/100-crio-bridge.conf below.

systemctl enable firewalld --now
firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
firewall-cmd --zone=trusted --add-source=10.85.0.0/16 --permanent
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=5353/udp --permanent
firewall-cmd --reload

Additional ports may need to be opened. For external access to run kubectl or oc commands against MicroShift, add the 6443 port:

firewall-cmd --zone=public --permanent --add-port=6443/tcp

For access to services through NodePort, add the port range 30000-32767:

firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp

firewall-cmd --reload
firewall-cmd --list-all --zone=public
firewall-cmd --get-default-zone
#firewall-cmd --set-default-zone=public
#firewall-cmd --get-active-zones
firewall-cmd --list-all
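The firewall-cmd invocations above all follow one pattern, so they can be generated from a single port list. A print-only sketch (gen_firewall_cmds is a hypothetical helper) whose output can be reviewed and then piped to sh on the Raspberry Pi:

```shell
# Generate the firewall-cmd calls for the ports MicroShift needs:
# HTTP/HTTPS ingress, mDNS, the API server, and the NodePort range.
gen_firewall_cmds() {
  for p in 80/tcp 443/tcp 5353/udp 6443/tcp 30000-32767/tcp; do
    echo "firewall-cmd --zone=public --permanent --add-port=$p"
  done
  echo "firewall-cmd --reload"
}
gen_firewall_cmds   # preview; pipe to sh to apply
```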

Check the microshift and crio logs

journalctl -u microshift -f
journalctl -u crio -f

The microshift service references the microshift binary in the /usr/bin directory

[root@rpi ~]# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

Install the kubectl and the openshift oc client

ARCH=arm64
cd /tmp
pacman -S --noconfirm tar
export OCP_VERSION=4.9.11 && \
    curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf oc.tar.gz && \
    rm -f oc.tar.gz && \
    install -t /usr/local/bin {kubectl,oc} && \
    rm -f {README.md,kubectl,oc}

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
#watch "kubectl get nodes;kubectl get pods -A;crictl pods;crictl images"
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Output:

NAME              STATUS   ROLES    AGE    VERSION
rpi.example.com   Ready    <none>   2m8s   v1.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-msjzr                 1/1     Running   0          2m9s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-m22wr   1/1     Running   0          119s
openshift-dns                   dns-default-rqjzj                     2/2     Running   0          2m9s
openshift-dns                   node-resolver-5j6gh                   1/1     Running   0          2m9s
openshift-ingress               router-default-85bcfdd948-tz8gr       1/1     Running   0          2m13s
openshift-service-ca            service-ca-7764c85869-d9xk9           1/1     Running   0          2m14s
POD ID              CREATED              STATE               NAME                                  NAMESPACE                       ATTEMPT             RUNTIME
37b440a73e5f4       48 seconds ago       Ready               router-default-85bcfdd948-tz8gr       openshift-ingress               0                   (default)
3b44d1c951787       About a minute ago   Ready               dns-default-rqjzj                     openshift-dns                   0                   (default)
28beb0c8cbe58       About a minute ago   Ready               kubevirt-hostpath-provisioner-m22wr   kubevirt-hostpath-provisioner   0                   (default)
6eb50f25d8871       About a minute ago   Ready               service-ca-7764c85869-d9xk9           openshift-service-ca            0                   (default)
a14ac3e889dcc       2 minutes ago        Ready               kube-flannel-ds-msjzr                 kube-system                     0                   (default)
d57c2f2332e7e       2 minutes ago        Ready               node-resolver-5j6gh                   openshift-dns                   0                   (default)
IMAGE                                     TAG                             IMAGE ID            SIZE
quay.io/microshift/cli                    4.8.0-0.okd-2021-10-10-030117   33a276ba2a973       205MB
quay.io/microshift/coredns                4.8.0-0.okd-2021-10-10-030117   67a95c8f15902       265MB
quay.io/microshift/flannel-cni            4.8.0-0.okd-2021-10-10-030117   0e66d6f50c694       8.78MB
quay.io/microshift/flannel                4.8.0-0.okd-2021-10-10-030117   85fc911ceba5a       68.1MB
quay.io/microshift/haproxy-router         4.8.0-0.okd-2021-10-10-030117   37292c44812e7       225MB
quay.io/microshift/hostpath-provisioner   4.8.0-0.okd-2021-10-10-030117   fdef3dc1264ad       39.3MB
quay.io/microshift/kube-rbac-proxy        4.8.0-0.okd-2021-10-10-030117   7f149e453e908       41.5MB
quay.io/microshift/service-ca-operator    4.8.0-0.okd-2021-10-10-030117   0d3ab44356260       276MB
registry.k8s.io/pause                     3.6                             7d46a07936af9       492kB

Output of top after MicroShift is started; note the memory usage

Tasks: 149 total,   1 running, 148 sleeping,   0 stopped,   0 zombie
%Cpu(s):  6.7 us,  0.5 sy,  0.0 ni, 92.4 id,  0.1 wa,  0.2 hi,  0.1 si,  0.0 st
MiB Mem :   7796.9 total,   4570.6 free,   1025.7 used,   2200.6 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   6658.5 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   1694 root      20   0   12.9g 888040 109736 S  26.2  11.1   3:46.70 microshift
   2420 root      20   0  754820  62932  37392 S   2.0   0.8   0:19.50 service-ca-oper
   2562 root      20   0  749096  34648  25660 S   0.7   0.4   0:00.39 flanneld
   4153 root      20   0   13996   5304   4188 R   0.7   0.1   0:00.10 top
    213 root       0 -20       0      0      0 I   0.3   0.0   0:01.16 kworker/2:1H-kblockd
   1652 root      20   0 2090072  83584  39236 S   0.3   1.0   1:21.37 crio

Install podman - We will use podman for the containerized deployment of MicroShift and for building images for the samples.

pacman -S --noconfirm podman buildah skopeo

Samples to run on MicroShift

We will run samples that will show the use of dynamic persistent volume, SenseHat and the USB camera.

1. InfluxDB/Telegraf/Grafana

The source code for this InfluxDB sample is available in github.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb

If you want to run all the steps in a single command, get the nodename.

oc get nodes

Output:

[root@rpi influxdb]# oc get nodes
NAME              STATUS   ROLES    AGE     VERSION
rpi.example.com   Ready    <none>   3m36s   v1.21.0

Replace the annotation kubevirt.io/provisionOnNode with the above node name and execute runall-fedora-dynamic.sh. This will create a new project influxdb. Note that the node name is different when running MicroShift with the all-in-one containerized approach; there, you will use microshift.example.com instead of rpi.example.com.

sed -i "s|coreos|rpi.example.com|" influxdb-data-dynamic.yaml
sed -i "s|coreos|rpi.example.com|" grafana/grafana-data-dynamic.yaml

./runall-fedora-dynamic.sh

We create and push the “measure-fedora:latest” image using the Dockerfile that uses SMBus. The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.

This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PV.

  annotations:
    kubevirt.io/provisionOnNode: rpi.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner
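As a concrete illustration of the nodename substitution, here is a minimal sketch against a scratch copy of the annotation. The file path and node name below are stand-ins; the real edits target influxdb-data-dynamic.yaml and grafana/grafana-data-dynamic.yaml:

```shell
# scratch fragment standing in for the PVC yaml shipped with the sample
cat > /tmp/pv-annotation.yaml <<'EOF'
  annotations:
    kubevirt.io/provisionOnNode: coreos
spec:
  storageClassName: kubevirt-hostpath-provisioner
EOF

# on a live cluster the node name would come from:
#   NODE=$(oc get nodes -o jsonpath='{.items[0].metadata.name}')
NODE=rpi.example.com
sed -i "s|coreos|$NODE|" /tmp/pv-annotation.yaml
grep provisionOnNode /tmp/pv-annotation.yaml
```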

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.
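Since the same /etc/hosts edit recurs for every route in this post, a small idempotent helper can avoid duplicate entries. This sketch writes to a scratch file (editing the real /etc/hosts needs sudo); the IP address is a stand-in for your Raspberry Pi's:

```shell
# scratch file standing in for /etc/hosts on your laptop
HOSTS=/tmp/hosts.example
PI_IP=192.168.1.227                       # your Raspberry Pi 4 address
NAME=grafana-service-influxdb.cluster.local
: > "$HOSTS"                              # start from an empty scratch file

# append only if the name is not already present (idempotent)
add_host() { grep -q "$2" "$1" || echo "$3 $2" >> "$1"; }
add_host "$HOSTS" "$NAME" "$PI_IP"
add_host "$HOSTS" "$NAME" "$PI_IP"        # second call is a no-op
cat "$HOSTS"
```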

Finally, after you are done working with this sample, you can run the deleteall-fedora-dynamic.sh

./deleteall-fedora-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes.

2. Node Red live data dashboard with SenseHat sensor charts

We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image "karve/nodered-fedora:arm64"

cd docker-custom/
./docker-debianonfedora.sh
podman push docker.io/karve/nodered-fedora:arm64
cd ..

Deploy Node Red with persistent volume for /data within the node red container

mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml
oc get routes
oc -n nodered wait deployment nodered-deployment --for condition=Available --timeout=300s
oc logs deployment/nodered-deployment -f

Add the ipaddress of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your Laptop and browse to http://nodered-svc-nodered.cluster.local/

The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, and node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 and Flow 2 have already been imported from the nodered sample. This import into Node Red can be done manually under “Import Nodes”, followed by clicking “Deploy”. The node-red-node-pi-sense-hat module requires a change in sensehat.py in order to use the sense_hat.py.new that uses smbus and a new function for the joystick. This change is accomplished by overwriting with the modified sensehat.py in Dockerfile.debianonfedora (docker.io/karve/nodered-fedora:arm64 built using docker-debianonfedora.sh) and further copying it from the /tmp directory to the correct volume when the pod starts in nodered2.yaml.

Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see the Home by Default. You can see the state of the Joystick Up, Down, Left, Right or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT.

If you selected Flow 1, you can click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED.

We can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:

cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml -n nodered

3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 2.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection. A pod in MicroShift will send pictures and WebSocket chat messages to Node Red when a person is detected.

cp ../sensehat-fedora-iot/sense_hat.py.new .
# Use buildah or podman to build the image for object detection
buildah bud -f Dockerfile.fedora -t docker.io/karve/object-detection-raspberrypi4-fedora .
#podman build -f Dockerfile.fedora -t docker.io/karve/object-detection-raspberrypi4-fedora . # Select the docker.io/balenalib/raspberrypi4-64-debian:latest
podman push docker.io/karve/object-detection-raspberrypi4-fedora:latest

Update the env WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection-fedora.yaml to point to your raspberry pi 4 ip address (192.168.1.227 shown below).

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.227

oc project default
oc apply -f object-detection-fedora.yaml

We will see pictures being sent to Node Red when a person is detected at http://nodered-svc-nodered.cluster.local/#flow/3e30dc50ae28f61f and chat messages at http://nodered-svc-nodered.cluster.local/chat. When we are done testing, we can delete the deployment.

cd ~/microshift/raspberry-pi/object-detection
oc delete -f object-detection-fedora.yaml

4. Running a Virtual Machine Instance on MicroShift

Find the latest version of the KubeVirt Operator.

LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST

I used the following version:

LATEST=20220530 # If the latest version does not work
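The fallback to a known-good nightly can be scripted; this sketch stubs the fetch so it runs without network access (on the Pi you would keep the curl shown above), and the oc apply commands below consume whatever $LATEST ends up holding:

```shell
# stub standing in for:
#   curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64
fetch_latest() { echo ""; }               # simulate a failed/empty fetch

LATEST=$(fetch_latest)
[ -n "$LATEST" ] || LATEST=20220530       # fall back to the known-good nightly
echo "$LATEST" > /tmp/kubevirt-latest.txt
cat /tmp/kubevirt-latest.txt
```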

oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator

# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt
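The watch can also be done with a plain poll loop. This sketch stubs the phase query so it is self-contained; with a cluster, the predicate would be the oc get ... jsonpath command above:

```shell
# stub standing in for:
#   oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o jsonpath='{.status.phase}'
phase() { echo Deployed; }

until [ "$(phase)" = "Deployed" ]; do
  echo "still deploying..."
  sleep 5
done
echo "KubeVirt phase: $(phase)" | tee /tmp/kubevirt-phase.txt
```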

We can build the OKD Web Console (Codename: “bridge”) from the source as mentioned in Part 9. Here, we will run “bridge” as a container image within MicroShift.

cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'

In okd-web-console-install.yaml, replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value https://192.168.1.209:6443 with your Raspberry Pi 4's IP address, and set the secretRef token for BRIDGE_K8S_AUTH_BEARER_TOKEN to the console-token-* name from the two secret names above. Then apply/create the okd-web-console-install.yaml.
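Picking the console-token-* secret can be scripted; the secret names below are hypothetical stand-ins for the two names printed by the jsonpath commands above:

```shell
# stand-in for: oc get serviceaccount console -n kube-system -o jsonpath='{.secrets[*].name}'
SECRETS="console-dockercfg-abcde console-token-xyz12"

# keep the console-token-* entry for the BRIDGE_K8S_AUTH_BEARER_TOKEN secretRef
TOKEN_SECRET=$(printf '%s\n' $SECRETS | grep 'console-token-')
echo "$TOKEN_SECRET" > /tmp/token-secret.txt
cat /tmp/token-secret.txt
```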

vim okd-web-console-install.yaml
oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc -n kube-system wait deployment console-deployment --for condition=Available --timeout=300s
oc logs deployment/console-deployment -f -n kube-system

Add the Raspberry Pi IP address to /etc/hosts on your Macbook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your Laptop http://console-np-service-kube-system.cluster.local/. If you see a blank page, you probably have the value of BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT set incorrectly.

We can optionally preload the Fedora image into CRI-O (if using the all-in-one containerized approach, this needs to be run within the microshift container running in podman):

crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.

cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods

The output for the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”. Note down the IP address of the vmi-fedora Virtual Machine Instance. We can connect to the VMI directly from the Raspberry Pi 4 with fedora as the password. Note that it takes about another minute after the VMI reaches the Running state before you can ssh to the instance.

oc get vmi
ssh -o StrictHostKeyChecking=no fedora@vmipaddress ping -c 2 google.com

Output:

[root@rpi vmi]# oc get vmi
NAME         AGE     PHASE     IP           NODENAME          READY
vmi-fedora   9m59s   Running   10.85.0.20   rpi.example.com   True
[root@rpi vmi]# ssh -o StrictHostKeyChecking=no fedora@10.85.0.20 ping -c 2 google.com
Warning: Permanently added '10.85.0.20' (ED25519) to the list of known hosts.
fedora@10.85.0.20's password:
PING google.com (142.250.65.206) 56(84) bytes of data.
64 bytes from lga25s72-in-f14.1e100.net (142.250.65.206): icmp_seq=1 ttl=117 time=5.23 ms
64 bytes from lga25s72-in-f14.1e100.net (142.250.65.206): icmp_seq=2 ttl=117 time=5.69 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 5.233/5.459/5.686/0.226 ms

Another way to connect to the VM is to use the virtctl console. You can compile your own virtctl as described in Part 9. To simplify, we copy the virtctl arm64 binary from a prebuilt container image to /usr/local/bin on the Raspberry Pi 4 and connect to the VMI using the “virtctl console” command.

id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id
virtctl console vmi-fedora # Ctrl-] to detach

When done, we can delete the VMI

oc delete -f vmi-fedora.yaml

We can run other VM and VMI samples for alpine, cirros and fedora images as in Part 9. When done, you may delete kubevirt operator

# LATEST=20220530
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml

5. Run a jupyter notebook sample for license plate recognition (RPi with 8GB RAM)

We will run the sample described at the Red Hat OpenShift Data Science Workshop License plate recognition. The Dockerfile uses the arm64 Jupyter Notebook base image: scipy-notebook. Since we do not have a tensorflow arm64 image, we install it as described at Qengineering. The notebook.yaml downloads the licence-plate-workshop sample in an initContainer.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook
oc apply -f notebook.yaml 
oc -n default wait pod notebook --for condition=Ready --timeout=600s
oc get routes

Output:

[root@rpi tensorflow-notebook]# oc get routes
NAME             HOST/PORT                              PATH   SERVICES       PORT   TERMINATION   WILDCARD
flask-route      flask-route-default.cluster.local             notebook-svc   5000                 None
notebook-route   notebook-route-default.cluster.local          notebook-svc   5001                 None

The image is large, so it may take a while to download:

[root@rpi tensorflow-notebook]# crictl images | grep tensorflow-notebook
docker.io/karve/tensorflow-notebook             arm64                           c8da62870fec2       4.73GB

If running in the all-in-one MicroShift container, you need to run the command within the container:

[root@rpi tensorflow-notebook]# # podman exec -it microshift crictl images | grep tensorflow-notebook # All in one

Add the IP address of the Raspberry Pi 4 device for notebook-route-default.cluster.local to /etc/hosts on your laptop and browse to http://notebook-route-default.cluster.local/tree?. Login with the default password mysecretpassword. Go to the work folder and select and run the License-plate-recognition notebook at http://notebook-route-default.cluster.local/notebooks/work/02_Licence-plate-recognition.ipynb

We can also run it as an application and test it using the corresponding notebooks. Run the http://notebook-route-default.cluster.local/notebooks/work/03_LPR_run_application.ipynb

Wait for the following to appear.

Instructions for updating:
non-resource variables are not supported in the long term
Model Loaded successfully...
Model Loaded successfully...
[INFO] Model loaded successfully...
[INFO] Labels loaded successfully...

Then run http://notebook-route-default.cluster.local/notebooks/work/04_LPR_test_application.ipynb

We can experiment with a custom image. Let’s download the image to the pod and run the cells again with the new image and check the prediction.

oc exec -it notebook -- bash -c "wget \"https://unsplash.com/photos/MgfKoRdI948/download?force=true&ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjUyNDY4Mjcz\" -O /tmp/3183KND.jpg"

Then, run the http://notebook-route-default.cluster.local/notebooks/work/05_Send_image.ipynb

Add the cell with the following code:

my_image = 'https://unsplash.com/photos/MgfKoRdI948/download?force=true&ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjUyNDY4Mjcz'
from PIL import Image
import requests
from io import BytesIO

response = requests.get(my_image)
img = BytesIO(response.content).read()
import base64
import requests
from json import dumps
encoded_image = base64.b64encode(img).decode('utf-8')
content = {"image": encoded_image}
json_data = dumps(content)
headers = {"Content-Type" : "application/json"}
r = requests.post(my_route + '/predictions', data=json_data, headers=headers)
print(r.content)
from IPython.display import Image
from IPython.core.display import HTML 
Image(url=my_image) 
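The essence of the cell above, base64-encoding the image bytes and wrapping them in a JSON body, can be reproduced from the shell as well. This sketch uses a stand-in file and only prints the payload; the trailing comment shows the POST that the notebook performs, with MY_ROUTE standing in for the flask route URL:

```shell
# stand-in bytes for a real JPEG
printf 'fake-image-bytes' > /tmp/plate.jpg

# build {"image": "<base64>"} just as the notebook cell does in Python
ENCODED=$(base64 -w0 /tmp/plate.jpg)
printf '{"image": "%s"}' "$ENCODED" > /tmp/payload.json
cat /tmp/payload.json
# then: curl -s -X POST -H 'Content-Type: application/json' \
#         -d @/tmp/payload.json "$MY_ROUTE/predictions"
```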

When we are done working with the license plate recognition sample notebook, we can delete it as follows:

oc delete -f notebook.yaml

6. Run a jupyter notebook sample for object detection

We will run the sample described at the Red Hat OpenShift Data Science Workshop Object Detection. We use the same container image as in previous Sample 5, the only change is to download the object detection sample in object-detection-rest.yaml from object-detection-rest.git.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook
oc apply -f object-detection-rest.yaml 
oc -n default wait pod notebook --for condition=Ready --timeout=300s
oc get routes

Output will look the same as in Sample 5; we use the same service and route names.

[root@rpi tensorflow-notebook]# oc apply -f object-detection-rest.yaml
pod/notebook created
service/flask-svc created
service/notebook-svc created
route.route.openshift.io/notebook-route created
route.route.openshift.io/flask-route created
[root@rpi tensorflow-notebook]# oc get routes
NAME             HOST/PORT                              PATH   SERVICES       PORT   TERMINATION   WILDCARD
flask-route      flask-route-default.cluster.local             notebook-svc   5000                 None
notebook-route   notebook-route-default.cluster.local          notebook-svc   5001                 None

Login at http://notebook-route-default.cluster.local/tree/work with the default password mysecretpassword. We can run the 1_explore.ipynb that will download twodogs.jpg and use a pre-trained model to identify objects in images. In the next notebooks (2_predict.ipynb, 3_run_flask.ipynb, and 4_test_flask.ipynb), this model is wrapped in a flask app that can be used as part of a larger application.

In 4_test_flask.ipynb, replace the my_route as follows:

my_route = 'http://flask-svc:5000'

We can also test by downloading custom images, for example from Dogs Best Life.

oc exec -it notebook -- bash -c "wget https://dogsbestlife.com/wp-content/uploads/2016/05/two-dogs-same-litter-min.jpeg -O /home/jovyan/work/two-dogs-same-litter-min.jpeg"

In 4_test_flask.ipynb, replace the my_image and run the notebook.

my_image = 'two-dogs-same-litter-min.jpeg'

When we are done working with the object detection sample notebook, we can delete it as follows:

oc delete -f object-detection-rest.yaml

7. Tutorial Notebooks from tensorflow.org

We can run the tutorials from https://www.tensorflow.org/tutorials using the tutorials.yaml. We use the same container image as in previous Sample 5, the only change is that it pulls notebooks from https://github.com/tensorflow/docs.git. Login at http://notebook-route-default.cluster.local/tree/work with the default password mysecretpassword.

[root@rpi vmi]# cd ~/microshift/raspberry-pi/tensorflow-notebook
[root@rpi tensorflow-notebook]# oc apply -f tutorials.yaml
pod/notebook created
service/flask-svc created
service/notebook-svc created
route.route.openshift.io/notebook-route created
route.route.openshift.io/flask-route created
[root@rpi tensorflow-notebook]# oc get routes notebook-route
NAME             HOST/PORT                              PATH   SERVICES       PORT   TERMINATION   WILDCARD
notebook-route   notebook-route-default.cluster.local          notebook-svc   5001                 None 

We will need to make a few minor changes in the notebooks to use /tmp for the temporary download files and cache folders, to avoid permission-denied errors because the local directory is not writable.

1. TensorFlow 2 quickstart for beginners http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/quickstart/beginner.ipynb

2. TensorFlow 2 quickstart for experts http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/quickstart/advanced.ipynb

3. Segmentation http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/images/segmentation.ipynb

4. Classification of flowers that shows overfitting, data augmentation for generating additional training data http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/images/classification.ipynb

5. Audio recognition: Recognizing keywords http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/audio/simple_audio.ipynb

Audio recognition


6. Time series forecasting - It builds a few different styles of models including Convolutional and Recurrent Neural Networks (CNNs and RNNs) http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/structured_data/time_series.ipynb

8. Compiling and deploying your Quarkus native app on MicroShift

Quarkus is built from the ground up to transform Java into the ideal language for building native binaries and Kubernetes applications. Combining the optimization capabilities of GraalVM with the build-time capability of Quarkus leads to the smallest possible memory footprint and startup time. Quarkus can run on a Raspberry Pi. We will initially build the Quarkus native executable binary directly on the Raspberry Pi and later show how to use a multistage build to produce the executable in a container. Reference: https://quarkus.io/guides/building-native-image

ssh alarm@$ipaddress
sudo su -
pacman -S gcc glibc zlib 
pacman -Ss jdk # search for the exact package name
pacman -S jdk11-openjdk # install the jdk-openjdk package
java --version
archlinux-java status # list the available Java environments
#archlinux-java set java-11-openjdk
tar -zxvf graalvm-ce-java11-linux-aarch64-22.1.0.tar.gz
mkdir /usr/lib/graalvm;mv graalvm-ce-java11-22.1.0 /usr/lib/graalvm/.
exit # Back as alarm user

JAVA_HOME=/usr/lib/jvm/java-11-openjdk
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME
export PATH
GRAALVM_HOME=/usr/lib/graalvm/graalvm-ce-java11-22.1.0
PATH=$PATH:$HOME/bin:$GRAALVM_HOME/bin
export GRAALVM_HOME
export PATH
sudo gu install native-image

git clone https://github.com/quarkusio/quarkus-quickstarts.git
cd quarkus-quickstarts/getting-started

./mvnw package -Pnative

The native Quarkus executable must be packaged into a container image to be able to run it on a container runtime. Note down the ldd version. Since we build directly on the Raspberry Pi, the ldd version needs to match the ldd version in the container image. Both fedora:36 and ubuntu:22.04 ship glibc 2.35, the same version present on the Arch Linux ARM aarch64 install on the Raspberry Pi 4.

[alarm@rpi quarkus-quickstarts]$ ldd --version
ldd (GNU libc) 2.35
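A quick guard for this check can parse the version out of ldd's first line. The sample strings below are stand-ins for running ldd --version on the host and inside the image build:

```shell
# first line of `ldd --version` on the host and inside the image build (stand-ins)
HOST_LINE='ldd (GNU libc) 2.35'
IMAGE_LINE='ldd (Ubuntu GLIBC 2.35-0ubuntu3) 2.35'

HOST_GLIBC=${HOST_LINE##* }     # strip everything up to the last space -> 2.35
IMAGE_GLIBC=${IMAGE_LINE##* }
if [ "$HOST_GLIBC" = "$IMAGE_GLIBC" ]; then
  echo "glibc match: $HOST_GLIBC"
else
  echo "glibc mismatch: host $HOST_GLIBC vs image $IMAGE_GLIBC"
fi > /tmp/glibc-check.txt
cat /tmp/glibc-check.txt
```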

Edit the src/main/docker/Dockerfile.native

#FROM docker.io/library/fedora:36
FROM docker.io/library/ubuntu:22.04
WORKDIR /work/
RUN chown 1001 /work \
    && chmod "g+rwX" /work \
    && chown 1001:root /work
COPY --chown=1001:root target/*-runner /work/application
RUN ldd --version

EXPOSE 8080
USER 1001

CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]

podman build -f src/main/docker/Dockerfile.native -t quay.io/thinkahead/quarkus-getting-started:ldd-2.35-arm64 .

Make sure that the ldd version shown in the above build output is the same as on your host.

Output:

[alarm@rpi getting-started]$ podman build -f src/main/docker/Dockerfile.native -t quay.io/thinkahead/quarkus-getting-started:ldd-2.35-arm64 .
STEP 1/8: FROM docker.io/library/ubuntu:22.04
Trying to pull docker.io/library/ubuntu:22.04...
Getting image source signatures
Copying blob b84950154c18 done
Copying config f3d495355b done
Writing manifest to image destination
Storing signatures
STEP 2/8: WORKDIR /work/
--> c64224a1972
STEP 3/8: RUN chown 1001 /work     && chmod "g+rwX" /work     && chown 1001:root /work
--> 1f9047b59ec
STEP 4/8: COPY --chown=1001:root target/*-runner /work/application
--> 1d6ae957853
STEP 5/8: RUN ldd --version
ldd (Ubuntu GLIBC 2.35-0ubuntu3) 2.35
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
--> e02c57776b2
STEP 6/8: EXPOSE 8080
--> 905d76d8518
STEP 7/8: USER 1001
--> 54a9de2beab
STEP 8/8: CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
COMMIT quay.io/thinkahead/quarkus-getting-started:ldd-2.35-arm64
--> c6778b16e84
Successfully tagged quay.io/thinkahead/quarkus-getting-started:ldd-2.35-arm64
c6778b16e848a71692c03681b893b90e6954432923a9efa9076c49b1fcddd503

Push the image to the registry

podman login quay.io # Enter user and password
podman push quay.io/thinkahead/quarkus-getting-started:ldd-2.35-arm64

Note: You may need to specifically set the Repository Visibility to public in quay.io

sudo su -
cd ~/microshift/raspberry-pi/quarkus/
oc new-project quarkus --display-name "Sample Quarkus App"
oc apply -f quarkus-getting-started.yaml -f quarkus-getting-started-route.yaml

Add the ipaddress of the Raspberry Pi 4 device for quarkus-getting-started-route-quarkus.cluster.local to /etc/hosts on your Laptop. The http://quarkus-getting-started-route-quarkus.cluster.local/ will show the “Congratulations, you have created a new Quarkus application.” and the http://quarkus-getting-started-route-quarkus.cluster.local/hello will show hello.

Quarkus application on Microshift


Finally, after we are done testing, we can delete the sample Quarkus application:

oc delete -f quarkus-getting-started.yaml -f quarkus-getting-started-route.yaml

Instead of building directly on the Raspberry Pi using the ldd version 2.35 on Arch Linux, we can use the multistage build using Dockerfile.graalvmaarch64 with the ldd version 2.28 in the ghcr.io/graalvm/graalvm-ce:latest image and the registry.access.redhat.com/ubi8/ubi-minimal:8.3 for the final image.

ssh alarm@$ipaddress
git clone https://github.com/quarkusio/quarkus-quickstarts.git
cd quarkus-quickstarts/getting-started
mv .dockerignore test.dockerignore  # The default .dockerignore filters everything except the target directory

wget https://raw.githubusercontent.com/thinkahead/microshift/main/raspberry-pi/quarkus/Dockerfile.graalvmaarch64
podman build -f Dockerfile.graalvmaarch64 -t quay.io/thinkahead/quarkus-getting-started:ldd-2.28-arm64 .
podman login quay.io
podman push quay.io/thinkahead/quarkus-getting-started:ldd-2.28-arm64

Note: You may need to specifically set the Repository Visibility to public in quay.io if events in the quarkus project show error in pulling the image.

Output for both RUN ldd --version during building shows ldd 2.28:

[1/2] STEP 3/11: RUN ldd --version
ldd (GNU libc) 2.28
…
[2/2] STEP 4/8: RUN ldd --version
ldd (GNU libc) 2.28

Then, run the sample quarkus application in MicroShift as before with the new image.

sudo su -
cd ~/microshift/raspberry-pi/quarkus/
oc new-project quarkus --display-name "Sample Quarkus App"
oc project quarkus # If it already exists
# Update the quarkus-getting-started.yaml to use the quay.io/thinkahead/quarkus-getting-started:ldd-2.28-arm64
oc apply -f quarkus-getting-started.yaml -f quarkus-getting-started-route.yaml

Cleanup MicroShift

We can use the cleanup.sh script available on GitHub to clean up the pods and images. If you already cloned the microshift repo from GitHub, you have the script in the ~/microshift/hack directory.

cd ~/microshift/hack
./cleanup.sh

Containerized MicroShift on Arch Linux (64 bit)

We can run MicroShift within containers in two ways:

  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, CRI-O Systemd service runs directly on the host and data is stored in a podman volume
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only

Microshift Containerized

If you did not already install podman, you can do it now.

pacman -S podman

We will use a new microshift.service that runs MicroShift in a pod using the prebuilt image and a podman volume. The rest of the pods run using CRI-O on the host.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target crio.service
After=network-online.target crio.service
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
  --cidfile=%t/%n.ctr-id \
  --cgroups=no-conmon \
  --rm \
  --replace \
  --sdnotify=container \
  --label io.containers.autoupdate=registry \
  --network=host \
  --privileged \
  -d \
  --name microshift \
  -v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
  -v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
  -v microshift-data:/var/lib/microshift:rw,rshared \
  -v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
  -v /var/log:/var/log \
  -v /etc:/etc quay.io/microshift/microshift:latest
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target
EOF


systemctl daemon-reload
systemctl enable --now crio microshift
podman ps -a
podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images;podman ps"
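The Mountpoint can also be extracted directly with a Go template instead of reading the inspect JSON by eye. The path below is the usual rootful-podman location and is a stand-in for whatever your inspect prints:

```shell
# with podman installed this would be:
#   MOUNTPOINT=$(podman volume inspect microshift-data --format '{{.Mountpoint}}')
MOUNTPOINT=/var/lib/containers/storage/volumes/microshift-data/_data

export KUBECONFIG=$MOUNTPOINT/resources/kubeadmin/kubeconfig
echo "$KUBECONFIG" > /tmp/kubeconfig-path.txt
cat /tmp/kubeconfig-path.txt
```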

After MicroShift is started, we can run the samples shown earlier.

After we are done using MicroShift, we can stop and remove microshift

systemctl stop microshift
podman volume rm microshift-data

Alternatively, delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.

podman stop microshift && podman volume rm microshift-data

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.

MicroShift Containerized All-In-One

Let’s stop CRI-O on the host; we will create an all-in-one container in podman that runs CRI-O within the container.

systemctl stop crio
systemctl disable crio
mkdir /var/hpvolumes

We will run the all-in-one microshift in podman using prebuilt images (replace the image in the podman run command below with the latest image).

podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64

Now that you know the podman command to start the microshift all-in-one, you may alternatively use the following microshift service.

wget https://raw.githubusercontent.com/thinkahead/microshift/main/packaging/systemd/microshift-aio.service -O /usr/lib/systemd/system/microshift.service
# Add the “-p 80:80” after the “-p 6443:6443” so we can expose the applications
# Add the “-h microshift.example.com”

or

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift all-in-one
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm --replace -d --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -v /lib/modules:/lib/modules --label io.containers.autoupdate=registry -p 6443:6443 -p 80:80 quay.io/microshift/microshift-aio:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target 
EOF

Then run:

systemctl daemon-reload
systemctl start microshift

On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above.

podman volume inspect microshift-data
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman ps;podman exec -it microshift crictl ps -a"

Output:

NAME                     STATUS   ROLES    AGE     VERSION
microshift.example.com   Ready    <none>   2m40s   v1.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-gsm8p                 1/1     Running   0          2m39s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-rwmb5   1/1     Running   0          119s
openshift-dns                   dns-default-frgfc                     2/2     Running   0          2m39s
openshift-dns                   node-resolver-djrc9                   1/1     Running   0          2m39s
openshift-ingress               router-default-85bcfdd948-b2x95       1/1     Running   0          2m43s
openshift-service-ca            service-ca-7764c85869-hclwl           1/1     Running   0          2m44s
CONTAINER ID  IMAGE                                     COMMAND     CREATED        STATUS            PORTS                                       NAMES
e8132771fe2e  quay.io/microshift/microshift-aio:latest  /sbin/init  4 minutes ago  Up 4 minutes ago  0.0.0.0:80->80/tcp, 0.0.0.0:6443->6443/tcp  microshift
CONTAINER           IMAGE                                                                                                             CREATED              STATE     NAME                            ATTEMPT   POD ID
d07a42e2c2d20       quay.io/microshift/kube-rbac-proxy@sha256:2b5f44b11bab4c10138ce526439b43d62a890c3a02d42893ad02e2b3adb38703        8 seconds ago        Running   kube-rbac-proxy                 0         feba621656a20
8cf23bc9662df       quay.io/microshift/coredns@sha256:07e5397247e6f4739727591f00a066623af9ca7216203a5e82e0db2fb24514a3                14 seconds ago       Running   dns                             0         feba621656a20
51ef2de106e83       quay.io/microshift/haproxy-router@sha256:706a43c785337b0f29aef049ae46fdd65dcb2112f4a1e73aaf0139f70b14c6b5         56 seconds ago       Running   router                          0         ea0d0facaaa15
7f9406411960a       quay.io/microshift/service-ca-operator@sha256:1a8e53c8a67922d4357c46e5be7870909bb3dd1e6bea52cfaf06524c300b84e8    About a minute ago   Running   service-ca-controller           0         94774ddb3523f
d26b5447cc1e5       quay.io/microshift/hostpath-provisioner@sha256:cb0c1cc60c1ba90efd558b094ba1dee3a75e96b76e3065565b60a07e4797c04c   About a minute ago   Running   kubevirt-hostpath-provisioner   0         7fe9e164c511b
6d61416764ee0       85fc911ceba5a5a5e43a7c613738b2d6c0a14dad541b1577cdc6f921c16f5b75                                                  2 minutes ago        Running   kube-flannel                    0         1289346007bbd
8ae58ec5b8837       quay.io/microshift/flannel@sha256:13777a318497ae35593bb73499a0e2ff4cb7eda74f59c1ba7c3e88c717cbaab9                2 minutes ago        Exited    install-cni                     0         1289346007bbd
9a3dfe0549039       quay.io/microshift/cli@sha256:1848138e5be66753863c98b86c274bd7fb8572fe0da6f7156f1644187e4cfb84                    2 minutes ago        Running   dns-node-resolver               0         a454c0be2fd65
2e70f5db90952       quay.io/microshift/flannel-cni@sha256:39f81dd125398ce5e679322286344a4c13dded73ea0bf4f397e5d1929b43d033            2 minutes ago        Exited    install-cni-bin                 0         1289346007bbd

The crio service is stopped on the Raspberry Pi, so the crictl command will not work directly on the Pi. The crictl commands do work within the microshift container in podman, as shown in the watch command above.

Now we can run the samples shown earlier. To run the Virtual Machine examples in the all-in-one MicroShift, we need to execute mount with --make-shared in the microshift container as follows; this prevents the “Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount” event from virt-handler.

podman exec -it microshift mount --make-shared /
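To verify that the remount took effect, we can inspect the mount propagation of / inside the container. This is a sketch, assuming findmnt is available in the microshift container; the is_shared helper is my own:

```shell
# is_shared: succeed when a findmnt PROPAGATION value contains "shared".
is_shared() {
  echo "$1" | grep -q "shared"
}

# Hypothetical check against the running container:
#   prop=$(podman exec microshift findmnt -n -o PROPAGATION /)
#   is_shared "$prop" && echo "/ is a shared mount" || echo "run mount --make-shared /"
```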

We may also preload the virtual machine images using "crictl pull".

podman exec -it microshift crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
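If several disk images are needed, the pulls can be scripted. In this sketch preload_cmd just prints each pull command (a dry run); only the Fedora demo disk below is actually used in this article, and piping the output to sh would execute the pulls:

```shell
# preload_cmd: print the crictl pull command for one image reference;
# piping the printed commands to sh runs the pulls inside the container.
preload_cmd() {
  echo "podman exec -it microshift crictl pull $1"
}

for img in quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64; do
  preload_cmd "$img"
done
```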

The output of top shows the following before creating the fedora-vmi:

Tasks: 200 total,   2 running, 198 sleeping,   0 stopped,   0 zombie
%Cpu(s): 12.6 us,  4.2 sy,  0.0 ni, 82.1 id,  0.1 wa,  0.7 hi,  0.4 si,  0.0 st
MiB Mem :   7796.9 total,   1237.2 free,   2144.8 used,   4415.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5529.4 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   1541 root      20   0   13.4g   1.1g 119352 S  15.2  14.4   8:09.18 microshift
   1090 root      20   0 2165688  78804  40668 S   1.3   1.0   5:24.01 crio
   5115 root      20   0  748032  42472  32096 S   0.7   0.5   0:01.76 coredns
    256 root      20   0   49852  15852  14680 S   0.3   0.2   0:01.26 systemd-journal
    313 systemd+  20   0   20776  11216   9280 S   0.3   0.1   0:00.76 systemd-resolve
   3445 root      20   0  755332  69752  37328 S   0.3   0.9   0:25.35 service-ca-oper
   9089 1001      20   0 1638272 147028  33928 S   0.3   1.8   0:14.63 virt-api
   9097 1001      20   0 1564796 135612  34248 S   0.3   1.7   0:14.83 virt-api
  10419 1001      20   0 1417476 143996  33020 S   0.3   1.8   0:14.44 virt-controller

After the VM is started, the output of top shows:

Tasks: 212 total,   1 running, 211 sleeping,   0 stopped,   0 zombie
%Cpu(s): 16.7 us,  5.2 sy,  0.0 ni, 76.7 id,  0.3 wa,  0.7 hi,  0.4 si,  0.0 st
MiB Mem :   7796.9 total,    108.3 free,   2676.3 used,   5012.4 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   4997.5 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   1541 root      20   0   13.4g   1.1g 119960 S  28.6  14.1  10:41.68 microshift
   1090 root      20   0 2165752  79608  40668 S   1.0   1.0   6:54.81 crio
   7394 1001      20   0 1484896 148116  34712 S   1.0   1.9   0:33.52 virt-operator
      1 root      20   0  168820  12036   8756 S   0.7   0.2   0:05.22 systemd
   5115 root      20   0  748032  42692  32096 S   0.7   0.5   0:03.27 coredns
   7396 1001      20   0 1415788 130988  32920 S   0.7   1.6   0:20.38 virt-operator
   8386 root      20   0    3304   2396   1720 S   0.7   0.0   0:00.88 watch
  24745 107       20   0 3484120 658172  15196 S   0.7   8.2   1:13.50 qemu-kvm
  27675 root      20   0   14192   5272   4164 R   0.7   0.1   0:00.45 top

The full list of pods with the node microshift.example.com is:

NAME                     STATUS   ROLES    AGE   VERSION
microshift.example.com   Ready    <none>   21m   v1.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
default                         virt-launcher-vmi-fedora-dq5dc        2/2     Running   0          5m47s
kube-system                     console-deployment-58dbf6b9d9-6crxj   1/1     Running   0          15m
kube-system                     kube-flannel-ds-gsm8p                 1/1     Running   0          21m
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-rwmb5   1/1     Running   0          20m
kubevirt                        virt-api-646fc59789-7rbtx             1/1     Running   0          14m
kubevirt                        virt-api-646fc59789-gmx68             1/1     Running   0          14m
kubevirt                        virt-controller-7fd5b5798c-wv2wd      1/1     Running   0          13m
kubevirt                        virt-controller-7fd5b5798c-xdd2p      1/1     Running   0          13m
kubevirt                        virt-handler-fv5mr                    1/1     Running   0          13m
kubevirt                        virt-operator-5c7c7bbb6f-pwx24        1/1     Running   0          16m
kubevirt                        virt-operator-5c7c7bbb6f-v6xlv        1/1     Running   0          16m
openshift-dns                   dns-default-frgfc                     2/2     Running   0          21m
openshift-dns                   node-resolver-djrc9                   1/1     Running   0          21m
openshift-ingress               router-default-85bcfdd948-b2x95       1/1     Running   0          21m
openshift-service-ca            service-ca-7764c85869-hclwl           1/1     Running   0          21m 

For the Virtual Machine Instance Sample 4, we can connect to the vmi-fedora by exposing the ssh port for the Virtual Machine Instance as a NodePort Service after the instance is started. This NodePort is within the all-in-one pod that is running in podman.

oc get vmi,pods 
virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
oc get svc vmi-fedora-ssh # Get the nodeport
podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift # Get the podman_ip_address
oc run -i --tty ssh-proxy --rm --image=karve/alpine-sshclient:arm64 --restart=Never -- /bin/sh -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@$podman_ip_address -p $nodeport"
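The last command above references $podman_ip_address and $nodeport, which need to be captured in the shell first. A sketch: nodeport_from_ports is my helper that parses the PORT(S) column; alternatively, oc get svc with -o jsonpath='{.spec.ports[0].nodePort}' returns the NodePort directly.

```shell
# nodeport_from_ports: extract the NodePort from a PORT(S) value,
# e.g. "22:31102/TCP" -> "31102".
nodeport_from_ports() {
  echo "$1" | sed -n 's/.*:\([0-9][0-9]*\)\/TCP.*/\1/p'
}

# Hypothetical wiring with the service and container names from above:
#   nodeport=$(nodeport_from_ports "$(oc get svc vmi-fedora-ssh --no-headers | awk '{print $5}')")
#   podman_ip_address=$(podman inspect --format '{{.NetworkSettings.IPAddress}}' microshift)
```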

The IP address of the all-in-one microshift podman container is 10.88.0.2. We expose target port 22 on the VM as a service on port 22, which is in turn exposed on the microshift container at the allocated NodePort 31102 as seen below. We run and exec into a new pod called ssh-proxy built from an image containing the openssh-client, and ssh to port 31102 on the all-in-one microshift container. This takes us to port 22 of the VMI as shown below:

[root@rpi vmi]# oc get vmi,pods
NAME                                            AGE     PHASE     IP           NODENAME                 READY
virtualmachineinstance.kubevirt.io/vmi-fedora   7m12s   Running   10.42.0.14   microshift.example.com   True

NAME                                 READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vmi-fedora-dq5dc   2/2     Running   0          7m12s
[root@rpi vmi]# virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Service vmi-fedora-ssh successfully exposed for vmi vmi-fedora
[root@rpi vmi]# oc get svc vmi-fedora-ssh # Get the nodeport
NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
vmi-fedora-ssh   NodePort   10.43.182.225   <none>        22:31102/TCP   15s
[root@rpi vmi]# podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift # Get the podman_ip_address
10.88.0.2
[root@rpi vmi]# oc run -i --tty ssh-proxy --rm --image=karve/alpine-sshclient:arm64 --restart=Never -- /bin/sh -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@10.88.0.2 -p 31102"
If you don't see a command prompt, try pressing enter.

[fedora@vmi-fedora ~]$ sudo dnf install -y qemu-guest-agent >/dev/null
[fedora@vmi-fedora ~]$ sudo systemctl enable --now qemu-guest-agent
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.251.40.142) 56(84) bytes of data.
64 bytes from lga25s80-in-f14.1e100.net (142.251.40.142): icmp_seq=1 ttl=115 time=4.04 ms
64 bytes from lga25s80-in-f14.1e100.net (142.251.40.142): icmp_seq=2 ttl=115 time=4.35 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 4.043/4.197/4.351/0.154 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.88.0.2 closed.
pod "ssh-proxy" deleted

The QEMU guest agent that we installed is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks.
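Once the agent is running, KubeVirt reports an AgentConnected condition on the VMI, which can be checked from the host. A sketch; the agent_connected helper and the jsonpath query are my own assumptions about how to surface that condition:

```shell
# agent_connected: succeed when the AgentConnected condition status is "True".
agent_connected() {
  [ "$1" = "True" ]
}

# Hypothetical check against the running VMI:
#   status=$(oc get vmi vmi-fedora -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}')
#   agent_connected "$status" && echo "guest agent is reporting"
```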

After we are done, we can delete the all-in-one microshift container.

podman rm -f microshift && podman volume rm microshift-data

or, if it was started using systemd, then

systemctl stop microshift && podman volume rm microshift-data
rm -f /usr/lib/systemd/system/microshift.service
systemctl daemon-reload

Conclusion

In this Part 20, we saw multiple options to run MicroShift on the Raspberry Pi 4 with Arch Linux (64 bit). We used dynamic persistent volumes to install InfluxDB/Telegraf/Grafana with a dashboard to show SenseHat sensor data. We ran samples that used the SenseHat/USB camera and worked with a sample that sent the pictures and web socket messages to Node Red when a person was detected. We installed the OKD Web Console and saw how to connect to a Virtual Machine Instance using KubeVirt on MicroShift with Arch Linux. We built and ran a Quarkus sample within a pod. Finally, we saw how to run Jupyter notebooks with license plate recognition, object detection, image segmentation, image classification and audio keyword recognition. In Part 21, we will work with Fedora 36 Silverblue.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.

#MicroShift #Openshift #containers #crio #Edge #raspberry-pi #archlinux #quarkus
