MicroShift and KubeVirt on Raspberry Pi 4 with Fedora 36 Silverblue
Introduction
MicroShift is a research project that is exploring how the OpenShift/OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit), and further in Part 9, we looked at Virtualization with MicroShift on the Raspberry Pi 4 with Raspberry Pi OS (64 bit). In Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with CentOS 8 Stream. In Part 6, we deployed MicroShift on the Raspberry Pi 4 with Ubuntu 20.04. In Part 8, we looked at the All-In-One install of MicroShift on balenaOS. In Part 10, Part 11, and Part 12, we deployed MicroShift and KubeVirt on Fedora 35 IoT, Fedora 35 Server and Fedora 35 CoreOS respectively; Part 13 with Ubuntu 22.04, Part 14 on Rocky Linux, Part 15 on openSUSE, Part 16 on Oracle Linux, Part 17 on AlmaLinux, Part 18 on Manjaro, Part 19 on Kali Linux and Part 20 on Arch Linux. In this Part 21, we will work with MicroShift on Fedora 36 Silverblue. We will run an object detection sample and send messages to Node Red installed on MicroShift. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift.
Fedora Silverblue is an immutable desktop operating system that aims to be extremely stable and reliable. It also aims to be an excellent platform for developers and for those using container-focused workflows. For working with containers, buildah and podman are recommended. Applications and containers are kept separate from the host system, improving stability and reliability. The root filesystem is immutable. This means that /, /usr and everything below it is read-only. Silverblue’s runtime state is stored in /var with symlinks to make traditional state-carrying directories available in their expected locations. Separate home partitions should be mounted on /var/home. When a package is installed with rpm-ostree, a new OS image is composed by adding the RPM payload to the existing OS image, and creating a new, combined image. To see the newly installed RPMs, the system needs to be rebooted with the new image. rpm-ostree also takes care of recreating the layered image whenever you update the base OS image. It is also possible to roll back to the previous version of the operating system if something goes wrong. Silverblue is a variant of Fedora Workstation. It feels like a regular desktop operating system. Fedora Silverblue uses the same core technology as Fedora Atomic Host (as well as its successor, Fedora CoreOS). However, Silverblue is specifically focused on workstation/desktop use cases. Silverblue also comes with the toolbox utility, which uses containers to provide an environment where development tools and libraries can be installed and used.
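For example, after layering a package or applying an update, the booted and pending deployments can be inspected, and a problematic update rolled back, with the standard rpm-ostree commands:
rpm-ostree status      # show the booted and pending deployments
rpm-ostree rollback    # switch the default boot entry back to the previous deployment
systemctl reboot       # boot into the rolled-back deployment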
Setting up the Raspberry Pi 4 with Fedora Silverblue
To run Fedora Silverblue on a Raspberry Pi 4, the MicroSDXC card needs to be prepared with Unified Extensible Firmware Interface (UEFI) support on another system and then moved to the RPi4. We will create a disk image in a Fedora VM on the MacBook Pro, copy the image out from the VM to the MacBook, and write it to the MicroSDXC card. Additionally, we will create a USB drive with the Silverblue iso that we will use to install Silverblue from scratch on the Raspberry Pi 4. We essentially follow the steps below, with minor changes, to download Silverblue and create an image file for writing.
1. Download the latest version of the aarch64 Fedora Silverblue iso onto your MacBook Pro and write it to a USB drive using balenaEtcher. This will be used later to install Fedora Silverblue on the MicroSDXC card.
wget https://download.fedoraproject.org/pub/fedora/linux/releases/36/Silverblue/aarch64/iso/Fedora-Silverblue-ostree-aarch64-36-1.5.iso
2. There is a UEFI firmware implementation for the RPi4 (pftf/RPi4) that attempts to make the RPi4 ServerReady (SBBR compliant) and allows you to pretend that the RPi4 is just like any other server hardware with UEFI. We prepare the MicroSDXC card image with UEFI. I used a 64GB MicroSDXC card. After creating the new partition table, create the first and only new partition. This partition will hold the UEFI system on a FAT32 filesystem. This partition can be small (40MB). For Raspberry Pi 4s with 4GB or 8GB of RAM, UEFI is configured by default to limit the RAM reported to the operating system to 3GB. We use the customized UEFI build that removes this limit.
git clone https://github.com/thinkahead/podman.git
cd podman
vagrant up
vagrant ssh
sudo su -
yum -y install unzip wget
dd if=/dev/zero of=/home/silverblue.img bs=1024 count=40960 # fallocate -l 40M "/home/silverblue.img"
losetup -fP /home/silverblue.img
losetup --list
parted --script /dev/loop0 mklabel msdos
parted --script /dev/loop0 mkpart primary fat32 0% 38M
lsblk /dev/loop0
# Create the FAT filesystem
mkfs.vfat -s1 -F32 /dev/loop0p1
#VERSION=v1.33 # use latest one from https://github.com/pftf/RPi4/releases
VERSION=v1.32 # use latest one from https://github.com/FwMotion/RPi4/releases
UEFIDISK=/dev/loop0p1
sudo mkfs.vfat $UEFIDISK
mkdir /tmp/UEFIdisk
sudo mount $UEFIDISK /tmp/UEFIdisk
pushd /tmp/UEFIdisk
#wget https://github.com/pftf/RPi4/releases/download/${VERSION}/RPi4_UEFI_Firmware_${VERSION}.zip -O /tmp/RPi4_UEFI_Firmware_${VERSION}.zip
wget https://github.com/FwMotion/RPi4/releases/download/${VERSION}/RPi4_UEFI_Firmware_${VERSION}.zip -O /tmp/RPi4_UEFI_Firmware_${VERSION}.zip
sudo unzip /tmp/RPi4_UEFI_Firmware_${VERSION}.zip
sudo rm -f /tmp/RPi4_UEFI_Firmware_${VERSION}.zip
popd
sudo umount /tmp/UEFIdisk
losetup -d /dev/loop0
exit
exit
vagrant scp :/home/silverblue.img .
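Note: the scp subcommand used above is provided by the vagrant-scp plugin; if it is not already present in your Vagrant setup (an assumption about the environment), install it first:
vagrant plugin install vagrant-scp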
3. Write the silverblue.img created above to the MicroSDXC card using balenaEtcher or the Raspberry Pi Imager
4. Have a Keyboard, Mouse and Monitor connected to the Raspberry Pi 4
5. Insert the MicroSDXC card and USB drive into the Raspberry Pi 4 and power on
6. Go through the install process.
Click on “Reboot System”
Remove the USB drive.
7. Hit ESC on the keyboard when it boots from the UEFI
8. If you used the pftf/RPi4 firmware, disable the 3GB memory limitation. In the UEFI firmware menu go to:
Device Manager -> Raspberry Pi Configuration -> Advanced Configuration -> Limit RAM to 3GB -> Disabled
F10 to save -> Y to confirm
Esc to top level menu.
9. With the UEFI Firmware in ACPI mode (the default) you won’t get access to GPIO (i.e. no Pi HATs will work). To get access to GPIO pins you’ll need to change the setting to DeviceTree mode in the UEFI menus.
Device Manager -> Raspberry Pi Configuration -> Advanced Configuration -> System Table Selection -> DeviceTree
F10 to save -> Y to confirm
Esc to top level menu and select Reset to cycle the system.
10. The system will boot into the UEFI. Hit Enter (or wait for the PXE boot to time out) and it will show Booting `Fedora Linux 36.1.5 (Silverblue) (ostree:0)’. Then it will start the configuration UI showing the “Welcome to Fedora Linux 36!” screen. Note that it will take a couple of minutes to get to that UI.
11. Do not enable 3rd party repositories. If you do, you can disable them later with
sudo fedora-third-party disable
rm -f /etc/yum.repos.d/_copr_phracek-PyCharm.repo
rm -f /etc/yum.repos.d/google-chrome.repo
rm -f /etc/yum.repos.d/rpmfusion-nonfree-*
This is required to avoid the error “Last error: Status code: 404 for https://copr-be.cloud.fedoraproject.org/results/phracek/PyCharm/fedora-36-aarch64/repodata/repomd.xml”; phracek/PyCharm is not available for arm64.
12. Set the user rpi and password. You can also optionally enable WiFi using the installer GUI. When the install is complete, open a terminal using the attached keyboard, mouse and monitor. Enable the ssh daemon and find the IP address so that you can establish a remote ssh session.
systemctl enable --now sshd
ip addr
13. You can also find the DHCP IP address assigned to the Raspberry Pi 4 by running the following command for your subnet on the MacBook Pro
sudo nmap -sn 192.168.1.0/24
Log in as the rpi user with the password set earlier.
ssh rpi@$ipaddress
sudo su -
14. Enable WiFi (optional) if you did not enable it in the GUI installer
ssh rpi@$ipaddress
sudo su -
nmcli device wifi list # Note your ssid
nmcli device wifi connect $ssid --ask
15. Upgrade the system. The update command is an alias for the upgrade command.
rpm-ostree upgrade
systemctl reboot
16. Set the hostname with a domain
hostnamectl hostname microshift.example.com
echo "$ipaddress microshift microshift.example.com" >> /etc/hosts
17. Check the cgroup information
mount | grep cgroup
cat /proc/cgroups | column -t # Check that memory and cpuset are present
Output:
[root@microshift ~]# mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)
[root@microshift ~]# cat /proc/cgroups | column -t # Check that memory and cpuset are present
#subsys_name hierarchy num_cgroups enabled
cpuset 0 119 1
cpu 0 119 1
cpuacct 0 119 1
blkio 0 119 1
memory 0 119 1
devices 0 119 1
freezer 0 119 1
net_cls 0 119 1
perf_event 0 119 1
net_prio 0 119 1
pids 0 119 1
misc 0 119 1
18. Check the release
cat /etc/redhat-release
cat /etc/os-release # cat /usr/lib/os-release
Output:
[root@microshift ~]# cat /etc/redhat-release
Fedora release 36 (Thirty Six)
[root@microshift ~]# uname -a
Linux microshift.example.com 5.18.5-200.fc36.aarch64 #1 SMP PREEMPT_DYNAMIC Thu Jun 16 14:28:32 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
[root@microshift microshift]# cat /etc/os-release
NAME="Fedora Linux"
VERSION="36.20220617.0 (Silverblue)"
ID=fedora
VERSION_ID=36
VERSION_CODENAME=""
PLATFORM_ID="platform:f36"
PRETTY_NAME="Fedora Linux 36.20220617.0 (Silverblue)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:36"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-silverblue/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=36
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=36
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Silverblue"
VARIANT_ID=silverblue
OSTREE_VERSION='36.20220617.0'
Install the dependencies for MicroShift and SenseHat
Configure RPM repositories
curl -L -o /etc/yum.repos.d/fedora-modular.repo https://src.fedoraproject.org/rpms/fedora-repos/raw/rawhide/f/fedora-modular.repo
curl -L -o /etc/yum.repos.d/fedora-updates-modular.repo https://src.fedoraproject.org/rpms/fedora-repos/raw/rawhide/f/fedora-updates-modular.repo
curl -L -o /etc/yum.repos.d/group_redhat-et-microshift-fedora-36.repo https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift/repo/fedora-36/group_redhat-et-microshift-fedora-36.repo
Enable cri-o and install microshift. We can run multiple rpm-ostree install commands before rebooting. When rebooting, all the changes will be applied at once.
rpm-ostree ex module enable cri-o:1.23 # Experimental enable
rpm-ostree install i2c-tools cri-o cri-tools microshift
Install dependencies to build RTIMULib
rpm-ostree install git zlib-devel libjpeg-devel gcc gcc-c++ python3-devel python3-pip cmake
rpm-ostree install kernel-devel kernel-headers ncurses-devel
Setting up libvirtd on the host
rpm-ostree install libvirt-client libvirt-nss qemu-system-aarch64 virt-manager virt-install virt-viewer libguestfs-tools # dmidecode
# Works with nftables on Fedora IoT and SilverBlue
# vi /etc/firewalld/firewalld.conf # FirewallBackend=iptables
systemctl reboot
systemctl enable --now libvirtd
virt-host-validate qemu
rpm-ostree status
Output:
[root@microshift microshift]# rpm-ostree status
State: idle
Deployments:
● fedora:fedora/36/aarch64/silverblue
Version: 36.20220617.0 (2022-06-17T00:50:56Z)
BaseCommit: fa33551f70331c08e86db0bb5b15c42fb4cb6ef403014046f8f407f682688422
GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4
LayeredPackages: 'gcc-c++' cmake cri-o cri-tools gcc git i2c-tools kernel-devel kernel-headers libguestfs-tools libjpeg-devel libvirt-client
libvirt-nss microshift ncurses-devel python3-devel python3-pip qemu-system-aarch64 virt-install virt-manager virt-viewer
zlib-devel
EnabledModules: cri-o:1.23
fedora:fedora/36/aarch64/silverblue
Version: 36.20220617.0 (2022-06-17T00:50:56Z)
BaseCommit: fa33551f70331c08e86db0bb5b15c42fb4cb6ef403014046f8f407f682688422
GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4
LayeredPackages: i2c-tools
EnabledModules: cri-o:1.23
Installing sense_hat and RTIMULib on Fedora Silverblue
The Sense HAT is an add-on board for the Raspberry Pi. The Sense HAT has an 8×8 RGB LED matrix, a five-button joystick, and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, Humidity. If you have the Sense HAT attached, install the libraries. We will install the default libraries, then overwrite the /usr/local/lib/python3.10/site-packages/sense_hat-2.2.0-py3.10.egg/sense_hat/sense_hat.py to use smbus after installing RTIMULib a few steps below.
Install sensehat
i2cget -y 1 0x6A 0x75
i2cget -y 1 0x5f 0xf
i2cdetect -y 1
lsmod | grep st_
pip3 install Cython Pillow numpy sense_hat smbus
Output:
[root@microshift ~]# i2cget -y 1 0x6A 0x75
0x57
[root@microshift ~]# i2cget -y 1 0x5f 0xf
0xbc
[root@microshift ~]# i2cdetect -y 1
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --
The Raspberry Pi build comes with the Industrial I/O modules preloaded. We get initialization errors on some of the sensors because the Industrial I/O modules grab on to the i2c sensors on the Sense HAT and refuse to let them go or allow them to be read correctly.
[root@microshift ~]# lsmod | grep st_
st_magn_spi 16384 0
st_pressure_spi 16384 0
st_sensors_spi 16384 2 st_pressure_spi,st_magn_spi
regmap_spi 16384 1 st_sensors_spi
st_magn_i2c 16384 0
st_pressure_i2c 16384 0
st_magn 20480 2 st_magn_i2c,st_magn_spi
st_pressure 16384 2 st_pressure_i2c,st_pressure_spi
st_sensors_i2c 16384 2 st_pressure_i2c,st_magn_i2c
st_sensors 28672 6 st_pressure,st_pressure_i2c,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi
industrialio_triggered_buffer 16384 2 st_pressure,st_magn
industrialio 98304 9 st_pressure,industrialio_triggered_buffer,st_sensors,st_pressure_i2c,kfifo_buf,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi
We need to blacklist the modules and reboot for the change to take effect.
cat << EOF > /etc/modprobe.d/blacklist-industrialio.conf
blacklist st_magn_spi
blacklist st_pressure_spi
blacklist st_sensors_spi
blacklist st_pressure_i2c
blacklist st_magn_i2c
blacklist st_pressure
blacklist st_magn
blacklist st_sensors_i2c
blacklist st_sensors
blacklist industrialio_triggered_buffer
blacklist industrialio
EOF
reboot
Check the Sense Hat with i2cdetect
ssh rpi@$ipaddress
sudo su -
i2cdetect -y 1
Output:
[root@microshift ~]# i2cdetect -y 1
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --
[root@microshift ~]# lsmod | grep st_
Empty
Install RTIMULib
git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11 # Ctrl-C to break
cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10 # Ctrl-C to break
Replace the sense_hat.py with the new file that uses SMBus
cd /root
git clone https://github.com/thinkahead/microshift.git
cd /root/microshift/raspberry-pi/sensehat-fedora-iot
# Update the python package to use the i2cbus
cp -f sense_hat.py.new /usr/local/lib/python3.10/site-packages/sense_hat/sense_hat.py
Test the SenseHat samples for the Sense Hat's LED matrix
# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt
# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt
# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt
# Show two digits for multiple numbers
python3 digits.py
# First time you run the temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py
# Use the new get_state method from sense_hat.py
python3 joystick.py # U=Up D=Down L=Left R=Right M=Press
# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py
# Find Magnetic North
python3 compass.py
Test the USB camera
Install the latest pygame. Note that pygame 1.9.6 will throw “SystemError: set_controls() method: bad call flags”. So, you need to upgrade pygame to 2.1.0.
pip3 install pygame --upgrade
python3 testcam.py # It will create a file 101.bmp
Install the oc and kubectl clients
ARCH=arm64
cd /tmp
export OCP_VERSION=4.9.11 && \
curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
tar -xzvf oc.tar.gz && \
rm -f oc.tar.gz && \
install -t /usr/local/bin {kubectl,oc} && \
rm -f {README.md,kubectl,oc}
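Verify the clients, for example:
oc version --client
kubectl version --client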
Replace MicroShift binary - The microshift service references the microshift binary in the /usr/bin directory
[root@microshift ~]# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service
[Service]
WorkingDirectory=/usr/bin/
ExecStart=microshift run
Restart=always
User=root
[Install]
WantedBy=multi-user.target
The microshift binary from Apr 20, 2022 installed in /usr/bin by the rpm installer does not work with Fedora 36. It causes a crash-loop with the following error in journalctl:
Jun 10 21:12:36 microshift.example.com microshift[5336]: unexpected fault address 0x0
Jun 10 21:12:36 microshift.example.com microshift[5336]: fatal error: fault
Jun 10 21:12:36 microshift.example.com microshift[5336]: [signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x1c0cbdf]
Jun 10 21:12:36 microshift.example.com microshift[5336]: goroutine 48915 [running]:
We will replace it with a newer binary from May 11, 2022 downloaded from https://github.com/openshift/microshift/releases/. Note that the microshift-linux-arm64 binaries from the 05-11-2022 Latest Nightly Build and from 04-20-2022 (both 4.8.0-0.microshift-2022-04-20-182108 and 4.8.0-0.microshift-2022-04-20-141053) work. The microshift version installed by rpm shows 4.8.0-0.microshift-2022-04-20-141053 but it does not work. You may also build the microshift binary from sources as shown in a later section.
Files that ship in packages downloaded from the distribution repository go into /usr/lib/systemd/system. Modifications done by users go into /etc/systemd/system/. Unit files in /etc/systemd/system override those in /usr/lib/systemd/system.
curl -L https://github.com/openshift/microshift/releases/download/nightly/microshift-linux-arm64 > /usr/local/bin/microshift
chmod +x /usr/local/bin/microshift
/usr/local/bin/microshift version
cp /usr/lib/systemd/system/microshift.service /etc/systemd/system/microshift.service
# vi /etc/systemd/system/microshift.service # Change path to /usr/local/bin
sed -i "s|/usr/bin|/usr/local/bin|" /etc/systemd/system/microshift.service
We need to run systemctl daemon-reload to pick up the changed configuration from the filesystem and regenerate the dependency tree.
systemctl daemon-reload
Start MicroShift
ls /opt/cni/bin/ # empty
ls /usr/libexec/cni # cni plugins
systemctl enable --now crio microshift
Configure firewalld - You may read about selecting zones for your interfaces.
systemctl enable firewalld --now
firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=5353/udp --permanent
firewall-cmd --reload
Additional ports may need to be opened. For external access to run kubectl or oc commands against MicroShift, add the 6443 port:
firewall-cmd --zone=public --permanent --add-port=6443/tcp
For access to services through NodePort, add the port range 30000-32767:
firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp
#firewall-cmd --zone=public --add-port=10250/tcp --permanent
#firewall-cmd --zone=public --add-port=10251/tcp --permanent
firewall-cmd --reload
firewall-cmd --list-all --zone=public
firewall-cmd --get-default-zone
firewall-cmd --set-default-zone=public
#firewall-cmd --get-active-zones
firewall-cmd --list-all
Check the microshift and crio logs
journalctl -u microshift -f
journalctl -u crio -f
It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"
Samples to run on MicroShift
We will run a few samples that will show the use of persistent volume, template, SenseHat sensor data and the USB camera.
1. Nginx web server with persistent volume
The source code is on GitHub
cd ~/microshift/raspberry-pi/nginx
oc project default
Create the data in /var/hpvolumes/nginx/data1. The data1 subdirectory is used because the volumeMounts in nginx.yaml reference it as the subPath (a sketch of that stanza follows the copy commands below).
mkdir -p /var/hpvolumes/nginx/data1/
cp index.html /var/hpvolumes/nginx/data1/.
cp 50x.html /var/hpvolumes/nginx/data1/.
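The relevant portion of the deployment in nginx.yaml looks roughly like the following sketch (the volume name here is illustrative, not taken from the actual manifest):
        volumeMounts:
        - name: hostpath-vol
          mountPath: /usr/share/nginx/html
          subPath: data1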
We have SELinux set to Enforcing. The /var/hpvolumes directory used for creating persistent volumes will give permission-denied errors when the initContainers run. Files labeled with container_file_t are the only files that are writable by containers, so we relabel /var/hpvolumes.
restorecon -R -v /var/hpvolumes
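The label can be verified, for example:
ls -dZ /var/hpvolumes/nginx/data1 # should now show the container_file_t context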
Now we create the pv, pvc, deployment and service. There will be two replicas of nginx sharing the same persistent volume.
#oc apply -f hostpathpv.yaml -f hostpathpvc.yaml -f nginx.yaml
oc apply -f hostpathpv.yaml # Persistent Volume
oc apply -f hostpathpvc.yaml # Persistent Volume Claim
oc apply -f nginx.yaml # Deployment and Service
Let’s log in to one of the pods to see the index.html. Also submit a curl request to nginx
oc get pods,deployments
oc exec -it deployment.apps/nginx-deployment -- cat /usr/share/nginx/html/index.html
curl localhost:30080 # Will return the standard nginx response from index.html
We can add a file hello in the shared volume from within the container and test the request with curl.
oc rsh deployment.apps/nginx-deployment
echo "Hello" > /usr/share/nginx/html/hello
exit
curl localhost:30080/hello
Output
[root@microshift nginx]# curl localhost:30080/hello
Hello
Change the replicas to 1
oc scale deployment.v1.apps/nginx-deployment --replicas=1
Output:
[root@microshift nginx]# oc scale deployment.v1.apps/nginx-deployment --replicas=1
deployment.apps/nginx-deployment scaled
[root@microshift nginx]# oc get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-7f888f8ff7-4rr8f 0/1 Terminating 0 14m
nginx-deployment-7f888f8ff7-gn6j2 1/1 Running 0 14m
We can also test nginx by exposing the nginx-svc service as a route, as sketched below.
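For example (the resulting route host is an assumption based on the <name>-<namespace>.cluster.local pattern used elsewhere in this article):
oc expose svc nginx-svc
oc get routes
curl nginx-svc-default.cluster.local # after adding the host to /etc/hosts as in the template sample below
When done, delete the deployment and service: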
oc delete -f nginx.yaml
Then, delete the persistent volume and claim
oc delete -f hostpathpvc.yaml
oc delete -f hostpathpv.yaml
2. Nginx web server with template
The source code is on GitHub
cd ~/microshift/raspberry-pi/nginx
oc project default
If you use a different namespace xxxx instead of default, you will need to change /etc/hosts to match nginx-xxxx.cluster.local accordingly. The nginx-template* files use the image docker.io/nginxinc/nginx-unprivileged:alpine. A DeploymentConfig does not get processed in MicroShift, so we use a template with a Deployment instead.
#oc process -f nginx-template-deploymentconfig.yml | oc apply -f - # deploymentconfig does not work in microshift
oc process -f nginx-template-deployment-8080.yml | oc apply -f - # deployment works in microshift
oc get routes
Add the following to /etc/hosts on the Raspberry Pi 4
127.0.0.1 localhost nginx-default.cluster.local
Then, submit a curl request to nginx
curl nginx-default.cluster.local
To delete nginx, run
oc process -f nginx-template-deployment-8080.yml | oc delete -f -
We can also create the template in MicroShift and process the template by name
# Either of the following two may be used:
oc create -f nginx-template-deployment-8080.yml
#oc create -f nginx-template-deployment-80.yml
oc get templates
oc process nginx-template | oc apply -f -
curl nginx-default.cluster.local
oc process nginx-template | oc delete -f -
oc delete template nginx-template
rm -rf /var/hpvolumes/nginx/
Output:
[root@microshift nginx]# oc create -f nginx-template-deployment-8080.yml
template.template.openshift.io/nginx-template created
[root@microshift nginx]# oc get templates
NAME DESCRIPTION PARAMETERS OBJECTS
nginx-template run a simple nginx server 2 (all set) 3
[root@microshift nginx]#
[root@microshift nginx]# oc process nginx-template | oc apply -f -
deployment.apps/nginx created
service/nginx created
route.route.openshift.io/nginx created
[root@microshift nginx]# curl nginx-default.cluster.local
…
<h1>Welcome to nginx!</h1>
…
[root@microshift nginx]# oc process nginx-template | oc delete -f -
deployment.apps "nginx" deleted
service "nginx" deleted
route.route.openshift.io "nginx" deleted
[root@microshift nginx]# oc delete template nginx-template
template.template.openshift.io "nginx-template" deleted
3. Postgresql database server
The source code is on GitHub
cd ~/microshift/raspberry-pi/pg
Create a new project pg. Create the configmap, pv, pvc and deployment for PostgreSQL
oc new-project pg
mkdir -p /var/hpvolumes/pg
If you have SELinux set to Enforcing, relabel the directory:
restorecon -R -v /var/hpvolumes
#oc apply -f hostpathpvc.yaml -f hostpathpv.yaml -f pg-configmap.yaml -f pg.yaml
oc create -f pg-configmap.yaml
oc create -f hostpathpv.yaml
oc create -f hostpathpvc.yaml
oc apply -f pg.yaml
oc get configmap
oc get svc pg-svc
oc get all -lapp=pg
oc logs deployment/pg-deployment -f
Connect to the database
oc exec -it deployment.apps/pg-deployment -- bash
psql --host localhost --port 5432 --user postgresadmin --dbname postgresdb # test123 as password
Create a TABLE cities and insert a couple of rows
CREATE TABLE cities (name varchar(80), location point);
\t
INSERT INTO cities VALUES ('Madison', '(89.40, 43.07)'),('San Francisco', '(-122.43,37.78)');
SELECT * from cities;
\d
\q
exit
Let's delete the deployment and recreate it
oc delete deployment.apps/pg-deployment
oc apply -f pg.yaml
oc logs deployment/pg-deployment -f
Now we can connect to the database and look at the cities table
oc exec -it deployment.apps/pg-deployment -- bash
psql --host localhost --port 5432 --user postgresadmin --dbname postgresdb # test123 as password
SELECT * FROM cities;
\q
exit
Finally, we delete the deployment and project
oc delete -f pg.yaml
oc delete -f pg-configmap.yaml
oc delete -f hostpathpvc.yaml
oc delete -f hostpathpv.yaml
oc delete project pg
rm -rf /var/hpvolumes/pg/
4. Running a Virtual Machine Instance on MicroShift
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
# LATEST=20220624
LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt
Output:
[root@microshift vmi]# oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}"
Deployed
[root@microshift vmi]# oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
kubevirt.kubevirt.io/kubevirt condition met
We can build the OKD Web Console (Codename: “bridge”) from source as mentioned in Part 9. Here, we will run the “bridge” as a container image within MicroShift.
cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'
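If you prefer to paste the decoded token value directly rather than reference the secret name, it can be extracted as follows (a sketch; which of the two secrets holds the token may vary):
SECRET=$(oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}')
oc get secret $SECRET --namespace=kube-system -o jsonpath='{.data.token}' | base64 -d; echo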
In okd-web-console-install.yaml, replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value https://192.168.1.209:6443 with your Raspberry Pi 4's IP address, and set the secretRef token for BRIDGE_K8S_AUTH_BEARER_TOKEN to the console-token-* secret name from the two commands above. Then apply/create the okd-web-console-install.yaml.
vi okd-web-console-install.yaml
oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc -n kube-system wait deployment console-deployment --for condition=Available --timeout=300s
oc logs deployment/console-deployment -f -n kube-system
Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/. If you see a blank page, you probably have the value of BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT set incorrectly.
We can optionally preload the fedora image into crio (if using the all-in-one containerized approach, this needs to be run within the microshift pod running in podman)
crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.
cd /root/microshift/raspberry-pi/vmi
oc project default
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods
The virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”. Note down the IP address of the vmi-fedora Virtual Machine Instance. Connect directly to the VMI from the Raspberry Pi 4 with fedora as the password. Note that it will take another minute after the VMI reaches the Running state before you can ssh to the instance.
oc get vmi
ssh -o StrictHostKeyChecking=no fedora@vmipaddress ping -c 2 google.com
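Instead of copying the address by hand, it can also be read from the VMI status with jsonpath, for example (a sketch):
VMI_IP=$(oc get vmi vmi-fedora -o jsonpath='{.status.interfaces[0].ipAddress}')
ssh -o StrictHostKeyChecking=no fedora@$VMI_IP ping -c 2 google.com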
Another way to connect to the VM is to use the virtctl console. You can compile your own virtctl as was described in Part 9. To simplify, we copy the virtctl arm64 binary from a prebuilt container image to /usr/local/bin on the Raspberry Pi 4 and connect to the VMI using the “virtctl console” command.
id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id
virtctl console vmi-fedora # Ctrl-] to detach
When done, we can delete the VMI
oc delete -f vmi-fedora.yaml
5. Running a Virtual Machine on MicroShift
Install the kubevirt-operator-arm64 as in “4. Running a Virtual Machine Instance on MicroShift” above. Then, create the VM using the vm-fedora.yaml
cd /root/microshift/raspberry-pi/vmi
oc project default
oc apply -f vm-fedora.yaml
oc get pods,vm,vmi -n default
Output:
[root@microshift vmi]# oc apply -f vm-fedora.yaml
virtualmachine.kubevirt.io/vm-example created
[root@microshift vmi]# oc get pods,vm -n default
NAME AGE STATUS READY
virtualmachine.kubevirt.io/vm-example 23s Stopped False
Start the virtual machine
# virtctl start vm-example
kubectl patch virtualmachine vm-example --type merge -p '{"spec":{"running":true}}' -n default
Note down the IP address of the vm-example Virtual Machine Instance. Then, ssh to the Fedora VMI from an sshclient container, as shown in the output below.
Output:
[root@microshift vmi]# oc apply -f vm-fedora.yaml
virtualmachine.kubevirt.io/vm-example created
[root@microshift vmi]# oc get pods,vm -n default
NAME AGE STATUS READY
virtualmachine.kubevirt.io/vm-example 41s Stopped False
[root@microshift vmi]# kubectl patch virtualmachine vm-example --type merge -p '{"spec":{"running":true}}' -n default
virtualmachine.kubevirt.io/vm-example patched
[root@microshift vmi]# oc get pods,vm,vmi -n default
NAME READY STATUS RESTARTS AGE
pod/virt-launcher-vm-example-cvp5v 2/2 Running 0 19s
NAME AGE STATUS READY
virtualmachine.kubevirt.io/vm-example 113s Running True
NAME AGE PHASE IP NODENAME READY
virtualmachineinstance.kubevirt.io/vm-example 19s Running 10.42.0.13 microshift.example.com True
[root@microshift vmi]# kubectl run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ssh fedora@10.42.0.13
The authenticity of host '10.42.0.13 (10.42.0.13)' can't be established.
ED25519 key fingerprint is SHA256:G9gRf3kD5B87FSB33zd2aBw0K8qhYfegAHav/sZYvps.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.13' (ED25519) to the list of known hosts.
fedora@10.42.0.13's password:
[fedora@vm-example ~]$ ping -c 2 google.com
PING google.com (142.251.32.110) 56(84) bytes of data.
64 bytes from lga25s77-in-f14.1e100.net (142.251.32.110): icmp_seq=1 ttl=116 time=6.77 ms
64 bytes from lga25s77-in-f14.1e100.net (142.251.32.110): icmp_seq=2 ttl=116 time=5.96 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 5.962/6.367/6.772/0.405 ms
[fedora@vm-example ~]$ exit
logout
Connection to 10.42.0.13 closed.
/ # exit
Session ended, resume using 'kubectl attach sshclient -c sshclient -i -t' command when the pod is running
pod "sshclient" deleted
We can access the OKD Web Console and run Actions (Restart, Stop, Pause, etc.) on the VM at http://console-np-service-kube-system.cluster.local/
Stop the virtual machine using kubectl:
# virtctl stop vm-example
kubectl patch virtualmachine vm-example --type merge -p '{"spec":{"running":false}}' -n default
Delete the VM
oc delete -f vm-fedora.yaml
We can run other VM and VMI samples for alpine, cirros and fedora images as in Part 9. When done, you may delete the KubeVirt operator.
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
6. Install Metrics Server
This will enable us to run the “kubectl top” and “oc adm top” commands.
curl -L https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml > metrics-server-components.yaml
oc apply -f metrics-server-components.yaml
# Wait for the metrics-server to start in the kube-system namespace
oc get deployment metrics-server -n kube-system
oc get events -n kube-system
# Wait for a couple of minutes for metrics to be collected
oc get --raw /apis/metrics.k8s.io/v1beta1/nodes
oc get --raw /apis/metrics.k8s.io/v1beta1/pods
oc get --raw /api/v1/nodes/$(kubectl get nodes -o json | jq -r '.items[0].metadata.name')/proxy/stats/summary
Output:
[root@microshift vmi]# oc adm top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
microshift.example.com 717m 17% 3072Mi 40%
[root@microshift vmi]# oc adm top pods -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system console-deployment-559c9d4445-8pdj6 2m 18Mi
kube-system kube-flannel-ds-b6w8z 10m 16Mi
kube-system metrics-server-64cf6869bd-zlzvt 13m 14Mi
kubevirt-hostpath-provisioner kubevirt-hostpath-provisioner-rqp4l 1m 6Mi
openshift-dns dns-default-66s7n 6m 19Mi
openshift-dns node-resolver-rld7k 0m 13Mi
openshift-ingress router-default-85bcfdd948-5dzg4 5m 38Mi
openshift-service-ca service-ca-7764c85869-8cnzt 55m 41Mi
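The raw endpoints shown above return JSON that can be filtered with jq, for example (a sketch):
oc get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq '.items[].usage'
oc get --raw /apis/metrics.k8s.io/v1beta1/pods | jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name)"'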
7. InfluxDB/Telegraf/Grafana
The source code for this influxdb sample is available on GitHub.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb
restorecon -R -v /var/hpvolumes
If you want to run all the steps in a single command, get the nodename.
oc get nodes
Output:
[root@microshift influxdb]# oc get nodes
NAME STATUS ROLES AGE VERSION
microshift.example.com Ready <none> 12h v1.21.0
Replace the annotation kubevirt.io/provisionOnNode with the nodename above and execute the runall-fedora-dynamic.sh. This will create a new project influxdb. Note that the node name within the container may be different when running MicroShift with the all-in-one containerized approach (microshift-aio.example.com is used in a later section).
sed -i "s|coreos|microshift.example.com|" influxdb-data-dynamic.yaml
sed -i "s|coreos|microshift.example.com|" grafana/grafana-data-dynamic.yaml
./runall-fedora-dynamic.sh
We create and push the “measure-fedora:latest” image using the Dockerfile that uses SMBus. The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.
This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PV.
annotations:
kubevirt.io/provisionOnNode: microshift.example.com
spec:
storageClassName: kubevirt-hostpath-provisioner
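Put together, a minimal claim using these settings might look like the following sketch (the name and size are illustrative; the actual definitions are in influxdb-data-dynamic.yaml and grafana/grafana-data-dynamic.yaml):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
  annotations:
    kubevirt.io/provisionOnNode: microshift.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi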
Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.
Finally, after you are done working with this sample, you can run the deleteall-fedora-dynamic.sh
./deleteall-fedora-dynamic.sh
Deleting the persistent volume claims automatically deletes the persistent volumes.
8. Node Red live data dashboard with SenseHat sensor charts
We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered
Build and push the arm64v8 image “karve/nodered-fedora:arm64”
cd docker-custom/
./docker-debianonfedora.sh
podman push docker.io/karve/nodered-fedora:arm64
cd ..
Deploy Node Red with persistent volume for /data within the node red container
mkdir /var/hpvolumes/nodered
restorecon -R -v /var/hpvolumes
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml
oc get routes
oc -n nodered wait deployment nodered-deployment --for condition=Available --timeout=300s
oc logs deployment/nodered-deployment -f
Add the IP address of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your laptop and browse to http://nodered-svc-nodered.cluster.local/
The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 and Flow 2 have already been imported from the nodered sample. This import into Node Red can also be done manually under “Import Nodes”, followed by a click on “Deploy”. The node-red-node-pi-sense-hat module requires a change in sensehat.py in order to use the sense_hat.py.new that uses smbus and a new function for the joystick. This change is accomplished by overwriting sensehat.py with the modified version in Dockerfile.debianonfedora (docker.io/karve/nodered-fedora:arm64 built using docker-debianonfedora.sh), which is further copied from the /tmp directory to the correct volume when the pod starts in nodered2.yaml.
Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see the Home by Default. You can see the state of the Joystick Up, Down, Left, Right or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT.
If you selected the Flow 1, you could click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED.
We can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:
cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml -n nodered
9. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red
This example requires the same Node Red setup as in the previous sample (8. Node Red live data dashboard).
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection
We will build the image for object detection and use a pod in MicroShift to send pictures and web socket chat messages to Node Red when a person is detected.
cp ../sensehat-fedora-iot/sense_hat.py.new .
# Use buildah or podman to build the image for object detection
buildah bud -f Dockerfile.fedora -t docker.io/karve/object-detection-raspberrypi4-fedora .
#podman build -f Dockerfile.fedora -t docker.io/karve/object-detection-raspberrypi4-fedora . # Select the docker.io/balenalib/raspberrypi4-64-debian:latest
podman push docker.io/karve/object-detection-raspberrypi4-fedora:latest
Update the env WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection-fedora.yaml to point to your Raspberry Pi 4's IP address (192.168.1.209 shown below).
env:
- name: WebSocketURL
value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
- name: ImageUploadURL
value: http://nodered-svc-nodered.cluster.local/upload
hostAliases:
- hostnames:
- nodered-svc-nodered.cluster.local
ip: 192.168.1.209
oc project default
oc apply -f object-detection-fedora.yaml
We will see pictures being sent to Node Red when a person is detected at http://nodered-svc-nodered.cluster.local/#flow/3e30dc50ae28f61f and chat messages at http://nodered-svc-nodered.cluster.local/chat. When we are done testing, we can delete the deployment.
cd ~/microshift/raspberry-pi/object-detection
oc delete -f object-detection-fedora.yaml
Cleanup MicroShift
We can use the ~/microshift/hack/cleanup.sh script to remove the pods and images.
Output:
[root@microshift hack]# ./cleanup.sh
DATA LOSS WARNING: Do you wish to stop and cleanup ALL MicroShift data AND cri-o container workloads?
1) Yes
2) No
#? 1
Stopping microshift
Removing crio pods
Removing crio containers
Removing crio images
Killing conmon, pause processes
Removing /var/lib/microshift
Cleanup succeeded
Containerized MicroShift on Fedora Silverblue
We can run MicroShift within containers in two ways:
- MicroShift Containerized – The MicroShift binary runs in a Podman container, CRI-O Systemd service runs directly on the host and data is stored at /var/lib/microshift and /var/lib/kubelet on the host VM.
- MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a Docker container and data is stored in a docker volume, microshift-data. This should be used for “Testing and Development” only.
Microshift Containerized
We will use the prebuilt image.
IMAGE=quay.io/microshift/microshift:4.8.0-0.microshift-2022-04-20-182108-linux-arm64
podman pull $IMAGE
podman run --rm --ipc=host --network=host --privileged -d --name microshift -v /var/run:/var/run -v /sys:/sys:ro -v /var/lib:/var/lib:rw,rshared -v /lib/modules:/lib/modules -v /etc:/etc -v /run/containers:/run/containers -v /var/log:/var/log -e KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig $IMAGE
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "podman ps;oc get nodes;oc get pods -A;crictl pods"
The microshift container runs in podman and the rest of the pods in crio on the host. Now, we can run the samples shown earlier.
After we are done, we can delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.
podman stop microshift
After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.
MicroShift Containerized All-In-One
Let’s stop crio on the host; we will be creating an all-in-one container in podman that runs crio within the container.
systemctl stop crio
systemctl disable crio
We will run the all-in-one microshift in podman using prebuilt images.
setsebool -P container_manage_cgroup true
podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift-aio.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64
We can inspect the microshift-data volume to find the path
[root@microshift vmi]# podman volume inspect microshift-data
[
{
"Name": "microshift-data",
"Driver": "local",
"Mountpoint": "/var/lib/containers/storage/volumes/microshift-data/_data",
"CreatedAt": "2022-06-25T09:26:20.195483756-04:00",
"Labels": {},
"Scope": "local",
"Options": {},
"MountCount": 0
}
]
On the host Raspberry Pi, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "podman ps;oc get nodes;oc get pods -A"
The crio service is stopped on the Raspberry Pi, so the crictl command will not work directly on the Pi. The crictl commands will work within the microshift container in podman.
podman exec -it microshift crictl ps -a
Output:
NAME STATUS ROLES AGE VERSION
microshift.example.com Ready <none> 2m49s v1.21.0
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-flannel-ds-pll24 1/1 Running 0 2m48s
kubevirt-hostpath-provisioner kubevirt-hostpath-provisioner-t78zw 1/1 Running 0 118s
openshift-dns dns-default-rdksz 2/2 Running 0 2m49s
openshift-dns node-resolver-2gdzj 1/1 Running 0 2m49s
openshift-ingress router-default-85bcfdd948-llw4m 1/1 Running 0 2m51s
openshift-service-ca service-ca-7764c85869-rrbkd 1/1 Running 0 2m53s
[root@microshift vmi]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b4c567756829 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64 /sbin/init 5 minutes ago Up 5 minutes ago 0.0.0.0:80->80/tcp, 0.0.0.0:6443->6443/tcp, 0.0.0.0:8080->8080/tcp microshift
[root@microshift vmi]# podman exec -it microshift crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
9de6a68d51193 quay.io/microshift/kube-rbac-proxy@sha256:2b5f44b11bab4c10138ce526439b43d62a890c3a02d42893ad02e2b3adb38703 About a minute ago Running kube-rbac-proxy 0 a3e97a90b3de8
b81c769754411 quay.io/microshift/coredns@sha256:07e5397247e6f4739727591f00a066623af9ca7216203a5e82e0db2fb24514a3 About a minute ago Running dns 0 a3e97a90b3de8
91e1282f35ddb quay.io/microshift/haproxy-router@sha256:706a43c785337b0f29aef049ae46fdd65dcb2112f4a1e73aaf0139f70b14c6b5 About a minute ago Running router 0 26f482a5e0735
916cbbecdcf0f quay.io/microshift/service-ca-operator@sha256:1a8e53c8a67922d4357c46e5be7870909bb3dd1e6bea52cfaf06524c300b84e8 2 minutes ago Running service-ca-controller 0 e5f186cdb241a
21c44d727d888 quay.io/microshift/hostpath-provisioner@sha256:cb0c1cc60c1ba90efd558b094ba1dee3a75e96b76e3065565b60a07e4797c04c 2 minutes ago Running kubevirt-hostpath-provisioner 0 63ed4aad6898e
4c5891bed307e 85fc911ceba5a5a5e43a7c613738b2d6c0a14dad541b1577cdc6f921c16f5b75 3 minutes ago Running kube-flannel 0 d4ef12775a6db
c2402547dbeef quay.io/microshift/flannel@sha256:13777a318497ae35593bb73499a0e2ff4cb7eda74f59c1ba7c3e88c717cbaab9 3 minutes ago Exited install-cni 0 d4ef12775a6db
c2830ee045b95 quay.io/microshift/cli@sha256:1848138e5be66753863c98b86c274bd7fb8572fe0da6f7156f1644187e4cfb84 3 minutes ago Running dns-node-resolver 0 f08f6b7224306
d85c9a74e9377 quay.io/microshift/flannel-cni@sha256:39f81dd125398ce5e679322286344a4c13dded73ea0bf4f397e5d1929b43d033 3 minutes ago Exited install-cni-bin 0 d4ef12775a6db
Now, we can run the samples shown earlier. To run the Virtual Machine examples in the all-in-one MicroShift, we need to execute the mount with --make-shared as follows in the microshift container to prevent the “Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount” event from virt-handler.
podman exec -it microshift mount --make-shared /
We may also preload the virtual machine images using "crictl pull".
podman exec -it microshift crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
For the Virtual Machine Instance Sample 4, we can connect to the vmi-fedora by exposing the ssh port for the Virtual Machine Instance as a NodePort Service after the instance is started. This NodePort is within the all-in-one pod that is running in podman.
cd /root/microshift/raspberry-pi/vmi
oc project default
oc apply -f vmi-fedora.yaml
oc get vmi,pods
virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
oc get svc vmi-fedora-ssh # Get the nodeport
podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift # Get the podman_ip_address
# Replace the $podman_ip_address and $nodeport below
oc run -i --tty ssh-proxy --rm --image=karve/alpine-sshclient:arm64 --restart=Never -- /bin/sh -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@$podman_ip_address -p $nodeport"
Output:
[root@microshift vmi]# oc get vmi,pods
NAME AGE PHASE IP NODENAME READY
virtualmachineinstance.kubevirt.io/vmi-fedora 80s Running 10.42.0.12 microshift-aio.example.com True
NAME READY STATUS RESTARTS AGE
pod/virt-launcher-vmi-fedora-ggbj7 2/2 Running 0 80s
[root@microshift vmi]# virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Service vmi-fedora-ssh successfully exposed for vmi vmi-fedora
[root@microshift vmi]# oc get svc vmi-fedora-ssh # Get the nodeport
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vmi-fedora-ssh NodePort 10.43.53.60 <none> 22:31520/TCP 1s
[root@microshift vmi]# podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift # Get the podman_ip_address
10.88.0.2
[root@microshift vmi]# oc run -i --tty ssh-proxy --rm --image=karve/alpine-sshclient:arm64 --restart=Never -- /bin/sh -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@10.88.0.2 -p 31520"
If you don't see a command prompt, try pressing enter.
fedora@10.88.0.2's password:
Last login: Sat Jun 25 14:58:00 2022 from 10.42.0.1
[fedora@vmi-fedora ~]$ ping -c2 google.com
PING google.com (142.251.32.110) 56(84) bytes of data.
64 bytes from lga25s77-in-f14.1e100.net (142.251.32.110): icmp_seq=1 ttl=116 time=5.79 ms
64 bytes from lga25s77-in-f14.1e100.net (142.251.32.110): icmp_seq=2 ttl=116 time=5.76 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 5.763/5.776/5.790/0.013 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.88.0.2 closed.
pod "ssh-proxy" deleted
Building the microshift binary
We can check out the required commit hash from GitHub and build the microshift binary. Run cleanup.sh to stop MicroShift, then restart it with the new binary from /usr/local/bin.
cd ~
mkdir build
cd build
git clone https://github.com/openshift/microshift.git
cd microshift/
git checkout a90c0810b62e64380cf1e6eb7cf1c5006088ffdc # 4.8.0-0.microshift-2022-04-20-141053-2-ga90c081
# git checkout 48863ff4cf9146906a7d7879c2ca93265c7ad662 # 4.8.0-0.microshift-2022-04-20-141053
# git checkout 78be44960257771cc0026213d674d9ca2321b379 # 4.8.0-0.microshift-2022-03-11-124751-10-g78be449
make
./microshift version
cp microshift /usr/local/bin
systemctl start microshift
Conclusion
In this Part 21, we saw multiple options to run MicroShift on the Raspberry Pi 4 with Fedora 36 Silverblue. We ran samples that used a template, persistent volumes for PostgreSQL, the Sense HAT, and a USB camera. We installed the OKD Web Console and saw how to manage a VM using KubeVirt on MicroShift. We saw an object detection sample that sent pictures and web socket messages to Node Red when a person was detected. In the next Part 22, we will work with EndeavourOS, and in Part 23 we will work with Kata Containers.
Hope you have enjoyed the article and found the samples useful. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.
References