MicroShift – Part 12: Raspberry Pi 4 with Fedora CoreOS

By Alexei Karve posted Tue April 19, 2022 10:30 AM

MicroShift and KubeVirt on Raspberry Pi 4 with Fedora CoreOS

Introduction

MicroShift is a research project that explores how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with CentOS 8 Stream (64 bit). In Part 6, we deployed MicroShift on the Raspberry Pi 4 with Ubuntu 20.04 (64 bit). In Part 8, we looked at the All-In-One install of MicroShift on balenaOS. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit) and further in Part 9, we looked at Virtualization with MicroShift on the Raspberry Pi 4 with Raspberry Pi OS (64 bit). In Part 10, we deployed MicroShift and KubeVirt on Fedora IoT and in Part 11 on Fedora Server. In this Part 12, we will set up and deploy MicroShift on Fedora CoreOS. We will run an object detection sample and send messages to Node Red installed on MicroShift. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift. Finally, we will install InfluxDB/Telegraf/Grafana with a dashboard that shows SenseHat sensor data and demonstrates the use of dynamic persistent volumes.

Fedora CoreOS (FCOS) is an automatically updating, minimal operating system for running containerized workloads securely and at scale. It came from the merger of CoreOS Container Linux and Fedora Atomic Host. With security as a first-class citizen, FCOS provides automatic updates and comes with SELinux hardening. Fedora CoreOS does not have a separate install disk. Instead, every instance starts from a generic disk image that is customized on first boot via Ignition. Ignition handles common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users.

Fedora CoreOS instances work with rpm-ostree, a hybrid image/package system. You do not just run “yum install” or "dnf install" on a Fedora CoreOS system; these commands are not available. The running system is never modified by package-management operations such as installations or system upgrades. Instead, a set of parallel system images (two of them by default) is maintained. The utility managing packages in this kind of architecture must wrap RPM-based package management on top of an atomic file-system management library. Every package change must be committed to the file system, and any package-management operation applies to the next image in the series. The updated image is only set up to run on the next boot after the update completes successfully.

Fedora CoreOS has a strong focus on the use of containers to install applications, and the podman tool is provided for this purpose. The base OS is composed of essential tools, and the running image is immutable: when trying to write to /usr, we get a read-only file system error. When the system boots, only certain portions (like /var) are made writable. The ostree architecture design keeps the read-only system content in the /usr directory. The /var directory is shared across all system deployments and is writable by processes, and there is an /etc directory for every system deployment. When the system is changed or upgraded, previous modifications in the /etc directory are merged with the copy in the new deployment.

Setting up the Raspberry Pi 4 with Fedora CoreOS

To run FCOS on a Raspberry Pi 4 via U-Boot, the SD card or USB disk needs to be prepared on another system and the disk then moved to the RPi4. We will create a disk image in a Fedora VM on the MacBook Pro, copy the image out of the VM to the MacBook, and write it to a MicroSDXC card. Ignition config files are written in JSON but are typically not user friendly. Configurations are therefore written in a simpler format, the Butane config, which is then converted into an Ignition config.

1. You can reuse the Fedora 35 VM from Part 1 running in VirtualBox using the Vagrantfile on your MacBook Pro if you have it handy. We do not need to install MicroShift in the VM, so the config.vm.provision section can be removed if you are creating a new VM.
git clone https://github.com/thinkahead/microshift.git
cd microshift/vagrant
vagrant up
vagrant ssh

# Inside the VM
sudo su -

2. Install the dependencies for creating a CoreOS image on the VM

dnf -y install butane coreos-installer make rpi-imager
git clone https://github.com/thinkahead/microshift.git
cd microshift/raspberry-pi/coreosbuilder

Update the “password_hash” and “ssh_authorized_keys” in the build/coreos.bu. You can generate the secure password hash with mkpasswd. The default password that I have set in the Butane config is raspberry.

podman run -ti --rm quay.io/coreos/mkpasswd --method=yescrypt

Output:

[root@microshift coreosbuilder]# podman run -ti --rm quay.io/coreos/mkpasswd --method=yescrypt
Trying to pull quay.io/coreos/mkpasswd:latest...
Getting image source signatures
Copying blob 9c6cc3463716 done
Copying blob 3827617fefe3 done
Copying config 5af75352cb done
Writing manifest to image destination
Storing signatures
Password:
$y$j9T$c/znsJE8cPVGp0XuaXsK50$fOkhsOE5YCSmaN34Ypt7PmiXJvhkjtgmdPczSaXoQZ6

The configured password will be accepted for local authentication at the console. By default, Fedora CoreOS does not allow password authentication via SSH but it can be enabled in the Butane config as shown.
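For orientation, the user-related portion of a Butane config like build/coreos.bu looks roughly as follows. This is a sketch, not the actual file: the spec version, password hash, and SSH key are placeholders, and the sshd drop-in is one way to enable SSH password authentication as described in the Fedora CoreOS documentation.

```yaml
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: core
      # Placeholder: paste the yescrypt hash produced by mkpasswd here
      password_hash: "$y$j9T$PLACEHOLDER"
      ssh_authorized_keys:
        - "ssh-ed25519 AAAA...PLACEHOLDER user@host"
storage:
  files:
    # Optional: allow password logins over SSH (disabled by default on FCOS)
    - path: /etc/ssh/sshd_config.d/20-enable-passwords.conf
      mode: 0644
      contents:
        inline: |
          PasswordAuthentication yes
```

Running butane on this file, as in step 3 below, produces the equivalent Ignition JSON.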

3. Create the ignition config json in the VM using butane

rm -rf dist;mkdir dist
cp -r build/etc dist/
butane --files-dir dist --pretty --strict build/coreos.bu > dist/coreos.ign

4. Prepare the disk with partitions using the coreos-installer within the VM

tmp_rpm_dest_path=/tmp/rpi4boot
tmp_pi_boot_path="/tmp/rpi4boot/boot/efi"
tmp_efipart_path=/tmp/fcosefipart
rm -rf $tmp_rpm_dest_path $tmp_efipart_path

fedora_release=35 # The target Fedora Release

# Grab RPMs from the Fedora Linux repositories
mkdir -p $tmp_pi_boot_path
dnf install -y --downloadonly --release=$fedora_release --forcearch=aarch64 --destdir=$tmp_rpm_dest_path  uboot-images-armv8 bcm283x-firmware bcm283x-overlays

# Extract the contents of the RPMs and copy the proper u-boot.bin for the RPi4 into place
for filename in `ls $tmp_rpm_dest_path/*.rpm`; do rpm2cpio $filename | cpio -idv -D $tmp_rpm_dest_path; done
cp $tmp_rpm_dest_path/usr/share/uboot/rpi_4/u-boot.bin $tmp_pi_boot_path/rpi4-u-boot.bin

# Create raw image
dd if=/dev/zero of=/home/my-coreos.img bs=1024 count=4194304
losetup -fP /home/my-coreos.img
losetup --list

# Run coreos-installer to install to the target disk
coreos-installer install -a aarch64 -i dist/coreos.ign /dev/loop0
lsblk /dev/loop0 -J -oLABEL,PATH
efi_part=`lsblk /dev/loop0 -J -oLABEL,PATH | jq -r '.blockdevices[] | select(.label == "EFI-SYSTEM")'.path`
echo $efi_part
mkdir -p $tmp_efipart_path
mount $efi_part $tmp_efipart_path
unalias cp
cp -r $tmp_pi_boot_path/* $tmp_efipart_path
ls $tmp_efipart_path/start.elf
umount $efi_part

# Detach loop device
losetup -a
losetup -d /dev/loop0

exit # from root to vagrant user
exit # from VM
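The jq filter used in step 4 to locate the EFI partition can be checked offline against sample lsblk JSON; the partition layout below is an assumption for illustration, not output from a real device.

```shell
# Sample `lsblk -J -oLABEL,PATH` output after coreos-installer has written the disk
json='{"blockdevices":[{"label":null,"path":"/dev/loop0p1"},{"label":"EFI-SYSTEM","path":"/dev/loop0p2"},{"label":"boot","path":"/dev/loop0p3"},{"label":"root","path":"/dev/loop0p4"}]}'
# Same filter as used against the real loop device: select the EFI-SYSTEM partition
efi_part=$(echo "$json" | jq -r '.blockdevices[] | select(.label == "EFI-SYSTEM").path')
echo "$efi_part"   # /dev/loop0p2
```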

5. Copy the my-coreos.img to the MacBook from the VM.

# On the Macbook Pro
vagrant plugin install vagrant-scp # If you do not already have it
vagrant scp :/home/my-coreos.img .

Write the image to the MicroSDXC card using Balena Etcher. Insert the MicroSDXC card into the Raspberry Pi 4 and power on. Wi-Fi does not work out of the box on CoreOS, so you must use a wired Ethernet connection.

6. Find the DHCP IP address assigned to the Raspberry Pi 4 using the following command for your subnet on the MacBook Pro, and log in using the “core” user and the “raspberry” password set earlier.

sudo nmap -sn 192.168.1.0/24
ssh core@$ipaddress
sudo su -

Output:

$ sudo nmap -sn 192.168.1.0/24
Password:
Nmap scan report for console-np-service-kube-system.cluster.local (192.168.1.209)
Host is up (0.0051s latency).
MAC Address: E4:5F:01:2E:D8:95 (Raspberry Pi Trading)

7. Check the release and cgroups

cat /etc/redhat-release
uname -a
mount | grep cgroup
cat /proc/cgroups | column -t # Check that memory and cpuset are present

Output:

[core@coreos ~]$ cat /etc/redhat-release
Fedora release 35 (Thirty Five)
[core@coreos ~]$ uname -a
Linux coreos 5.16.16-200.fc35.aarch64 #1 SMP Sat Mar 19 13:35:51 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
[core@coreos ~]$ mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)
[core@coreos ~]$ cat /proc/cgroups | column -t
#subsys_name  hierarchy  num_cgroups  enabled
cpuset        0          75           1
cpu           0          75           1
cpuacct       0          75           1
blkio         0          75           1
memory        0          75           1
devices       0          75           1
freezer       0          75           1
net_cls       0          75           1
perf_event    0          75           1
net_prio      0          75           1
pids          0          75           1
misc          0          75           1

Install the dependencies for MicroShift and SenseHat

We already configured the required RPM repositories /etc/yum.repos.d/fedora-modular.repo, /etc/yum.repos.d/fedora-updates-modular.repo, and /etc/yum.repos.d/group_redhat-et-microshift-fedora-35.repo in coreos.bu.

Enable cri-o and install microshift

rpm-ostree ex module enable cri-o:1.21
rpm-ostree install firewalld i2c-tools cri-o cri-tools microshift

Install dependencies to build RTIMULib

rpm-ostree install git zlib-devel libjpeg-devel gcc gcc-c++ python3-devel python3-pip cmake kernel-devel kernel-headers ncurses-devel

Set up libvirtd on the host and validate qemu

rpm-ostree install libvirt-client libvirt-nss qemu-system-aarch64 virt-manager virt-install virt-viewer libguestfs-tools dmidecode
systemctl reboot

ssh core@$ipaddress # Login to the Raspberry Pi 4
sudo su -
virt-host-validate qemu
# Works with nftables on Fedora IoT and Fedora CoreOS
# vi /etc/firewalld/firewalld.conf # FirewallBackend=iptables
systemctl enable --now libvirtd

Output:

[root@coreos microshift]# virt-host-validate qemu
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (Unknown if this platform has IOMMU support)
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)
[root@coreos microshift]# systemctl enable --now libvirtd
Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /usr/lib/systemd/system/libvirtd.service.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd.socket → /usr/lib/systemd/system/libvirtd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd-ro.socket → /usr/lib/systemd/system/libvirtd-ro.socket.

Install sensehat - The Sense HAT is an add-on board for the Raspberry Pi. It has an 8×8 RGB LED matrix, a five-button joystick, and the following sensors: an Inertial Measurement Unit (accelerometer, gyroscope, magnetometer), temperature, barometric pressure, and humidity. If you have the Sense HAT attached, install the libraries. We will install the default libraries first, then overwrite the /usr/local/lib/python3.10/site-packages/sense_hat-2.2.0-py3.10.egg/sense_hat/sense_hat.py to use smbus after installing RTIMULib in a few steps below.

i2cget -y 1 0x6A 0x75
i2cget -y 1 0x5f 0xf
i2cdetect -y 1
lsmod | grep st_
pip3 install Cython Pillow numpy sense_hat smbus

Output:

[root@coreos microshift]# i2cget -y 1 0x6A 0x75
0x57
[root@coreos microshift]# i2cget -y 1 0x5f 0xf
0xbc
[root@coreos microshift]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --
[root@coreos microshift]# lsmod | grep st_
st_magn_spi            16384  0
st_pressure_spi        16384  0
st_sensors_spi         16384  2 st_pressure_spi,st_magn_spi
regmap_spi             16384  1 st_sensors_spi
st_magn_i2c            16384  0
st_pressure_i2c        16384  0
st_magn                20480  2 st_magn_i2c,st_magn_spi
st_pressure            16384  2 st_pressure_i2c,st_pressure_spi
st_sensors_i2c         16384  2 st_pressure_i2c,st_magn_i2c
st_sensors             28672  6 st_pressure,st_pressure_i2c,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi
industrialio_triggered_buffer    16384  2 st_pressure,st_magn
industrialio           98304  9 st_pressure,industrialio_triggered_buffer,st_sensors,st_pressure_i2c,kfifo_buf,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi

Blacklist the modules to remove the UU entries in the i2cdetect output

cat << EOF > /etc/modprobe.d/blacklist-industialio.conf
blacklist st_magn_spi
blacklist st_pressure_spi
blacklist st_sensors_spi
blacklist st_pressure_i2c
blacklist st_magn_i2c
blacklist st_pressure
blacklist st_magn
blacklist st_sensors_i2c
blacklist st_sensors
blacklist industrialio_triggered_buffer
blacklist industrialio
EOF

reboot

Check the Sense Hat with i2cdetect after the reboot

ssh core@$ipaddress
sudo su -
i2cdetect -y 1

Output:

[root@coreos ~]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Install RTIMULib

git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11
cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10

Replace the old sense_hat.py with the new file that uses SMBus

git clone https://github.com/thinkahead/microshift.git
cd microshift
cd raspberry-pi/sensehat-fedora-iot

# Update the python package to use the i2cbus
cp -f sense_hat.py.new /usr/local/lib/python3.10/site-packages/sense_hat/sense_hat.py

Test the SenseHat samples for the Sense Hat's LED matrix and sensors.

# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt

# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt

# Show two digits for multiple numbers
python3 digits.py

# First time you run the temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py 

# Use the new get_state method from sense_hat.py
python3 joystick.py # U=Up D=Down L=Left R=Right M=Press

# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py

# Find Magnetic North
python3 compass.py

Test the USB camera - Install the latest pygame. Note that pygame 1.9.6 throws “SystemError: set_controls() method: bad call flags”, so you need to upgrade pygame to 2.1.0.

pip3 install pygame --upgrade
python3 testcam.py # It will create a file 101.bmp

Install the oc and kubectl client

ARCH=arm64
cd /tmp
export OCP_VERSION=4.9.11 && \
    curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf oc.tar.gz && \
    rm -f oc.tar.gz && \
    install -t /usr/local/bin {kubectl,oc} && \
    rm -f {README.md,kubectl,oc}

Start Microshift

ls /opt/cni/bin/ # empty
ls /usr/libexec/cni # cni plugins

The microshift service /usr/lib/systemd/system/microshift.service references the microshift binary in the /usr/bin directory.

[root@coreos /]# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

The /etc/systemd/system/microshift.service is used for the containerized version; we added it when we created the CoreOS image in coreos.bu. We will use it later. Move it out of the way for now.

cat /etc/systemd/system/microshift.service 
mv /etc/systemd/system/microshift.service /root/microshift/.

Output:

[root@coreos /]# cat /etc/systemd/system/microshift.service
[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target crio.service
After=network-online.target crio.service
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
  --cidfile=%t/%n.ctr-id \
  --cgroups=no-conmon \
  --rm \
  --replace \
  --sdnotify=container \
  --label io.containers.autoupdate=registry \
  --network=host \
  --privileged \
  -d \
  --name microshift \
  -v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
  -v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
  -v microshift-data:/var/lib/microshift:rw,rshared \
  -v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
  -v /var/log:/var/log \
  -v /etc:/etc quay.io/microshift/microshift:latest
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target

Since we made changes to microshift.service, we need to run "systemctl daemon-reload" to pick up the changed configuration from the filesystem and regenerate the dependency trees.

systemctl daemon-reload
systemctl enable --now crio microshift

Configure firewalld

systemctl enable firewalld --now
firewall-cmd --zone=public --permanent --add-port=6443/tcp
firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp
firewall-cmd --zone=public --permanent --add-port=2379-2380/tcp
firewall-cmd --zone=public --add-masquerade --permanent
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=10251/tcp --permanent
firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
firewall-cmd --reload

Check the microshift and crio logs

journalctl -u microshift -f
journalctl -u crio -f

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

# Run microshift binary directly on Raspberry Pi
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig 
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

# The following is used when running microshift containerized in podman
# export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
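The two KUBECONFIG paths above can be folded into a small helper that picks whichever exists. This is a convenience sketch (the function name is an assumption, not from the article); the paths are the two used above for the binary and containerized installs.

```shell
# Pick the kubeconfig path depending on whether microshift runs as a binary
# or containerized in podman. Optional first arg: alternate root (for testing).
kubeconfig_path() {
  root="${1%/}"
  if [ -f "$root/var/lib/microshift/resources/kubeadmin/kubeconfig" ]; then
    # Binary install: kubeconfig lives under /var/lib/microshift
    echo "$root/var/lib/microshift/resources/kubeadmin/kubeconfig"
  else
    # Containerized install: kubeconfig lives in the podman volume
    echo "$root/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig"
  fi
}
export KUBECONFIG="$(kubeconfig_path /)"
```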

Samples to run on MicroShift

We will run a couple of samples that show the use of the SenseHat, set up KubeVirt, and run a Fedora Virtual Machine. You can also run the samples from Part 10 and Part 11.

1. Node Red live data dashboard with SenseHat sensor charts

We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard, and view the gauges for temperature/pressure/humidity data from the SenseHat on the dashboard.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image “karve/nodered:arm64”

cd docker-custom/
./docker-debianonfedora.sh
podman push karve/nodered-fedora:arm64
cd ..

Deploy Node Red with persistent volume for /data within the node red container

mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml
oc get routes
oc logs deployment/nodered-deployment -f

If you get errors during module installation in the logs above because of name-resolution failures when accessing https://registry.npmjs.org/ from the container, it is probably because you did not install/set up firewalld. You can add the following printf to nodered2.yaml and apply it so that external names can be resolved.

        args:
          - cd /data;
            printf "search nodered.svc.cluster.local svc.cluster.local cluster.local\nnameserver 8.8.8.8\noptions ndots:5\n" > /etc/resolv.conf;
            echo Installing Nodes;

Add the ipaddress of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your Laptop and browse to http://nodered-svc-nodered.cluster.local/

The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, and node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 or Flow 2 has already been imported from the nodered sample. This import into Node Red can also be done manually under “Import Nodes”, followed by clicking “Deploy”. The node-red-node-pi-sense-hat module requires a change in sensehat.py in order to use the sense_hat.py.new that uses smbus and the new function for the joystick. This change is accomplished by overwriting with the modified sensehat.py in Dockerfile.debianonfedora (docker.io/karve/nodered-fedora:arm64, built using docker-debianonfedora.sh), which is then copied from the /tmp directory to the correct volume when the pod starts in nodered2.yaml.

Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts.

SenseHat Sensors link to dashboard


You will see the Home tab by default. You can see the state of the joystick: Up, Down, Left, Right, or Pressed.

Temperature, Humidity and Pressure gauges with Joystick direction


Click on the Hamburger Menu (3 lines) and select PiSenseHAT.

Temperature, Humidity and Pressure gauges with Temperature Graph


If you selected the Flow 1, you could click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED.
You can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:

cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml -n nodered

Output:

[root@coreos nodered]# oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml -n nodered
warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "noderedpv" deleted
persistentvolumeclaim "noderedpvc" deleted
deployment.apps "nodered-deployment" deleted
service "nodered-svc" deleted
route.route.openshift.io "nodered-route" deleted

2. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 1.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection that, using a pod in MicroShift, sends pictures and WebSocket chat messages to Node Red when a person is detected.

podman build -f Dockerfile.fedora -t docker.io/karve/object-detection-raspberrypi4-fedora .
podman push docker.io/karve/object-detection-raspberrypi4-fedora:latest

Update the env WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection-fedora.yaml to point to your raspberry pi 4 ip address.

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.209
oc project default
oc apply -f object-detection-fedora.yaml

We will see pictures being sent to Node Red when a person is detected, and chat messages as follows at http://nodered-svc-nodered.cluster.local/chat

raspberrypi4: 1650358836: Temperature: 29.722915649414062 C [Detection(bounding_box=Rect(left=-7, top=184, right=391, bottom=466), categories=[Category(label='person', score=0.3515625, index=0)]), Detection(bounding_box=Rect(left=9, top=362, right=413, bottom=477), categories=[Category(label='person', score=0.33203125, index=0)])]
raspberrypi4: 1650358836: Temperature: 29.733333587646484 C [Detection(bounding_box=Rect(left=6, top=365, right=416, bottom=474), categories=[Category(label='person', score=0.39453125, index=0)]), Detection(bounding_box=Rect(left=-3, top=194, right=401, bottom=467), categories=[Category(label='person', score=0.3125, index=0)])]
raspberrypi4: 1650358842: Temperature: 29.735416412353516 C [Detection(bounding_box=Rect(left=3, top=363, right=406, bottom=476), categories=[Category(label='person', score=0.3515625, index=0)]), Detection(bounding_box=Rect(left=-7, top=173, right=391, bottom=466), categories=[Category(label='person', score=0.3125, index=0)])]

When we are done testing, we can delete the deployment

oc delete -f object-detection-fedora.yaml

3. Running a Virtual Machine Instance on MicroShift

We first deploy the KubeVirt Operator. You may need to use a different version if the LATEST version gives an error in the virt-handler when starting the VMI.

LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST
LATEST=20220331 # If the latest version does not work
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt

Output:

[root@coreos ~]# oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
DeployingDeployingDeployingDeployingDeployedDeployed^C
[root@coreos ~]# oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
kubevirt.kubevirt.io/kubevirt condition met
[root@coreos ~]# oc get pods -n kubevirt
NAME                               READY   STATUS    RESTARTS   AGE
virt-api-fb54f99cc-cnw8p           1/1     Running   0          6m47s
virt-api-fb54f99cc-sjb8t           1/1     Running   0          6m47s
virt-controller-8696c4f698-hdsbs   1/1     Running   0          6m12s
virt-controller-8696c4f698-kxf7r   1/1     Running   0          6m12s
virt-handler-j5pgb                 1/1     Running   0          6m12s
virt-operator-5b79f55b84-q4bn7     1/1     Running   0          8m18s
virt-operator-5b79f55b84-xbgct     1/1     Running   0          8m18s

We can build the OKD Web Console (codename: “bridge”) from source as mentioned in Part 9. Here we will run “bridge” as a container image within MicroShift.

cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'

Replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value https://192.168.1.209:6443 with your IP address, and the secretRef token with the console-token-* secret name from above for BRIDGE_K8S_AUTH_BEARER_TOKEN in okd-web-console-install.yaml. Then apply/create the okd-web-console-install.yaml.
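The relevant portion of okd-web-console-install.yaml is along these lines. This is a sketch, not the exact file: the image tag, the secret name, and the skip-verify setting are placeholders/assumptions you must substitute with your own values.

```yaml
    spec:
      containers:
        - name: console-app
          image: quay.io/okd/console:4.9   # assumption: an arm64-capable console image
          env:
            - name: BRIDGE_K8S_MODE
              value: off-cluster
            - name: BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT
              value: https://192.168.1.209:6443   # replace with your Raspberry Pi IP
            - name: BRIDGE_K8S_MODE_OFF_CLUSTER_SKIP_VERIFY_TLS
              value: "true"
            - name: BRIDGE_K8S_AUTH
              value: bearer-token
            - name: BRIDGE_K8S_AUTH_BEARER_TOKEN
              valueFrom:
                secretKeyRef:
                  name: console-token-xxxxx   # placeholder: the console-token-* secret from above
                  key: token
```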

oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc logs deployment/console-deployment -f -n kube-system
oc get routes -n kube-system

Output:

[root@coreos console]# oc logs deployment/console-deployment -f -n kube-system
Error from server (BadRequest): container "console-app" in pod "console-deployment-dd97bbbdc-crkxt" is waiting to start: ContainerCreating
[root@coreos console]# oc logs deployment/console-deployment -f -n kube-system
W0418 14:23:19.300352       1 main.go:212] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!
W0418 14:23:19.300938       1 main.go:345] cookies are not secure because base-address is not https!
W0418 14:23:19.301039       1 main.go:650] running with AUTHENTICATION DISABLED!
I0418 14:23:19.305012       1 main.go:766] Binding to 0.0.0.0:9000...
I0418 14:23:19.305123       1 main.go:771] not using TLS
[root@coreos console]# oc get routes -n kube-system
NAME                 HOST/PORT                                      PATH   SERVICES             PORT   TERMINATION   WILDCARD
console-np-service   console-np-service-kube-system.cluster.local          console-np-service   http                 None

Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/

We can optionally preload the Fedora image into CRI-O

crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.
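A minimal VirtualMachineInstance spec along the lines of vmi-fedora.yaml is sketched below. This is not the exact file from the repository: the memory request and the cloud-init user data are assumptions, though the containerDisk image matches the one preloaded above and the "fedora" password matches the login used later.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-fedora
spec:
  domain:
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
        - name: cloudinitdisk
          disk:
            bus: virtio
    resources:
      requests:
        memory: 1Gi   # assumption
  volumes:
    - name: containerdisk
      containerDisk:
        image: quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
    - name: cloudinitdisk
      cloudInitNoCloud:
        userData: |
          #cloud-config
          password: fedora
          chpasswd: { expire: False }
```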

cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods

The output for the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”.

Output:

NAME                                            AGE   PHASE        IP    NODENAME   READY
virtualmachineinstance.kubevirt.io/vmi-fedora   21s   Scheduling                    False

NAME                                      READY   STATUS     RESTARTS   AGE
pod/virt-launcher-vmi-fedora-7kxmp        0/2     Init:0/2   0          21s

Output:

NAME                                            AGE   PHASE       IP    NODENAME READY
virtualmachineinstance.kubevirt.io/vmi-fedora   70s   Scheduled         coreos   False

NAME                                      READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vmi-fedora-7kxmp        2/2     Running   0          70s

Output:

NAME                                            AGE   PHASE     IP           NODENAME READY
virtualmachineinstance.kubevirt.io/vmi-fedora   96s   Running   10.42.0.48   coreos   True

NAME                                      READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vmi-fedora-7kxmp        2/2     Running   0          96s

Note down the IP address of the vmi-fedora Virtual Machine Instance. We can connect to the VMI directly from the Raspberry Pi 4, using fedora as the password:

[root@coreos vmi]# oc get vmi
NAME         AGE   PHASE     IP           NODENAME   READY
vmi-fedora   14m   Running   10.42.0.48   coreos     True 
[root@coreos vmi]# ip route
default via 192.168.1.1 dev eth0 proto dhcp metric 100
10.42.0.0/24 dev cni0 proto kernel scope link src 10.42.0.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.209 metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
[root@coreos vmi]# ssh fedora@10.42.0.48
fedora@10.42.0.48's password:
[fedora@vmi-fedora ~]$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=116 time=4.02 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=116 time=3.95 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.949/3.983/4.018/0.034 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.42.0.48 closed.
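Instead of copying the address off the table by hand, the IP column can be pulled out with a small helper (a sketch; it assumes the default oc get vmi column order of NAME AGE PHASE IP NODENAME READY):

```shell
# Print the IP column for a named VMI from 'oc get vmi' table output
vmi_ip() {
  awk -v name="$1" '$1 == name { print $4 }'
}

# Against a live cluster: oc get vmi | vmi_ip vmi-fedora
# Demonstrated here on a captured copy of the table above:
printf 'NAME         AGE   PHASE     IP           NODENAME   READY\nvmi-fedora   14m   Running   10.42.0.48   coreos     True\n' | vmi_ip vmi-fedora
```

A jsonpath query such as oc get vmi vmi-fedora -o jsonpath='{.status.interfaces[0].ipAddress}' avoids the table parsing entirely.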

Alternatively, we can create a pod that runs an ssh client and connect to the Fedora VM from that pod:

oc run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client

or

oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
#oc attach sshclient -c sshclient -i -t

Then, ssh to the Fedora VMI from this sshclient container.

Output:

[root@coreos vmi]# oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ssh fedora@10.42.0.48
The authenticity of host '10.42.0.48 (10.42.0.48)' can't be established.
ED25519 key fingerprint is SHA256:s5/o6L3hLugc+jH2+L0mU4VPEIGLdcFL0J3xrd6Jin8.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.48' (ED25519) to the list of known hosts.
fedora@10.42.0.48's password:
Last login: Mon Apr 18 22:09:04 2022 from 10.42.0.1
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.42.0.48 closed.
/ # exit
Session ended, resume using 'oc attach sshclient -c sshclient -i -t' command when the pod is running

When done, you can delete the VMI

[root@coreos vmi]# oc delete -f vmi-fedora.yaml
virtualmachineinstance.kubevirt.io "vmi-fedora" deleted

4. InfluxDB/Telegraf/Grafana with dynamic persistent volumes

The source code is available for this influxdb sample in github.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd microshift/raspberry-pi/influxdb

If you want to run all the steps with a single command, execute the runall-fedora-dynamic.sh script:

./runall-fedora-dynamic.sh

We create and push the “measure-fedora:latest” image using the Dockerfile. The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.

[root@coreos influxdb]# ./runall-fedora-dynamic.sh
Now using project "influxdb" on server "https://127.0.0.1:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

configmap/influxdb-config created
secret/influxdb-secrets created
persistentvolumeclaim/influxdb-data created
deployment.apps/influxdb-deployment created
service/influxdb-service created
deployment.apps/influxdb-deployment condition met
deployment.apps/measure-deployment created
deployment.apps/measure-deployment condition met
configmap/telegraf-config created
secret/telegraf-secrets created
deployment.apps/telegraf-deployment created
deployment.apps/telegraf-deployment condition met
persistentvolumeclaim/grafana-data created
deployment.apps/grafana created
service/grafana-service created
deployment.apps/grafana condition met
route.route.openshift.io/grafana-service exposed

[root@coreos influxdb]# oc get routes
NAME              HOST/PORT                                PATH   SERVICES          PORT   TERMINATION   WILDCARD
grafana-service   grafana-service-influxdb.cluster.local          grafana-service   3000                 None

This script allocates dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The kubevirt.io/provisionOnNode annotation and the storageClassName are required for dynamic provisioning. Replace coreos with your node name.

  annotations:
    kubevirt.io/provisionOnNode: coreos
spec:
  storageClassName: kubevirt-hostpath-provisioner 
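Putting those fields together, a complete claim looks roughly like the following (an illustrative sketch; the name and requested size are examples, and coreos must again be replaced by your node name):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-data
  annotations:
    kubevirt.io/provisionOnNode: coreos
spec:
  storageClassName: kubevirt-hostpath-provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```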

Output:

[root@coreos influxdb]# oc get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS                    REASON   AGE
persistentvolume/pvc-3b652e33-7978-4eab-9601-430df239bcd1   118Gi      RWO            Delete           Bound    influxdb/grafana-data    kubevirt-hostpath-provisioner            23m
persistentvolume/pvc-70f6673d-32e9-4194-84ea-56bf4d218b72   118Gi      RWO            Delete           Bound    influxdb/influxdb-data   kubevirt-hostpath-provisioner            24m

NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
persistentvolumeclaim/grafana-data    Bound    pvc-3b652e33-7978-4eab-9601-430df239bcd1   118Gi      RWO            kubevirt-hostpath-provisioner   23m
persistentvolumeclaim/influxdb-data   Bound    pvc-70f6673d-32e9-4194-84ea-56bf4d218b72   118Gi      RWO            kubevirt-hostpath-provisioner   24m

[root@coreos influxdb]# oc get route grafana-service -o jsonpath --template="http://{.spec.host}/login{'\n'}"
http://grafana-service-influxdb.cluster.local/login

Add "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and log in to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will be prompted to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage).

Open the Analysis Server dashboard to display monitoring information for MicroShift.

Open the Balena Sense dashboard to show the temperature, pressure and humidity from SenseHat.

Finally, after you are done working with this sample, you can run the deleteall-fedora-dynamic.sh

./deleteall-fedora-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes because their reclaim policy is Delete.

Cleanup MicroShift

Use the ~/microshift/hack/cleanup.sh script to remove the pods and images.

Containerized MicroShift on Fedora CoreOS

We can run MicroShift within containers in two ways:

  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, CRI-O Systemd service runs directly on the host and data is stored at /var/lib/microshift and /var/lib/kubelet on the host VM.
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only.


1. MicroShift Containerized

We can use the microshift.service we copied previously to /root/microshift. This service runs MicroShift in a podman container and uses a podman volume for its data.
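For reference, such a unit is essentially a podman wrapper; a minimal illustrative sketch (not the exact unit file from the repository) might look like:

```ini
[Unit]
Description=MicroShift Containerized
Wants=network-online.target
After=network-online.target

[Service]
ExecStartPre=-/usr/bin/podman volume create microshift-data
ExecStart=/usr/bin/podman run --rm --name microshift --privileged \
    --network=host -v microshift-data:/var/lib \
    quay.io/microshift/microshift:latest
ExecStop=/usr/bin/podman stop -t 10 microshift

[Install]
WantedBy=multi-user.target
```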

cd /root/microshift
cp microshift.service /etc/systemd/system/microshift.service
systemctl daemon-reload
systemctl start microshift
podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located

Output:

[root@coreos microshift]# podman volume inspect microshift-data
[
    {
        "Name": "microshift-data",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/microshift-data/_data",
        "CreatedAt": "2022-04-18T19:17:17.367656467Z",
        "Labels": {},
        "Scope": "local",
        "Options": {}
    }
] 
[root@coreos microshift]# podman ps
CONTAINER ID  IMAGE                                 COMMAND     CREATED        STATUS            PORTS       NAMES
ccd0cfe8ab47  quay.io/microshift/microshift:latest  run         6 minutes ago  Up 6 minutes ago              microshift

Watch the pod status as MicroShift starts:

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Now we can run the samples as before. When done with running the samples, we can stop microshift

systemctl stop microshift

Instead of using the microshift.service, we can alternatively run the microshift image directly with podman. We can use the prebuilt image without the podman volume. The containers run within CRI-O on the host.

IMAGE=quay.io/microshift/microshift:4.8.0-0.microshift-2022-03-08-195111-linux-arm64
podman pull $IMAGE

podman run --rm --ipc=host --network=host --privileged -d --name microshift \
  -v /var/run:/var/run -v /sys:/sys:ro -v /var/lib:/var/lib:rw,rshared \
  -v /lib/modules:/lib/modules -v /etc:/etc -v /run/containers:/run/containers \
  -v /var/log:/var/log \
  -e KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig $IMAGE
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "podman ps;oc get nodes;oc get pods -A;crictl pods"

Now, we can run the samples shown earlier on containerized MicroShift. After we are done, we can stop the microshift container; the --rm flag we used in the podman run deletes the container when it stops.

podman stop microshift

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.

2. MicroShift Containerized All-In-One

Let’s stop crio on the host; we will be creating an all-in-one container that runs crio within the container.

systemctl stop crio
systemctl disable crio

We will run the all-in-one microshift in podman using prebuilt images.

setsebool -P container_manage_cgroup true 
podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift.example.com --privileged \
  -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes \
  -p 6443:6443 -p 8080:8080 -p 80:80 \
  quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-03-11-124751-linux-nft-arm64

If the “sudo setsebool -P container_manage_cgroup true” does not work, you will need to mount /sys/fs/cgroup into the container using -v /sys/fs/cgroup:/sys/fs/cgroup:ro. This volume mounts /sys/fs/cgroup into the container as read-only, but the subdirectory mount points are mounted read/write.

# podman run -d --rm --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-03-08-195111-linux-nft-arm64

We can inspect the microshift-data volume to find the path

[root@coreos hack]# podman volume inspect microshift-data
[
    {
        "Name": "microshift-data",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/microshift-data/_data",
        "CreatedAt": "2022-04-18T20:03:03.122422149Z",
        "Labels": {},
        "Scope": "local",
        "Options": {}
    }
]

On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman exec -it microshift crictl ps -a"

Since the crio service is stopped on the Raspberry Pi, crictl commands will not work directly on the Pi; they will, however, work within the microshift container in podman.
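To keep muscle memory working on the host, the crictl calls can be wrapped in a small shell function (a convenience sketch; it simply forwards every invocation to podman exec):

```shell
# Forward crictl invocations into the all-in-one microshift container
crictl() {
  podman exec -it microshift crictl "$@"
}
# e.g. crictl ps -a   # now runs inside the container
```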

Now we can run the samples. To run the Virtual Machine examples in the all-in-one MicroShift, we need to execute the following mount command in the microshift container:

mount --make-shared /

This prevents the error:

Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount

We may also preload the virtual machine images using "crictl pull".

Output:

[root@coreos hack]# podman exec -it microshift bash
[root@microshift /]# mount --make-shared /
[root@microshift /]# crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
Image is up to date for quay.io/kubevirt/fedora-cloud-container-disk-demo@sha256:4de55b9ed3a405cdc74e763f6a7c05fe4203e34a8720865555db8b67c36d604b
[root@microshift /]# exit
exit

Copy the virtctl arm64 binary from the prebuilt container image to /usr/local/bin on the Raspberry Pi 4:

id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id

We can apply the vmi-fedora.yaml and log in to the VMI:

[root@coreos vmi]# oc get vmi
NAME         AGE     PHASE       IP           NODENAME                 READY
vmi-fedora   7m38s   Succeeded   10.42.0.15   microshift.example.com   False 

[root@coreos vmi]# ip route
default via 192.168.1.1 dev eth0 proto dhcp metric 100
10.42.0.0/24 dev cni0 proto kernel scope link src 10.42.0.1 linkdown
10.88.0.0/16 dev cni-podman0 proto kernel scope link src 10.88.0.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.209 metric 100
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
[root@coreos vmi]# podman exec -it microshift bash
[root@microshift /]# ip route
default via 10.88.0.1 dev eth0
10.42.0.0/24 dev cni0 proto kernel scope link src 10.42.0.1
10.88.0.0/16 dev eth0 proto kernel scope link src 10.88.0.2
[root@microshift /]# exit
exit

[root@coreos vmi]# virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]

vmi-fedora login: fedora
Password:
[fedora@vmi-fedora ~]$ ping google.com
PING google.com (142.250.80.78) 56(84) bytes of data.
64 bytes from lga34s35-in-f14.1e100.net (142.250.80.78): icmp_seq=1 ttl=115 time=5.26 ms
64 bytes from lga34s35-in-f14.1e100.net (142.250.80.78): icmp_seq=2 ttl=115 time=4.79 ms
64 bytes from lga34s35-in-f14.1e100.net (142.250.80.78): icmp_seq=3 ttl=115 time=5.21 ms

--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 4.789/5.086/5.263/0.211 ms
[fedora@vmi-fedora ~]$ ^D
[root@coreos vmi]#

Finally, delete the microshift container and the microshift-data volume:

podman rm -f microshift && podman volume rm microshift-data

Problems

1. OSError: Cannot detect RPi-Sense FB device

[root@microshift ~]# python3 sparkles.py
Traceback (most recent call last):
  File "sparkles.py", line 4, in <module>
    sense = SenseHat()
  File "/usr/local/lib/python3.6/site-packages/sense_hat/sense_hat.py", line 39, in __init__
    raise OSError('Cannot detect %s device' % self.SENSE_HAT_FB_NAME)
OSError: Cannot detect RPi-Sense FB device

To solve this, use the new sense_hat.py that uses smbus.

2. CrashloopBackoff: dns-default and service-ca

The following patch may be required if the dns-default pod in the openshift-dns namespace keeps restarting.

oc patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","resources": {"requests": {"cpu": "80m","memory": "90Mi"}}}]}}}}'

You may also need to patch the service-ca deployment if it keeps restarting:

oc patch deployments/service-ca -n openshift-service-ca -p '{"spec": {"template": {"spec": {"containers": [{"name": "service-ca-controller","args": ["-v=4"]}]}}}}'

Exploring the CoreOS

The OSTree-based system is still composed from RPMs:

[root@coreos vmi]# rpm -q ignition kernel moby-engine podman systemd rpm-ostree zincati
ignition-2.13.0-5.fc35.aarch64
kernel-5.16.16-200.fc35.aarch64
moby-engine-20.10.12-1.fc35.aarch64
podman-3.4.4-1.fc35.aarch64
systemd-249.9-1.fc35.aarch64
rpm-ostree-2022.5-1.fc35.aarch64
zincati-0.0.24-1.fc35.aarch64

Inspect the current revision of Fedora CoreOS. The zincati service drives rpm-ostreed with automatic updates.

[root@coreos vmi]# rpm-ostree status
State: idle
AutomaticUpdatesDriver: Zincati
  DriverState: active; periodically polling for updates (last checked Mon 2022-04-18 22:32:03 UTC)
Deployments:
* fedora:fedora/aarch64/coreos/stable
                   Version: 35.20220327.3.0 (2022-04-11T21:20:16Z)
                BaseCommit: fb414c48acda2ade90a322a43b9e324b883e734bac8b00a258ef0b90f756799e
              GPGSignature: Valid signature by 787EA6AE1147EEE56C40B30CDB4639719867C58F
           LayeredPackages: 'gcc-c++' cmake cri-o cri-tools dmidecode firewalld gcc git i2c-tools kernel-devel kernel-headers libguestfs-tools libjpeg-devel
                            libvirt-client libvirt-nss microshift ncurses-devel python3-devel python3-pip qemu-system-aarch64 virt-install virt-manager virt-viewer
                            zlib-devel
            EnabledModules: cri-o:1.21 

The Ignition logs can be viewed to see the password being set and the ssh key being added for the core user:

[root@coreos vmi]# journalctl -t ignition

Conclusion

In this Part 12, we saw multiple options to run MicroShift on the Raspberry Pi 4 with Fedora CoreOS. We ran samples that used the Sense Hat and USB camera. We saw an object detection sample that sent pictures and web socket messages to Node Red when a person was detected. We installed the OKD Web Console and saw how to manage a Virtual Machine Instance using KubeVirt on MicroShift. Finally, we used dynamic persistent volumes to install InfluxDB/Telegraf/Grafana with a dashboard to show SenseHat sensor data. In Part 13, we will run MicroShift on Ubuntu Server 22.04 (Jammy Jellyfish).

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift and KubeVirt on ARM devices and if you would like to see something covered in more detail.

#MicroShift #OpenShift #containers #crio #Edge #raspberry-pi #fedora-coreos
