
MicroShift – Part 17: Raspberry Pi 4 with AlmaLinux 8.5

By Alexei Karve posted Sun May 22, 2022 10:01 AM

  

MicroShift and KubeVirt on Raspberry Pi 4 with AlmaLinux 8.5 (Arctic Sphynx)

Introduction

MicroShift is a research project that explores how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit), and in Part 9 we looked at Virtualization with MicroShift on the Raspberry Pi 4 with Raspberry Pi OS (64 bit). In Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with CentOS 8 Stream (64 bit). In Part 6, we deployed MicroShift on the Raspberry Pi 4 with Ubuntu 20.04 (64 bit) and in Part 13 with Ubuntu 22.04 (64 bit). In Part 8, we looked at the All-In-One install of MicroShift on balenaOS. In Part 10, Part 11, and Part 12, we deployed MicroShift and KubeVirt on Fedora IoT, Fedora Server, and Fedora CoreOS respectively, in Part 14 on Rocky Linux, in Part 15 on openSUSE, and in Part 16 on Oracle Linux. In this Part 17, we will work with MicroShift on AlmaLinux 8.5. We will run an object detection sample and send messages to Node Red installed on MicroShift. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift. We will also run sample notebooks for license plate detection and object detection.

AlmaLinux is a free and open-source Linux distribution, created originally by CloudLinux to provide a community-supported, production-grade enterprise operating system that is binary-compatible with Red Hat Enterprise Linux. Both AlmaLinux and Rocky Linux emerged in response to Red Hat’s December 8, 2020 announcement that it would discontinue CentOS Linux, the rebuild of RHEL releases, and instead shift focus to CentOS Stream, which tracks just ahead of the current RHEL release.

Setting up the Raspberry Pi 4 with AlmaLinux 8.5

Run the following steps to download the AlmaLinux image and set up the Raspberry Pi 4.

1. Download the latest AlmaLinux 8.5 64-bit Arm (aarch64) image for use with the RPi 4 from http://repo.almalinux.org/almalinux/8/raspberrypi/images/

2. Write the image to a microSDXC card using balenaEtcher or the Raspberry Pi Imager

3. Optionally, connect a keyboard and monitor to the Raspberry Pi 4

4. Insert the microSDXC card into the Raspberry Pi 4 and power it on

5. Find the Ethernet DHCP IP address of your Raspberry Pi 4 by running nmap on your MacBook with your subnet

$ sudo nmap -sn 192.168.1.0/24

Nmap scan report for 192.168.1.227
Host is up (0.010s latency).
MAC Address: E4:5F:01:2E:D8:95 (Raspberry Pi Trading)

6. Log in as root (password: almalinux) to the IP address above. You will need to change your password on first login, then ssh again with the new password.

ssh root@$ipaddress

7. Extend the partition to maximize disk usage

rootfs-expand

Note in the output below that the size of /dev/root increases to utilize the full space on the microSDXC card.

[root@rpi ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       2.2G  1.6G  514M  77% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   25M  3.8G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mmcblk0p1  286M   90M  197M  32% /boot
tmpfs           782M     0  782M   0% /run/user/0 
[root@rpi ~]# rootfs-expand
/dev/mmcblk0p3 /dev/mmcblk0 3
Extending partition 3 to max size ....
CHANGED: partition=3 start=788480 old: size=4687872 end=5476352 new: size=121350111 end=122138591
Resizing ext4 filesystem ...
resize2fs 1.45.6 (20-Mar-2020)
Filesystem at /dev/mmcblk0p3 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 8
The filesystem on /dev/mmcblk0p3 is now 15168763 (4k) blocks long.

Done.
[root@rpi ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        57G  1.7G   56G   3% /
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   25M  3.8G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mmcblk0p1  286M   90M  197M  32% /boot
tmpfs           782M     0  782M   0% /run/user/0

8. Update the packages, set the hostname with a domain, and add the IPv4 address to /etc/hosts

dnf update -y

hostnamectl set-hostname rpi.example.com
echo "$ipaddress rpi rpi.example.com" >> /etc/hosts

9. Optionally, enable Wi-Fi

nmcli device wifi list # Note your ssid
nmcli device wifi connect $ssid --ask

10. Check the release and the file system type. The AlmaLinux Raspberry Pi image uses an ext4 root file system, as seen in the resize2fs output above and in the rootfstype=ext4 kernel parameter shown in /proc/cmdline later in this post.
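
A quick optional check of the root file system type (not part of the original image setup, just a confirmation):

findmnt -n -o FSTYPE /
lsblk -f /dev/mmcblk0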

cat /etc/os-release
[root@rpi ~]# cat /etc/os-release
NAME="AlmaLinux"
VERSION="8.5 (Arctic Sphynx)"
ID="almalinux"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.5"
PLATFORM_ID="platform:el8"
PRETTY_NAME="AlmaLinux 8.5 (Arctic Sphynx)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:almalinux:almalinux:8::baseos"
HOME_URL="https://almalinux.org/"
DOCUMENTATION_URL="https://wiki.almalinux.org/"
BUG_REPORT_URL="https://bugs.almalinux.org/"

ALMALINUX_MANTISBT_PROJECT="AlmaLinux-8"
ALMALINUX_MANTISBT_PROJECT_VERSION="8.5"

11. Update the kernel parameters - Concatenate the following onto the end of the existing line (do not add a new line) in /boot/cmdline.txt

 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. Cgroups are a key component of containers because there are often multiple processes running in a container that you need to control together. In MicroShift, cgroups are used to implement resource requests and limits and corresponding QoS classes at the pod level.

reboot

Verify

ssh root@$ipaddress
cat /proc/cmdline
mount | grep cgroup # Check that memory and cpuset are present
cat /proc/cgroups | column -t # Check that memory and cpuset are present
ls -l /sys/fs/cgroup/cpu/cpu.cfs_quota_us # This needs to be present for MicroShift to work

Output:

[root@rpi ~]# cat /proc/cmdline
coherent_pool=1M 8250.nr_uarts=0 snd_bcm2835.enable_compat_alsa=0 snd_bcm2835.enable_hdmi=1 bcm2708_fb.fbwidth=1920 bcm2708_fb.fbheight=1200 bcm2708_fb.fbswap=1 smsc95xx.macaddr=E4:5F:01:2E:D8:95 vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000  console=ttyAMA0,115200 console=tty1 root=PARTUUID=4bab99d1-03 rootfstype=ext4 elevator=deadline rootwait cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
[root@rpi ~]# mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
[root@rpi ~]# cat /proc/cgroups | column -t # Check that memory and cpuset are present
#subsys_name  hierarchy  num_cgroups  enabled
cpuset        4          1            1
cpu           3          1            1
cpuacct       3          1            1
blkio         8          1            1
memory        10         129          1
devices       9          83           1
freezer       2          1            1
net_cls       5          1            1
perf_event    6          1            1
net_prio      5          1            1
pids          7          105          1
[root@rpi ~]# ls -l /sys/fs/cgroup/cpu/cpu.cfs_quota_us # This needs to be present for MicroShift to work
-rw-r--r--. 1 root root 0 May 15 14:17 /sys/fs/cgroup/cpu/cpu.cfs_quota_us
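
As an illustration of how pods use the cgroup controllers verified above, here is a minimal example pod spec (hypothetical name and values) with requests and limits that MicroShift will enforce once it is installed later in this post:

oc apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: limits-demo
spec:
  containers:
  - name: demo
    image: docker.io/library/busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 250m
        memory: 128Mi
EOF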

Install sense_hat and RTIMULib on AlmaLinux

The Sense HAT is an add-on board for the Raspberry Pi. The Sense HAT has an 8×8 RGB LED matrix and a five-button joystick, and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, and Humidity. If you have the Sense HAT attached, install the libraries.

Install sensehat

dnf -y install zlib zlib-devel libjpeg-devel gcc gcc-c++ i2c-tools python3-devel python3 python3-pip cmake
pip3 install Cython Pillow numpy sense_hat

Check the Sense Hat with i2cdetect

i2cdetect -y 1
[root@rpi ~]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Install RTIMULib

dnf -y install git
cd ~
git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig

# Optional test the sensors
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11 # Ctrl-C to break
cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10 # Ctrl-C to break

# Optional
dnf -y install qt5-qtbase-devel
cd /root/RTIMULib/Linux/RTIMULibDemoGL
qmake-qt5
make -j4
make install
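
Optionally, a one-line sanity check that the Python library can now read the sensors (assumes the Sense HAT is attached; the reading is approximate and may need a second run to settle):

python3 -c "from sense_hat import SenseHat; print(SenseHat().get_temperature())"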

Test the Sense HAT samples for the LED matrix and sensors.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/sensehat-fedora-iot

# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt

# First time you run the temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py 

# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt

# Show two digits for multiple numbers
sed -i "s/32,32,32/255,255,255/" digits.py
python3 digits.py

# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py

# Find Magnetic North
python3 compass.py

Install MicroShift on the Raspberry Pi 4 AlmaLinux host

Set up crio and MicroShift Nightly EPEL Stream 8 aarch64

rpm -qi selinux-policy # selinux-policy-3.14.3-95
dnf -y install 'dnf-command(copr)'
curl https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift-nightly/repo/epel-8/group_redhat-et-microshift-nightly-epel-8.repo -o /etc/yum.repos.d/microshift-nightly-epel-8.repo
cat /etc/yum.repos.d/microshift-nightly-epel-8.repo

VERSION=1.22
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_8/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:${VERSION}/CentOS_8/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo
cat /etc/yum.repos.d/devel\:kubic\:libcontainers\:stable\:cri-o\:${VERSION}.repo

dnf -y install firewalld cri-o cri-tools microshift
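
Optionally, confirm what was installed before proceeding:

rpm -q cri-o cri-tools microshift
crio --version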

Install KVM on the host and validate the Host Virtualization Setup. The virt-host-validate command validates that the host is configured in a suitable way to run the libvirt hypervisor driver qemu.

dnf -y install libvirt-client libvirt-nss qemu-kvm virt-manager virt-install virt-viewer
# Works with nftables on Fedora Server and Fedora IoT, Oracle Linux, AlmaLinux
# vi /etc/firewalld/firewalld.conf # FirewallBackend=iptables
systemctl enable --now libvirtd
virt-host-validate qemu

Output:

[root@rpi ~]# virt-host-validate qemu
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (Unknown if this platform has IOMMU support)
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)

Check that the CNI plugins are present

ls /opt/cni/bin/ # empty
ls /usr/libexec/cni # cni plugins

We will have systemd start and manage MicroShift. (The other two approaches, containerized and all-in-one containerized, are covered later in this post.)

systemctl enable --now crio microshift

# Copy flannel
cp /opt/cni/bin/flannel /usr/libexec/cni/.

You may read about selecting zones for your interfaces.

systemctl enable firewalld --now
firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=5353/udp --permanent
firewall-cmd --reload

Additional ports may need to be opened. For external access to run kubectl or oc commands against MicroShift, add the 6443 port:

firewall-cmd --zone=public --permanent --add-port=6443/tcp

For access to services through NodePort, add the port range 30000-32767:

firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp

firewall-cmd --reload
firewall-cmd --list-all --zone=public
firewall-cmd --get-default-zone
#firewall-cmd --set-default-zone=public
#firewall-cmd --get-active-zones
firewall-cmd --list-all
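
From the laptop, an optional quick check that the API server port opened above is reachable (substitute your Raspberry Pi's address):

nc -vz 192.168.1.227 6443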

Check the microshift and crio logs

journalctl -u microshift -f
journalctl -u crio -f

The microshift service references the microshift binary in the /usr/bin directory

[root@rpi ~]# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

Install kubectl and the OpenShift oc client

ARCH=arm64
cd /tmp
dnf -y install tar
export OCP_VERSION=4.9.11 && \
    curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf oc.tar.gz && \
    rm -f oc.tar.gz && \
    install -t /usr/local/bin {kubectl,oc} && \
    rm -f {README.md,kubectl,oc}

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
#watch "kubectl get nodes;kubectl get pods -A;crictl pods;crictl images"
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Install podman - We will use podman for containerized deployment of MicroShift and building images for the samples.

dnf -y install podman

Samples to run on MicroShift

We will run samples that will show the use of dynamic persistent volume, SenseHat and the USB camera.

1. InfluxDB/Telegraf/Grafana

The source code for this InfluxDB sample is available on GitHub.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb

If you want to run all the steps in a single command, get the nodename.

oc get nodes

Output:

[root@rpi influxdb]# oc get nodes
NAME              STATUS   ROLES    AGE     VERSION
rpi.example.com   Ready    <none>   3m36s   v1.21.0

Replace the annotation kubevirt.io/provisionOnNode with the above nodename and execute the runall-balena-dynamic.sh. Note that the node name is different when running MicroShift with the all-in-one containerized approach, so you will use microshift.example.com instead of rpi.example.com below.

sed -i "s|coreos|rpi.example.com|" influxdb-data-dynamic.yaml
sed -i "s|coreos|rpi.example.com|" grafana/grafana-data-dynamic.yaml

./runall-balena-dynamic.sh

We create and push the “measure:latest” image using the Dockerfile that uses SMBus. The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.

This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PV.

  annotations:
    kubevirt.io/provisionOnNode: rpi.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner

Persistent Volumes and Claims Output:

[root@rpi influxdb]# oc get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS                    REASON   AGE
persistentvolume/pvc-361c9b38-3216-4a50-a765-6a61cea54071   56Gi       RWO            Delete           Bound    influxdb/grafana-data    kubevirt-hostpath-provisioner            47m
persistentvolume/pvc-feaed546-acd7-4c08-903a-46f71dc7c3f3   56Gi       RWO            Delete           Bound    influxdb/influxdb-data   kubevirt-hostpath-provisioner            49m

NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
persistentvolumeclaim/grafana-data    Bound    pvc-361c9b38-3216-4a50-a765-6a61cea54071   56Gi       RWO            kubevirt-hostpath-provisioner   47m
persistentvolumeclaim/influxdb-data   Bound    pvc-feaed546-acd7-4c08-903a-46f71dc7c3f3   56Gi       RWO            kubevirt-hostpath-provisioner   49m

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.

Finally, after you are done working with this sample, you can run the deleteall-balena-dynamic.sh

./deleteall-balena-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes.
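
An optional check after the cleanup script finishes:

oc get pv
# The influxdb-data and grafana-data volumes shown earlier should no longer be listed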

2. Node Red live data dashboard with SenseHat sensor charts

We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image “karve/nodered:arm64”

cd docker-custom/
# Replace docker with podman in docker-debian.sh and run it
./docker-debian.sh
podman push karve/nodered:arm64
cd ..

Deploy Node Red with persistent volume for /data within the node red container

mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml
oc get routes
oc -n nodered wait deployment nodered-deployment --for condition=Available --timeout=300s
oc logs deployment/nodered-deployment -f

Add the IP address of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your laptop and browse to http://nodered-svc-nodered.cluster.local/

The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, and node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 and Flow 2 have already been imported from the nodered sample; this import into Node Red can also be done manually under “Import Nodes”, followed by clicking “Deploy”.

Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see the Home page by default. You can see the state of the Joystick: Up, Down, Left, Right, or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT. If you selected Flow 1, you can click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED. You can see the screenshots for these dashboards in previous blogs.

We can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:

cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml -n nodered
oc project default
oc delete project nodered

3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 2.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection. A pod in MicroShift will send pictures and WebSocket chat messages to Node Red when a person is detected.

podman build -t docker.io/karve/object-detection-raspberrypi4 .
podman push docker.io/karve/object-detection-raspberrypi4:latest

Update the env WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection.yaml to point to your Raspberry Pi 4 IP address (192.168.1.227 shown below; a sed sketch follows the snippet).

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.227
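
A hedged sed sketch for setting the hostAliases address, assuming the ip: line in object-detection.yaml is the only line matching that pattern (check the file first, since the value there may differ):

RPI_IP=192.168.1.227
sed -i "s|ip: .*|ip: ${RPI_IP}|" object-detection.yaml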

Create the deployment

oc project default
oc apply -f object-detection.yaml
oc -n default wait deployment object-detection-deployment --for condition=Available --timeout=300s

We will see pictures being sent to Node Red when a person is detected at http://nodered-svc-nodered.cluster.local/#flow/3e30dc50ae28f61f and chat messages at http://nodered-svc-nodered.cluster.local/chat. When we are done testing, we can delete the deployment

oc delete -f object-detection.yaml

4. Running a Virtual Machine Instance on MicroShift

Find the latest version of the KubeVirt Operator.

LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST

I used the following version:

LATEST=20220517 # If the latest version does not work

oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator

# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt

We can build the OKD Web Console (codename: “bridge”) from source as described in Part 9. Here, we will run the “bridge” as a container image within MicroShift.

cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'

In okd-web-console-install.yaml, replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value https://192.168.1.209:6443 with your Raspberry Pi 4's IP address, and set the secretRef token for BRIDGE_K8S_AUTH_BEARER_TOKEN to the console-token-* secret name from the two commands above. Then apply/create the okd-web-console-install.yaml.
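
Before applying, a hedged sketch of making these two edits with sed, assuming okd-web-console-install.yaml contains the literal endpoint above and a console-token-* placeholder (inspect the file first, since the placeholders may differ):

RPI_IP=192.168.1.227
TOKEN_NAME=$(oc get secrets -n kube-system -o name | grep console-token | head -n 1 | cut -d/ -f2)
sed -i "s|https://192.168.1.209:6443|https://${RPI_IP}:6443|" okd-web-console-install.yaml
sed -i "s|console-token-[a-z0-9]*|${TOKEN_NAME}|" okd-web-console-install.yaml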

oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc -n kube-system wait deployment console-deployment --for condition=Available --timeout=300s
oc logs deployment/console-deployment -f -n kube-system

Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/. If you see a blank page, you probably have the value of BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT set incorrectly.

We can optionally preload the fedora image into crio (if using the all-in-one containerized approach, this needs to be run within the microshift pod running in podman)

crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.

cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods

kubevirt VM launch flow


In the output, the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”. Note down the IP address of the vmi-fedora Virtual Machine Instance. Connect to the VMI directly from the Raspberry Pi 4 with fedora as the password. Note that it takes about another minute after the VMI reaches the Running state before you can ssh to the instance.

Output:

[root@rpi vmi]# oc get vmi
NAME         AGE     PHASE     IP           NODENAME          READY
vmi-fedora   2m20s   Running   10.42.0.18   rpi.example.com   True 
[root@rpi vmi]# ssh fedora@10.42.0.18 ping -c 2 google.com
The authenticity of host '10.42.0.18 (10.42.0.18)' can't be established.
ECDSA key fingerprint is SHA256:0P6gmZ5M2w4mqjbQdfgwF2TMUXj7aHT67p4lUY9hl6E.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.18' (ECDSA) to the list of known hosts.
fedora@10.42.0.18's password:
PING google.com (142.250.65.206) 56(84) bytes of data.
64 bytes from lga25s72-in-f14.1e100.net (142.250.65.206): icmp_seq=1 ttl=117 time=4.56 ms
64 bytes from lga25s72-in-f14.1e100.net (142.250.65.206): icmp_seq=2 ttl=117 time=4.57 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.561/4.563/4.566/0.002 ms

A second way is to create a pod to run the ssh client and connect to the Fedora VM from that pod. Let’s create the openssh-client pod:

oc run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client

or

oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
#oc attach sshclient -c sshclient -i -t

Then, ssh to the Fedora VMI from this openssh-client container.

Output:

[root@rpi vmi]# oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ssh fedora@10.42.0.18 "bash -c \"ping -c 2 google.com\""
The authenticity of host '10.42.0.18 (10.42.0.18)' can't be established.
ED25519 key fingerprint is SHA256:Xdw3Cm+T/AANbauy+bhBo1x63h3OWDqBOlvHUM7fOCo.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.18' (ED25519) to the list of known hosts.
fedora@10.42.0.18's password:
PING google.com (142.251.32.110) 56(84) bytes of data.
64 bytes from lga25s77-in-f14.1e100.net (142.251.32.110): icmp_seq=1 ttl=117 time=4.87 ms
64 bytes from lga25s77-in-f14.1e100.net (142.251.32.110): icmp_seq=2 ttl=117 time=4.52 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.522/4.697/4.873/0.175 ms
/ # exit
Session ended, resume using 'oc attach sshclient -c sshclient -i -t' command when the pod is running
pod "sshclient" deleted

A third way to connect to the VM is to use the virtctl console. You can compile your own virtctl as described in Part 9. To simplify, we copy the virtctl arm64 binary from a prebuilt container image to /usr/local/bin on the Raspberry Pi 4 and connect to the VMI using the “virtctl console” command.

id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id
virtctl console vmi-fedora

Output:

[root@rpi vmi]# id=$(podman create docker.io/karve/kubevirt:arm64)
Trying to pull docker.io/karve/kubevirt:arm64...
Getting image source signatures
Copying blob 7065f6098427 done
Copying config 1c7a5aa443 done
Writing manifest to image destination
Storing signatures
[root@rpi vmi]# podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
[root@rpi vmi]# podman rm -v $id
6f52989987418d4745f7f8ba9a5dddab6827e676c0ed2fe7779798a2308afd0c
[root@rpi vmi]# virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]

vmi-fedora login: fedora
Password:
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.251.32.110) 56(84) bytes of data.
64 bytes from lga25s77-in-f14.1e100.net (142.251.32.110): icmp_seq=1 ttl=117 time=4.56 ms
64 bytes from lga25s77-in-f14.1e100.net (142.251.32.110): icmp_seq=2 ttl=117 time=4.54 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.544/4.553/4.562/0.009 ms
[fedora@vmi-fedora ~]$ # Ctrl-] to detach
[root@rpi vmi]#

When done, we can delete the VMI

oc delete -f vmi-fedora.yaml

We can run other VM and VMI samples for the alpine, cirros, and fedora images as in Part 9. When done, you may delete the KubeVirt operator:

oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml

5. Run a jupyter notebook sample for license plate recognition (RPi with 8GB RAM)

We will run the sample described at the Red Hat OpenShift Data Science Workshop License plate recognition. The Dockerfile uses the arm64 Jupyter Notebook base image: scipy-notebook. Since we do not have a tensorflow arm64 image, we install it as described at Qengineering. The notebook.yaml downloads the licence-plate-workshop sample in an initContainer.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook
oc apply -f notebook.yaml 
oc -n default wait pod notebook --for condition=Ready --timeout=600s
oc get routes

Output:

[root@rpi tensorflow-notebook]# oc get routes
NAME             HOST/PORT                              PATH   SERVICES       PORT   TERMINATION   WILDCARD
flask-route      flask-route-default.cluster.local             notebook-svc   5000                 None
notebook-route   notebook-route-default.cluster.local          notebook-svc   5001                 None

The container image is large, so it may take a while for the image to be downloaded:

[root@rpi tensorflow-notebook]# # podman exec -it microshift crictl images | grep tensorflow-notebook # All in one
[root@rpi tensorflow-notebook]# crictl images | grep tensorflow-notebook
docker.io/karve/tensorflow-notebook             arm64                           c8da62870fec2       4.73GB

Add the IP address of the Raspberry Pi 4 device for notebook-route-default.cluster.local to /etc/hosts on your laptop and browse to http://notebook-route-default.cluster.local/tree?. Log in with the default password mysecretpassword. Go to the work folder, then select and run the License-plate-recognition notebook at http://notebook-route-default.cluster.local/notebooks/work/02_Licence-plate-recognition.ipynb

We can also run it as an application and test it using the corresponding notebooks. Run http://notebook-route-default.cluster.local/notebooks/work/03_LPR_run_application.ipynb

Wait for the following to appear:

Instructions for updating:
non-resource variables are not supported in the long term
Model Loaded successfully...
Model Loaded successfully...
[INFO] Model loaded successfully...
[INFO] Labels loaded successfully...

Then run http://notebook-route-default.cluster.local/notebooks/work/04_LPR_test_application.ipynb

We can experiment with a custom image. Let’s download the image to the pod and run the cells again with the new image at /tmp/3183KND.jpg and check the prediction.

oc exec -it notebook -- bash -c "wget \"https://unsplash.com/photos/MgfKoRdI948/download?force=true&ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjUyNDY4Mjcz\" -O /tmp/3183KND.jpg"

Then, run the http://notebook-route-default.cluster.local/notebooks/work/05_Send_image.ipynb

Add the cell with the following code:

my_image = 'https://unsplash.com/photos/MgfKoRdI948/download?force=true&ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjUyNDY4Mjcz'

import base64
from io import BytesIO
from json import dumps

import requests
from IPython.display import Image

# Download the image and base64-encode it for the prediction request
response = requests.get(my_image)
img = BytesIO(response.content).read()
encoded_image = base64.b64encode(img).decode('utf-8')
content = {"image": encoded_image}
json_data = dumps(content)
headers = {"Content-Type": "application/json"}

# Call the Flask /predictions endpoint (my_route is defined earlier in the notebook)
r = requests.post(my_route + '/predictions', data=json_data, headers=headers)
print(r.content)

# Display the downloaded image in the notebook
Image(url=my_image)

Calling Flask for custom plate recognition


When we are done working with the license plate recognition sample notebook, we can delete it as follows:

oc delete -f notebook.yaml

6. Run a jupyter notebook sample for object detection

We will run the sample described at the Red Hat OpenShift Data Science Workshop Object Detection. We use the same container image as in the previous Sample 5; the only change is that object-detection-rest.yaml downloads the object detection sample from object-detection-rest.git.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook
oc apply -f object-detection-rest.yaml 
oc -n default wait pod notebook --for condition=Ready --timeout=300s
oc get routes

Output will look the same as in Sample 5; we use the same service and route names.

[root@rpi tensorflow-notebook]# oc apply -f object-detection-rest.yaml
pod/notebook created
service/flask-svc created
service/notebook-svc created
route.route.openshift.io/notebook-route created
route.route.openshift.io/flask-route created
[root@rpi tensorflow-notebook]# oc get routes
NAME             HOST/PORT                              PATH   SERVICES       PORT   TERMINATION   WILDCARD
flask-route      flask-route-default.cluster.local             notebook-svc   5000                 None
notebook-route   notebook-route-default.cluster.local          notebook-svc   5001                 None

We can run the 1_explore.ipynb that will download twodogs.jpg and use a pre-trained model to identify objects in images. In the next notebooks (2_predict.ipynb, 3_run_flask.ipynb, and 4_test_flask.ipynb), this model is wrapped in a flask app that can be used as part of a larger application.

Identify two dogs in an image


In 4_test_flask.ipynb, replace the my_route as follows:

my_route = 'http://flask-svc:5000'

We can also test by downloading custom images, for example from Dogs Best Life.

oc exec -it notebook -- bash -c "wget https://dogsbestlife.com/wp-content/uploads/2016/05/two-dogs-same-litter-min.jpeg -O /home/jovyan/work/two-dogs-same-litter-min.jpeg"

In 4_test_flask.ipynb, replace the my_image and run the notebook.

my_image = 'two-dogs-same-litter-min.jpeg'

When we are done working with the object detection sample notebook, we can delete it as follows:

oc delete -f object-detection-rest.yaml

Cleanup MicroShift

We can use the cleanup.sh script available on GitHub to clean up the pods and images. If you already cloned the microshift repo from GitHub, you have the script in the ~/microshift/hack directory.

cd ~/microshift/hack
./cleanup.sh

Containerized MicroShift on AlmaLinux (64 bit)

We can run MicroShift within containers in two ways:

  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, the CRI-O systemd service runs directly on the host, and data is stored in a podman volume
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a container, and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only

Microshift Containerized

If you did not already install podman, you can do it now.

dnf install -y podman

We will use a new microshift.service that runs microshift in a pod using the prebuilt image and a podman volume. The rest of the pods run using crio on the host.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target crio.service
After=network-online.target crio.service
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
  --cidfile=%t/%n.ctr-id \
  --cgroups=no-conmon \
  --rm \
  --replace \
  --sdnotify=container \
  --label io.containers.autoupdate=registry \
  --network=host \
  --privileged \
  -d \
  --name microshift \
  -v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
  -v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
  -v microshift-data:/var/lib/microshift:rw,rshared \
  -v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
  -v /var/log:/var/log \
  -v /etc:/etc quay.io/microshift/microshift:latest
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target
EOF


systemctl daemon-reload
systemctl enable --now crio microshift
podman ps -a
podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Now that microshift is started, we can run the samples shown earlier.

After we are done, we can delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.

podman stop microshift && podman volume rm microshift-data

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.
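
For reference, these are the same cleanup commands shown earlier:

cd ~/microshift/hack
./cleanup.sh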

MicroShift Containerized All-In-One

Let’s stop crio on the host; we will be creating an all-in-one container in podman that runs crio within the container.

systemctl stop crio
systemctl disable crio
mkdir /var/hpvolumes

We will run the all-in-one microshift in podman using prebuilt images (replace the image in the podman run command below with the latest image). Normally you would run the following to start the all-in-one microshift, but it does not work:

setsebool -P container_manage_cgroup true 
podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64

The containers will give errors when starting within the microshift pod.

[root@rpi hack]# export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
[root@rpi hack]# watch "oc get nodes;oc get pods -A;podman exec -it microshift crictl ps -a"
NAME                     STATUS     ROLES    AGE    VERSION
microshift.example.com   NotReady   

You can stop it with

podman stop microshift

Since the “setsebool -P container_manage_cgroup true” does not work, we mount /sys/fs/cgroup into the container using -v /sys/fs/cgroup:/sys/fs/cgroup:ro. This volume mounts /sys/fs/cgroup into the container as read-only, but the subdirectory mount points are mounted in as read/write.

podman run -d --rm --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64

Now that you know the podman command to start the microshift all-in-one, you may alternatively use the following microshift service.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift all-in-one
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm --replace -d --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -v /lib/modules:/lib/modules --label io.containers.autoupdate=registry -p 6443:6443 -p 80:80 quay.io/microshift/microshift-aio:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target 
EOF

systemctl daemon-reload
systemctl start microshift

On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above.

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman exec -it microshift crictl ps -a"

The crio service is stopped on the Raspberry Pi, so the crictl command will not work directly on the Pi. The crictl commands will work within the microshift container in podman, as shown in the watch command above.

Now, we can run the samples shown earlier. To run the Virtual Machine examples in the all-in-one MicroShift, we need to execute the mount with --make-shared as follows in the microshift container to prevent the “Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount” event from virt-handler.

podman exec -it microshift mount --make-shared /

We may also preload the virtual machine images using "crictl pull".

podman exec -it microshift crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

For the Virtual Machine Instance Sample 4, we can connect to the vmi-fedora VM by exposing the ssh port of the Virtual Machine Instance as a NodePort Service after the instance is started. This NodePort is within the all-in-one pod that is running in podman. The IP address of the all-in-one microshift podman container is 10.88.0.2. We expose target port 22 on the VM as a service on port 22, which is in turn exposed on the microshift container with the allocated NodePort 30339 as seen below. We then run a new pod called ssh-proxy with an ssh client and ssh to port 30339 on the all-in-one microshift container. This takes us to the VMI port 22 as shown below:

[root@rpi vmi]# oc get vmi,pods
NAME                                            AGE     PHASE     IP           NODENAME                 READY
virtualmachineinstance.kubevirt.io/vmi-fedora   2m53s   Running   10.42.0.22   microshift.example.com   True

NAME                                 READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vmi-fedora-pbbqn   2/2     Running   0          2m53s
[root@rpi vmi]# virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Service vmi-fedora-ssh successfully exposed for vmi vmi-fedora
[root@rpi vmi]# oc get svc vmi-fedora-ssh
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
vmi-fedora-ssh   NodePort   10.43.118.42   <none>        22:30339/TCP   10s
[root@rpi vmi]# podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift
10.88.0.2
[root@rpi vmi]# oc run -i --tty ssh-proxy --rm --image=karve/alpine-sshclient:arm64 --restart=Never -- /bin/sh -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@10.88.0.2 -p 30339"
If you don't see a command prompt, try pressing enter.

Permission denied, please try again.
fedora@10.88.0.2's password:
Last login: Tue May 17 10:53:32 2022 from 10.42.0.1
[fedora@vmi-fedora ~]$ sudo dnf install -y qemu-guest-agent >/dev/null
[fedora@vmi-fedora ~]$ sudo systemctl enable --now qemu-guest-agent 
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.251.40.142) 56(84) bytes of data.
64 bytes from lga25s80-in-f14.1e100.net (142.251.40.142): icmp_seq=1 ttl=115 time=4.14 ms
64 bytes from lga25s80-in-f14.1e100.net (142.251.40.142): icmp_seq=2 ttl=115 time=4.30 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.142/4.222/4.303/0.080 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.88.0.2 closed.
pod "ssh-proxy" deleted

The QEMU guest agent that we installed is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks. We can see the "Guest Agent OK" in the picture below:

QEMU Guest Agent installed on Fedora VMI
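
With the agent connected, the guest information can also be queried from the command line (a hedged example; virtctl guestosinfo is a KubeVirt virtctl subcommand, and the status field below is populated by the agent):

virtctl guestosinfo vmi-fedora
oc get vmi vmi-fedora -o jsonpath='{.status.guestOSInfo}'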


After we are done, we can delete the all-in-one microshift container.

podman rm -f microshift && podman volume rm microshift-data

or if started using systemd, then

systemctl stop microshift

Conclusion

In this Part 17, we saw multiple options to run MicroShift on the Raspberry Pi 4 with AlmaLinux 8.5 (64 bit). We used dynamic persistent volumes to install InfluxDB/Telegraf/Grafana with a dashboard to show SenseHat sensor data. We ran samples that used the Sense HAT and USB camera, and worked with a sample that sent pictures and WebSocket messages to Node Red when a person was detected. We installed the OKD Web Console and saw how to connect to a Virtual Machine Instance using KubeVirt on MicroShift with AlmaLinux 8.5. Finally, we saw how to run Jupyter notebooks with the license plate recognition and object detection demos. We will work with MicroShift with KubeVirt and Kata Containers on the Raspberry Pi 4 with AlmaLinux 9 in Part 26. In the next Part 18, we will work with Manjaro.


Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.

#MicroShift #Openshift #containers #crio #Edge #raspberry-pi #almalinux
