Infrastructure as a Service


MicroShift – Part 13: Raspberry Pi 4 with Ubuntu Server 22.04

By Alexei Karve posted Sun April 24, 2022 12:44 PM


MicroShift and KubeVirt on Raspberry Pi 4 with Ubuntu Server 22.04 (Jammy Jellyfish)


MicroShift is a research project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit), and further in Part 9, we looked at Virtualization with MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit). In Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with CentOS 8 Stream (64 bit). In Part 6, we deployed MicroShift on the Raspberry Pi 4 with Ubuntu 20.04 (64 bit). In Part 8, we looked at the All-In-One install of MicroShift on balenaOS. In Part 10, Part 11, and Part 12, we deployed MicroShift and KubeVirt on Fedora IoT, Fedora Server and Fedora CoreOS respectively. In this Part 13, we will set up and deploy MicroShift on Ubuntu Server 22.04 (Jammy Jellyfish), both directly on the host and using containerized approaches. We will run samples with InfluxDB/Telegraf/Grafana, SenseHat, Object Detection with TensorFlow Lite, and the Metrics Server. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift using each of these approaches.

Ubuntu 22.04 is the first LTS release where the entire Raspberry Pi device portfolio is supported, building upon the work that started back in Ubuntu 20.10, a release dedicated to the popular single-board computers.

Setting up the Raspberry Pi 4 with Ubuntu Server 22.04 (64 bit)

Run the following steps to set up the Raspberry Pi 4 with Ubuntu Server 22.04:

  1. Download the image from
  2. Write the image to a microSDXC card using balenaEtcher or the Raspberry Pi Imager
  3. Optionally, have a keyboard and monitor connected to the Raspberry Pi 4
  4. Insert the microSDXC card into the Raspberry Pi 4 and power on
  5. Find the ethernet dhcp ip address of your Raspberry Pi 4 by running nmap on your MacBook with your subnet
$ sudo nmap -sn
Nmap scan report for ubuntu.fios-router.home (
Host is up (0.0083s latency).
MAC Address: E4:5F:01:2E:D8:95 (Raspberry Pi Trading)
  6. Log in using the keyboard attached to the Raspberry Pi 4 or ssh to the ethernet ip address above with ubuntu/ubuntu. Change the password by entering the current password ubuntu and the new password twice. Then, ssh again with the new password.
ssh ubuntu@$ipaddress
sudo su -
  7. Now we set up the wifi using the command line on the Raspberry Pi 4
ls /sys/class/net # Identify the wireless network interface name wlan0
ls /etc/netplan/

sudo vi /etc/netplan/50-cloud-init.yaml # add the following section

    wifis:
        wlan0:
            optional: true
            access-points:
                "SSID-NAME-HERE":
                    password: "PASSWORD-HERE"
            dhcp4: true

sudo netplan apply
#sudo netplan --debug apply # to debug if you run into problems
ip a # Get ipaddress
You can get the wifi ip address from above and ssh to the Raspberry Pi 4 from your MacBook using the userid ubuntu
ssh ubuntu@$ipaddress
sudo su -
  8. Check the release
cat /etc/os-release

root@ubuntu:~# cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04 LTS"
VERSION="22.04 (Jammy Jellyfish)"

Wait until the unattended-upgrade process completes

watch "ps aux | grep unatt | sort +3 -rn"
root         999  0.0  0.2 109928 19956 ?        Ssl  15:26   0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root        2321  0.0  0.0   6420  1856 pts/1    S+   15:46   0:00 grep unatt
  9. Update the cgroup kernel parameters - Append the following to the end of the existing line (do not add a new line) in /boot/firmware/cmdline.txt
 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. Cgroups are a key component of containers because there are often multiple processes running in a container that you need to control together. In MicroShift, cgroups are used to implement resource requests and limits and the corresponding QoS classes at the pod level.
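The append-to-one-line edit is easy to get wrong. A small sketch (my own, not from the article) that appends each cgroup parameter only if it is missing, run here against a scratch copy of the file rather than the real cmdline.txt:

```shell
# Work on a scratch copy; on the Pi you would edit /boot/firmware/cmdline.txt.
# The initial contents below are illustrative placeholders.
CMDLINE=/tmp/cmdline.txt
echo "console=serial0,115200 console=tty1 root=LABEL=writable rootfstype=ext4 rotate fixrtc" > "$CMDLINE"

# Append each parameter to the end of the single line only when absent,
# so re-running the script never duplicates parameters.
for p in cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory; do
  grep -qw "$p" "$CMDLINE" || sed -i "s/\$/ $p/" "$CMDLINE"
done

cat "$CMDLINE"   # still a single line, now ending with the three parameters
```

Running it a second time is a no-op, which makes the edit safe to script.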


ssh ubuntu@$ipaddress
sudo su -
cat /proc/cmdline
mount | grep cgroup # Check that memory and cpuset are present
cat /proc/cgroups | column -t # Check that memory and cpuset are present


root@ubuntu:~# mount | grep cgroup # Check that memory and cpuset are present
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
root@ubuntu:~# cat /proc/cgroups | column -t # Check that memory and cpuset are present
#subsys_name  hierarchy  num_cgroups  enabled
cpuset        0          105          1
cpu           0          105          1
cpuacct       0          105          1
blkio         0          105          1
memory        0          105          1
devices       0          105          1
freezer       0          105          1
net_cls       0          105          1
perf_event    0          105          1
net_prio      0          105          1
hugetlb       0          105          1
pids          0          105          1
rdma          0          105          1
misc          0          105          1
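The "enabled" checks above can also be scripted. A minimal sketch (an assumption, not the author's script) that parses a /proc/cgroups-style listing; a sample is inlined here, and on the Pi you would read the real /proc/cgroups instead:

```shell
# Inline a small /proc/cgroups-style sample (columns: subsys_name,
# hierarchy, num_cgroups, enabled). On the Pi, use /proc/cgroups itself.
cat << 'EOF' > /tmp/cgroups.sample
#subsys_name hierarchy num_cgroups enabled
cpuset 0 105 1
memory 0 105 1
EOF

# For each required controller, verify the "enabled" column is 1.
for c in cpuset memory; do
  if awk -v c="$c" '$1 == c && $4 == 1 { found = 1 } END { exit !found }' /tmp/cgroups.sample; then
    echo "$c enabled"
  else
    echo "$c missing"
  fi
done
```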
  10. Install VXLAN support. References: vxlan required for flannel, vxlan failing to route, linux-modules-extra-raspi
modprobe vxlan
# vxlan modules were moved by upstream Ubuntu 21.10 to this separate package
apt install -y linux-modules-extra-raspi

Install sense_hat and RTIMULib

Install the rest of the updates and dependencies for SenseHat

apt-get upgrade -y
apt install -y python3 python3-dev python3-pip python3-venv  \
                   build-essential autoconf libtool          \
                   pkg-config cmake libssl-dev               \
                   i2c-tools openssl libcurl4-openssl-dev

The Sense HAT is an add-on board for the Raspberry Pi. It has an 8×8 RGB LED matrix and a five-button joystick, and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, Humidity. If you have the Sense HAT attached, test it with i2cdetect. Addresses 1c and 5c show UU; we need to fix these.

root@ubuntu:~# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --
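Since UU marks an address claimed by a kernel driver, a quick sanity check is to count the UU entries. A sketch (my own) over inlined sample rows; on the Pi you would pipe the output of "i2cdetect -y 1" instead:

```shell
# Save a few i2cdetect-style rows (sample data; on the Pi, capture the
# real output of: i2cdetect -y 1).
cat << 'EOF' > /tmp/i2cdetect.out
10: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- 5f
EOF

grep -c 'UU' /tmp/i2cdetect.out          # rows containing a claimed address: 3
grep -o 'UU' /tmp/i2cdetect.out | wc -l  # total claimed addresses: 3
```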

Add the i2c-dev line to /etc/modules:

cat << EOF >> /etc/modules
i2c-dev
EOF

Create the file /etc/udev/rules.d/99-i2c.rules with the following contents:

cat << EOF >> /etc/udev/rules.d/99-i2c.rules
KERNEL=="i2c-[0-7]",MODE="0666"
EOF

The Raspberry Pi build of Ubuntu Server comes with the Industrial I/O modules preloaded. We get initialization errors on some of the sensors because the Industrial I/O modules grab on to the i2c sensors on the Sense HAT and refuse to let them go or allow them to be read correctly. Check this with “lsmod | grep st_”.

root@ubuntu:~# lsmod | grep st_
st_pressure_spi        16384  0
st_magn_spi            16384  0
st_sensors_spi         16384  2 st_pressure_spi,st_magn_spi
st_pressure_i2c        16384  0
st_magn_i2c            16384  0
st_pressure            20480  2 st_pressure_i2c,st_pressure_spi
st_magn                20480  2 st_magn_i2c,st_magn_spi
st_sensors_i2c         16384  2 st_pressure_i2c,st_magn_i2c
st_sensors             28672  6 st_pressure,st_pressure_i2c,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi
industrialio_triggered_buffer    16384  2 st_pressure,st_magn
industrialio           98304  9 st_pressure,industrialio_triggered_buffer,st_sensors,st_pressure_i2c,kfifo_buf,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi

We need to blacklist the modules:

cat << EOF > /etc/modprobe.d/blacklist-industialio.conf
blacklist st_magn_spi
blacklist st_pressure_spi
blacklist st_sensors_spi
blacklist st_pressure_i2c
blacklist st_magn_i2c
blacklist st_pressure
blacklist st_magn
blacklist st_sensors_i2c
blacklist st_sensors
blacklist industrialio_triggered_buffer
blacklist industrialio
EOF


Log back in to the Raspberry Pi 4

ssh ubuntu@$ipaddress
sudo su -

Check the config.txt and the i2cdetect output

root@ubuntu:~# cat /boot/firmware/config.txt
[all]
kernel=vmlinuz
cmdline=cmdline.txt
initramfs initrd.img followkernel

# Enable the audio output, I2C and SPI interfaces on the GPIO header. As these
# parameters related to the base device-tree they must appear *before* any
# other dtoverlay= specification
dtparam=audio=on
dtparam=i2c_arm=on
dtparam=spi=on

# Comment out the following line if the edges of the desktop appear outside
# the edges of your display
disable_overscan=1

# If you have issues with audio, you may try uncommenting the following line
# which forces the HDMI output into HDMI mode instead of DVI (which doesn't
# support audio output)
#hdmi_drive=2

# Enable the serial pins
enable_uart=1

# Autoload overlays for any recognized cameras or displays that are attached
# to the CSI/DSI ports. Please note this is for libcamera support, *not* for
# the legacy camera stack
camera_auto_detect=1
display_auto_detect=1

# Config settings specific to arm64
arm_64bit=1
dtoverlay=dwc2

[cm4]
# Enable the USB2 outputs on the IO board (assuming your CM4 is plugged into
# such a board)
dtoverlay=dwc2,dr_mode=host

[all]

root@ubuntu:~# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

root@ubuntu:~# i2cdetect -F 1
Functionalities implemented by /dev/i2c-1:
I2C                              yes
SMBus Quick Command              yes
SMBus Send Byte                  yes
SMBus Receive Byte               yes
SMBus Write Byte                 yes
SMBus Read Byte                  yes
SMBus Write Word                 yes
SMBus Read Word                  yes
SMBus Process Call               yes
SMBus Block Write                yes
SMBus Block Read                 no
SMBus Block Process Call         no
SMBus PEC                        yes
I2C Block Write                  yes
I2C Block Read                   yes

Install the RTIMULib. This is required to use the SenseHat.

git clone
cd RTIMULib/
cd Linux/python
python3 build
python3 install
cd ../..
mkdir build
cd build
cmake ..
make -j4
make install
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11 # Ctrl-C to break
cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10 # Ctrl-C to break

# Optional
cd /root/RTIMULib/Linux/RTIMULibDemoGL
apt-get -y install qtbase5-dev qtchooser qt5-qmake qtbase5-dev-tools
make -j4
make install

Install the sense_hat

pip3 install Cython Pillow numpy sense_hat smbus

Test the SenseHat samples for the Sense Hat's LED matrix and sensors.

git clone
cd microshift
cd raspberry-pi/sensehat-fedora-iot

# Enable random LEDs
python3 # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 # Ctrl-C to interrupt

# Show the Temperature, Pressure and Humidity
python3 # Ctrl-C to interrupt

# The first time you run it, you may see “Temperature: 0 C”. Just run it again.

# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second

# Find Magnetic North

Install MicroShift on Ubuntu 22.04 host

Set the hostname with a domain (if not already set)

hostnamectl set-hostname # The host needs an FQDN for MicroShift to work well

Clone the microshift repository so we can run the install script

sudo su -
git clone
cd microshift

Run the install script


To check the microshift systemd service, look at the file /lib/systemd/system/microshift.service. It shows that the microshift binary is in the /usr/local/bin directory.

root@ubuntu:# cat /lib/systemd/system/microshift.service 

ExecStart=microshift run


To start microshift and check the status and logs, you can run

#iptables-legacy not required
#update-alternatives --set iptables /usr/sbin/iptables-legacy
systemctl enable --now crio microshift
systemctl status microshift
journalctl -u microshift -f

Install the oc and kubectl client

cd /tmp
export OCP_VERSION=4.9.11 && \
    curl -o oc.tar.gz$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf oc.tar.gz && \
    rm -f oc.tar.gz && \
    install -t /usr/local/bin {kubectl,oc} && \
    rm -f {,kubectl,oc}

It will take around 3 minutes for all pods to start. Check the status of the node and pods using the kubectl or oc client.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Install Podman on Ubuntu 22.04

Although we do not need podman for installing microshift directly on the host Raspberry Pi 4, we will build images using podman and test some containers. We will also need it for the containerized version of MicroShift. So, let’s install podman.

apt -y install podman

Samples to run on MicroShift

We will run samples that will show the use of dynamic persistent volume, SenseHat and the USB camera.

1. InfluxDB/Telegraf/Grafana

The source code is available for this influxdb sample in github.

cd ~
git clone
cd microshift/raspberry-pi/influxdb

Replace the coreos nodename in the persistent volume claims with our current nodename

sed -i "s|coreos||" influxdb-data-dynamic.yaml
sed -i "s|coreos||" grafana/grafana-data-dynamic.yaml

This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PV.

  storageClassName: kubevirt-hostpath-provisioner 
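Putting the annotation and storage class together, a dynamic PVC for this provisioner looks roughly as follows. This is a sketch, not the actual file contents: the claim name, requested size, and node name are placeholders (the annotation key is the one the kubevirt-hostpath-provisioner uses):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: influxdb-data          # placeholder claim name
  annotations:
    kubevirt.io/provisionOnNode: ubuntu.example.com   # your nodename
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi             # placeholder size
  storageClassName: kubevirt-hostpath-provisioner
```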

We create and push the “measure:latest” image using the Dockerfile. If you want to run all the steps in a single command, just execute the script.


The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.

Finally, after you are done working with this sample, you can run the cleanup script.


Deleting the persistent volume claims automatically deletes the persistent volumes.

2. Node Red live data dashboard with SenseHat sensor charts

We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.

cd ~
git clone
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image “karve/nodered:arm64”

cd docker-custom/
# Replace docker with podman in the build script and run it
podman push karve/nodered:arm64
cd ..

Deploy Node Red with persistent volume for /data within the node red container

mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml
oc get routes
oc logs deployment/nodered-deployment -f

Add the ipaddress of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your Laptop and browse to http://nodered-svc-nodered.cluster.local/

The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 and Flow 2 have already been imported from the nodered sample. This import into Node Red can be done manually under “Import Nodes”, followed by clicking “Deploy”.

Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see the Home by Default. You can see the state of the Joystick Up, Down, Left, Right or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT. If you selected the Flow 1, you could click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED. The screenshots for these dashboards are similar to those shown in previous blogs.

We can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:

cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml -n nodered
oc project default
oc delete project nodered

3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 2.

cd ~
git clone
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection, which sends pictures and WebSocket chat messages to Node Red when a person is detected, using a pod in MicroShift.

podman build -f Dockerfile -t .
podman push

Update the env: WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection.yaml to point to your Raspberry Pi 4 ip address.

          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      - hostnames:
        - nodered-svc-nodered.cluster.local
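For reference, a complete hostAliases stanza has this shape (the IP is a placeholder for your Raspberry Pi 4 address, not a value from the article):

```yaml
      hostAliases:
      - ip: "192.168.1.100"   # placeholder: your Raspberry Pi 4 ip address
        hostnames:
        - nodered-svc-nodered.cluster.local
```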

Create the deployment

oc project default
oc apply -f object-detection.yaml

We will see pictures being sent to Node Red when a person is detected and chat messages as follows at http://nodered-svc-nodered.cluster.local/chat

Pictures sent to Node Red when a person is detected

When we are done testing, we can delete the deployment

oc delete -f object-detection.yaml

4. Running a Virtual Machine Instance on MicroShift

Install KVM on the host, set firewalld to use iptables, and validate the host virtualization setup. The virt-host-validate command validates that the host is configured in a suitable way to run the libvirt hypervisor driver qemu.

# This is only run on the host Raspberry Pi 4, not in the container
sudo apt install -y virt-manager libvirt0 qemu-system
vi /etc/firewalld/firewalld.conf # Change FirewallBackend=iptables
systemctl restart firewalld
virt-host-validate qemu

Find the latest version of the KubeVirt Operator.

LATEST=$(curl -L
echo $LATEST

Note that the LATEST version gave me errors. You may need to use a different version if the LATEST version continues to give messages such as the following in the virt-launcher logs when starting the VMI:

{"component":"virt-launcher","level":"info","msg":"Still missing PID for 5f67eebe-a107-4d5a-b37b-aabb594a31b1, Process 5f67eebe-a107-4d5a-b37b-aabb594a31b1 not found in /proc","pos":"monitor.go:123","timestamp":"2022-04-23T20:54:21.153137Z"}

We expect the correct message in the virt-launcher logs to show:

{"component":"virt-launcher","level":"info","msg":"Found PID for 5f67eebe-a107-4d5a-b37b-aabb594a31b1: 82","pos":"monitor.go:139","timestamp":"2022-04-23T20:54:22.158113Z"}

I used the following version:

LATEST=20220331 # If the latest version does not work
oc apply -f${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator

# The .status.phase will show Deploying multiple times and finally Deployed
oc get -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt

We can build the OKD Web Console (Codename: “bridge”) from the source as mentioned in Part 9. We will run the “bridge” as a container image that we run within MicroShift.

cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'

Replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value with your Raspberry Pi 4's ip address, and the secretRef token with the console-token-* from the two secret names above for BRIDGE_K8S_AUTH_BEARER_TOKEN in okd-web-console-install.yaml. Then apply/create the okd-web-console-install.yaml.
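One way to script the endpoint substitution is with sed. This is a sketch only: the marker text, scratch file, and IP below are illustrative placeholders, not the actual contents of okd-web-console-install.yaml:

```shell
# Scratch YAML fragment with a placeholder marker for the endpoint value.
YAML=/tmp/console.yaml
cat << 'EOF' > "$YAML"
- name: BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT
  value: https://RASPBERRY_PI_IP:6443
EOF

# Substitute in the device address (placeholder IP shown here).
sed -i "s|RASPBERRY_PI_IP|192.168.1.100|" "$YAML"

grep value "$YAML"   # value: https://192.168.1.100:6443
```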

oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc logs deployment/console-deployment -f -n kube-system

Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/

We can optionally preload the fedora image into crio

crictl pull

Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.

cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods

The output for the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”. Note down the ip address of the vmi-fedora Virtual Machine Instance. Directly connect to the VMI from the Raspberry Pi 4 with fedora as the password.


root@ubuntu:~/microshift/raspberry-pi/vmi# oc get vmi,pods
NAME                                            AGE     PHASE     IP   NODENAME   READY
virtualmachineinstance.kubevirt.io/vmi-fedora   4m30s   Running                   True

NAME                                 READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vmi-fedora-h2vb4   2/2     Running   0          4m29s
root@ubuntu:~/microshift/raspberry-pi/vmi# ssh fedora@
The authenticity of host ' (' can't be established.
ED25519 key fingerprint is SHA256:VRlWRZutD2uIUtdf9i2sFrYtufkJHZInChPOhP1VMmk.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '' (ED25519) to the list of known hosts.
fedora@'s password:
[fedora@vmi-fedora ~]$ ping
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=116 time=3.92 ms
64 bytes from ( icmp_seq=2 ttl=116 time=3.45 ms
--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.453/3.687/3.922/0.234 ms
[fedora@vmi-fedora ~]$ exit
Connection to closed.

Alternatively, a second way is to create a Pod to run the ssh client and connect to the Fedora VM from this pod. Let’s create that openssh-client pod:

oc run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client


oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
#oc attach sshclient -c sshclient -i -t

Then, ssh to the Fedora VMI from this openssh-client container.


root@ubuntu:~/microshift/raspberry-pi/vmi# oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ssh fedora@
The authenticity of host ' (' can't be established.
ED25519 key fingerprint is SHA256:VRlWRZutD2uIUtdf9i2sFrYtufkJHZInChPOhP1VMmk.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '' (ED25519) to the list of known hosts.
fedora@'s password:
Last login: Sat Apr 23 20:57:48 2022 from
[fedora@vmi-fedora ~]$ ping
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=116 time=4.73 ms
64 bytes from ( icmp_seq=2 ttl=116 time=4.55 ms
--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.547/4.639/4.731/0.092 ms
[fedora@vmi-fedora ~]$ exit
Connection to closed.
/ # exit
Session ended, resume using 'oc attach sshclient -c sshclient -i -t' command when the pod is running
pod "sshclient" deleted

A third way to connect to the VM is to use the virtctl console. You can compile your own virtctl as described in Part 9. To simplify, we copy the virtctl arm64 binary from a prebuilt container image to /usr/local/bin on the Raspberry Pi 4.

id=$(podman create
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id


root@ubuntu:~/microshift/raspberry-pi/vmi# id=$(podman create
Trying to pull
Getting image source signatures
Copying blob 7065f6098427 done
Copying config 1c7a5aa443 done
Writing manifest to image destination
Storing signatures
root@ubuntu:~/microshift/raspberry-pi/vmi# podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
root@ubuntu:~/microshift/raspberry-pi/vmi# podman rm -v $id
root@ubuntu:~/microshift/raspberry-pi/vmi# virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]

vmi-fedora login: fedora
Last login: Sat Apr 23 21:04:20 from
Failed Units: 1
[fedora@vmi-fedora ~]$ ping
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=116 time=3.61 ms
64 bytes from ( icmp_seq=2 ttl=116 time=3.44 ms

--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 3.437/3.521/3.605/0.084 ms
[fedora@vmi-fedora ~]$ # ^] to disconnect

When done, we can delete the VMI

root@ubuntu:~/microshift/raspberry-pi/vmi# oc delete -f vmi-fedora.yaml
"vmi-fedora" deleted

Also delete the KubeVirt operator

oc delete -f${LATEST}/kubevirt-cr-arm64.yaml
oc delete -f${LATEST}/kubevirt-operator-arm64.yaml

5. Install Metrics Server

This will enable us to run the “kubectl top” and “oc adm top” commands.

wget -O metrics-server-components.yaml
kubectl apply -f metrics-server-components.yaml

# Wait for the metrics-server to start in the kube-system namespace
kubectl get deployment metrics-server -n kube-system
kubectl get events -n kube-system
# Wait for a couple of minutes for metrics to be collected
kubectl get --raw /apis/
kubectl get --raw /apis/
apt-get install -y jq
kubectl get --raw /api/v1/nodes/$(kubectl get nodes -o json | jq -r '.items[0]')/proxy/stats/summary

watch "kubectl top nodes;kubectl top pods -A"
watch "oc adm top nodes;oc adm top pods -A"


NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
       346m         8%     3903Mi          50%
NAMESPACE                       NAME                                  CPU(cores)   MEMORY(bytes)
kube-system                     console-deployment-66489dbd56-fjs4g   1m           13Mi
kube-system                     kube-flannel-ds-g2whl                 13m          10Mi
kube-system                     metrics-server-64cf6869bd-t2nqw       12m          14Mi
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-8xmhx   2m           7Mi
openshift-dns                   dns-default-58rl9                     7m           20Mi
openshift-dns                   node-resolver-d4jbp                   0m           4Mi
openshift-ingress               router-default-85bcfdd948-m9l7j       5m           45Mi
openshift-service-ca            service-ca-7764c85869-fc8dm           12m          36Mi
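The columns from "oc adm top pods -A" are easy to aggregate with awk. A small sketch (my own) using a few rows from the output above, inlined so it runs without a cluster:

```shell
# A few sample rows (namespace, pod, CPU, memory) copied into a scratch file;
# on the Pi you would pipe: oc adm top pods -A --no-headers
cat << 'EOF' > /tmp/top.out
kube-system console-deployment-66489dbd56-fjs4g 1m 13Mi
kube-system kube-flannel-ds-g2whl 13m 10Mi
kube-system metrics-server-64cf6869bd-t2nqw 12m 14Mi
EOF

# Strip the Mi suffix from the memory column and total it.
awk '{ sub(/Mi/, "", $4); sum += $4 } END { print sum "Mi" }' /tmp/top.out   # 37Mi
```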

Cleanup MicroShift

We can use the cleanup script available on github to clean up the pods and images. If you already cloned the microshift repo from github, you have the script in the ~/microshift/hack directory.

cd ~/microshift/hack

Containerized MicroShift on Ubuntu 22.04 (64 bit)

We can run MicroShift within containers in two ways:

  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, the CRI-O systemd service runs directly on the host, and data is stored in a podman volume (we can store it in /var/lib/microshift and /var/lib/kubelet on the host as shown in previous blogs).
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only.

Microshift Containerized

We will use a new microshift.service that runs microshift in a pod using the prebuilt image and a podman volume. The rest of the pods run using crio on the host.

cat << EOF > /etc/systemd/system/microshift.service
[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=crio.service
After=crio.service

[Service]
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
  --cidfile=%t/%n.ctr-id \
  --cgroups=no-conmon \
  --rm \
  --replace \
  --sdnotify=container \
  --label io.containers.autoupdate=registry \
  --network=host \
  --privileged \
  -d \
  --name microshift \
  -v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
  -v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
  -v microshift-data:/var/lib/microshift:rw,rshared \
  -v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
  -v /var/log:/var/log \
  -v /etc:/etc
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now crio microshift
podman ps -a
podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"


root@ubuntu:~/microshift/hack# podman ps -a
CONTAINER ID  IMAGE                                 COMMAND     CREATED         STATUS             PORTS       NAMES
b8115a3b0b1f  run         52 seconds ago  Up 52 seconds ago              microshift
root@ubuntu:~/microshift/hack# podman volume inspect microshift-data
[
    {
        "Name": "microshift-data",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/microshift-data/_data",
        "CreatedAt": "2022-04-23T21:24:48.086888785Z",
        "Labels": {},
        "Scope": "local",
        "Options": {}
    }
]

Now that microshift is started, we can run the samples shown earlier.

After we are done, we can delete the microshift container. The --rm flag we used in the podman run command deletes the container when we stop it.

podman stop microshift && podman volume rm microshift-data

After it is stopped, we can run the cleanup script to delete the pods and images from crio.

MicroShift Containerized All-In-One

Let’s stop crio on the host; we will create an all-in-one container in podman that runs crio within the container.

systemctl stop crio
systemctl disable crio

We will run the all-in-one microshift in podman using prebuilt images (replace the image in the podman run command below with the latest image). Normally you would run the following to start the all-in-one microshift:

setsebool -P container_manage_cgroup true 
podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80

If SELinux is enabled on your system, you must turn on the container_manage_cgroup boolean to allow the container’s systemd process to make changes to the cgroup configuration. On Ubuntu, however, SELinux is disabled, so the setsebool command will not work.

root@ubuntu:~/microshift/raspberry-pi/vmi# getenforce
Disabled
root@ubuntu:~/microshift/raspberry-pi/vmi# sestatus
SELinux status:                 disabled
root@ubuntu:~/microshift/raspberry-pi/vmi# setsebool -P container_manage_cgroup true
Cannot set persistent booleans without managed policy.

If the “sudo setsebool -P container_manage_cgroup true” command does not work, you may mount /sys/fs/cgroup into the container using -v /sys/fs/cgroup:/sys/fs/cgroup:ro. This volume mounts /sys/fs/cgroup into the container as read-only, but the subdirectories/mount points under it will be mounted read-write.

podman run -d --rm --name microshift -h --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80

We can inspect the microshift-data volume to find the mountpoint path:

podman volume inspect microshift-data
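To extract just the mountpoint, podman's Go-template output can be used instead of reading the full JSON; a small sketch:

```shell
# Print only the Mountpoint field of the microshift-data volume
podman volume inspect microshift-data --format '{{ .Mountpoint }}'
```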

On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above.

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman exec -it microshift crictl ps -a"

The crio service is stopped on the Raspberry Pi, so crictl command will not work directly on the Pi. The crictl commands will work within the microshift container in podman as shown in the watch command above.
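As a convenience, a shell alias can forward crictl invocations into the all-in-one container. This is a hedged sketch; the alias lasts only for the current shell session:

```shell
# Run crictl inside the all-in-one microshift container instead of on the host
alias crictl='podman exec -it microshift crictl'
crictl ps -a    # now executes inside the container
crictl images
```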

To run the Virtual Machine examples in the all-in-one MicroShift, we need to update the AppArmor profile. The virt-handler invokes the QEMU binary at /usr/libexec/qemu-kvm, which gets blocked by the AppArmor profile for libvirtd on Ubuntu-based systems. Also, the qemu-kvm package on Ubuntu installs the binary with a different location and name (e.g., /usr/bin/qemu-system-aarch64), as seen below:

root@ubuntu:~# ls -las /usr/bin/kvm* /usr/bin/qemu-system-aarch64
    0 lrwxrwxrwx 1 root root       19 Apr  8 07:36 /usr/bin/kvm -> qemu-system-aarch64
19644 -rwxr-xr-x 1 root root 20112136 Apr  8 07:36 /usr/bin/qemu-system-aarch64

Create a symbolic link at /usr/libexec/qemu-kvm and open the libvirtd AppArmor profile for editing:

sudo ln -s /usr/bin/kvm /usr/libexec/qemu-kvm
vi /etc/apparmor.d/usr.sbin.libvirtd

Add the following line in /etc/apparmor.d/usr.sbin.libvirtd and reload the apparmor service.

  /usr/libexec/qemu-kvm PUx,

This is seen in the image below:

[Image: the libvirtd AppArmor profile with the /usr/libexec/qemu-kvm line added]

Then reload the AppArmor service:
systemctl reload apparmor.service
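To confirm the profile change took effect, the AppArmor utilities can be used; a hedged sketch:

```shell
# Reload just this profile and verify libvirtd's profile is loaded
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd
sudo aa-status | grep -i libvirt
```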

Additionally, we need to execute the mount with --make-shared as follows in the microshift container to prevent the “Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount” event from virt-handler.

podman exec -it microshift mount --make-shared /
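The mount propagation can be verified from inside the container; a small sketch using findmnt:

```shell
# PROPAGATION should now show "shared" for the root mount
podman exec -it microshift findmnt -o TARGET,PROPAGATION /
```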

We may also preload the virtual machine images using "crictl pull".

podman exec -it microshift crictl pull
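For example, assuming the Fedora sample VMI uses the KubeVirt Fedora container-disk demo image (a hypothetical choice here; substitute whichever container-disk image your VMI definition actually references):

```shell
# Example image only; replace with the container-disk image your VMI uses
podman exec -it microshift crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
```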

Now, we can run the samples shown earlier. When we run the sample with VMI, it shows up in the web console.

Virtual Machine Instance vmi-fedora

After we are done, we can delete the microshift container. The --rm flag we used in the podman run command deletes the container when we stop it.

podman rm -f microshift && podman volume rm microshift-data

After it is stopped, we can clean up as in the previous section.


In this Part 13, we saw multiple options to run MicroShift on the Raspberry Pi 4 with the Ubuntu Server 22.04 (Jammy Jellyfish - 64 bit). We used dynamic persistent volumes to install InfluxDB/Telegraf/Grafana with a dashboard to show SenseHat sensor data. We ran samples that used the Sense Hat/USB camera and worked with a sample that sent the pictures and web socket messages to Node Red. We installed the OKD Web Console and saw how to manage a Virtual Machine Instance using KubeVirt on MicroShift with Ubuntu 22.04. In Part 14, we will run MicroShift on Rocky Linux 8.5.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.