Infrastructure as a Service


MicroShift – Part 10: Raspberry Pi 4 with Fedora IoT

By Alexei Karve posted Fri March 11, 2022 12:20 PM


MicroShift and KubeVirt on Raspberry Pi 4 with Fedora IoT

Introduction

MicroShift is a research project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small-form-factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with CentOS 8 Stream (64 bit). In Part 6, we deployed MicroShift on the Raspberry Pi 4 with Ubuntu 20.04 (64 bit). In Part 8, we looked at the All-In-One install of MicroShift on balenaOS. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit), and further in Part 9, we looked at Virtualization with MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit). In this Part 10, we will set up and deploy MicroShift on Fedora IoT. We will run an object detection sample and send messages to Node Red installed on MicroShift. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift.

Fedora IoT is meant to be deployed on devices that make up the "Internet of Things". The Fedora IoT images work with rpm-ostree, a hybrid image/package system. One does not simply run "dnf install" on a Fedora IoT system; there is no dnf command available. The running system is never modified by package-management operations such as installations or system upgrades. Instead, a set of parallel system images (two by default) is maintained. The running image is immutable; any package-management operations apply to the next image in the series, and the updated image is only set up to run on the next boot after the update completes successfully. Fedora IoT has a strong focus on the use of containers to install applications, and the podman tool is provided for this purpose. The base OS is composed of essential tools.
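For example, layering a package onto the next image and rolling back looks roughly like this (a sketch; the package is illustrative and the same steps are performed later in this post):

rpm-ostree status            # show the booted and pending deployments
rpm-ostree install i2c-tools # layer a package onto the next deployment
systemctl reboot             # the new deployment takes effect on the next boot
rpm-ostree rollback          # switch back to the previous deployment if needed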

Setting up the Raspberry Pi 4 with Fedora IoT

Run the following steps to download the Fedora IoT image and set up the Raspberry Pi 4.

1. Download the latest version of the Fedora IoT Fedora 35: Raw Image for aarch64 onto your MacBook Pro

wget https://download.fedoraproject.org/pub/alt/iot/35/IoT/aarch64/images/Fedora-IoT-35-20220101.0.aarch64.raw.xz

2. Write the image to a MicroSDXC card using Balena Etcher. I used a 64GB MicroSDXC card to allow space for Virtual Machine images. Insert the MicroSDXC card into the Raspberry Pi 4 and power it on.

3. Find the DHCP IP address assigned to the Raspberry Pi 4 using the following command for your subnet on the MacBook Pro.

sudo nmap -sn 192.168.1.0/24
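The commands that follow refer to this address as $ipaddress; you can export it once for convenience (the value below is the example address used in this post, substitute your own):

export ipaddress=192.168.1.209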

Refer to the instructions for Setting up a Device with Zezere. Log in to the Zezere provisioning server URL that you will see when the Fedora IoT system is booted. Add your laptop's public ssh key (id_rsa.pub) in the SSH Key Management tab. Then claim your device based on the MAC address seen above in the Claim Unknown Devices tab. The device should now show up under the Device Management tab. Find the MAC address for your new deployment and click "Submit provision request". To copy the ssh key to the device, choose "fedora-installed" and select "Schedule". After a few minutes, the ssh key should be copied to the root account of the new deployment on your device. Now you can log in as the root user without a password.

ssh root@$ipaddress

4. Resize the third partition with parted. Then, resize the file system on the partition (64GB below) and upgrade. The base layer of an rpm-ostree system is an atomic entity. When installing a local package, any dependency that is part of the ostree with an older version will not be updated. This is why it is mandatory to perform an upgrade before manually installing MicroShift.

df -h
parted

p
resizepart 3
63.9GB
p
quit

resize2fs /dev/mmcblk0p3
df -h
rpm-ostree upgrade
systemctl reboot

Output:

ssh root@192.168.1.209
The authenticity of host '192.168.1.209 (192.168.1.209)' can't be established.
ED25519 key fingerprint is SHA256:nsGOtvQU/jfP/DPAmGMW0SfuKcQ4f+ms0W4aXPZkWss.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.1.209' (ED25519) to the list of known hosts.
Script '01_update_platforms_check.sh' FAILURE (exit code '1'). Continuing...
Boot Status is GREEN - Health Check SUCCESS
[root@microshift ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           1.6G   26M  1.6G   2% /run
/dev/mmcblk0p3  2.4G  1.5G  838M  64% /sysroot
/dev/mmcblk0p2  974M   96M  811M  11% /boot
/dev/mmcblk0p1  501M   32M  469M   7% /boot/efi
tmpfs           783M     0  783M   0% /run/user/0
[root@microshift ~]# parted
GNU Parted 3.4
Using /dev/mmcblk0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: SD SD64G (sd/mmc)
Disk /dev/mmcblk0: 63.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  526MB   525MB   primary  fat16        boot
 2      526MB   1600MB  1074MB  primary  ext4
 3      1600MB  4294MB  2694MB  primary  ext4

(parted) resizepart 3
Warning: Partition /dev/mmcblk0p3 is being used. Are you sure you want to continue?
Yes/No? Yes
End?  [4294MB]? 63.9GB
(parted) p
Model: SD SD64G (sd/mmc)
Disk /dev/mmcblk0: 63.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  526MB   525MB   primary  fat16        boot
 2      526MB   1600MB  1074MB  primary  ext4
 3      1600MB  63.9GB  62.3GB  primary  ext4

(parted) quit
Information: You may need to update /etc/fstab.

[root@microshift ~]# resize2fs /dev/mmcblk0p3
resize2fs 1.46.3 (27-Jul-2021)
Filesystem at /dev/mmcblk0p3 is mounted on /sysroot; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 8
The filesystem on /dev/mmcblk0p3 is now 15201280 (4k) blocks long.

[root@microshift ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.8G     0  3.8G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           1.6G   26M  1.6G   2% /run
/dev/mmcblk0p3   58G  1.5G   54G   3% /sysroot
/dev/mmcblk0p2  974M   96M  811M  11% /boot
/dev/mmcblk0p1  501M   32M  469M   7% /boot/efi
tmpfs           783M     0  783M   0% /run/user/0
[root@microshift ~]#

5. Enable wifi (optional)

ssh root@$ipaddress

nmcli device wifi list # Note your ssid
nmcli device wifi connect $ssid -ask

6. Set the hostname with a domain and check the cgroups

hostnamectl hostname microshift.example.com
echo "$ipaddress microshift microshift.example.com" >> /etc/hosts

mount | grep cgroup
cat /proc/cgroups | column -t # Check that memory and cpuset are present

Output:

[root@microshift ~]# mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)
[root@microshift ~]# cat /proc/cgroups | column -t # Check that memory and cpuset are present
#subsys_name  hierarchy  num_cgroups  enabled
cpuset        0          92           1
cpu           0          92           1
cpuacct       0          92           1
blkio         0          92           1
memory        0          92           1
devices       0          92           1
freezer       0          92           1
net_cls       0          92           1
perf_event    0          92           1
net_prio      0          92           1
pids          0          92           1
misc          0          92           1

7. Check the release

[root@microshift ~]# cat /etc/redhat-release
Fedora release 35 (Thirty Five)
[root@fedora ~]# uname -a
Linux microshift.example.com 5.16.11-200.fc35.aarch64 #1 SMP Wed Feb 23 16:51:50 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
[root@microshift ~]# cat /etc/os-release
NAME="Fedora Linux"
...
ID=fedora VERSION_ID=35 ...
VARIANT="IoT Edition" VARIANT_ID=iot

Install the dependencies for MicroShift and SenseHat

Configure RPM repositories. [Use the https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift/repo/fedora-36/group_redhat-et-microshift-fedora-36.repo for Fedora 36 and https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift/repo/fedora-rawhide/group_redhat-et-microshift-fedora-rawhide.repo for Fedora Rawhide below]

curl -L -o /etc/yum.repos.d/fedora-modular.repo https://src.fedoraproject.org/rpms/fedora-repos/raw/rawhide/f/fedora-modular.repo 
curl -L -o /etc/yum.repos.d/fedora-updates-modular.repo https://src.fedoraproject.org/rpms/fedora-repos/raw/rawhide/f/fedora-updates-modular.repo 
curl -L -o /etc/yum.repos.d/group_redhat-et-microshift-fedora-35.repo https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift/repo/fedora-35/group_redhat-et-microshift-fedora-35.repo

Enable cri-o and install microshift. [Use cri-o:1.23 for Fedora 36 and cri-o:1.24 for Fedora Rawhide]

rpm-ostree ex module enable cri-o:1.21
rpm-ostree install i2c-tools cri-o cri-tools microshift

For Fedora 36 and Rawhide, replace the microshift binary; see Part 21 for details.

curl -L https://github.com/openshift/microshift/releases/download/nightly/microshift-linux-arm64 > /usr/local/bin/microshift
chmod +x /usr/local/bin/microshift
cp /usr/lib/systemd/system/microshift.service /etc/systemd/system/microshift.service
sed -i "s|/usr/bin|/usr/local/bin|" /etc/systemd/system/microshift.service
systemctl daemon-reload

Install dependencies to build RTIMULib

rpm-ostree install git zlib-devel libjpeg-devel gcc gcc-c++ python3-devel python3-pip cmake
rpm-ostree install kernel-devel kernel-headers ncurses-devel
systemctl reboot

Set up libvirtd on the host and validate qemu

rpm-ostree install libvirt-client libvirt-nss qemu-system-aarch64 virt-manager virt-install virt-viewer
# Works with nftables on Fedora IoT
# vi /etc/firewalld/firewalld.conf # FirewallBackend=iptables
systemctl enable --now libvirtd
systemctl reboot

ssh root@$ipaddress
virt-host-validate qemu
rpm-ostree status

Output

[root@microshift ~]# rpm-ostree status
State: idle
Deployments:
● fedora-iot:fedora/stable/aarch64/iot
                   Version: 36.20220618.0 (2022-06-18T10:41:25Z)
                BaseCommit: 48fb8542e522a5004e4f0bb5f15e7cb8c3da4b54e69a970ef69a9858365cc678
              GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4
           LayeredPackages: 'gcc-c++' cmake cockpit cri-o cri-tools gcc git i2c-tools kernel-devel kernel-headers libjpeg-devel libvirt-client
                            libvirt-nss microshift ncurses-devel python3-devel python3-pip qemu-system-aarch64 virt-install virt-manager virt-viewer
                            zlib-devel
            EnabledModules: cri-o:1.21

Install sensehat - The Sense HAT is an add-on board for the Raspberry Pi. The Sense HAT has an 8 × 8 RGB LED matrix, a five-button joystick, and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, Humidity. If you have the Sense HAT attached, install the libraries. We will install the default libraries and then overwrite the /usr/local/lib/python3.10/site-packages/sense_hat-2.2.0-py3.10.egg/sense_hat/sense_hat.py to use the smbus after installing RTIMULib in a few steps below.

i2cget -y 1 0x6A 0x75
i2cget -y 1 0x5f 0xf
i2cdetect -y 1
lsmod | grep st_
pip3 install Cython Pillow numpy sense_hat smbus

Output:

[root@fedora ~]# i2cget -y 1 0x6A 0x75
0x57
[root@fedora ~]# i2cget -y 1 0x5f 0xf
0xbc
[root@fedora ~]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

[root@fedora ~]# lsmod | grep st_
st_magn_spi            16384  0
st_pressure_spi        16384  0
st_sensors_spi         16384  2 st_pressure_spi,st_magn_spi
regmap_spi             16384  1 st_sensors_spi
st_pressure_i2c        16384  0
st_magn_i2c            16384  0
st_magn                20480  2 st_magn_i2c,st_magn_spi
st_pressure            16384  2 st_pressure_i2c,st_pressure_spi
st_sensors_i2c         16384  2 st_pressure_i2c,st_magn_i2c
st_sensors             28672  6 st_pressure,st_pressure_i2c,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi
industrialio_triggered_buffer    16384  2 st_pressure,st_magn
industrialio           98304  9 st_pressure,industrialio_triggered_buffer,st_sensors,st_pressure_i2c,kfifo_buf,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi

Blacklist the modules to remove the UU entries seen in the i2cdetect output

cat << EOF > /etc/modprobe.d/blacklist-industialio.conf
blacklist st_magn_spi
blacklist st_pressure_spi
blacklist st_sensors_spi
blacklist st_pressure_i2c
blacklist st_magn_i2c
blacklist st_pressure
blacklist st_magn
blacklist st_sensors_i2c
blacklist st_sensors
blacklist industrialio_triggered_buffer
blacklist industrialio
EOF

reboot

Check the Sense Hat with i2cdetect after the reboot

ssh root@$ipaddress
i2cdetect -y 1

Output:

[root@microshift ~]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Install RTIMULib

git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11
cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10

Replace the old sense_hat.py with the new code that uses SMBus

git clone https://github.com/thinkahead/microshift.git
cd microshift
cd raspberry-pi/sensehat-fedora-iot

# Update the python package to use the i2cbus
cp -f sense_hat.py.new /usr/local/lib/python3.10/site-packages/sense_hat/sense_hat.py

Test the SenseHat samples for the Sense Hat's LED matrix - If you have a sense_hat.py in the local directory, you will get the error "OSError: /var/roothome/microshift/raspberry-pi/sensehat-fedora-iot/sense_hat_text.png not found", so remove or rename the sense_hat.py in the local directory.
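For example, move any local copy out of the way before running the samples:

mv sense_hat.py sense_hat.py.local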

# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt

# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt

# Show two digits for multiple numbers
python3 digits.py

# First time you run the temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py 

# Use the new get_state method from sense_hat.py
python3 joystick.py # U=Up D=Down L=Left R=Right M=Press

Test the USB camera - Install the latest pygame. Note that pygame 1.9.6 will throw "SystemError: set_controls() method: bad call flags", so you need to upgrade pygame to 2.1.0.

pip3 install pygame --upgrade
python3 testcam.py # It will create a file 101.bmp

Install the oc and kubectl client

ARCH=arm64
export OCP_VERSION=4.9.11 && \
    curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf oc.tar.gz && \
    rm -f oc.tar.gz && \
    install -t /usr/local/bin {kubectl,oc}

Start MicroShift

ls /opt/cni/bin/ # empty
ls /usr/libexec/cni # cni plugins

systemctl enable --now crio microshift
#systemctl enable --now crio
#systemctl start crio
#systemctl enable --now microshift
#systemctl start microshift

Configure firewalld

systemctl enable firewalld --now
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=30080/tcp --permanent # For samples
firewall-cmd --reload

Check the microshift and crio logs

journalctl -u microshift -f
journalctl -u crio -f

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

The microshift service references the microshift binary in the /usr/bin directory

[root@microshift ~]# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

If you make changes to the microshift.service above, you need to make systemd re-read the changed configuration from the filesystem and regenerate its dependency tree by running "systemctl daemon-reload".
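For example:

systemctl daemon-reload
systemctl restart microshift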

Samples to run on MicroShift

We will run a few samples that will show the use of helm, persistent volume, template, SenseHat and the USB camera. We will also set up KubeVirt and run a Fedora Virtual Machine.

1. Nginx web server with persistent volume

The source code is in github.

cd ~/microshift/raspberry-pi/nginx
oc project default

Create the data in /var/hpvolumes/nginx/data1. The data1 subdirectory is needed because we use the subPath in the volumeMounts in nginx.yaml

mkdir -p /var/hpvolumes/nginx/data1/
cp index.html /var/hpvolumes/nginx/data1/.
cp 50x.html /var/hpvolumes/nginx/data1/.

If you have SELinux set to Enforcing, the /var/hpvolumes directory used for creating persistent volumes will give permission-denied errors when the initContainers run. Files labeled with container_file_t are the only files that are writable by containers, so we relabel /var/hpvolumes.

restorecon -R -v "/var/hpvolumes/*"

Output:

[root@microshift ~]# cd ~/microshift/raspberry-pi/nginx
[root@microshift nginx]# oc project default
Already on project "default" on server "https://127.0.0.1:6443".
[root@microshift nginx]# mkdir -p /var/hpvolumes/nginx/data1/
[root@microshift nginx]# cp index.html /var/hpvolumes/nginx/data1/.
[root@microshift nginx]# cp 50x.html /var/hpvolumes/nginx/data1/.
[root@microshift nginx]# getenforce
Enforcing
[root@microshift nginx]# ls -lZ /var/hpvolumes/
total 4
drwxr-xr-x. 3 root root unconfined_u:object_r:var_t:s0 4096 Mar  9 18:55 nginx
[root@microshift nginx]# restorecon -R -v "/var/hpvolumes/*"
Relabeled /var/hpvolumes/nginx from unconfined_u:object_r:var_t:s0 to unconfined_u:object_r:container_file_t:s0
Relabeled /var/hpvolumes/nginx/data1 from unconfined_u:object_r:var_t:s0 to unconfined_u:object_r:container_file_t:s0
Relabeled /var/hpvolumes/nginx/data1/index.html from unconfined_u:object_r:var_t:s0 to unconfined_u:object_r:container_file_t:s0
Relabeled /var/hpvolumes/nginx/data1/50x.html from unconfined_u:object_r:var_t:s0 to unconfined_u:object_r:container_file_t:s0
[root@microshift nginx]# ls -lZ /var/hpvolumes/
total 4
drwxr-xr-x. 3 root root unconfined_u:object_r:container_file_t:s0 4096 Mar  9 18:55 nginx
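The restorecon above applies the label once. To make the container_file_t label persistent for /var/hpvolumes, a file-context rule can also be added with semanage (a sketch; assumes the semanage tool, e.g. from policycoreutils-python-utils, is available on the image):

semanage fcontext -a -t container_file_t "/var/hpvolumes(/.*)?"
restorecon -R -v /var/hpvolumes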

Now we create the pv, pvc, deployment and service. There will be two replicas of nginx sharing the same persistent volume.

oc apply -f hostpathpv.yaml -f hostpathpvc.yaml -f nginx.yaml
#oc apply -f hostpathpv.yaml # Persistent Volume
#oc apply -f hostpathpvc.yaml # Persistent Volume Claim
#oc apply -f nginx.yaml # Deployment and Service

Let's log in to one of the pods to see the index.html. Also submit a curl request to nginx

oc get pods,deployments
oc exec -it deployment.apps/nginx-deployment -- cat /usr/share/nginx/html/index.html
curl localhost:30080 # Will return the standard nginx response from index.html

We can add a file hello in the shared volume from within the container

oc rsh deployment.apps/nginx-deployment
echo "Hello" > /usr/share/nginx/html/hello
exit

curl localhost:30080/hello

Output:

[root@microshift nginx]# curl localhost:30080/hello
Hello

Change the replicas to 1

oc scale deployment.v1.apps/nginx-deployment --replicas=1

We can test nginx by exposing the nginx-svc as a route, for example:
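(The route host below is an assumption based on the servicename-namespace.cluster.local pattern used elsewhere in this post; add it to /etc/hosts on the Raspberry Pi as before.)

oc expose svc nginx-svc
oc get routes
echo "127.0.0.1 nginx-svc-default.cluster.local" >> /etc/hosts
curl nginx-svc-default.cluster.local

We can delete the deployment and service after we are done.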

oc delete -f nginx.yaml

Then, delete the pvc

oc delete -f hostpathpvc.yaml
oc delete -f hostpathpv.yaml

2. Nginx web server with template

The source code is in github.

cd ~/microshift/raspberry-pi/nginx
oc project default

If you use a different namespace xxxx instead of default, you will need to change the /etc/hosts entry to match nginx-xxxx.cluster.local accordingly. The nginx-template* files use the image docker.io/nginxinc/nginx-unprivileged:alpine. A DeploymentConfig does not get processed in MicroShift, so we use a template with a Deployment instead.

#oc process -f nginx-template-deploymentconfig.yml | oc apply -f - # deploymentconfig does not work in microshift
oc process -f nginx-template-deployment-8080.yml | oc apply -f - # deployment works in microshift
oc get templates
oc get routes

Add the following to /etc/hosts on the Raspberry Pi 4

127.0.0.1 localhost nginx-default.cluster.local

Then, submit a curl request to nginx

curl nginx-default.cluster.local

To delete nginx, run

oc process -f nginx-template-deployment-8080.yml | oc delete -f -

We can also create the template in MicroShift and process the template by name

# Either of the following two may be used:
oc create -f nginx-template-deployment-8080.yml
#oc create -f nginx-template-deployment-80.yml

oc process nginx-template | oc apply -f -
curl nginx-default.cluster.local
oc process nginx-template | oc delete -f -
oc delete template nginx-template

rm -rf /var/hpvolumes/nginx/

3. Postgresql database server

The source code is in github.

cd ~/microshift/raspberry-pi/

Create a new project pg. Create the configmap, pv, pvc and deployment for PostgreSQL

oc new-project pg
mkdir -p /var/hpvolumes/pg

If you have SELinux set to Enforcing, run:

restorecon -R -v "/var/hpvolumes/*"

Output:

[root@microshift pg]# mkdir -p /var/hpvolumes/pg
[root@microshift pg]# restorecon -R -v "/var/hpvolumes/*"
Relabeled /var/hpvolumes/pg from unconfined_u:object_r:var_t:s0 to unconfined_u:object_r:container_file_t:s0

Now create the configmap, pv, pvc and deployment, and check them:

#oc apply -f hostpathpvc.yaml -f hostpathpv.yaml -f pg-configmap.yaml -f pg.yaml
oc create -f pg-configmap.yaml
oc create -f hostpathpv.yaml
oc create -f hostpathpvc.yaml
oc apply -f pg.yaml
oc get configmap
oc get svc pg-svc
oc get all -lapp=pg
oc logs deployment/pg-deployment -f

Output:

[root@microshift pg]# oc get all -lapp=pg
NAME                                 READY   STATUS    RESTARTS   AGE
pod/pg-deployment-78cbc9cc88-wfgs7   1/1     Running   0          78s

NAME             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/pg-svc   NodePort   10.43.123.126   <none>        5432:30080/TCP   78s

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pg-deployment   1/1     1            1           78s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/pg-deployment-78cbc9cc88   1         1         1       78s

The first time we start PostgreSQL, we can check the logs where it creates the database

[root@microshift pg]# oc logs deployment.apps/pg-deployment
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgresql/data -l logfile start


WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....LOG:  database system was shut down at 2022-03-09 19:23:56 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections
 done
server started
CREATE DATABASE


/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

LOG:  received fast shutdown request
LOG:  aborting any active transactions
waiting for server to shut down....LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.

LOG:  database system was shut down at 2022-03-09 19:24:00 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections

Connect to the database

oc exec -it deployment.apps/pg-deployment -- bash
psql --host localhost --port 5432 --user postgresadmin --dbname postgresdb # test123 as password

Create a TABLE cities and insert a couple of rows

CREATE TABLE cities (name varchar(80), location point);
\t
INSERT INTO cities VALUES ('Madison', '(89.40, 43.07)'),('San Francisco', '(-122.43,37.78)');
SELECT * from cities;
\d
\q
exit

Let's delete the deployment and recreate it

oc delete deployment.apps/pg-deployment

Check that the data still exists

[root@microshift pg]# ls /var/hpvolumes/pg/data/
PG_VERSION  pg_clog       pg_hba.conf    pg_multixact  pg_serial     pg_stat_tmp  pg_twophase           postgresql.conf
base        pg_commit_ts  pg_ident.conf  pg_notify     pg_snapshots  pg_subtrans  pg_xlog               postmaster.opts
global      pg_dynshmem   pg_logical     pg_replslot   pg_stat       pg_tblspc    postgresql.auto.conf  postmaster.pid

Let's recreate the deployment and look at the deployment logs. This time it already has a database.

oc apply -f pg.yaml
oc logs deployment/pg-deployment -f

Output:

[root@microshift pg]# oc logs deployment/pg-deployment -f

PostgreSQL Database directory appears to contain a database; Skipping initialization

LOG:  database system was shut down at 2022-03-09 19:40:32 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections

Now we can connect to the database and look at the cities table

oc exec -it deployment.apps/pg-deployment -- bash
psql --host localhost --port 5432 --user postgresadmin --dbname postgresdb # test123 as password
SELECT * FROM cities;
\q

Output:

root@pg-deployment-78cbc9cc88-mjc77:/# psql --host localhost --port 5432 --user postgresadmin --dbname postgresdb
psql (9.6.24)
Type "help" for help.

postgresdb=#
\q
root@pg-deployment-78cbc9cc88-mjc77:/# psql --host 192.168.1.209 --port 30080 --user postgresadmin --dbname postgresdb
Password for user postgresadmin:
psql (9.6.24)
Type "help" for help.

postgresdb=# SELECT * FROM cities;
     name      |    location
---------------+-----------------
 Madison       | (89.4,43.07)
 San Francisco | (-122.43,37.78)
(2 rows)

postgresdb=# \q
root@pg-deployment-78cbc9cc88-mjc77:/# exit
exit

Finally, we delete the deployment and project

oc delete -f pg.yaml
oc delete -f pg-configmap.yaml
oc delete -f hostpathpvc.yaml
oc delete -f hostpathpv.yaml
oc delete project pg
rm -rf /var/hpvolumes/pg/

4. Sense Hat and USB camera sending to Node Red

We will install, configure and use Node Red to show pictures and chat messages sent from the Raspberry Pi 4 when a person is detected. We will first install Node Red on MicroShift.

The source code is in github.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered/
mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered.yaml
oc expose svc nodered-svc
oc get routes

Output:

[root@microshift nodered]# mkdir /var/hpvolumes/nodered
[root@microshift nodered]# restorecon -R -v "/var/hpvolumes/*"
Relabeled /var/hpvolumes/nodered from unconfined_u:object_r:var_t:s0 to unconfined_u:object_r:container_file_t:s0
…
[root@microshift nodered]# firewall-cmd --zone=public --add-port=30080/tcp --permanent
success
[root@microshift nodered]# oc get svc
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
nodered-svc   NodePort   10.43.175.201   

Add the IP address of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your laptop and browse to http://nodered-svc-nodered.cluster.local
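For example, on a MacBook Pro (the address below is the one used in this post; substitute your Raspberry Pi 4 IP address):

sudo sh -c 'echo "192.168.1.209 nodered-svc-nodered.cluster.local" >> /etc/hosts'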

  1. In Manage Palette, install the node-red-contrib-image-tools, node-red-contrib-image-output, and node-red-node-base64 nodes
  2. Import the Chat flow and the Picture (Image) display flow.
  3. On another browser tab, open http://nodered-svc-nodered.cluster.local/chat
  4. On the Image flow, click on the square box to the right of the image preview or viewer node to deactivate and activate the node. You will be able to see the picture when you activate the node and run the samples below
cd ~/microshift/raspberry-pi/sensehat

a. Use a container

Check that we can access the Sense Hat and the camera from a container in podman. We need to update the container with the modified sense_hat.py.

podman build -f Dockerfile -t docker.io/karve/sensehat .
podman push docker.io/karve/sensehat
podman run --privileged --name sensehat -e ImageUploadURL=http://nodered-svc-nodered.cluster.local/upload -e WebSocketURL=ws://nodered-svc-nodered.cluster.local/ws/chat -ti docker.io/karve/sensehat bash
apt-get -y install python-smbus

Copy the sense_hat.py to /usr/lib/python2.7/dist-packages/sense_hat/sense_hat.py

1. From another console Tab, run

podman cp ../sensehat-fedora-iot/sense_hat.py.new sensehat:/tmp

2. Back to the previous console (replace 192.168.1.209 with your Raspberry Pi 4 IP address)

mv /tmp/sense_hat.py.new /usr/lib/python2.7/dist-packages/sense_hat/sense_hat.py

# Inside the container
python sparkles.py # Tests the Sense Hat's LED matrix

# Update the URL to your node red instance
sed -i "s|http://nodered2021.mybluemix.net/upload|http://nodered-svc-nodered.cluster.local/upload|g" send*
echo "192.168.1.209 nodered-svc-nodered.cluster.local" >> /etc/hosts
python sendimages1.py # Ctrl-C to terminate
python sendtonodered.py # Ctrl-C to stop
exit

# When we are done, delete the container
podman rm -f sensehat

Output:

root@53a910251cb2:/# python sendimages1.py
ALSA lib pcm_dmix.c:1029:(snd_pcm_dmix_open) unable to open slave
File 101.jpg uploaded
waiting 5 seconds...
File 101.jpg uploaded
waiting 5 seconds...
File 101.jpg uploaded
waiting 5 seconds...
File 101.jpg uploaded
waiting 5 seconds...
File 101.jpg uploaded

root@53a910251cb2:/# echo $ImageUploadURL
http://nodered-svc-nodered.cluster.local/upload
root@53a910251cb2:/# echo $WebSocketURL
ws://nodered-svc-nodered.cluster.local/ws/chat
root@53a910251cb2:/# python sendtonodered.py # Ctrl-C to stop
ALSA lib pcm_dmix.c:1029:(snd_pcm_dmix_open) unable to open slave
File 101.jpg uploaded
{"user":"raspberrypi4","message":"1646858753: Temperature: 0 C"}
waiting 5 seconds...
File 101.jpg uploaded
{"user":"raspberrypi4","message":"1646858764: Temperature: 32.0520820618 C"}
waiting 5 seconds...
File 101.jpg uploaded
{"user":"raspberrypi4","message":"1646858776: Temperature: 32.0499992371 C"}
waiting 5 seconds...
File 101.jpg uploaded
{"user":"raspberrypi4","message":"1646858789: Temperature: 31.9500007629 C"}

b. Run in MicroShift

We will build the image from the Dockerfile.fedora

podman build -f Dockerfile.fedora -t docker.io/karve/sensehat-fedora .
podman push docker.io/karve/sensehat-fedora:latest

We send pictures and WebSocket chat messages to Node Red using a pod in MicroShift. In sensehat-fedora.yaml, update the URLs to point to nodered-svc-nodered.cluster.local and the IP address in hostAliases to your Raspberry Pi 4 IP address.

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: VideoSource
            value: "/dev/video0"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.209

oc apply -f sensehat-fedora.yaml

When we are done, we can delete the deployment

oc delete -f sensehat-fedora.yaml

5. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 4.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection that sends pictures and WebSocket chat messages to Node Red when a person is detected, using a pod in MicroShift.

podman build -f Dockerfile.fedora -t docker.io/karve/object-detection-raspberrypi4-fedora .
podman push docker.io/karve/object-detection-raspberrypi4-fedora:latest

Update the env WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection-fedora.yaml to point to your Raspberry Pi 4 IP address.

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.209

oc apply -f object-detection-fedora.yaml

We will see pictures being sent to Node Red when a person is detected. When we are done testing, we can delete the deployment

oc delete -f object-detection-fedora.yaml

6. Running a Virtual Machine Instance on MicroShift

We first deploy the KubeVirt Operator.

LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt

Output:

[root@microshift vmi]# oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}"
Deployed
[root@microshift vmi]# oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
kubevirt.kubevirt.io/kubevirt condition met

We can build the OKD Web Console (Codename: “bridge”) from source as mentioned in Part 9. Here we will run the “bridge” as a container image within MicroShift.

cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'

Replace BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT and secretRef token for BRIDGE_K8S_AUTH_BEARER_TOKEN in okd-web-console-install.yaml. Then apply/create the okd-web-console-install.yaml.
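A sketch of one way to do this (the placeholder handling below is an assumption about okd-web-console-install.yaml, not its actual contents; adjust to the file):

# Token secret created for the console serviceaccount (name comes from the command above)
SECRET=$(oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}')
TOKEN=$(oc get secret $SECRET -n kube-system -o jsonpath='{.data.token}' | base64 -d)
# In okd-web-console-install.yaml, set BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT to
# https://<Raspberry Pi IP>:6443 and BRIDGE_K8S_AUTH_BEARER_TOKEN to $TOKEN
# (or reference the secret $SECRET via a secretKeyRef).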

oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc logs deployment/console-deployment -f -n kube-system
oc get routes -n kube-system

Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/

We can optionally preload the fedora image into crio

crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.

cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods

The Output for the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”.

Output:

NAME                                            AGE   PHASE        IP    NODENAME   READY
virtualmachineinstance.kubevirt.io/vmi-fedora   21s   Scheduling                    False

NAME                                      READY   STATUS     RESTARTS   AGE
pod/virt-launcher-vmi-fedora-7kxmp        0/2     Init:0/2   0          21s

Output:

NAME                                            AGE   PHASE       IP    NODENAME                 READY
virtualmachineinstance.kubevirt.io/vmi-fedora   70s   Scheduled         microshift.example.com   False

NAME                                      READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vmi-fedora-7kxmp        2/2     Running   0          70s

Output:

NAME                                            AGE   PHASE     IP           NODENAME                 READY
virtualmachineinstance.kubevirt.io/vmi-fedora   96s   Running   10.42.0.15   microshift.example.com   True

NAME                                      READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vmi-fedora-7kxmp        2/2     Running   0          96s

Now we create a Pod to run ssh client and connect to the Fedora VM from this pod

kubectl run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client

or

kubectl run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
#kubectl attach sshclient -c sshclient -i -t

Note down the IP address of the vmi-fedora Virtual Machine Instance. Then, ssh to the Fedora VMI from the sshclient container.

Output:

[root@microshift vmi]# oc get vmi
NAME         AGE    PHASE     IP           NODENAME                 READY
vmi-fedora   5m1s   Running   10.42.0.15   microshift.example.com   True
[root@microshift vmi]# kubectl run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ssh fedora@10.42.0.15
The authenticity of host '10.42.0.15 (10.42.0.15)' can't be established.
ED25519 key fingerprint is SHA256:9ovBowl7n4nvD2flI1RG4C7nrGIpxh7ak/VlXcOOQBM.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.15' (ED25519) to the list of known hosts.
fedora@10.42.0.15's password:
[fedora@vmi-fedora ~]$ ping google.com
PING google.com (142.250.65.174) 56(84) bytes of data.
64 bytes from lga25s71-in-f14.1e100.net (142.250.65.174): icmp_seq=1 ttl=117 time=5.60 ms
64 bytes from lga25s71-in-f14.1e100.net (142.250.65.174): icmp_seq=2 ttl=117 time=6.71 ms
64 bytes from lga25s71-in-f14.1e100.net (142.250.65.174): icmp_seq=3 ttl=117 time=6.69 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 5.603/6.334/6.706/0.516 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.42.0.15 closed.
/ # exit
Session ended, resume using 'kubectl attach sshclient -c sshclient -i -t' command when the pod is running
pod "sshclient" deleted
[root@microshift vmi]#
[root@microshift vmi]# oc delete -f vmi-fedora.yaml
virtualmachineinstance.kubevirt.io "vmi-fedora" deleted

7. Running a Virtual Machine on MicroShift

Install the kubevirt-operator-arm64 as in “6. Running a Virtual Machine Instance on MicroShift” above. Then, create the VM using the vm-fedora.yaml

cd /root/microshift/raspberry-pi/vmi
oc apply -f vm-fedora.yaml
oc get pods,vm,vmi -n default

Output:

[root@microshift vmi]# oc apply -f vm-fedora.yaml
virtualmachine.kubevirt.io/vm-example created
[root@microshift vmi]# oc get pods,vm -n default
NAME                                    AGE   STATUS    READY
virtualmachine.kubevirt.io/vm-example   23s   Stopped   False 

Start the virtual machine

kubectl patch virtualmachine vm-example --type merge -p  '{"spec":{"running":true}}' -n default

Note down the IP address of the vm-example Virtual Machine Instance. Then, ssh to the Fedora VM from the sshclient container.

Output:

[root@microshift vmi]# oc get pods,vm,vmi -n default
NAME                                 READY   STATUS     RESTARTS   AGE
pod/virt-launcher-vm-example-6lvf6   0/2     Init:0/2   0          30s

NAME                                    AGE   STATUS     READY
virtualmachine.kubevirt.io/vm-example   97s   Starting   False

NAME                                            AGE   PHASE        IP    NODENAME   READY
virtualmachineinstance.kubevirt.io/vm-example   30s   Scheduling                    False
…
[root@microshift vmi]# oc get pods,vm,vmi -n default
NAME                                      READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vm-example-6lvf6        2/2     Running   0          105s

NAME                                    AGE    STATUS    READY
virtualmachine.kubevirt.io/vm-example   2m52s   Running   True

NAME                                            AGE   PHASE     IP           NODENAME                 READY
virtualmachineinstance.kubevirt.io/vm-example   105s   Running   10.42.0.17   microshift.example.com   True

[root@microshift vmi]# kubectl run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ssh fedora@10.42.0.17
The authenticity of host '10.42.0.17 (10.42.0.17)' can't be established.
ED25519 key fingerprint is SHA256:ktl0N4ALs6EiXuU27E2gXRMhx5ZsNoKzgeYpPoTqkxs.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.17' (ED25519) to the list of known hosts.
fedora@10.42.0.17's password:
[fedora@vm-example ~]$ cat /etc/redhat-release
Fedora release 32 (Thirty Two)
[fedora@vm-example ~]$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.0.2.2  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::5054:ff:fed6:3ae9  prefixlen 64  scopeid 0x20
        ether 52:54:00:d6:3a:e9  txqueuelen 1000  (Ethernet)
        RX packets 283  bytes 28802 (28.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 202  bytes 25708 (25.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[fedora@vm-example ~]$ exit
logout
Connection to 10.42.0.17 closed.
/ # exit
Session ended, resume using 'kubectl attach sshclient -c sshclient -i -t' command when the pod is running
pod "sshclient" deleted
[root@microshift vmi]#

We can use virtctl to connect to the VM. The virtctl binary can be built from source or downloaded from an existing image for arm64 as shown in the “Building KubeVirt” section.
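For example, a released arm64 virtctl can also be downloaded directly (a sketch; the version is illustrative and the asset name assumes KubeVirt's usual release naming):

VERSION=v0.53.0
curl -L https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-arm64 -o /usr/local/bin/virtctl
chmod +x /usr/local/bin/virtctl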

virtctl console vm-example
# Login as fedora/fedora
sudo sed -i "s/nameserver.*/nameserver 8.8.8.8/" /etc/resolv.conf
ping google.com
exit

Output:

[root@microshift kubevirt]# virtctl console vm-example
Successfully connected to vm-example console. The escape sequence is ^]

vm-example login: fedora
Password:
Last login: Thu Mar 10 22:51:28 on ttyAMA0
[fedora@vm-example ~]$ sudo sed -i "s/nameserver.*/nameserver 8.8.8.8/" /etc/resolv.conf 
[fedora@vm-example ~]$ ping google.com
PING google.com (142.250.80.46) 56(84) bytes of data.
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=1 ttl=114 time=6.37 ms
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=2 ttl=114 time=3.63 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.630/5.001/6.372/1.371 ms
[fedora@vm-example ~]$ exit
logout

We can access the OKD Web Console and run Actions (Restart, Stop, Pause, etc.) on the VM at http://console-np-service-kube-system.cluster.local/

Web Console with Serial Console for Virtual Machine Instance


Stop the virtual machine using kubectl:

kubectl patch virtualmachine vm-example --type merge -p '{"spec":{"running":false}}' -n default

Output:

[root@microshift vmi]# oc get pods,vm,vmi -n default
NAME                                      READY   STATUS        RESTARTS   AGE
pod/virt-launcher-vm-example-6lvf6        2/2     Terminating   0          7m39s

NAME                                    AGE   STATUS    READY
virtualmachine.kubevirt.io/vm-example   14m   Stopped   False

NAME                                            AGE     PHASE       IP           NODENAME                 READY
virtualmachineinstance.kubevirt.io/vm-example   7m39s   Succeeded   10.42.0.17   microshift.example.com   False

8. Install Metrics Server

This will enable us to run the “kubectl top” and “oc adm top” commands.

curl -L https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml > metrics-server-components.yaml
oc apply -f metrics-server-components.yaml

# Wait for the metrics-server to start in the kube-system namespace
oc get deployment metrics-server -n kube-system
oc get events -n kube-system

If you see that the metrics pod cannot scrape because of no route to host: E0309 23:05:33.611303 1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.1.209:10250/metrics/resource\": dial tcp 192.168.1.209:10250: connect: no route to host" node="microshift.example.com"

Add hostNetwork: true under the pod template "spec:" as shown below. If required, also add --kubelet-preferred-address-types=InternalIP by editing the metrics-server deployment:

oc edit deployments -n kube-system metrics-server
    spec:
      hostNetwork: true
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
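Equivalently, the hostNetwork setting can be applied non-interactively (a sketch; the patch path assumes the standard Deployment layout):

oc patch deployment metrics-server -n kube-system --type=json -p '[{"op":"add","path":"/spec/template/spec/hostNetwork","value":true}]'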

# Wait for a couple of minutes for metrics to be collected

oc get --raw /apis/metrics.k8s.io/v1beta1/nodes
oc get --raw /apis/metrics.k8s.io/v1beta1/pods
oc get --raw /api/v1/nodes/$(kubectl get nodes -o json | jq -r '.items[0].metadata.name')/proxy/stats/summary

Output:

[root@microshift vmi]# oc adm top nodes
NAME                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
microshift.example.com   768m         19%    3007Mi          38%
[root@microshift vmi]# oc adm top pods -A
NAMESPACE                       NAME                                  CPU(cores)   MEMORY(bytes)
kube-system                     kube-flannel-ds-zds5d                 9m           11Mi
kube-system                     metrics-server-6cc6768f44-d824s       11m          14Mi
kubevirt                        virt-api-5775b79ffd-jsxcz             17m          91Mi
kubevirt                        virt-api-5775b79ffd-w2zt2             26m          92Mi
kubevirt                        virt-controller-569c95fcb9-2kpr4      27m          91Mi
kubevirt                        virt-controller-569c95fcb9-9lhjs      20m          85Mi
kubevirt                        virt-handler-c7ghq                    2m           105Mi
kubevirt                        virt-operator-77fb67d456-hqp9m        7m           96Mi
kubevirt                        virt-operator-77fb67d456-ks5t2        10m          109Mi
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-j5pzk   1m           6Mi
openshift-dns                   dns-default-t5zvk                     7m           18Mi
openshift-dns                   node-resolver-9f5q8                   10m          12Mi
openshift-ingress               router-default-85bcfdd948-hvs2j       3m           37Mi
openshift-service-ca            service-ca-7764c85869-9lc7j           22m          36Mi

Cleanup MicroShift

We can use the ~/microshift/hack/cleanup.sh script to remove the pods and images.
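Roughly, the script performs the following steps (a sketch, not the script itself; refer to cleanup.sh for the exact commands):

systemctl stop microshift
crictl rmp --all --force   # remove all cri-o pods
crictl rm --all --force    # remove all cri-o containers
crictl rmi --all           # remove all cri-o images
rm -rf /var/lib/microshift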

Output:

[root@microshift hack]# ./cleanup.sh
DATA LOSS WARNING: Do you wish to stop and cleanup ALL MicroShift data AND cri-o container workloads?
1) Yes
2) No
#? 1
Stopping microshift
Removing crio pods
Removing crio containers
Removing crio images
Killing conmon, pause processes
Removing /var/lib/microshift
Cleanup succeeded

Containerized MicroShift on Fedora IoT

We can run MicroShift within containers in two ways:

  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, the CRI-O systemd service runs directly on the host, and data is stored at /var/lib/microshift and /var/lib/kubelet on the host.
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a single container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only.

1. MicroShift Containerized

We will use the prebuilt image.

IMAGE=quay.io/microshift/microshift:4.8.0-0.microshift-2022-03-08-195111-linux-arm64
podman pull $IMAGE

podman run --rm --ipc=host --network=host --privileged -d --name microshift -v /var/run:/var/run -v /sys:/sys:ro -v /var/lib:/var/lib:rw,rshared -v /lib/modules:/lib/modules -v /etc:/etc -v /run/containers:/run/containers -v /var/log:/var/log -e KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig $IMAGE
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "podman ps;oc get nodes;oc get pods -A;crictl pods"

The output shows the microshift container running under podman and the rest of the pods in CRI-O.

podman ps

CONTAINER ID  IMAGE                                                                           COMMAND CREATED        STATUS            PORTS NAMES
fe2ca06de3ee  quay.io/microshift/microshift:4.8.0-0.microshift-2022-03-08-195111-linux-arm64  run     4 minutes ago  Up 4 minutes ago        microshift

oc get nodes

NAME                     STATUS   ROLES    AGE     VERSION
microshift.example.com   Ready    

oc get pods -A

NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-8g7kz                 1/1     Running   0          2m42s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-2jx8v   1/1     Running   0          2m31s
openshift-dns                   dns-default-6fmcs                     2/2     Running   0          2m41s
openshift-dns                   node-resolver-s65cw                   1/1     Running   0          2m42s
openshift-ingress               router-default-85bcfdd948-x77x6       1/1     Running   0          2m46s
openshift-service-ca            service-ca-7764c85869-rcxk7           1/1     Running   0          2m47s

The containers run within cri-o on the host: crictl pods

POD ID          CREATED              STATE   NAME                                  NAMESPACE                       ATTEMPT   RUNTIME
e5ddf74b6fc0c   23 seconds ago       Ready   dns-default-6fmcs                     openshift-dns                   0         (default)
66e1b34f16776   About a minute ago   Ready   router-default-85bcfdd948-x77x6       openshift-ingress               0         (default)
3555c52afd6f4   About a minute ago   Ready   kubevirt-hostpath-provisioner-2jx8v   kubevirt-hostpath-provisioner   0         (default)
dcaca1c97a426   About a minute ago   Ready   service-ca-7764c85869-rcxk7           openshift-service-ca            0         (default)
bbf82170fd431   2 minutes ago        Ready   node-resolver-s65cw                   openshift-dns                   0         (default)
5defcc5655bf0   2 minutes ago        Ready   kube-flannel-ds-8g7kz                 kube-system                     0         (default)

Now, we can run the samples shown earlier on Containerized MicroShift.

After we are done, we can delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.

podman stop microshift

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.

2. MicroShift Containerized All-In-One

Let's stop CRI-O on the host; we will be creating an all-in-one container in podman that will have CRI-O within the container.

systemctl stop crio
systemctl disable crio

Output:

[root@microshift sensehat]# ls -d /sys/fs/cgroup/system.slice/crio.service/*
/sys/fs/cgroup/system.slice/crio.service/cgroup.controllers      /sys/fs/cgroup/system.slice/crio.service/memory.low
/sys/fs/cgroup/system.slice/crio.service/cgroup.events           /sys/fs/cgroup/system.slice/crio.service/memory.max
/sys/fs/cgroup/system.slice/crio.service/cgroup.freeze           /sys/fs/cgroup/system.slice/crio.service/memory.min
/sys/fs/cgroup/system.slice/crio.service/cgroup.kill             /sys/fs/cgroup/system.slice/crio.service/memory.numa_stat
/sys/fs/cgroup/system.slice/crio.service/cgroup.max.depth        /sys/fs/cgroup/system.slice/crio.service/memory.oom.group
/sys/fs/cgroup/system.slice/crio.service/cgroup.max.descendants  /sys/fs/cgroup/system.slice/crio.service/memory.pressure
/sys/fs/cgroup/system.slice/crio.service/cgroup.procs            /sys/fs/cgroup/system.slice/crio.service/memory.stat
/sys/fs/cgroup/system.slice/crio.service/cgroup.stat             /sys/fs/cgroup/system.slice/crio.service/memory.swap.current
/sys/fs/cgroup/system.slice/crio.service/cgroup.subtree_control  /sys/fs/cgroup/system.slice/crio.service/memory.swap.events
/sys/fs/cgroup/system.slice/crio.service/cgroup.threads          /sys/fs/cgroup/system.slice/crio.service/memory.swap.high
/sys/fs/cgroup/system.slice/crio.service/cgroup.type             /sys/fs/cgroup/system.slice/crio.service/memory.swap.max
/sys/fs/cgroup/system.slice/crio.service/cpu.pressure            /sys/fs/cgroup/system.slice/crio.service/misc.current
/sys/fs/cgroup/system.slice/crio.service/cpu.stat                /sys/fs/cgroup/system.slice/crio.service/misc.events
/sys/fs/cgroup/system.slice/crio.service/io.pressure             /sys/fs/cgroup/system.slice/crio.service/misc.max
/sys/fs/cgroup/system.slice/crio.service/memory.current          /sys/fs/cgroup/system.slice/crio.service/pids.current
/sys/fs/cgroup/system.slice/crio.service/memory.events           /sys/fs/cgroup/system.slice/crio.service/pids.events
/sys/fs/cgroup/system.slice/crio.service/memory.events.local     /sys/fs/cgroup/system.slice/crio.service/pids.max
/sys/fs/cgroup/system.slice/crio.service/memory.high
[root@microshift sensehat]# systemctl stop crio
[root@microshift sensehat]# ls -d /sys/fs/cgroup/system.slice/crio.service/*
ls: cannot access '/sys/fs/cgroup/system.slice/crio.service/*': No such file or directory

We will run the all-in-one MicroShift in podman using prebuilt images. I had to mount /sys/fs/cgroup in the podman run command; “sudo setsebool -P container_manage_cgroup true” did not work. We can just volume mount /sys/fs/cgroup into the container using -v /sys/fs/cgroup:/sys/fs/cgroup:ro. This mounts /sys/fs/cgroup into the container as read-only, but the subdirectories/mount points will be mounted read/write.

podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-03-08-195111-linux-nft-arm64

We can inspect the microshift-data volume to find the path

[root@microshift sensehat]# podman volume inspect microshift-data
[
    {
        "Name": "microshift-data",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/microshift-data/_data",
        "CreatedAt": "2022-03-10T16:50:03.109703319Z",
        "Labels": {},
        "Scope": "local",
        "Options": {}
    }
]

On the host Raspberry Pi, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint shown above

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A"

The crio service is stopped on the Raspberry Pi, so crictl commands will not work directly on the Pi; they will work within the microshift container in podman.

podman exec -it microshift crictl ps -a

Output:

NAME                     STATUS   ROLES    AGE     VERSION
microshift.example.com   Ready    <none>   3m31s   v1.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-62clf                 1/1     Running   0          3m30s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-n992k   1/1     Running   0          3m19s
openshift-dns                   dns-default-mtknh                     2/2     Running   0          3m30s
openshift-dns                   node-resolver-mvlhk                   1/1     Running   0          3m30s
openshift-ingress               router-default-85bcfdd948-qncc4       1/1     Running   0          3m34s
openshift-service-ca            service-ca-7764c85869-ng2t9           1/1     Running   0          3m35s

[root@microshift sensehat]# podman ps
CONTAINER ID  IMAGE                                                                                   COMMAND     CREATED        STATUS            PORTS                                                               NAMES
750fe06b1692  quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-03-08-195111-linux-nft-arm64  /sbin/init  6 minutes ago  Up 6 minutes ago  0.0.0.0:80->80/tcp, 0.0.0.0:6443->6443/tcp, 0.0.0.0:8080->8080/tcp  microshift

[root@microshift sensehat]# podman exec -it microshift crictl ps -a
CONTAINER       IMAGE                                                                                                            CREATED          STATE     NAME                            ATTEMPT   POD ID
3ae5f39423fbc   67a95c8f1590260996af20f441d259b489ec32151e4e93c057224f3084cd8316                                                26 seconds ago   Running   dns                             1         7a92be709ab97
fc117d080fbb1   quay.io/microshift/kube-rbac-proxy@sha256:2b5f44b11bab4c10138ce526439b43d62a890c3a02d42893ad02e2b3adb38703      2 minutes ago    Running   kube-rbac-proxy                 0         7a92be709ab97
ad01136d6f908   quay.io/microshift/coredns@sha256:07e5397247e6f4739727591f00a066623af9ca7216203a5e82e0db2fb24514a3              2 minutes ago    Exited    dns                             0         7a92be709ab97
7202654b802b5   quay.io/microshift/haproxy-router@sha256:706a43c785337b0f29aef049ae46fdd65dcb2112f4a1e73aaf0139f70b14c6b5       3 minutes ago    Running   router                          0         7bf88cd6fd5be
d343570ac0f85   85fc911ceba5a5a5e43a7c613738b2d6c0a14dad541b1577cdc6f921c16f5b75                                                3 minutes ago    Running   kube-flannel                    0         76ca1eaee3f02
949c1360ae16b   quay.io/microshift/flannel@sha256:13777a318497ae35593bb73499a0e2ff4cb7eda74f59c1ba7c3e88c717cbaab9              3 minutes ago    Exited    install-cni                     0         76ca1eaee3f02
2a3c45fa198ec   quay.io/microshift/service-ca-operator@sha256:1a8e53c8a67922d4357c46e5be7870909bb3dd1e6bea52cfaf06524c300b84e8  3 minutes ago    Running   service-ca-controller           0         49b80a4ba15c0
eff5e1377cce6   quay.io/microshift/cli@sha256:1848138e5be66753863c98b86c274bd7fb8572fe0da6f7156f1644187e4cfb84                  3 minutes ago    Running   dns-node-resolver               0         301f2a28812f7
ef6e9be61f3b9   quay.io/microshift/hostpath-provisioner@sha256:cb0c1cc60c1ba90efd558b094ba1dee3a75e96b76e3065565b60a07e4797c04c 3 minutes ago    Running   kubevirt-hostpath-provisioner   0         bc1277ac1d84c
3edbbbbe64dfd   quay.io/microshift/flannel-cni@sha256:39f81dd125398ce5e679322286344a4c13dded73ea0bf4f397e5d1929b43d033          4 minutes ago    Exited    install-cni-bin                 0         76ca1eaee3f02

Let’s try the nginx sample.

Still on the host Raspberry Pi, copy the index.html (shown in Sample 1 earlier) to /var/hpvolumes/nginx/data1/ and then run the nginx sample with the commands below; a sketch of the PV and PVC manifests follows the commands for reference:

cd ~/microshift/raspberry-pi/nginx
mkdir -p /var/hpvolumes/nginx/data1/
cp index.html /var/hpvolumes/nginx/data1/.
restorecon -R -v "/var/hpvolumes/*"
oc apply -f hostpathpv.yaml -f hostpathpvc.yaml -f nginx.yaml
oc expose svc nginx-svc
oc get routes
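For reference, hostpathpv.yaml and hostpathpvc.yaml define a hostPath-backed PersistentVolume and a matching claim. The sketch below is a minimal equivalent; the names, size, and storage class here are illustrative assumptions, so use the files from the repository for the real values:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-pv                                  # illustrative name
spec:
  storageClassName: kubevirt-hostpath-provisioner    # assumed; must match the claim below
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/hpvolumes/nginx/data1                 # the directory populated above
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-pvc                                 # illustrative name; nginx.yaml must reference the real claim name
spec:
  storageClassName: kubevirt-hostpath-provisioner    # assumed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi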

Then, log in to the microshift all-in-one container

podman exec -it microshift bash

Within the container, get the IP address of the node with

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
kubectl get nodes -o wide

Output:

[root@microshift /]# kubectl get nodes -o wide
NAME                     STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                               KERNEL-VERSION             CONTAINER-RUNTIME
microshift.example.com   Ready    <none>   14m   v1.21.0   10.88.0.2     <none>        Red Hat Enterprise Linux 8.4 (Ootpa)   5.16.11-200.fc35.aarch64   cri-o://1.21.4

Add the IP address along with nginx-svc-default.cluster.local to /etc/hosts within the microshift container

10.88.0.2       microshift.example.com microshift nginx-svc-default.cluster.local
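One way to append this entry from inside the container without opening an editor is shown below; appending a separate line works just as well as editing the existing microshift.example.com entry. The IP is the node INTERNAL-IP reported by kubectl above:

echo "10.88.0.2 nginx-svc-default.cluster.local" >> /etc/hosts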

Now, within the microshift container, you can send a request to nginx

curl nginx-svc-default.cluster.local:30080
curl nginx-svc-default.cluster.local

On the host Raspberry Pi, add the IP address of the Raspberry Pi (or 127.0.0.1) along with nginx-svc-default.cluster.local to /etc/hosts

127.0.0.1 localhost.localdomain localhost nginx-svc-default.cluster.local

Now, on the host Raspberry Pi, you can send a request to nginx

curl nginx-svc-default.cluster.local

Since the name server cannot resolve the hostname in the route, you can run the following command instead of modifying /etc/hosts:

curl -H "Host: nginx-svc-default.cluster.local" 10.88.0.2

or

curl --resolv nginx-svc-default.cluster.local:80:10.88.0.2 http://nginx-svc-default.cluster.local
# The following will work if you expose port 30080 to the host when running the microshift container
# curl --resolv nginx-svc-default.cluster.local:30080:10.88.0.2 http://nginx-svc-default.cluster.local:30080

The --resolv option provides a custom address for a specific host and port pair.

Alternatively, to avoid modifying /etc/hosts, you can use nip.io with the IP address of your microshift container. nip.io is a free service that maps any IP address to a hostname in certain formats. Replace 10.88.0.2 with nginx.<ip address of the microshift container>.nip.io in the commands below:

[root@microshift nginx]# podman inspect microshift | grep IPAddress
            "IPAddress": "10.88.0.2",
                    "IPAddress": "10.88.0.2",
[root@microshift nginx]# oc delete route nginx-svc
route.route.openshift.io "nginx-svc" deleted
[root@microshift nginx]# oc expose svc nginx-svc --hostname=nginx.10.88.0.2.nip.io
route.route.openshift.io/nginx-svc exposed
[root@microshift nginx]# podman exec -it microshift bash
[root@microshift /]# curl nginx.10.88.0.2.nip.io
...
<title>Welcome to nginx from MicroShift!</title>
...
[root@microshift /]# exit
exit

This allows you to issue curl commands from the Raspberry Pi, from the microshift container in podman, and from the nginx pods running in crio within the microshift container.

Instead of the microshift container IP, you can use the IP address of your host Raspberry Pi, as shown in the snippet below. In my case it is 192.168.1.209. Now, on the host Raspberry Pi:

oc delete route nginx-svc
oc expose svc nginx-svc --hostname=nginx.192.168.1.209.nip.io --name=nginx-route

The new route inherits its name from the service unless you specify one using the --name option. You can now issue the curl request from your Mac or from any machine that has access to the IP address of the Raspberry Pi, even from within the microshift container or from a pod within the microshift container.
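You can confirm the hostname assigned to the new route before testing it from other machines:

oc get route nginx-route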

curl nginx.192.168.1.209.nip.io

Output:

MBP:~ karve$ # On my Laptop
MBP:~ karve$ curl nginx.192.168.1.209.nip.io
…
<title>Welcome to nginx from MicroShift!</title> 
…
MBP:~ karve$ ssh root@192.168.1.209 # Logging onto the Raspberry Pi
[root@microshift nginx]# curl nginx.192.168.1.209.nip.io # On Raspberry Pi
…
<title>Welcome to nginx from MicroShift!</title> 
…
[root@microshift nginx]# podman exec -it microshift bash # On microshift container in podman
[root@microshift /]# curl nginx.192.168.1.209.nip.io
…
<title>Welcome to nginx from MicroShift!</title> 
…
[root@microshift /]# export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
[root@microshift /]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7f888f8ff7-45h5z   1/1     Running   0          96m
nginx-deployment-7f888f8ff7-x62td   1/1     Running   0          96m
[root@microshift /]# kubectl exec -it deployment/nginx-deployment -- sh # within one of the two nginx containers in crio
Defaulted container "nginx" out of: nginx, volume-permissions (init)
/ $ curl localhost:8080
…
<title>Welcome to nginx from MicroShift!</title> 
…
/ $ curl http://nginx.192.168.1.209.nip.io # Cannot resolve?
/ $ curl --resolv nginx.192.168.1.209.nip.io:80:192.168.1.209 http://nginx.192.168.1.209.nip.io
/ $ curl --resolv nginx.192.168.1.209.nip.io:80:10.88.0.2 http://nginx.192.168.1.209.nip.io
/ $ exit # Exit out of nginx container in crio
[root@microshift /]# exit # Exit out of microshift container in podman
[root@microshift nginx]# 
[root@microshift nginx]# exit # Exit out of Raspberry Pi 4
logout
Connection to 192.168.1.209 closed.
MBP:~ karve$ # On my Laptop

Clean up the nginx sample

oc delete -f hostpathpv.yaml -f hostpathpvc.yaml -f nginx.yaml
rm -rf /var/hpvolumes/nginx/

We can run the postgresql sample in a similar way. The Sense Hat and object detection samples also work through crio within the all-in-one podman container. To run the Virtual Machine examples in the all-in-one MicroShift, we need to run mount with --make-shared in the microshift container, as shown below, to prevent the following error:

Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount
mount --make-shared /

We may also preload the virtual machine images using "crictl pull".

Output:

[root@microshift kubevirt]# podman exec -it microshift bash
[root@microshift /]# mount --make-shared /
[root@microshift /]# crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
Image is up to date for quay.io/kubevirt/fedora-cloud-container-disk-demo@sha256:4de55b9ed3a405cdc74e763f6a7c05fe4203e34a8720865555db8b67c36d604b
[root@microshift /]# exit
exit

Building KubeVirt

We can either build the KubeVirt binaries from source or copy them from a prebuilt container image.

a. Building virtctl from source

git clone https://github.com/kubevirt/kubevirt.git
cd kubevirt
vi ~/kubevirt/hack/common.sh

Hardcode podman in determine_cri_bin() as shown below to avoid the error “no working container runtime found. Neither docker nor podman seems to work.”

determine_cri_bin() {
    echo podman
}

Run the make command below to build KubeVirt

make bazel-build

After the build is complete, we can list the virtctl binaries in the kubevirt-bazel-server container and then stop it:

podman exec kubevirt-bazel-server ls _out/cmd/virtctl
podman stop kubevirt-bazel-server
[root@microshift vmi]# podman exec kubevirt-bazel-server ls _out/cmd/virtctl
virtctl
virtctl-v0.51.0-96-gadd52d8c0-darwin-amd64
virtctl-v0.51.0-96-gadd52d8c0-linux-amd64
virtctl-v0.51.0-96-gadd52d8c0-windows-amd64.exe

Copy the virtctl binary to /usr/local/bin

cp _out/cmd/virtctl/virtctl /usr/local/bin
cd ..
rm -rf kubevirt

b. Copying the virtctl arm64 binary from a prebuilt image to /usr/local/bin

id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id
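Whichever approach you use, it is worth a quick check that the copied binary runs on the arm64 host. This is an optional verification step; virtctl version will also try to contact the cluster for the server version, which can be ignored at this point:

chmod +x /usr/local/bin/virtctl   # in case the copied file lost its execute bit
virtctl version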

Problems

1. OSError: Cannot detect RPi-Sense FB device

[root@microshift ~]# python3 sparkles.py
Traceback (most recent call last):
  File "sparkles.py", line 4, in <module>
    sense = SenseHat()
  File "/usr/local/lib/python3.6/site-packages/sense_hat/sense_hat.py", line 39, in __init__
    raise OSError('Cannot detect %s device' % self.SENSE_HAT_FB_NAME)
OSError: Cannot detect RPi-Sense FB device

To solve this, use the new sense_hat.py that uses smbus.
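If the error persists even with the updated sense_hat.py, it is worth confirming that the Sense HAT is visible on the I2C bus. A minimal check, assuming i2c-tools (which can be layered with rpm-ostree if not already present):

rpm-ostree install i2c-tools   # layer the tool if missing; reboot for the new deployment to take effect
i2cdetect -y 1                 # scan I2C bus 1; the Sense HAT responds at several addresses (e.g. 0x46, 0x5c, 0x5f)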

2. CrashLoopBackOff: dns-default and service-ca

The following patch may be required if the dns-default pod in the openshift-dns namespace keeps restarting.

oc patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","resources": {"requests": {"cpu": "80m","memory": "90Mi"}}}]}}}}'

You may also need to patch the service-ca deployment if it keeps restarting:

oc patch deployments/service-ca -n openshift-service-ca -p '{"spec": {"template": {"spec": {"containers": [{"name": "service-ca-controller","args": ["-v=4"]}]}}}}'
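After applying either patch, watch the affected pods until they settle into Running without additional restarts:

watch "oc get pods -n openshift-dns; oc get pods -n openshift-service-ca"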

Conclusion

In this Part 10, we saw multiple options to run MicroShift on the Raspberry Pi 4 with Fedora IoT. We ran samples that used a template, a persistent volume for postgresql, the Sense Hat, and a USB camera. We saw an object detection sample that sent pictures and WebSocket messages to Node Red when a person was detected. Finally, we installed the OKD Web Console and saw how to manage a VM using KubeVirt on MicroShift. We will work with Kata Containers in Part 23. In the next Part 11, we will work with MicroShift on the Raspberry Pi 4 with Fedora 35 Server.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift and KubeVirt on ARM devices and if you would like to see something covered in more detail.

#MicroShift #Openshift #containers #crio #Edge #raspberry-pi #fedora-iot
