Infrastructure as a Service


MicroShift – Part 11: Raspberry Pi 4 with Fedora 35 Server

By Alexei Karve posted Mon March 14, 2022 04:39 PM

  

MicroShift and KubeVirt on Raspberry Pi 4 with Fedora Server

Introduction

MicroShift is a research project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit), and further in Part 9, we looked at Virtualization with MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit). In Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with CentOS 8 Stream (64 bit). In Part 6, we deployed MicroShift on the Raspberry Pi 4 with Ubuntu 20.04 (64 bit). In Part 8, we looked at the All-In-One install of MicroShift on balenaOS. In Part 10, we deployed MicroShift and KubeVirt on Fedora IoT. In this Part 11, we will deploy MicroShift on Fedora Server. We will deploy Node Red on MicroShift with a dashboard that shows gauges for SenseHat sensor data. We will also run an object detection sample and send messages to the Node Red deployment. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift. Finally, we will install InfluxDB/Telegraf/Grafana with a dashboard that shows SenseHat sensor data.

Fedora distributes a generic Fedora Server Edition for Single Board Computers (SBC) such as the Raspberry Pi. Fedora on SBC hardware uses UEFI as its boot system. It creates an efi partition and a small /boot partition, used by the grub2 bootloader. Thereafter, it creates another partition containing one volume group (VG), which provides a logical volume with an XFS file system for the operating system and its software. The rest is left free for other logical volumes that are to hold user data. Separating system and user data eases system administration, increases security, and decreases error-proneness.
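For example, after the first boot you can inspect this layout with the standard LVM tools (output will vary with the size of your boot media):

lsblk
sudo pvs
sudo vgs
sudo lvs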

Setting up the Raspberry Pi 4 with Fedora Server

Run the following steps to download the Fedora Server image and set up the Raspberry Pi 4. Have a display and keyboard attached to the Raspberry Pi 4; for the first boot, these are required to perform an initial bare minimum configuration. Afterwards, you can do everything either over ssh or, more comfortably, using Cockpit, a graphical, web-based user interface that is preinstalled and activated by default.

1. Download the latest version of the Fedora Server Raw image for aarch64 onto your MacBook Pro
wget http://download.fedoraproject.org/pub/fedora/linux/releases/35/Server/aarch64/images/Fedora-Server-35-1.2.aarch64.raw.xz -O Fedora-Server-35-1.2.aarch64.raw.xz
2. Write the image to a MicroSDXC card using Balena Etcher, insert the MicroSDXC into the Raspberry Pi 4 and power on. I used a 64GB MicroSDXC card.
3. After the initial boot, Fedora allows you to directly customize the basic settings that will help you to create a user, configure the network and more. The customization is available on a display connected to the Raspberry Pi and not via SSH. If you have a DHCP server on your LAN, the only “strictly necessary” action is to configure the User. Press 5 and then 1 for User Creation, 2 to set the name to fedora, 5 to set the password, 6 for Administrator access and finally c twice to continue.
4. We can log in to Cockpit at https://$ipaddress:9090 and configure the storage. We will instead ssh to the Raspberry Pi and perform the necessary steps.

5. Log in using the fedora user and password created above, then enlarge the partition and volume group to fill the disk space.

ssh fedora@$ipaddress
sudo su -

sudo fdisk -lu
sudo growpart /dev/mmcblk0 3
sudo fdisk -lu /dev/mmcblk0
sudo pvresize /dev/mmcblk0p3
sudo lvextend -l +100%FREE /dev/mapper/fedora_fedora/root
sudo xfs_growfs -d /

If you used a USB flash drive instead of the MicroSDXC card, switch the above to use /dev/sda instead of /dev/mmcblk0.
Output:

[root@fedora ~]# fdisk -lu
Disk /dev/mmcblk0: 59.48 GiB, 63864569856 bytes, 124735488 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8f4e3975

Device         Boot   Start      End  Sectors  Size Id Type
/dev/mmcblk0p1 *       2048  1230847  1228800  600M  6 FAT16
/dev/mmcblk0p2      1230848  3327999  2097152    1G 83 Linux
/dev/mmcblk0p3      3328000 14680063 11352064  5.4G 8e Linux LVM


Disk /dev/mapper/fedora_fedora-root: 5.41 GiB, 5809111040 bytes, 11345920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/zram0: 7.64 GiB, 8205107200 bytes, 2003200 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

[root@fedora ~]# growpart /dev/mmcblk0 3
CHANGED: partition=3 start=3328000 old: size=11352064 end=14680064 new: size=121407455 end=124735455

[root@fedora ~]# fdisk -lu /dev/mmcblk0
Disk /dev/mmcblk0: 59.48 GiB, 63864569856 bytes, 124735488 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8f4e3975

Device         Boot   Start       End   Sectors  Size Id Type
/dev/mmcblk0p1 *       2048   1230847   1228800  600M  6 FAT16
/dev/mmcblk0p2      1230848   3327999   2097152    1G 83 Linux
/dev/mmcblk0p3      3328000 124735454 121407455 57.9G 8e Linux LVM

[root@fedora ~]# sudo pvresize /dev/mmcblk0p3
  Physical volume "/dev/mmcblk0p3" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized

[root@fedora ~]# df -h
Filesystem                      Size  Used Avail Use% Mounted on
devtmpfs                        3.8G     0  3.8G   0% /dev
tmpfs                           3.9G     0  3.9G   0% /dev/shm
tmpfs                           1.6G  9.3M  1.6G   1% /run
/dev/mapper/fedora_fedora-root  5.5G  2.5G  3.0G  45% /
tmpfs                           3.9G   20K  3.9G   1% /tmp
/dev/mmcblk0p2                 1014M  154M  861M  16% /boot
/dev/mmcblk0p1                  599M   31M  569M   6% /boot/efi
tmpfs                           783M     0  783M   0% /run/user/1000

[root@fedora ~]# sudo lvextend -l +100%FREE /dev/mapper/fedora_fedora/root
  Size of logical volume fedora_fedora/root changed from 5.41 GiB (1385 extents) to <57.89 GiB (14819 extents).
  Logical volume fedora_fedora/root successfully resized.

[root@fedora ~]# sudo xfs_growfs -d /
meta-data=/dev/mapper/fedora_fedora-root isize=512    agcount=4, agsize=354560 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=1418240, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 1418240 to 15174656

6. Enable wifi (optional)

nmcli device wifi list # Note your ssid
nmcli device wifi connect $ssid --ask
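You can confirm the wifi connection before proceeding:

nmcli device status
nmcli connection show --active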

7. Set the hostname with a domain (update with your Raspberry Pi ip address below)

sudo dnf update -y
hostnamectl hostname fedora.example.com
echo "$ipaddress fedora fedora.example.com" >> /etc/hosts

reboot
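After the reboot, a quick check confirms that the hostname and the /etc/hosts entry took effect:

hostnamectl status
hostname -f
getent hosts fedora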

8. Check the release

[root@fedora ~]# mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)
[root@fedora ~]# cat /proc/cgroups | column -t # Check that memory and cpuset are present
#subsys_name  hierarchy  num_cgroups  enabled
cpuset        0          93           1
cpu           0          93           1
cpuacct       0          93           1
blkio         0          93           1
memory        0          93           1
devices       0          93           1
freezer       0          93           1
net_cls       0          93           1
perf_event    0          93           1
net_prio      0          93           1
pids          0          93           1
misc          0          93           1
[root@fedora ~]# cat /etc/redhat-release
Fedora release 35 (Thirty Five)
[root@fedora ~]# uname -a
Linux fedora.example.com 5.16.12-200.fc35.aarch64 #1 SMP Wed Mar 2 18:49:17 UTC 2022 aarch64 aarch64 aarch64 GNU/Linux
[fedora@fedora ~]$ cat /etc/os-release
NAME="Fedora Linux"
VERSION="35 (Server Edition)"
ID=fedora
VERSION_ID=35
VERSION_CODENAME=""
PLATFORM_ID="platform:f35"
PRETTY_NAME="Fedora Linux 35 (Server Edition)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:35"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f35/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=35
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=35
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Server Edition"
VARIANT_ID=server

Install the dependencies for MicroShift and SenseHat

Dependencies to build RTIMULib

sudo dnf -y install git cmake zlib zlib-devel libjpeg-devel gcc gcc-c++ i2c-tools python3-devel python3 python3-pip

Dependencies for MicroShift. [Use cri-o:1.23 for Fedora 36 and cri-o:1.24 for Fedora Rawhide]

sudo dnf module enable -y cri-o:1.21
sudo dnf install -y cri-o cri-tools
sudo systemctl enable crio --now
sudo dnf copr enable -y @redhat-et/microshift
sudo dnf install -y microshift-selinux
ls /opt/cni/bin/ # empty
ls /usr/libexec/cni # cni plugins

Setting up libvirtd on the host

sudo dnf -y install libvirt-client libvirt-nss qemu-system-aarch64 virt-manager virt-install virt-viewer
# Works with nftables on Fedora Server and Fedora IoT
# vi /etc/firewalld/firewalld.conf # FirewallBackend=iptables
systemctl enable --now libvirtd

Installing sense_hat and RTIMULib on Fedora Server

The Sense HAT is an add-on board for the Raspberry Pi. The Sense HAT has an 8 × 8 RGB LED matrix and a five-button joystick, and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, Humidity. If you have the Sense HAT attached, install the libraries. We will install the default libraries, then overwrite the /usr/local/lib/python3.10/site-packages/sense_hat-2.2.0-py3.10.egg/sense_hat/sense_hat.py to use smbus after installing RTIMULib a few steps below.

0x5c: LPS25H Pressure
0x1c: LSM9DS1 9-axis iNEMO inertial module (IMU): 3D magnetometer, 3D accelerometer, 3D gyroscope
0x5f: HTS221 Humidity and Temperature
0x46: LED2472G 24-Channels LED driver with LED error detection and gain control
0x6a: LSM9DS1 Accelerometer Gyro Magnetometer

Install sensehat

i2cget -y 1 0x6A 0x75
i2cget -y 1 0x5f 0xf
i2cdetect -y 1
lsmod | grep st_
pip3 install Cython Pillow numpy sense_hat smbus

Output:

[root@fedora ~]# i2cget -y 1 0x6A 0x75
0x57
[root@fedora ~]# i2cget -y 1 0x5f 0xf
0xbc
[root@fedora ~]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --
[root@fedora ~]# lsmod | grep st_
st_magn_spi            16384  0
st_pressure_spi        16384  0
st_magn_i2c            16384  0
st_sensors_spi         16384  2 st_pressure_spi,st_magn_spi
regmap_spi             16384  1 st_sensors_spi
st_magn                20480  2 st_magn_i2c,st_magn_spi
st_pressure_i2c        16384  0
st_pressure            16384  2 st_pressure_i2c,st_pressure_spi
st_sensors_i2c         16384  2 st_pressure_i2c,st_magn_i2c
st_sensors             28672  6 st_pressure,st_pressure_i2c,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi
industrialio_triggered_buffer    16384  2 st_pressure,st_magn
industrialio           98304  9 st_pressure,industrialio_triggered_buffer,st_sensors,st_pressure_i2c,kfifo_buf,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi

We need to blacklist the Industrial I/O modules.

cat << EOF > /etc/modprobe.d/blacklist-industrialio.conf
blacklist st_magn_spi
blacklist st_pressure_spi
blacklist st_sensors_spi
blacklist st_pressure_i2c
blacklist st_magn_i2c
blacklist st_pressure
blacklist st_magn
blacklist st_sensors_i2c
blacklist st_sensors
blacklist industrialio_triggered_buffer
blacklist industrialio
EOF

reboot

Check the Sense Hat with i2cdetect
After the reboot, the 1C and 5C show up.

ssh root@$ipaddress
i2cdetect -y 1

Output:

[root@microshift ~]# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Install RTIMULib

git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11 # Ctrl-C to break
cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10 # Ctrl-C to break
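As a quick sanity check that the Python bindings installed correctly (the module installed by Linux/python is named RTIMU):

python3 -c "import RTIMU; print(RTIMU.__file__)"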

Replace the sense_hat.py with the new file that uses SMBus

cd ~
git clone https://github.com/thinkahead/microshift.git
cd microshift
cd raspberry-pi/sensehat-fedora-iot

pip3 install Cython Pillow numpy sense_hat smbus

# Update the python package to use the i2cbus
cp -f sense_hat.py.new /usr/local/lib/python3.10/site-packages/sense_hat/sense_hat.py
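The destination path above assumes Python 3.10. If your image ships a different Python version, you can locate the installed module first and adjust the path accordingly:

python3 -c "import sense_hat; print(sense_hat.__file__)"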

Test the SenseHat samples for the Sense Hat's LED matrix

If you have sense_hat.py in the local directory, you will get the error: "OSError: /var/roothome/microshift/raspberry-pi/sensehat-fedora-iot/sense_hat_text.png not found"

# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt

# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt

# Show two digits for multiple numbers
python3 digits.py

# First time you run the temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py 

# Use the new get_state method from sense_hat.py
python3 joystick.py # U=Up D=Down L=Left R=Right M=Press

Test the USB camera

Install the latest pygame. Note that pygame 1.9.6 will throw “SystemError: set_controls() method: bad call flags”, so you need to upgrade to pygame 2.1.0.

pip3 install pygame --upgrade
python3 testcam.py # It will create a file 101.bmp

Install the oc and kubectl client

ARCH=arm64
export OCP_VERSION=4.9.11 && \
    curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf oc.tar.gz && \
    rm -f oc.tar.gz && \
    install -t /usr/local/bin {kubectl,oc}
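Verify that the clients are installed:

oc version --client
kubectl version --client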

Start MicroShift

We will have systemd start and manage MicroShift on this rpm-based host. You may read about selecting zones for your interfaces.

sudo dnf install -y microshift firewalld
sudo systemctl enable firewalld --now
sudo firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=5353/udp --permanent
sudo firewall-cmd --reload

For Fedora 36 and Rawhide, replace the microshift binary; see Part 21 for details.

curl -L https://github.com/openshift/microshift/releases/download/nightly/microshift-linux-arm64 > /usr/local/bin/microshift
chmod +x /usr/local/bin/microshift
cp /usr/lib/systemd/system/microshift.service /etc/systemd/system/microshift.service
sed -i "s|/usr/bin|/usr/local/bin|" /etc/systemd/system/microshift.service
systemctl daemon-reload
sudo systemctl enable microshift --now

Additional ports may need to be opened. For external access to run kubectl or oc commands against MicroShift, add the 6443 port:

sudo firewall-cmd --zone=public --permanent --add-port=6443/tcp

For access to services through NodePort, add the port range 30000-32767:

sudo firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp

sudo firewall-cmd --reload
firewall-cmd --list-all --zone=public
firewall-cmd --get-default-zone
firewall-cmd --set-default-zone=public
firewall-cmd --get-active-zones

Check the microshift and crio logs

journalctl -u microshift -f
journalctl -u crio -f

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

echo "export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig" >> ~/.bash_profile
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

The microshift service references the microshift binary in the /usr/bin directory

[root@microshift ~]# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

If you make changes to the above microshift.service, you need to run the following to reload the changed unit files from the filesystem and regenerate the dependency tree.

systemctl daemon-reload

Samples to run on MicroShift

We will run a few samples that demonstrate the use of persistent volumes, the SenseHat, and the USB camera.

1. Postgresql database server

The source code is on GitHub:

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/pg

Create a new project pg. Create the configmap, pv, pvc and deployment for PostgreSQL

oc new-project pg
mkdir -p /var/hpvolumes/pg

If you have SELinux set to Enforcing, run restorecon:

restorecon -R -v "/var/hpvolumes/*"
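You can confirm the SELinux label that was applied:

ls -dZ /var/hpvolumes/pg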

oc apply -f hostpathpvc.yaml -f hostpathpv.yaml -f pg-configmap.yaml -f pg.yaml

The first time we start PostgreSQL, the logs show that it creates the database:

oc logs deployment.apps/pg-deployment

Connect to the database

oc exec -it deployment.apps/pg-deployment -- bash
psql --host localhost --port 5432 --user postgresadmin --dbname postgresdb # test123 as password

Create a TABLE cities and insert a couple of rows

CREATE TABLE cities (name varchar(80), location point);
\t
INSERT INTO cities VALUES ('Madison', '(89.40, 43.07)'),('San Francisco', '(-122.43,37.78)');
SELECT * from cities;
\d
\q
exit
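To confirm that the hostPath volume actually persists the data, you can restart the pod and re-run the query non-interactively (a quick sketch; test123 is the password used above):

oc rollout restart deployment/pg-deployment
oc wait --for=condition=Available deployment/pg-deployment --timeout=120s
oc exec deployment.apps/pg-deployment -- bash -c "PGPASSWORD=test123 psql --host localhost --port 5432 --user postgresadmin --dbname postgresdb -c 'SELECT * FROM cities;'"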

After we are done, we can delete the deployment and project

oc delete -f pg.yaml -f pg-configmap.yaml -f hostpathpvc.yaml -f hostpathpv.yaml
oc project default
oc delete project pg
rm -rf /var/hpvolumes/pg/

2. Node Red live data dashboard with SenseHat sensor charts

In this sample, we will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard, and view the gauges for temperature/pressure/humidity data from the SenseHat on the dashboard. This Node Red deployment will be used in the next two samples.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image “karve/nodered-fedora:arm64”

cd docker-custom/
./docker-debianonfedora.sh
docker push karve/nodered-fedora:arm64
cd ..

Deploy Node Red with persistent volume for /data within the node red container

mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml
oc get routes
oc logs deployment/nodered-deployment -f

Output:

[root@fedora nodered]# oc logs deployment/nodered-deployment -f
Installing Nodes

> node-red-node-pi-sense-hat@0.1.1 postinstall /data/node_modules/node-red-node-pi-sense-hat
> scripts/checklib.sh

Sense HAT python library is installed
added 138 packages from 209 contributors and audited 138 packages in 27.725s

3 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

Starting NodeRed

> node-red-docker@2.1.4 start /usr/src/node-red
> node $NODE_OPTIONS node_modules/node-red/red.js $FLOWS "--userDir" "/data"

14 Mar 17:25:17 - [info]

Welcome to Node-RED
===================

14 Mar 17:25:17 - [info] Node-RED version: v2.1.4
14 Mar 17:25:17 - [info] Node.js  version: v14.19.0
14 Mar 17:25:17 - [info] Linux 5.16.12-200.fc35.aarch64 arm64 LE
14 Mar 17:25:18 - [info] Loading palette nodes
14 Mar 17:25:21 - [info] Dashboard version 3.1.6 started at /ui
14 Mar 17:25:21 - [info] Settings file  : /data/settings.js
14 Mar 17:25:21 - [info] Context store  : 'default' [module=memory]
14 Mar 17:25:21 - [info] User directory : /data
14 Mar 17:25:21 - [warn] Projects disabled : editorTheme.projects.enabled=false
14 Mar 17:25:21 - [info] Flows file     : /data/flows.json
14 Mar 17:25:21 - [info] Server now running at http://127.0.0.1:1880/
14 Mar 17:25:22 - [warn]

---------------------------------------------------------------------
Your flow credentials file is encrypted using a system-generated key.

If the system-generated key is lost for any reason, your credentials
file will not be recoverable, you will have to delete it and re-enter
your credentials.

You should set your own key using the 'credentialSecret' option in
your settings file. Node-RED will then re-encrypt your credentials
file using your chosen key the next time you deploy a change.
---------------------------------------------------------------------

14 Mar 17:25:22 - [info] Starting flows
14 Mar 17:25:22 - [info] Started flows

Add the ip address of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your laptop and browse to http://nodered-svc-nodered.cluster.local/

The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, and node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 or Flow 2 has already been imported from the nodered sample. This import into Node Red can also be done manually under “Import Nodes”, followed by clicking “Deploy”. The node-red-node-pi-sense-hat module requires a change in sensehat.py in order to use the sense_hat.py.new that uses smbus and the new function for the joystick. This change is accomplished by overwriting with the modified sensehat.py in Dockerfile.debianonfedora (docker.io/karve/nodered-fedora:arm64, built using docker-debianonfedora.sh), which is further copied from the /tmp directory to the correct volume when the pod starts in nodered2.yaml.

Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. If you selected Flow 1, you can click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED matrix.

Environment Control and Temperature Graph

For Flow 2, you can see the state of the Joystick Up, Down, Left, Right or Pressed.

Temperature, Humidity, Pressure and Joystick

We can continue running the next two samples that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:

cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered.yaml -f noderedroute.yaml -n nodered

Output:

[root@fedora object-detection]# cd ~/microshift/raspberry-pi/nodered
[root@fedora nodered]# oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered.yaml -f noderedroute.yaml -n nodered
warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "noderedpv" deleted
persistentvolumeclaim "noderedpvc" deleted
deployment.apps "nodered-deployment" deleted
service "nodered-svc" deleted
route.route.openshift.io "nodered-route" deleted

3. Sense Hat and USB camera sending messages to Node Red

We will use Node Red to show pictures and chat messages sent from the Raspberry Pi 4 when a person is detected. The Node Red deployment earlier had node-red-contrib-image-tools, node-red-contrib-image-output, and node-red-node-base64 preinstalled, and imported the Chat flow and the Picture (Image) display flow. The Chat can be seen at http://nodered-svc-nodered.cluster.local/ and the images at http://nodered-svc-nodered.cluster.local/#flow/3e30dc50ae28f61f. On the Image flow, click on the square box to the right of the image preview or viewer node to Deactivate and Activate the node. You will be able to see the picture when you Activate the node and run the sample below.

cd ~/microshift/raspberry-pi/sensehat

We build the image from Dockerfile.fedora, which uses a base image built from the Dockerfile. The base image for arm32v7 is from https://hub.docker.com/r/arm32v7/ubuntu and the wheel for pip is from https://www.piwheels.org/project/pip/. The RTIMULib binaries are from https://archive.raspberrypi.org/debian/pool/main/r/rtimulib/.

podman build -f Dockerfile.fedora -t docker.io/karve/sensehat-fedora .
podman push docker.io/karve/sensehat-fedora:latest

We send pictures and web socket chat messages to Node Red using a pod in MicroShift. Update the URL to point to nodered-svc-nodered.cluster.local and the ip address in hostAliases to your Raspberry Pi 4 ip address in the sensehat-fedora.yaml.

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: VideoSource
            value: "/dev/video0"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.209

oc project default
oc apply -f sensehat-fedora.yaml

When we are done, we can delete the deployment

oc delete -f sensehat-fedora.yaml

4. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 3.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection, which sends pictures and web socket chat messages to Node Red when a person is detected, and run it in a pod in MicroShift.

podman build -f Dockerfile.fedora -t docker.io/karve/object-detection-raspberrypi4-fedora .
podman push docker.io/karve/object-detection-raspberrypi4-fedora:latest

Update the env WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection-fedora.yaml to point to your Raspberry Pi 4 ip address.

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.209

oc project default
oc apply -f object-detection-fedora.yaml

We will see pictures being sent to Node Red when a person is detected. When we are done testing, we can delete the deployment:

oc delete -f object-detection-fedora.yaml

5. Running a Virtual Machine Instance on MicroShift

We first install the kubevirt operator.

virt-host-validate qemu
LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
# The .status.phase will show Deploying multiple times and finally Deployed
# We wait till the virt-operator, virt-api, virt-controller and virt-handler pods are started; this can take up to 5 minutes
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt

We can build the OKD Web Console (codename “bridge”) from source as mentioned in Part 9. Here we will run “bridge” as a container image within MicroShift.

cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'
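To inspect the bearer token inside one of the secrets returned above, you can decode it (console-token-xxxxx below is a placeholder; substitute the actual secret name):

oc get secret console-token-xxxxx -n kube-system -o jsonpath='{.data.token}' | base64 -d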

Replace BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT in okd-web-console-install.yaml with your Raspberry Pi 4 ip address. Also replace the secretRef token for BRIDGE_K8S_AUTH_BEARER_TOKEN with one of the console-token secret names from the two outputs above.

oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc logs deployment/console-deployment -f -n kube-system
oc get routes -n kube-system

Add the Raspberry Pi ip address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/

We can optionally preload the fedora image into crio.

crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.

cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods

The output for the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”. Note down the ip address of the vmi-fedora Virtual Machine Instance. Then we create a pod with an ssh client and connect to the Fedora VM from it.
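You can also pull the ip address directly out of the VMI status with jsonpath instead of reading it off the table:

oc get vmi vmi-fedora -o jsonpath='{.status.interfaces[0].ipAddress}{"\n"}'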

kubectl run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client

or

kubectl run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
#kubectl attach sshclient -c sshclient -i -t

From the sshclient container, ssh to the Fedora VMI:

[root@fedora vmi]# oc get vmi,pods
NAME                                            AGE   PHASE     IP           NODENAME             READY
virtualmachineinstance.kubevirt.io/vmi-fedora   99s   Running   10.42.0.19   fedora.example.com   True

NAME                                      READY   STATUS    RESTARTS   AGE
pod/nodered-deployment-5bdddb5d94-szw4p   1/1     Running   0          35m
pod/virt-launcher-vmi-fedora-h77q8        2/2     Running   0          99s
[root@fedora vmi]# kubectl run alpine --privileged --rm -ti --image=alpine -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # apk update && apk add --no-cache openssh-client
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/aarch64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/aarch64/APKINDEX.tar.gz
v3.15.0-341-gbacd756db7 [https://dl-cdn.alpinelinux.org/alpine/v3.15/main]
v3.15.0-340-g4ed6115e99 [https://dl-cdn.alpinelinux.org/alpine/v3.15/community]
OK: 15734 distinct packages available
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/aarch64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/aarch64/APKINDEX.tar.gz
(1/6) Installing openssh-keygen (8.8_p1-r1)
(2/6) Installing ncurses-terminfo-base (6.3_p20211120-r0)
(3/6) Installing ncurses-libs (6.3_p20211120-r0)
(4/6) Installing libedit (20210910.3.1-r0)
(5/6) Installing openssh-client-common (8.8_p1-r1)
(6/6) Installing openssh-client-default (8.8_p1-r1)
Executing busybox-1.34.1-r3.trigger
OK: 10 MiB in 20 packages
/ # ssh fedora@10.42.0.19
The authenticity of host '10.42.0.19 (10.42.0.19)' can't be established.
ED25519 key fingerprint is SHA256:wpD6lU+d2T5R++UiRfuu5UyhR7VqCIy7F7kIRBabs6Q.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.19' (ED25519) to the list of known hosts.
fedora@10.42.0.19's password:
[fedora@vmi-fedora ~]$ ping google.com
PING google.com (142.251.40.238) 56(84) bytes of data.
64 bytes from lga34s39-in-f14.1e100.net (142.251.40.238): icmp_seq=1 ttl=117 time=4.35 ms
64 bytes from lga34s39-in-f14.1e100.net (142.251.40.238): icmp_seq=2 ttl=117 time=3.99 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 3.994/4.171/4.348/0.177 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.42.0.19 closed.
/ # exit
Session ended, resume using 'kubectl attach alpine -c alpine -i -t' command when the pod is running
pod "alpine" deleted
[root@fedora vmi]# oc delete -f vmi-fedora.yaml
virtualmachineinstance.kubevirt.io "vmi-fedora" deleted
[root@fedora vmi]#

6. Running a Virtual Machine on MicroShift

This needs the kubevirt-operator-arm64 installed as in “5. Running a Virtual Machine Instance on MicroShift” above.

Copy the virtctl binary from the prebuilt image to /usr/local/bin:

id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id

Output:

[root@fedora vmi]# id=$(podman create docker.io/karve/kubevirt:arm64)
Trying to pull docker.io/karve/kubevirt:arm64...
Getting image source signatures
Copying blob 7065f6098427 done
Copying config 1c7a5aa443 done
Writing manifest to image destination
Storing signatures
[root@fedora vmi]# podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
[root@fedora vmi]# podman rm -v $id
5d82509f5fb6472b8a98dbaf8ce02f48a61343028d28c58c4a1cd6a481bf38dd

The virtctl binary can also be built as shown in the “Building KubeVirt” section.

Create the Virtual Machine using vm-fedora.yaml. The Virtual Machine will initially show a Stopped status.

cd /root/microshift/raspberry-pi/vmi
oc apply -f vm-fedora.yaml
oc get pods,vm,vmi -n default

Output:

[root@fedora vmi]# oc apply -f vm-fedora.yaml
virtualmachine.kubevirt.io/vm-example created
[root@fedora vmi]# oc get vm,vmi,pods -n default
NAME                                    AGE   STATUS    READY
virtualmachine.kubevirt.io/vm-example   33s   Stopped   False

Start the virtual machine

virtctl start vm-example -n default

or

kubectl patch virtualmachine vm-example --type merge -p  '{"spec":{"running":true}}' -n default
oc get pods,vm,vmi -n default

Output:

[root@fedora vmi]# virtctl start vm-example -n default
VM vm-example was scheduled to start
[root@fedora vmi]# oc get vm,vmi,pods -n default
NAME                                    AGE   STATUS     READY
virtualmachine.kubevirt.io/vm-example   70s   Starting   False

NAME                                            AGE   PHASE       IP    NODENAME             READY
virtualmachineinstance.kubevirt.io/vm-example   7s    Scheduled         fedora.example.com   False

NAME                                 READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vm-example-6h6s9   2/2     Running   0          7s

[root@fedora vmi]# oc get vm,vmi,pods -n default
NAME                                    AGE     STATUS    READY
virtualmachine.kubevirt.io/vm-example   7m54s   Running   True

NAME                                            AGE     PHASE     IP           NODENAME             READY
virtualmachineinstance.kubevirt.io/vm-example   6m51s   Running   10.42.0.22   fedora.example.com   True

NAME                                 READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vm-example-6h6s9   2/2     Running   0          6m52s

We can use virtctl to connect to the vm-example VMI with “virtctl console”.

virtctl console vm-example
# Login as fedora/fedora
exit

Output:

[root@fedora vmi]# virtctl console vm-example -n default
Successfully connected to vm-example console. The escape sequence is ^]

vm-example login: fedora
Password:
[fedora@vm-example ~]$ ping google.com
PING google.com (142.250.64.78) 56(84) bytes of data.
64 bytes from lga34s30-in-f14.1e100.net (142.250.64.78): icmp_seq=1 ttl=116 time=4.90 ms
64 bytes from lga34s30-in-f14.1e100.net (142.250.64.78): icmp_seq=2 ttl=116 time=4.00 ms
64 bytes from lga34s30-in-f14.1e100.net (142.250.64.78): icmp_seq=3 ttl=116 time=4.33 ms

--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 3.996/4.408/4.900/0.373 ms
[fedora@vm-example ~]$ exit
logout

Fedora 32 (Cloud Edition)
Kernel 5.6.6-300.fc32.aarch64 on an aarch64 (ttyAMA0)

SSH host key: SHA256:tdv1ws2EChY3geGQMc8Hc54fpMsaE7badmW7gJci7AI (RSA)
SSH host key: SHA256:431Is8lmx16VHkN5fatO1ZKB2RpvtO1nZ4I/S4GOSV4 (ECDSA)
SSH host key: SHA256:oxxHYVY++9z6TBJ4GH4am49Gl++VQ7ao9ZHhr0iYMj8 (ED25519)
eth0: 10.0.2.2 fe80::5054:ff:fe8d:6116
vm-example login: [root@fedora vmi]# 

Alternatively, note down the ip address of vm-example and ssh to it from an sshclient container as shown in the previous sample.

We can access the OKD Web Console and run Actions (Restart, Stop, Pause, etc.) on the VM at http://console-np-service-kube-system.cluster.local/

Stop the virtual machine using virtctl:

virtctl stop vm-example -n default

or the kubectl patch command:

kubectl patch virtualmachine vm-example --type merge -p '{"spec":{"running":false}}' -n default

Output:

[root@fedora vmi]# oc get vm,vmi,pods -n default
NAME                                    AGE   STATUS     READY
virtualmachine.kubevirt.io/vm-example   61m   Stopping   False

NAME                                            AGE   PHASE     IP           NODENAME             READY
virtualmachineinstance.kubevirt.io/vm-example   60m   Running   10.42.0.22   fedora.example.com   False

NAME                                 READY   STATUS        RESTARTS   AGE
pod/virt-launcher-vm-example-6h6s9   2/2     Terminating   0          60m

7. Install Metrics Server

This will enable us to run the “kubectl top” and “oc adm top” commands.

curl -L https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml > metrics-server-components.yaml
oc apply -f metrics-server-components.yaml

# Wait for the metrics-server to start in the kube-system namespace
oc get deployment metrics-server -n kube-system
oc get events -n kube-system
oc logs deployment/metrics-server -n kube-system

Output:

[root@fedora vmi]# oc adm top pods -A
NAMESPACE                       NAME                                  CPU(cores)   MEMORY(bytes)
kube-system                     console-deployment-77dbf58b88-2jvdp   2m           16Mi
kube-system                     kube-flannel-ds-68rgv                 7m           10Mi
kube-system                     metrics-server-64cf6869bd-n6ghb       17m          15Mi
kubevirt                        virt-api-7c66dc7874-c29t8             8m           85Mi
kubevirt                        virt-api-7c66dc7874-lnxmb             6m           85Mi
kubevirt                        virt-controller-8546dcf8c-4hh82       10m          85Mi
kubevirt                        virt-controller-8546dcf8c-vm76z       11m          82Mi
kubevirt                        virt-handler-q7n27                    3m           95Mi
kubevirt                        virt-operator-6d4bff4fd4-bpw9w        10m          104Mi
kubevirt                        virt-operator-6d4bff4fd4-n46hq        6m           93Mi
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-xbqs4   2m           6Mi
nodered                         nodered-deployment-5bdddb5d94-szw4p   1m           83Mi
openshift-dns                   dns-default-jxt49                     7m           19Mi
openshift-dns                   node-resolver-s7p2t                   0m           4Mi
openshift-ingress               router-default-85bcfdd948-bsd44       3m           38Mi
openshift-service-ca            service-ca-7764c85869-nqnxc           13m          33Mi
[root@fedora vmi]# oc adm top nodes
NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
fedora.example.com   626m         15%    4004Mi          51%

8. InfluxDB/Telegraf/Grafana

The source code for this InfluxDB sample is available on GitHub:

cd ~
git clone https://github.com/thinkahead/microshift.git
cd microshift/raspberry-pi/influxdb

We create and push the “measure-fedora:latest” image using the Dockerfile. If you want to run all the steps in a single command, just execute the runall-fedora.sh script:

./runall-fedora.sh
oc get route grafana-service -o jsonpath --template="http://{.spec.host}/login{'\n'}"

Output:

[root@fedora influxdb]# oc get route grafana-service -o jsonpath --template="http://{.spec.host}/login{'\n'}"
http://grafana-service-influxdb.cluster.local/login

The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana. Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage).

Open the Analysis Server dashboard to display monitoring information for MicroShift.

Analysis Server


Open the Balena Sense dashboard to show the temperature, pressure and humidity from SenseHat.

SenseHat Gauges


Finally, after you are done working with this sample, you can run the deleteall-fedora.sh script:

./deleteall-fedora.sh

Cleanup MicroShift

We can use the ~/microshift/hack/cleanup.sh script to remove the pods and images.

Output:

[root@fedora vmi]# cd ~/microshift/hack 
[root@fedora hack]# ./cleanup.sh
DATA LOSS WARNING: Do you wish to stop and cleanup ALL MicroShift data AND cri-o container workloads?
1) Yes
2) No
#? 1
Stopping microshift
Removing crio pods
Removing crio containers
Removing crio images
Killing conmon, pause processes
Removing /var/lib/microshift
Cleanup succeeded

Containerized MicroShift on Fedora Server

We can run MicroShift within containers in two ways:

  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, the CRI-O systemd service runs directly on the host, and data is stored at /var/lib/microshift and /var/lib/kubelet on the host.
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a container and data is stored in a container volume, microshift-data. This should be used for “Testing and Development” only.

Microshift Containerized

We can use the latest prebuilt image.

IMAGE=quay.io/microshift/microshift:4.8.0-0.microshift-2022-03-11-124751-linux-arm64
podman pull $IMAGE

podman run --rm --ipc=host --network=host --privileged -d --name microshift -v /var/run:/var/run -v /sys:/sys:ro -v /var/lib:/var/lib:rw,rshared -v /lib/modules:/lib/modules -v /etc:/etc -v /run/containers:/run/containers -v /var/log:/var/log -v /var/hpvolumes:/var/hpvolumes -e KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig $IMAGE
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "podman ps;oc get nodes;oc get pods -A;crictl pods"

The microshift container runs under podman and the rest of the pods run under crio. Now we can run the samples shown earlier.

After we are done, we can delete the microshift container. The --rm flag we used in the podman run will delete the container when we stop it.

podman stop microshift

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.

MicroShift Containerized All-In-One

Let’s stop crio on the host; we will be creating an all-in-one container that runs crio within the container.

systemctl stop crio
systemctl disable crio

We will run the all-in-one MicroShift in podman using prebuilt images. I had to mount /sys/fs/cgroup in the podman run command; the “sudo setsebool -P container_manage_cgroup true” did not work. We can just volume mount /sys/fs/cgroup into the container using -v /sys/fs/cgroup:/sys/fs/cgroup:ro. This mounts /sys/fs/cgroup into the container as read-only, but the subdirectory mount points are mounted in as read/write.

podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-03-11-124751-linux-nft-arm64

We can inspect the microshift-data volume to find the path

[root@fedora hack]# podman volume inspect microshift-data
[
    {
        "Name": "microshift-data",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/microshift-data/_data",
        "CreatedAt": "2022-03-13T10:51:41.468851891-04:00",
        "Labels": {},
        "Scope": "local",
        "Options": {}
    }
]

On the host Raspberry Pi, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint shown above:

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A"

The crio service is stopped on the Raspberry Pi, so crictl commands will not work directly on the Pi; they do work within the microshift container in podman:

podman exec -it microshift crictl ps -a

We can run the samples through the crio within the all-in-one podman container. To run the Virtual Machine examples in the all-in-one MicroShift, we need to execute the mount with --make-shared as follows in the microshift container. We may also preload the virtual machine images.

mount --make-shared /

Output:

[root@fedora kubevirt]# podman exec -it microshift bash
[root@fedora /]# mount --make-shared /
[root@fedora /]# crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
Image is up to date for quay.io/kubevirt/fedora-cloud-container-disk-demo@sha256:4de55b9ed3a405cdc74e763f6a7c05fe4203e34a8720865555db8b67c36d604b
[root@fedora /]# exit
exit

Building KubeVirt

We can build the KubeVirt binaries from source.

git clone https://github.com/kubevirt/kubevirt.git
cd kubevirt
vi ~/kubevirt/hack/common.sh

Hardcode podman in determine_cri_bin() as shown below to avoid the error “no working container runtime found. Neither docker nor podman seems to work.”

determine_cri_bin() {
    echo podman
}

Run the make command below to build KubeVirt

make bazel-build

After the build is complete, we can view the kubevirt-bazel-server container

podman exec kubevirt-bazel-server ls _out/cmd/virtctl
podman stop kubevirt-bazel-server

Output:

[root@fedora vmi]# podman exec kubevirt-bazel-server ls _out/cmd/virtctl
virtctl
virtctl-v0.51.0-96-gadd52d8c0-darwin-amd64
virtctl-v0.51.0-96-gadd52d8c0-linux-amd64
virtctl-v0.51.0-96-gadd52d8c0-windows-amd64.exe

Copy the virtctl binary to /usr/local/bin

cp _out/cmd/virtctl/virtctl /usr/local/bin
cd ..
rm -rf kubevirt

Problems

1. OSError: Cannot detect RPi-Sense FB device

[root@fedora ~]# python3 sparkles.py
Traceback (most recent call last):
  File "sparkles.py", line 4, in <module>
    sense = SenseHat()
  File "/usr/local/lib/python3.6/site-packages/sense_hat/sense_hat.py", line 39, in __init__
    raise OSError('Cannot detect %s device' % self.SENSE_HAT_FB_NAME)
OSError: Cannot detect RPi-Sense FB device

To solve this, use the new sense_hat.py that uses smbus.

2. CrashLoopBackOff: dns-default and service-ca

The following patch may be required if the dns-default pod in the openshift-dns namespace keeps restarting.

oc patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","resources": {"requests": {"cpu": "80m","memory": "90Mi"}}}]}}}}'

You may also need to patch the service-ca deployment if it keeps restarting:

oc patch deployments/service-ca -n openshift-service-ca -p '{"spec": {"template": {"spec": {"containers": [{"name": "service-ca-controller","args": ["-v=4"]}]}}}}'

Conclusion

In this Part 11, we saw multiple options to run MicroShift on the Raspberry Pi 4 with Fedora 35 Server. We ran samples that used a persistent volume for PostgreSQL, the Sense Hat, and the USB camera. We saw an object detection sample that sent pictures and web socket messages to Node Red when a person was detected. Next, we installed the OKD Web Console and saw how to manage a VM using KubeVirt on MicroShift. Finally, we installed InfluxDB/Telegraf/Grafana with a dashboard to show SenseHat sensor data. We will work with Kata Containers in Part 23. In the next Part 12, we will work with MicroShift on the Raspberry Pi 4 with Fedora CoreOS.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.
