Infrastructure as a Service


MicroShift – Part 15: Raspberry Pi 4 with openSUSE

By Alexei Karve posted Sun May 08, 2022 06:17 PM


MicroShift and KubeVirt on Raspberry Pi 4 with openSUSE Tumbleweed (64 bit)

Introduction

MicroShift is a research project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit) and further in Part 9, we looked at Virtualization with MicroShift on Raspberry Pi 4 with Raspberry Pi OS (64 bit). In Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with the CentOS 8 Stream (64 bit). In Part 6, we deployed MicroShift on the Raspberry Pi 4 with Ubuntu 20.04 (64 bit) and in Part 13 with Ubuntu 22.04 (64 bit). In Part 8, we looked at the All-In-One install of MicroShift on balenaOS. In Part 10, Part 11, and Part 12, we deployed MicroShift and KubeVirt on Fedora IoT, Fedora Server and Fedora CoreOS respectively and in Part 14 on Rocky Linux. In this Part 15, we will work with MicroShift on openSUSE. We will run an object detection sample and send messages to Node Red installed on MicroShift. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift. Finally, we will run Jupyter notebooks on MicroShift.

Leap is openSUSE's regular release, with specific versions published on a regular cadence. Tumbleweed is a rolling release: the distribution is constantly updating. While Leap aims to be rock-solid, Tumbleweed rolls. Tumbleweed gets updates on a continuous basis, usually several times a week. However, to bring some order and make it easier to manage producing and consuming updates, Tumbleweed fetches updates in batches. A single batch of updates is called a snapshot, produced by the Open Build Service. Tumbleweed snapshots should not be confused with Btrfs snapshots, which allow Btrfs users, using tools like YaST2 and snapper, to roll back to an earlier system state. We will use the Just enough Operating System (JeOS) image - a very basic system with no graphical desktop. JeOS provides ready to deploy server images.
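To check which Tumbleweed snapshot you are on, and to list the Btrfs snapshots mentioned above, a quick sketch (assuming the root filesystem is Btrfs and the snapper tool is installed; JeOS may not ship snapper by default):

grep VERSION_ID /etc/os-release  # the Tumbleweed snapshot you are running
snapper list                     # Btrfs filesystem snapshots, if any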

Setting up the Raspberry Pi 4 with openSUSE

Run the following steps to download the openSUSE Linux image and setup the Raspberry Pi 4.

  1. Download the openSUSE Tumbleweed raspberrypi aarch64 image from http://download.opensuse.org/ports/aarch64/tumbleweed/appliances/
  2. Write the image to a MicroSDXC card using balenaEtcher or the Raspberry Pi Imager
  3. Optionally, have a Keyboard and Monitor connected to the Raspberry Pi 4
  4. Insert the MicroSDXC card into the Raspberry Pi 4 and power on
  5. Find the ethernet dhcp ip address of your Raspberry Pi 4 by running nmap on your MacBook with your subnet
$ sudo nmap -sn 192.168.1.0/24
Nmap scan report for 192.168.1.225
Host is up (0.0043s latency).
MAC Address: E4:5F:01:2E:D8:95 (Raspberry Pi Trading)
  6. Login using the keyboard attached to the Raspberry Pi 4 or ssh to the ethernet ip address above with user root and password linux.
$ ssh root@192.168.1.225
  7. Check the disk
localhost:~ # fdisk -lu
Disk /dev/mmcblk0: 58.24 GiB, 62534975488 bytes, 122138624 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xed503f1b

Device         Boot   Start       End   Sectors  Size Id Type
/dev/mmcblk0p1         8192    139263    131072   64M  c W95 FAT32 (LBA)
/dev/mmcblk0p2       139264   1163263   1024000  500M 82 Linux swap / Solaris
/dev/mmcblk0p3      1163264 122138623 120975360 57.7G 83 Linux
  8. Optionally, enable wifi
yast

System->Network Settings->Edit wlan0

Set the Network Name and the WPA-PSK and Click OK

  9. Check the release
localhost:~ # cat /etc/os-release
NAME="openSUSE Tumbleweed"
# VERSION="20220419"
ID="opensuse-tumbleweed"
ID_LIKE="opensuse suse"
VERSION_ID="20220419"
PRETTY_NAME="openSUSE Tumbleweed"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:tumbleweed:20220419"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/"
DOCUMENTATION_URL="https://en.opensuse.org/Portal:Tumbleweed"
LOGO="distributor-logo-Tumbleweed"
  10. Set the hostname with a domain and add the ipv4 address from above to /etc/hosts
hostnamectl set-hostname opensuse.example.com
echo "$ipaddress opensuse.example.com opensuse" >> /etc/hosts
  11. Install the updates and reboot
sudo zypper refresh
sudo zypper update -y
reboot
  12. Check the cgroup - A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. Cgroups are a key component of containers because there are often multiple processes running in a container that you need to control together. In MicroShift, cgroups are used to implement resource requests and limits and corresponding QoS classes at the pod level. A sketch for inspecting the pod cgroups is shown after the output below.
ssh root@$ipaddress
mount | grep cgroup
cat /proc/cgroups | column -t # Check that memory and cpuset are present

Output:

localhost:~ # mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
localhost:~ # cat /proc/cgroups | column -t # Check that memory and cpuset are present
#subsys_name  hierarchy  num_cgroups  enabled
cpuset        0          42           1
cpu           0          42           1
cpuacct       0          42           1
blkio         0          42           1
memory        0          42           1
devices       0          42           1
freezer       0          42           1
net_cls       0          42           1
perf_event    0          42           1
net_prio      0          42           1
hugetlb       0          42           1
pids          0          42           1
rdma          0          42           1
misc          0          42           1
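
As a quick illustration of how these cgroups are used, once MicroShift is running (later in this article) the pod cgroups can be inspected under the unified hierarchy. This is a sketch assuming the default systemd cgroup layout (kubepods.slice); the exact slice names on your system may differ:

# List the controllers available on the unified cgroup v2 hierarchy
cat /sys/fs/cgroup/cgroup.controllers
# Pod cgroups appear under kubepods.slice once the kubelet is running
ls /sys/fs/cgroup/kubepods.slice 2>/dev/null
# Each pod/container cgroup exposes its limits, for example:
# cat /sys/fs/cgroup/kubepods.slice/<pod-slice>/memory.max
# cat /sys/fs/cgroup/kubepods.slice/<pod-slice>/cpu.max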

Install sense_hat and RTIMULib on openSUSE

The Sense HAT is an add-on board for the Raspberry Pi. The Sense HAT has an 8 × 8 RGB LED matrix, a five – button joystick and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, Humidity. If you have the Sense HAT attached, install the libraries.

Install sensehat

zypper install -y zlib-devel gcc gcc-c++ i2c-tools python38-devel python38 python38-pip cmake git
pip3 install Cython Pillow numpy sense_hat

If you have the Sense HAT attached, test it with i2cdetect after loading the i2c-dev kernel module.

modprobe i2c_dev
i2cdetect -y 1

Output:

opensuse:~ # i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Add the i2c-dev line to /etc/modules-load.d/i2c_dev.conf to load the kernel module automatically on boot. Writing to /etc/modules does not work.

cat << EOF >> /etc/modules-load.d/i2c_dev.conf
i2c-dev
EOF

Create the file /etc/udev/rules.d/99-i2c.rules with the following contents:

cat << EOF >> /etc/udev/rules.d/99-i2c.rules
KERNEL=="i2c-[0-7]",MODE="0666"
EOF
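
To apply the new udev rule without rebooting, reload the rules and check the i2c device permissions (standard udevadm commands):

udevadm control --reload-rules
udevadm trigger
ls -l /dev/i2c-*  # the device nodes should now have mode 0666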

The Raspberry Pi build comes with the Industrial I/O modules preloaded. We get initialization errors on some of the sensors because the Industrial I/O modules claim the i2c sensors on the Sense HAT and prevent them from being read correctly. Check this with “lsmod | grep st_”.

opensuse:~ # lsmod | grep st_
st_pressure_spi        16384  0
st_magn_spi            16384  0
st_sensors_spi         16384  2 st_pressure_spi,st_magn_spi
st_pressure_i2c        16384  0
st_magn_i2c            16384  0
st_pressure            20480  2 st_pressure_i2c,st_pressure_spi
st_magn                20480  2 st_magn_i2c,st_magn_spi
st_sensors_i2c         16384  2 st_pressure_i2c,st_magn_i2c
st_sensors             28672  6 st_pressure,st_pressure_i2c,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi
industrialio_triggered_buffer    16384  2 st_pressure,st_magn
industrialio           98304  9 st_pressure,industrialio_triggered_buffer,st_sensors,st_pressure_i2c,kfifo_buf,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi

We need to blacklist the modules and reboot for the change to take effect.

cat << EOF > /etc/modprobe.d/blacklist-industrialio.conf
blacklist st_magn_spi
blacklist st_pressure_spi
blacklist st_sensors_spi
blacklist st_pressure_i2c
blacklist st_magn_i2c
blacklist st_pressure
blacklist st_magn
blacklist st_sensors_i2c
blacklist st_sensors
blacklist industrialio_triggered_buffer
blacklist industrialio
EOF

reboot

Install RTIMULib

zypper install -y git
git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig

# Optional test the sensors
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11 # Ctrl-C to break
cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10 # Ctrl-C to break

# Optional
zypper install -y libqt5-qtbase-devel
cd /root/RTIMULib/Linux/RTIMULibDemoGL
qmake-qt5
make -j4
make install

Check the Sense Hat with i2cdetect and check that the i2c sensors are no longer being held.

i2cdetect -y 1
lsmod | grep st_

Output:

opensuse:~ # i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- 46 -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

opensuse:~ # lsmod | grep st_
vhost_net              32768  0
vhost                  53248  1 vhost_net
vhost_iotlb            16384  1 vhost
tap                    32768  1 vhost_net
tun                    61440  1 vhost_net

We can work with the Sense HAT using smbus as in the Fedora installs from previous parts. We already installed the default libraries. Now we overwrite /usr/local/lib/python3.8/site-packages/sense_hat/sense_hat.py with the new code that uses SMBus.

pip3 install smbus
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift
cd raspberry-pi/sensehat-fedora-iot
cp sense_hat.py.new /usr/local/lib/python3.8/site-packages/sense_hat/sense_hat.py
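
As a quick smoke test of the replaced library, the following minimal check reads the sensors and scrolls a message on the LED matrix (assuming the Sense HAT is attached and the smbus-based sense_hat.py is now in place):

python3 - << 'EOF'
from sense_hat import SenseHat

sense = SenseHat()
# Read the environmental sensors
print("Temperature: %.1f C" % sense.get_temperature())
print("Pressure: %.1f millibars" % sense.get_pressure())
print("Humidity: %.1f %%" % sense.get_humidity())
# Scroll a short message on the 8x8 LED matrix
sense.show_message("OK")
EOF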

Test the SenseHat samples for the Sense Hat's LED matrix and sensors.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/sensehat-fedora-iot

# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt
Rainbow sample

# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt

# Show two digits for multiple numbers
python3 digits.py

# First time you run the temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py 

# Use the new get_state method from sense_hat.py
python3 joystick.py # U=Up D=Down L=Left R=Right M=Press 

# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py

# Find Magnetic North
python3 compass.py

pip3 install pygame --upgrade
python3 testcam.py # It will create a file 101.bmp

Install MicroShift on the Raspberry Pi 4 openSUSE host

Install the dependencies and copy the latest microshift prebuilt binary

zypper install -y selinux-policy
zypper install -y firewalld cri-o cri-tools
ARCH=arm64
VERSION=$(curl -s https://api.github.com/repos/redhat-et/microshift/releases | grep tag_name | head -n 1 | cut -d '"' -f 4)
curl -LO https://github.com/redhat-et/microshift/releases/download/$VERSION/microshift-linux-$ARCH
curl -LO https://github.com/redhat-et/microshift/releases/download/$VERSION/release.sha256
BIN_SHA="$(sha256sum microshift-linux-$ARCH | awk '{print $1}')"
KNOWN_SHA="$(grep "microshift-linux-$ARCH" release.sha256 | awk '{print $1}')"
if [[ "$BIN_SHA" != "$KNOWN_SHA" ]]; then
    echo "SHA256 checksum failed" && exit 1
fi
sudo chmod +x microshift-linux-$ARCH
sudo mv microshift-linux-$ARCH /usr/bin/microshift
wget https://raw.githubusercontent.com/redhat-et/microshift/main/packaging/systemd/microshift.service
mv microshift.service /usr/lib/systemd/system/microshift.service

Install KVM on the host as shown below. If you get “curl error 52 or 16” or a “Timeout exceeded” error during installation, just run the “zypper refresh” and “zypper install” commands again. The virt-host-validate command validates that the host is configured in a suitable way to run the libvirt hypervisor driver qemu.

zypper ar -c -f -r http://download.opensuse.org/repositories/Virtualization/openSUSE_Tumbleweed/Virtualization.repo 
zypper refresh
zypper install -y libvirt virt-manager qemu-uefi-aarch64
systemctl enable --now libvirtd
virt-host-validate qemu

Check that cni plugins are present

ls /opt/cni/bin/ # empty
ls /usr/libexec/cni # cni plugins

We will have systemd start and manage MicroShift. Refer to the microshift service for the three approaches.

systemctl enable --now crio microshift

# Copy flannel
cp /opt/cni/bin/flannel /usr/libexec/cni/.

You may read about selecting zones for your interfaces.

sudo systemctl enable firewalld --now
sudo firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=5353/udp --permanent
sudo firewall-cmd --reload

Additional ports may need to be opened. For external access to run kubectl or oc commands against MicroShift, add the 6443 port:

sudo firewall-cmd --zone=public --permanent --add-port=6443/tcp

For access to services through NodePort, add the port range 30000-32767:

sudo firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp

sudo firewall-cmd --reload
firewall-cmd --list-all --zone=public
firewall-cmd --get-default-zone
#firewall-cmd --set-default-zone=public
#firewall-cmd --get-active-zones
firewall-cmd --list-all

Output:

opensuse:~/microshift/raspberry-pi/sensehat-fedora-iot # firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0 wlan0
  sources:
  services: dhcpv6-client ssh
  ports: 6443/tcp 30000-32767/tcp 2379-2380/tcp 80/tcp 443/tcp 10250/tcp 10251/tcp
  protocols:
  forward: yes
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

Check the microshift and crio logs

journalctl -u microshift -f
journalctl -u crio -f

The microshift service references the microshift binary in the /usr/bin working directory.

opensuse:~/microshift/raspberry-pi/sensehat-fedora-iot # cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=/usr/bin/microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

Install the kubectl and the openshift oc client

ARCH=arm64
cd /tmp
export OCP_VERSION=4.9.11 && \
    curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf oc.tar.gz && \
    rm -f oc.tar.gz && \
    install -t /usr/local/bin {kubectl,oc} && \
    rm -f {README.md,kubectl,oc}

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
#watch "kubectl get nodes;kubectl get pods -A;crictl pods;crictl images"
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Samples to run on MicroShift

We will run samples that will show the use of dynamic persistent volume, SenseHat and the USB camera.

1. InfluxDB/Telegraf/Grafana

The source code for this influxdb sample is available in github.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb

If you want to run all the steps in a single command, get the nodename.

oc get nodes

Output:

opensuse:~/microshift/raspberry-pi/influxdb # oc get nodes
NAME                   STATUS   ROLES    AGE     VERSION
opensuse.example.com   Ready    <none>   3m36s   v1.21.0

Replace the annotation kubevirt.io/provisionOnNode with the above nodename and execute the runall-fedora-dynamic.sh.

sed -i "s|coreos|opensuse.example.com|" influxdb-data-dynamic.yaml
sed -i "s|coreos|opensuse.example.com|" grafana/grafana-data-dynamic.yaml

./runall-fedora-dynamic.sh

We create and push the “measure-fedora:latest” image using the Dockerfile that uses SMBus. The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana. This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PV.

  annotations:
    kubevirt.io/provisionOnNode: opensuse.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner 
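
For reference, a minimal PersistentVolumeClaim sketch with these fields filled in (the claim name and size here are illustrative; the actual values are in influxdb-data-dynamic.yaml and grafana/grafana-data-dynamic.yaml):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-data  # illustrative name
  annotations:
    kubevirt.io/provisionOnNode: opensuse.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi  # illustrative size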

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.

Finally, after you are done working with this sample, you can run the deleteall-fedora-dynamic.sh

./deleteall-fedora-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes.

2. Node Red live data dashboard with SenseHat sensor charts

We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image “karve/nodered-fedora:arm64”

cd docker-custom/
./docker-debianonfedora.sh
podman push docker.io/karve/nodered-fedora:arm64
cd ..

Deploy Node Red with persistent volume for /data within the node red container

mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml
oc get routes
oc -n nodered wait deployment nodered-deployment --for condition=Available --timeout=300s
oc logs deployment/nodered-deployment -f

Add the ipaddress of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your Laptop and browse to http://nodered-svc-nodered.cluster.local/
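
For example, on a Mac or Linux laptop (replace the address with your Raspberry Pi 4's ip address):

echo "192.168.1.225 nodered-svc-nodered.cluster.local" | sudo tee -a /etc/hosts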

The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 or Flow 2 have already been imported from the nodered sample. This import into Node Red can also be done manually under “Import Nodes”, followed by clicking “Deploy”. The node-red-node-pi-sense-hat module requires a change in sensehat.py in order to use the sense_hat.py.new that uses smbus and the new function for the joystick. This change is accomplished by overwriting sensehat.py in Dockerfile.debianonfedora (docker.io/karve/nodered-fedora:arm64, built using docker-debianonfedora.sh); the modified file is then copied from the /tmp directory to the correct volume when the pod starts in nodered2.yaml.

Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see the Home tab by default. You can see the state of the Joystick: Up, Down, Left, Right or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT.

If you selected the Flow 1, you could click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED.

We can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:

cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered2.yaml -f noderedroute.yaml -n nodered

3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 2.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build an image for object detection that sends pictures and web socket chat messages to Node Red when a person is detected, running as a pod in MicroShift.

podman build -f Dockerfile.fedora -t docker.io/karve/object-detection-raspberrypi4-fedora .
podman push docker.io/karve/object-detection-raspberrypi4-fedora:latest

Update the env WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection-fedora.yaml to point to your raspberry pi 4 ip address.

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.225
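
A convenience sketch for making the hostAliases edit with sed (this assumes the yaml contains the ip field shown above; adjust the address to your network):

sed -i "s|ip: .*|ip: 192.168.1.225|" object-detection-fedora.yaml
grep -A3 hostAliases object-detection-fedora.yaml  # verify the change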

oc project default
oc apply -f object-detection-fedora.yaml

We will see pictures being sent to Node Red when a person is detected at http://nodered-svc-nodered.cluster.local/#flow/3e30dc50ae28f61f and chat messages at http://nodered-svc-nodered.cluster.local/chat. When we are done testing, we can delete the deployment.

oc delete -f object-detection-fedora.yaml

4. Running a Virtual Machine Instance on MicroShift

Find the latest version of the KubeVirt Operator.

LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST

I used the following version:

LATEST=20220507 # If the latest version does not work

oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator

# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt

We can build the OKD Web Console (Codename: “bridge”) from the source as mentioned in Part 9. Here we will run “bridge” as a container image within MicroShift.

cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'

Replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value https://192.168.1.209:6443 with your Raspberry Pi 4's ip address, and set the secretRef token for BRIDGE_K8S_AUTH_BEARER_TOKEN in okd-web-console-install.yaml to the console-token-* value from the two secret names above. Then apply/create the okd-web-console-install.yaml.
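
A sketch for pulling the bearer token out of the console service account's token secret (the secret index may vary; pick the secret whose name starts with console-token-):

SECRET=$(oc get serviceaccount console -n kube-system -o jsonpath='{.secrets[0].name}')
echo "Using secret: $SECRET"
oc get secret "$SECRET" -n kube-system -o jsonpath='{.data.token}' | base64 -d; echo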

oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc -n kube-system wait deployment console-deployment --for condition=Available --timeout=300s
oc logs deployment/console-deployment -f -n kube-system

Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/

We can optionally preload the fedora image into crio. (If using the all-in-one containerized approach, this needs to be run within the microshift pod running in podman.)

crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.

cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods

The output for the virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”. Note down the ip address of the vmi-fedora Virtual Machine Instance. Connect directly to the VMI from the Raspberry Pi 4 with fedora as the password. Note that it will take another minute after the VMI goes to the Running state before you can ssh to the instance.
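
Instead of watching, you can also block until the VMI reports Ready and then read its ip address (standard oc wait against the KubeVirt Ready condition):

oc wait vmi vmi-fedora --for condition=Ready --timeout=300s
oc get vmi vmi-fedora -o jsonpath='{.status.interfaces[0].ipAddress}'; echo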

Output:

opensuse:~/microshift/raspberry-pi/vmi # oc get vmi -o wide
NAME         AGE   PHASE     IP           NODENAME   READY   LIVE-MIGRATABLE   PAUSED
vmi-fedora   17m   Running   10.42.0.21   opensuse   True    False
opensuse:~/microshift/raspberry-pi/vmi # ssh fedora@10.42.0.21 "bash -c \"ping -c 2 google.com\""
fedora@10.42.0.21's password:
PING google.com (142.251.40.206) 56(84) bytes of data.
64 bytes from lga34s38-in-f14.1e100.net (142.251.40.206): icmp_seq=1 ttl=116 time=5.68 ms
64 bytes from lga34s38-in-f14.1e100.net (142.251.40.206): icmp_seq=2 ttl=116 time=9.02 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 5.680/7.350/9.020/1.670 ms

A second way is to create a pod that runs the ssh client and connect to the Fedora VM from within that pod. Let’s create that openssh-client pod:

oc run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client

or

oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
#oc attach sshclient -c sshclient -i -t

Then, ssh to the Fedora VMI from this openssh-client container.

Output:

opensuse:~/microshift/raspberry-pi/vmi # oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ssh fedora@10.42.0.21 "bash -c \"ping -c 2 google.com\""
The authenticity of host '10.42.0.21 (10.42.0.21)' can't be established.
ED25519 key fingerprint is SHA256:QTq2EnaExn8e6AQVF62bjcuqxEwebF/L3T++u34oWlg.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.21' (ED25519) to the list of known hosts.
fedora@10.42.0.21's password:
PING google.com (142.250.80.46) 56(84) bytes of data.
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=1 ttl=116 time=4.46 ms
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=2 ttl=116 time=7.57 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.458/6.014/7.571/1.556 ms
/ # exit
Session ended, resume using 'oc attach sshclient -c sshclient -i -t' command when the pod is running
pod "sshclient" deleted

A third way to connect to the VM is to use the virtctl console. You can compile your own virtctl as described in Part 9. To simplify, we copy the virtctl arm64 binary from a prebuilt container image to /usr/local/bin on the Raspberry Pi 4.

zypper install -y podman
id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id

Output:

opensuse:~/microshift/raspberry-pi/vmi # id=$(podman create docker.io/karve/kubevirt:arm64)
Trying to pull docker.io/karve/kubevirt:arm64...
Getting image source signatures
Copying blob 7065f6098427 done
Copying config 1c7a5aa443 done
Writing manifest to image destination
Storing signatures
WARN[0078] Error validating CNI config file /etc/cni/net.d/10-flannel.conflist: [failed to find plugin "flannel" in path [/usr/libexec/cni]]
opensuse:~/microshift/raspberry-pi/vmi # podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
opensuse:~/microshift/raspberry-pi/vmi # podman rm -v $id
a1d1d4b1e2de771d4c22718dcb5ff45be4aecb33136d03346ca2a8b58aed9f6a
opensuse:~/microshift/raspberry-pi/vmi # virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]

vmi-fedora login: fedora
Password:
Last login: Sat May  7 10:59:28 from 10.42.0.1
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.65.238) 56(84) bytes of data.
64 bytes from lga25s73-in-f14.1e100.net (142.250.65.238): icmp_seq=1 ttl=117 time=5.84 ms
64 bytes from lga25s73-in-f14.1e100.net (142.250.65.238): icmp_seq=2 ttl=117 time=8.45 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 5.838/7.145/8.452/1.307 ms
[fedora@vmi-fedora ~]$ # Ctrl-] to detach
opensuse:~/microshift/raspberry-pi/vmi #

When done, we can delete the VMI

oc delete -f vmi-fedora.yaml

We can run other VM and VMI samples for the alpine, cirros and fedora images as in Part 9. When done, you may delete the KubeVirt operator.

oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml

5. Install Metrics Server

This will enable us to run the “kubectl top” and “oc adm top” commands.

zypper install -y wget jq
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml
oc apply -f metrics-server-components.yaml

# Wait for the metrics-server to start in the kube-system namespace
oc get deployment metrics-server -n kube-system
oc get events -n kube-system
# Wait for a couple of minutes for metrics to be collected
oc get --raw /apis/metrics.k8s.io/v1beta1/nodes
oc get --raw /apis/metrics.k8s.io/v1beta1/pods
oc get --raw /api/v1/nodes/$(kubectl get nodes -o json | jq -r '.items[0].metadata.name')/proxy/stats/summary

watch "kubectl top nodes;kubectl top pods -A"
watch "oc adm top nodes;oc adm top pods -A"

Output:

NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
opensuse   603m         15%    3171Mi          40%
NAMESPACE                       NAME                                  CPU(cores)   MEMORY(bytes)
kube-system                     console-deployment-78c48cbd7d-ssh8s   1m           32Mi
kube-system                     kube-flannel-ds-nfnrx                 7m           27Mi
kube-system                     metrics-server-64cf6869bd-vdhf7       10m          20Mi
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-67mx4   1m           22Mi
openshift-dns                   dns-default-brlt7                     5m           54Mi
openshift-dns                   node-resolver-9wpl4                   0m           14Mi
openshift-ingress               router-default-85bcfdd948-jct7s       2m           59Mi
openshift-service-ca            service-ca-7764c85869-7lndh           10m          59Mi
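
If you want the raw metrics API output in a more readable form, a jq sketch that summarizes node usage:

oc get --raw /apis/metrics.k8s.io/v1beta1/nodes | \
  jq -r '.items[] | "\(.metadata.name) cpu=\(.usage.cpu) memory=\(.usage.memory)"'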

We can delete the metrics server using

oc delete -f metrics-server-components.yaml

6. Run a jupyter notebook sample for handwritten digit recognition

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook

We will create a password for accessing the jupyter notebook.

export JUPYTER_PASSWORD=mysecretpassword
pip3 install ipython_genutils
python3 jupyterpass.py

Output:

opensuse:~/microshift/raspberry-pi/tensorflow-notebook # export JUPYTER_PASSWORD=mysecretpassword
opensuse:~/microshift/raspberry-pi/tensorflow-notebook # pip3 install ipython_genutils
Requirement already satisfied: ipython_genutils in /usr/lib/python3.8/site-packages (0.2.0)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
opensuse:~/microshift/raspberry-pi/tensorflow-notebook # python3 jupyterpass.py
sha1:02bcaec4bfe6:cfe599ec414c186698b20dc0ffb7f13caf5edf20
True
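
If you prefer not to use the helper script, the same kind of hash can be produced with the notebook package's passwd helper (assuming the classic notebook package is installed; newer versions may emit an argon2 hash instead of sha1):

python3 -c "from notebook.auth import passwd; print(passwd('mysecretpassword'))"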

We create a pod with the image jupyter/scipy-notebook that is available for arm64, using digit-recognition.yaml. You may update the password for the notebook in the digit-recognition.yaml args using the password sha1 generated above.

vi digit-recognition.yaml # Update the password
oc apply -f digit-recognition.yaml

Output:

opensuse:~/microshift/raspberry-pi/tensorflow-notebook # oc apply -f digit-recognition.yaml
pod/digit-recognition created
service/digit-recognition-svc created
route.route.openshift.io/digit-recognition-route created
opensuse:~/microshift/raspberry-pi/tensorflow-notebook # oc get routes
NAME                      HOST/PORT                                       PATH   SERVICES                PORT   TERMINATION   WILDCARD
digit-recognition-route   digit-recognition-route-default.cluster.local          digit-recognition-svc   5001                 None

This digit-recognition pod downloads the handwritten-digits sample notebook in the initContainer, then runs the “jupyter notebook” command in the Container.

Add the ipaddress of the Raspberry Pi 4 device for digit-recognition-route-default.cluster.local to /etc/hosts on your laptop. When the pod status shows Running, browse to http://digit-recognition-route-default.cluster.local/notebooks/work/digits.ipynb. The default password is mysecretpassword. We can run the cells in the notebook. The notebook loads a simple dataset of 8×8 gray level images of handwritten digits. We visualize the dataset in 2D and 3D using Principal Component Analysis and show that PCA can also be used as a filtering approach for noisy data. Then we train a Support Vector Machine on the digits dataset. Finally, we use cross validation to repeat the train/test split several times to get a more accurate estimate of the real test score by averaging the values found on the individual runs. You may add custom notebooks. For example: Click on File->Open from URL https://raw.githubusercontent.com/thinkahead/DeveloperRecipes/master/Notebooks/digits.ipynb
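
For a feel of what the notebook does, here is a minimal sketch of the same workflow using scikit-learn directly (assuming scikit-learn is installed; the actual notebook adds the 2D/3D visualizations):

python3 - << 'EOF'
# Load 8x8 handwritten digits, project with PCA, train an SVM, cross-validate
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

digits = load_digits()                      # 1797 images, 64 features each
pca = PCA(n_components=2).fit(digits.data)  # 2D projection for visualization
print("Explained variance:", pca.explained_variance_ratio_)

svc = SVC(gamma=0.001)                      # Support Vector Machine classifier
scores = cross_val_score(svc, digits.data, digits.target, cv=5)
print("Cross-validated accuracy: %.3f" % scores.mean())
EOF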

When we are done working with the digit recognition sample notebook, we can delete it as follows:

oc delete -f digit-recognition.yaml 

Output:

opensuse:~/microshift/raspberry-pi/tensorflow-notebook # oc delete -f digit-recognition.yaml
pod "digit-recognition" deleted
service "digit-recognition-svc" deleted
route.route.openshift.io "digit-recognition-route" deleted

7. Run a jupyter notebook sample for license plate recognition

We will run the sample described at the Red Hat OpenShift Data Science Workshop License plate recognition. The Dockerfile uses the arm64 Jupyter Notebook base image: scipy-notebook. Since we do not have a tensorflow arm64 image, we install it as described at Qengineering.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook
oc apply -f notebook.yaml 

Check the routes

opensuse:~/microshift/raspberry-pi/tensorflow-notebook # oc get routes
NAME             HOST/PORT                              PATH   SERVICES       PORT   TERMINATION   WILDCARD
flask-route      flask-route-default.cluster.local             notebook-svc   5000                 None
notebook-route   notebook-route-default.cluster.local          notebook-svc   5001                 None

Add the ipaddress of the Raspberry Pi 4 device for notebook-route-default.cluster.local to /etc/hosts on your Laptop and browse to http://notebook-route-default.cluster.local/tree?. Login with the default password mysecretpassword. Go to the work folder and run the License-plate-recognition notebook.

License Plate Detection Notebook



We can also run it as an application, test it using the corresponding notebooks, and upload custom images. When we are done working with the license plate recognition sample notebook, we can delete it as follows:

oc delete -f notebook.yaml

Cleanup MicroShift

We can use the cleanup.sh script available on github to cleanup the pods and images. If you already cloned the microshift repo from github, you have the script in the ~/microshift/hack directory.

cd ~/microshift/hack
./cleanup.sh

Containerized MicroShift on openSUSE (64 bit)

We can run MicroShift within containers in two ways:

  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, CRI-O Systemd service runs directly on the host and data is stored in a podman volume
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only

Microshift Containerized

If you did not already install podman, you can do it now.

zypper install -y podman

We will use a new microshift.service that runs microshift in a pod using the prebuilt image and a podman volume. The rest of the pods run using crio on the host.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target crio.service
After=network-online.target crio.service
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
  --cidfile=%t/%n.ctr-id \
  --cgroups=no-conmon \
  --rm \
  --replace \
  --sdnotify=container \
  --label io.containers.autoupdate=registry \
  --network=host \
  --privileged \
  -d \
  --name microshift \
  -v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
  -v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
  -v microshift-data:/var/lib/microshift:rw,rshared \
  -v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
  -v /var/log:/var/log \
  -v /etc:/etc quay.io/microshift/microshift:latest
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target
EOF


systemctl daemon-reload
systemctl enable --now crio microshift
podman ps -a
podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Now that microshift is started, we can run the samples shown earlier.

After we are done, we can delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.

podman stop microshift && podman volume rm microshift-data

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.

MicroShift Containerized All-In-One

Let’s stop crio on the host; we will be creating an all-in-one container in podman that runs crio within the container.

systemctl stop crio
systemctl disable crio

We will run the all-in-one microshift in podman using prebuilt images (replace the image in the podman run command below with the latest image).

sudo setsebool -P container_manage_cgroup true 
podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64

Now that you know the podman command to start the microshift all-in-one, you may alternatively use the following microshift service.

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift all-in-one
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm --replace -d --name microshift -h microshift.example.com --privileged -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -v /lib/modules:/lib/modules --label io.containers.autoupdate=registry -p 6443:6443 -p 80:80 quay.io/microshift/microshift-aio:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target 
EOF

systemctl daemon-reload
systemctl start microshift

podman volume inspect microshift-data

On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above.

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman exec -it microshift crictl ps -a"

The crio service is stopped on the Raspberry Pi, so the crictl command will not work directly on the Pi. The crictl commands will work within the microshift container in podman, as shown in the watch command above.

To run the Virtual Machine examples in the all-in-one MicroShift, we need to update the AppArmor profile. Out of the box, you will get the following error in the virt-launcher pod logs:

{"component":"virt-launcher","level":"error","msg":"internal error: Failed to start QEMU binary /usr/libexec/qemu-kvm for probing: libvirt:  error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied","pos":"qemuProcessQMPLaunch:9327","subcomponent":"libvirt","thread":"28","timestamp":"2022-05-07T22:33:47.757000Z"}

The virt-launcher invokes the QEMU binary at /usr/libexec/qemu-kvm, which gets blocked by the AppArmor profile for libvirtd. Also, the qemu package on openSUSE installs the binary at a different location with a different name (e.g., /usr/bin/qemu-system-aarch64) as seen below:

opensuse:~ # ls -las /usr/bin/kvm* /usr/bin/qemu-system-aarch64
   64 -rwxr-xr-x 1 root root    62216 Apr 23 03:45 /usr/bin/kvm_stat
19692 -rwxr-xr-x 1 root root 20163120 Apr 24 10:39 /usr/bin/qemu-system-aarch64

Create a symlink at the path that KubeVirt expects:

sudo ln -s /usr/bin/qemu-system-aarch64 /usr/libexec/qemu-kvm
#sudo ln -s /usr/bin/qemu-system-aarch64 /usr/bin/kvm

opensuse:~ # sudo ln -s /usr/bin/qemu-system-aarch64 /usr/bin/kvm
opensuse:~ # ls -las /usr/bin/kvm* /usr/bin/qemu-system-aarch64
    0 lrwxrwxrwx 1 root root       28 May  7 22:56 /usr/bin/kvm -> /usr/bin/qemu-system-aarch64
   64 -rwxr-xr-x 1 root root    62216 Apr 23 03:45 /usr/bin/kvm_stat
19692 -rwxr-xr-x 1 root root 20163120 Apr 24 10:39 /usr/bin/qemu-system-aarch64

Add the following line to /etc/apparmor.d/usr.sbin.libvirtd (around line 94) and reload the apparmor service:

vi /etc/apparmor.d/usr.sbin.libvirtd

  /usr/libexec/qemu-kvm PUx,

Changes to /etc/apparmor.d/usr.sbin.libvirtd

systemctl reload apparmor.service

Additionally, we need to execute the mount with --make-shared as follows in the microshift container to prevent the “Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount” event from virt-handler.

podman exec -it microshift mount --make-shared /

We may also preload the virtual machine images using "crictl pull".

podman exec -it microshift crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64

Now, we can run the samples shown earlier.

For the Virtual Machine Instance Sample 4, we can connect to the vmi-fedora by exposing the ssh port for the Virtual Machine Instance as a NodePort Service after the instance is started. This NodePort is within the all-in-one pod that is running in podman. The ip address of the all-in-one microshift podman container is 10.88.0.2. We expose the target port 22 on the VM as a service on port 22 that is in turn exposed on the microshift container with allocated port 30083 as seen below. We run and exec into a new pod called ssh-proxy, install the openssh-client on the ssh-proxy and ssh to the port 30083 on the all-in-one microshift container. This takes us to the VMI port 22 as shown below:

opensuse:~/microshift/raspberry-pi/vmi # oc get vmi,pods
NAME                                            AGE   PHASE     IP           NODENAME                 READY
virtualmachineinstance.kubevirt.io/vmi-fedora   17m   Running   10.42.0.15   microshift.example.com   True

NAME                                 READY   STATUS    RESTARTS   AGE
pod/virt-launcher-vmi-fedora-bgptm   2/2     Running   0          17m 
opensuse:~/microshift/raspberry-pi/vmi # virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Service vmi-fedora-ssh successfully exposed for vmi vmi-fedora
opensuse:~/microshift/raspberry-pi/vmi # oc get svc vmi-fedora-ssh
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
vmi-fedora-ssh   NodePort   10.43.234.42   <none>        22:30083/TCP   15s
opensuse:~/microshift/raspberry-pi/vmi # podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift
10.88.0.2
opensuse:~/microshift/raspberry-pi/vmi # oc run -i --tty ssh-proxy --rm --image=ubuntu --restart=Never -- /bin/sh -c "apt-get update;apt-get -y install openssh-client;ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@10.88.0.2 -p 30083"
If you don't see a command prompt, try pressing enter.
…
The following additional packages will be installed:
  libbsd0 libcbor0.8 libedit2 libfido2-1 libmd0 libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxmuu1 xauth
Suggested packages:
  keychain libpam-ssh monkeysphere ssh-askpass
The following NEW packages will be installed:
  libbsd0 libcbor0.8 libedit2 libfido2-1 libmd0 libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxext6 libxmuu1 openssh-client xauth
0 upgraded, 14 newly installed, 0 to remove and 3 not upgraded.
…
Warning: Permanently added '[10.88.0.2]:30083' (ED25519) to the list of known hosts.
fedora@10.88.0.2's password:
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.81.238) 56(84) bytes of data.
64 bytes from lga25s74-in-f14.1e100.net (142.250.81.238): icmp_seq=1 ttl=115 time=8.13 ms
64 bytes from lga25s74-in-f14.1e100.net (142.250.81.238): icmp_seq=2 ttl=115 time=22.1 ms

--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 8.128/15.098/22.069/6.970 ms
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.88.0.2 closed.
pod "ssh-proxy" deleted

After we are done, we can delete the all-in-one microshift container.

podman rm -f microshift && podman volume rm microshift-data

or, if it was started using systemd, then

systemctl stop microshift

Errors

Podman shows "Error validating CNI config file /etc/cni/net.d/10-flannel.conflist: [failed to find plugin "flannel" in path [/usr/libexec/cni]]"

cp /opt/cni/bin/flannel /usr/libexec/cni/.

Conclusion

In this Part 15, we saw multiple options to run MicroShift on the Raspberry Pi 4 with openSUSE Tumbleweed (64 bit). We used dynamic persistent volumes to install InfluxDB/Telegraf/Grafana with a dashboard to show SenseHat sensor data. We ran samples that used the Sense Hat/USB camera and worked with a sample that sent pictures and web socket messages to Node Red when a person was detected. We installed the OKD Web Console and saw how to connect to a Virtual Machine Instance using KubeVirt on MicroShift with openSUSE. Finally, we saw how to run Jupyter notebooks with the digit recognition and license plate recognition demos.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.
