MicroShift and KubeVirt on Kali Linux (64 bit)
Introduction
MicroShift is a research project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit) and further in Part 9, we looked at Virtualization with MicroShift on Raspberry Pi 4 with Raspberry Pi OS (64 bit). In Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with the CentOS 8 Stream. In Part 6, we deployed MicroShift on the Raspberry Pi 4 with Ubuntu 20.04. In Part 8, we looked at the All-In-One install of MicroShift on balenaOS. In Part 10, Part 11, and Part 12, we deployed MicroShift and KubeVirt on Fedora IoT, Fedora Server and Fedora CoreOS respectively, Part 13 with Ubuntu 22.04, Part 14 on Rocky Linux, Part 15 on openSUSE, Part 16 on Oracle Linux, Part 17 on AlmaLinux, and Part 18 on Manjaro. In this Part 19, we will work with MicroShift on Kali Linux. We will run an object detection sample and send messages to Node Red installed on MicroShift. Further, we will set up KubeVirt and the OKD Web Console and run Virtual Machine Instances in MicroShift. We will also run sample notebooks for object detection and license plate detection.
Kali Linux is an open-source, Debian-based Linux distribution geared towards various information security tasks, such as Penetration Testing, Security Research, Computer Forensics and Reverse Engineering. Kali Linux has a direct lineage from earlier security-focused distributions, running on through BackTrack Linux.
Setting up the Raspberry Pi 4 with Kali Linux
Run the following steps to download the Kali Linux image and set up the Raspberry Pi 4.
1. Download the Kali-ARM for Raspberry Pi 4 from https://www.kali.org/get-kali/#kali-arm
2. Write the image to a MicroSDXC card using balenaEtcher or the Raspberry Pi Imager
3. Have a Keyboard and Monitor connected to the Raspberry Pi 4
4. Insert the MicroSDXC card into the Raspberry Pi 4 and power it on
5. Optionally, keep the keyboard and monitor attached to watch the first boot; the remaining steps are done over ssh
6. Find the Ethernet DHCP IP address of your Raspberry Pi 4 by running nmap from your MacBook against your subnet
$ sudo nmap -sn 192.168.1.0/24
Nmap scan report for 192.168.1.230
Host is up (0.080s latency).
MAC Address: E4:5F:01:2E:D8:95 (Raspberry Pi Trading)
7. Log in with user kali and password kali to the IP address of your Raspberry Pi.
$ ssh kali@192.168.1.230
kali@192.168.1.230's password:
Linux kali-raspberry-pi 5.10.103-Re4son-v8l+ #1 SMP PREEMPT Debian kali-pi (2022-04-30) aarch64
The programs included with the Kali GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Kali GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
┌──(kali㉿kali-raspberry-pi)-[~]
└─$
8. Check that the root partition is using the full size of the card
┌──(kali㉿kali-raspberry-pi)-[~]
└─$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 58G 9.1G 47G 17% /
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 1.6G 1.2M 1.6G 1% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
/dev/mmcblk0p1 126M 84M 42M 68% /boot
tmpfs 782M 72K 782M 1% /run/user/130
tmpfs 782M 68K 782M 1% /run/user/1000
9. Update the packages and add the IPv4 address to /etc/hosts
sudo apt update
sudo apt full-upgrade -y
sudo su -
hostnamectl set-hostname rpi.example.com
echo "$ipaddress rpi rpi.example.com" >> /etc/hosts
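In the commands above, $ipaddress is the Ethernet address of the Raspberry Pi found with nmap in step 6; set it first, for example:
ipaddress=192.168.1.230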
10. Optionally, enable wifi using nmcli
nmcli device wifi list # Note your ssid
nmcli device wifi connect $ssid --ask
11. Check the release
cat /etc/os-release
┌──(root㉿rpi)-[~]
└─# cat /etc/os-release
PRETTY_NAME="Kali GNU/Linux Rolling"
NAME="Kali GNU/Linux"
ID=kali
VERSION="2022.2"
VERSION_ID="2022.2"
VERSION_CODENAME="kali-rolling"
ID_LIKE=debian
ANSI_COLOR="1;31"
HOME_URL="https://www.kali.org/"
SUPPORT_URL="https://forums.kali.org/"
BUG_REPORT_URL="https://bugs.kali.org/"
12. Update the kernel parameters
vim /boot/cmdline.txt
Concatenate the following onto the end of the existing line (do not add a new line) in /boot/cmdline.txt
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. Cgroups are a key component of containers because there are often multiple processes running in a container that you need to control together. In MicroShift, cgroups are used to implement resource requests and limits and corresponding QoS classes at the pod level.
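If you prefer to script the edit instead of using vim, a one-line sed such as the following appends the parameters to the existing single line (a sketch; verify the file afterwards):
sed -i '$ s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt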
reboot
Verify
ssh kali@$ipaddress
sudo su -
cat /proc/cmdline
mount | grep cgroup # Check that memory and cpuset are present
cat /proc/cgroups | column -t # Check that memory and cpuset are present
Output:
┌──(kali㉿rpi)-[~]
└─$ sudo su -
┌──(root㉿rpi)-[~]
└─# cat /proc/cmdline
coherent_pool=1M 8250.nr_uarts=0 snd_bcm2835.enable_compat_alsa=0 snd_bcm2835.enable_hdmi=1 bcm2708_fb.fbwidth=1920 bcm2708_fb.fbheight=1200 bcm2708_fb.fbswap=1 smsc95xx.macaddr=E4:5F:01:2E:D8:95 vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000 console=ttyS0,115200 console=tty1 root=PARTUUID=e1750e08-02 rootfstype=ext4 fsck.repair=yes rootwait net.ifnames=0 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
┌──(root㉿rpi)-[~]
└─# mount | grep cgroup # Check that memory and cpuset are present
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
┌──(root㉿rpi)-[~]
└─# cat /proc/cgroups | column -t # Check that memory and cpuset are present
#subsys_name hierarchy num_cgroups enabled
cpuset 0 93 1
cpu 0 93 1
cpuacct 0 93 1
blkio 0 93 1
memory 0 93 1
devices 0 93 1
freezer 0 93 1
net_cls 0 93 1
perf_event 0 93 1
net_prio 0 93 1
pids 0 93 1
Install sense_hat and RTIMULib on Kali Linux
The Sense HAT is an add-on board for the Raspberry Pi. The Sense HAT has an 8 × 8 RGB LED matrix, a five-button joystick and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, Humidity. If you have the Sense HAT attached, install the libraries.
Install sensehat
apt install -y python3 python3-dev python3-pip python3-venv \
build-essential autoconf libtool \
pkg-config cmake libssl-dev \
i2c-tools openssl libcurl4-openssl-dev
pip3 install Cython Pillow numpy sense_hat
Check the Sense Hat with i2cdetect
modprobe i2c-dev
modprobe i2c-bcm2708
echo "i2c-dev" > /etc/modules-load.d/i2c-dev.conf
echo "i2c-bcm2708" > /etc/modules-load.d/i2c-bcm2708.conf
i2cdetect -y 1
Output
┌──(root㉿rpi)-[~]
└─# modprobe i2c-dev
┌──(root㉿rpi)-[~]
└─# modprobe i2c-bcm2708
┌──(root㉿rpi)-[~]
└─# i2cdetect -y 1
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --
Install RTIMULib
cd ~
git clone https://github.com/RPi-Distro/RTIMULib.git
cd ~/RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig
# Optional test the sensors
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11 # Ctrl-C to break
cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10 # Ctrl-C to break
# Optional
apt -y install qtbase5-dev qtchooser qt5-qmake qtbase5-dev-tools
cd /root/RTIMULib/Linux/RTIMULibDemoGL
qmake
make -j4
make install
Test the SenseHat samples for the Sense Hat's LED matrix and sensors.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/sensehat-fedora-iot
# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt
# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt
# First time you run the temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py
# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt
# Show two digits for multiple numbers
sed -i "s/32,32,32/255,255,255/" digits.py
python3 digits.py
# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py
# Find Magnetic North
python3 compass.py
Install MicroShift on the Raspberry Pi 4 Kali Linux host
Use the install-kali2022-2.sh script to install the dependencies and copy the latest microshift prebuilt binary. You will also need to set up the crio.conf.
./install-kali2022-2.sh
Install KVM on the host and validate the Host Virtualization Setup. The virt-host-validate command validates that the host is configured in a suitable way to run libvirt hypervisor driver qemu.
sudo apt install -y virt-manager libvirt0 qemu-system
vi /etc/firewalld/firewalld.conf # FirewallBackend=iptables
systemctl restart firewalld
virt-host-validate qemu
Output:
┌──(root㉿rpi)-[~/microshift/raspberry-pi/sensehat-fedora-iot]
└─# systemctl enable --now libvirtd
Created symlink /etc/systemd/system/multi-user.target.wants/libvirtd.service → /usr/lib/systemd/system/libvirtd.service.
Created symlink /etc/systemd/system/sockets.target.wants/virtlockd.socket → /usr/lib/systemd/system/virtlockd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/virtlogd.socket → /usr/lib/systemd/system/virtlogd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd.socket → /usr/lib/systemd/system/libvirtd.socket.
Created symlink /etc/systemd/system/sockets.target.wants/libvirtd-ro.socket → /usr/lib/systemd/system/libvirtd-ro.socket.
[root@rpi sensehat-fedora-iot]# virt-host-validate qemu
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for device assignment IOMMU support : WARN (Unknown if this platform has IOMMU support)
QEMU: Checking for secure guest support : WARN (Unknown if this platform has Secure Guest support)
Check that cni plugins are present
ls /opt/cni/bin/ # cni plugins
ls /usr/libexec/cni # empty
Output:
┌──(root㉿rpi)-[~/microshift]
└─# ls /opt/cni/bin/ # cni plugins
bandwidth bridge dhcp firewall flannel host-device host-local ipvlan loopback macvlan portmap ptp sbr static tuning vlan vrf
┌──(root㉿rpi)-[~/microshift]
└─# ls /usr/libexec/cni # empty
ls: cannot access '/usr/libexec/cni': No such file or directory
Check the microshift and crio logs
journalctl -u microshift -f
journalctl -u crio -f
The microshift service references the microshift binary in the /usr/local/bin directory
┌──(root㉿rpi)-[~/microshift]
└─# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
After=crio.service
[Service]
WorkingDirectory=/usr/local/bin/
ExecStart=microshift run
Restart=always
User=root
[Install]
WantedBy=multi-user.target
Install the kubectl and the openshift oc client
ARCH=arm64
cd /tmp
export OCP_VERSION=4.9.11 && \
curl -o oc.tar.gz https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
tar -xzvf oc.tar.gz && \
rm -f oc.tar.gz && \
install -t /usr/local/bin {kubectl,oc} && \
rm -f {README.md,kubectl,oc}
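A quick check that the clients are installed and on the PATH:
oc version --client
kubectl version --client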
It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
#watch "kubectl get nodes;kubectl get pods -A;crictl pods;crictl images"
watch "oc get nodes;oc get pods -A;crictl pods;crictl images"
The following patch may be required if the dns-default pod in the openshift-dns namespace keeps restarting. You may also need to increase the livenessProbe and readinessProbe timings.
oc patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","resources": {"requests": {"cpu": "80m","memory": "90Mi"}}}]}}}}'
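If the restarts are due to probe timeouts, a strategic merge patch along the following lines can bump them; the timeout values here are only illustrative, not taken from the original configuration:
oc patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","livenessProbe": {"timeoutSeconds": 5},"readinessProbe": {"timeoutSeconds": 5}}]}}}}'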
You may also need to patch the service-ca deployment or delete the service-ca pod if it keeps restarting and goes to CrashLoopBackOff STATUS.
oc patch deployments/service-ca -n openshift-service-ca -p '{"spec": {"template": {"spec": {"containers": [{"name": "service-ca-controller","args": ["-v=4"]}]}}}}'
Install podman - We will use podman for containerized deployment of MicroShift and building images for the samples.
apt -y install podman buildah skopeo
Samples to run on MicroShift
We will run samples that will show the use of dynamic persistent volume, SenseHat and the USB camera.
1. InfluxDB/Telegraf/Grafana
The source code is available for this influxdb sample in github.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb
If you want to run all the steps in a single command, get the nodename.
oc get nodes
Output:
┌──(root㉿rpi)-[~/microshift/raspberry-pi/influxdb]
└─# oc get nodes
NAME STATUS ROLES AGE VERSION
rpi.example.com Ready <none> 30m v1.21.0
Replace the annotation kubevirt.io/provisionOnNode with the above nodename and execute the runall-balena-dynamic.sh. Note that the node name is different when running MicroShift with the all-in-one containerized approach; in that case, use microshift.example.com instead of rpi.example.com.
sed -i "s|coreos|rpi.example.com|" influxdb-data-dynamic.yaml
sed -i "s|coreos|rpi.example.com|" grafana/grafana-data-dynamic.yaml
./runall-balena-dynamic.sh
We create and push the “measure:latest” image using the Dockerfile in the sensor directory. The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.
This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PV.
annotations:
  kubevirt.io/provisionOnNode: rpi.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner
Persistent Volumes and Claims Output:
┌──(root㉿rpi)-[~/microshift/raspberry-pi/influxdb]
└─# oc get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-77cc519e-9e95-45f4-ab62-72f94d5995f2 58Gi RWO Delete Bound influxdb/grafana-data kubevirt-hostpath-provisioner 57s
persistentvolume/pvc-9eca9d5e-a1d3-4b16-be16-86f6f73e565a 58Gi RWO Delete Bound influxdb/influxdb-data kubevirt-hostpath-provisioner 2m21s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/grafana-data Bound pvc-77cc519e-9e95-45f4-ab62-72f94d5995f2 58Gi RWO kubevirt-hostpath-provisioner 57s
persistentvolumeclaim/influxdb-data Bound pvc-9eca9d5e-a1d3-4b16-be16-86f6f73e565a 58Gi RWO kubevirt-hostpath-provisioner 2m21s
Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and log in to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.
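For example, the /etc/hosts entry on the laptop would look like:
192.168.1.230 grafana-service-influxdb.cluster.local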
Finally, after you are done working with this sample, you can run the deleteall-balena-dynamic.sh
./deleteall-balena-dynamic.sh
Deleting the persistent volume claims automatically deletes the persistent volumes.
2. Node Red live data dashboard with SenseHat sensor charts
We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered
Build and push the arm64v8 image "karve/nodered:arm64"
cd docker-custom/
# Replace docker with podman in docker-debian.sh and run it
./docker-debian.sh
podman push karve/nodered:arm64
cd ..
Deploy Node Red with persistent volume for /data within the node red container
mkdir /var/hpvolumes/nodered
restorecon -R -v "/var/hpvolumes/*"
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml
oc get routes
oc -n nodered wait deployment nodered-deployment --for condition=Available --timeout=300s
oc logs deployment/nodered-deployment -f
Add the IP address of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your laptop and browse to http://nodered-svc-nodered.cluster.local/
The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 and Flow 2 have already been imported from the nodered sample. This import into Node Red can also be done manually under “Import Nodes”, followed by “Deploy”.
Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see the Home dashboard by default. You can see the state of the Joystick: Up, Down, Left, Right or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT. If you selected Flow 1, you can click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED. You can see the screenshots for these dashboards in previous blogs.
We can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:
cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml -n nodered
oc project default
oc delete project nodered
3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red
This example requires the same Node Red setup as in the previous Sample 2.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection
We will build the image for object detection; a pod in MicroShift will send pictures and WebSocket chat messages to Node Red when a person is detected.
podman build -t docker.io/karve/object-detection-raspberrypi4 .
podman push docker.io/karve/object-detection-raspberrypi4:latest
Update the env WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection.yaml to point to your Raspberry Pi 4 IP address (192.168.1.230 shown below).
env:
- name: WebSocketURL
  value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
- name: ImageUploadURL
  value: http://nodered-svc-nodered.cluster.local/upload
hostAliases:
- hostnames:
  - nodered-svc-nodered.cluster.local
  ip: 192.168.1.230
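If you prefer to make the hostAliases change from the command line, a sed such as the following works, assuming the ip: line in object-detection.yaml is the only one of its kind:
sed -i "s|ip: .*|ip: 192.168.1.230|" object-detection.yaml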
Create the deployment
oc project default
oc apply -f object-detection.yaml
oc -n default wait deployment object-detection-deployment --for condition=Available --timeout=300s
We will see pictures being sent to Node Red when a person is detected at http://nodered-svc-nodered.cluster.local/#flow/3e30dc50ae28f61f and chat messages at http://nodered-svc-nodered.cluster.local/chat. When we are done testing, we can delete the deployment.
cd ~/microshift/raspberry-pi/object-detection
oc delete -f object-detection.yaml
4. Running a Virtual Machine Instance on MicroShift
Find the latest version of the KubeVirt Operator.
LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST
I used the following version:
LATEST=20220529 # If the latest version does not work
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
# The .status.phase will show Deploying multiple times and finally Deployed
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}" -w # Ctrl-C to break
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt
Output:
NAME READY STATUS RESTARTS AGE
virt-api-5ffc67cb65-jlpz5 1/1 Running 0 3m36s
virt-api-5ffc67cb65-m4fx7 1/1 Running 0 3m36s
virt-controller-7ffc5cf8cb-7t6jf 1/1 Running 0 2m38s
virt-controller-7ffc5cf8cb-hjr4b 1/1 Running 0 2m38s
virt-handler-jl5js 1/1 Running 0 2m38s
virt-operator-84b598f8df-hzfz9 1/1 Running 1 5m21s
virt-operator-84b598f8df-lmr6g 1/1 Running 0 5m21s
We can build the OKD Web Console (codename “bridge”) from source as mentioned in Part 9. Here we run the “bridge” as a container image within MicroShift.
cd /root/microshift/raspberry-pi/console
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sleep 5
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[1].name}'
In okd-web-console-install.yaml, replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value https://192.168.1.209:6443 with your Raspberry Pi 4's IP address, and set the secretRef token for BRIDGE_K8S_AUTH_BEARER_TOKEN to the console-token-* name from the two secrets shown above. Then apply/create the okd-web-console-install.yaml.
vim okd-web-console-install.yaml # Update endpoint ip address, console-token
oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc -n kube-system wait deployment console-deployment --for condition=Available --timeout=300s
oc logs deployment/console-deployment -f -n kube-system
Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/. If you see a blank page, you probably have the value of BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT set incorrectly.
We can optionally preload the fedora image into crio (if using the all-in-one containerized approach, this needs to be run within the microshift pod running in podman)
crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
Now let’s create a Fedora Virtual Machine Instance using the vmi-fedora.yaml.
cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-fedora.yaml
watch oc get vmi,pods
The virtualmachineinstance PHASE goes from “Scheduling” to “Scheduled” to “Running” after the virt-launcher-vmi-fedora pod STATUS goes from “Init” to “Running”. Note down the IP address of the vmi-fedora Virtual Machine Instance. Directly connect to the VMI from the Raspberry Pi 4 with fedora as the password. Note that it will take another minute after the VMI goes to the Running state before you can ssh to the instance.
Output:
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─# ssh -o StrictHostKeyChecking=no fedora@10.42.0.20 ping -c 2 google.com
Warning: Permanently added '10.42.0.20' (ED25519) to the list of known hosts.
fedora@10.42.0.20's password:
PING google.com (142.251.40.206) 56(84) bytes of data.
64 bytes from lga34s38-in-f14.1e100.net (142.251.40.206): icmp_seq=1 ttl=117 time=5.05 ms
64 bytes from lga34s38-in-f14.1e100.net (142.251.40.206): icmp_seq=2 ttl=117 time=4.84 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.842/4.946/5.050/0.104 ms
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─#
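Instead of reading the address from the watch output, it can also be pulled directly from the VMI status (a sketch):
oc get vmi vmi-fedora -o jsonpath='{.status.interfaces[0].ipAddress}'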
Another way to connect to the VM is to use the virtctl console. You can compile your own virtctl as described in Part 9. To simplify, we copy the virtctl arm64 binary from a prebuilt container image to /usr/local/bin on the Raspberry Pi 4 and connect to the VMI using the “virtctl console” command.
id=$(podman create docker.io/karve/kubevirt:arm64)
podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
podman rm -v $id
virtctl console vmi-fedora
Output:
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─# id=$(podman create docker.io/karve/kubevirt:arm64)
Trying to pull docker.io/karve/kubevirt:arm64...
Getting image source signatures
Copying blob 7065f6098427 done
Copying config 1c7a5aa443 done
Writing manifest to image destination
Storing signatures
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─# podman cp $id:_out/cmd/virtctl/virtctl /usr/local/bin
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─# podman rm -v $id
5f82d41b731122d71d85869bbc961be442d195c663011c82c7bfead17539de8b
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─# virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]
Fedora 32 (Cloud Edition)
Kernel 5.6.6-300.fc32.aarch64 on an aarch64 (ttyAMA0)
SSH host key: SHA256:c/8D+WUd+Tz4QvG6xw2mr+RNIMnI1xt7S4gmsR8yyJs (RSA)
SSH host key: SHA256:2bjet5m5WBtfKFji+hIrB1EvPa4KAlLJ20od5CggI8E (ECDSA)
SSH host key: SHA256:zPOwACDdwLWjN8dSD/F1AdcRO5N/ha7Qj3EQy6/Dkww (ED25519)
eth0: 10.42.0.20 fe80::389b:86ff:fe20:1199
vmi-fedora login: fedora
Password:
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.251.40.110) 56(84) bytes of data.
64 bytes from lga25s79-in-f14.1e100.net (142.251.40.110): icmp_seq=1 ttl=116 time=4.83 ms
64 bytes from lga25s79-in-f14.1e100.net (142.251.40.110): icmp_seq=2 ttl=116 time=4.20 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 4.197/4.514/4.831/0.317 ms
[fedora@vmi-fedora ~]$ # Ctrl-] to detach
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─#
When done, we can delete the VMI
oc delete -f vmi-fedora.yaml
We can run other VM and VMI samples for alpine, cirros and fedora images as in Part 9. When done, you may delete the KubeVirt operator.
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc delete -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
5. Run a jupyter notebook sample for license plate recognition
We will run the sample described at the Red Hat OpenShift Data Science Workshop License plate recognition. The Dockerfile uses the arm64 Jupyter Notebook base image: scipy-notebook. Since we do not have a tensorflow arm64 image, we install it as described at Qengineering. The notebook.yaml downloads the licence-plate-workshop sample in an initContainer.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook
oc apply -f notebook.yaml
oc -n default wait pod notebook --for condition=Ready --timeout=600s
oc get routes
Output:
┌──(root㉿rpi)-[~/microshift/raspberry-pi/tensorflow-notebook]
└─# oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
flask-route flask-route-default.cluster.local notebook-svc 5000 None
notebook-route notebook-route-default.cluster.local notebook-svc 5001 None
The image is large; it may take a while for the image to be downloaded: crictl images | grep tensorflow-notebook
┌──(root㉿rpi)-[~/microshift/raspberry-pi/tensorflow-notebook]
└─# crictl images | grep tensorflow-notebook
docker.io/karve/tensorflow-notebook arm64 4ea80c8404877 4.96GB
If running in the all-in-one microshift container, you need to run the command within the container
podman exec -it microshift crictl images | grep tensorflow-notebook # All in one
Add the IP address of the Raspberry Pi 4 device for notebook-route-default.cluster.local to /etc/hosts on your laptop and browse to http://notebook-route-default.cluster.local/tree?. Log in with the default password mysecretpassword
Output of top:
Tasks: 168 total, 1 running, 167 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.3 us, 2.3 sy, 0.0 ni, 95.2 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
MiB Mem : 7812.7 total, 489.9 free, 1080.1 used, 6242.7 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 6603.3 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12964 root 20 0 12.8g 672824 113356 S 13.5 8.4 3:39.91 microshift
13239 root 20 0 754820 57020 37888 S 2.6 0.7 0:09.57 service-ca-oper
851 root 20 0 2082892 71560 40024 S 1.3 0.9 9:27.25 crio
Go to the work folder and select and run the License-plate-recognition notebook at http://notebook-route-default.cluster.local/notebooks/work/02_Licence-plate-recognition.ipynb
Output of top while running the notebook
Tasks: 168 total, 1 running, 167 sleeping, 0 stopped, 0 zombie
%Cpu(s): 46.4 us, 7.5 sy, 0.0 ni, 45.9 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
MiB Mem : 7812.7 total, 148.7 free, 1909.9 used, 5754.2 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 5781.5 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
14869 kali 20 0 4246452 1.1g 310600 S 169.5 14.3 2:03.35 python
12964 root 20 0 12.8g 672840 113356 S 37.1 8.4 4:28.48 microshift
86 root 20 0 0 0 0 S 4.0 0.0 0:04.18 kswapd0
13239 root 20 0 754820 57020 37888 S 1.3 0.7 0:12.19 service-ca-oper
13328 kali 20 0 255484 95196 15464 S 1.0 1.2 0:08.77 jupyter-noteboo
We can also run it as an application and test it using the corresponding notebooks. Run the http://notebook-route-default.cluster.local/notebooks/work/03_LPR_run_application.ipynb
Wait for the following to appear.
Instructions for updating:
non-resource variables are not supported in the long term
Model Loaded successfully...
Model Loaded successfully...
[INFO] Model loaded successfully...
[INFO] Labels loaded successfully...
Then run http://notebook-route-default.cluster.local/notebooks/work/04_LPR_test_application.ipynb
We can experiment with a custom image. Let’s download the image to the pod and run the cells again with the new image and check the prediction.
oc exec -it notebook -- bash -c "wget \"https://unsplash.com/photos/MgfKoRdI948/download?force=true&ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjUyNDY4Mjcz\" -O /tmp/3183KND.jpg"
Then, run the http://notebook-route-default.cluster.local/notebooks/work/05_Send_image.ipynb
Add the cell with the following code:
# URL of the custom image downloaded to the pod above
my_image = 'https://unsplash.com/photos/MgfKoRdI948/download?force=true&ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjUyNDY4Mjcz'

import base64
from json import dumps
import requests

# Fetch the image and base64-encode it for the JSON payload
response = requests.get(my_image)
encoded_image = base64.b64encode(response.content).decode('utf-8')
content = {"image": encoded_image}
json_data = dumps(content)

# Post the image to the prediction endpoint (my_route is set earlier in the notebook)
headers = {"Content-Type": "application/json"}
r = requests.post(my_route + '/predictions', data=json_data, headers=headers)
print(r.content)

# Display the image inline in the notebook
from IPython.display import Image
Image(url=my_image)
When we are done working with the license plate recognition sample notebook, we can delete it as follows:
oc delete -f notebook.yaml
6. Run a jupyter notebook sample for object detection (8GB Raspberry Pi 4)
We will run the sample described at the Red Hat OpenShift Data Science Workshop Object Detection. We use the same container image as in the previous Sample 5; the only change is that object-detection-rest.yaml downloads the object detection sample from object-detection-rest.git.
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook
oc apply -f object-detection-rest.yaml
oc -n default wait pod notebook --for condition=Ready --timeout=300s
oc get routes
We use the same service and route names as in Sample 5.
┌──(root㉿rpi)-[~/microshift/raspberry-pi/tensorflow-notebook]
└─# cd ~/microshift/raspberry-pi/tensorflow-notebook
┌──(root㉿rpi)-[~/microshift/raspberry-pi/tensorflow-notebook]
└─# oc apply -f object-detection-rest.yaml
pod/notebook created
service/flask-svc created
service/notebook-svc created
route.route.openshift.io/notebook-route created
route.route.openshift.io/flask-route created
┌──(root㉿rpi)-[~/microshift/raspberry-pi/tensorflow-notebook]
└─# oc -n default wait pod notebook --for condition=Ready --timeout=300s
pod/notebook condition met
┌──(root㉿rpi)-[~/microshift/raspberry-pi/tensorflow-notebook]
└─# oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
flask-route flask-route-default.cluster.local notebook-svc 5000 None
notebook-route notebook-route-default.cluster.local notebook-svc 5001 None
Log in at http://notebook-route-default.cluster.local/tree/work with the default password mysecretpassword. We can run the 1_explore.ipynb that downloads twodogs.jpg and uses a pre-trained model to identify objects in images. In the next notebooks (2_predict.ipynb, 3_run_flask.ipynb, and 4_test_flask.ipynb), this model is wrapped in a flask app that can be used as part of a larger application.
Output of top
Tasks: 172 total, 1 running, 171 sleeping, 0 stopped, 0 zombie
%Cpu(s): 9.3 us, 2.8 sy, 0.0 ni, 87.7 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
MiB Mem : 7812.7 total, 83.3 free, 5009.7 used, 2719.8 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 2672.9 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
12964 root 20 0 12.8g 676240 112316 S 35.0 8.5 8:06.63 microshift
17705 kali 20 0 753104 61256 13944 S 6.3 0.8 0:15.75 python
13239 root 20 0 754820 57216 37888 S 2.6 0.7 0:22.75 service-ca-oper
17989 kali 20 0 3690064 2.1g 285232 S 1.3 28.1 1:41.86 python3.9
851 root 20 0 2082892 73948 40044 S 1.0 0.9 9:36.64 crio
4507 root 20 0 749352 24564 14644 S 0.7 0.3 0:05.68 flanneld
In 4_test_flask.ipynb, replace the my_route as follows:
my_route = 'http://flask-svc:5000'
We can also test by downloading custom images, for example from Dogs Best Life.
oc exec -it notebook -- bash -c "wget https://dogsbestlife.com/wp-content/uploads/2016/05/two-dogs-same-litter-min.jpeg -O /home/jovyan/work/two-dogs-same-litter-min.jpeg"
oc exec -it notebook -- bash -c "wget https://dogsbestlife.com/wp-content/uploads/2016/05/two-dogs-min.jpeg -O /home/jovyan/work/two-dogs-min.jpeg"
In 4_test_flask.ipynb, replace the my_image and run the notebook.
my_image = 'two-dogs-min.jpeg'
When we are done working with the object detection sample notebook, we can delete it as follows:
oc delete -f object-detection-rest.yaml
7. Tutorial Notebooks from tensorflow.org
We can run the tutorials from https://www.tensorflow.org/tutorials using the tutorials.yaml. We use the same container image as in the previous Sample 5; the only change is that it pulls notebooks from https://github.com/tensorflow/docs.git. Log in at http://notebook-route-default.cluster.local/tree/work with the default password mysecretpassword
┌──(root㉿rpi)-[~/microshift/raspberry-pi/tensorflow-notebook]
└─# oc apply -f tutorials.yaml
pod/notebook created
service/flask-svc created
service/notebook-svc created
route.route.openshift.io/notebook-route created
route.route.openshift.io/flask-route created
┌──(root㉿rpi)-[~/microshift/raspberry-pi/tensorflow-notebook]
└─# oc get routes notebook-route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
notebook-route notebook-route-default.cluster.local notebook-svc 5001 None
You will need to make a few minor changes to prepend /tmp/ to the download, temporary-file, and cache folders used in the notebooks, to avoid permission-denied errors because the local directory is not writable.
1. Tensorflow 2 quickstart beginner http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/quickstart/beginner.ipynb
2. TensorFlow 2 quickstart for experts http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/quickstart/advanced.ipynb
3. Segmentation http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/images/segmentation.ipynb
4. Classification of flowers that shows overfitting, data augmentation for generating additional training data http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/images/classification.ipynb
5. Audio recognition: Recognizing keywords http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/audio/simple_audio.ipynb
6. Time series forecasting - It builds a few different styles of models including Convolutional and Recurrent Neural Networks (CNNs and RNNs) http://notebook-route-default.cluster.local/notebooks/work/site/en/tutorials/structured_data/time_series.ipynb
8. Compiling and deploying your Quarkus native app on MicroShift
Quarkus is built from the ground up to transform Java into the ideal language for building native binaries and Kubernetes applications. Combining the optimization capabilities of GraalVM with the build-time capability of Quarkus leads to the smallest possible memory footprint and startup time. Quarkus can run on a Raspberry Pi. We show how to use the multistage build to build the executable in a container. Reference https://quarkus.io/guides/building-native-image
We use the Dockerfile.graalvmaarch64 with ldd version 2.28 in both the base image ghcr.io/graalvm/graalvm-ce:latest and the final image registry.access.redhat.com/ubi8/ubi-minimal:8.3.
ssh kali@$ipaddress
git clone https://github.com/quarkusio/quarkus-quickstarts.git
cd quarkus-quickstarts/getting-started
mv .dockerignore test.dockerignore # The default .dockerignore filters everything except the target directory
wget https://raw.githubusercontent.com/thinkahead/microshift/main/raspberry-pi/quarkus/Dockerfile.graalvmaarch64
podman build -f Dockerfile.graalvmaarch64 -t quay.io/thinkahead/quarkus-getting-started:ldd-2.28-arm64 .
podman login quay.io
podman push quay.io/thinkahead/quarkus-getting-started:ldd-2.28-arm64
Note: You may need to specifically set the Repository Visibility to public in quay.io if events in the quarkus project show error in pulling the image and your repository is private.
The output of both RUN ldd --version steps during the build shows ldd 2.28. The ldd version where the application is built needs to match the one where it is executed.
[1/2] STEP 3/11: RUN ldd --version
ldd (GNU libc) 2.28
…
[2/2] STEP 4/8: RUN ldd --version
ldd (GNU libc) 2.28
Then, run the sample quarkus application in MicroShift as before with the new image.
sudo su -
cd ~/microshift/raspberry-pi/quarkus/
oc new-project quarkus --display-name "Sample Quarkus App"
oc project quarkus # If it already exists
# Update the quarkus-getting-started.yaml to use the quay.io/thinkahead/quarkus-getting-started:ldd-2.28-arm64
oc apply -f quarkus-getting-started.yaml -f quarkus-getting-started-route.yaml
Add the IP address of the Raspberry Pi 4 device for quarkus-getting-started-route-quarkus.cluster.local to /etc/hosts on your laptop. http://quarkus-getting-started-route-quarkus.cluster.local/ will show “Congratulations, you have created a new Quarkus application.” and http://quarkus-getting-started-route-quarkus.cluster.local/hello will show hello.
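The endpoints can also be checked with curl from the laptop:
curl http://quarkus-getting-started-route-quarkus.cluster.local/hello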
Finally, after we are done testing, we can delete the sample Quarkus application:
oc delete -f quarkus-getting-started.yaml -f quarkus-getting-started-route.yaml
Cleanup MicroShift
We can use the cleanup.sh script available on github to cleanup the pods and images. If you already cloned the microshift repo from github, you have the script in the ~/microshift/hack directory.
cd ~/microshift/hack
./cleanup.sh
Containerized MicroShift on Kali Linux (64 bit)
We can run MicroShift within containers in two ways:
- MicroShift Containerized – The MicroShift binary runs in a Podman container, CRI-O Systemd service runs directly on the host and data is stored in a podman volume
- MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only
Microshift Containerized
If you did not already install podman, you can do it now.
apt install -y podman
We will use a new microshift.service that runs microshift in a pod using the prebuilt image and uses a podman volume. The rest of the pods run using CRI-O on the host.
cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target crio.service
After=network-online.target crio.service
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--replace \
--sdnotify=container \
--label io.containers.autoupdate=registry \
--network=host \
--privileged \
-d \
--name microshift \
-v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
-v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
-v microshift-data:/var/lib/microshift:rw,rshared \
-v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
-v /var/log:/var/log \
-v /etc:/etc quay.io/microshift/microshift:latest
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=multi-user.target default.target
EOF
systemctl daemon-reload
systemctl enable --now crio microshift
podman ps -a
podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl pods;crictl images;podman ps"
Output:
NAME STATUS ROLES AGE VERSION
rpi.example.com Ready <none> 2m2s v1.21.0
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-flannel-ds-wnc84 1/1 Running 0 2m2s
kubevirt-hostpath-provisioner kubevirt-hostpath-provisioner-2hlvr 1/1 Running 0 111s
openshift-dns dns-default-wrjll 2/2 Running 0 2m2s
openshift-dns node-resolver-stz4c 1/1 Running 0 2m2s
openshift-ingress router-default-85bcfdd948-ls6m5 1/1 Running 0 2m5s
openshift-service-ca service-ca-7764c85869-6x8t9 1/1 Running 0 2m6s
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
8bc4a07d7f7de 36 seconds ago Ready router-default-85bcfdd948-ls6m5 openshift-ingress 0 (default)
fd6e5906a9d36 56 seconds ago Ready dns-default-wrjll openshift-dns 0 (default)
bc6f67a70e604 About a minute ago Ready kubevirt-hostpath-provisioner-2hlvr kubevirt-hostpath-provisioner 0 (default)
65e31caa673a2 About a minute ago Ready service-ca-7764c85869-6x8t9 openshift-service-ca 0 (default)
1e03c12973c37 About a minute ago Ready kube-flannel-ds-wnc84 kube-system 0 (default)
2915497ec21fe About a minute ago Ready node-resolver-stz4c openshift-dns 0 (default)
IMAGE TAG IMAGE ID SIZE
quay.io/microshift/cli 4.8.0-0.okd-2021-10-10-030117 33a276ba2a973 205MB
quay.io/microshift/coredns 4.8.0-0.okd-2021-10-10-030117 67a95c8f15902 265MB
quay.io/microshift/flannel-cni 4.8.0-0.okd-2021-10-10-030117 0e66d6f50c694 8.78MB
quay.io/microshift/flannel 4.8.0-0.okd-2021-10-10-030117 85fc911ceba5a 68.1MB
quay.io/microshift/haproxy-router 4.8.0-0.okd-2021-10-10-030117 37292c44812e7 225MB
quay.io/microshift/hostpath-provisioner 4.8.0-0.okd-2021-10-10-030117 fdef3dc1264ad 39.3MB
quay.io/microshift/kube-rbac-proxy 4.8.0-0.okd-2021-10-10-030117 7f149e453e908 41.5MB
quay.io/microshift/microshift latest bdccb7de6c282 406MB
quay.io/microshift/service-ca-operator 4.8.0-0.okd-2021-10-10-030117 0d3ab44356260 276MB
registry.k8s.io/pause 3.6 7d46a07936af9 492kB
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ab5353e9d2b4 quay.io/microshift/microshift:latest run 3 minutes ago Up 3 minutes ago microshift
Now that microshift is started, we can run the samples shown earlier.
After we are done, we can stop microshift
systemctl stop microshift
podman volume rm microshift-data
Alternatively, delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.
podman stop microshift && podman volume rm microshift-data
After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.
MicroShift Containerized All-In-One
Let's stop CRI-O on the host; we will be creating an all-in-one container in podman that runs CRI-O within the container.
systemctl stop crio
systemctl disable crio
mkdir /var/hpvolumes
The SELinux status is disabled
┌──(root㉿rpi)-[~]
└─# sestatus
SELinux status: disabled
┌──(root㉿rpi)-[~]
└─# getenforce
Disabled
We will run the all-in-one microshift in podman using prebuilt images (replace the image in the podman run command below with the latest image).
podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64
We can set the KUBECONFIG and continue with running the samples.
Now that you know the above podman command to start the microshift all-in-one, you may alternatively use the following microshift service.
wget https://raw.githubusercontent.com/thinkahead/microshift/main/packaging/systemd/microshift-aio.service -O /usr/lib/systemd/system/microshift.service
# Add the "-p 80:80" after the "-p 6443:6443" so we can expose the applications
# Add the "-h microshift.example.com"
or
cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift all-in-one
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm --replace -d --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -v /lib/modules:/lib/modules --label io.containers.autoupdate=registry -p 6443:6443 -p 80:80 quay.io/microshift/microshift-aio:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=multi-user.target default.target
EOF
Then run:
systemctl daemon-reload
systemctl start microshift
On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above.
export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman ps;podman exec -it microshift crictl ps -a"
Output:
NAME STATUS ROLES AGE VERSION
microshift.example.com Ready <none> 2m39s v1.21.0
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-flannel-ds-8mtdz 1/1 Running 0 2m39s
kubevirt-hostpath-provisioner kubevirt-hostpath-provisioner-bxqpg 1/1 Running 0 109s
openshift-dns dns-default-fjv6s 2/2 Running 0 2m40s
openshift-dns node-resolver-v7m6d 1/1 Running 0 2m40s
openshift-ingress router-default-85bcfdd948-qsz9j 1/1 Running 0 2m44s
openshift-service-ca service-ca-7764c85869-qvw99 1/1 Running 0 2m45s
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9cfc7c785011 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64 /sbin/init 4 minutes ago Up 4 minutes ago 0.0.0.0:80->80/tcp, 0.0.0.0:6443->6443/tcp, 0.0.0.0:8080->8080/tcp microshift
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
de1b37376efcb quay.io/microshift/kube-rbac-proxy@sha256:2b5f44b11bab4c10138ce526439b43d62a890c3a02d42893ad02e2b3adb38703 8 seconds ago Running kube-rbac-proxy 0 bce1ada7774ee7
3cb968400e205 quay.io/microshift/coredns@sha256:07e5397247e6f4739727591f00a066623af9ca7216203a5e82e0db2fb24514a3 22 seconds ago Running dns 0 bce1ada7774ee6
d8d03cef32d6e quay.io/microshift/haproxy-router@sha256:706a43c785337b0f29aef049ae46fdd65dcb2112f4a1e73aaf0139f70b14c6b5 44 seconds ago Running router 0 2de43a852614f7
becba51d25e41 quay.io/microshift/service-ca-operator@sha256:1a8e53c8a67922d4357c46e5be7870909bb3dd1e6bea52cfaf06524c300b84e8 About a minute ago Running service-ca-controller 0 ca2edbe1b716ba
e03bccf088386 quay.io/microshift/hostpath-provisioner@sha256:cb0c1cc60c1ba90efd558b094ba1dee3a75e96b76e3065565b60a07e4797c04c About a minute ago Running kubevirt-hostpath-provisioner 0 1a259eba1a76a1
45ec141829454 85fc911ceba5a5a5e43a7c613738b2d6c0a14dad541b1577cdc6f921c16f5b75 About a minute ago Running kube-flannel 0 413a54ce914ec1
1b6e5488d67f2 quay.io/microshift/flannel@sha256:13777a318497ae35593bb73499a0e2ff4cb7eda74f59c1ba7c3e88c717cbaab9 About a minute ago Exited install-cni 0 413a54ce914eca
c806194a55112 quay.io/microshift/cli@sha256:1848138e5be66753863c98b86c274bd7fb8572fe0da6f7156f1644187e4cfb84 About a minute ago Running dns-node-resolver 0 458d27f01a3b51
8a2a16f1d9cbf quay.io/microshift/flannel-cni@sha256:39f81dd125398ce5e679322286344a4c13dded73ea0bf4f397e5d1929b43d033 2 minutes ago Exited install-cni-bin 0 413a54ce914ec
The crio service is stopped on the Raspberry Pi, so the crictl command will not work directly on the Pi. The crictl commands work within the microshift container in podman as shown in the watch command above.
Now, we can run the samples shown earlier. To run the Virtual Machine examples in the all-in-one MicroShift, we need to execute the mount with --make-shared as follows in the microshift container to prevent the “Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount” event from virt-handler.
podman exec -it microshift mount --make-shared /
We may also preload the virtual machine images using "crictl pull".
podman exec -it microshift crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
For the Virtual Machine Instance Sample 4, we can connect to the vmi-fedora by exposing the ssh port for the Virtual Machine Instance as a NodePort Service after the instance is started. This NodePort is within the all-in-one pod that is running in podman.
oc get vmi,pods
virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
oc get svc vmi-fedora-ssh # Get the nodeport
podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift # Get the podman_ip_address
oc run -i --tty ssh-proxy --rm --image=karve/alpine-sshclient:arm64 --restart=Never -- /bin/sh -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@$podman_ip_address -p $nodeport"
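In the oc run command above, $nodeport and $podman_ip_address can be captured first, for example:
nodeport=$(oc get svc vmi-fedora-ssh -o jsonpath='{.spec.ports[0].nodePort}')
podman_ip_address=$(podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift)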
The IP address of the all-in-one microshift podman container is 10.88.0.2. We expose target port 22 on the VM as a service on port 22, which is in turn exposed on the microshift container with the allocated NodePort 31871 as seen below. We run and exec into a new pod called ssh-proxy (using the karve/alpine-sshclient:arm64 image, which already contains an ssh client) and ssh to port 31871 on the all-in-one microshift container. This takes us to port 22 of the VMI as shown below:
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─# oc get vmi,pods
NAME AGE PHASE IP NODENAME READY
virtualmachineinstance.kubevirt.io/vmi-fedora 3m5s Running 10.42.0.14 microshift.example.com True
NAME READY STATUS RESTARTS AGE
pod/virt-launcher-vmi-fedora-6zzgl 2/2 Running 0 2m40s
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─# virtctl expose vmi vmi-fedora --port=22 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Service vmi-fedora-ssh successfully exposed for vmi vmi-fedora
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─# oc get svc vmi-fedora-ssh
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vmi-fedora-ssh NodePort 10.43.52.250 <none> 22:31871/TCP 15s
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─# podman inspect --format "{{.NetworkSettings.IPAddress}}" microshift
10.88.0.2
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─# oc run -i --tty ssh-proxy --rm --image=karve/alpine-sshclient:arm64 --restart=Never -- /bin/sh -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null fedora@10.88.0.2 -p 31871"
If you don't see a command prompt, try pressing enter.
[fedora@vmi-fedora ~]$ ping -c 2 google.com
PING google.com (142.250.80.46) 56(84) bytes of data.
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=1 ttl=115 time=3.41 ms
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=2 ttl=115 time=3.57 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.411/3.489/3.567/0.078 ms
[fedora@vmi-fedora ~]$ sudo dnf install -y qemu-guest-agent >/dev/null
[fedora@vmi-fedora ~]$ sudo systemctl enable --now qemu-guest-agent
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.88.0.2 closed.
pod "ssh-proxy" deleted
┌──(root㉿rpi)-[~/microshift/raspberry-pi/vmi]
└─#
The QEMU guest agent that we installed is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks.
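Once the agent is running, the VMI status is populated with guest OS details, which can be checked from the host (a sketch):
oc get vmi vmi-fedora -o jsonpath='{.status.guestOSInfo}'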
After we are done, we can delete the all-in-one microshift container.
podman rm -f microshift && podman volume rm microshift-data
or if started using systemd, then
systemctl stop microshift && podman volume rm microshift-data
Error Messages in MicroShift Logs
Message: failed to fetch hugetlb info
The following journalctl command will continuously show warning messages “failed to fetch hugetlb info”. The default kernel on the Raspberry Pi 4 does not support HugeTLB hugepages.
journalctl -u microshift -f
Output:
...
May 23 11:49:20 rpi.example.com microshift[3112]: W0523 11:49:20.347604 3112 container.go:586] Failed to update stats for container "/system.slice/crio-00c0b63eeee509979e7652cca8b91a1e9e900c1989b7fe54f6a05f6591de0108.scope": error while statting cgroup v2: [open /sys/kernel/mm/hugepages: no such file or directory
May 23 11:49:20 rpi.example.com microshift[3112]: failed to fetch hugetlb info
...
To remove these messages, we can recompile the microshift binary using the changes from hugetlb.go.
apt -y install pkg-config
# Install golang
wget https://golang.org/dl/go1.17.2.linux-arm64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.17.2.linux-arm64.tar.gz
rm -f go1.17.2.linux-arm64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
cat << EOF >> /root/.bashrc
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
EOF
mkdir $GOPATH
git clone https://github.com/thinkahead/microshift.git
cd microshift
Edit the file vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs2/hugetlb.go and remove the return on err.
func statHugeTlb(dirPath string, stats *cgroups.Stats) error {
	hugePageSizes, _ := cgroups.GetHugePageSize()
	//hugePageSizes, err := cgroups.GetHugePageSize()
	//if err != nil {
	//	return errors.Wrap(err, "failed to fetch hugetlb info")
	//}
	hugetlbStats := cgroups.HugetlbStats{}
Build and replace the microshift binary. Restart MicroShift.
make
mv microshift /usr/local/bin/.
systemctl restart microshift
Conclusion
In this Part 19, we saw multiple options to run MicroShift on the Raspberry Pi 4 with Kali Linux (64 bit). We used dynamic persistent volumes to install InfluxDB/Telegraf/Grafana with a dashboard to show SenseHat sensor data. We ran samples that used the Sense Hat/USB camera and worked with a sample that sent the pictures and web socket messages to Node Red when a person was detected. We installed the OKD Web Console and saw how to connect to a Virtual Machine Instance using KubeVirt on MicroShift with Kali Linux. Finally, we saw how to run jupyter notebooks with the license plate recognition, object detection, image segmentation, image classification and audio keyword recognition. In Part 20, we will work with ArchLinux.
Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.
References