MicroShift – Part 30: Raspberry Pi 4 with Yocto Langdale

By Alexei Karve posted Tue January 31, 2023 03:24 PM


Raspberry Pi 4 with Linux Distribution built using Yocto

Introduction

MicroShift is a Red Hat-led open-source community project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. Red Hat Device Edge delivers an enterprise-ready and supported distribution of MicroShift. Red Hat Device Edge is planned as a developer preview in early 2023 and is expected to be generally available with full support later in 2023.

Over the last 29 parts, we have worked with MicroShift 4.8 on multiple distros of Linux on the Raspberry Pi 4 and Jetson Nano. In this Part 30, we will build a Linux distribution with Yocto for the Raspberry Pi 4 and work with MicroShift in an all-in-one container using podman. We will run an object detection sample and send messages to Node Red installed on MicroShift. We will also use .NET to drive a Raspberry Pi Sense HAT, run sample Python Operators with kopf that connect to PostgreSQL and MongoDB, and finally run a Jupyter notebook.

The Yocto Project is an open-source collaboration project that helps developers create custom Linux-based systems designed for embedded products, regardless of the product’s hardware architecture. The Yocto Project employs a collection of components and tools (open source projects and metadata) that are separate from the reference distribution (Poky) and the OpenEmbedded Build System (BitBake and OpenEmbedded-Core). The initial build times are long due to the large number of packages built from scratch for a fully functioning Linux system. Once that initial build is completed, however, the shared-state (sstate) cache mechanism the Yocto Project uses keeps the system from rebuilding packages that have not been “touched” since the last build, which significantly reduces times for successive builds. With CROPS, which leverages containers, you can create a Yocto Project development environment that is operating system agnostic: you may run the Yocto builds on Linux or within a poky container on Windows or macOS.
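
For example, the CROPS reference container can be run directly against a named volume. The crops/poky image and its --workdir flag below follow the upstream CROPS conventions and are shown only as a sketch; this post uses the gmacario/build-yocto image later instead.

# Run a throwaway CROPS build container with a named volume as the work area
podman run --rm -it -v workdirvol:/workdir crops/poky --workdir=/workdir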

Building an Image for the Raspberry Pi 4 with Yocto

We use release 4.1 (Langdale) of the Yocto Project. Directly building with Yocto on macOS does not work; we need to run within a Linux VM or container, so we will run in a container with podman. If Yocto tries to build its files using podman with a directory volume mounted on macOS, the build fails, because the macOS file system does not support this in the same way that Linux expects. So, we need to use a podman volume. Run the following steps to create the Yocto image on your MacBook Pro.

brew install qemu podman

Podman machine defaults to 1 vCPU and 2 GiB RAM. We change it to use 6 CPUs and 10 GB RAM. The default disk size is 100 GB. This is sufficient to build the image with "rm_work" set later in local.conf to lower the amount of data stored in the data cache as well as on disk. If you want to save the work, you need at least 120 GB.

podman machine init --cpus 6 --memory $(( 1024 * 10 )) # --disk-size 120

If you started the machine with defaults, you may change it by stopping the machine and setting the resources as follows:

podman machine stop
podman machine set --cpus=6 --memory=$(( 1024 * 10 ))
podman machine start
podman machine list

Output:

MBP:~ karve$ podman machine init
Downloading VM image: fedora-coreos-37.20230122.2.0-qemu.x86_64.qcow2.xz: done
Extracting compressed file
Image resized.
Machine init complete
To start your machine run:

	podman machine start

MBP:~ karve$ podman machine start
Starting machine "podman-machine-default"
Waiting for VM ...
Mounting volume... /Users/karve:/Users/karve

This machine is currently configured in rootless mode. If your containers
require root permissions (e.g. ports < 1024), or if you run into compatibility
issues with non-podman clients, you can switch using the following command:

	podman machine set --rootful

API forwarding listening on: /Users/karve/.local/share/containers/podman/machine/podman-machine-default/podman.sock

The system helper service is not installed; the default Docker API socket
address can't be used by podman. If you would like to install it run the
following commands:

	sudo /usr/local/Cellar/podman/4.3.1/bin/podman-mac-helper install
	podman machine stop; podman machine start

You can still connect Docker API clients by setting DOCKER_HOST using the
following command in your terminal session:

	export DOCKER_HOST='unix:///Users/karve/.local/share/containers/podman/machine/podman-machine-default/podman.sock'

Machine "podman-machine-default" started successfully

If you stop and restart the machine, you may need to add a system connection:

podman system connection add yocto unix:///Users/$Username/.local/share/containers/podman/machine/podman-machine-default/podman.sock
podman system connection list

The file /Users/$Username/.config/containers/podman/machine/qemu/podman-machine-default.json contains the machine parameters, and you may modify that file directly.
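
A quick way to confirm the current settings without editing the JSON by hand is podman machine inspect; the Resources field names below are from podman 4.x and may differ across versions:

podman machine inspect podman-machine-default | grep -iE '"(CPUs|Memory|DiskSize)"'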

podman machine set --rootful
podman machine start

Check that the CPUs are allocated by logging in to the VM:

podman machine ssh
cat /proc/cpuinfo
exit

On your MacBook Pro, run the container in which we will set up Yocto. We use the container image built from this Dockerfile. As stated earlier, using a path on macOS as the workdir volume will not work; we have to use a podman/docker volume.

# This locally mounted volume will not work
# podman run --privileged --name=yocto -it -v `pwd`/workdir:/workdir gmacario/build-yocto

# A docker volume works - We use this workdirvol as the volume name
podman run --privileged --name=yocto -it -v workdirvol:/workdir gmacario/build-yocto

Install the required additional packages in the Yocto container:

sudo apt update
sudo apt -y install zstd liblz4-tool vim

Download the required layers that we will use for building, and update bblayers.conf:

cd /workdir
# sudo chown -R build:build *
git clone -b langdale --depth=1 git://git.yoctoproject.org/poky.git
git clone -b langdale --depth=1 git://git.yoctoproject.org/meta-raspberrypi.git
git clone -b langdale --depth=1 git://git.yoctoproject.org/meta-virtualization.git
git clone -b langdale --depth=1 git://git.yoctoproject.org/meta-security.git
git clone -b langdale --depth=1 git://git.yoctoproject.org/meta-selinux.git
git clone -b langdale --depth=1 git://git.openembedded.org/meta-openembedded.git

source poky/oe-init-build-env

bitbake-layers add-layer ../meta-openembedded/meta-oe
bitbake-layers add-layer ../meta-openembedded/meta-python
bitbake-layers add-layer ../meta-openembedded/meta-multimedia
bitbake-layers add-layer ../meta-openembedded/meta-networking
bitbake-layers add-layer ../meta-openembedded/meta-perl
bitbake-layers add-layer ../meta-openembedded/meta-filesystems
bitbake-layers add-layer ../meta-virtualization
bitbake-layers add-layer ../meta-selinux
bitbake-layers add-layer ../meta-security
bitbake-layers add-layer ../meta-raspberrypi
bitbake-layers show-layers

If you disconnect from the container (or reboot your Mac), you can restart the container and exec back into it:

podman start yocto
podman exec -it yocto bash
cd /workdir
source poky/oe-init-build-env

Update the local.conf

vi build/conf/local.conf

Comment out the MACHINE line that is currently enabled and append the following to the local.conf:

# target
MACHINE ?= "raspberrypi4-64"
ENABLE_UART = "1"
ENABLE_I2C = "1"
KERNEL_MODULE_AUTOLOAD:rpi += "i2c-dev i2c-bcm2708"
# add a feature
EXTRA_IMAGE_FEATURES:append = " debug-tweaks ssh-server-dropbear package-management tools-sdk"
DISTRO_FEATURES:append = " bluez5 bluetooth wifi polkit acl xattr pam virtualization security systemd"
DISTRO_FEATURES:remove = " sysvinit"
VIRTUAL-RUNTIME_init_manager = "systemd"
LICENSE_FLAGS_ACCEPTED = "synaptics-killswitch"
# add a recipe
CORE_IMAGE_EXTRA_INSTALL:append = " vim"
DISTRO_FEATURES_BACKFILL_CONSIDERED = ""
IMAGE_INSTALL:append = " ntpdate i2c-tools podman crun buildah skopeo cgroup-lite procps ca-certificates kernel-modules python3-pip python3-dbus ebtables cri-o cri-tools e2fsprogs-resize2fs linux-firmware-bcm43430 bluez5 python3-smbus wpa-supplicant bridge-utils git hostapd"
IMAGE_ROOTFS_EXTRA_SPACE = "8097152"
# BB_NUMBER_THREADS = "9"
# PARALLEL_MAKE = "-j 9"
INHERIT += "rm_work"
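
Before starting the long build, you can confirm that the overrides took effect; bitbake -e dumps the final parsed configuration, which you can grep for the variables set above:

bitbake -e rpi-test-image | grep "^MACHINE="
bitbake -e rpi-test-image | grep "^DISTRO_FEATURES="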

Run the bitbake command to generate the image. The above features add a lot more components than we actually need for running MicroShift. These were added to permit some development and experimentation for learning Yocto, with a generous 8 GB of extra rootfs space. The python3, pip3, vim, i2cdetect, podman, and crun runtime are installed. The debug-tweaks feature allows us to log in as root without a password.

bitbake rpi-test-image -n
bitbake rpi-test-image --runonly=fetch
bitbake rpi-test-image # This will take a couple of hours

This produces rpi-test-image-raspberrypi4-64.wic.bz2. The balenaEtcher tool that we use to write to the MicroSDXC card does not understand the bz2 format built by bitbake, so we extract the image:

cd tmp/deploy/images/raspberrypi4-64
bzip2 -d -f rpi-test-image-raspberrypi4-64.wic.bz2

Output:

build@216abbbf40c7:/workdir/build$ rm -rf tmp
build@216abbbf40c7:/workdir/build$ bitbake rpi-test-image
Loading cache: 100% |                                                                                                                                  | ETA:  --:--:--
Loaded 0 entries from dependency cache.
Parsing recipes: 100% |#################################################################################################################################| Time: 0:01:34
Parsing of 2939 .bb files complete (0 cached, 2939 parsed). 4503 targets, 143 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies

Build Configuration:
BB_VERSION           = "2.2.0"
BUILD_SYS            = "x86_64-linux"
NATIVELSBSTRING      = "ubuntu-20.04"
TARGET_SYS           = "aarch64-poky-linux"
MACHINE              = "raspberrypi4-64"
DISTRO               = "poky"
DISTRO_VERSION       = "4.1.2"
TUNE_FEATURES        = "aarch64 armv8a crc cortexa72"
TARGET_FPU           = ""
meta
meta-poky
meta-yocto-bsp       = "langdale:c805f0f90a2d1b0de49978e6516a4e7f9b11aff4"
meta-raspberrypi     = "langdale:6f5771d2bcfbfb8f8ce17b455c29a5703f2027c9"
meta-oe
meta-python
meta-multimedia
meta-networking
meta-perl
meta-filesystems     = "langdale:f8cb46d803190bb02085c8a7d20957a71d32f311"
meta-virtualization  = "langdale:d1cbc4c9fc44f0c5994a1276e38cdbb7bdb5bbd3"
meta-selinux         = "langdale:f6d73a35d3853ab09297fa1738890706901f43b8"
meta-security        = "langdale:2aa48e6f4e519abc7d6bd56da2c067309a303e80"

Initialising tasks: 100% |##############################################################################################################################| Time: 0:00:22
Sstate summary: Wanted 2991 Local 2843 Mirrors 0 Missed 148 Current 0 (95% match, 0% complete)
NOTE: Executing Tasks
NOTE: Tasks Summary: Attempted 7610 tasks of which 5808 didn't need to be rerun and all succeeded.
build@216abbbf40c7:/workdir/build$ cd tmp/deploy/images/raspberrypi4-64

On the Mac host, copy the image out from the container so we can write it to the MicroSDXC card in the next section.

podman cp yocto:/workdir/build/tmp/deploy/images/raspberrypi4-64/rpi-test-image-raspberrypi4-64.wic .

Setting up the Raspberry Pi 4 with Yocto

Run the following steps to set up the Raspberry Pi 4:

  1. Write the image rpi-test-image-raspberrypi4-64.wic to the MicroSDXC card using balenaEtcher or the Raspberry Pi Imager, insert the MicroSDXC card into the Raspberry Pi 4, and power on
  2. Find the Ethernet DHCP IP address of your Raspberry Pi 4 by running nmap on your MacBook with your subnet
    $ sudo nmap -sn 192.168.1.0/24
    
    Nmap scan report for 192.168.1.209
    Host is up (0.0043s latency).
    MAC Address: E4:5F:01:2E:D8:95 (Raspberry Pi Trading)
    
  3. Log in as root. A password is not required.
    ssh root@$ipaddress
  4. Extend the disk: increase the space allocated to /dev/mmcblk0p2 and resize the file system

    Output:

    root@raspberrypi4-64:~# fdisk -lu /dev/mmcblk0
    Disk /dev/mmcblk0: 58.24 GiB, 62534975488 bytes, 122138624 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xae38f2e4
    
    Device         Boot  Start      End  Sectors  Size Id Type
    /dev/mmcblk0p1 *      8192   154391   146200 71.4M  c W95 FAT32 (LBA)
    /dev/mmcblk0p2      155648 25128959 24973312 11.9G 83 Linux
    
    root@raspberrypi4-64:~# fdisk /dev/mmcblk0
    
    Welcome to fdisk (util-linux 2.38.1).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    
    This disk is currently in use - repartitioning is probably a bad idea.
    It's recommended to umount all file systems, and swapoff all swap
    partitions on this disk.
    
    
    Command (m for help): p
    
    Disk /dev/mmcblk0: 58.24 GiB, 62534975488 bytes, 122138624 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xae38f2e4
    
    Device         Boot  Start      End  Sectors  Size Id Type
    /dev/mmcblk0p1 *      8192   154391   146200 71.4M  c W95 FAT32 (LBA)
    /dev/mmcblk0p2      155648 25128959 24973312 11.9G 83 Linux
    
    Command (m for help): d
    Partition number (1,2, default 2): 2
    
    Partition 2 has been deleted.
    
    Command (m for help): n
    Partition type
       p   primary (1 primary, 0 extended, 3 free)
       e   extended (container for logical partitions)
    Select (default p): p
    Partition number (2-4, default 2): 2
    First sector (2048-122138623, default 2048): 155648
    Last sector, +/-sectors or +/-size{K,M,G,T,P} (155648-122138623, default 122138623):
    
    Created a new partition 2 of type 'Linux' and of size 58.2 GiB.
    Partition #2 contains a ext4 signature.
    
    Do you want to remove the signature? [Y]es/[N]o: N
    
    Command (m for help): w
    
    The partition table has been altered.
    Syncing disks.
    
    root@raspberrypi4-64:~# resize2fs /dev/mmcblk0p2
    resize2fs 1.46.5 (30-Dec-2021)
    Filesystem at /dev/mmcblk0p2 is mounted on /; on-line resizing required
    old_desc_blocks = 2, new_desc_blocks = 8
    The filesystem on /dev/mmcblk0p2 is now 15247872 (4k) blocks long.
    
    root@raspberrypi4-64:~# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/root        57G 1009M   53G   2% /
    devtmpfs        3.7G     0  3.7G   0% /dev
    tmpfs           3.9G  100K  3.9G   1% /dev/shm
    tmpfs           1.6G   12M  1.6G   1% /run
    tmpfs           4.0M     0  4.0M   0% /sys/fs/cgroup
    tmpfs           3.9G     0  3.9G   0% /tmp
    tmpfs           3.9G   16K  3.9G   1% /var/volatile
    /dev/mmcblk0p1   72M   46M   26M  65% /boot
    
  5. Set the hostname with a domain and add to /etc/hosts
    hostnamectl set-hostname yocto.example.com
    echo $ipaddress yocto yocto.example.com >> /etc/hosts
    
  6. Update the kernel parameters

    Concatenate the following onto the end of the existing line (do not add a new line) in /boot/cmdline.txt and reboot:

     cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
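
    One way to append this in place without introducing a newline is a sed edit of the last line (a sketch; back up the file first and run it only once):

     cp /boot/cmdline.txt /boot/cmdline.txt.bak
     sed -i '$ s/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
     reboot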
    

    Check the cgroup after reboot

    mount | grep cgroup
    cat /proc/cgroups | column -t # Check that memory and cpuset are present
    

    Output (hugetlb is not present):

    root@yocto:~# mount | grep cgroup
    tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,size=4096k,nr_inodes=1024,mode=755)
    cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
    cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
    cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
    cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
    cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
    cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
    cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
    cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
    cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
    cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
    cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
    root@raspberrypi4-64:~# cat /proc/cgroups | column -t # Check that memory and cpuset are present
    #subsys_name  hierarchy  num_cgroups  enabled
    cpuset        7          1            1
    cpu           3          35           1
    cpuacct       3          35           1
    blkio         4          35           1
    memory        9          66           1
    devices       10         35           1
    freezer       8          1            1
    net_cls       5          1            1
    perf_event    6          1            1
    net_prio      5          1            1
    pids          2          38           1
    
  7. Check the release
    cat /etc/os-release
    
    root@yocto:~# cat /etc/os-release
    ID=poky
    NAME="Poky (Yocto Project Reference Distro)"
    VERSION="4.1.2 (langdale)"
    VERSION_ID=4.1.2
    PRETTY_NAME="Poky (Yocto Project Reference Distro) 4.1.2 (langdale)"
    DISTRO_CODENAME="langdale"
    

Install the Sense HAT

The Sense HAT is an add-on board for the Raspberry Pi. It has an 8×8 RGB LED matrix, a five-button joystick, and includes the following sensors: an Inertial Measurement Unit (accelerometer, gyroscope, magnetometer), temperature, barometric pressure, and humidity. If you have the Sense HAT attached, install the Python libraries.

Install the Sense HAT libraries:

pip3 install Cython Pillow numpy sense_hat smbus

Check the Sense HAT with i2cdetect:

root@yocto:~# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Test the Sense HAT samples for the LED matrix and sensors.

cd
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/sensehat-fedora-iot

# Enable random LEDs
python3 sparkles.py # Ctrl-C to interrupt

# Show multiple screens to test LEDs
python3 rainbow.py # Ctrl-C to interrupt

# The first time you run temperature.py, you may see “Temperature: 0 C”. Just run it again.
python3 temperature.py

# Show the Temperature, Pressure and Humidity
python3 testsensehat.py # Ctrl-C to interrupt

# When a magnet gets close to SenseHAT, the LEDs will all turn red for 1/5 of a second
python3 magnetometer.py

# Find Magnetic North
python3 compass.py

# Test the USB camera
pip3 install pygame --upgrade
python3 testcam.py # It will create a file 101.bmp

MicroShift Containerized All-In-One on the Raspberry Pi 4 with Yocto

The MicroShift binary and the CRI-O service run within a container, and the data is stored in a podman volume, microshift-data.

mkdir /var/hpvolumes

We will run the all-in-one MicroShift in podman using prebuilt images (replace the image in the podman run command below with the latest image). Use a different name for the MicroShift all-in-one pod (set with the -h parameter for podman below) than the hostname of the Raspberry Pi 4.

podman volume rm microshift-data;podman volume create microshift-data
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 -p 30080:30080 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2022-04-20-182108-linux-nft-arm64

Now that you know the podman command to start the MicroShift all-in-one container, you may alternatively use the following systemd service.

mkdir -p /usr/lib/systemd/system

cat << EOF > /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift all-in-one
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=120
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm --replace -d --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -v /lib/modules:/lib/modules --label io.containers.autoupdate=registry -p 6443:6443 -p 80:80 -p 30080:30080 quay.io/microshift/microshift-aio:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target 
EOF

systemctl daemon-reload
systemctl start microshift
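
If you want the all-in-one container to come up automatically on boot, you can also enable the unit:

systemctl enable microshift
systemctl status microshift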

Delete the erroneous routes:

route del default gw 0.0.0.0 netmask 255.255.255.255
route del default gw 0.0.0.0 netmask 0.0.0.0

We can inspect the microshift-data volume to find the path for kubeconfig.

podman volume inspect microshift-data

Output:

root@yocto:~/microshift/raspberry-pi/sensehat-fedora-iot# podman volume inspect microshift-data
[
     {
          "Name": "microshift-data",
          "Driver": "local",
          "Mountpoint": "/var/lib/containers/storage/volumes/microshift-data/_data",
          "CreatedAt": "2023-01-30T17:10:59.54641039Z",
          "Labels": {},
          "Scope": "local",
          "Options": {},
          "MountCount": 0
     }
]
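
Rather than copying the Mountpoint by hand, you can extract it with podman's Go-template output and build the kubeconfig path from it; this is equivalent to the explicit export used below:

VOLDIR=$(podman volume inspect microshift-data --format "{{.Mountpoint}}")
export KUBECONFIG=$VOLDIR/microshift/resources/kubeadmin/kubeconfig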

Install kubectl and the OpenShift client

ARCH=arm64
cd /tmp
export OCP_VERSION=4.9.11 && \
    wget https://mirror2.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/$OCP_VERSION/openshift-client-linux-$OCP_VERSION.tar.gz && \
    tar -xzvf openshift-client-linux-$OCP_VERSION.tar.gz && \
    rm -f openshift-client-linux-$OCP_VERSION.tar.gz && \
    install -t /usr/local/bin {kubectl,oc} && \
    rm -f {README.md,kubectl,oc}

It will take around 5 minutes for all pods to start within the all-in-one MicroShift container. On the host Raspberry Pi 4, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above. Check the status of the node and pods using the kubectl or oc client.

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "oc get nodes;oc get pods -A;podman exec -it microshift crictl ps -a"

Note that the crictl commands run within the microshift container in podman as shown in the watch command above.

kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'

Samples to run on MicroShift

We will run samples that will show the use of dynamic persistent volume, SenseHat and the USB camera.

1. InfluxDB/Telegraf/Grafana

The source code for this InfluxDB sample is available on GitHub.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb

Replace the coreos node name in the persistent volume claims with microshift.example.com (our current node name inside the microshift container). Do not set it to yocto.example.com.

sed -i "s|coreos|microshift.example.com|" influxdb-data-dynamic.yaml grafana/grafana-data-dynamic.yaml

This script will allocate dynamic persistent volumes using influxdb-data-dynamic.yaml and grafana-data-dynamic.yaml. The annotation provisionOnNode and the storageClassName are required for dynamic PVs.

  annotations:
    kubevirt.io/provisionOnNode: microshift.example.com
spec:
  storageClassName: kubevirt-hostpath-provisioner 

We create and push the “measure:latest” image using the sensor/Dockerfile. If you want to run all the steps in a single command, just execute runall-balena-dynamic.sh.

./runall-balena-dynamic.sh

The script will create a new project influxdb for this sample, install InfluxDB, install the pod for SenseHat measurements, install Telegraf and check the measurements for the telegraf database in InfluxDB. Finally, it will install Grafana.

Output:

root@yocto:~/microshift/raspberry-pi/influxdb# ./runall-balena-dynamic.sh
Now using project "influxdb" on server "https://127.0.0.1:6443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app rails-postgresql-example

to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

configmap/influxdb-config created
secret/influxdb-secrets created
persistentvolumeclaim/influxdb-data created
deployment.apps/influxdb-deployment created
service/influxdb-service created
deployment.apps/influxdb-deployment condition met
deployment.apps/measure-deployment created
deployment.apps/measure-deployment condition met
configmap/telegraf-config created
secret/telegraf-secrets created
deployment.apps/telegraf-deployment created
deployment.apps/telegraf-deployment condition met
persistentvolumeclaim/grafana-data created
deployment.apps/grafana created
service/grafana-service created
deployment.apps/grafana condition met
route.route.openshift.io/grafana-service exposed
NAME              HOST/PORT                                PATH   SERVICES          PORT   TERMINATION   WILDCARD
grafana-service   grafana-service-influxdb.cluster.local          grafana-service   3000                 None

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" line to /etc/hosts on your laptop and log in to http://grafana-service-influxdb.cluster.local/login using admin/admin. You may change the password on first login or click on Skip. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift.
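
For example, with the Raspberry Pi address found earlier, the entry on the laptop would be:

192.168.1.209 grafana-service-influxdb.cluster.local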

Grafana Analysis Server


Go back and open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.

Grafana SenseHat Dashboard


Finally, after you are done working with this sample, you can run deleteall-balena-dynamic.sh:

cd ~/microshift/raspberry-pi/influxdb
./deleteall-balena-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes.
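
You can verify that the dynamically provisioned volumes are gone before moving on:

oc get pv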

2. Node Red live data dashboard with SenseHat sensor charts

We will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image “karve/nodered:arm64”

cd docker-custom/
# Replace docker with podman in docker-debian.sh and run it
./docker-debian.sh
podman push karve/nodered:arm64
cd ..

Deploy Node Red with a persistent volume for /data within the Node Red container:

mkdir /var/hpvolumes/nodered
rm -rf /var/hpvolumes/nodered/*;cp -r nodered-volume/* /var/hpvolumes/nodered/.
oc new-project nodered
oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml
oc get routes
oc -n nodered wait deployment nodered-deployment --for condition=Available --timeout=300s
oc logs deployment/nodered-deployment -f

Add the IP address of the Raspberry Pi 4 device for nodered-svc-nodered.cluster.local to /etc/hosts on your laptop and browse to http://nodered-svc-nodered.cluster.local/

The following modules required for the dashboard have been preinstalled: node-red-dashboard, node-red-node-smooth, and node-red-node-pi-sense-hat. These can be seen under “Manage Palette - Install”. Flow 1 and Flow 2 have already been imported from the nodered sample. This import into Node Red can also be done manually under “Import Nodes”, followed by clicking “Deploy”.

Double-click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard. Click on the outward arrow in the tabs to view the sensor charts. You will see Home by default. You can see the state of the joystick: Up, Down, Left, Right, or Pressed. Click on the Hamburger Menu (3 lines) and select PiSenseHAT. If you selected Flow 1, you can click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the Sense HAT LED matrix.

NodeRed Sensor Charts

We can continue running the next sample that will reuse this Node Red deployment. If the Node Red Deployment is no longer required, we can delete it as follows:

cd ~/microshift/raspberry-pi/nodered
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered3.yaml -f noderedroute.yaml -n nodered
oc project default
oc delete project nodered

3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 2.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection. A pod in MicroShift will send pictures and WebSocket chat messages to Node Red when a person is detected.

podman build -f Dockerfile -t docker.io/karve/object-detection-raspberrypi4 .
podman push docker.io/karve/object-detection-raspberrypi4:latest

Update the env: WebSocketURL and ImageUploadURL as shown below. Also update the hostAliases in object-detection.yaml to point to your Raspberry Pi 4 IP address (192.168.1.209 shown below).

        env:
          - name: WebSocketURL
            value: "ws://nodered-svc-nodered.cluster.local/ws/chat"
          - name: ImageUploadURL
            value: http://nodered-svc-nodered.cluster.local/upload

      hostAliases:
      - hostnames:
        - nodered-svc-nodered.cluster.local
        ip: 192.168.1.209

Create the deployment

oc project default
vi object-detection.yaml # set the hostnames, WebSocketURL and ImageUploadURL
oc apply -f object-detection.yaml
oc -n default wait deployment object-detection-deployment --for condition=Available --timeout=300s

We will see pictures being sent to Node Red when a person is detected at http://nodered-svc-nodered.cluster.local/#flow/3e30dc50ae28f61f and chat messages are sent to http://nodered-svc-nodered.cluster.local/chat

NodeRed Chat Messages


If instead you see the following error in the logs, it means you are using wss:// instead of ws:// for the local Node Red. Change it to ws:// and replace the deployment. The wss:// for WebSocketURL and the https:// for ImageUploadURL can be used to connect to a Node Red deployment on IBM Cloud.

[root@yocto:~/microshift/raspberry-pi/object-detection]# oc logs deployment/object-detection-deployment -f
Traceback (most recent call last):
  File "//detect.py", line 18, in <module>
    ws.connect(webSocketURL)
  File "/usr/lib/python3/dist-packages/websocket/_core.py", line 222, in connect
    self.sock, addrs = connect(url, self.sock_opt, proxy_info(**options),
  File "/usr/lib/python3/dist-packages/websocket/_http.py", line 127, in connect
    sock = _ssl_socket(sock, options.sslopt, hostname)
  File "/usr/lib/python3/dist-packages/websocket/_http.py", line 264, in _ssl_socket
    sock = _wrap_sni_socket(sock, sslopt, hostname, check_hostname)
  File "/usr/lib/python3/dist-packages/websocket/_http.py", line 239, in _wrap_sni_socket
    return context.wrap_socket(
  File "/usr/lib/python3.9/ssl.py", line 500, in wrap_socket
    return self.sslsocket_class._create(
  File "/usr/lib/python3.9/ssl.py", line 1040, in _create
    self.do_handshake()
  File "/usr/lib/python3.9/ssl.py", line 1309, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)
[root@yocto:~/microshift/raspberry-pi/object-detection]# vi object-detection.yaml
[root@yocto:~/microshift/raspberry-pi/object-detection]# oc replace --force -f object-detection.yaml
deployment.apps "object-detection-deployment" deleted
deployment.apps/object-detection-deployment replaced

When we are done testing, we can delete the deployment

cd ~/microshift/raspberry-pi/object-detection
oc delete -f object-detection.yaml

4. Use .NET to drive a Raspberry Pi Sense HAT

We will run the .NET sample to retrieve sensor values from the Sense HAT, respond to joystick input, and drive the LED matrix. The source code is on GitHub.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/dotnet

You may build the image using the Dockerfile, which uses sensehat-quickstart-1.sh to install .NET and build the SenseHat.Quickstart sample, and test it directly using podman as shown in Part 25. Now, let’s run the sample in MicroShift using the prebuilt arm64v8 image “docker.io/karve/sensehat-dotnet”.

oc new-project dotnet
oc apply -f dotnet.yaml
oc -n dotnet wait deployment dotnet-deployment --for condition=Available --timeout=300s
oc logs deployment/dotnet-deployment -f

We can observe the console log output as sensor data is displayed. The LED matrix displays a yellow pixel on a field of blue. Holding the joystick in any direction moves the yellow pixel in that direction. Clicking the center joystick button causes the background to switch from blue to red.

Temperature Sensor 1: 33.6°C
Temperature Sensor 2: 32.8°C
Pressure: 1006.89 hPa
Altitude: 56.6 m
Acceleration: <-0.43927002, -0.43927002, -0.43927002> g
Angular rate: <228.01505, 228.01505, 228.01505> DPS
Magnetic induction: <-0.080078125, 0.0949707, -0.84350586> gauss
Relative humidity: 26.3%
Heat index: 32.2°C
Dew point: 11.6°C
…
.NET on Raspberry Pi 4


When we are done, we can delete the deployment

oc delete -f dotnet.yaml

5. Install Metrics Server

This will enable us to run the “kubectl top” and “oc adm top” commands.

oc apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Wait for the metrics-server to start in the kube-system namespace
kubectl get deployment metrics-server -n kube-system
kubectl get events -n kube-system
# Wait for a couple of minutes for metrics to be collected
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
dnf install -y jq
kubectl get --raw /api/v1/nodes/$(kubectl get nodes -o json | jq -r '.items[0].metadata.name')/proxy/stats/summary

Output:

root@yocto:~/microshift/raspberry-pi/dotnet# oc adm top nodes;oc adm top pods -A
NAME                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
microshift.example.com   970m         24%    2325Mi          30%
NAMESPACE                       NAME                                  CPU(cores)   MEMORY(bytes)
dotnet                          dotnet-deployment-84cd4955fd-lbf67    4m           107Mi
kube-system                     metrics-server-684454657f-45lz4       14m          14Mi
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-v9bwk   1m           7Mi
openshift-dns                   dns-default-fwzgq                     7m           18Mi
openshift-dns                   node-resolver-7mcx2                   0m           3Mi
openshift-ingress               router-default-85bcfdd948-ctmfq       4m           26Mi
openshift-service-ca            service-ca-7764c85869-xx5s8           19m          20Mi

6. Postgresql database server

The source code is on GitHub. We will deploy PostgreSQL and use this instance in the next Sample 7.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/pg

Create a new project pg. Create the configmap, pv, pvc, and deployment for PostgreSQL:

oc new-project pg
mkdir -p /var/hpvolumes/pg


oc apply -f hostpathpvc.yaml -f hostpathpv.yaml -f pg-configmap.yaml -f pg.yaml
oc get configmap
oc get svc pg-svc
oc get all -lapp=pg
oc -n pg wait deployment pg-deployment --for condition=Available --timeout=300s
oc logs deployment/pg-deployment -f

We can continue to the next sample where we will use this postgresql deployment to demonstrate a python operator.

Instead, to delete the deployment and project, run:

cd ~/microshift/raspberry-pi/pg
oc delete -f hostpathpvc.yaml -f hostpathpv.yaml -f pg-configmap.yaml -f pg.yaml
oc delete project pg

7. Running a Python Operator using kopf (Postgresql)

We will run the Operator that is explained in the YouTube video. In this sample, we will create and delete a student record in PostgreSQL using a Python Operator “postgres-writer-operator” with the Custom Resource Definition postgres-writers.demo.yash.com. A sample Custom Resource sample-student is created in the default namespace. The Operator sees this and inserts an entry into the students table. When the Resource is deleted, the Operator deletes this entry from the students table.
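
For reference, the sample.yaml Custom Resource presumably looks like the sketch below; the apiVersion and spec field names are inferred from the CRD name and the psql output later in this section, so check the repository for the authoritative version.

apiVersion: demo.yash.com/v1
kind: PostgresWriter
metadata:
  name: sample-student
  namespace: default
spec:
  name: alex
  age: 23
  country: canada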

Connect to the PostgreSQL server:

oc exec -it deployment/pg-deployment -- bash
psql --host localhost --port 5432 --user postgresadmin --dbname postgresdb # test123 as password

Create the table students:

create table students(id varchar(50) primary key, name varchar(20), age integer, country varchar(20));

Output:

root@yocto:~/microshift/raspberry-pi/influxdb# oc exec -it deployment/pg-deployment -- bash
Defaulted container "pg-deployment" out of: pg-deployment, volume-permissions (init)
root@pg-deployment-78cbc9cc88-dz75l:/# psql --host localhost --port 5432 --user postgresadmin --dbname postgresdb # test123 as password
psql (9.6.24)
Type "help" for help.

postgresdb=# create table students(id varchar(50) primary key, name varchar(20), age integer, country varchar(20));
CREATE TABLE
postgresdb=# \q
root@pg-deployment-78cbc9cc88-dz75l:/# exit
exit
command terminated with exit code 130

Now we build and push the image

cd
git clone https://github.com/thinkahead/python-postgres-writer-operator
cd ~/python-postgres-writer-operator
podman build -t docker.io/karve/postgres-writer:latest .
# podman login docker.io
podman push docker.io/karve/postgres-writer:latest

Run the Operator

oc apply -f kubernetes/
oc wait deployment postgres-writer-operator --for condition=Available --timeout=300s

Create the sample student resource

oc apply -f sample.yaml
oc get psw -A

Output:

root@yocto:~/python-postgres-writer-operator# oc apply -f sample.yaml
postgreswriter.demo.yash.com/sample-student created
root@yocto:~/python-postgres-writer-operator# oc get psw -A
NAMESPACE   NAME             AGE
default     sample-student   1s
root@yocto:~/microshift/raspberry-pi/influxdb# oc exec -it deployment/pg-deployment -- bash
Defaulted container "pg-deployment" out of: pg-deployment, volume-permissions (init)
root@pg-deployment-78cbc9cc88-dz75l:/# psql --host localhost --port 5432 --user postgresadmin --dbname postgresdb # test123 as password
psql (9.6.24)
Type "help" for help.

postgresdb=# select * from students;
           id           | name | age | country
------------------------+------+-----+---------
 default/sample-student | alex |  23 | canada
(1 row)

postgresdb=# \q
root@pg-deployment-78cbc9cc88-dz75l:/# exit
exit

Delete the sample student resource

oc delete -f sample.yaml
oc get psw -A

Output:

root@yocto:~/python-postgres-writer-operator# oc delete -f sample.yaml
postgreswriter.demo.yash.com "sample-student" deleted
root@yocto:~/python-postgres-writer-operator# oc get psw -A
No resources found
root@yocto:~/python-postgres-writer-operator# oc exec -it deployment/pg-deployment -- bash
Defaulted container "pg-deployment" out of: pg-deployment, volume-permissions (init)
root@pg-deployment-78cbc9cc88-dz75l:/# psql --host localhost --port 5432 --user postgresadmin --dbname postgresdb # test123 as password
psql (9.6.24)
Type "help" for help.

postgresdb=# select * from students;
 id | name | age | country
----+------+-----+---------
(0 rows)

postgresdb=# \q
root@pg-deployment-78cbc9cc88-dz75l:/# exit
exit

root@yocto:~/python-postgres-writer-operator# oc logs deployment/postgres-writer-operator -f
/usr/local/lib/python3.9/site-packages/kopf/_core/reactor/running.py:170: FutureWarning: Absence of either namespaces or cluster-wide flag will become an error soon. For now, switching to the cluster-wide mode for backward compatibility.
  warnings.warn("Absence of either namespaces or cluster-wide flag will become an error soon."
[2023-01-30 18:43:07,794] kopf._core.engines.a [INFO    ] Initial authentication has been initiated.
[2023-01-30 18:43:07,814] kopf.activities.auth [INFO    ] Activity 'login_via_client' succeeded.
[2023-01-30 18:43:07,818] kopf._core.engines.a [INFO    ] Initial authentication has finished.
[2023-01-30 18:45:31,329] kopf.objects         [INFO    ] [default/sample-student] Handler 'create_fn' succeeded.
[2023-01-30 18:45:31,334] kopf.objects         [INFO    ] [default/sample-student] Creation is processed: 1 succeeded; 0 failed.
[2023-01-30 18:48:22,317] kopf.objects         [INFO    ] [default/sample-student] Handler 'delete_fn' succeeded.
[2023-01-30 18:48:22,320] kopf.objects         [INFO    ] [default/sample-student] Deletion is processed: 1 succeeded; 0 failed.

Note that to avoid status progress errors when the Operator updates the resource status, we added “x-kubernetes-preserve-unknown-fields: true” to the Custom Resource Definition.
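
In the CRD schema, that setting sits under the openAPIV3Schema, roughly as in this fragment (assuming a structural apiextensions.k8s.io/v1 CRD):

    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true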

The logs show the sample student “default/sample-student” being inserted into and deleted from the database. When we are done, we can delete the Python Operator.

cd ~/python-postgres-writer-operator
oc delete -f kubernetes/

8. Running a Python Operator using kopf (MongoDB)

The Kubernetes Operator Pythonic Framework (kopf) is part of the Zalando-incubator GitHub repository. The project is well documented at https://kopf.readthedocs.io

We will deploy and use the MongoDB database using the image docker.io/arm64v8/mongo:4.4.18. Do not use the latest tag for the image: it will result in "WARNING: MongoDB 5.0+ requires ARMv8.2-A or higher, and your current system does not appear to implement any of the common features for that!" and fail to start. The Raspberry Pi 4 uses an ARM Cortex-A72, which is ARMv8-A.
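
You can confirm the core revision on the device itself; on the Raspberry Pi 4, the CPU part reads 0xd08, which is the Cortex-A72:

grep -m1 'CPU part' /proc/cpuinfo
# CPU part : 0xd08  (Cortex-A72)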

A new PersistentVolumeClaim mongodb will use the storageClassName kubevirt-hostpath-provisioner for the Persistent Volume. The mongodb-root-username uses the root user, with the mongodb-root-password set to a default of mongodb-password. Remember to update the selected-node in mongodb-pv.yaml.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/mongodb
oc project default
vi mongodb-pv.yaml # Update the node name in the annotation
oc apply -f .

Now we will build and install the Operator, and test it:

cd ~/microshift/raspberry-pi/python-mongodb-writer-operator
# Build the operator image
podman build -t docker.io/karve/mongodb-writer:latest .

# Optionally push the image to registry
podman login docker.io
podman push docker.io/karve/mongodb-writer:latest

# Install the Operator
oc apply -f kubernetes/

# Create sample entries
oc apply -f sample.yaml -f sample2.yaml

You can log in to MongoDB and check that the two students are added:

root@yocto:~/microshift/raspberry-pi/mongodb# oc exec -it statefulset/mongodb -- bash
1001@mongodb-0:/$ mongo admin --host mongodb.default.svc.cluster.local:27017 --authenticationDatabase admin -u root -p mongodb-password
> show databases;
admin   0.000GB
config  0.000GB
local   0.000GB
school  0.000GB
> db.school.find()
> use school;
switched to db school
> db.students.find()
{ "_id" : ObjectId("63d81c1d812b68ff5cbe5670"), "id" : "default/sample-student", "name" : "alex", "age" : 23, "country" : "canada" }
{ "_id" : ObjectId("63d81c1d812b68ff5cbe5671"), "id" : "default/sample-student2", "name" : "alex2", "age" : 24, "country" : "usa" }
> quit();
1001@mongodb-0:/$ exit
exit
# Modify
cat sample2.yaml | sed "s/age: 24/age: 26/g" | sed "s/country: .*/country: germany/g" | oc apply -f -

# Delete
oc delete -f sample.yaml -f sample2.yaml

The Operator logs show the samples being created, updated, and deleted:

root@yocto:~/microshift/raspberry-pi/python-mongodb-writer-operator# oc logs deployment/mongodb-writer-operator -f
/usr/local/lib/python3.9/site-packages/kopf/_core/reactor/running.py:170: FutureWarning: Absence of either namespaces or cluster-wide flag will become an error soon. For now, switching to the cluster-wide mode for backward compatibility.
  warnings.warn("Absence of either namespaces or cluster-wide flag will become an error soon."
[2023-01-30 19:35:39,424] kopf._core.engines.a [INFO    ] Initial authentication has been initiated.
[2023-01-30 19:35:39,440] kopf.activities.auth [INFO    ] Activity 'login_via_client' succeeded.
[2023-01-30 19:35:39,444] kopf._core.engines.a [INFO    ] Initial authentication has finished.
[2023-01-30 19:35:57,385] kopf.objects         [INFO    ] [default/sample-student] Handler 'create_fn' succeeded.
[2023-01-30 19:35:57,387] kopf.objects         [INFO    ] [default/sample-student] Creation is processed: 1 succeeded; 0 failed.
[2023-01-30 19:35:57,434] kopf.objects         [INFO    ] [default/sample-student2] Handler 'create_fn' succeeded.
[2023-01-30 19:35:57,437] kopf.objects         [INFO    ] [default/sample-student2] Creation is processed: 1 succeeded; 0 failed.
[2023-01-30 19:39:26,725] kopf.objects         [INFO    ] [default/sample-student2] Handler 'update_fn' succeeded.
[2023-01-30 19:39:26,728] kopf.objects         [INFO    ] [default/sample-student2] Updating is processed: 1 succeeded; 0 failed.
[2023-01-30 19:40:01,697] kopf.objects         [INFO    ] [default/sample-student] Handler 'delete_fn' succeeded.
[2023-01-30 19:40:01,700] kopf.objects         [INFO    ] [default/sample-student] Deletion is processed: 1 succeeded; 0 failed.
[2023-01-30 19:40:01,741] kopf.objects         [INFO    ] [default/sample-student2] Handler 'delete_fn' succeeded.
[2023-01-30 19:40:01,744] kopf.objects         [INFO    ] [default/sample-student2] Deletion is processed: 1 succeeded; 0 failed.

When done, we can delete the operator.

oc delete -f kubernetes/

9. Run a jupyter notebook sample for Chest X-Ray Pneumonia Binary Classification

The Dockerfile uses the arm64 Jupyter Notebook base image scipy-notebook. Since we do not have a TensorFlow arm64 image, we install TensorFlow as described at Qengineering. Further, we download the GitHub repo with the sample notebook for the X-ray classification in the initContainer. You will need to separately download and extract the images from the archive.zip from Kaggle.

mkdir /var/hpvolumes/xray
cd /var/hpvolumes/xray
# Download and copy the archive.zip from https://www.kaggle.com/datasets/paultimothymooney/chest-xray-pneumonia?resource=download
unzip archive.zip

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook
oc apply -f xray.yaml 

Check the routes:

root@yocto:~/microshift/raspberry-pi/tensorflow-notebook# oc get routes
NAME             HOST/PORT                              PATH   SERVICES       PORT   TERMINATION   WILDCARD
flask-route      flask-route-default.cluster.local             notebook-svc   5000                 None
notebook-route   notebook-route-default.cluster.local          notebook-svc   5001                 None

Add the IP address of the Raspberry Pi 4 device for notebook-route-default.cluster.local to /etc/hosts on your laptop and browse to http://notebook-route-default.cluster.local/tree?. Login with the default password mysecretpassword. Go to the work folder and run the xray-cnn.ipynb. You will need to update the notebook to point to the persistent volume where the images were extracted (/var/hpvolumes/xray on the Raspberry Pi 4).

print(len(os.listdir("../../chest_xray/train/NORMAL")))
print(len(os.listdir("../../chest_xray/train/PNEUMONIA")))
base_dir = "../../chest_xray/"

You will be able to fit the model and plot the training and validation accuracy and loss. Finally, we will see the images that show the intermediate representations for all keras layers in the model. When we are done working with the xray sample pod, we can delete it as follows:

oc delete -f xray.yaml

10. Run a jupyter notebook sample for credit card fraud detection with AI/ML

In this sample, we look at how credit card transactions are handled in a bank setting. Specifically, we analyze credit card fraud data and model the data using Jupyter notebooks. You can watch the Red Hat Developer workshop on credit card fraud detection with AI/ML.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook
oc apply -f fraud-detection.yaml 

Add the IP address of the Raspberry Pi 4 device for notebook-route-default.cluster.local to /etc/hosts on your laptop and browse to http://notebook-route-default.cluster.local/tree?. Login with the default password mysecretpassword. Go to the work folder and run the Exploratory Analysis notebook. This notebook requires a few changes. Instead of the pip install requirements.txt, change the cell to:

!pip install boto3 altair cloudpickle pyarrow

In the section “Transaction amount distribution”, change level_0 to level_1

alt.X("level_1", axis=alt.Axis(title='cumulative distribution'), scale=alt.Scale(type='linear')), 

Next, run the Featuring Engineering, then the Logistic Regression and finally the Creating Pipelines notebook to save the model.

Feature Extraction Pipeline

Copy the app folder to app2 and replace the pipeline.pkl with the one you built above. You can do this with the oc exec command to log in to the pod. Then run the Start Here notebook, next Run Flask (do not run the step for pip install requirements.txt), and finally the Test Flask notebook.
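
A sketch of those copy steps from the host; the deployment name notebook-deployment and the in-pod paths are hypothetical here, so check oc get deployments and the notebook layout for the real values:

# Hypothetical names and paths, for illustration only
oc exec deployment/notebook-deployment -- cp -r work/app work/app2
oc exec deployment/notebook-deployment -- cp work/pipeline.pkl work/app2/pipeline.pkl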

When we are done working with the fraud detection sample pod notebook, we can delete it as follows:

oc delete -f fraud-detection.yaml

Cleanup MicroShift

We can use the cleanup script available on GitHub to remove the pods and images. If you already cloned the microshift repo from GitHub, you have the script in the ~/microshift/hack directory.

#wget https://raw.githubusercontent.com/thinkahead/microshift/main/hack/cleanup.sh
cd ~/microshift/hack
./cleanup.sh

Conclusion

In this final Part 30, we saw how to run the MicroShift all-in-one container on the Raspberry Pi 4 with an image built using Yocto Langdale. We ran samples that used persistent volumes for PostgreSQL, the Sense HAT, and the USB camera. We saw an object detection sample that sent pictures and WebSocket messages to Node Red when a person was detected. We used .NET to drive a Raspberry Pi Sense HAT and ran sample Python Operators with kopf to connect to PostgreSQL and MongoDB. Finally, we ran a TensorFlow Chest X-Ray Pneumonia Binary Classification notebook in a pod.

What is pending with Yocto for the Raspberry Pi 4? I still need to enable wlan0, get MicroShift to run non-containerized, create VMs, and set up Kata Containers. Maybe I will get to these in the future if time permits. Separately, I was able to run RHEL 9 on the Raspberry Pi 4 with MicroShift, but was unable to get the Sense HAT with the devicetree to work on it. I was also able to run MicroShift on SLES 15, but could not get KubeVirt to work on SLES 15. I didn't get time to work with OpenBSD, FreeBSD, and EVE. If I haven't been able to write about some features, it's not for lack of trying. It has been a long journey working with MicroShift and multiple distros of Linux, overcoming the obstacles one at a time. Small steps and little victories have kept me going through this arduous and time-consuming activity. I have learned a lot and will move on to other hobbies.

What's new in MicroShift? Specifically, I have used up to the 4.8.0-microshift-2022-04-20-141053 branch of MicroShift in this blog series. There are multiple changes in the latest microshift branch, for example: projects removed, TopoLVM for local persistent volumes, ovn-kubernetes for overlay-based networking, a rebase to the latest OpenShift, and using image-builder to create images based on rpm-ostree. The samples I have used in my blogs should continue to work with minor changes.

I hope you have enjoyed this article and the series of blogs on MicroShift. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of Yocto to create customized images for the Raspberry Pi 4 with MicroShift running on ARM devices, and whether you would like to see something covered in more detail.

#MicroShift #OpenShift #containers #crio #Edge #node-red #raspberry-pi #yocto

