MicroShift – Part 8: Raspberry Pi 4 with balenaOS

By Alexei Karve posted Mon January 03, 2022 02:16 PM


MicroShift is a research project exploring how the OpenShift OKD Kubernetes distribution can be optimized for small-form-factor devices and edge computing. In Part 4, Part 5, and Part 6 of this series, we looked at multiple ways to run MicroShift on a Raspberry Pi 4 with Raspberry Pi OS, CentOS 8 Stream, and Ubuntu 20.04, respectively. In this Part 8, we will deploy MicroShift on the Raspberry Pi 4 with balenaOS, using the containerized all-in-one approach with balenaEngine. We will run multiple samples: an Object Detection TensorFlow Lite sample, an InfluxDB/Telegraf/Grafana sample with a persistent volume, the Metrics Server, and Node Red on the ARM device with a dashboard showing gauges for SenseHat sensor data.

Devices in the balena ecosystem run balenaOS, a bare-bones, Yocto Linux-based host OS that comes packaged with balenaEngine, a Docker-compatible container engine. BalenaOS provides an easy means of spinning up containers on the Raspberry Pi and is available for other boards as well. It is lightweight and runs headless. The host OS is responsible for starting the device supervisor, balena's agent on the device, as well as the containerized services. Within each service's container we can specify a base OS, which can come from any compatible Docker base image. The base OS shares a kernel with the host OS but otherwise works independently. Containers can be configured to run privileged and access hardware directly. The balena device supervisor runs in its own container, which allows it to continue running and pulling new code even if the application crashes. The development images are recommended while getting started with balenaOS and building an application; they allow passwordless SSH access into balenaOS on port 22222 as the root user.

We will use “Local mode”, the development mode for balena. It allows building code on a single development device in the local network without going through the balenaCloud build service and deployment pipeline. It uses the Docker daemon on the device to build container images, and the device supervisor then starts the containers the same way as if they had been deployed via the cloud.

Setting up the Raspberry Pi 4 with balenaOS (64 bit)

We will download the balenaOS image onto the MacBook Pro and write it to a microSDXC card:

  1. Download the “Raspberry Pi 4 (using 64bit OS)” development image and unzip it
  2. Install the balena CLI
  3. Configure the image with the device hostname “mydevice” and the WiFi network SSID and key
    sudo balena local configure ~/Downloads/balena-cloud-raspberrypi4-64-2.88.4-dev-v12.11.0.img
  4. Flash the configured image to the microSDXC card using balenaEtcher
  5. Insert the microSDXC card into the Raspberry Pi 4 and power it on
  6. The development image allows unauthenticated root access. For a production balenaOS image, an SSH key must have been previously added to the sshKeys section of the device's config.json file. Login as root and check the release.
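As a sketch of what that sshKeys section looks like, the snippet below writes a minimal sample file rather than touching a real image; the key value is a placeholder, and this assumes the os.sshKeys layout used by balenaOS config.json:

```shell
# Hypothetical sketch: the sshKeys section of a production config.json.
# The key below is a placeholder, not a real key.
cat > config.sample.json <<'EOF'
{
  "hostname": "mydevice",
  "os": {
    "sshKeys": [
      "ssh-ed25519 AAAAC3NzaC1... user@laptop"
    ]
  }
}
EOF
grep -c "ssh-ed25519" config.sample.json
```

On a real device, the config.json lives in the boot partition of the flashed image.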

Before we can get MicroShift running on the device in local mode, we first need to find the device. We can find its short UUID and local IP address from the device dashboard or by scanning the network.

sudo balena scan


Scanning for local balenaOS devices... Reporting scan results
  host:          mydevice.local
  osVariant:     development
    Containers:        1
    ContainersRunning: 1
    ContainersPaused:  0
    ContainersStopped: 0
    Images:            3
    Driver:            overlay2
    SystemTime:        2022-01-02T10:11:10.626312119Z
    KernelVersion:     5.10.78-v8
    OperatingSystem:   balenaOS 2.88.4
    Architecture:      aarch64
    Version:    19.03.30
    ApiVersion: 1.40

The output includes device information, collected through balenaEngine over the Docker daemon TCP port 2375, for devices running a development image of balenaOS. We can ssh to the device with

balena ssh mydevice.local


ssh -p 22222 root@mydevice.local

Check the os-release and containers

root@mydevice:~# cat /etc/os-release
PRETTY_NAME="balenaOS 2.88.4"
root@mydevice:~# balena ps -a
CONTAINER ID   IMAGE                                COMMAND                  CREATED         STATUS                    PORTS   NAMES
5cb99eafe65a   balena/aarch64-supervisor:v12.11.0   "/usr/src/app/entry.…"   15 months ago   Up 15 months (healthy)            balena_supervisor

MicroShift Containerized All-in-one

The MicroShift binary and the CRI-O service run within a container, and data is stored in a Docker volume, microshift-data. We can build the MicroShift image locally or use the prebuilt image. In this section, we run with the prebuilt all-in-one image; in the next section, we build and run the image locally on the Raspberry Pi 4. We expose ports 8000 (port 80 may be used instead) and 6443 in the command below so that we can access the MicroShift container from our laptop.

# Create the directory for persistent volumes and clean it
mkdir -p /mnt/data/hpvolumes; rm -rf /mnt/data/hpvolumes/*

balena-engine run -d --rm --name microshift -h --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /mnt/data/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8000:80

Check the named volume

balena-engine volume inspect microshift-data
ls /mnt/data/docker/volumes/

Login to the microshift container and check the pods

balena-engine exec -it microshift bash
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl images;crictl pods"

The following patch may be required if the dns-default pod in the openshift-dns namespace keeps restarting. You may also need to increase the livenessProbe and readinessProbe timings.

oc patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","resources": {"requests": {"cpu": "80m","memory": "90Mi"}}}]}}}}'
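The probe-timing change mentioned above can be expressed as a similar strategic-merge patch passed to `oc patch daemonset/dns-default -n openshift-dns -p`; the values here are illustrative, not taken from the article:

```json
{"spec": {"template": {"spec": {"containers": [{"name": "dns", "livenessProbe": {"timeoutSeconds": 5, "periodSeconds": 20}, "readinessProbe": {"timeoutSeconds": 5, "periodSeconds": 20}}]}}}}
```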

You may also need to patch the service-ca deployment, or delete the service-ca pod if it keeps restarting and goes into CrashLoopBackOff status.

oc patch deployments/service-ca -n openshift-service-ca -p '{"spec": {"template": {"spec": {"containers": [{"name": "service-ca-controller","args": ["-v=4"]}]}}}}'


root@mydevice:~# balena-engine run -d --rm --name microshift -h --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /mnt/data/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8000:80
Unable to find image '' locally
4.8.0-0.microshift-2021-12-25-175217-linux-nft-arm64: Pulling from microshift/microshift-aio
23a125fd07d1: Pull complete
866b611e1005: Pull complete
294fb0d6f4a4: Pull complete
f81f802255fa: Pull complete
354ab74b08a9: Pull complete
86dc57bec04c: Pull complete
c9785cbe991b: Pull complete
fd74ea4d864b: Pull complete
bd201d8b6610: Pull complete
5035d4aaf7b8: Pull complete
cbead360b67b: Pull complete
79a7404b8471: Pull complete
28edd075a951: Pull complete
2e29a7a7f89f: Pull complete
Total:  [==================================================>]  428.4MB/428.4MB
Digest: sha256:5feffeb9f2ed4000612a2ec92d667be837ff8ee6827a7ff4b94d24aff6ecd0b1
Status: Downloaded newer image for
root@mydevice:~# ls /mnt/data/docker/volumes/ # Check the named volume
metadata.db  microshift-data
root@mydevice:~# balena-engine volume inspect microshift-data
[
    {
        "CreatedAt": "2022-01-02T00:32:50Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/microshift-data/_data",
        "Name": "microshift-data",
        "Options": null,
        "Scope": "local"
    }
]
root@mydevice:~# balena-engine exec -it microshift bash
[root@microshift /]# export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
[root@microshift /]# watch "oc get nodes;oc get pods -A;crictl images;crictl pods"

NAME   STATUS   ROLES    AGE     VERSION
       Ready    <none>   2m47s   v1.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-sdrdn                 1/1     Running   0          2m46s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-rshh6   1/1     Running   0          2m36s
openshift-dns                   dns-default-4kf9f                     2/2     Running   0          2m46s
openshift-dns                   node-resolver-2cgqs                   1/1     Running   0          2m46s
openshift-ingress               router-default-85bcfdd948-526r5       1/1     Running   0          2m50s
openshift-service-ca            service-ca-7764c85869-g59dj           1/1     Running   0          2m51s
IMAGE   TAG                             IMAGE ID        SIZE
        3.5                             f7ff3c4042631   491kB
        4.8.0-0.okd-2021-10-10-030117   33a276ba2a973   205MB
        4.8.0-0.okd-2021-10-10-030117   67a95c8f15902   265MB
        4.8.0-0.okd-2021-10-10-030117   0e66d6f50c694   8.78MB
        4.8.0-0.okd-2021-10-10-030117   85fc911ceba5a   68.1MB
        4.8.0-0.okd-2021-10-10-030117   37292c44812e7   225MB
        4.8.0-0.okd-2021-10-10-030117   fdef3dc1264ad   39.3MB
        4.8.0-0.okd-2021-10-10-030117   7f149e453e908   41.5MB
        4.8.0-0.okd-2021-10-10-030117   0d3ab44356260   276MB
POD ID              CREATED              STATE NAME                                  NAMESPACE                       ATTEMPT  RUNTIME
35f3a69f15881       24 seconds ago       Ready dns-default-4kf9f                     openshift-dns                   0        (default)
d59fae7b87b62       About a minute ago   Ready router-default-85bcfdd948-526r5       openshift-ingress               0        (default)
330ccb548e95d       2 minutes ago        Ready service-ca-7764c85869-g59dj           openshift-service-ca            0        (default)
9b4f2ecf076c2       2 minutes ago        Ready kubevirt-hostpath-provisioner-rshh6   kubevirt-hostpath-provisioner   0        (default)
111713771e154       2 minutes ago        Ready node-resolver-2cgqs                   openshift-dns                   0        (default)
25313932d62d1       2 minutes ago        Ready kube-flannel-ds-sdrdn                 kube-system                     0        (default)

[root@microshift /]# exit
root@mydevice:~# exit
Connection to mydevice.local closed.

Building the All-in-one image locally

Log in to the balenaCloud dashboard and create a fleet named microshift.

On your MacBook Pro, run the following:

cd ~
git clone
cd microshift/hack/all-in-one/balenaos
cp ../../../packaging/images/microshift-aio/crio-bridge.conf .
cp ../../../packaging/images/microshift-aio/kubelet-cgroups.conf .
cp ../../../packaging/images/microshift-aio/unit .

Build the balenaos_main image on the Raspberry Pi 4 and push the release to balenaCloud, using the Docker daemon TCP port 2375 exposed by balena development devices.

balena deploy microshift -h mydevice.local -p 2375 --build

The Dockerfile uses a builder image to build the microshift binary. It then copies the microshift binary and packaging files, and downloads oc and kubectl, into the base image used for the run stage. Finally, it installs cri-o and its dependencies within the image.
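The multi-stage structure is roughly as follows; the builder and runtime image names (and the binary path) are placeholders, since the exact images are not shown here:

```dockerfile
# Hypothetical sketch of the all-in-one multi-stage build.
FROM registry.example.com/builder-image AS builder
# ... build the microshift binary ...

FROM registry.example.com/runtime-base
COPY --from=builder /microshift /usr/bin/microshift
# copy packaging files, download oc and kubectl,
# then install cri-o and its dependencies
```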

Run microshift all-in-one using the locally built image balenaos_main

balena ssh mydevice.local
# Cleanup MicroShift volume
balena-engine volume rm microshift-data
# Cleanup the persistent volumes
mkdir -p /mnt/data/hpvolumes; rm -rf /mnt/data/hpvolumes/*
balena-engine run -d --rm --name microshift -h --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /mnt/data/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8000:80 balenaos_main
exit # exit from the Raspberry Pi, back to the MacBook Pro

Setting up KUBECONFIG on your MacBook Pro to connect to MicroShift

We copy the KUBECONFIG from the Raspberry Pi 4 to the MacBook Pro and update it to point to the IP address/hostname of the Raspberry Pi 4. We need to add the Raspberry Pi 4’s IP address/hostname to the kube-apiserver certificate as shown in the next section, or use the insecure-skip-tls-verify flag to avoid the certificate error “Unable to connect to the server: x509: certificate is valid for …, not …” as shown below:

mkdir ~/balena-microshift

scp -P 22222 root@mydevice.local:/var/lib/docker/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig ~/balena-microshift/.
export KUBECONFIG=~/balena-microshift/kubeconfig
ping -c1 mydevice.local # Get the ipaddress and replace in line below
sed -i "s||$ipaddress|g" $KUBECONFIG
watch "oc --insecure-skip-tls-verify get nodes;oc --insecure-skip-tls-verify get pods -A"
alias oc="oc --insecure-skip-tls-verify"

Alternatively, we can add the IP address of the Raspberry Pi 4 to /etc/hosts on your Mac and replace the server address with the hostname of the container in $KUBECONFIG. For example, add the following to /etc/hosts on your MacBook (replace with your Raspberry Pi IP address)

sed -i "s||$ipaddress|g" $KUBECONFIG
watch "oc get nodes;oc get pods -A"
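This style of sed rewrite can be dry-run locally on a throwaway kubeconfig. The address and cluster name below are placeholders; note that BSD sed on macOS requires an argument to -i, so `-i.bak` is used here since it works on both GNU and BSD sed:

```shell
# Build a stand-in kubeconfig and rewrite its server address,
# as the sed commands above do on the real one.
cat > kubeconfig.sample <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: microshift
EOF
ipaddress=192.168.1.100   # placeholder: the Pi's address from `ping -c1 mydevice.local`
sed -i.bak "s|127.0.0.1|$ipaddress|g" kubeconfig.sample
grep "server:" kubeconfig.sample
```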

Apply the patches if not already applied

oc patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","resources": {"requests": {"cpu": "80m","memory": "90Mi"}}}]}}}}'
oc patch deployments/service-ca -n openshift-service-ca -p '{"spec": {"template": {"spec": {"containers": [{"name": "service-ca-controller","args": ["-v=4"]}]}}}}'

Update the kube-api certificate

When MicroShift starts, the kube-apiserver certificate is generated with a set of “X509v3 Subject Alternative Names”. We need to add the Raspberry Pi 4’s IP address to the kube-apiserver certificate. We can do that by logging into the container and generating a new certificate:

root@mydevice:~# balena exec -it microshift bash
[root@microshift /]# cd /var/lib/microshift/certs/kube-apiserver/secrets/service-network-serving-certkey/
[root@microshift service-network-serving-certkey]# ls
tls.crt  tls.key
[root@microshift service-network-serving-certkey]# dnf install openssl
[root@microshift service-network-serving-certkey]# openssl x509 -in tls.crt -text | grep "X509v3 Subject Alternative Name:" -A1
            X509v3 Subject Alternative Name:
                DNS:kube-apiserver,, DNS:kubernetes.default.svc, DNS:kubernetes.default, DNS:kubernetes, DNS:localhost, DNS:, DNS:, DNS:, IP Address:, IP Address:, IP Address:

Add the DNS names and IP addresses from above into api.conf. Also add either or both of your wlan0 and eth0 IP addresses to the "IP." entries, and the hostname to the "DNS." entries.

mkdir backup;mv tls.* backup

cat <<EOF | tee api.conf
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = localhost
DNS.5 = kube-apiserver
DNS.6 =
IP.1 =
IP.2 =
IP.3 =
IP.4 =
IP.5 =
EOF
openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -subj "/CN=kube-apiserver" -out tls.csr -config api.conf
openssl x509 -req -in tls.csr -CA /var/lib/microshift/certs/ca-bundle/ca-bundle.crt -CAkey /var/lib/microshift/certs/ca-bundle/ca-bundle.key -CAcreateserial -out tls.crt -extensions v3_req -extfile api.conf -days 1000
rm -f tls.csr
openssl x509 -in tls.crt -text # Check that the new IP address is added
exit # From microshift container
exit # From Raspberry Pi back to Laptop
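The certificate steps can be dry-run locally on the laptop before touching the real tls.key/tls.crt. The SAN entries below are placeholders, and a self-signed certificate stands in for the CA-signed one:

```shell
# Generate a key and a SAN-bearing CSR, self-sign it, and verify the SAN.
cat > api-test.conf <<'EOF'
[ req ]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = kubernetes
IP.1 = 192.168.1.100
EOF
openssl genrsa -out test.key 2048
openssl req -new -key test.key -subj "/CN=kube-apiserver" -out test.csr -config api-test.conf
openssl x509 -req -in test.csr -signkey test.key -extensions v3_req -extfile api-test.conf -days 1 -out test.crt
openssl x509 -in test.crt -noout -text | grep -A1 "Subject Alternative Name"
```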

scp -P 22222 root@mydevice.local:/var/lib/docker/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig ~/balena-microshift/.

export KUBECONFIG=~/balena-microshift/kubeconfig
sed -i "s|||g" $KUBECONFIG
watch "oc get nodes;oc get pods -A"

Samples to run on MicroShift

We will run a few samples that will show the use of persistent volume, SenseHat and the USB camera.

1. InfluxDB/Telegraf/Grafana

SenseHat humidity, temperature, and pressure measurements in Grafana

The source code for this InfluxDB sample is available on GitHub.

cd ~
git clone
cd microshift/raspberry-pi/influxdb

If you want to run all the steps with a single command, just execute the provided script.


Alternatively, we can run the steps separately. Create a new project for this sample.

oc new-project influxdb

Install InfluxDB

oc create configmap influxdb-config --from-file=influxdb.conf
oc get configmap influxdb-config -o yaml
oc apply -f influxdb-secrets.yaml
oc describe secret influxdb-secrets
# Create the directory for influxdb persistent volume
ssh -p 22222 root@mydevice.local mkdir /mnt/data/hpvolumes/influxdb
oc apply -f influxdb-pv.yaml
oc apply -f influxdb-data.yaml
oc apply -f influxdb-deployment.yaml
oc get -f influxdb-deployment.yaml # check that the Deployment is created and ready
oc logs deployment/influxdb-deployment -f
oc apply -f influxdb-service.yaml

oc rsh deployment/influxdb-deployment # connect to InfluxDB and display the databases


My-MBP:influxdb karve$ oc rsh deployment/influxdb-deployment
# influx --username admin --password admin
Connected to http://localhost:8086 version 1.7.4
InfluxDB shell version: 1.7.4
Enter an InfluxQL query
> show databases
name: databases
> exit
# exit

We can create and push the “measure:latest” image using the Dockerfile. Then, install the pod for SenseHat measurements

oc apply -f measure-deployment.yaml

Install Telegraf and check the measurements for the telegraf database in InfluxDB

oc apply -f telegraf-config.yaml 
oc apply -f telegraf-secrets.yaml 
oc apply -f telegraf-deployment.yaml


My-MBP:influxdb karve$ oc rsh deployment/influxdb-deployment
# influx --username admin --password admin
Connected to http://localhost:8086 version 1.7.4
InfluxDB shell version: 1.7.4
Enter an InfluxQL query
> show databases
name: databases
> use telegraf
Using database telegraf
> show measurements
name: measurements
> select * from cpu;
> exit
# exit

Install Grafana

cd grafana
ssh -p 22222 root@mydevice.local mkdir /mnt/data/hpvolumes/grafana
scp -P 22222 -r config/* root@mydevice.local:/mnt/data/hpvolumes/grafana/.
oc apply -f grafana-pv.yaml
oc apply -f grafana-data.yaml
oc apply -f grafana-deployment.yaml
oc apply -f grafana-service.yaml
oc expose svc grafana-service # Create the route


My-MBP:influxdb karve$ ssh -p 22222 root@mydevice.local mkdir /mnt/data/hpvolumes/grafana
My-MBP:influxdb karve$ scp -P 22222 -r grafana/config/* root@mydevice.local:/mnt/data/hpvolumes/grafana/.
analysis-server.json                                                                                                                                   100%   78KB   1.8MB/s   00:00
grafana.ini                                                                                                                                            100%   15KB   1.0MB/s   00:00
influxdb.yaml                                                                                                                                          100%  496   110.4KB/s   00:00
dashboards.yaml                                                                                                                                        100%  236    58.0KB/s   00:00
My-MBP:influxdb karve$ ssh -p 22222 root@mydevice.local ls -las /mnt/data/hpvolumes/grafana
total 32
 4 drwxr-xr-x 4 root root  4096 Jan  2 00:51 .
 4 drwxr-xr-x 3 root root  4096 Jan  2 00:51 ..
 4 drwxr-xr-x 2 root root  4096 Jan  2 00:51 dashboards
16 -rw-r--r-- 1 root root 15072 Jan  2 00:51 grafana.ini
 4 drwxr-xr-x 4 root root  4096 Jan  2 00:51 provisioning

Add "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and log in to http://grafana-service-influxdb.cluster.local:8000/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift, and the Balena Sense dashboard to show the temperature, pressure, and humidity from the SenseHat.
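The hosts entry can be staged in a scratch file first and then appended to the real /etc/hosts with sudo; the address below is a placeholder for your Raspberry Pi's IP:

```shell
# Stage the hosts entry; 192.168.1.100 stands in for the Pi's address.
ip=192.168.1.100
echo "$ip grafana-service-influxdb.cluster.local" >> hosts.sample
cat hosts.sample
```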

Finally, after you are done working with this sample, you can run the cleanup script, or delete the grafana, telegraf, and influxdb resources separately.

oc delete route grafana-service
oc delete -f grafana-data.yaml -f grafana-deployment.yaml -f grafana-pv.yaml -f grafana-service.yaml 
cd ..
oc delete -f telegraf-config.yaml -f telegraf-secrets.yaml -f telegraf-deployment.yaml -f measure-deployment.yaml
oc delete -f influxdb-data.yaml -f influxdb-pv.yaml -f influxdb-service.yaml -f influxdb-deployment.yaml -f influxdb-secrets.yaml
oc delete project influxdb

2. Sample with Sense Hat and USB camera

Let’s install Node Red on IBM Cloud. We will use Node Red to show pictures and chat messages sent from the Raspberry Pi 4. Alternatively, we can use the Node Red that we deployed as an application in MicroShift on the MacBook Pro in VirtualBox in Part 1.

  1. Create an IBM Cloud free tier account at and login to Console (top right).
  2. Create an API Key and save it, Manage->Access->IAM->API Key->Create an IBM Cloud API Key
  3. Click on Catalog and Search for "Node-Red App", select it and click on "Get Started"
  4. Give a unique App name, for example xxxxx-node-red and select the region nearest to you
  5. Select the Pricing Plan Lite, if you already have an existing instance of Cloudant, you may select it in Pricing Plan
  6. Click Create
  7. Under Deployment Automation -> Configure Continuous Delivery, click on "Deploy your app"
  8. Select the deployment target Cloud Foundry that provides a Free-Tier of 256 MB cost-free or Code Engine. The latter has monthly limits and takes more time to deploy. [ Note: Cloud Foundry is deprecated, use the IBM Cloud Code Engine. Any IBM Cloud Foundry application runtime instances running IBM Cloud Foundry applications will be permanently disabled and deprovisioned ]
  9. Enter the IBM Cloud API Key from Step 2, or click on "New" to create one
  10. The rest of the fields Region, Organization, Space will automatically get filled up. Use the default 256MB Memory and click "Next"
  11. In "Configure the DevOps toolchain", click Create
  12. Wait for 10 minutes for the Node Red instance to start
  13. Click on the "Visit App URL"
  14. On the Node Red page, create a new userid and password
  15. In Manage Palette, install the node-red-contrib-image-tools, node-red-contrib-image-output, and node-red-node-base64
  16. Import the Chat flow and the Picture (Image) display flow. On the Chat flow, you will need to edit the template node line 35 to use wss:// (on IBM Cloud) instead of ws:// (on your Laptop)
  17. On another browser tab, start the (Replace mynodered with your IBM Cloud Node Red URL)
  18. On the Image flow, click on the square box to the right of image preview or viewer to Deactivate and Activate the Node. You will be able to see the picture when you Activate the Node and run samples below

We will reuse the karve/sensehat image we built for arm64 in previous parts.

cd ~
git clone
cd microshift/raspberry-pi/sensehat

# Update the URL to your node red instance
sed -i "s|||" sensehat.yaml

The application, running as a pod in MicroShift, will take pictures using the USB camera and send the pictures and WebSocket chat messages to Node Red.

oc apply -f sensehat.yaml

When we are done, we can delete the deployment

oc delete -f sensehat.yaml

3. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 2.

git clone
cd ~/microshift/raspberry-pi/object-detection

We will reuse the karve/object-detection-raspberrypi4 image we built for arm64 in previous parts.

sed -i "s|||" *.yaml
oc apply -f object-detection.yaml

The application will take pictures. We will see pictures being sent to Node Red when a person is detected. After we are done testing, we can delete the deployment.

oc delete -f object-detection.yaml

4. Install Metrics Server

This will enable us to run the “kubectl top” and “oc adm top” commands.

wget -O metrics-server-components.yaml
oc apply -f metrics-server-components.yaml

# Wait for the metrics-server to start in the kube-system namespace
oc get deployment metrics-server -n kube-system
oc get events -n kube-system
# Wait for a couple of minutes for metrics to be collected
oc get --raw /apis/
oc get --raw /apis/
brew install jq
oc get --raw /api/v1/nodes/$(oc get nodes -o json | jq -r '.items[0].metadata.name')/proxy/stats/summary

watch "oc --insecure-skip-tls-verify adm top nodes;oc --insecure-skip-tls-verify adm top pods -A"


NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
       344m         8%     1039Mi          13%
NAMESPACE                       NAME                                  CPU(cores)   MEMORY(bytes)
kube-system                     kube-flannel-ds-tpphl                 8m           10Mi
kube-system                     metrics-server-dbf765b9b-pv9x7        10m          14Mi
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-qfxj7   3m           6Mi
openshift-dns                   dns-default-fk88p                     6m           18Mi
openshift-ingress               router-default-85bcfdd948-zl966       2m           22Mi
openshift-service-ca            service-ca-5f8d7bdd7d-rr9bn           12m          19Mi
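Extracting the first node's name from `oc get nodes -o json` output, as done for the stats/summary call above, can be sanity-checked locally with jq on canned data; the node name here is a placeholder:

```shell
# Canned output standing in for `oc get nodes -o json`.
cat > nodes.json <<'EOF'
{"items": [{"metadata": {"name": "microshift"}}]}
EOF
jq -r '.items[0].metadata.name' nodes.json
```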

5. Node Red live data dashboard with SenseHat sensor charts

In this sample, we will install Node Red on the ARM device as a deployment within MicroShift, add the dashboard and view the gauges for temperature/pressure/humidity data from SenseHat on the dashboard.

# Create the directory for nodered data
ssh -p 22222 root@mydevice.local mkdir /mnt/data/hpvolumes/nodered

cd ~
git clone
cd ~/microshift/raspberry-pi/nodered

Build and push the arm64v8 image “karve/nodered:arm64”

cd docker-custom/
docker build -t karve/nodered:arm64 .
docker push karve/nodered:arm64
cd ..

Deploy Node Red with persistent volume for /data within the node red container

oc apply -f noderedpv.yaml -f noderedpvc.yaml -f nodered.yaml
oc expose svc nodered-svc
oc get routes nodered-svc

Add the IP address of the Raspberry Pi 4 for nodered-svc-default.cluster.local to /etc/hosts on your laptop and browse to http://nodered-svc-default.cluster.local:8000/

Under “Manage Palette - Install”, install the modules required for the dashboard and flows: node-red-dashboard, node-red-node-smooth, and node-red-node-pi-sense-hat.

Copy Flow 1 or Flow 2 from the nodered sample, import it into Node Red under “Import Nodes”, and Deploy.

Double click the Sense HAT input node and make sure that all the events are checked. Select the Dashboard.

Click on the outward arrow in the tabs to view the sensor charts. If you selected the Flow 1, you could click on the Input for the Timestamp under “Dot Matrix” to see the “Alarm” message scroll on the SenseHat LED. For Flow 2, you can see the state of the Joystick Up, Down, Left, Right or Pressed.

Finally, delete this Node Red sample.

oc delete routes nodered-svc
oc delete -f noderedpv.yaml -f noderedpvc.yaml -f nodered.yaml

Cleanup MicroShift

Login to the device and stop the microshift container. The container will be deleted automatically because it was started with the --rm flag.

balena ssh mydevice.local
balena stop microshift

# Cleanup the MicroShift data volume
balena-engine volume rm microshift-data
# Cleanup the persistent volumes
rm -rf /mnt/data/hpvolumes/*


In this Part 8, we saw how to build and run containerized all-in-one MicroShift on the Raspberry Pi 4 with balenaOS. We ran samples that used a persistent volume, the SenseHat, and the USB camera. We ran an object detection sample that sent pictures and WebSocket messages to Node Red on IBM Cloud. In the last sample, we installed Node Red on MicroShift and viewed the SenseHat sensor gauges in the Node Red dashboard. In Part 9, we will look at virtualization on the Raspberry Pi 4 with MicroShift.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.