MicroShift – Part 6: Raspberry Pi 4 with Ubuntu 20.04

By Alexei Karve posted Sun December 19, 2021 11:28 AM


MicroShift on Raspberry Pi 4 with Ubuntu 20.04 (64 bit) 

Introduction

MicroShift is a research project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. In the previous Part 4 and Part 5 of this series, we looked at multiple ways to run MicroShift on a Raspberry Pi 4 with the Raspberry Pi OS (64 bit) and CentOS 8 Linux Stream, respectively. In this Part 6, we continue with the didactic learning to set up and deploy MicroShift on the Raspberry Pi 4 with Ubuntu 20.04.3, available as a 64-bit version. We will initially install MicroShift directly on the Raspberry Pi 4, and then install it using the containerized approach as well as the containerized all-in-one approach within Docker. With each of these approaches, we will run multiple samples: the SenseHat and Object Detection TensorFlow Lite sample, the InfluxDB/Telegraf/Grafana sample, and the Metrics Server.

Setting up the Raspberry Pi 4 with Ubuntu 20.04 (64 bit)

We will download the Ubuntu OS image and write it to a MicroSDXC card

  1. Download the image from https://ubuntu.com/download/raspberry-pi/thank-you?version=20.04.3&architecture=server-arm64+raspi
  2. Write it to the MicroSDXC card using balenaEtcher or the Raspberry Pi Imager
  3. Have a keyboard and monitor connected to the Raspberry Pi 4
  4. Insert the MicroSDXC card into the Raspberry Pi 4 and power on
  5. Login with ubuntu/ubuntu. Change the password by entering the current password ubuntu and then the new password twice

Now we set up the wifi using the command line on the Raspberry Pi 4

ls /sys/class/net # Identify the wireless network interface name wlan0
ls /etc/netplan/
sudo vi /etc/netplan/50-cloud-init.yaml # add the following section

   wifis:
        wlan0:
            optional: true
            access-points:
                "SSID-NAME-HERE":
                    password: "PASSWORD-HERE"
            dhcp4: true

sudo netplan apply
#sudo netplan --debug apply # to debug if you run into problems
ip a # Get ipaddress
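For reference, after this edit the complete /etc/netplan/50-cloud-init.yaml may look like the sketch below; the ethernets stanza is the one generated by cloud-init on the stock image, so your defaults may differ:

network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
            optional: true
    wifis:
        wlan0:
            optional: true
            access-points:
                "SSID-NAME-HERE":
                    password: "PASSWORD-HERE"
            dhcp4: true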

Get the wifi ip address from above and ssh to the Raspberry Pi 4 remotely from your laptop using the user id ubuntu

ssh ubuntu@$ipaddress
sudo su -

Check the release

root@ubuntu:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Wait until the unattended-upgrades process completes

watch "ps aux | grep unatt | sort +3 -rn"
root@ubuntu:~# ps -ef | grep unatt
root        1778       1  0 14:46 ?        00:00:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root        5465    5438 99 15:13 ?        00:00:56 /usr/bin/python3 /usr/bin/unattended-upgrade

Control groups (cgroups) are a Linux kernel feature that allows allocating resources among user-defined groups of processes on a system by organizing the processes into hierarchically ordered groups. The hierarchy (control groups tree) is defined by giving structure to the cgroups virtual file system, mounted by default on the /sys/fs/cgroup/ directory, by creating and removing sub-directories.
Update the kernel parameters: concatenate the following onto the end of the existing line (do not add a new line) in /boot/firmware/cmdline.txt

 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
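If you prefer not to edit the file by hand, a one-line sed can append the parameters; this is a sketch that assumes the parameters are not already present in the file:

sed -i 's/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt
cat /boot/firmware/cmdline.txt # verify the parameters were appended and the file is still a single line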

Install the rest of the updates and dependencies for SenseHat

apt-get upgrade -y
apt install -y python3 python3-dev python3-pip python3-venv  \
                   build-essential autoconf libtool          \
                   pkg-config cmake libssl-dev               \
                   i2c-tools openssl libcurl4-openssl-dev

The Sense HAT is an add-on board for the Raspberry Pi. It has an 8×8 RGB LED matrix and a five-button joystick, and includes the following sensors: Inertial Measurement Unit (Accelerometer, Gyroscope, Magnetometer), Temperature, Barometric pressure, Humidity. If you have the Sense HAT attached, test it with i2cdetect. The Sense HAT may not work initially: as seen below, addresses 1c and 5c show UU, which we need to fix. You may get "OSError: XXXX Init Failed" or zero readings from the sensors.

root@ubuntu:~# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- UU -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

0x5c: LPS25H Pressure
0x1c: LSM9DS1 9-axis iNEMO inertial module (IMU): 3D magnetometer, 3D accelerometer, 3D gyroscope
0x5f: HTS221 Humidity and Temperature
0x46: LED2472G 24-Channels LED driver with LED error detection and gain control
0x6a: LSM9DS1 Accelerometer Gyro Magnetometer

Add the i2c-dev line to /etc/modules:

cat << EOF >> /etc/modules
i2c-dev
EOF
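The /etc/modules entry takes effect on the next boot; to load the module immediately, you can also run:

modprobe i2c-dev
lsmod | grep i2c_dev # verify the module is loaded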

Create the file /etc/udev/rules.d/99-i2c.rules

cat << EOF >> /etc/udev/rules.d/99-i2c.rules
KERNEL=="i2c-[0-7]",MODE="0666"
EOF

The Raspberry Pi build of Ubuntu Server comes with the Industrial I/O modules preloaded. We get initialization errors on some of the sensors because the Industrial I/O modules grab on to the i2c sensors on the Sense HAT and refuse to let them go or allow them to be read correctly. Check this with “lsmod | grep st_”.

root@ubuntu:~# lsmod | grep st_
st_pressure_spi        16384  0
st_magn_spi            16384  0
st_sensors_spi         16384  2 st_pressure_spi,st_magn_spi
st_pressure_i2c        16384  0
st_magn_i2c            16384  0
st_pressure            20480  2 st_pressure_i2c,st_pressure_spi
st_magn                20480  2 st_magn_i2c,st_magn_spi
st_sensors_i2c         16384  2 st_pressure_i2c,st_magn_i2c
st_sensors             28672  6 st_pressure,st_pressure_i2c,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi
industrialio_triggered_buffer    16384  2 st_pressure,st_magn
industrialio           98304  9 st_pressure,industrialio_triggered_buffer,st_sensors,st_pressure_i2c,kfifo_buf,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi

We need to blacklist the Industrial I/O modules

cat << EOF > /etc/modprobe.d/blacklist-industrialio.conf
blacklist st_magn_spi
blacklist st_pressure_spi
blacklist st_sensors_spi
blacklist st_pressure_i2c
blacklist st_magn_i2c
blacklist st_pressure
blacklist st_magn
blacklist st_sensors_i2c
blacklist st_sensors
blacklist industrialio_triggered_buffer
blacklist industrialio
EOF

reboot

Log back in to the Raspberry Pi 4 and check the Sense HAT

ssh ubuntu@$ipaddress
sudo su -
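Before testing the Sense HAT, confirm that the blacklist took effect; the lsmod command should now return nothing:

lsmod | grep st_ # no output expected now that the Industrial I/O modules are blacklisted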

Run the commands as in the output below to check; address 46 shows UU, meaning it is in use by a driver, which is fine.

root@ubuntu:~# cat /boot/firmware/syscfg.txt
# This file is intended to be modified by the pibootctl utility. User
# configuration changes should be placed in "usercfg.txt". Please refer to the
# README file for a description of the various configuration files on the boot
# partition.

enable_uart=1
dtparam=audio=on
dtparam=i2c_arm=on
dtparam=spi=on
cmdline=cmdline.txt

root@ubuntu:~# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --
root@ubuntu:~# i2cdetect -F 1
Functionalities implemented by /dev/i2c-1:
I2C                              yes
SMBus Quick Command              yes
SMBus Send Byte                  yes
SMBus Receive Byte               yes
SMBus Write Byte                 yes
SMBus Read Byte                  yes
SMBus Write Word                 yes
SMBus Read Word                  yes
SMBus Process Call               yes
SMBus Block Write                yes
SMBus Block Read                 no
SMBus Block Process Call         no
SMBus PEC                        yes
I2C Block Write                  yes
I2C Block Read                   yes

Install the RTIMULib. This is required to use the SenseHat.

git clone https://github.com/RPi-Distro/RTIMULib.git
cd RTIMULib/
cd Linux/python
python3 setup.py build
python3 setup.py install
cd ../..
cd RTIMULib
mkdir build
cd build
cmake ..
make -j4
make install
ldconfig
cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11
cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10

cd /root/RTIMULib/Linux/RTIMULibDemoGL
#yum -y install qt5-qtbase-devel;qmake-qt5
apt-get install -y qt5-default qtdeclarative5-dev;qmake
make -j4
make install

cd /root
git clone https://github.com/astro-pi/python-sense-hat
cd python-sense-hat
python3 setup.py build
python3 setup.py install
#pip3 install sense_emu
cd ~

Create the sparkles.py and test the Sense Hat's LED matrix

cat << EOF > sparkles.py
from sense_hat import SenseHat
from random import randint
from time import sleep
sense = SenseHat()
while True:
    x = randint(0, 7)
    y = randint(0, 7)
    r = randint(0, 255)
    g = randint(0, 255)
    b = randint(0, 255)
    sense.set_pixel(x, y, r, g, b)
    sleep(0.1)
EOF
sudo python3 sparkles.py # Ctrl-C to interrupt
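After interrupting sparkles.py, the LED matrix may be left lit; you can blank it with a one-liner using the SenseHat clear() method:

sudo python3 -c "from sense_hat import SenseHat; SenseHat().clear()"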

Create the testsensehat.py and test the temperature, pressure and humidity sensors

cat << EOF > testsensehat.py
from sense_hat import SenseHat
import time
sense = SenseHat()
while True:
   t = sense.get_temperature()
   p = sense.get_pressure()
   h = sense.get_humidity()
   msg = "Temperature = %s, Pressure=%s, Humidity=%s" % (t,p,h)
   print(msg)
   time.sleep(0.5)
EOF
sudo python3 testsensehat.py # Ctrl-C to interrupt

Create the testcam.py and test the USB camera. Install the latest pygame; note that pygame 1.9.6 will throw "SystemError: set_controls() method: bad call flags", so you need to upgrade pygame to 2.1.0.

pip3 install --upgrade pygame

cat << EOF > testcam.py
import pygame, sys
from pygame.locals import *
import pygame.camera
pygame.init()
pygame.camera.init()
cam = pygame.camera.Camera("/dev/video0",(352,288))
cam.start()
image= cam.get_image()
pygame.image.save(image,'101.bmp')
cam.stop()
EOF
sudo python3 testcam.py # It will create a file 101.bmp
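If /dev/video0 is not your camera, pygame can list the detected camera devices; a quick check:

sudo python3 -c "import pygame.camera; pygame.camera.init(); print(pygame.camera.list_cameras())"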

Install MicroShift

Set the hostname with domain (if not already set)

hostnamectl set-hostname ubuntu.example.com # The host needs a fqdn domain for MicroShift to work well
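Verify that the fully qualified hostname is set:

hostnamectl # the static hostname should show ubuntu.example.com
hostname -f # prints the fully qualified domain name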

Clone the microshift repository so we can run the install.sh

sudo su -
git clone https://github.com/thinkahead/microshift.git
cd microshift

Run the install script

./install.sh

We can get more details about the microshift service with

systemctl show microshift.service

To inspect the microshift systemd service, check the file /lib/systemd/system/microshift.service. It shows that the microshift binary is run from the /usr/local/bin/ directory.

root@ubuntu:# cat /lib/systemd/system/microshift.service 
[Unit]
Description=MicroShift
After=crio.service

[Service]
WorkingDirectory=/usr/local/bin/
ExecStart=microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

To start microshift and check the status and logs, you can run

systemctl start microshift
systemctl status microshift
journalctl -u microshift -f

Install the oc client. We can download the required version of the oc client for arm64.

wget https://mirror.openshift.com/pub/openshift-v4/arm64/clients/ocp/candidate/openshift-client-linux.tar.gz
mkdir tmp;cd tmp
tar -zxvf ../openshift-client-linux.tar.gz
mv -f oc /usr/local/bin
cd ..;rm -rf tmp
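Verify the client:

oc version --client # confirm the arm64 oc client is on the PATH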

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "kubectl get nodes;kubectl get pods -A;crictl pods;crictl images"
#watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Output

NAME                 STATUS   ROLES    AGE     VERSION
ubuntu.example.com   Ready    <none>   4m14s   v1.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-jzfdb                 1/1     Running   0          4m15s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-qf2b2   1/1     Running   0          4m13s
openshift-dns                   dns-default-6hkxk                     2/2     Running   0          4m15s
openshift-dns                   node-resolver-8j496                   1/1     Running   0          4m15s
openshift-ingress               router-default-85bcfdd948-nprq5       1/1     Running   0          4m19s
openshift-service-ca            service-ca-76674bfb58-hcrrw           1/1     Running   0          4m20s
POD ID          CREATED              STATE   NAME                                  NAMESPACE                       ATTEMPT   RUNTIME
9dea7f3d64c81   About a minute ago   Ready   router-default-85bcfdd948-nprq5       openshift-ingress               0         (default)
6d5fd5def953d   About a minute ago   Ready   dns-default-6hkxk                     openshift-dns                   0         (default)
e45d188fd24b9   4 minutes ago        Ready   node-resolver-8j496                   openshift-dns                   0         (default)
f4c17418e67c2   4 minutes ago        Ready   kubevirt-hostpath-provisioner-qf2b2   kubevirt-hostpath-provisioner   0         (default)
ee9f3bf212d44   4 minutes ago        Ready   kube-flannel-ds-jzfdb                 kube-system                     0         (default)
fc6275f909805   4 minutes ago        Ready   service-ca-76674bfb58-hcrrw           openshift-service-ca            0         (default)
IMAGE                                     TAG                             IMAGE ID        SIZE
k8s.gcr.io/pause                          3.5                             f7ff3c4042631   491kB
quay.io/microshift/cli                    4.8.0-0.okd-2021-10-10-030117   33a276ba2a973   205MB
quay.io/microshift/coredns                4.8.0-0.okd-2021-10-10-030117   67a95c8f15902   265MB
quay.io/microshift/flannel-cni            4.8.0-0.okd-2021-10-10-030117   0e66d6f50c694   8.78MB
quay.io/microshift/flannel                4.8.0-0.okd-2021-10-10-030117   85fc911ceba5a   68.1MB
quay.io/microshift/haproxy-router         4.8.0-0.okd-2021-10-10-030117   37292c44812e7   225MB
quay.io/microshift/hostpath-provisioner   4.8.0-0.okd-2021-10-10-030117   fdef3dc1264ad   39.3MB
quay.io/microshift/kube-rbac-proxy        4.8.0-0.okd-2021-10-10-030117   7f149e453e908   41.5MB
quay.io/microshift/service-ca-operator    4.8.0-0.okd-2021-10-10-030117   0d3ab44356260   276MB

Build the MicroShift binary for arm64 on Ubuntu 20.04 (64 bit)

We can replace the microshift binary that was downloaded by the install.sh script with our own. Let’s build the microshift binary from scratch: clone the microshift repository from github, install golang, run make, and finally move the microshift binary to /usr/local/bin.

sudo su -

apt -y install build-essential curl libgpgme-dev pkg-config libseccomp-dev

# Install golang
wget https://golang.org/dl/go1.17.2.linux-arm64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.17.2.linux-arm64.tar.gz
rm -f go1.17.2.linux-arm64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
cat << EOF >> /root/.bashrc
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
EOF
mkdir $GOPATH
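Verify the go installation:

go version # should report go1.17.2 linux/arm64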

git clone https://github.com/thinkahead/microshift.git
cd microshift
make
./microshift version
ls -las microshift # binary in current directory /root/microshift
mv microshift /usr/local/bin/microshift
systemctl restart microshift

Output

root@ubuntu:~# git clone https://github.com/thinkahead/microshift.git
root@ubuntu:~# cd microshift
root@ubuntu:~/microshift# make
fatal: No names found, cannot describe anything.
go build -mod=vendor -tags 'include_gcs include_oss containers_image_openpgp gssapi providerless netgo osusergo' -ldflags "-X k8s.io/component-base/version.gitMajor=1 -X k8s.io/component-base/version.gitMajor=1 -X k8s.io/component-base/version.gitMinor=21 -X k8s.io/component-base/version.gitVersion=v1.21.0 -X k8s.io/component-base/version.gitCommit=c3b9e07a -X k8s.io/component-base/version.gitTreeState=clean -X k8s.io/component-base/version.buildDate=2021-12-18T22:44:41Z -X k8s.io/client-go/pkg/version.gitMajor=1 -X k8s.io/client-go/pkg/version.gitMinor=21 -X k8s.io/client-go/pkg/version.gitVersion=v1.21.1 -X k8s.io/client-go/pkg/version.gitCommit=b09a9ce3 -X k8s.io/client-go/pkg/version.gitTreeState=clean -X k8s.io/client-go/pkg/version.buildDate=2021-12-18T22:44:41Z -X github.com/openshift/microshift/pkg/version.versionFromGit=4.8.0-0.microshift-unknown -X github.com/openshift/microshift/pkg/version.commitFromGit=b570b8c5 -X github.com/openshift/microshift/pkg/version.gitTreeState=dirty -X github.com/openshift/microshift/pkg/version.buildDate=2021-12-18T22:44:42Z -s -w" github.com/openshift/microshift/cmd/microshift 
root@ubuntu:~/microshift# ./microshift version
MicroShift Version: 4.8.0-0.microshift-unknown
Base OKD Version: 4.8.0-0.okd-2021-10-10-030117
root@ubuntu:~/microshift# ls -las microshift
147668 -rwxr-xr-x 1 root root 151211021 Dec 18 22:56 microshift 
root@ubuntu:~/microshift# ls -las `which microshift`
147184 -rwxr-xr-x 1 root root 150715803 Dec 18 16:31 /usr/local/bin/microshift 
root@ubuntu:~/microshift# mv microshift /usr/local/bin/microshift

We may alternatively download the latest version of the prebuilt microshift binary from github (the same binary that install.sh downloads) as follows:

ARCH=arm64
export VERSION=$(curl -s https://api.github.com/repos/redhat-et/microshift/releases | grep tag_name | head -n 1 | cut -d '"' -f 4) && \
curl -LO https://github.com/redhat-et/microshift/releases/download/$VERSION/microshift-linux-${ARCH}
chmod +x microshift-linux-${ARCH}
ls -las microshift-linux*
mv microshift-linux-${ARCH} /usr/local/bin/microshift
systemctl restart microshift

Output

root@ubuntu:~/microshift# ls -las microshift-linux* 
147184 -rwxr-xr-x 1 root root 150715803 Dec 18 23:00 microshift-linux-arm64

Install Docker on Ubuntu 20.04

Although we do not need docker to run MicroShift directly on the Raspberry Pi 4, we will build images using docker and run some samples. Also, in a later section, we will run MicroShift using the containerized approach. So, let’s install Docker.

apt-get install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
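Verify that the Docker daemon is running:

systemctl status docker --no-pager
docker run --rm hello-world # optional smoke test, pulls the arm64 hello-world image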

Samples to run on MicroShift

We will run a few samples that will show the use of persistent volume, SenseHat and the USB camera.

1. InfluxDB/Telegraf/Grafana

(Screenshot: Grafana dashboard showing Memory, CPU and Load)


The source code is available for this influxdb sample in github with line-by-line instructions. Create a new project for this sample:

cd ~
git clone https://github.com/thinkahead/microshift.git
cd microshift/raspberry-pi/influxdb
oc new-project influxdb

Install InfluxDB

oc create configmap influxdb-config --from-file=influxdb.conf
oc get configmap influxdb-config -o yaml
oc apply -f influxdb-secrets.yaml
oc describe secret influxdb-secrets
mkdir /var/hpvolumes/influxdb
oc apply -f influxdb-pv.yaml
oc apply -f influxdb-data.yaml
oc apply -f influxdb-deployment.yaml
oc get -f influxdb-deployment.yaml # check that the Deployment is created and ready
oc logs deployment/influxdb-deployment -f
oc apply -f influxdb-service.yaml

oc rsh deployment/influxdb-deployment # connect to InfluxDB and display the databases

Output

root@ubuntu:~/microshift/raspberry-pi/influxdb# oc rsh deployment/influxdb-deployment
# influx --username admin --password admin
Connected to http://localhost:8086 version 1.7.4
InfluxDB shell version: 1.7.4
Enter an InfluxQL query
> show databases
name: databases
name
----
test
_internal
> exit
# exit
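For reference, the influxdb-pv.yaml applied above defines a hostPath PersistentVolume backed by the /var/hpvolumes/influxdb directory we created. A minimal sketch of such a PV is shown below; the name, size, and storage class here are assumptions, see the file in the repository for the actual definition:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: influxdb-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: kubevirt-hostpath-provisioner
  hostPath:
    path: /var/hpvolumes/influxdb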

Install Telegraf
Check the measurements for the telegraf database in InfluxDB

oc apply -f telegraf-config.yaml 
oc apply -f telegraf-secrets.yaml 
oc apply -f telegraf-deployment.yaml

Output

root@ubuntu:~/microshift/raspberry-pi/influxdb# oc rsh deployment/influxdb-deployment
# influx --username admin --password admin
Connected to http://localhost:8086 version 1.7.4
InfluxDB shell version: 1.7.4
Enter an InfluxQL query
> show databases
name: databases
name
----
test
_internal
telegraf
> use telegraf
Using database telegraf
> show measurements
name: measurements
name
----
cpu
disk
diskio
kernel
mem
net
netstat
processes
swap
system
> select * from cpu;
...
> exit
# exit

If you want to use the telegraf:latest image, you will face an issue with capabilities. To fix it, you will need to either:

1. Add the capabilities in telegraf-deployment.yaml

        - image: telegraf
          name: telegraf
          securityContext:
            #privileged: true
            capabilities:
              add: ["CAP_SETFCAP","CAP_NET_RAW","CAP_NET_BIND_SERVICE"]

or
2. Create your own container image that ignores the error in entrypoint.sh from the command

setcap cap_net_raw,cap_net_bind_service+ep /usr/bin/telegraf

using the following Dockerfile, until the pull request is merged:

FROM telegraf:latest
RUN sed -i "s~/usr/bin/telegraf~/usr/bin/telegraf || echo \"Failed to set additional capabilities on /usr/bin/telegraf\"~" entrypoint.sh
RUN cat entrypoint.sh
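Then build and push the patched image, and point the deployment at it; the image name below is an assumption, substitute your own registry account, and run the build in the directory containing this Dockerfile:

docker build -t docker.io/<your-account>/telegraf:latest .
docker push docker.io/<your-account>/telegraf:latest
# Update telegraf-deployment.yaml to reference docker.io/<your-account>/telegraf:latest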

Install Grafana

cd grafana
mkdir /var/hpvolumes/grafana
cp -r config/* /var/hpvolumes/grafana/.
oc apply -f grafana-pv.yaml
oc apply -f grafana-data.yaml
oc apply -f grafana-deployment.yaml
oc apply -f grafana-service.yaml
oc expose svc grafana-service # Create the route

Add the "RaspberryPiIPAddress grafana-service-influxdb.cluster.local" to /etc/hosts on your laptop and login to http://grafana-service-influxdb.cluster.local (or to http://grafana-service-default.cluster.local if you deployed to default namespace) using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). The Analysis Server dashboard should be visible. Open it to display monitoring information for MicroShift.

Finally, after you are done working with this sample, delete the Grafana, Telegraf, and InfluxDB resources.

oc delete route grafana-service
oc delete -f grafana-data.yaml -f grafana-deployment.yaml -f grafana-pv.yaml -f grafana-service.yaml 
cd ..
oc delete -f telegraf-config.yaml -f telegraf-secrets.yaml -f telegraf-deployment.yaml
oc delete -f influxdb-data.yaml -f influxdb-pv.yaml -f influxdb-service.yaml -f influxdb-deployment.yaml -f influxdb-secrets.yaml
oc delete project influxdb

2. Sample with Sense Hat and USB camera

Let’s install Node Red on IBM Cloud. We will use Node Red to show pictures and chat messages sent from the Raspberry Pi 4. Alternatively, we can use the Node Red that we deployed as an application in MicroShift on the MacBook Pro in VirtualBox in Part 1.

  1. Create an IBM Cloud free tier account at https://www.ibm.com/cloud/free and login to Console (top right).
  2. Create an API Key and save it, Manage->Access->IAM->API Key->Create an IBM Cloud API Key
  3. Click on Catalog and Search for "Node-Red App", select it and click on "Get Started"
  4. Give a unique App name, for example xxxxx-node-red and select the region nearest to you
  5. Select the Pricing Plan Lite, if you already have an existing instance of Cloudant, you may select it in Pricing Plan
  6. Click Create
  7. Under Deployment Automation -> Configure Continuous Delivery, click on "Deploy your app"
  8. Select the deployment target Cloud Foundry that provides a Free-Tier of 256 MB cost-free or Code Engine. The latter has monthly limits and takes more time to deploy. [ Note: Cloud Foundry is deprecated, use the IBM Cloud Code Engine. Any IBM Cloud Foundry application runtime instances running IBM Cloud Foundry applications will be permanently disabled and deprovisioned ]
  9. Enter the IBM Cloud API Key from Step 2, or click on "New" to create one
  10. The rest of the fields Region, Organization, Space will automatically get filled up. Use the default 256MB Memory and click "Next"
  11. In "Configure the DevOps toolchain", click Create
  12. Wait for 10 minutes for the Node Red instance to start
  13. Click on the "Visit App URL"
  14. On the Node Red page, create a new userid and password
  15. In Manage Palette, install the node-red-contrib-image-tools, node-red-contrib-image-output, and node-red-node-base64
  16. Import the Chat flow and the Picture (Image) display flow. On the Chat flow, you will need to edit the template node line 35 to use wss:// (on IBM Cloud) instead of ws:// (on your Laptop)
  17. On another browser tab, start the https://mynodered.mybluemix.net/chat (Replace mynodered with your IBM Cloud Node Red URL)
  18. On the Image flow, click on the square box to the right of image preview or viewer to Deactivate and Activate the Node. You will be able to see the picture when you Activate the Node and run samples below

Build the image for this sensehat sample

cd ~
git clone https://github.com/thinkahead/microshift.git
cd microshift/raspberry-pi/sensehat

docker build -t karve/sensehat .
docker push karve/sensehat
docker run --privileged --name sensehat -ti karve/sensehat bash

We will use the above docker image in a pod in MicroShift to send pictures and web socket chat messages to Node Red.

# Update the URL to your node red instance
sed -i "s|mynodered.mybluemix.net|yournodered.mybluemix.net|" sensehat.yaml
oc apply -f sensehat.yaml

When we are done, we can delete the deployment

oc delete -f sensehat.yaml

3. TensorFlow Lite Python object detection with SenseHat and Node Red

This example requires the same Node Red setup as in the previous Sample 2.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection, and use a pod in MicroShift to send pictures and web socket chat messages to Node Red when a person is detected.

docker build -t docker.io/karve/object-detection-raspberrypi4 .
docker push docker.io/karve/object-detection-raspberrypi4:latest

sed -i "s|mynodered.mybluemix.net|yournodered.mybluemix.net|" *.yaml
oc apply -f object-detection.yaml

We will see pictures being sent to Node Red when a person is detected. When we are done testing, we can delete the deployment.

oc delete -f object-detection.yaml

4. Install Metrics Server

This will enable us to run the “kubectl top” and “oc adm top” commands.

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server-components.yaml
kubectl apply -f metrics-server-components.yaml

# Wait for the metrics-server to start in the kube-system namespace
kubectl get deployment metrics-server -n kube-system
kubectl get events -n kube-system
# Wait for a couple of minutes for metrics to be collected
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
apt-get install -y jq
kubectl get --raw /api/v1/nodes/$(kubectl get nodes -o json | jq -r '.items[0].metadata.name')/proxy/stats/summary

watch "kubectl top nodes;kubectl top pods -A"
watch "oc adm top nodes;oc adm top pods -A"

Output

NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ubuntu.example.com   585m         14%    1937Mi          25%
NAMESPACE                       NAME                                  CPU(cores)   MEMORY(bytes)
kube-system                     metrics-server-dbf765b9b-lvw2t        10m          16Mi
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-qf2b2   1m           7Mi
openshift-dns                   dns-default-6hkxk                     9m           20Mi
openshift-dns                   node-resolver-8j496                   5m           4Mi
openshift-ingress               router-default-85bcfdd948-nprq5       3m           23Mi
openshift-service-ca            service-ca-76674bfb58-hcrrw           13m          20Mi

Cleanup MicroShift

We can use the script available on github to clean up the pods and images. If you already cloned the microshift repo from github, you have the script in the ~/microshift/hack directory.

wget https://raw.githubusercontent.com/thinkahead/microshift/main/hack/cleanup.sh
bash ./cleanup.sh

Containerized MicroShift on Ubuntu 20.04 (64 bit)

We can run MicroShift within containers in two ways:
  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, the CRI-O Systemd service runs directly on the host, and data is stored at /var/lib/microshift and /var/lib/kubelet on the host VM.
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a Podman container and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only.

1. MicroShift Containerized

We can either use the prebuilt image or build the image using docker.

To use the prebuilt image, set

IMAGE=quay.io/microshift/microshift:4.8.0-0.microshift-2021-11-19-115908-linux-arm64
docker pull $IMAGE

To build the image, clone the microshift repository from github and run make

git clone https://github.com/thinkahead/microshift.git
cd microshift 

# Edit the packaging/images/microshift/Dockerfile. Replace the go-toolset with go-toolset:1.16.7-5
-FROM registry.access.redhat.com/ubi8/go-toolset as builder
+FROM registry.access.redhat.com/ubi8/go-toolset:1.16.7-5 as builder

# This will create the image quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-arm64
make build-containerized-cross-build-linux-arm64 -e FROM_SOURCE

The Dockerfile uses the registry.access.redhat.com/ubi8/go-toolset:1.16.7-5 as builder to build the microshift binary. Then, it copies the binary to the registry.access.redhat.com/ubi8/ubi-minimal:8.4 that is used for the run stage.

Set the IMAGE to the one we just built above

IMAGE=quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-arm64

Run the microshift container in Docker

docker run --rm --ipc=host --network=host --privileged -d --name microshift -v /var/run:/var/run -v /sys:/sys:ro -v /var/lib:/var/lib:rw,rshared -v /lib/modules:/lib/modules -v /etc:/etc -v /run/containers:/run/containers -v /var/log:/var/log -e KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig $IMAGE

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "docker ps;kubectl get nodes;kubectl get pods -A;crictl images"

The microshift container runs in docker and the rest of the pods in crio. Now, we can run the samples shown earlier.

After we are done, we can delete the microshift container. The --rm we used in the docker run will delete the container when we stop it.

docker stop microshift

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.

2. MicroShift Containerized All-In-One

We can build and run the all-in-one microshift in docker or use prebuilt images. Let’s stop CRI-O on the host, since we will be creating an all-in-one container in docker that has CRI-O running within the container.

systemctl stop crio

To build the all-in-one image, clone the microshift repository from github and run make

git clone https://github.com/thinkahead/microshift.git
cd microshift 

# Edit the packaging/images/microshift-aio/Dockerfile. Replace the go-toolset with go-toolset:1.16.7-5
-FROM registry.access.redhat.com/ubi8/go-toolset as builder
+FROM registry.access.redhat.com/ubi8/go-toolset:1.16.7-5 as builder

# This will create the image quay.io/microshift/microshift-aio:4.8.0-0.microshift-unknown-linux-nft-arm64
make build-containerized-all-in-one-arm64

The Dockerfile uses the registry.access.redhat.com/ubi8/go-toolset:1.16.7-5 as builder to get the microshift binary. Then, it copies the microshift binary, packaging files and downloads kubectl to the registry.access.redhat.com/ubi8/ubi-init:8.4 that is used for the run stage. It finally installs the cri-o and dependencies within the image.

Create a new docker volume microshift-data and run the microshift container using the image we built above or the prebuilt image.

docker volume rm microshift-data;docker volume create microshift-data

# Run using the image we built above
#docker run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-unknown-linux-nft-arm64

# Run using prebuilt image
docker run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2021-10-10-030117-3-ga424399-linux-nft-arm64

Now login to the microshift container and see the pods created using CRI-O within the container, not directly on the host. It may take up to 6 minutes for all pods to start.

docker exec -it microshift bash
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "kubectl get nodes;kubectl get pods -A;crictl images;crictl pods"

The following patch may be required if the dns-default pod in the openshift-dns namespace keeps restarting.

kubectl patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","resources": {"requests": {"cpu": "80m","memory": "90Mi"}}}]}}}}'

You may also need to patch the service-ca deployment if it keeps restarting:

kubectl patch deployments/service-ca -n openshift-service-ca -p '{"spec": {"template": {"spec": {"containers": [{"name": "service-ca-controller","args": ["-v=4"]}]}}}}'

Output

root@ubuntu:~# docker ps -a
CONTAINER ID   IMAGE                                                                                               COMMAND        CREATED         STATUS         PORTS                                                                                                                     NAMES
6d057af1c963   quay.io/microshift/microshift-aio:4.8.0-0.microshift-2021-10-10-030117-3-ga424399-linux-nft-arm64   "/sbin/init"   4 minutes ago   Up 4 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:6443->6443/tcp, :::6443->6443/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   microshift
root@ubuntu:~# docker exec -it microshift bash
[root@microshift /]# export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
[root@microshift /]# watch "kubectl get nodes;kubectl get pods -A;crictl images;crictl pods" # Ctrl-C to interrupt watch
[root@microshift /]# kubectl patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","resources": {"requests": {"cpu": "80m","memory": "90Mi"}}}]}}}}'
daemonset.apps/dns-default patched
[root@microshift /]# kubectl patch deployments/service-ca -n openshift-service-ca -p '{"spec": {"template": {"spec": {"containers": [{"name": "service-ca-controller","args": ["-v=4"]}]}}}}'
deployment.apps/service-ca patched
[root@microshift /]# watch "kubectl get nodes;kubectl get pods -A;crictl images;crictl pods"
NAME                     STATUS   ROLES    AGE     VERSION
microshift.example.com   Ready    <none>   6m57s   v0.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-kddzr                 1/1     Running   0          6m13s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-nknlj   1/1     Running   0          6m17s
openshift-dns                   dns-default-278qj                     2/2     Running   0          27s
openshift-dns                   node-resolver-rb6dx                   1/1     Running   0          6m14s
openshift-ingress               router-default-85bcfdd948-cdlrp       1/1     Running   0          6m16s
openshift-service-ca            service-ca-57cbc45559-l7bdm           1/1     Running   0          32s
IMAGE                                     TAG                             IMAGE ID        SIZE
k8s.gcr.io/pause                          3.2                             2a060e2e7101d   489kB
quay.io/microshift/cli                    4.8.0-0.okd-2021-10-10-030117   33a276ba2a973   205MB
quay.io/microshift/coredns                4.8.0-0.okd-2021-10-10-030117   67a95c8f15902   265MB
quay.io/microshift/flannel                4.8.0-0.okd-2021-10-10-030117   85fc911ceba5a   68.1MB
quay.io/microshift/haproxy-router         4.8.0-0.okd-2021-10-10-030117   37292c44812e7   225MB
quay.io/microshift/hostpath-provisioner   4.8.0-0.okd-2021-10-10-030117   fdef3dc1264ad   39.3MB
quay.io/microshift/kube-rbac-proxy        4.8.0-0.okd-2021-10-10-030117   7f149e453e908   41.5MB
quay.io/microshift/service-ca-operator    4.8.0-0.okd-2021-10-10-030117   0d3ab44356260   276MB
POD ID          CREATED          STATE   NAME                                  NAMESPACE                       ATTEMPT   RUNTIME
dda54fc0e473d   26 seconds ago   Ready   dns-default-278qj                     openshift-dns                   0         (default)
2653d9ccaee1e   31 seconds ago   Ready   service-ca-57cbc45559-l7bdm           openshift-service-ca            0         (default)
4380da41f5cf3   5 minutes ago    Ready   dns-default-snfbp                     openshift-dns                   0         (default)
e8e34c455d654   5 minutes ago    Ready   router-default-85bcfdd948-cdlrp       openshift-ingress               0         (default)
ee372c29fe44f   6 minutes ago    Ready   kube-flannel-ds-kddzr                 kube-system                     0         (default)
c9d72094b2863   6 minutes ago    Ready   node-resolver-rb6dx                   openshift-dns                   0         (default)
273938427b090   6 minutes ago    Ready   kubevirt-hostpath-provisioner-nknlj   kubevirt-hostpath-provisioner   0         (default)
c4295c98166b2   6 minutes ago    Ready   service-ca-76674bfb58-z4qw4           openshift-service-ca            0         (default)

We exit back to the host and check that the microshift container is still running within docker

[root@microshift /]# exit
exit
root@ubuntu:~# docker ps -a
CONTAINER ID   IMAGE                                                                                               COMMAND        CREATED          STATUS          PORTS                                                                                                                     NAMES
6d057af1c963   quay.io/microshift/microshift-aio:4.8.0-0.microshift-2021-10-10-030117-3-ga424399-linux-nft-arm64   "/sbin/init"   14 minutes ago   Up 13 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:6443->6443/tcp, :::6443->6443/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   microshift

We can inspect the microshift-data volume to find the path where the kubeconfig is located.

root@ubuntu:~# docker volume inspect microshift-data
[
    {
        "CreatedAt": "2021-12-19T08:54:20Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/microshift-data/_data",
        "Name": "microshift-data",
        "Options": {},
        "Scope": "local"
    }
]

On the host, we set KUBECONFIG to point to the kubeconfig on the data volume

export KUBECONFIG=/var/lib/docker/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "kubectl get nodes;kubectl get pods -A"

We do not run crictl on the host; CRI-O is running within the docker microshift all-in-one container.

Output

NAME                     STATUS   ROLES    AGE   VERSION
microshift.example.com   Ready    <none>   21m   v0.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-kddzr                 1/1     Running   0          21m
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-nknlj   1/1     Running   0          21m
openshift-dns                   dns-default-278qj                     2/2     Running   0          15m
openshift-dns                   node-resolver-rb6dx                   1/1     Running   0          21m
openshift-ingress               router-default-85bcfdd948-cdlrp       1/1     Running   0          21m
openshift-service-ca            service-ca-57cbc45559-l7bdm           1/1     Running   0          15m

Now, we can run the samples shown earlier.

After we are done, we can delete the microshift container. The --rm we used in the docker run will delete the container when we stop it.

docker stop microshift

After it is stopped, we can run the cleanup.sh as in previous section.

Conclusion

In this Part 6, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with Ubuntu 20.04 (64 bit). We ran samples that used a persistent volume for InfluxDB, the Sense Hat, and the USB camera. We deployed a sample that sent pictures with object detection and web socket messages to Node Red on IBM Cloud. In Part 7, we will revisit the Jetson Nano. We had deployed MicroShift on the Jetson Nano with Ubuntu 18.04 in Part 2 and Part 3. We will build and deploy MicroShift on the Jetson Nano with Ubuntu 20.04 (developer preview planned for Q1-2022 in the Jetson Software Roadmap).

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.
