
MicroShift – Part 5: Raspberry Pi 4 with CentOS 8 Linux Stream

By Alexei Karve posted Mon December 06, 2021 04:15 PM

MicroShift on Raspberry Pi 4 with CentOS 8 Linux Stream

Introduction 

MicroShift is a research project that explores how the OKD Kubernetes distribution of OpenShift can be optimized for small-form-factor devices and edge computing. In Part 1 we looked at multiple ways to run MicroShift on a MacBook Pro. In Parts 2 and 3 of this series we worked with the Jetson Nano. In Part 4, we ran MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit). In this Part 5, we set up and deploy MicroShift on CentOS 8 Stream using multiple approaches: directly on the Pi, containerized, and containerized all-in-one with Docker and Podman. Further, we show the use of a template, run an object detection sample, and send messages to Node Red on IBM Cloud.

Setting up the Raspberry Pi 4 with CentOS 8 Stream

Run the following steps to download the CentOS image and set up the Raspberry Pi 4.

  1. Download the CentOS image CentOS-Userland-8-stream-aarch64-RaspberryPI-Minimal-4-sda.raw.xz. If you have already downloaded the non-stream CentOS-Userland-8-aarch64-RaspberryPI-Minimal-4-sda.raw.xz, you can convert it to Stream as documented at https://ostechnix.com/how-to-migrate-to-centos-stream-8-from-centos-linux-8/ and shown in step 5 below.
  2. Write the image to a MicroSDXC card, insert the card into the Raspberry Pi 4, and power on.
  3. Log in with root/centos, find the IP address, and connect:

ssh root@$ipaddress

  4. Extend the disk
sudo growpart /dev/mmcblk0 3
sudo fdisk -lu
resize2fs /dev/mmcblk0p3

or

rootfs-expand

  5. If you used the non-stream image, convert to stream
cat /etc/redhat-release
dnf install centos-release-stream
dnf swap centos-{linux,stream}-repos
dnf -y distro-sync

If you get "Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist", switch the repos to the vault mirrors:

sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
dnf -y distro-sync


  6. Add your public key, enable wifi
mkdir ~/.ssh
vi ~/.ssh/authorized_keys
chmod 700 ~/.ssh # the directory needs the execute bit
chmod 644 ~/.ssh/authorized_keys
nmcli device wifi list # Note your ssid
nmcli device wifi connect $ssid --ask

  7. Set the hostname with a domain

hostnamectl set-hostname centos.example.com

  8. Update the kernel and kernel parameters
Update kernel parameters: concatenate the following onto the end of the existing line (do not add a new line) in /boot/cmdline.txt
 cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory

A control group (cgroup) is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a collection of processes. Cgroups are a key component of containers because there are often multiple processes running in a container that you need to control together. In Microshift, cgroups are used to implement resource requests and limits and corresponding QoS classes at the pod level.
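As an illustration of those QoS classes, here is a sketch (not MicroShift code) of the rule Kubernetes uses to derive a pod's QoS class from its containers' requests and limits:

```python
# Sketch: derive a pod's QoS class from container resources, which the
# cgroup settings then enforce. Simplified from the Kubernetes rules.
def qos_class(containers):
    """containers: list of dicts like {"requests": {...}, "limits": {...}}."""
    # BestEffort: no container sets any requests or limits.
    if all(not c.get("requests") and not c.get("limits") for c in containers):
        return "BestEffort"
    # Guaranteed: every container sets cpu+memory limits, and requests
    # (which default to limits when unset) equal the limits.
    if all(
        c.get("limits", {}).get("cpu") and c.get("limits", {}).get("memory")
        and (c.get("requests") or c["limits"]) == c["limits"]
        for c in containers
    ):
        return "Guaranteed"
    # Everything in between is Burstable.
    return "Burstable"
```

Guaranteed pods get the strictest cgroup settings; BestEffort pods are the first to be evicted under memory pressure.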

Otherwise, we will get the “Error cannot access '/sys/fs/cgroup/cpu/cpu.cfs_quota_us': No such file or directory” when starting MicroShift. To get past this, change the kernel from raspberrypi2-kernel4-5.4.60-v8.1.el8.aarch64 to raspberrypi2-kernel4-5.4.158-v8.1.el8 as follows (the latest is raspberrypi2-kernel4-5.4.195-v8.1.el8):

Create /etc/yum.repos.d/pgrepo.repo
[pgrepo]
name=New Kernel
type=rpm-md
baseurl=https://people.centos.org/pgreco/rpi_aarch64_el8/
gpgcheck=0
enabled=1
 
Update and reboot
dnf -y update

If you get "Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist", switch the repos to the vault mirrors:

sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
dnf -y update

reboot

Verify
mount | grep cgroup
cat /proc/cgroups | column -t # Check that memory and cpuset are present
ls -l /sys/fs/cgroup/cpu/cpu.cfs_quota_us # This needs to be present for MicroShift to work
 
  9. Check the release
cat /etc/os-release
[root@centos ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Linux 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-8"
CENTOS_MANTISBT_PROJECT_VERSION="8"
  10. Installing sense_hat and RTIMULib on CentOS 8
The Sense HAT is an add-on board for the Raspberry Pi. It has an 8×8 RGB LED matrix, a five-button joystick, and the following sensors: an Inertial Measurement Unit (accelerometer, gyroscope, magnetometer), temperature, barometric pressure, and humidity. If you have the Sense HAT attached, install the libraries and test it. Also test the USB camera.

Install sensehat
dnf -y install zlib zlib-devel libjpeg-devel gcc gcc-c++ i2c-tools python3-devel python3 python3-pip git
pip3 install Cython Pillow numpy sense_hat

Be patient. The above command will take a few minutes to install:
[root@localhost ~]# pip3 install Cython Pillow numpy sense_hat
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Collecting Cython
Downloading https://files.pythonhosted.org/packages/80/08/1c007f1d571f8f2a67ed6938cc79117fa5ae9c0d9ff633fbd5e52f212062/Cython-0.29.30-py2.py3-none-any.whl (985kB)
100% |████████████████████████████████| 993kB 549kB/s
Collecting Pillow
Downloading https://files.pythonhosted.org/packages/7d/2a/2fc11b54e2742db06297f7fa7f420a0e3069fdcf0e4b57dfec33f0b08622/Pillow-8.4.0.tar.gz (49.4MB)
100% |████████████████████████████████| 49.4MB 11kB/s
Collecting numpy
Downloading https://files.pythonhosted.org/packages/51/60/3f0fe5b7675a461d96b9d6729beecd3532565743278a9c3fe6dd09697fa7/numpy-1.19.5.zip (7.3MB)
100% |████████████████████████████████| 7.3MB 76kB/s
Collecting sense_hat
Downloading https://files.pythonhosted.org/packages/13/cd/f30b6709e01cacd0f9e2882ce3c0633ea2862771a75f4a9d02a56db9ec9a/sense_hat-2.2.0-py2.py3-none-any.whl
Installing collected packages: Cython, Pillow, numpy, sense-hat
Running setup.py install for Pillow ... done
Running setup.py install for numpy ... done
Successfully installed Cython-0.29.30 Pillow-8.4.0 numpy-1.19.5 sense-hat-2.2.0


Install RTIMULib
cd ~
git clone https://github.com/RPi-Distro/RTIMULib.git
cd ~/RTIMULib/Linux/python
python3 setup.py build
python3 setup.py install
cd ../../RTIMULib
mkdir build
cd build
dnf install -y cmake

cmake ..
make -j4
make install
ldconfig

cd /root/RTIMULib/Linux/RTIMULibDrive11
make -j4
make install
RTIMULibDrive11

cd /root/RTIMULib/Linux/RTIMULibDrive10
make -j4
make install
RTIMULibDrive10

dnf -y install qt5-qtbase-devel
cd /root/RTIMULib/Linux/RTIMULibDemoGL
qmake-qt5
make -j4
make install
 
Check the Sense HAT with i2cdetect. Address 0x46 (the LED matrix) shows UU because a driver has claimed the device; that is fine.
i2cdetect -y 1

Output
root@centos:~# i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:                         -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --

Create the sparkles.py and test the Sense Hat's LED matrix
cat << EOF > sparkles.py
from sense_hat import SenseHat
from random import randint
from time import sleep
sense = SenseHat()
while True:
    x = randint(0, 7)
    y = randint(0, 7)
    r = randint(0, 255)
    g = randint(0, 255)
    b = randint(0, 255)
    sense.set_pixel(x, y, r, g, b)
    sleep(0.1)
EOF
 
python3 sparkles.py # Ctrl-C to interrupt
 
If you get the ModuleNotFoundError: No module named 'sense_hat', it means you need to run "pip3 install Cython Pillow numpy sense_hat". If you get the ModuleNotFoundError: No module named 'RTIMU', then it means you need to install the RTIMULib as mentioned above.
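These two errors can be told apart programmatically; a small hypothetical helper that maps the module named in the ImportError to the install step described above:

```python
# Hypothetical helper: map the module named in an ImportError to a fix.
REMEDIES = {
    "sense_hat": "pip3 install Cython Pillow numpy sense_hat",
    "RTIMU": "build and install RTIMULib from source as shown above",
}

def remedy_for(module_name):
    """Return the suggested fix for a missing module."""
    return REMEDIES.get(module_name, "no known remedy for " + module_name)

try:
    from sense_hat import SenseHat  # noqa: F401
except ImportError as e:
    # e.name is 'sense_hat' or 'RTIMU' depending on which layer is missing
    print("Missing module %s; try: %s" % (e.name, remedy_for(e.name or "")))
```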

Create the temperature.py and test the temperature sensor
 
cat << EOF > temperature.py
from sense_hat import SenseHat
 
sense = SenseHat()
 
temp = sense.get_temperature_from_pressure()
#temp = sense.get_temperature_from_humidity()
print("Temperature: %s C" % temp)
sense.show_message("{:.1f} C".format(temp))
EOF
 
python3 temperature.py
 
The first time you run it, you may see “Temperature: 0 C”; just run it again.

[root@localhost sensehat]# python3 temperature.py
Temperature: 0 C
[root@localhost sensehat]# python3 temperature.py
Temperature: 36.741668701171875 C
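Rather than rerunning by hand, the read can be retried; a small sketch with the reader passed in, so it works with any of the sense.get_temperature_* calls:

```python
import time

def read_until_nonzero(read_temp, retries=3, delay=0.5):
    """Retry a sensor read that can return 0 on the first attempt.
    Pass e.g. sense.get_temperature_from_pressure as read_temp."""
    temp = 0
    for _ in range(retries):
        temp = read_temp()
        if temp:
            return temp
        time.sleep(delay)
    return temp  # still 0 after all retries
```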
 
Test the USB camera
 
Create testcam.py to test the USB camera. First install the latest pygame: pygame 1.9.6 throws “SystemError: set_controls() method: bad call flags”, so you need to upgrade pygame to 2.1.0.
 
pip3 install pygame --upgrade
 
cat << EOF > testcam.py
import pygame, sys
from pygame.locals import *
import pygame.camera
pygame.init()
pygame.camera.init()
cam = pygame.camera.Camera("/dev/video0",(352,288))
cam.start()
image= cam.get_image()
pygame.image.save(image,'101.bmp')
cam.stop()
EOF
 
python3 testcam.py # It will create a file 101.bmp

Install MicroShift on the Raspberry Pi 4

Update selinux-policy, setup crio and MicroShift Nightly CentOS Stream 8 aarch64

rpm -qi selinux-policy # selinux-policy-3.14.3-82.el8 or selinux-policy-3.14.3-80.el8_5.2

# Required to install the selinux-policy-3.14.3-98.el8 for MicroShift (at least selinux-policy-3.14.3-96.el8 required)
dnf --disablerepo '*' --enablerepo=extras swap centos-linux-repos centos-stream-repos
dnf distro-sync

dnf -y install 'dnf-command(copr)'
# dnf -y copr enable rhcontainerbot/container-selinux
# dnf copr enable @redhat-et/microshift-nightly # Do not use this
curl https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift-nightly/repo/centos-stream-8/ -o /etc/yum.repos.d/microshift-nightly-centos-stream-8.repo
cat /etc/yum.repos.d/microshift-nightly-centos-stream-8.repo
#sudo dnf copr enable -y @redhat-et/microshift

VERSION=1.22
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_8_Stream/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:/${VERSION}/CentOS_8_Stream/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo
cat /etc/yum.repos.d/devel\:kubic\:libcontainers\:stable\:cri-o\:${VERSION}.repo
dnf -y install podman # We will use podman for the containerized install later
dnf -y install cri-o cri-tools microshift

Check that cni plugins are present and start MicroShift

ls /opt/cni/bin/ # empty
ls /usr/libexec/cni # cni plugins
systemctl enable --now crio
#systemctl start crio
systemctl enable --now microshift
#systemctl start microshift

Check the microshift and crio logs

journalctl -u microshift -f # Ctrl-C to break
journalctl -u crio -f # Ctrl-C to break

Enable firewall

systemctl enable firewalld --now
firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=5353/udp --permanent
firewall-cmd --zone=public --permanent --add-port=6443/tcp
firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp
firewall-cmd --reload

The microshift service references the microshift binary in the /usr/bin directory

[root@centos ~]# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=/usr/bin/microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

If you make changes to the above microshift.service, you need to run the following so systemd reloads the changed unit files from disk and regenerates its dependency tree.

systemctl daemon-reload

Install the kubectl and the openshift client

# Install kubectl
ARCH=arm64
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/$ARCH/kubectl"
chmod +x kubectl
mv kubectl /usr/local/bin

# Install oc
dnf -y install wget
wget https://mirror.openshift.com/pub/openshift-v4/arm64/clients/ocp/candidate/openshift-client-linux.tar.gz
mkdir tmp;cd tmp
tar -zxvf ../openshift-client-linux.tar.gz
mv -f oc /usr/local/bin
cd ..;rm -rf tmp

It will take around 3 minutes for all pods to start. Check the status of node and pods using kubectl or oc client.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "kubectl get nodes;kubectl get pods -A;crictl pods;crictl images"
#watch "oc get nodes;oc get pods -A;crictl pods;crictl images"

Output

NAME                 STATUS   ROLES    AGE     VERSION
centos.example.com   Ready    <none>   3m45s   v1.21.0

NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-lzw8b                 1/1     Running   0          3m44s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-v8qrq   1/1     Running   0          3m34s
openshift-dns                   dns-default-5q6jw                     2/2     Running   0          3m45s
openshift-dns                   node-resolver-hn8rt                   1/1     Running   0          3m45s
openshift-ingress               router-default-85bcfdd948-x52mm       1/1     Running   0          3m47s
openshift-service-ca            service-ca-76674bfb58-n8vxf           1/1     Running   0          3m48s

POD ID              CREATED         STATE NAME                                  NAMESPACE                       ATTEMPT RUNTIME
8e5c21c9e5598       2 minutes ago	Ready router-default-85bcfdd948-x52mm       openshift-ingress               0       (default)
473cf8594092d       2 minutes ago	Ready dns-default-5q6jw                     openshift-dns                   0       (default)
6b054cd62f066       3 minutes ago	Ready kubevirt-hostpath-provisioner-v8qrq   kubevirt-hostpath-provisioner   0       (default)
1e1d0029b1ec8       3 minutes ago	Ready service-ca-76674bfb58-n8vxf           openshift-service-ca            0       (default)
db32e9a1911a2       3 minutes ago	Ready kube-flannel-ds-lzw8b                 kube-system                     0       (default)
066eb9cab2186       3 minutes ago	Ready node-resolver-hn8rt                   openshift-dns                   0       (default)

IMAGE                                     TAG                             IMAGE ID            SIZE
k8s.gcr.io/pause                          3.5                             f7ff3c4042631       491kB
quay.io/microshift/cli                    4.8.0-0.okd-2021-10-10-030117   33a276ba2a973       205MB
quay.io/microshift/coredns                4.8.0-0.okd-2021-10-10-030117   67a95c8f15902       265MB
quay.io/microshift/flannel                4.8.0-0.okd-2021-10-10-030117   85fc911ceba5a       68.1MB
quay.io/microshift/haproxy-router         4.8.0-0.okd-2021-10-10-030117   37292c44812e7       225MB
quay.io/microshift/hostpath-provisioner   4.8.0-0.okd-2021-10-10-030117   fdef3dc1264ad       39.3MB
quay.io/microshift/kube-rbac-proxy        4.8.0-0.okd-2021-10-10-030117   7f149e453e908       41.5MB
quay.io/microshift/service-ca-operator    4.8.0-0.okd-2021-10-10-030117   0d3ab44356260       276MB
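Waiting for the Ready/Running state can also be scripted; a sketch that parses the default `kubectl get pods -A` table (sample output inlined, hypothetical helper):

```python
# Sketch: decide whether every pod from `kubectl get pods -A` is Running
# and fully ready (assumes the default table format shown above).
def all_pods_ready(kubectl_output):
    for line in kubectl_output.strip().splitlines()[1:]:  # skip the header row
        cols = line.split()
        ready, status = cols[2], cols[3]
        have, want = ready.split("/")
        if status != "Running" or have != want:
            return False
    return True

sample = """\
NAMESPACE       NAME                    READY   STATUS    RESTARTS   AGE
kube-system     kube-flannel-ds-lzw8b   1/1     Running   0          3m44s
openshift-dns   dns-default-5q6jw       2/2     Running   0          3m45s
"""
print(all_pods_ready(sample))  # prints: True
```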

Build the MicroShift binary for arm64 on CentOS 8 Stream

We can replace the microshift binary that was downloaded during the install with our own. Let’s build the microshift binary from scratch: clone the microshift repository from GitHub, install golang, run make, and finally move the microshift binary to /usr/bin.

sudo su -

# Install golang
wget https://golang.org/dl/go1.17.2.linux-arm64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.17.2.linux-arm64.tar.gz
rm -f go1.17.2.linux-arm64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
cat << EOF >> /root/.bashrc
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
EOF
mkdir $GOPATH

git clone https://github.com/thinkahead/microshift.git
cd microshift
time make cross-build-linux-arm64
./microshift version
mv microshift /usr/bin/microshift
systemctl restart microshift

Output

root@centos:~# git clone https://github.com/thinkahead/microshift.git
root@centos:~# cd microshift
[root@centos microshift]# time make cross-build-linux-arm64
make _build_local GOOS=linux GOARCH=arm64
make[1]: Entering directory '/root/microshift'
mkdir -p '_output/bin/linux_arm64'
go build -mod=vendor -tags 'include_gcs include_oss containers_image_openpgp gssapi providerless netgo osusergo' -ldflags "-X k8s.io/component-base/version.gitMajor=1 -X k8s.io/component-base/version.gitMajor=1 -X k8s.io/component-base/version.gitMinor=21 -X k8s.io/component-base/version.gitVersion=v1.21.0 -X k8s.io/component-base/version.gitCommit=c3b9e07a -X k8s.io/component-base/version.gitTreeState=clean -X k8s.io/component-base/version.buildDate=2021-12-05T22:41:38Z -X k8s.io/client-go/pkg/version.gitMajor=1 -X k8s.io/client-go/pkg/version.gitMinor=21 -X k8s.io/client-go/pkg/version.gitVersion=v1.21.1 -X k8s.io/client-go/pkg/version.gitCommit=b09a9ce3 -X k8s.io/client-go/pkg/version.gitTreeState=clean -X k8s.io/client-go/pkg/version.buildDate=2021-12-05T22:41:38Z -X github.com/openshift/microshift/pkg/version.versionFromGit=4.8.0-0.microshift-unknown -X github.com/openshift/microshift/pkg/version.commitFromGit=1076beb2 -X github.com/openshift/microshift/pkg/version.gitTreeState=dirty -X github.com/openshift/microshift/pkg/version.buildDate=2021-12-05T22:41:38Z -s -w" -o '_output/bin/linux_arm64/microshift' github.com/openshift/microshift/cmd/microshift
make[1]: Leaving directory '/root/microshift'

real	1m32.908s
user	1m4.960s
sys	0m19.248s 
[root@centos microshift]# ls -las _output/bin/linux_arm64/microshift
147676 -rwxr-xr-x. 1 root root 151218179 Dec  5 22:53 _output/bin/linux_arm64/microshift
[root@centos microshift]# _output/bin/linux_arm64/microshift version
MicroShift Version: 4.8.0-0.microshift-unknown
Base OKD Version: 4.8.0-0.okd-2021-10-10-030117 
root@centos:~/microshift# mv _output/bin/linux_arm64/microshift /usr/local/bin/microshift

Alternatively, we may download the latest microshift binary from GitHub as follows:

ARCH=arm64
export VERSION=$(curl -s https://api.github.com/repos/redhat-et/microshift/releases | grep tag_name | head -n 1 | cut -d '"' -f 4) && \
curl -LO https://github.com/redhat-et/microshift/releases/download/$VERSION/microshift-linux-${ARCH}
chmod +x microshift-linux-${ARCH}
mv microshift-linux-${ARCH} /usr/local/bin/microshift
systemctl restart microshift

Samples to run on MicroShift

We will run a few samples that will show the use of helm, persistent volume, template, SenseHat and the USB camera.

1. Nginx web server with persistent volume

The source code is on GitHub

cd ~/microshift/raspberry-pi/nginx
oc project default

Create the data in /var/hpvolumes/nginx/data1. The data1 subdirectory is required because nginx.yaml uses subPath in its volumeMounts.
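A sketch of what that wiring looks like (the volume and claim names are assumed for illustration; the actual nginx.yaml is in the repository):

```yaml
# Sketch: mount only the data1 subdirectory of the volume into the container
volumeMounts:
  - name: hostpath-vol              # assumed volume name
    mountPath: /usr/share/nginx/html
    subPath: data1                  # serves /var/hpvolumes/nginx/data1
volumes:
  - name: hostpath-vol
    persistentVolumeClaim:
      claimName: hostpath-pvc       # assumed claim name
```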

mkdir -p /var/hpvolumes/nginx/data1/
cp index.html /var/hpvolumes/nginx/data1/.
cp 50x.html /var/hpvolumes/nginx/data1/.

Output

[root@centos nginx]# mkdir -p /var/hpvolumes/nginx/data1/
[root@centos nginx]# cp index.html /var/hpvolumes/nginx/data1/.
[root@centos nginx]# cp 50x.html /var/hpvolumes/nginx/data1/.

If SELinux is set to Enforcing, the /var/hpvolumes path used for creating persistent volumes will give permission-denied errors when the initContainers run. Only files labeled container_file_t are writable by containers. You can either set SELinux to Permissive or relabel /var/hpvolumes.

Option 1 - Set the selinux to Permissive

setenforce 0

Output

[root@centos microshift]# setenforce 0
[root@centos microshift]# getenforce
Permissive
[root@centos microshift]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      31

Option 2 - Relabel the /var/hpvolumes

#semanage fcontext -a -t container_var_lib_t "/var/hpvolumes/*"
semanage fcontext -a -t container_file_t "/var/hpvolumes/*"
restorecon -R -v "/var/hpvolumes/*"
ls -lZ /var/hpvolumes/

Output

[root@centos microshift]# getenforce
Enforcing
[root@centos microshift]# ls -lZ /var/hpvolumes/
total 4
drwxr-xr-x. 3 root root unconfined_u:object_r:unlabeled_t:s0 4096 Dec  6 12:20 nginx
[root@centos microshift]# ls -lZ /var/hpvolumes/nginx/
total 4
drwxr-xr-x. 2 root root unconfined_u:object_r:unlabeled_t:s0 4096 Dec  6 12:20 data1
[root@centos microshift]# semanage fcontext -a -t container_var_lib_t "/var/hpvolumes/nginx"
[root@centos microshift]# restorecon -R -v "/var/hpvolumes/*"
Relabeled /var/hpvolumes/nginx from unconfined_u:object_r:unlabeled_t:s0 to unconfined_u:object_r:container_var_lib_t:s0
Relabeled /var/hpvolumes/nginx/data1 from unconfined_u:object_r:unlabeled_t:s0 to unconfined_u:object_r:container_file_t:s0
Relabeled /var/hpvolumes/nginx/data1/index.html from unconfined_u:object_r:unlabeled_t:s0 to unconfined_u:object_r:container_file_t:s0
Relabeled /var/hpvolumes/nginx/data1/50x.html from unconfined_u:object_r:unlabeled_t:s0 to unconfined_u:object_r:container_file_t:s0
[root@centos microshift]# ls -lZ /var/hpvolumes/
total 4
drwxr-xr-x. 3 root root unconfined_u:object_r:container_var_lib_t:s0 4096 Dec  6 12:20 nginx
[root@centos microshift]# ls -lZ /var/hpvolumes/nginx/
total 4
drwxr-xr-x. 2 root root unconfined_u:object_r:container_file_t:s0 4096 Dec  6 12:20 data1


Now we create the pv, pvc, deployment and service. There will be two replicas sharing the same persistent volume.

oc apply -f hostpathpv.yaml
oc apply -f hostpathpvc.yaml
oc apply -f nginx.yaml

Let’s log in to one of the pods to see the index.html. Also submit a curl request to nginx

oc get pods # replace the pod name below with either of the nginx pod names
oc exec -it nginx-deployment-7b4d76f8d8-vjgzt -- cat /usr/share/nginx/html/index.html
curl localhost:30080 # Will return the standard nginx response from index.html

We can add a file hello in the shared volume from within the container

oc rsh nginx-deployment-7b4d76f8d8-vjgzt
echo "Hello" > /usr/share/nginx/html/hello
curl localhost:30080/hello

Output

[root@centos nginx]# curl localhost:30080/hello
Hello

Change the replicas to 1

oc scale deployment.v1.apps/nginx-deployment --replicas=1

You can test nginx by exposing the nginx-svc as a route. We can delete the deployment and service after we are done.

oc delete -f nginx.yaml

Then, delete the pvc

oc delete -f hostpathpvc.yaml

Now if we want to reuse the pv, we must delete the claimRef from the pv

kubectl edit pv hostpath-provisioner
# Delete the complete claimRef field and save
oc apply -f hostpathpvc.yaml # create a new claim, it will work now
oc apply -f nginx.yaml # create the nginx again
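If you prefer a non-interactive alternative to `kubectl edit`, a JSON patch can remove the claimRef; a sketch that just builds the command (same PV name as above):

```python
import json
import shlex

# JSON Patch operation that removes spec.claimRef from the PV,
# returning it to the Available phase so a new claim can bind.
patch = [{"op": "remove", "path": "/spec/claimRef"}]
cmd = ("kubectl patch pv hostpath-provisioner --type json -p "
       + shlex.quote(json.dumps(patch)))
print(cmd)
```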

Finally, we can clean up the nginx

oc delete -f nginx.yaml
oc delete -f hostpathpvc.yaml
oc delete -f hostpathpv.yaml

2. Nginx web server with template

The source code is on GitHub

cd ~/microshift/raspberry-pi/nginx
oc project default

If you use a different namespace xxxx instead of default, you will need to change /etc/hosts to match nginx-xxxx.cluster.local accordingly. The nginx-template* files use the image docker.io/nginxinc/nginx-unprivileged:alpine. A deployment config does not get processed in MicroShift, so we use a template with a deployment instead.

#oc process -f nginx-template-deploymentconfig.yml | oc apply -f - # deploymentconfig does not work in microshift
oc process -f nginx-template-deployment-8080.yml | oc apply -f - # deployment works in microshift
oc get templates
oc get routes

Add the following to /etc/hosts on the Raspberry Pi 4

127.0.0.1 localhost nginx-default.cluster.local

Then, submit a curl request to nginx

curl nginx-default.cluster.local

To delete nginx, run

oc process -f nginx-template-deployment-8080.yml | oc delete -f -

We can also create the template in MicroShift and process the template by name

# Either of the following two may be used:
oc create -f nginx-template-deployment-8080.yml
#oc create -f nginx-template-deployment-80.yml

oc process nginx-template | oc apply -f -
curl nginx-default.cluster.local
oc process nginx-template | oc delete -f -
oc delete template nginx-template

3. Postgresql database server

The source code is on GitHub

cd ~/microshift/raspberry-pi/pg

Create a new project pg

oc new-project pg
mkdir -p /var/hpvolumes/pg

If SELinux is set to Enforcing, run:

restorecon -R -v "/var/hpvolumes/*"

Output

[root@centos microshift]# restorecon -R -v "/var/hpvolumes/*"
Relabeled /var/hpvolumes/pg from unconfined_u:object_r:unlabeled_t:s0 to unconfined_u:object_r:container_file_t:s0

Create the configmap, pv, pvc and deployment for PostgreSQL

oc create -f pg-configmap.yaml
oc create -f hostpathpv.yaml
oc create -f hostpathpvc.yaml
oc apply -f pg.yaml
oc get configmap
oc get svc pg-svc
oc get all -lapp=pg
oc logs deployment/pg-deployment -f

Output

root@centos:~/microshift/raspberry-pi/pg# oc get all -lapp=pg
NAME                                 READY   STATUS    RESTARTS   AGE
pod/pg-deployment-78cbc9cc88-9rsmk   1/1     Running   0          25s

NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/pg-svc   NodePort   10.43.13.209   

The first time we start PostgreSQL, the logs show it creating the database:

root@centos:~/microshift/raspberry-pi/pg# oc logs pg-deployment-78cbc9cc88-58mms
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:
…
server started
CREATE DATABASE
…
PostgreSQL init process complete; ready for start up.

LOG:  database system is ready to accept connections

Install the postgresql client

yum install postgresql 
#apt-get install postgresql-client

Connect to the database

psql --host localhost --port 30080 --user postgresadmin --dbname postgresdb # test123 as password

Create a table "cities" and insert a couple of rows

CREATE TABLE cities (name varchar(80), location point);
\t
INSERT INTO cities VALUES ('Madison', '(89.40, 43.07)'),('San Francisco', '(-122.43,37.78)');
SELECT * from cities;
\d
\q

Let's delete the pods and then recreate the deployment

oc delete pods -n pg --all

Check that the data still exists

root@raspberrypi:~/microshift/raspberry-pi/pg# ls /var/hpvolumes/pg/data/
base	 pg_commit_ts  pg_ident.conf  pg_notify    pg_snapshots  pg_subtrans  PG_VERSION	    postgresql.conf
global	 pg_dynshmem   pg_logical     pg_replslot  pg_stat	 pg_tblspc    pg_xlog		    postmaster.opts
pg_clog  pg_hba.conf   pg_multixact   pg_serial    pg_stat_tmp	 pg_twophase  postgresql.auto.conf

Let's recreate the deployment and look at the deployment logs. This time it already has a database.

#oc create -f .
oc apply -f pg.yaml
oc logs deployment/pg-deployment -f

Output

PostgreSQL Database directory appears to contain a database; Skipping initialization

LOG:  database system was shut down at 2021-12-04 12:11:13 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections

Now we can connect to the database and look at the cities table

psql --host localhost --port 30080 --user postgresadmin --dbname postgresdb # test123 as password
SELECT * FROM cities;
\q

Output

psql (13.5 (Debian 13.5-0+deb11u1), server 9.6.24)
Type "help" for help.

postgresdb=# SELECT * FROM cities;
     name      |    location
---------------+-----------------
 Madison       | (89.4,43.07)
 San Francisco | (-122.43,37.78)
(2 rows)

Finally, we delete the deployment and project

oc delete -f pg.yaml
oc delete -f pg-configmap.yaml
oc delete -f hostpathpvc.yaml
oc delete -f hostpathpv.yaml
oc delete project pg

4. Sense Hat and USB camera sending to Node Red on IBM Cloud

We will run the same Sense HAT sample as in Part 4. Further, we expand on this in sample 5 below to do object detection.

Let’s install Node Red on IBM Cloud if not already done in earlier Parts. We will use Node Red to show pictures and chat messages sent from the Raspberry Pi 4 when a person is detected. Alternatively, we can use the Node Red that we deployed as an application in MicroShift on the MacBook Pro in VirtualBox in Part 1.

  1. Create an IBM Cloud free tier account at https://www.ibm.com/cloud/free and login to Console (top right).
  2. Create an API Key and save it, Manage->Access->IAM->API Key->Create an IBM Cloud API Key
  3. Click on Catalog and Search for "Node-Red App", select it and click on "Get Started"
  4. Give a unique App name, for example xxxxx-node-red and select the region nearest to you
  5. Select the Pricing Plan Lite, if you already have an existing instance of Cloudant, you may select it in Pricing Plan. Click Create
  6. Under Deployment Automation -> Configure Continuous Delivery, click on "Deploy your app"
  7. Select the deployment target Cloud Foundry, which provides a cost-free tier of 256 MB, or Code Engine. The latter has monthly limits and takes more time to deploy. [Note: Cloud Foundry is deprecated; use the IBM Cloud Code Engine. Any IBM Cloud Foundry application runtime instances running IBM Cloud Foundry applications will be permanently disabled and deprovisioned.]
  8. Enter the IBM Cloud API Key from Step 2, or click on "New" to create one
  9. The rest of the fields Region, Organization, Space will automatically get filled up. Use the default 256MB Memory and click "Next"
  10. In "Configure the DevOps toolchain", click Create
  11. Wait for 10 minutes for the Node Red instance to start
  12. Click on the "Visit App URL"
  13. On the Node Red page, create a new userid and password
  14. In Manage Palette, install the node-red-contrib-image-tools, node-red-contrib-image-output, and node-red-node-base64
  15. Import the Chat flow and the Picture (Image) display flow. On the Chat flow, you will need to edit the template node line 35 to use wss:// (on IBM Cloud) instead of ws:// (on your Laptop)
  16. On another browser tab, start the https://mynodered.mybluemix.net/chat (Replace mynodered with your IBM Cloud Node Red URL)
  17. On the Image flow, click on the square box to the right of image preview or viewer to Deactivate and Activate the Node. You will be able to see the picture when you Activate the Node and run samples below
cd ~
git clone https://github.com/thinkahead/microshift.git
cd microshift/raspberry-pi/object-detection

a. Test directly on the Raspberry Pi 4

pip3 install websocket-client
pip3 install pygame --upgrade # pygame 2.1

Update the URL to your node red instance and run the python code to send to Node Red

sed -i "s|mynodered.mybluemix.net|yournodered.mybluemix.net|" *.py

Use pygame 2.1 to capture images from USB camera

python3 sendimages1.py # Send images to Node Red, Ctrl-C to stop

Output

[root@centos sensehat]# python3 sendimages1.py
pygame 2.1.0 (SDL 2.0.16, Python 3.6.8)
Hello from the pygame community. https://www.pygame.org/contribute.html
ALSA lib pcm_dmix.c:1035:(snd_pcm_dmix_open) unable to open slave
File 101.jpg uploaded
waiting 5 seconds...
File 101.jpg uploaded
waiting 5 seconds...
File 101.jpg uploaded 

Send images and web socket message to Node Red

python3 sendtonodered.py

Output

root@centos:~/microshift/raspberry-pi/sensehat# python3 sendtonodered.py
pygame 2.1.0 (SDL 2.0.16, Python 3.9.2)
Hello from the pygame community. https://www.pygame.org/contribute.html
File 101.jpg uploaded
{"user":"raspberrypi4","message":"1638560972: Temperature: 28.572917938232422 C"}
waiting 5 seconds...
File 101.jpg uploaded
{"user":"raspberrypi4","message":"1638560982: Temperature: 28.625 C"}
waiting 5 seconds...
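The logic of a script like sendtonodered.py can be sketched as follows. This is a minimal reconstruction, assuming the Node Red flows expose the /upload endpoint for images and the /ws/chat websocket used throughout this section; the multipart form field name "myFile" is an assumption about the upload flow, not taken from the actual sample:

```python
import json
import time

def build_chat_message(user, text, ts=None):
    # Mirrors the payload seen in the output above:
    # {"user": "raspberrypi4", "message": "1638560972: Temperature: ... C"}
    ts = int(time.time()) if ts is None else int(ts)
    return json.dumps({"user": user, "message": "%d: %s" % (ts, text)})

def send_to_nodered(image_path, upload_url, ws_url):
    # Sketch only: upload one image, then push one chat message.
    # Requires `pip3 install websocket-client requests`; the form field
    # name "myFile" is a hypothetical name for the Node Red upload flow.
    import requests
    import websocket
    with open(image_path, "rb") as f:
        requests.post(upload_url, files={"myFile": (image_path, f, "image/jpeg")})
    print("File %s uploaded" % image_path)
    ws = websocket.create_connection(ws_url)
    ws.send(build_chat_message("raspberrypi4", "Temperature: 28.5 C"))
    ws.close()
```

The actual scripts additionally loop with a 5 second wait between iterations, as the output shows.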

Use fswebcam to capture images from USB camera

dnf install -y fswebcam
python3 sendimages2.py # Send images to Node Red, Ctrl-C to stop

Output

[root@centos sensehat]# python3 sendimages2.py
--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
No input was specified, using the first.
--- Capturing frame...
GD Warning: gd-jpeg: JPEG library reports unrecoverable error: Not a JPEG file: starts with 0xb5 0x7dCaptured frame in 0.00 seconds.
--- Processing captured image...
Setting output format to PNG, quality 9
Setting subtitle "person".
Writing PNG image to '101.png'.
File 101.png uploaded
waiting 5 seconds...

b. Use a container

Check that we can access the Sense Hat and the camera from a container in podman

dnf install -y podman
podman build -t karve/sensehat .
podman push karve/sensehat
podman run --privileged --name sensehat -e ImageUploadURL=http://yournodered.mybluemix.net/upload -e WebSocketURL=wss://yournodered.mybluemix.net/ws/chat -ti karve/sensehat bash

# Inside the container
python sparkles.py # Tests the Sense Hat's LED matrix
python temperature.py # Get the temperature from Sense Hat’s sensor
python testcam.py # Creates 101.bmp

# Update the URL to your node red instance
sed -i "s|mynodered.mybluemix.net|yournodered.mybluemix.net|" *.py
python sendimages1.py # Ctrl-C to stop
python sendtonodered.py # Ctrl-C to stop
exit

# When we are done, delete the container
podman rm -f sensehat

c. Run in microshift

We will use the podman image we created above to send pictures and web socket chat messages to Node Red using a pod in microshift.

oc apply -f sensehat.yaml

When we are done, we can delete the deployment

oc delete -f sensehat.yaml

5. TensorFlow Lite Python object detection example in MicroShift with SenseHat and Node Red


This example requires the same Node Red setup as in the previous Sample 4.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/object-detection

We will build the image for object detection, which sends pictures and web socket chat messages to Node Red when a person is detected, using a pod in microshift.

podman build -t docker.io/karve/object-detection-raspberrypi4 .
podman push docker.io/karve/object-detection-raspberrypi4:latest
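The container's detection loop boils down to uploading a frame only when a person is detected with sufficient confidence. A hypothetical sketch of that filtering step (the detection dict shape and the 0.5 threshold are illustrative, not taken from the actual sample):

```python
def person_detections(detections, threshold=0.5):
    # Keep only detections labeled "person" at or above the confidence
    # threshold; an empty result means nothing is sent to Node Red.
    return [d for d in detections
            if d["label"] == "person" and d["score"] >= threshold]
```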

a. Use a container

Check that we can access the Sense Hat and the camera from a container in podman

podman run -d --privileged --name object-detection -e ImageUploadURL=http://yournodered.mybluemix.net/upload -e WebSocketURL=wss://yournodered.mybluemix.net/ws/chat docker.io/karve/object-detection-raspberrypi4:latest

# When we are done, delete the container
podman rm -f object-detection

b. Use microshift

sed -i "s|mynodered.mybluemix.net|yournodered.mybluemix.net|" *.yaml
oc apply -f object-detection.yaml

We will see pictures being sent to Node Red when a person is detected. When we are done testing, we can delete the deployment

oc delete -f object-detection.yaml

Cleanup Microshift

We can use the script available on github to remove the pods and images. If you already cloned the microshift repo from github, you have the script in the ~/microshift/hack directory.

wget https://raw.githubusercontent.com/thinkahead/microshift/main/hack/cleanup.sh
./cleanup.sh

Output

[root@centos hack]# ./cleanup.sh
DATA LOSS WARNING: Do you wish to stop and cleanup ALL MicroShift data AND cri-o container workloads?
1) Yes
2) No
#? 1
Stopping microshift
Removing crio pods
Removing crio containers
Removing crio images
Killing conmon, pause processes
Removing /var/lib/microshift
Cleanup succeeded

Containerized MicroShift on CentOS 8 Stream

We can run MicroShift within containers in two ways:

  1. MicroShift Containerized – The MicroShift binary runs in a Podman container, CRI-O Systemd service runs directly on the host and data is stored at /var/lib/microshift and /var/lib/kubelet on the host VM.
  2. MicroShift Containerized All-In-One – The MicroShift binary and CRI-O service run within a Docker container and data is stored in a docker volume, microshift-data. This should be used for “Testing and Development” only.

Check if golang is installed

[root@centos ~]# go version
go version go1.17.2 linux/arm64

If not, install golang

wget https://golang.org/dl/go1.17.2.linux-arm64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.17.2.linux-arm64.tar.gz
rm -f go1.17.2.linux-arm64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
cat << 'EOF' >> /root/.bashrc
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
EOF
mkdir $GOPATH

Microshift Containerized

We can either use the prebuilt image or build the image using docker or podman.

To use the prebuilt image, set

IMAGE=quay.io/microshift/microshift:4.8.0-0.microshift-2021-11-19-115908-linux-arm64
podman pull $IMAGE

To build the image, clone the microshift repository from github and run make

git clone https://github.com/thinkahead/microshift.git
cd microshift 

# Edit the packaging/images/microshift/Dockerfile. Replace the go-toolset with go-toolset:1.16.7-5
-FROM registry.access.redhat.com/ubi8/go-toolset as builder
+FROM registry.access.redhat.com/ubi8/go-toolset:1.16.7-5 as builder

# This will create the image quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-arm64
make build-containerized-cross-build-linux-arm64 -e FROM_SOURCE=true

The Dockerfile uses the registry.access.redhat.com/ubi8/go-toolset:1.16.7-5 as builder to build the microshift binary from source. Then, it copies the binary to the registry.access.redhat.com/ubi8/ubi-minimal:8.4 that is used for the run stage. To build from source, it takes around 18 minutes on the Raspberry Pi 4. To build using the available binary, it takes about 3 minutes. Output from the build:

[root@centos microshift]# time make build-containerized-cross-build-linux-arm64 -e FROM_SOURCE=true
make _build_containerized ARCH=arm64
make[1]: Entering directory '/root/microshift'
echo BIN_TIMESTAMP==2021-12-05T11:43:11Z
BIN_TIMESTAMP==2021-12-05T11:43:11Z
/usr/bin/podman build -t quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-arm64 \
	-f "/root/microshift"/packaging/images/microshift/Dockerfile \
	--build-arg SOURCE_GIT_TAG=4.8.0-0.microshift-unknown \
	--build-arg BIN_TIMESTAMP=2021-12-05T11:43:11Z \
	--build-arg ARCH=arm64 \
	--build-arg MAKE_TARGET="cross-build-linux-arm64" \
	--build-arg FROM_SOURCE=true \
	--platform="linux/arm64" \
	.
[1/2] STEP 1/12: FROM registry.access.redhat.com/ubi8/go-toolset:1.16.7-5 AS builder
Trying to pull registry.access.redhat.com/ubi8/go-toolset:1.16.7-5...
…
mkdir -p '_output/bin/linux_arm64'
go build -mod=vendor -tags 'include_gcs include_oss containers_image_openpgp gssapi providerless netgo osusergo' -ldflags "-X k8s.io/component-base/version.gitMajor=1 -X k8s.io/component-base/version.gitMajor=1 -X k8s.io/component-base/version.gitMinor=21 -X k8s.io/component-base/version.gitVersion=v1.21.0 -X k8s.io/component-base/version.gitCommit=c3b9e07a -X k8s.io/component-base/version.gitTreeState=clean -X k8s.io/component-base/version.buildDate=2021-12-05T11:43:11Z -X k8s.io/client-go/pkg/version.gitMajor=1 -X k8s.io/client-go/pkg/version.gitMinor=21 -X k8s.io/client-go/pkg/version.gitVersion=v1.21.1 -X k8s.io/client-go/pkg/version.gitCommit=b09a9ce3 -X k8s.io/client-go/pkg/version.gitTreeState=clean -X k8s.io/client-go/pkg/version.buildDate=2021-12-05T11:43:11Z -X github.com/openshift/microshift/pkg/version.versionFromGit=4.8.0-0.microshift-unknown -X github.com/openshift/microshift/pkg/version.commitFromGit=3370de8b -X github.com/openshift/microshift/pkg/version.gitTreeState=dirty -X github.com/openshift/microshift/pkg/version.buildDate=2021-12-05T11:43:11Z -s -w" -o '_output/bin/linux_arm64/microshift' github.com/openshift/microshift/cmd/microshift
make[1]: Leaving directory '/opt/app-root/src/github.com/redhat-et/microshift'
--> 3763e352fbd
[2/2] STEP 1/6: FROM registry.access.redhat.com/ubi8/ubi-minimal:8.4
Trying to pull registry.access.redhat.com/ubi8/ubi-minimal:8.4...
…
[2/2] STEP 3/6: RUN microdnf install -y     policycoreutils-python-utils     iptables     && microdnf clean all
…
[2/2] STEP 4/6: COPY --from=builder /opt/app-root/src/github.com/redhat-et/microshift/_output/bin/linux_$ARCH/microshift /usr/bin/microshift
--> 5b47d45221c
[2/2] STEP 5/6: ENTRYPOINT ["/usr/bin/microshift"]
--> 5628d041ac6
[2/2] STEP 6/6: CMD ["run"]
[2/2] COMMIT quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-arm64
--> 208c35e8e21
Successfully tagged quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-arm64
208c35e8e214ffb1eac69f65ed57b73dd51f1d2975b58119a3960f12ba085908
make[1]: Leaving directory '/root/microshift'

real	17m56.366s
user	35m23.853s
sys	8m31.931s

Set the IMAGE to the one we just built above

IMAGE=quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-arm64

Run the microshift container

podman run --rm --ipc=host --network=host --privileged -d --name microshift -v /var/run:/var/run -v /sys:/sys:ro -v /var/lib:/var/lib:rw,rshared -v /lib/modules:/lib/modules -v /etc:/etc -v /run/containers:/run/containers -v /var/log:/var/log -e KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig $IMAGE
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "podman ps;kubectl get nodes;kubectl get pods -A"

The output shows the microshift container running using podman and the rest of the pods in crio

podman ps

CONTAINER ID  IMAGE                                                                 COMMAND     CREATED         STATUS             PORTS       NAMES
ecda1376a7b4  quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-arm64  run         14 minutes ago  Up 15 minutes ago              microshift

kubectl get nodes

NAME                 STATUS   ROLES    AGE   VERSION
centos.example.com   Ready    

kubectl get pods -A

NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                     kube-flannel-ds-fxp9s                 1/1     Running   0          13m
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-vm4x6   1/1     Running   0          13m
openshift-dns                   dns-default-pqnd2                     2/2     Running   0          13m
openshift-dns                   node-resolver-qn2lh                   1/1     Running   0          13m
openshift-ingress               router-default-85bcfdd948-8bvbt       1/1     Running   0          13m
openshift-service-ca            service-ca-76674bfb58-xs8vh           1/1     Running   0          13m
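All pods should eventually show STATUS Running with every container ready (1/1 or 2/2). As a convenience, this can be checked by parsing the output above; a small sketch, not part of MicroShift:

```python
def all_pods_running(kubectl_output):
    # Parse `kubectl get pods -A` output: for each pod, the READY column
    # must be n/n and the STATUS column must be Running.
    for line in kubectl_output.strip().splitlines()[1:]:
        fields = line.split()
        ready, status = fields[2], fields[3]
        have, want = ready.split("/")
        if status != "Running" or have != want:
            return False
    return True
```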

The containers run within cri-o on the host

[root@centos microshift]# crictl pods
POD ID         CREATED         STATE NAME                                  NAMESPACE                       ATTEMPT   RUNTIME
c23ab9cb435c3  16 minutes ago  Ready router-default-85bcfdd948-8bvbt       openshift-ingress               0         (default)
597b5b997379e  17 minutes ago  Ready dns-default-pqnd2                     openshift-dns                   0         (default)
6a40defbba385  18 minutes ago  Ready service-ca-76674bfb58-xs8vh           openshift-service-ca            0         (default)
94a869e9eb4b7  18 minutes ago  Ready kubevirt-hostpath-provisioner-vm4x6   kubevirt-hostpath-provisioner   0         (default)
493452ba05fd3  19 minutes ago  Ready kube-flannel-ds-fxp9s                 kube-system                     0         (default)
ecd070183e649  19 minutes ago  Ready node-resolver-qn2lh                   openshift-dns                   0         (default)

Now, we can run the samples shown earlier.

After we are done, we can delete the microshift container. The --rm we used in the podman run will delete the container when we stop it.

podman stop microshift

MicroShift Containerized All-In-One (with Docker)

Let’s stop crio on the host; we will be creating an all-in-one container in docker that has crio running within the container.

systemctl stop crio

Remove podman, instead install docker with containerd

yum remove -y podman
yum install -y yum-utils
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker

To build the all-in-one image, clone the microshift repository from github and run make

git clone https://github.com/thinkahead/microshift.git
cd microshift 

# Edit the packaging/images/microshift-aio/Dockerfile. Replace the go-toolset with go-toolset:1.16.7-5
-FROM registry.access.redhat.com/ubi8/go-toolset as builder
+FROM registry.access.redhat.com/ubi8/go-toolset:1.16.7-5 as builder

# This will create the image quay.io/microshift/microshift-aio:4.8.0-0.microshift-unknown-linux-nft-arm64
make build-containerized-all-in-one-arm64

The Dockerfile uses the registry.access.redhat.com/ubi8/go-toolset:1.16.7-5 as builder to get the microshift binary. Then, it copies the microshift binary, packaging files and downloads kubectl to the registry.access.redhat.com/ubi8/ubi-init:8.4 that is used for the run stage. It finally installs the cri-o and dependencies within the image. Output from the build:

root@centos:~/microshift# time make build-containerized-all-in-one-arm64
make _build_containerized_aio ARCH=arm64
make[1]: Entering directory '/root/microshift'
echo BIN_TIMESTAMP==2021-12-04T13:18:37Z
BIN_TIMESTAMP==2021-12-04T13:18:37Z
/usr/bin/docker build -t quay.io/microshift/microshift-aio:4.8.0-0.microshift-unknown-linux-nft-arm64 \
	-f "/root/microshift"/packaging/images/microshift-aio/Dockerfile \
	--build-arg SOURCE_GIT_TAG=4.8.0-0.microshift-unknown \
	--build-arg BIN_TIMESTAMP=2021-12-04T13:18:37Z \
	--build-arg ARCH=arm64 \
	--build-arg MAKE_TARGET="cross-build-linux-arm64" \
	--build-arg FROM_SOURCE=false \
	--build-arg IPTABLES=nft \
	--platform="linux/arm64" \
	.
Sending build context to Docker daemon  563.3MB
Step 1/29 : ARG IMAGE_NAME=registry.access.redhat.com/ubi8/ubi-init:8.4
Step 2/29 : FROM registry.access.redhat.com/ubi8/go-toolset:1.16.7-5 as builder
…
Step 13/29 : RUN if [ "$FROM_SOURCE" == "true" ]; then       make clean $MAKE_TARGET SOURCE_GIT_TAG=$SOURCE_GIT_TAG BIN_TIMESTAMP=$BIN_TIMESTAMP &&       mv _output/bin/linux_$ARCH/microshift microshift;     else       export VERSION=$(curl -s https://api.github.com/repos/redhat-et/microshift/releases | grep tag_name | head -n 1 | cut -d '"' -f 4) &&       curl -LO https://github.com/redhat-et/microshift/releases/download/$VERSION/microshift-linux-$ARCH &&       mv microshift-linux-$ARCH microshift;     fi
…
Step 14/29 : FROM ${IMAGE_NAME}
8.4: Pulling from ubi8/ubi-init
…
Step 17/29 : ENV BUILD_PATH=packaging/images/microshift-aio
 ---> Running in fa3f914645a6
Removing intermediate container fa3f914645a6
 ---> 66724c255163
Step 18/29 : COPY --from=builder /opt/app-root/src/github.com/redhat-et/microshift/microshift /usr/local/bin/microshift
 ---> 865afaa572e7
Step 19/29 : COPY $BUILD_PATH/unit /usr/lib/systemd/system/microshift.service
 ---> 3e78aa44b735
Step 20/29 : COPY $BUILD_PATH/kubelet-cgroups.conf /etc/systemd/system.conf.d/kubelet-cgroups.conf
 ---> 3622d488a598
Step 21/29 : COPY $BUILD_PATH/crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf
 ---> 33e9a5457057
Step 22/29 : RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$ARCH/kubectl" &&     chmod +x ./kubectl &&     mv ./kubectl /usr/local/bin/kubectl
 ---> Running in e0eb7c669c2e
…
Step 25/29 : RUN dnf install -y cri-o         cri-tools         iproute         procps-ng &&     dnf clean all
 ---> Running in 5598191b9295
…
Successfully built c0850658b7b9
Successfully tagged quay.io/microshift/microshift-aio:4.8.0-0.microshift-unknown-linux-nft-arm64
make[1]: Leaving directory '/root/microshift'

real	12m20.263s
user	9m30.934s
sys	3m40.841s

Create a new docker volume microshift-data and run the microshift container. If you do not create the volume, docker will create the named volume automatically.

docker volume rm microshift-data;docker volume create microshift-data
docker run -d --rm --name microshift -h microshift.example.com --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-unknown-linux-nft-arm64

Now login to the microshift container and see the pods created using crio within the container, not directly on the host.

docker exec -it microshift bash
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "kubectl get nodes;kubectl get pods -A;crictl images;crictl pods"

Output

root@centos:~/microshift# docker exec -it microshift bash
[root@microshift /]# export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
[root@microshift /]# watch "kubectl get nodes;kubectl get pods -A;crictl images;crictl pods"

NAME                     STATUS   ROLES    AGE   VERSION
microshift.example.com   Ready    

We exit back to the host and check that the microshift container is still running within docker

root@centos:~/microshift# docker ps -a
CONTAINER ID   IMAGE                                                                          COMMAND                  CREATED          STATUS          PORTS                                       NAMES
ac6cc94bc6d5   quay.io/microshift/microshift-aio:4.8.0-0.microshift-unknown-linux-nft-arm64   "/sbin/init"             38 minutes ago   Up 37 minutes   0.0.0.0:6443->6443/tcp, :::6443->6443/tcp   microshift

We can inspect the microshift-data volume to find the path

docker volume inspect microshift-data

Output

[root@centos microshift]# docker volume inspect microshift-data
[
    {
        "CreatedAt": "2021-12-06T16:04:32Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/microshift-data/_data",
        "Name": "microshift-data",
        "Options": null,
        "Scope": "local"
    }
]
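The Mountpoint can also be extracted programmatically. A small sketch that pulls the kubeconfig path out of the `docker volume inspect` JSON (fed here from a captured string; in practice you would pipe in the command's output):

```python
import json

def kubeconfig_from_inspect(inspect_json):
    # `docker volume inspect` returns a JSON list with one object per
    # volume; the kubeconfig lives under the volume's Mountpoint.
    mountpoint = json.loads(inspect_json)[0]["Mountpoint"]
    return mountpoint + "/microshift/resources/kubeadmin/kubeconfig"
```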

On the host, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above

export KUBECONFIG=/var/lib/docker/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "kubectl get nodes;kubectl get pods -A"

Output

NAME                     STATUS   ROLES    AGE   VERSION
microshift.example.com   Ready    

The following patch may be required if the dns-default pod in the openshift-dns namespace keeps restarting.

oc patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","resources": {"requests": {"cpu": "80m","memory": "90Mi"}}}]}}}}'

You may also need to patch the service-ca deployment if it keeps restarting:

oc patch deployments/service-ca -n openshift-service-ca -p '{"spec": {"template": {"spec": {"containers": [{"name": "service-ca-controller","args": ["-v=4"]}]}}}}'

Now, we can run the samples shown earlier.

After we are done, we can delete the microshift container. The --rm we used in the docker run will delete the container when we stop it.

docker stop microshift
docker volume rm microshift-data

After it is stopped, we can run the cleanup.sh as in previous section.

To remove Docker and reinstall Podman

dnf remove -y docker-ce containerd.io
dnf install -y podman

MicroShift Containerized All-In-One (with Podman)

After stopping crio, we can also build and run the all-in-one microshift in podman or use prebuilt images. I had to mount /sys/fs/cgroup in the podman run command; the “sudo setsebool -P container_manage_cgroup true” did not work. We can volume mount /sys/fs/cgroup into the container using -v /sys/fs/cgroup:/sys/fs/cgroup:ro. This mounts /sys/fs/cgroup into the container read-only, but the subdirectories/mount points under it are mounted read/write.

systemctl stop crio

podman volume rm microshift-data;podman volume create microshift-data

#cd ~/microshift
#make build-containerized-all-in-one-arm64
#podman run -d --rm --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-unknown-linux-nft-arm64
podman run -d --rm --name microshift -h microshift.example.com --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /lib/modules:/lib/modules -v microshift-data:/var/lib -v /var/hpvolumes:/var/hpvolumes -p 6443:6443 -p 8080:8080 -p 80:80 quay.io/microshift/microshift-aio:4.8.0-0.microshift-2021-10-10-030117-3-ga424399-linux-nft-arm64

We can inspect the microshift-data volume to find the path

[root@centos pg]# podman volume inspect microshift-data
[
    {
        "Name": "microshift-data",
        "Driver": "local",
        "Mountpoint": "/var/lib/containers/storage/volumes/microshift-data/_data",
        "CreatedAt": "2021-12-06T22:12:52.987954711Z",
        "Labels": {},
        "Scope": "local",
        "Options": {}
    }
]

On the host raspberry pi, we set KUBECONFIG to point to the kubeconfig on the data volume at the Mountpoint from above

export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
# crio on host is stopped, so we do not run crictl commands on host
watch "kubectl get nodes;kubectl get pods -A"

The crio service is stopped on the Raspberry Pi, so the crictl command will not work on the Pi. You will get the error:

time="2021-12-08T14:32:02Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/crio/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"

The crictl commands will work within the microshift container in podman.

Let’s try the nginx sample. Still on the host Raspberry Pi, copy the index.html to /var/hpvolumes/nginx/data1/ as shown in Sample 1 earlier, and then run the nginx sample with

cd ~/microshift/raspberry-pi/nginx
oc apply -f hostpathpv.yaml -f hostpathpvc.yaml -f nginx.yaml
oc expose svc nginx-svc

Then, login to the microshift all-in-one container

podman exec -it microshift bash

Within the container, get the IP address of the node with

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
kubectl get nodes -o wide

Output

[root@microshift /]# kubectl get nodes -o wide
NAME                     STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                               KERNEL-VERSION     CONTAINER-RUNTIME
microshift.example.com   Ready    

Add the IP address with nginx-svc-default.cluster.local to /etc/hosts within the microshift container

10.88.0.6       microshift.example.com microshift nginx-svc-default.cluster.local

Now within the microshift container, you can give a request to nginx

curl nginx-svc-default.cluster.local

On the host Raspberry Pi, add the IP address of the Raspberry Pi (or 127.0.0.1) with nginx-svc-default.cluster.local to /etc/hosts

127.0.0.1 localhost.localdomain localhost nginx-svc-default.cluster.local

Now on the host Raspberry Pi, you can give a request to nginx

curl nginx-svc-default.cluster.local

Since the name server cannot resolve the hostname in the route, you can give the command as follows instead of modifying /etc/hosts:

curl -H "Host: nginx-svc-default.cluster.local" 10.88.0.6
or
curl --resolv nginx-svc-default.cluster.local:80:10.88.0.6 http://nginx-svc-default.cluster.local
curl --resolv nginx-svc-default.cluster.local:30080:10.88.0.6 http://nginx-svc-default.cluster.local:30080

The --resolv provides a custom address for a specific host and port pair.
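The same Host-header trick can be done from Python; a sketch using only the standard library (the IP and hostname are the examples from above):

```python
import http.client

def get_with_host_override(ip, host, port=80, path="/"):
    # Equivalent of `curl -H "Host: <host>" <ip>`: connect to the IP
    # directly but present the route's hostname in the Host header, so
    # the OpenShift router can match the route without /etc/hosts edits.
    conn = http.client.HTTPConnection(ip, port, timeout=5)
    try:
        conn.request("GET", path, headers={"Host": host})
        resp = conn.getresponse()
        return resp.status, resp.read().decode(errors="replace")
    finally:
        conn.close()

# Example (against the microshift container from this section):
# status, body = get_with_host_override("10.88.0.6", "nginx-svc-default.cluster.local")
```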

Alternatively, to avoid modifying /etc/hosts, you can use nip.io with the IP address of your microshift container. nip.io is a free service that maps any IP address to a hostname in certain formats. In the commands below, replace 10.88.0.6 in nginx.10.88.0.6.nip.io with the IP address of your microshift container.

[root@centos nginx]# podman inspect microshift | grep IPAddress
            "IPAddress": "10.88.0.6", 
[root@centos nginx]# oc delete route nginx-svc # delete old route, we will create one with new hostname
route.route.openshift.io "nginx-svc" deleted
[root@centos nginx]# oc expose svc nginx-svc --hostname=nginx.10.88.0.6.nip.io
route.route.openshift.io/nginx-svc exposed
[root@centos nginx]# podman exec -it microshift bash
[root@microshift /]# curl nginx.10.88.0.6.nip.io
…
<title>Welcome to nginx from MicroShift!</title>
…
[root@microshift /]# exit
exit

This will allow you to give curl commands from the Raspberry Pi, microshift container in podman and from the nginx pods in crio within the microshift container.

Instead of the microshift container IP, you can use your host Raspberry Pi IP address as shown in the snippet below. In my case it is 192.168.1.239. Now on the host Raspberry Pi,

oc delete route nginx-svc
oc expose svc nginx-svc --hostname=nginx.192.168.1.239.nip.io --name=nginx-route

The new route inherits the name from the service unless you specify one using the --name option. You can now give the curl request from your Mac or from any machine that has access to the ip address of the raspberry pi, even from within the microshift container or the pod within the microshift container.

curl nginx.192.168.1.239.nip.io

Output

MBP:~ karve$ # On my Laptop
MBP:~ karve$ curl nginx.192.168.1.239.nip.io
…
Welcome to nginx from MicroShift!
…
MBP:~ karve$ ssh root@192.168.1.239 # Logging onto the Raspberry Pi
[root@centos nginx]# curl nginx.192.168.1.239.nip.io # On Raspberry Pi
…
Welcome to nginx from MicroShift!
…
[root@centos nginx]# podman exec -it microshift bash # On microshift container in podman
[root@microshift /]# curl nginx.192.168.1.239.nip.io
… 
Welcome to nginx from MicroShift!
…
[root@microshift /]# export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
[root@microshift /]# kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-7f888f8ff7-45h5z   1/1     Running   0          96m
nginx-deployment-7f888f8ff7-x62td   1/1     Running   0          96m
[root@microshift /]# kubectl exec -it nginx-deployment-7f888f8ff7-45h5z -- sh # within one of the two nginx containers in crio
Defaulted container "nginx" out of: nginx, volume-permissions (init)
/ $ curl nginx.192.168.1.239.nip.io
…
Welcome to nginx from MicroShift!
…
/ $ exit # Exit out of nginx container in crio
[root@microshift /]# exit # Exit out of microshift container in podman
[root@centos nginx]# 
[root@centos nginx]# exit
logout
Connection to 192.168.1.239 closed.
MBP:~ karve$

Cleanup the nginx

oc delete -f hostpathpv.yaml -f hostpathpvc.yaml -f nginx.yaml 


We can run the postgresql sample in a similar way. The sensehat and the object-detection samples also work through crio within the all-in-one podman container.

Conclusion

In this Part 5, we saw multiple options to build and run MicroShift on the Raspberry Pi 4 with CentOS 8 Stream (64 bit). We ran samples that used a template, a persistent volume for postgresql, the Sense Hat, and the USB camera. We deployed an object detection sample that sent pictures and web socket messages to Node Red on IBM Cloud when a person was detected. In Part 6, we will deploy MicroShift on the Raspberry Pi 4 with Ubuntu 20.04 (64 bit). In Part 8, we will look at the All-In-One install of MicroShift on balenaOS. In Part 10, we will deploy MicroShift on Fedora IoT.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift on ARM devices and if you would like to see something covered in more detail.

#MicroShift #Openshift #containers #crio #Edge #node-red #raspberry-pi #centos