Infrastructure as a Service


MicroShift – Part 23: Raspberry Pi 4 with Kata Containers on Fedora 36

By Alexei Karve posted Sat July 30, 2022 04:19 PM

  

MicroShift and Kata Containers on Raspberry Pi 4 with multiple editions of Fedora 36

Introduction

MicroShift is a research project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small-form-factor devices and edge computing. In Part 10, Part 11, Part 12, and Part 21 we deployed MicroShift and KubeVirt on Fedora IoT, Fedora Server, Fedora CoreOS and Fedora 36 Silverblue respectively. Multiple editions of Fedora are currently available: Workstation for your laptop or desktop computer, Server for bare metal or the cloud, CoreOS as a minimal OS focused on running containerized workloads, Silverblue and Kinoite as immutable desktops (Silverblue ships with GNOME and Kinoite with KDE Plasma), and IoT providing a foundation for Internet of Things and Device Edge ecosystems. In this Part 23, we will install and use Kata Containers with MicroShift on these aforementioned editions of Fedora 36. We will test Kata Containers with pods for the alpine, busybox and nginx images. We will also run the InfluxDB sample with a deployment containing multiple Kata containers and show SenseHat metrics in Grafana.

Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers but provide the workload isolation and security advantages of VMs (Source: Kata Containers Website). With Kata Containers, each container or container pod is launched into a lightweight VM with its own unique kernel instance. Since each container/pod is now running in its own VM, malicious code can no longer exploit the shared kernel to access neighboring containers. Kubernetes CRI (Container Runtime Interface) implementations allow using any OCI-compatible runtime with Kubernetes, such as the Kata Containers runtime. Kata Containers supports both the CRI-O and CRI-containerd CRI implementations. RuntimeClass is a feature for selecting the container runtime configuration, which is then used to run a Pod's containers.
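A RuntimeClass is a small cluster-scoped object whose handler names a runtime configured in the CRI implementation. As a sketch (the actual kata-runtimeclass.yaml applied later in this blog may differ in detail), it could look like:

```yaml
# Sketch of a RuntimeClass for Kata; the handler must match the
# runtime name configured in CRI-O ([crio.runtime.runtimes.kata]).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
```

Pods then opt into the Kata runtime by setting runtimeClassName: kata in their spec.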

Install Kata Containers on Fedora 36 Server, IoT, Silverblue and CoreOS

The following instructions assume that you have followed the previous blogs and installed the relevant edition of Fedora with MicroShift on your Raspberry Pi 4. In this blog, some commands appear in multiple variants, each marked with a comment (a hash followed by the edition). Choose the command that matches your Fedora edition.

Update to Fedora 36 latest

rpm-ostree upgrade # Fedora Silverblue, Fedora IoT

rpm-ostree upgrade --bypass-driver # CoreOS to bypass zincati
sudo dnf -y update # Fedora Server, Workstation

Check the cgroup version and change to cgroup v1

mount | grep cgroup

Output shows cgroup2

[root@microshift ~]# mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)

cgroup v2 is not yet supported by Kata Containers; refer to issue 3038. We set the following kernel arguments one time:

rpm-ostree kargs --append="systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller" # Fedora Silverblue, IoT, CoreOS
systemctl reboot

grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller" # Fedora Server
# Use --remove-args if you want to delete the arguments and --update-kernel=ALL to apply to all kernels
reboot

Output:

[root@microshift ~]# rpm-ostree kargs --append="systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller"
Checking out tree …
[root@microshift ~]# systemctl reboot

After reboot, ssh to the Raspberry Pi 4

[root@microshift ~]# mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,size=4096k,nr_inodes=1024,mode=755,inode64)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/misc type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,misc)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)

Alternatively, instead of appending the kernel arguments with unified_cgroup_hierarchy=0, you may run the following after every reboot

mkdir /sys/fs/cgroup/systemd
mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

Install Kata Containers

rpm-ostree install kata-containers # Fedora Silverblue, IoT and CoreOS

sudo -E dnf -y install kata-containers # Fedora Server

If you get the following error on Silverblue saying that group 'qemu' does not exist, even though it does exist:

Running %post for qemu-common: bwrap(/bin/sh): Child process killed by signal 6; run `journalctl -t 'rpm-ostree(qemu-common.post)'` for more information

Output:

[root@microshift ~]# journalctl -t 'rpm-ostree(qemu-common.post)'
Jul 22 12:18:48 microshift.example.com rpm-ostree(qemu-common.post)[2774]: groupadd: lock /etc/group.lock already used by PID 5
Jul 22 12:18:48 microshift.example.com rpm-ostree(qemu-common.post)[2774]: groupadd: cannot lock /etc/group; try again later.
Jul 22 12:18:48 microshift.example.com rpm-ostree(qemu-common.post)[2776]: useradd: group 'qemu' does not exist 

Refer to the issue and workaround. The group is present in the /usr/lib/group file instead of /etc/group, which is not expected by shadow-utils.
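The workaround below copies the qemu entry from /usr/lib/group into /etc/group with grep and tee -a. A self-contained reproduction of the same pattern against scratch files (sample contents, not the real group files):

```shell
#!/bin/sh
# Reproduce the grep | tee -a pattern with scratch files standing in
# for /usr/lib/group and /etc/group (sample contents for illustration).
printf 'qemu:x:107:\nkvm:x:36:\n' > /tmp/usr-lib-group.sample
printf 'root:x:0:\n'              > /tmp/etc-group.sample

# Copy the qemu entry over; tee echoes it to stdout as in the blog
grep -E '^qemu:' /tmp/usr-lib-group.sample | tee -a /tmp/etc-group.sample

grep -c '^qemu:' /tmp/etc-group.sample   # prints 1: the entry is now present
```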

[root@microshift ~]# getent group | grep qemu
[root@microshift ~]# grep -E '^qemu:' /usr/lib/group | sudo tee -a /etc/group
qemu:x:107:
[root@microshift ~]# rpm-ostree install kata-containers
Checking out tree c6af56c... done
Enabled rpm-md repositories: fedora-cisco-openh264 updates fedora updates-modular fedora-modular copr:copr.fedorainfracloud.org:group_redhat-et:microshift updates-archive
Importing rpm-md... done
rpm-md repo 'fedora-cisco-openh264' (cached); generated: 2022-04-07T16:52:38Z solvables: 4
rpm-md repo 'updates' (cached); generated: 2022-07-21T16:33:49Z solvables: 14470
rpm-md repo 'fedora' (cached); generated: 2022-05-04T21:15:55Z solvables: 58687
rpm-md repo 'updates-modular' (cached); generated: 2022-06-14T02:11:42Z solvables: 1151
rpm-md repo 'fedora-modular' (cached); generated: 2022-05-04T21:11:12Z solvables: 822
rpm-md repo 'copr:copr.fedorainfracloud.org:group_redhat-et:microshift' (cached); generated: 2022-04-21T04:36:37Z solvables: 6
rpm-md repo 'updates-archive' (cached); generated: 2022-07-22T05:51:08Z solvables: 21612
Resolving dependencies... done
Checking out packages... done
Running pre scripts... done
Running post scripts... done
Running posttrans scripts... done
Writing rpmdb... done
Writing OSTree commit... done
Staging deployment... done
Freed: 47.7 MB (pkgcache branches: 0)
Added:
  busybox-1:1.35.0-4.fc36.aarch64
  kata-containers-2.3.3-2.fc36.1.aarch64
  musl-filesystem-1.2.3-1.fc36.aarch64
  musl-libc-1.2.3-1.fc36.aarch64
Changes queued for next boot. Run "systemctl reboot" to start a reboot
[root@microshift ~]# systemctl reboot

Log in and run kata-runtime kata-check

[root@microshift ~]# kata-runtime kata-check --verbose
ERRO[0000] /usr/share/kata-containers/defaults/configuration.toml: file /var/cache/kata-containers/vmlinuz.container does not exist  arch=arm64 name=kata-runtime pid=1480 source=runtime
/usr/share/kata-containers/defaults/configuration.toml: file /var/cache/kata-containers/vmlinuz.container does not exist

If you get the error above, restart the kata-osbuilder-generate service (not enabled by default). This creates the required kernel image and modules in /var/cache/kata-containers.

systemctl restart kata-osbuilder-generate.service
kata-collect-data.sh

Output:

[root@microshift /]# systemctl restart kata-osbuilder-generate.service
[root@microshift /]# ls /var/cache/kata-containers
kata-containers-initrd.img  osbuilder-images  vmlinuz.container
[root@microshift /]# kata-runtime kata-check
WARN[0000] Not running network checks as super user      arch=arm64 name=kata-runtime pid=5366 source=runtime
System is capable of running Kata Containers
System can currently create Kata Containers 
[root@microshift ~]# kata-runtime kata-check --verbose
WARN[0000] Not running network checks as super user      arch=arm64 name=kata-runtime pid=4615 source=runtime
INFO[0000] Unable to know if the system is running inside a VM  arch=arm64 source=virtcontainers
INFO[0000] kernel property found                         arch=arm64 description="Host kernel accelerator for virtio network" name=vhost_net pid=4615 source=runtime type=module
INFO[0000] kernel property found                         arch=arm64 description="Host Support for Linux VM Sockets" name=vhost_vsock pid=4615 source=runtime type=module
INFO[0000] kernel property found                         arch=arm64 description="Kernel-based Virtual Machine" name=kvm pid=4615 source=runtime type=module
INFO[0000] kernel property found                         arch=arm64 description="Host kernel accelerator for virtio" name=vhost pid=4615 source=runtime type=module
System is capable of running Kata Containers
INFO[0000] device available                              arch=arm64 check-type=full device=/dev/kvm name=kata-runtime pid=4615 source=runtime
INFO[0000] feature available                             arch=arm64 check-type=full feature=create-vm name=kata-runtime pid=4615 source=runtime
INFO[0000] kvm extension is supported                    arch=arm64 description="Maximum IPA shift supported by the host" id=165 name=KVM_CAP_ARM_VM_IPA_SIZE pid=4615 source=runtime type="kvm extension"
INFO[0000] IPA limit size: 44 bits.                      arch=arm64 name=KVM_CAP_ARM_VM_IPA_SIZE pid=4615 source=runtime type="kvm extension"
System can currently create Kata Containers 
[root@microshift /]# kata-runtime kata-env
[root@microshift ~]# kata-runtime kata-env | awk -v RS= '/\[Hypervisor\]/' # Determine currently configured hypervisor
[Hypervisor]
  MachineType = "virt"
  Version = "QEMU emulator version 6.2.0 (qemu-6.2.0-12.fc36)\nCopyright (c) 2003-2021 Fabrice Bellard and the QEMU Project developers"
  Path = "/usr/bin/qemu-system-aarch64"
  BlockDeviceDriver = "virtio-scsi"
  EntropySource = "/dev/urandom"
  SharedFS = "virtio-fs"
  VirtioFSDaemon = "/usr/libexec/virtiofsd"
  SocketPath = ""
  Msize9p = 8192
  MemorySlots = 10
  PCIeRootPort = 0
  HotplugVFIOOnRootBus = false
  Debug = false
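The awk -v RS= idiom used above treats blank-line-separated blocks as records, printing only the matching section. A standalone illustration with fabricated kata-env-style input (the real kata-runtime kata-env output has many more sections and fields):

```shell
#!/bin/sh
# Demonstrate selecting one blank-line-separated section with awk.
# The sample content below is fabricated for illustration.
cat > /tmp/kata-env-sample.txt <<'EOF'
[Runtime]
  Debug = false

[Hypervisor]
  MachineType = "virt"
  Path = "/usr/bin/qemu-system-aarch64"

[Image]
  Path = ""
EOF

# With RS= (paragraph mode), prints only the [Hypervisor] block
awk -v RS= '/\[Hypervisor\]/' /tmp/kata-env-sample.txt
```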

Setup MicroShift to use Kata Containers

Add a registries line under [crio.image] in /etc/crio/crio.conf to set the default registries to download images from, and add NET_RAW to default_capabilities under [crio.runtime] so that we can run ping from within the containers.

[crio]
[crio.image]
registries=["quay.io","docker.io"]

# The crio.runtime table contains settings pertaining to the OCI runtime used
# and options for how to set up and manage the OCI runtime.
[crio.runtime]

default_capabilities = [
        "CHOWN",
        "DAC_OVERRIDE",
        "FSETID",
        "FOWNER",
        "SETGID",
        "SETUID",
        "SETPCAP",
        "NET_BIND_SERVICE",
        "KILL",
        "NET_RAW",
]

# If true, SELinux will be used for pod separation on the host.
selinux = true

Look at the file /etc/crio/crio.conf.d/50-kata that was added for the Kata runtime:

[root@coreos kata]# cat /etc/crio/crio.conf.d/50-kata
[crio.runtime.runtimes.kata]
  runtime_path = "/usr/bin/containerd-shim-kata-v2"
  runtime_type = "vm"
  runtime_root = "/run/vc"
  privileged_without_host_devices = true

Restart crio for the Kata changes to take effect

systemctl restart crio

Replace the MicroShift binary and the service if you are on Fedora 36. The microshift service, as installed, references the microshift binary in the /usr/bin directory:

[root@microshift ~]# cat /usr/lib/systemd/system/microshift.service
[Unit]
Description=MicroShift
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/usr/bin/
ExecStart=microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target

The microshift binary from Apr 20, 2022, installed in /usr/bin by the rpm installer, does not work with Fedora 36. It causes a crash loop with the following error in journalctl:

Jun 10 21:12:36 microshift.example.com microshift[5336]: unexpected fault address 0x0
Jun 10 21:12:36 microshift.example.com microshift[5336]: fatal error: fault
Jun 10 21:12:36 microshift.example.com microshift[5336]: [signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x1c0cbdf]
Jun 10 21:12:36 microshift.example.com microshift[5336]: goroutine 48915 [running]:

We will replace it with a newer binary from May 11, 2022 downloaded from https://github.com/openshift/microshift/releases/. Note that the microshift-linux-arm64 binaries from the 05-11-2022 Latest Nightly Build and from 04-20-2022 (both 4.8.0-0.microshift-2022-04-20-182108 and 4.8.0-0.microshift-2022-04-20-141053) work. The microshift installed by rpm reports version 4.8.0-0.microshift-2022-04-20-141053, but that binary does not work. You may also build the microshift binary from sources as shown in Part 21.

Files that ship in packages downloaded from the distribution repository go into /usr/lib/systemd/system; modifications done by users go into /etc/systemd/system. Unit files in /etc/systemd/system override those in /usr/lib/systemd/system.

curl -L https://github.com/openshift/microshift/releases/download/nightly/microshift-linux-arm64 > /usr/local/bin/microshift
chmod +x /usr/local/bin/microshift
/usr/local/bin/microshift version
cp /usr/lib/systemd/system/microshift.service /etc/systemd/system/microshift.service
# vi /etc/systemd/system/microshift.service # Change path to /usr/local/bin
sed -i "s|/usr/bin|/usr/local/bin|" /etc/systemd/system/microshift.service
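Equivalently, a systemd drop-in could override just the changed settings instead of copying the whole unit file. A sketch, under the assumption that the new binary lives in /usr/local/bin (the blog's approach of copying the full unit is equally valid):

```ini
# /etc/systemd/system/microshift.service.d/override.conf (hypothetical drop-in)
[Service]
WorkingDirectory=/usr/local/bin/
# An empty ExecStart= clears the original value before setting the new one
ExecStart=
ExecStart=/usr/local/bin/microshift run
```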

We need to run systemctl daemon-reload so that systemd picks up the changed unit file from the filesystem and regenerates its dependency trees.

systemctl daemon-reload
systemctl start microshift

Setting the Kata runtime and running the Nginx sample

We clone the repo with the kata samples, create the runtime class and run the samples.

export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "oc get nodes;oc get pods -A;crictl stats -a"
cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/kata/
oc apply -f kata-runtimeclass.yaml
oc apply -f kata-nginx.yaml
journalctl -b -t kata -f

Output:

[root@microshift /]# systemctl start microshift

[root@microshift hack]# cd ~/microshift/raspberry-pi/kata/
[root@microshift kata]# export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
[root@microshift kata]# watch "oc get nodes;oc get pods -A;crictl stats -a"

[root@microshift kata]# oc apply -f kata-runtimeclass.yaml
runtimeclass.node.k8s.io/kata created 

If you do not create the runtime class, you will get the following error when starting a pod that uses the kata runtime class:

Error from server (Forbidden): error when creating "kata-nginx.yaml": pods "kata-nginx" is forbidden: pod rejected: RuntimeClass "kata" not found

If the nginx pod gets created successfully, you will see:

[root@microshift kata]# oc apply -f kata-nginx.yaml
pod/kata-nginx created

[root@microshift kata]# oc exec -it kata-nginx -- curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
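In essence, the kata-nginx.yaml applied above is an ordinary pod manifest that pins the Kata runtime class (a sketch; the actual file in the cloned repo may differ in detail):

```yaml
# Sketch of a pod pinned to the kata runtime class; the actual
# kata-nginx.yaml in the repo may differ.
apiVersion: v1
kind: Pod
metadata:
  name: kata-nginx
spec:
  runtimeClassName: kata
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```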

If you get a “:10250: connect: no route to host” error such as the following, it means you have not set the correct IP address of the Raspberry Pi in /etc/hosts.

[root@microshift kata]# oc exec -it kata-nginx -- curl localhost
Error from server: error dialing backend: dial tcp 192.168.1.209:10250: connect: no route to host

Fix it by running the following, where $ipaddress is the IP address of your Raspberry Pi 4, followed by the short and full host names.

echo $ipaddress microshift microshift.example.com >> /etc/hosts

If you get the following error when creating the pod:

Failed to create pod sandbox: rpc error: code = Unknown desc = CreateContainer failed: /usr/share/kata-containers/defaults/configuration.toml: file /var/cache/kata-containers/vmlinuz.container does not exist: not found

it means that the symbolic link /var/cache/kata-containers/vmlinuz.container is not set correctly. Run:

systemctl restart kata-osbuilder-generate.service

Output:

[root@microshift kata]# systemctl restart kata-osbuilder-generate.service
[root@microshift kata]# ls -las  /var/cache/kata-containers/vmlinuz.container
0 lrwxrwxrwx. 1 root root 46 Jul 31 13:55 /var/cache/kata-containers/vmlinuz.container -> /lib/modules/5.18.11-200.fc36.aarch64//vmlinuz

We can check the iptable rules with:

oc logs ds/kube-flannel-ds -n kube-system

You may also run the other samples in this kata folder

oc apply -f kata-alpine.yaml -f kata-busybox.yaml -f kata-nginx.yaml

When done, you may delete the samples

oc delete -f kata-alpine.yaml -f kata-busybox.yaml -f kata-nginx.yaml

InfluxDB sample with multiple Kata containers

Now let’s run the influxdb sample with kata. We ran this sample in previous parts without kata.

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/influxdb/
restorecon -R -v "/var/hpvolumes"

We update the influxdb-deployment.yaml, telegraf-deployment.yaml and grafana/grafana-deployment.yaml to use runtimeClassName: kata. With Kata Containers, we do not get direct access to the host devices, so we run the measure container as a runc pod.

# sed -i '/^    spec:/a \ \ \ \ \ \ runtimeClassName: kata' influxdb-deployment.yaml telegraf-deployment.yaml grafana/grafana-deployment.yaml

sed -i '/^    spec:/a \ \ \ \ \ \ runtimeClassName: kata' influxdb-deployment.yaml
sed -i '/^    spec:/a \ \ \ \ \ \ runtimeClassName: kata' telegraf-deployment.yaml
sed -i '/^    spec:/a \ \ \ \ \ \ runtimeClassName: kata' grafana/grafana-deployment.yaml
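The sed one-liners above append a runtimeClassName line after each `    spec:` line, using escaped spaces to preserve the YAML indentation. A self-contained check of the same expression on a scratch file:

```shell
#!/bin/sh
# Verify the sed append-after-pattern expression on a minimal
# deployment fragment (scratch file, not the real manifests).
cat > /tmp/deploy-fragment.yaml <<'EOF'
  template:
    spec:
      containers:
      - name: influxdb
EOF

# Each "\ " is an escaped space, yielding six spaces of indentation
sed -i '/^    spec:/a \ \ \ \ \ \ runtimeClassName: kata' /tmp/deploy-fragment.yaml
cat /tmp/deploy-fragment.yaml
```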

We use the kata runtime for the deployments for influxdb, telegraf and Grafana that now show:

    spec:
      runtimeClassName: kata
      containers:

In runc, '--privileged' for a container means all the /dev/* block devices from the host are mounted into the container. This allows the privileged container to gain access to mount any block device from the host.

In Kata, '--privileged' means that containers in Kata VM can access all the devices in Kata guest VM. CRI-O allows configuring the privileged host devices behavior for each runtime in the CRI config with the privileged_without_host_devices option. Setting this to true will disable hot plugging of the host devices into the guest, even when privileged is enabled.

Now, get the nodename

[root@microshift influxdb]# oc get nodes
NAME                     STATUS   ROLES    AGE   VERSION
microshift.example.com   Ready    <none>   12h   v1.21.0

Replace the annotation kubevirt.io/provisionOnNode with the above nodename microshift.example.com (or coreos) and execute the runall-fedora-dynamic.sh. This will create a new project influxdb.

nodename=microshift.example.com
sed -i "s|kubevirt.io/provisionOnNode:.*| kubevirt.io/provisionOnNode: $nodename|" influxdb-data-dynamic.yaml
sed -i "s| kubevirt.io/provisionOnNode:.*| kubevirt.io/provisionOnNode: $nodename|" grafana/grafana-data-dynamic.yaml

./runall-fedora-dynamic.sh

Look at the pods and metrics for the Kata containers. “crictl stats” shows the memory for the Kata containers, but the CPU shows 0.

watch "oc get nodes;oc get pods -A;crictl stats -a"

Output:

crictl ps | tail -n +2 | sort 
CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                            ATTEMPT             POD ID
[root@microshift influxdb]# crictl ps -a | tail -n +2 | sort
13845ebfbffb8       85fc911ceba5a5a5e43a7c613738b2d6c0a14dad541b1577cdc6f921c16f5b75                                                  2 hours ago         Running             kube-flannel                    0                   92fa1d65356e4
1c6ca1b970add       quay.io/microshift/haproxy-router@sha256:706a43c785337b0f29aef049ae46fdd65dcb2112f4a1e73aaf0139f70b14c6b5         2 hours ago         Running             router                          0                   52ab1e6e55b1f
2ea8bb780b17e       docker.io/karve/measure-fedora@sha256:28878e7159076bbc6433155a190078f22c7d1a43dd4732b570bbf469a8af95c7            18 minutes ago      Running             measure                         0                   7293dd5197b49
368aadb215366       quay.io/microshift/hostpath-provisioner@sha256:cb0c1cc60c1ba90efd558b094ba1dee3a75e96b76e3065565b60a07e4797c04c   2 hours ago         Running             kubevirt-hostpath-provisioner   0                   1e0c90554e7a9
3d6a2ea416d97       quay.io/microshift/kube-rbac-proxy@sha256:2b5f44b11bab4c10138ce526439b43d62a890c3a02d42893ad02e2b3adb38703        2 hours ago         Running             kube-rbac-proxy                 0                   baa182164cb5a
4922cc09ee36e       quay.io/microshift/cli@sha256:1848138e5be66753863c98b86c274bd7fb8572fe0da6f7156f1644187e4cfb84                    2 hours ago         Running             dns-node-resolver               0                   b390a09a29a59
4e75bbfa123b3       quay.io/microshift/flannel@sha256:13777a318497ae35593bb73499a0e2ff4cb7eda74f59c1ba7c3e88c717cbaab9                2 hours ago         Exited              install-cni                     0                   92fa1d65356e4
76166e4e99cc0       08c08436590bbbabe5a5b0212dbe2820b753918d1830e4ce3d77dc2d9f5ac0af                                                  18 minutes ago      Running             telegraf                        0                   5621ab846ac0d
7da8a47d17fd1       e331183e52199aa100a9a488a168454cd1d4328b10bfa1ee8ef84df753806cd7                                                  17 minutes ago      Running             grafana                         0                   be58bfd57c8af
a80544a2c47cc       quay.io/microshift/coredns@sha256:07e5397247e6f4739727591f00a066623af9ca7216203a5e82e0db2fb24514a3                2 hours ago         Running             dns                             0                   baa182164cb5a
a847739fc552b       4507f7da31b05c4e2fecbd648ebd8daa93c5c47b50ddfbc04948e5744728c0b9                                                  18 minutes ago      Running             influxdb                        0                   79fa3a0864781
e21a7605567cc       quay.io/microshift/service-ca-operator@sha256:1a8e53c8a67922d4357c46e5be7870909bb3dd1e6bea52cfaf06524c300b84e8    2 hours ago         Running             service-ca-controller           0                   df1a150a2c60d
e3522dab6f748       quay.io/microshift/flannel-cni@sha256:39f81dd125398ce5e679322286344a4c13dded73ea0bf4f397e5d1929b43d033            2 hours ago         Exited              install-cni-bin                 0                   92fa1d65356e4

crictl stats -a | tail -n +2 | sort
CONTAINER           CPU %               MEM                 DISK                INODES
13845ebfbffb8       0.00                0B                  138B                15
1c6ca1b970add       0.00                0B                  13.31kB             22
2ea8bb780b17e       0.00                0B                  35.41kB             17
368aadb215366       0.00                0B                  12B                 18
3d6a2ea416d97       0.00                0B                  12B                 19
4922cc09ee36e       0.00                0B                  0B                  4
76166e4e99cc0       0.00                52.08MB             186kB               11
7da8a47d17fd1       0.00                65.77MB             4.021MB             69
a80544a2c47cc       0.00                0B                  0B                  3
a847739fc552b       0.00                56.64MB             265B                11
e21a7605567cc       0.00                0B                  6.965kB             11

Pressure metrics - The some statistic gives the percentage of time in which some (one or more) tasks were delayed due to lack of resources. The full statistic gives the percentage of time in which all tasks were delayed by lack of resources, i.e., the amount of completely unproductive time.

watch 'oc exec -it deployment/influxdb-deployment -- bash -c "cat /sys/fs/cgroup/cpu.pressure;cat /sys/fs/cgroup/cpu.stat"'
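The cpu.pressure file has a fixed PSI layout (some/full lines with avg10/avg60/avg300 averages and a total). Assuming that standard format, the 10-second averages can be pulled out like this; the sample content is fabricated for illustration:

```shell
#!/bin/sh
# Parse the avg10 percentages from a cpu.pressure-style file.
# The sample content below is fabricated for illustration.
cat > /tmp/cpu.pressure.sample <<'EOF'
some avg10=1.53 avg60=0.87 avg300=0.31 total=1234567
full avg10=0.42 avg60=0.20 avg300=0.08 total=654321
EOF

# $1 is "some"/"full"; $2 is "avg10=<value>", split on "="
awk '{ split($2, a, "="); printf "%s delayed %s%% of the last 10s\n", $1, a[2] }' \
    /tmp/cpu.pressure.sample
# prints:
#   some delayed 1.53% of the last 10s
#   full delayed 0.42% of the last 10s
```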

Identify the hosting pod of a container

crictl inspect --output go-template --template "{{.info.runtimeSpec.hostname}}" YOUR_CONTAINER_ID

Find the PID of a running container

crictl inspect --output go-template --template '{{.info.pid}}' YOUR_CONTAINER_ID

We can list the runtimeClass of the pods as follows:

oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName

Output:

[root@fedora influxdb]# oc get pods -o custom-columns=NAME:metadata.name,STATUS:.status.phase,RUNTIME_CLASS:.spec.runtimeClassName
NAME                                   STATUS    RUNTIME_CLASS
grafana-855ffb48d8-wbqj8               Running   kata
influxdb-deployment-6d898b7b7b-9hz2m   Running   kata
measure-deployment-5557947b8c-kbs2l    Running   <none>
telegraf-deployment-d746f5c6-wzvl5     Running   kata

The following command shows the three sandbox VMs running with qemu-system-aarch64.

ps -o pid,user,%mem,command ax | grep "/usr/bin/qemu-system-aarch64" | sort -b -k3 -r

Output:

[root@microshift influxdb]# ps -o pid,user,%mem,command ax | grep "/usr/bin/qemu-system-aarch64" | sort -b -k3 -r
  26330 root      4.2 /usr/bin/qemu-system-aarch64 -name sandbox-be58bfd57c8af2eaec3c96a1c9c9a3d6668fe64d8d258ba3f836a4f3cdb5457e -uuid 3833677e-37de-48e8-8614-dbfcc31c3a03 -machine virt,usb=off,accel=kvm,gic-version=host -cpu host,pmu=off -qmp unix:/run/vc/vm/be58bfd57c8af2eaec3c96a1c9c9a3d6668fe64d8d258ba3f836a4f3cdb5457e/qmp.sock,server=on,wait=off -m 2048M,slots=10,maxmem=7773M -device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=off,addr=2,io-reserve=4k,mem-reserve=1m,pref64-reserve=1m -device virtio-serial-pci,disable-modern=false,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/be58bfd57c8af2eaec3c96a1c9c9a3d6668fe64d8d258ba3f836a4f3cdb5457e/console.sock,server=on,wait=off -device virtio-scsi-pci,id=scsi0,disable-modern=false -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0 -device vhost-vsock-pci,disable-modern=false,vhostfd=3,id=vsock-2576937113,guest-cid=2576937113 -chardev socket,id=char-ea0bd4a544682428,path=/run/vc/vm/be58bfd57c8af2eaec3c96a1c9c9a3d6668fe64d8d258ba3f836a4f3cdb5457e/vhost-fs.sock -device vhost-user-fs-pci,chardev=char-ea0bd4a544682428,tag=kataShared -netdev tap,id=network-0,vhost=on,vhostfds=4,fds=5 -device driver=virtio-net-pci,netdev=network-0,mac=52:fa:21:24:f8:69,disable-modern=false,mq=on,vectors=4 -rtc base=utc,driftfix=slew,clock=host -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic --no-reboot -daemonize -object memory-backend-file,id=dimm1,size=2048M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1 -kernel /usr/lib/modules/5.18.11-200.fc36.aarch64/vmlinuz -initrd /var/cache/kata-containers/osbuilder-images/5.18.11-200.fc36.aarch64/fedora-kata-5.18.11-200.fc36.aarch64.initrd -append console=hvc0 console=hvc1 iommu.passthrough=0 quiet panic=1 nr_cpus=4 scsi_mod.scan=none -pidfile /run/vc/vm/be58bfd57c8af2eaec3c96a1c9c9a3d6668fe64d8d258ba3f836a4f3cdb5457e/pid -smp 
1,cores=1,threads=1,sockets=4,maxcpus=4
  25771 root      4.1 /usr/bin/qemu-system-aarch64 -name sandbox-79fa3a08647811ff47f60645a38ec42224c9b44b265daddb22d553160d8666b8 -uuid 0931349c-a5d0-465d-a999-63d7d0d43ec7 -machine virt,usb=off,accel=kvm,gic-version=host -cpu host,pmu=off -qmp unix:/run/vc/vm/79fa3a08647811ff47f60645a38ec42224c9b44b265daddb22d553160d8666b8/qmp.sock,server=on,wait=off -m 2048M,slots=10,maxmem=7773M -device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=off,addr=2,io-reserve=4k,mem-reserve=1m,pref64-reserve=1m -device virtio-serial-pci,disable-modern=false,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/79fa3a08647811ff47f60645a38ec42224c9b44b265daddb22d553160d8666b8/console.sock,server=on,wait=off -device virtio-scsi-pci,id=scsi0,disable-modern=false -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0 -device vhost-vsock-pci,disable-modern=false,vhostfd=3,id=vsock-1591170364,guest-cid=1591170364 -chardev socket,id=char-66516efb06bda651,path=/run/vc/vm/79fa3a08647811ff47f60645a38ec42224c9b44b265daddb22d553160d8666b8/vhost-fs.sock -device vhost-user-fs-pci,chardev=char-66516efb06bda651,tag=kataShared -netdev tap,id=network-0,vhost=on,vhostfds=4,fds=5 -device driver=virtio-net-pci,netdev=network-0,mac=02:04:8a:e8:05:da,disable-modern=false,mq=on,vectors=4 -rtc base=utc,driftfix=slew,clock=host -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic --no-reboot -daemonize -object memory-backend-file,id=dimm1,size=2048M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1 -kernel /usr/lib/modules/5.18.11-200.fc36.aarch64/vmlinuz -initrd /var/cache/kata-containers/osbuilder-images/5.18.11-200.fc36.aarch64/fedora-kata-5.18.11-200.fc36.aarch64.initrd -append console=hvc0 console=hvc1 iommu.passthrough=0 quiet panic=1 nr_cpus=4 scsi_mod.scan=none -pidfile /run/vc/vm/79fa3a08647811ff47f60645a38ec42224c9b44b265daddb22d553160d8666b8/pid -smp 
1,cores=1,threads=1,sockets=4,maxcpus=4
  26088 root      4.0 /usr/bin/qemu-system-aarch64 -name sandbox-5621ab846ac0d0f972f328a3fe9d8badcd430cd4982ff30825af7b72364e49a1 -uuid a26f822e-dd24-4602-a60a-1f2181c4f032 -machine virt,usb=off,accel=kvm,gic-version=host -cpu host,pmu=off -qmp unix:/run/vc/vm/5621ab846ac0d0f972f328a3fe9d8badcd430cd4982ff30825af7b72364e49a1/qmp.sock,server=on,wait=off -m 2048M,slots=10,maxmem=7773M -device pci-bridge,bus=pcie.0,id=pci-bridge-0,chassis_nr=1,shpc=off,addr=2,io-reserve=4k,mem-reserve=1m,pref64-reserve=1m -device virtio-serial-pci,disable-modern=false,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/5621ab846ac0d0f972f328a3fe9d8badcd430cd4982ff30825af7b72364e49a1/console.sock,server=on,wait=off -device virtio-scsi-pci,id=scsi0,disable-modern=false -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0 -device vhost-vsock-pci,disable-modern=false,vhostfd=3,id=vsock-4249272132,guest-cid=4249272132 -chardev socket,id=char-7b998453a174897b,path=/run/vc/vm/5621ab846ac0d0f972f328a3fe9d8badcd430cd4982ff30825af7b72364e49a1/vhost-fs.sock -device vhost-user-fs-pci,chardev=char-7b998453a174897b,tag=kataShared -netdev tap,id=network-0,vhost=on,vhostfds=4,fds=5 -device driver=virtio-net-pci,netdev=network-0,mac=9e:52:ca:ab:b8:32,disable-modern=false,mq=on,vectors=4 -rtc base=utc,driftfix=slew,clock=host -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic --no-reboot -daemonize -object memory-backend-file,id=dimm1,size=2048M,mem-path=/dev/shm,share=on -numa node,memdev=dimm1 -kernel /usr/lib/modules/5.18.11-200.fc36.aarch64/vmlinuz -initrd /var/cache/kata-containers/osbuilder-images/5.18.11-200.fc36.aarch64/fedora-kata-5.18.11-200.fc36.aarch64.initrd -append console=hvc0 console=hvc1 iommu.passthrough=0 quiet panic=1 nr_cpus=4 scsi_mod.scan=none -pidfile /run/vc/vm/5621ab846ac0d0f972f328a3fe9d8badcd430cd4982ff30825af7b72364e49a1/pid -smp 
1,cores=1,threads=1,sockets=4,maxcpus=4
  30228 root      0.0 grep --color=auto /usr/bin/qemu-system-aarch64

A Kata Container runs in an isolated environment inside a virtual machine. Capabilities grant access only to the guest kernel in the VM, not to the host system. The sandbox runs the same kernel version as the host. Behind the scenes, the guest kernel image and initrd are generated by the systemd service kata-osbuilder-generate.service (not enabled by default). Comparing the boot parameters below shows that, although the version matches, the guest kernel instance is not the same as the host's.

# On the sandbox
[root@microshift influxdb]# oc exec -it deployment/grafana -- uname -r
5.18.11-200.fc36.aarch64
[root@microshift influxdb]# oc exec -it deployment/grafana -- cat /proc/cmdline
console=hvc0 console=hvc1 iommu.passthrough=0 quiet panic=1 nr_cpus=4 scsi_mod.scan=none
[root@microshift ~]# oc exec -it deployment/grafana -- ls /dev
fd  full  kcore  mqueue  null  ptmx  pts  random  shm  stderr  stdin  stdout  termination-log  tty  urandom  zero

# On the host
[root@microshift influxdb]# uname -r
5.18.11-200.fc36.aarch64
[root@microshift influxdb]# cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos3)/ostree/fedora-b139a339f99bfca027abe84c64e79d3d9050b68da05a913b65929469d34751e1/vmlinuz-5.18.11-200.fc36.aarch64 rhgb quiet root=UUID=460c2b61-a465-4789-a9e4-36940540d353 rootflags=subvol=root ostree=/ostree/boot.0/fedora/b139a339f99bfca027abe84c64e79d3d9050b68da05a913b65929469d34751e1/0
[root@microshift ~]# ls /dev
autofs           hidraw0       media0     rtc       tty16  tty31  tty47  tty62    ttyS19  ttyS6    vcs5   vga_arbiter
block            hidraw1       mem        rtc0      tty17  tty32  tty48  tty63    ttyS2   ttyS7    vcs6   vhci
…

Apply the metrics-server components and look at the output of oc adm top. It shows CPU and memory usage only for non-Kata containers.

[root@microshift ~]# oc apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created


[root@microshift ~]# oc adm top nodes;oc adm top pods -n influxdb
NAME                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
microshift.example.com   628m         15%    3536Mi          45%
NAME                                  CPU(cores)   MEMORY(bytes)
measure-deployment-5557947b8c-4bmlm   1m           25Mi

Add the "<RaspberryPiIPAddress> grafana-service-influxdb.cluster.local" entry to /etc/hosts on your laptop and log in to http://grafana-service-influxdb.cluster.local/login using admin/admin. You will need to change the password on first login. Go to the Dashboards list (left menu > Dashboards > Manage). Open the Analysis Server dashboard to display monitoring information for MicroShift. Open the Balena Sense dashboard to show the temperature, pressure, and humidity from SenseHat.
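The /etc/hosts entry can also be appended from a shell. A minimal sketch, assuming 192.168.1.209 stands in for your Raspberry Pi's address (replace with yours); the HOSTS variable is only there so the snippet can be tried against a scratch file first:

```shell
# Append the Grafana route hostname to /etc/hosts if it is not already there.
# The IP address is a placeholder; HOSTS defaults to the real /etc/hosts.
HOSTS="${HOSTS:-/etc/hosts}"
IP="${IP:-192.168.1.209}"
grep -q 'grafana-service-influxdb.cluster.local' "$HOSTS" || \
  echo "$IP grafana-service-influxdb.cluster.local" >> "$HOSTS"
```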

Finally, after you are done working with this sample, you can run the deleteall-fedora-dynamic.sh script:

./deleteall-fedora-dynamic.sh

Deleting the persistent volume claims automatically deletes the persistent volumes.

Jupyter notebook samples in Kata containers

We ran these notebook samples in previous parts in standard containers. Now we will run them in Kata Containers by setting the runtimeClassName: kata.
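For reference, selecting the Kata runtime is a one-line addition at the pod-spec level. A minimal sketch (the pod name and image here are placeholders, not taken from the samples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kata-example           # placeholder name
spec:
  runtimeClassName: kata       # run this pod's containers in a Kata VM
  containers:
  - name: app
    image: nginx               # placeholder image
```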

We need to increase the memory assigned to the Kata containers for some notebook samples. Set default_memory = 4096 (the default is 2048 MB). We do this by copying configuration.toml to the /etc/kata-containers directory, because the /usr/share/kata-containers/defaults directory is read-only on some editions of Fedora.

mkdir /etc/kata-containers
cp /usr/share/kata-containers/defaults/configuration.toml /etc/kata-containers/.
vi /etc/kata-containers/configuration.toml # Change the default_memory = 4096
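The edit can also be scripted with sed instead of vi. A sketch; the CFG variable defaults to the copied file and can be overridden to try the command on a scratch copy first:

```shell
# Raise the Kata VM memory from the 2048 MB default to 4096 MB.
CFG="${CFG:-/etc/kata-containers/configuration.toml}"
sed -i 's/^default_memory = .*/default_memory = 4096/' "$CFG"
grep '^default_memory' "$CFG"
```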

Restart crio for this to take effect. Note that if you are running MicroShift Containerized, you need to stop microshift, restart crio, and then start microshift. You can verify that the Kata runtime is using the new configuration.toml:

[root@coreos tensorflow-notebook]# kata-runtime kata-env | grep -A1 "Runtime.Config"
  [Runtime.Config]
    Path = "/etc/kata-containers/configuration.toml"

Now let’s run the samples:

cd ~
git clone https://github.com/thinkahead/microshift.git
cd ~/microshift/raspberry-pi/tensorflow-notebook

Digit recognition sample

sed -i '/^  initContainers:/i \ \ runtimeClassName: kata' digit-recognition.yaml
oc apply -f digit-recognition.yaml
oc -n default wait pod digit-recognition --for condition=Ready --timeout=600s
oc get routes
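The sed one-liner inserts runtimeClassName at the pod-spec level, just before initContainers. You can sanity-check the substitution locally against a stand-in YAML (a hypothetical file, not one of the samples):

```shell
# Create a minimal stand-in pod spec and apply the same sed insertion.
cat > /tmp/sample.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sample
spec:
  initContainers:
  - name: init
    image: busybox
EOF
sed -i '/^  initContainers:/i \ \ runtimeClassName: kata' /tmp/sample.yaml
# The inserted line should now appear directly above initContainers.
grep -B1 'initContainers:' /tmp/sample.yaml
```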

Add the IP address of the Raspberry Pi 4 device for digit-recognition-route-default.cluster.local to /etc/hosts on your laptop. When the pod status shows Running, browse to http://digit-recognition-route-default.cluster.local/notebooks/work/digits.ipynb. The default password is mysecretpassword. Run this notebook as described in Part 15. When we are done working with the digit recognition sample notebook, we can delete it:

oc delete -f digit-recognition.yaml

License plate recognition sample

sed -i '/^  initContainers:/i \ \ runtimeClassName: kata' notebook.yaml
oc apply -f notebook.yaml 
oc -n default wait pod notebook --for condition=Ready --timeout=600s
oc get routes

Add the IP address of the Raspberry Pi 4 device for notebook-route-default.cluster.local to /etc/hosts on your laptop and browse to http://notebook-route-default.cluster.local/tree?. Log in with the default password mysecretpassword. Run the notebooks as mentioned in Part 20.

Detect plate number



When we are done working with the license plate recognition sample notebook, we can delete it as follows:

oc delete -f notebook.yaml

Object Detection Sample

sed -i '/^  initContainers:/i \ \ runtimeClassName: kata' object-detection-rest.yaml
oc apply -f object-detection-rest.yaml 
oc -n default wait pod notebook --for condition=Ready --timeout=300s
oc get routes

Run the notebooks as mentioned in Part 17. The default password is mysecretpassword. First we can run the 1_explore.ipynb that downloads twodogs.jpg and uses a pre-trained model to identify objects in images. In the next notebooks (2_predict.ipynb, 3_run_flask.ipynb, and 4_test_flask.ipynb), this model is wrapped in a Flask app that can be used as part of a larger application. When we are done working with the object detection sample notebook, we can delete it as follows:

oc delete -f object-detection-rest.yaml

MicroShift Containerized

We can use the latest prebuilt image.

Run the cleanup to delete the MicroShift that was installed directly on the Raspberry Pi 4. You may also delete the /etc/microshift directory on the host. Then, start MicroShift in a container with podman. The CRI-O systemd service runs directly on the host, and the MicroShift data may be stored in /var/lib/microshift or in a podman volume.

IMAGE=quay.io/microshift/microshift:4.8.0-0.microshift-2022-04-20-182108-linux-arm64
podman pull $IMAGE

podman run --rm --ipc=host --network=host --privileged -d --name microshift -v /var/run:/var/run -v /sys:/sys:ro -v /var/lib:/var/lib:rw,rshared -v /lib/modules:/lib/modules -v /etc:/etc -v /run/containers:/run/containers -v /var/log:/var/log -v /var/hpvolumes:/var/hpvolumes -e KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig $IMAGE
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch "podman ps;oc get nodes;oc get pods -A;crictl pods"

This will run the microshift container, and we can run the kata samples shown earlier.

After we are done, we can stop the microshift container. The --rm flag used in podman run deletes the container when it is stopped.

podman stop microshift

After it is stopped, we can run the cleanup.sh to delete the pods and images from crio.

Alternatively, we can create the /etc/systemd/system/microshift.service and start the microshift service.

[Unit]
Description=MicroShift Containerized
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target crio.service
After=network-online.target crio.service
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/usr/bin/mkdir -p /var/lib/kubelet ; /usr/bin/mkdir -p /var/hpvolumes
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/bin/podman run \
  --cidfile=%t/%n.ctr-id \
  --cgroups=no-conmon \
  --rm \
  --replace \
  --sdnotify=container \
  --label io.containers.autoupdate=registry \
  --network=host \
  --privileged \
  -d \
  --name microshift \
  -v /var/hpvolumes:/var/hpvolumes:z,rw,rshared \
  -v /var/run/crio/crio.sock:/var/run/crio/crio.sock:rw,rshared \
  -v microshift-data:/var/lib/microshift:rw,rshared \
  -v /var/lib/kubelet:/var/lib/kubelet:z,rw,rshared \
  -v /var/log:/var/log \
  -v /etc:/etc quay.io/microshift/microshift:latest
ExecStop=/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target

The following output shows MicroShift Containerized with Kata on Fedora 36 Server. This service uses the podman volume microshift-data.

[root@fedora ~]# systemctl daemon-reload
[root@fedora ~]# systemctl enable --now crio microshift
Created symlink /etc/systemd/system/multi-user.target.wants/microshift.service → /etc/systemd/system/microshift.service.
Created symlink /etc/systemd/system/default.target.wants/microshift.service → /etc/systemd/system/microshift.service.
[root@fedora ~]# podman volume ls
DRIVER      VOLUME NAME
local       microshift-data
[root@fedora ~]# podman volume inspect microshift-data # Get the Mountpoint where kubeconfig is located
[
     {
          "Name": "microshift-data",
          "Driver": "local",
          "Mountpoint": "/var/lib/containers/storage/volumes/microshift-data/_data",
          "CreatedAt": "2022-07-30T09:08:43.379956373-04:00",
          "Labels": {},
          "Scope": "local",
          "Options": {},
          "MountCount": 0,
          "NeedsCopyUp": true
     }
]
[root@fedora ~]# export KUBECONFIG=/var/lib/containers/storage/volumes/microshift-data/_data/resources/kubeadmin/kubeconfig
[root@fedora ~]# watch "oc get nodes;oc get pods -A;crictl pods;crictl images;podman ps"

Now we can apply the kata runtime class and run the samples with Kata as shown earlier:

[root@fedora kata]# oc apply -f kata-runtimeclass.yaml
runtimeclass.node.k8s.io/kata created
[root@fedora kata]# oc apply -f kata-alpine.yaml -f kata-nginx.yaml -f kata-busybox.yaml
pod/kata-alpine created
pod/busybox-1 created
pod/kata-nginx created
[root@fedora kata]# watch "oc get nodes;oc get pods -A;crictl stats"

Output:

NAME                 STATUS   ROLES    AGE   VERSION
fedora.example.com   Ready    <none>   11m   v1.21.0
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE
default                         busybox-1                             1/1     Running   0          2m29s
default                         kata-alpine                           1/1     Running   0          2m30s
default                         kata-nginx                            1/1     Running   0          117s
kube-system                     kube-flannel-ds-95nx9                 1/1     Running   0          11m
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-lvv67   1/1     Running   0          11m
openshift-dns                   dns-default-5cgj7                     2/2     Running   0          11m
openshift-dns                   node-resolver-gb9hf                   1/1     Running   0          11m
openshift-ingress               router-default-85bcfdd948-dxscg       1/1     Running   0          11m
openshift-service-ca            service-ca-7764c85869-xjf8c           1/1     Running   0          11m

CONTAINER           CPU %               MEM                 DISK                INODES
519d8dd386f0a       0.00                1.47MB              89B                 8
8d1739f6395fe       0.00                10.75MB             1.225kB             21
9e74a305f4f5b       0.00                1.012MB             12B                 16

When done, we can delete the Kata containers:

[root@fedora kata]# oc delete -f kata-alpine.yaml -f kata-nginx.yaml -f kata-busybox.yaml
pod "kata-alpine" deleted
pod "kata-nginx" deleted
pod "busybox-1" deleted

When running MicroShift Containerized, if your container is unable to resolve external hostnames, check the logs of the dns-default daemonset in the openshift-dns namespace.

[root@coreos hack]# oc logs daemonset.apps/dns-default -n openshift-dns -c dns -f
…
[ERROR] plugin/errors: 2 google.com.fios-router.home. A: read udp 127.0.0.1:57939->127.0.0.53:53: read: connection refused
[ERROR] plugin/errors: 2 google.com.fios-router.home. AAAA: read udp 127.0.0.1:49760->127.0.0.53:53: read: connection refused
…

If you see 127.0.0.53:53, the host is running systemd-resolved as a stub resolver. Check /etc/resolv.conf on your host Raspberry Pi 4; if it contains nameserver 127.0.0.53, replace it with a nameserver reachable from the container, run the cleanup, and start the microshift service again.

nameserver 8.8.8.8
#nameserver 127.0.0.53
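The substitution above can be scripted. A sketch; the RESOLV variable only exists so the snippet can be tried on a scratch file, and 8.8.8.8 is just the example nameserver from the text:

```shell
# Disable the systemd-resolved stub address and ensure a reachable nameserver.
RESOLV="${RESOLV:-/etc/resolv.conf}"
sed -i 's/^nameserver 127.0.0.53/#nameserver 127.0.0.53/' "$RESOLV"
grep -q '^nameserver 8.8.8.8' "$RESOLV" || echo 'nameserver 8.8.8.8' >> "$RESOLV"
```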

Errors

QMP command failed: The feature 'query-hotpluggable-cpus' is not enabled

This error occurs if you add resource limits to the containers. The dedicated legacy cpu-add QMP command was removed in QEMU v5.2; CPU hotplug must now use the device_add interface.

Conclusion

In this Part 23 we looked at running MicroShift with Kata Containers on Fedora 36 in two ways: directly and containerized. We ran multiple samples and saw what metrics were available from the containers. The Kata Containers runtime is compatible with the OCI runtime specification and works seamlessly with the Kubernetes Container Runtime Interface (CRI) through the CRI-O and containerd implementations. Kata Containers provides a shimv2-compatible runtime. In the next Part 24, we will install Kata Containers from source on Manjaro.

Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift, KubeVirt and Kata Containers on ARM devices and if you would like to see something covered in more detail.

