Virtualization with MicroShift on Raspberry Pi 4
Introduction
MicroShift is a research project exploring how the OpenShift OKD Kubernetes distribution can be optimized for small-form-factor devices and edge computing. In Part 4, we looked at installing and running MicroShift on Raspberry Pi 4 with Raspberry Pi OS (64 bit). In this part, we will install MicroShift on Raspberry Pi OS running from a USB flash drive. We will work with KubeVirt, install the OKD Web Console, and finally leverage the integration between KubeVirt and the OKD Web Console.
OKD is the upstream version of Red Hat’s OpenShift. KubeVirt is the open-source project that makes it possible to run virtual machines in a Kubernetes-managed container platform. Since OpenShift Virtualization, formerly container-native virtualization (CNV), includes KubeVirt, there is already integration between the OKD console and KubeVirt. KubeVirt delivers container-native virtualization by leveraging KVM, the Linux kernel hypervisor, from within a Kubernetes-managed container. KVM, the Kernel-based Virtual Machine, is the virtualization solution for Linux: a kernel module that allows the Linux kernel to act as a hypervisor. It is possible to run KVM on the Raspberry Pi 4 using the Raspberry Pi OS with the 64-bit kernel.
KubeVirt as an add-on to MicroShift delivers container-native virtualization that allows us to run and manage virtual machine workloads alongside container workloads. Using a unified management approach simplifies deployments and allows for better resource utilization. Existing virtual machine disks are imported into persistent volumes (PVs), which are made accessible to virtual machines using persistent volume claims (PVCs). With the installation of KubeVirt, new types are added to the Kubernetes API to manage Virtual Machines. We can interact with the new resources via kubectl as we would with any other API resource. In general, the management of VirtualMachineInstances is like the management of Pods. Every VM that is defined in the cluster is expected to be running, just like Pods. Deleting a VirtualMachineInstance (VMI) is equivalent to shutting it down. We will build the virtctl client tool, which is used for virtual-machine-related operations on the cluster such as connecting to the serial console, logging into the VM, and starting/stopping VMs.
The OKD web console is a user interface accessible from a web browser in the form of a single-page web application. Developers can use the web console to visualize, browse, and manage the contents of namespaces. The web console uses WebSockets to maintain a persistent connection with the API server and receive updated information as soon as it is available.
Setting up the Raspberry Pi OS on USB flash drive
Although the Raspberry Pi OS with MicroShift installed in Part 4 on the MicroSDXC card worked, it is slow for VMs. For this setup, we install the Raspberry Pi OS (64 bit) on a USB drive (at least 128GB) using the Raspberry Pi Imager or balenaEtcher. By default, a Raspberry Pi boots from and stores all of its programs on a microSDXC card. To boot from the faster and larger USB drive instead, we first use a temporary microSDXC card to update the bootloader firmware that lets a Pi 4 boot from any USB device. Launch the Raspberry Pi Imager, and under Operating System scroll down to Misc Utility Images, select Bootloader, then select USB Boot, and write the image to a microSDXC card. Insert the microSDXC card into your Raspberry Pi 4 and power on. Allow 10 seconds or more for the update to complete; the green activity light will blink a steady pattern once the update is done, and if you have an HDMI monitor attached, the screen will turn green. Power off the Raspberry Pi, remove the microSDXC card, insert the USB drive with the Raspberry Pi OS (64 bit), and power on.
Enable Virtualization on Raspberry Pi OS and setup KubeVirt on MicroShift
Follow the instructions from Part 4 to install MicroShift on the Raspberry Pi 4. This includes updating the cgroup kernel parameters, setting the hostname with domain, pulling the microshift github repo, setting DISTRO=ubuntu and OS_VERSION=20.04 in install.sh, running install.sh, and installing the oc client. Then follow these steps to set up virtualization on MicroShift:
1. Install KVM
sudo apt install -y virt-manager libvirt0 qemu-system
sudo vi /etc/firewalld/firewalld.conf # set FirewallBackend=iptables
sudo systemctl restart firewalld
sudo virsh net-start default
sudo virsh net-autostart default
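The default network must be active for the VMs to get connectivity. To confirm, list the libvirt networks; default should show as active and autostarted:
sudo virsh net-list --all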
2. Validate Host Virtualization Setup
The virt-host-validate command validates that the host is configured in a suitable way to run the libvirt hypervisor driver qemu.
virt-host-validate qemu
Output:
root@raspberrypi:~# virt-host-validate qemu
QEMU: Checking if device /dev/kvm exists : PASS
QEMU: Checking if device /dev/kvm is accessible : PASS
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuset' controller support : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for device assignment IOMMU support : WARN (Unknown if this platform has IOMMU support)
QEMU: Checking for secure guest support : WARN (Unknown if this platform has Secure Guest support)
You may optionally test KVM with a Fedora VM as shown in the “Testing KVM on the Raspberry Pi OS” section later.
3. Install KubeVirt on MicroShift
If you have a slow MicroSDXC card or USB flash drive, see the section “Preload Images” below before proceeding.
LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
echo $LATEST
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
oc apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml
oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
# The .status.phase will show Deploying
oc get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}"
oc -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s
oc get pods -n kubevirt
Build and Install the OKD Web Console
Although some parts of the Web Console are not functional for MicroShift, it is usable. We will build the OKD Web Console (codename “bridge”) from source. We can either build and run the bridge binary directly on the Raspberry Pi 4, or build a container image and run it within MicroShift.
git clone https://github.com/openshift/console.git
cd console
a. Build the binary directly on the Raspberry Pi 4
Install Node.js, yarn, and build dependencies:
sudo apt-get -y install gcc g++ make
curl -fsSL https://deb.nodesource.com/setup_14.x | sudo bash -
sudo apt-get install -y nodejs
curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | sudo tee /usr/share/keyrings/yarnkey.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/yarnkey.gpg] https://dl.yarnpkg.com/debian stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt-get update && sudo apt-get install yarn
vi ./contrib/environment.sh
Replace the following line in environment.sh; we will use the console service account that we create next:
#secretname=$(kubectl get serviceaccount default --namespace=kube-system -o jsonpath='{.secrets[0].name}')
secretname=$(kubectl get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}')
source ./contrib/environment.sh
yarn config set network-timeout 300000
./build.sh
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
Open port 9000 in the firewall and run the console:
firewall-cmd --zone=public --permanent --add-port=9000/tcp
firewall-cmd --reload
bin/bridge
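bridge runs in the foreground and logs that it is binding to 0.0.0.0:9000. As a quick sanity check from another terminal on the Raspberry Pi (assuming the defaults from environment.sh), fetch the console page:
curl -s http://localhost:9000 | head -n 5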
b. Build the container image and run in MicroShift
Create the following Dockerfile.arm64
FROM arm64v8/debian AS build
RUN apt-get update;apt-get -y install gcc g++ make git curl wget
RUN wget https://golang.org/dl/go1.17.2.linux-arm64.tar.gz
RUN rm -rf /usr/local/go && tar -C /usr/local -xzf go1.17.2.linux-arm64.tar.gz
RUN rm -f go1.17.2.linux-arm64.tar.gz
RUN export PATH=$PATH:/usr/local/go/bin;export GOPATH=/root/go;go version
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
RUN curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor > /usr/share/keyrings/yarnkey.gpg
RUN echo "deb [signed-by=/usr/share/keyrings/yarnkey.gpg] https://dl.yarnpkg.com/debian stable main" > /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get -y install yarn
RUN mkdir -p /go/src/github.com/openshift/console/
ADD . /go/src/github.com/openshift/console/
WORKDIR /go/src/github.com/openshift/console/
RUN yarn config set network-timeout 300000
RUN export PATH=$PATH:/usr/local/go/bin;export GOPATH=/root/go;./build.sh
FROM arm64v8/debian
COPY --from=build /go/src/github.com/openshift/console/frontend/public/dist /opt/bridge/static
COPY --from=build /go/src/github.com/openshift/console/bin/bridge /opt/bridge/bin/bridge
COPY --from=build /go/src/github.com/openshift/console/pkg/graphql/schema.graphql /pkg/graphql/schema.graphql
LABEL io.k8s.display-name="OpenShift Console" \
io.k8s.description="This is a component of OpenShift Container Platform and provides a web console." \
io.openshift.tags="openshift" \
maintainer="Alexei Karve <karve@us.ibm.com>"
# doesn't require a root user.
USER 1001
CMD [ "/opt/bridge/bin/bridge", "--public-dir=/opt/bridge/static" ]
Install Podman or Docker and build the container image using the Dockerfile.arm64. The build takes a few hours on the Raspberry Pi 4.
apt-get -y install podman
podman build -t docker.io/karve/console:latest -f Dockerfile.arm64 .
podman push docker.io/karve/console:latest
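Pushing requires being logged in to your repository on Docker Hub; if you push your own build, substitute your own user id for karve in the image tags and log in first:
podman login docker.io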
For local development, we disable OAuth and run “bridge” with an OpenShift user's access token.
oc create serviceaccount console -n kube-system
oc create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
oc get serviceaccount console --namespace=kube-system -o jsonpath='{.secrets[0].name}'
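The endpoint for BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT is the MicroShift API server URL; it can be read from the kubeconfig currently in use:
oc whoami --show-server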
Replace the BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT value and the secret name under secretKeyRef for BRIDGE_K8S_AUTH_BEARER_TOKEN in the yaml below:
okd-web-console-install.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-deployment
  namespace: kube-system
  labels:
    app: console
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console
  template:
    metadata:
      labels:
        app: console
    spec:
      containers:
        - name: console-app
          image: docker.io/karve/console:latest
          env:
            - name: BRIDGE_USER_AUTH
              value: disabled
            - name: BRIDGE_K8S_MODE
              value: off-cluster
            - name: BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT
              value: https://192.168.1.209:6443
            - name: BRIDGE_K8S_MODE_OFF_CLUSTER_SKIP_VERIFY_TLS
              value: "true"
            - name: BRIDGE_K8S_AUTH
              value: bearer-token
            - name: BRIDGE_K8S_AUTH_BEARER_TOKEN
              valueFrom:
                secretKeyRef:
                  name: console-token-ztr6r
                  key: token
---
kind: Service
apiVersion: v1
metadata:
  name: console-np-service
  namespace: kube-system
spec:
  selector:
    app: console
  type: NodePort
  ports:
    - name: http
      port: 9000
      targetPort: 9000
      nodePort: 30036
      protocol: TCP
...
Create the Deployment, Service and Route for the Web Console
oc apply -f okd-web-console-install.yaml
oc expose svc console-np-service -n kube-system
oc get routes -n kube-system
oc logs deployment/console-deployment -f -n kube-system
Output:
root@raspberrypi:~# oc logs deployment/console-deployment -f -n kube-system
W0228 23:41:18.763662 1 main.go:212] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!
W0228 23:41:18.764022 1 main.go:345] cookies are not secure because base-address is not https!
W0228 23:41:18.764141 1 main.go:650] running with AUTHENTICATION DISABLED!
I0228 23:41:18.767700 1 main.go:766] Binding to 0.0.0.0:9000...
I0228 23:41:18.767787 1 main.go:771] not using TLS
root@raspberrypi:~# oc get routes -n kube-system
NAME                 HOST/PORT                                       PATH   SERVICES             PORT   TERMINATION   WILDCARD
console-np-service   console-np-service-kube-system.cluster.local           console-np-service   http                 None
Add the Raspberry Pi IP address to /etc/hosts on your MacBook Pro to resolve console-np-service-kube-system.cluster.local. Now you can access the OKD Web Console from your laptop at http://console-np-service-kube-system.cluster.local/.
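For example, assuming the Raspberry Pi is at 192.168.1.209 as in the yaml above, the /etc/hosts entry looks like:
192.168.1.209 console-np-service-kube-system.cluster.local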
Creating Virtual Machine Instances
A virtual machine instance (VMI) is the representation of a running virtual machine. Let’s create three standalone VMIs: Alpine, Cirros and Fedora using the vmi-alpine.yaml, vmi-cirros.yaml, and vmi-fedora.yaml manifests from the microshift repo respectively; a sketch of such a manifest follows.
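For reference, a minimal VirtualMachineInstance manifest looks roughly like the sketch below; the actual vmi-fedora.yaml in the repo may differ in details. It boots the Fedora container disk (the same image preloaded in the “Preload Images” section) and sets the password fedora through cloud-init:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-fedora
spec:
  domain:
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
        - name: cloudinitdisk
          disk:
            bus: virtio
      interfaces:
        - name: default
          masquerade: {}
    resources:
      requests:
        memory: 1Gi
  networks:
    - name: default
      pod: {}
  volumes:
    - name: containerdisk
      containerDisk:
        image: quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
    - name: cloudinitdisk
      cloudInitNoCloud:
        userData: |
          #cloud-config
          password: fedora
          chpasswd: { expire: False }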
cd /root/microshift/raspberry-pi/vmi
oc apply -f vmi-alpine.yaml
oc apply -f vmi-cirros.yaml
oc apply -f vmi-fedora.yaml
oc wait --for=condition=Ready vmis/vmi-fedora --timeout=300s
oc get vmi
Output:
root@raspberrypi:~/microshift/raspberry-pi/vmi# oc get vmi
NAME         AGE   PHASE     IP           NODENAME                  READY
vmi-alpine   25m   Running   10.42.0.13   raspberrypi.example.com   True
vmi-cirros   23m   Running   10.42.0.14   raspberrypi.example.com   True
vmi-fedora   51s   Running   10.42.0.15   raspberrypi.example.com   True
With the three VMs running, it would be great to access them not only through the Console in the UI but also by connecting to them with an ssh client, either from another pod or from outside the cluster.
Connecting to the Virtual Machine Instances
In this section, we look at multiple ways to connect to Virtual Machine Instances.
a. From Console Web UI
Virtual Machine Instances are displayed in the UI.
Select the Console tab and choose “Serial console” to connect to the VM.
b. Using virtctl
The experimental ARM64 developer builds do not include the virtctl binary, so we need to build virtctl for arm64 ourselves.
apt-get -y install podman # Install podman (or docker)
git clone https://github.com/kubevirt/kubevirt.git
cd kubevirt/
make bazel-build
cp _out/cmd/virtctl/virtctl /usr/local/bin
cd ..
rm -rf kubevirt
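As a quick check that the freshly built arm64 binary works, print the client version (virtctl also reports the server version when it can reach the cluster):
virtctl version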
i. Login to the alpine VM using virtctl
Log in as user root (no password). We will set up the password and IP address with the setup-alpine command.
root@raspberrypi:~/microshift/raspberry-pi/vmi# virtctl console vmi-alpine
Successfully connected to vmi-alpine console. The escape sequence is ^]
Welcome to Alpine Linux 3.11
Kernel 5.4.27-0-virt on an aarch64 (/dev/ttyAMA0)
localhost login: root
Welcome to Alpine!
The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org/>.
You can setup the system with the command: setup-alpine
You may change this message by editing /etc/motd.
localhost:~# setup-alpine
…
Use the defaults: eth0, dhcp, random for mirrors, and none to store configs; setup-alpine walks through each prompt step by step.
localhost:~# ping google.com
PING google.com (142.250.81.238): 56 data bytes
64 bytes from 142.250.81.238: seq=0 ttl=117 time=4.902 ms
64 bytes from 142.250.81.238: seq=1 ttl=117 time=4.167 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 4.167/4.534/4.902 ms
localhost:~# vi /etc/ssh/sshd_config # set PermitRootLogin yes
localhost:~# service sshd restart
* Stopping sshd ...
[ ok ]
* Starting sshd ...
[ ok ]
localhost:~#
ii. Login to the cirros VM using virtctl
Log in with user id cirros and password gocubsgo.
root@raspberrypi:~/microshift/raspberry-pi/vmi# virtctl console vmi-cirros
Successfully connected to vmi-cirros console. The escape sequence is ^]
vmi-cirros login: cirros
Password:
$
iii. Login to the Fedora VM using virtctl
Log in with user id fedora and password fedora.
root@raspberrypi:~/microshift/raspberry-pi/vmi# virtctl console vmi-fedora
Successfully connected to vmi-fedora console. The escape sequence is ^]
vmi-fedora login: fedora
Password:
[fedora@vmi-fedora ~]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel state UP group default qlen 1000
link/ether 2e:61:1c:ab:6e:97 brd ff:ff:ff:ff:ff:ff
altname enp1s0
inet 10.42.0.15/24 brd 10.42.0.255 scope global dynamic noprefixroute eth0
valid_lft 86312229sec preferred_lft 86312229sec
inet6 fe80::2c61:1cff:feab:6e97/64 scope link
valid_lft forever preferred_lft forever
[fedora@vmi-fedora ~]$ ping google.com
PING google.com (142.250.80.78) 56(84) bytes of data.
64 bytes from lga34s35-in-f14.1e100.net (142.250.80.78): icmp_seq=1 ttl=116 time=5.31 ms
64 bytes from lga34s35-in-f14.1e100.net (142.250.80.78): icmp_seq=2 ttl=117 time=5.14 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 5.141/5.226/5.312/0.085 ms
[fedora@vmi-fedora ~]$
c. Login to each of the three VMs from a pod in MicroShift
Build the ssh client image from a Dockerfile (sketched below) and run a pod. We will ssh to the VMIs from this pod.
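The client image is essentially an alpine base with the openssh client added; a minimal sketch along these lines (the exact Dockerfile may differ):
FROM arm64v8/alpine:latest
# Install the ssh client used to reach the VMIs
RUN apk update && apk add --no-cache openssh-client
CMD ["/bin/sh"]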
podman build -t docker.io/karve/alpine-sshclient:arm64 .
podman push docker.io/karve/alpine-sshclient:arm64
oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
Alternatively, run a stock alpine image and install the ssh client inside it:
kubectl run alpine --privileged --rm -ti --image=alpine -- /bin/sh
apk update && apk add --no-cache openssh-client
Output:
root@raspberrypi:~/microshift/raspberry-pi/vmi# oc run sshclient --privileged --rm -ti --image=karve/alpine-sshclient:arm64 -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ # ping 10.42.0.14
PING 10.42.0.14 (10.42.0.14): 56 data bytes
64 bytes from 10.42.0.14: seq=0 ttl=64 time=0.464 ms
64 bytes from 10.42.0.14: seq=1 ttl=64 time=0.400 ms
^C
--- 10.42.0.14 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.400/0.432/0.464 ms
/ # ping 10.42.0.15
PING 10.42.0.15 (10.42.0.15): 56 data bytes
64 bytes from 10.42.0.15: seq=0 ttl=64 time=0.468 ms
64 bytes from 10.42.0.15: seq=1 ttl=64 time=0.508 ms
^C
--- 10.42.0.15 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.468/0.488/0.508 ms
/ # ssh fedora@10.42.0.15
The authenticity of host '10.42.0.15 (10.42.0.15)' can't be established.
ED25519 key fingerprint is SHA256:OndsjKvhEJyYhXqQZUcLu2lf9BxKIJBUolN8ZmTt+2U.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.15' (ED25519) to the list of known hosts.
fedora@10.42.0.15's password:
[fedora@vmi-fedora ~]$ exit
logout
Connection to 10.42.0.15 closed.
/ # ssh cirros@10.42.0.14
The authenticity of host '10.42.0.14 (10.42.0.14)' can't be established.
ECDSA key fingerprint is SHA256:z59lFNp3QynajepOYKHL/xMP4YzJHLthM6+/jYp2y88.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.42.0.14' (ECDSA) to the list of known hosts.
cirros@10.42.0.14's password:
$ exit
Connection to 10.42.0.14 closed.
/ # ssh root@10.42.0.13
root@10.42.0.13's password:
Welcome to Alpine!
The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org/>.
You can setup the system with the command: setup-alpine
You may change this message by editing /etc/motd.
localhost:~# exit
Connection to 10.42.0.13 closed.
/ #
The above output shows us pinging a couple of VMIs and logging in to the Fedora, Cirros, and Alpine VMIs. We had already set up sshd on the Alpine VMI with “PermitRootLogin yes” when we logged in using the “virtctl console” command.
d. Accessing VMs using NodePort
We can expose port 22 of the Fedora Virtual Machine Instance on service port 22222. We can then connect through the assigned NodePort (31440 below).
virtctl expose vmi vmi-fedora --port=22222 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Output:
root@raspberrypi:~/microshift/raspberry-pi/vmi# virtctl expose vmi vmi-fedora --port=22222 --target-port=22 --name=vmi-fedora-ssh --type=NodePort
Service vmi-fedora-ssh successfully exposed for vmi vmi-fedora
root@raspberrypi:~/microshift/raspberry-pi/vmi# oc get svc vmi-fedora-ssh
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
vmi-fedora-ssh   NodePort   10.43.84.161   <none>        22222:31440/TCP   10s
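With the NodePort service in place, we can ssh to the Fedora VMI from outside the cluster using the Raspberry Pi’s address, for example:
ssh -p 31440 fedora@raspberrypi.example.com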
Deleting the Virtual Machine Instances
cd /root/microshift/raspberry-pi/vmi
oc delete -f vmi-alpine.yaml
oc delete -f vmi-cirros.yaml
oc delete -f vmi-fedora.yaml
Creating a Virtual Machine
We will create a Virtual Machine (VM) using the vm-fedora.yaml. When a VMI is owned by a VM or by another object, it is managed through its owner in the web console or by using the oc command-line interface (CLI). A VirtualMachine provides additional management capabilities for a VirtualMachineInstance inside the cluster. A VirtualMachine makes sure that a VirtualMachineInstance object with an identical name is present in the cluster if spec.running is set to true, and that the VirtualMachineInstance is removed from the cluster if spec.running is set to false. Once a VirtualMachineInstance is created, its state is tracked via the status.created and status.ready fields of the VirtualMachine.
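virtctl start and virtctl stop work by flipping spec.running on the VirtualMachine; the same effect can be achieved by patching the object directly, for example:
oc patch vm vm-example --type merge -p '{"spec":{"running":true}}'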
cd /root/microshift/raspberry-pi/vmi
oc apply -f vm-fedora.yaml
virtctl start vm-example
virtctl console vm-example
Output:
root@raspberrypi:~/microshift/raspberry-pi/vmi# oc apply -f vm-fedora.yaml
virtualmachine.kubevirt.io/vm-example created
root@raspberrypi:~/microshift/raspberry-pi/vmi# oc get vm,vmi
NAME                                    AGE     STATUS    READY
virtualmachine.kubevirt.io/vm-example   3m40s   Stopped   False
root@raspberrypi:~/microshift/raspberry-pi/vmi# virtctl start vm-example
VM vm-example was scheduled to start
root@raspberrypi:~/microshift/raspberry-pi/vmi# oc get vm,vmi
NAME                                    AGE     STATUS     READY
virtualmachine.kubevirt.io/vm-example   6m43s   Starting   False

NAME                                            AGE   PHASE       IP    NODENAME                  READY
virtualmachineinstance.kubevirt.io/vm-example   9s    Scheduled         raspberrypi.example.com   False
root@raspberrypi:~/microshift/raspberry-pi/vmi# oc get pods
NAME                             READY   STATUS    RESTARTS   AGE
virt-launcher-vm-example-gc6n5   2/2     Running   0          17s
root@raspberrypi:~/microshift/raspberry-pi/vmi# oc get vm,vmi
NAME                                    AGE   STATUS    READY
virtualmachine.kubevirt.io/vm-example   7m    Running   True

NAME                                            AGE   PHASE     IP           NODENAME                  READY
virtualmachineinstance.kubevirt.io/vm-example   26s   Running   10.42.0.13   raspberrypi.example.com   True
root@raspberrypi:~/microshift/raspberry-pi/vmi# virtctl console vm-example
Successfully connected to vm-example console. The escape sequence is ^]
vm-example login: fedora
Password:
[fedora@vm-example ~]$ ping google.com
PING google.com (142.250.80.46) 56(84) bytes of data.
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=1 ttl=115 time=4.10 ms
64 bytes from lga34s34-in-f14.1e100.net (142.250.80.46): icmp_seq=2 ttl=115 time=3.31 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.314/3.707/4.100/0.393 ms
[fedora@vm-example ~]$ exit
logout
Fedora 32 (Cloud Edition)
Kernel 5.6.6-300.fc32.aarch64 on an aarch64 (ttyAMA0)
SSH host key: SHA256:kij4h5HVVXR+jDu3xIDpJAoJyOYpRYJxtlk01lDNgFw (RSA)
SSH host key: SHA256:IfBmuz+HYD5XlqADHDmv2aG/SLhOncB8mqp400oipLY (ECDSA)
SSH host key: SHA256:sxgqXNtK2SOvRbXwVjYmPDH8fgNTs4iGQJNGhB4Dl/Y (ED25519)
eth0: 10.0.2.2 fe80::5054:ff:fe0e:772b
vm-example login:
Using the Web Console, we can see the VMs and VMIs in a single list view. VMIs are not surfaced in the list if they are owned by a VM. In the picture below, the VM vm-example and the VMI vmi-fedora were created independently.
We can see the VNC console for the vm-example in the picture below. You can also connect to the Serial console. You may need to click “Open the console in a new window” for the VNC Console. We can also execute the Actions from the UI.
We can delete the Virtual Machine from the Actions menu.
With MicroShift Virtualization, VMs run as first-class citizens next to container workloads. In addition to exposing ssh as a service, we can expose other services from the project the VM is running in. Deployed in the same project, VMs and pods share the same network and can access each other’s services.
Testing KVM on the Raspberry Pi OS
We will download and run the Fedora Cloud VM on the Raspberry Pi 4. The virt-customize command lets us set a default root password and uninstall cloud-init from the downloaded image.
wget https://fedora.mirror.garr.it/fedora/linux/releases/34/Cloud/aarch64/images/Fedora-Cloud-Base-34-1.2.aarch64.qcow2
mv Fedora-Cloud-Base-34-1.2.aarch64.qcow2 /var/lib/libvirt/images/fedora-cloud.qcow2
chmod 777 /var/lib/libvirt/images/fedora-cloud.qcow2 # or change owner to libvirt-qemu
apt-get install -y libguestfs-tools
virt-customize -a /var/lib/libvirt/images/fedora-cloud.qcow2 --root-password password:fedora --uninstall cloud-init
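virt-install needs an --os-variant known to the local osinfo database. To list the Fedora variants available (osinfo-query ships in the libosinfo-bin package on Debian-based distributions):
sudo apt-get install -y libosinfo-bin
osinfo-query os | grep -i fedora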
virt-install \
--name fedora-cloud \
--vcpus 4 \
--memory 4096 \
--disk /var/lib/libvirt/images/fedora-cloud.qcow2 --import \
--network default \
--graphics none \
--os-variant fedora33 # fedora34 not seen in /usr/share/osinfo/os/fedoraproject.org/
Output:
…
fedora login: root
Password:
[systemd]
Failed Units: 1
ldconfig.service
[root@fedora ~]# ping google.com
PING google.com (142.250.72.110) 56(84) bytes of data.
64 bytes from lga34s32-in-f14.1e100.net (142.250.72.110): icmp_seq=1 ttl=117 time=4.01 ms
64 bytes from lga34s32-in-f14.1e100.net (142.250.72.110): icmp_seq=2 ttl=117 time=3.92 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.920/3.964/4.009/0.044 ms
[root@fedora ~]#
We can disconnect with ^]. We will connect to this VM again after listing the VMs.
root@raspberrypi:~# virsh list
 Id   Name           State
-----------------------------
 1    fedora-cloud   running
Reconnect to the VM
root@raspberrypi:~# virsh console fedora-cloud
Connected to domain 'fedora-cloud'
Escape character is ^] (Ctrl + ])
[root@fedora ~]#
We can disconnect with ^], then shutdown and undefine the VM.
root@raspberrypi:~# virsh shutdown fedora-cloud
Domain 'fedora-cloud' is being shutdown
root@raspberrypi:~# virsh undefine --nvram fedora-cloud
Domain 'fedora-cloud' has been undefined
Error Messages in MicroShift Logs
Watching the MicroShift logs with the following journalctl command continuously shows “failed to fetch hugetlb info” warning messages. The default kernel for the Raspberry Pi OS does not support HugeTLB hugepages.
journalctl -u microshift -f
Output:
...
Mar 01 15:37:24 raspberrypi.example.com microshift[108923]: W0301 15:37:24.217665 108923 container.go:586] Failed to update stats for container "/system.slice/system-getty.slice": error while statting cgroup v2: [open /sys/kernel/mm/hugepages: no such file or directory
Mar 01 15:37:24 raspberrypi.example.com microshift[108923]: failed to fetch hugetlb info
Mar 01 15:37:24 raspberrypi.example.com microshift[108923]: github.com/opencontainers/runc/libcontainer/cgroups/fs2.statHugeTlb
Mar 01 15:37:24 raspberrypi.example.com microshift[108923]: /opt/app-root/src/github.com/redhat-et/microshift/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs2/hugetlb.go:35
Mar 01 15:37:24 raspberrypi.example.com microshift[108923]: github.com/opencontainers/runc/libcontainer/cgroups/fs2.(*manager).GetStats
Mar 01 15:37:24 raspberrypi.example.com microshift[108923]: /opt/app-root/src/github.com/redhat-et/microshift/vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs2/fs2.go:123
...
To remove these messages, we can recompile the microshift binary with the change to hugetlb.go shown below.
apt -y install build-essential curl libgpgme-dev pkg-config libseccomp-dev
# Install golang
wget https://golang.org/dl/go1.17.2.linux-arm64.tar.gz
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.17.2.linux-arm64.tar.gz
rm -f go1.17.2.linux-arm64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
cat << 'EOF' >> /root/.bashrc
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/root/go
EOF
mkdir -p $GOPATH
git clone https://github.com/thinkahead/microshift.git
cd microshift
Edit the file vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs2/hugetlb.go and remove the return on err.
func statHugeTlb(dirPath string, stats *cgroups.Stats) error {
    hugePageSizes, _ := cgroups.GetHugePageSize()
    //hugePageSizes, err := cgroups.GetHugePageSize()
    //if err != nil {
    //    return errors.Wrap(err, "failed to fetch hugetlb info")
    //}
    hugetlbStats := cgroups.HugetlbStats{}
Build and replace the microshift binary. Restart MicroShift.
make
mv microshift /usr/local/bin/.
systemctl restart microshift
Preload Images
To prevent an unresponsive node and high iowait after MicroShift starts, preload the VM images into cri-o before installing the kubevirt-operator. For example:
crictl pull quay.io/kubevirt/fedora-cloud-container-disk-demo:20210811_9fec1f849-arm64
If you do not want to build the OKD Web Console for arm64, you can preload this image:
crictl pull docker.io/karve/console:latest
Conclusion
In this Part 9, we saw how to build and manage VMs using KubeVirt on MicroShift on the Raspberry Pi 4 with the Raspberry Pi OS (64 bit). We ran samples with Alpine, Cirros, and Fedora. We showed multiple ways to connect to these VMs. MicroShift allows a unified platform where we can build, modify, and deploy applications residing in both Pods as well as Virtual Machines in a common and shared environment. In Part 10, Part 11, and Part 12, we will deploy MicroShift and KubeVirt on Fedora IoT, Fedora Server and Fedora CoreOS respectively.
Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your use of MicroShift and Virtualization on ARM devices and if you would like to see something covered in more detail.
#MicroShift #Openshift #containers #crio #Edge #raspberry-pi #raspberry-pi-os