Working with MicroShift on MacBook Pro with VirtualBox and Multipass
Introduction
MicroShift is a research project that is exploring how the OpenShift OKD Kubernetes distribution can be optimized for small form factor devices and edge computing. Edge devices pose very different operational, environmental, and business challenges from those of cloud computing. In this introductory article of the series, we will look at the different edge computing requirements and where MicroShift fits in. We will build and deploy MicroShift on a Fedora 35 virtual machine in VirtualBox and also in an Ubuntu 20.04 VM using Multipass on a MacBook Pro, and finally we will look at some samples to run on MicroShift. In the next parts of this series, we will continue by building and deploying MicroShift on ARM devices, KubeVirt, and Kata Containers. MicroShift is still in its early days and moving fast. It can be deployed on various platforms and aims to provide a development and management experience consistent with standard OpenShift.
Overview of Edge Computing
The Edge is a distributed computing architecture that brings compute closer to the things and people that produce and use the data. Edge computing complements and extends the Internet of Things (IoT) and sensors that provide sensory data for upstream processing. The exponential growth of mobile and IoT applications continues to fuel the need for a highly available, low-latency platform to process data being generated and consumed at the edge. For real-time control systems like automated vehicles, factory automation and field inventory control, cloud network latency is too high. The Cloud is limited by its centralized processing model. Decentralizing is the next step in delivering a faster, content-rich experience to customers as cloud operations shift to the edge, facilitated by a “consistent distributed platform” for application lifecycle management. The edge introduces significant challenges in the operating model that must be balanced across a myriad of conflicting requirements:
- Node Autonomy for disconnected operations and Network Tolerance – outages in connectivity are normal. The applications have to handle the specific use cases (car/truck automotive, airplanes, warehouses, oil rigs, etc.) for disconnected environments, i.e. what to do with the sensors/data when connectivity is degraded
- Resource-efficient – Memory, storage, and network are constrained, ability to run image processing using GPU acceleration at the edge, data reduction/data sketching to reduce network traffic - cannot flow all data to the central data center
- Realtime characteristics, low latency
- Management of edge requires consideration of two orthogonal issues: Platform and Application. The platform may be a Location, a Node/Device, a grouping of Nodes or a Cluster. The application lifecycle management offers the ability to create, deploy, run, secure, monitor, maintain and scale applications and AI analytics across edge deployments
- Fleet management capability to allow scaling 10s of millions of devices. The devices may have a natural hierarchy that allows distributed management
- Group operations through taint, annotation, label, etc. Multiple nodes need to communicate with each other in a group. There is a need to address homogeneity and non-homogeneity of devices on the edge, interoperability of hardware and consistency of operations
- Health check or Heartbeat from multiple disconnected nodes.
- Zero Touch – plug-and-go (wire it, and leave)
- Safe – difficult to ‘brick’
- Secure – can operate behind NAT and Firewalls
- Asynchronous management spread in inaccessible areas. For example: deep reaches of space, assembly lines, railroad tracks
- Ruggedized – Edge environments might include an autonomous vehicle, an industrial processing facility, or even outer space. Equipment must often operate in harsh settings that can include very large temperature variations, temperature extremes, dust, vibration, and more
- Reproducible – Must be a repeatable solution
- Observable – Monitoring for status and logging, based on end-to-end observability
- Immutable OS – Edge-optimized OSes like Fedora IoT and RHEL for Edge offer efficient OTA transactional updates, rollback (greenboot), container-hosting compute devices
There are three categories of edge use cases in Red Hat's approach which sometimes overlap:
- Enterprise edge use cases feature an enterprise data store at the core, in a datacenter or as a cloud resource. The enterprise edge allows users to extend their application services to remote locations
- Operations edge use cases concern industrial edge devices, with significant involvement from operational technology (OT) teams. The operations edge is a place to gather, process, and act on data, right there on site
- Provider edge use cases involve both building out networks and offering services delivered with them, as in the case of a telecommunications company. The service provider edge supports reliability, low latency, and high performance with computing environments close to customers and devices
Red Hat Portfolio
Red Hat offers a broad portfolio of products and services to help customers with edge requirements.
- Red Hat OpenShift on Cloud/Hyperscalers/On-Premises
- Highly opinionated Kubernetes platform built for an open hybrid cloud strategy with out-of-the-box features: RHEL/RHCOS, monitoring based on Prometheus and Grafana, secure networking through OpenShift Service Mesh (Istio, Kiali, Jaeger), Tekton pipeline, Templates vs Helm 3, logging stack (Kibana, Fluentd, Elasticsearch), Serverless (Knative), Virtualization
- Cloud Platform Agnostic - Consistent application platform for the management of existing, modernized, and cloud-native applications that runs on any cloud/hyperscalers
- Common abstraction layer across any infrastructure to give both developers and operations teams commonality in how applications are packaged, deployed, and managed
- Ability to use the ecosystem (same tools and processes) while easing the burden of configuring, deploying, provisioning, managing, tracking metrics, and monitoring even the largest-scale containerized applications
- Flexibility of cloud services, virtualization, microservices, and containerization combined with the speed and efficiency of edge computing to increase functionality, decrease latency, expand bandwidth, and get the most from your virtual and on-premises infrastructure
- Self-Managing - Operators manage the whole cluster including the underlying OS
- Requirements – 3 Master/infra nodes (8 vCPU, 32GB RAM), 2+ Compute nodes (16 vCPU, 64GB RAM), Load balancer (4 vCPU, 4GB RAM)
- OpenShift at the Edge - Provides users with a consistent experience across the sites where OpenShift is deployed, regardless of the size of the deployment
- OpenShift 4.5 Three-node (3+) architecture with HA
- Both supervisor and worker roles on RHEL CoreOS, 6 CPU, 24GB RAM, 120GB disk space
- OpenShift 4.6 Remote Worker Node Topology
- Supervisory nodes located in larger sites (regional/central) and worker nodes distributed across smaller edge sites, enabling efficient use of worker nodes in their entirety for workloads
- Latency between the control plane and the edge site must be under 200 ms (not disconnected)
- Mitigation strategies for network separation based on standard Kube machinery: using DaemonSets and static pods to deal with network partitions, disruptions, power loss, and reboot scenarios; configuring the kubelet to control when it marks nodes as unhealthy; using Kubernetes zones to control pod eviction
- OpenShift 4.9 Single Node OpenShift (SNO) – low bandwidth, disconnected
- Control and worker capabilities into a single server, no option to add additional hosts
- Minimum host resources: 8 CPU, 32GB RAM, 120GB disk space
- Does not offer high availability
- Workload "autonomous" to continue operating with its existing configuration and running the existing workload, even when any centralized management functionality is unavailable
- Assisted Installer from the OpenShift customer portal
- RHEL for Edge (RHEL 8.4) - Edge-focused updates to Red Hat Enterprise Linux include:
- Rapid creation of operating system images for the edge through the Image Builder capability as the engine to create rpm-ostree images. This enables IT organizations to more easily create purpose-built images optimized for the broad architectural challenges inherent to edge computing but customized for the exact needs of a given deployment
- Remote device update mirroring to stage and apply updates at the next device reboot or power cycle, helping to limit downtime and manual intervention from IT response teams. OS image updates are staged in the background
- Over-the-air updates that transfer less data while still pushing necessary code, an ideal feature for sites with limited or intermittent connectivity. Network efficient OTA updates deliver only deltas
- Intelligent rollbacks built on OSTree capabilities, which enable users to provide health checks specific to their workloads to detect conflicts or code issues. When a problem is detected, the image is automatically reverted to the last good update, helping to prevent unnecessary downtime at the edge. Rollbacks controlled by application health-checks using the greenboot framework
- Podman 3 for automatic container image updates
- Edge-focused improvement to Universal Base Image (UBI)
- Fairly static control plane that does not need the dynamicity of Kubernetes
- Red Hat Ansible Automation Platform for Edge – automation for connecting a variety of devices, applications, and data
- Repeatable instructions and processes that reduce human interaction with IT systems
- Roll out automation software closer to the physical execution location at the edges of a network to speed up transactions
- Can run as a fully supported solution on top of a Linux® operating system (OS) or container orchestration engine at the network’s edge
What is MicroShift?
MicroShift is an experimental small form-factor OpenShift for low-resource (CPU: ARM Cortex or Intel Atom class with 2 cores, 2GB RAM, around 124MB microshift binary), field-deployed devices (SBCs, SoCs – Raspberry Pi 4, Jetson Nano): a minimal Kubernetes container orchestration layer extended with OpenShift APIs that runs on <1GB RAM and <1 CPU core, consumes <500MB on the wire, and takes <1GB at rest (excluding etcd state). The memory footprint is reduced primarily by running many components inside a single process, which eliminates significant overhead that would otherwise be duplicated for each component. The binary is made smaller by removing third-party storage drivers and cloud providers. Dropping cluster operators significantly reduces MicroShift’s resource footprint, but comes at the cost of requiring more manual, lower-level configuration from users operating outside of MicroShift’s opinionated parameters. If a piece of functionality can be added post-cluster-up with reasonable effort, then it is not part of the MicroShift core/binary. MicroShift does not attempt to squeeze out the last 20% of resources, e.g., by patching or compressing code or inventing lighter-weight components. MicroShift enables:
- Reuse of OpenShift's code, operational logic, and production chain tools and processes where possible with reasonable effort (openshift-dns, openshift-router, service-ca, local storage provider)
- Use of OpenShift content to achieve “develop once, deploy anywhere”, but only providing a subset of OpenShift features and not offering the management flexibility and convenience of cluster operators
- Making control plane restarts cheap - no workload impact, fast (re)starts, and lifecycle management as a single unit by the OS
- Updates, rollbacks, and config changes. These become a matter of staging another version in parallel and then - totally without relying on network at this point - flipping to/from that version
- Autonomous operation - it does not require external orchestration
- The binary only runs the Control Plane/Node process; it is not a tool to manage or be a client to those processes (like oc or kubectl)
- Host networking configured by device management. MicroShift must work with what it's been given by the host OS
MicroShift Deployment Strategies
- MicroShift may be deployed containerized on Podman/Docker or native via rpm, managed via systemd
- Containerized: Podman 3.4 auto-update pulls down new images if needed and restarts the containers
- Native: Image Builder as the engine with rpm-ostree images. With the greenboot health check framework for systemd, administrators can ensure the system boots into the expected state
- MicroShift is delivered as an application customers deploy on managed RHEL for Edge devices that:
- cannot assume any responsibility or control over the device or OS it runs on
- internet connectivity intermittent, high latency, low bandwidth, expensive (disconnected mode)
- minimal open server ports typically firewalled and NATed, not accessible from outside
- no SSH access, no way to reconfigure or add network services (PXE, DNS, bastion node)
- no Kubernetes API access from outside LAN (incl. management hub), no access as privileged user
- communication is always initiated from the device/cluster towards the management system
- measured boot + remote attestation (authenticate h/w s/w config to remote server)
- control plane-only instances should run completely non-privileged
- instances with node role run a kubelet and kube-proxy and interface with CRI-O for running workloads, thus require higher system privileges
MicroShift Features
- Deployments with 1 or 3 control plane instances and 0..N worker instances
- MicroShift's lifecycle is decoupled from the underlying OS's lifecycle
- Updates or changes to MicroShift do not disrupt running workloads
- Meets DISA STIG and FedRAMP security requirements; it runs as non-privileged workload and supports common CVE and auditing workflows
- Allows segregation between the "edge device administrator" and the "edge service development and operations" personas
- the former for device+OS lifecycle and installing MicroShift as a workload, the latter for services on and resources of the MicroShift cluster
- Provides application-level events and metrics for observability
- MicroShift runs all workloads that OpenShift runs, except those which depend on OpenShift's cluster operators
- Clusters can be managed like OpenShift through Open Cluster Management
Deploying MicroShift on VirtualBox
We will use Vagrant to create a VM in VirtualBox on the MacBook Pro. The Vagrantfile for Fedora 35 uses the fedora/35-cloud-base box. You can replace it with fedora/34-cloud-base or another image of your choice. The Vagrantfile.fc36 may be used for Fedora 36 and the Vagrantfile.fc37 for Fedora Rawhide. We configure the VM with 3GB RAM and 2 CPUs before starting the Vagrant box with the command:
vagrant up
Once the VM is up, we can connect to it and watch the node and pods in MicroShift:
vagrant ssh
sudo su –
./w.sh
The Vagrantfile installs the MicroShift dependencies (container-selinux, containernetworking-plugins, cri-o, etc.), sets up the networking configuration, opens the firewall ports, and downloads kubectl, the oc command-line interface, helm, and odo - a fast and easy-to-use CLI tool for creating applications. When MicroShift starts for the first time, the kubeconfig file is created. The KUBECONFIG is exported in the .bash_profile for root. If you need it for another user or want to use it externally, you can copy it from /var/lib/microshift/resources/kubeadmin/kubeconfig. Once the installation is complete and all the pods are in Running state, the single-node MicroShift cluster is operational and ready to deploy workloads.
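As a quick sanity check, you can point oc at the generated kubeconfig and confirm that the node is Ready and the pods have come up (the exact pod list can vary between MicroShift versions):
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
oc get nodes -o wide
oc get pods -A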
The following patch may be required if the dns-default pod in the openshift-dns namespace keeps restarting.
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
oc patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","resources": {"requests": {"cpu": "80m","memory": "90Mi"}}}]}}}}'
You may also need to patch the service-ca deployment if it keeps restarting:
oc patch deployments/service-ca -n openshift-service-ca -p '{"spec": {"template": {"spec": {"containers": [{"name": "service-ca-controller","args": ["-v=4"]}]}}}}'
In addition to the SSH port 22, port 80 is forwarded. This allows us to connect to the routes exposed from the VM on port 80 using localhost on the Mac.
We can look at the microshift logs with the command:
journalctl -u microshift -f
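If MicroShift does not come up cleanly, the CRI-O journal and a bounded slice of the MicroShift journal are usually the quickest places to look; for example:
journalctl -u crio -f                                          # follow the CRI-O runtime logs
journalctl -u microshift --since "10 minutes ago" --no-pager   # recent MicroShift logs without paging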
Samples in MicroShift
The following samples show the use of odo, persistent volumes, helm, and the web console in MicroShift.
- Node-RED sample
We will use odo to deploy Node-RED using the following command:
./odo-sample.sh
It should finally show the following output in the logs:
Welcome to Node-RED
===================
You can hit Ctrl-C to get out of the logs. During “npm install”, if you see the error “ssh: Could not resolve hostname github.com: Name or service not known”, you may want to run the patches from above and then run the odo-sample.sh script again.
Add the following to /etc/hosts on your Mac
127.0.0.1 test-app-node-red.cluster.local
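With the hosts entry in place and port 80 forwarded, a quick check from the MacBook could be the following (this assumes the route created by odo-sample.sh is already up):
curl -s http://test-app-node-red.cluster.local/ | head -n 5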
- Mysql sample
Run the helm mysql sample as follows:
oc new-project mysql
./sql-sample.sh
oc get pods -n mysql -w
Wait for the mysql pod to go to Running state. The sql-sample.sh creates the persistent volume with hostpathpv.yaml and fixes the read-only error using secpatch.yaml. We will create a new pod to connect to the mysql server. The pod cannot resolve external IP addresses; to fix this, first run:
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i cni0 -j ACCEPT
firewall-cmd --reload
Get the clusterIP for the mysql service
clusterip=`oc get svc -n mysql -o jsonpath='{.items[0].spec.clusterIP}'`
echo $clusterip
kubectl run -i --tty ubuntu --image=ubuntu:18.04 --restart=Never -- bash -il
apt-get update && apt-get install mysql-client -y
mysql -h$clusterip -umy-user -pmy-password # Replace $clusterip from above
quit
mysql -h$clusterip -uroot -psecretpassword # Replace $clusterip from above
quit
exit
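Before the final exit above, while still inside the ubuntu client pod, you could run a small smoke test against the my-database schema defined by the helm values (replace $clusterip with the value noted above; the greetings table is just an illustrative name):
mysql -h$clusterip -umy-user -pmy-password my-database <<'SQL'
CREATE TABLE IF NOT EXISTS greetings (id INT AUTO_INCREMENT PRIMARY KEY, message VARCHAR(64));
INSERT INTO greetings (message) VALUES ('hello from microshift');
SELECT id, message FROM greetings;
SQL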
If you get an Access denied error connecting as root from this client pod: "ERROR 1045 (28000): Access denied for user 'root'@'10.85.0.1' (using password: YES)", you can run the following on the mysql server directly by connecting as root. After setting the host='%', you can connect remotely using root.
oc exec -it deployment/mysql -c mysql -- bash
mysql -uroot -psecretpassword
update mysql.user set host='%' where user='root';
If you see the "ERROR 1062 (23000): Duplicate entry '%-root' for key 'PRIMARY'", it means the entry for root already exists.
- OKD Web Console
Run the following script to install the Web Console
./okd-web-console-install.sh
oc get pods -n kube-system -w
Wait for console-deployment pod to go to Running state. Add the following to /etc/hosts on your Mac.
127.0.0.1 console-np-service-kube-system.cluster.local
Cleanup MicroShift
The cleanup script to uninstall MicroShift is provided on GitHub. We can run cleanup.sh to uninstall MicroShift.
sudo dnf install git
git clone https://github.com/thinkahead/microshift.git
cd microshift/hack
./cleanup.sh
Building the MicroShift binary
We can build and run the MicroShift binary directly on the VM or using podman in an image.
To build directly on the VM:
sudo dnf -y install git make golang glibc-static
git clone https://github.com/thinkahead/microshift.git
cd microshift
make
./microshift version
Alternatively, to build the image using podman (running on the VM):
sudo dnf -y install git make golang podman
git clone https://github.com/thinkahead/microshift.git
cd microshift
make microshift
The above “make microshift” command will run podman to build an image as follows (partial logs shown below):
echo BIN_TIMESTAMP==2021-11-23T00:13:49Z
BIN_TIMESTAMP==2021-11-23T00:13:49Z
/usr/bin/podman build -t quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-amd64 \
  -f "/root/microshift"/packaging/images/microshift/Dockerfile \
  --build-arg SOURCE_GIT_TAG=4.8.0-0.microshift-unknown \
  --build-arg BIN_TIMESTAMP=2021-11-23T00:13:49Z \
  --build-arg ARCH=amd64 \
  --build-arg MAKE_TARGET="cross-build-linux-amd64" \
  --build-arg FROM_SOURCE=false \
  --platform="linux/amd64" \
  .
…
[2/2] STEP 4/6: COPY --from=builder /opt/app-root/src/github.com/redhat-et/microshift/_output/bin/linux_$ARCH/microshift /usr/bin/microshift
--> aad4f9e0f91
[2/2] STEP 5/6: ENTRYPOINT ["/usr/bin/microshift"]
--> acff101db87
[2/2] STEP 6/6: CMD ["run"]
[2/2] COMMIT quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-amd64
--> 93004757027
Successfully tagged quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-amd64
The logs show that the microshift binary is at /usr/bin/microshift within the image quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-amd64 we just built. Let’s copy it out of the image:
id=$(podman create quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-amd64)
podman cp $id:/usr/bin/microshift ./microshift
./microshift version
podman rm $id
We can also build the all-in-one MicroShift image with the command:
make microshift-aio
The aio image build installs cri-o, container-selinux, containernetworking-plugins within the image quay.io/microshift/microshift-aio:4.8.0-0.microshift-unknown-linux-nft-amd64.
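To try the freshly built all-in-one image without systemd, you could start it manually with podman; the volume and port mappings below mirror the ExecStart of the microshift-aio service shown later (adjust the image tag if your build differs):
sudo podman run -d --rm --name microshift-aio --privileged \
  -v microshift-data:/var/lib -v /lib/modules:/lib/modules \
  -p 6443:6443 \
  quay.io/microshift/microshift-aio:4.8.0-0.microshift-unknown-linux-nft-amd64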
Running MicroShift from our binary directly on the VM
We can run microshift as follows directly on the VM using the binary we just built:
sudo ./microshift run
or, with log verbosity:
sudo ./microshift run -v=<log verbosity>
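If you would rather have systemd supervise the locally built binary, a minimal sketch of a unit could look like the following; the unit name microshift-local and the /root/microshift path are only assumptions for this example (the upstream packaging ships its own microshift.service, used in the next section):
cat << EOF > /etc/systemd/system/microshift-local.service
[Unit]
Description=MicroShift (locally built binary)
Wants=network-online.target crio.service
After=network-online.target crio.service

[Service]
WorkingDirectory=/root/microshift
ExecStart=/root/microshift/microshift run
Restart=always
User=root

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable microshift-local --now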
Running MicroShift within containers
This can be done in two ways:
- MicroShift Containerized - The MicroShift binary runs in a Podman container, the CRI-O systemd service runs directly on the host, and data is stored at /var/lib/microshift and /var/lib/kubelet on the host VM.
curl -o /etc/systemd/system/microshift.service https://raw.githubusercontent.com/redhat-et/microshift/main/packaging/systemd/microshift-containerized.service
curl -o /usr/bin/microshift-containerized https://raw.githubusercontent.com/redhat-et/microshift/main/packaging/systemd/microshift-containerized
# Replace the image in /etc/systemd/system/microshift.service with image we just built earlier quay.io/microshift/microshift:4.8.0-0.microshift-unknown-linux-amd64
sudo systemctl enable crio --now
sudo systemctl enable microshift --now
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
The /etc/systemd/system/microshift.service shows that podman runs microshift within a container, but uses cri-o on the host VM with the volume mounts.
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --cgroups=no-conmon --rm --replace --sdnotify=container --label io.containers.autoupdate=registry --network=host --privileged -d --name microshift -v /var/run:/var/run -v /sys:/sys:ro -v /var/lib:/var/lib:rw,rshared -v /lib/modules:/lib/modules -v /etc:/etc -v /run/containers:/run/containers -v /var/log:/var/log -e KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig quay.io/microshift/microshift:latest
The “podman images” command shows that the images are stored directly on the host VM:
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/microshift/microshift latest 4c0b4ba5c378 3 days ago 373 MB
quay.io/microshift/flannel-cni 4.8.0-0.okd-2021-10-10-030117 1aeff725f9c3 4 days ago 9.15 MB
quay.io/openshift/okd-content <none> d1fa6aa2ebe8 7 weeks ago 417 MB
quay.io/openshift/okd-content <none> f5311e164309 2 months ago 401 MB
quay.io/openshift/okd-content <none> dd9805ef0a0c 3 months ago 383 MB
quay.io/kubevirt/hostpath-provisioner v0.8.0 7cbc61ff04c8 5 months ago 180 MB
quay.io/openshift/okd-content <none> 919f4f2485b1 6 months ago 337 MB
quay.io/coreos/flannel v0.14.0 8522d622299c 6 months ago 68.9 MB
quay.io/openshift/okd-content <none> 811b973eed45 6 months ago 295 MB
k8s.gcr.io/pause 3.5 ed210e3e4a5b 8 months ago 690 kB
The “podman ps -a” command shows that the microshift container is running in podman:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fe662f310675 quay.io/microshift/microshift:latest run 3 minutes ago Up 3 minutes ago microshift
The “crictl pods” command shows the pods created in cri-o on the host VM:
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
a80db6d5107bd 4 minutes ago Ready dns-default-4q9jv openshift-dns 0 (default)
1e8681468c527 4 minutes ago Ready router-default-6c96f6bc66-d4p9c openshift-ingress 0 (default)
73e2bd459b623 5 minutes ago Ready service-ca-84b44986cb-mz86m openshift-service-ca 0 (default)
2870dda21870a 5 minutes ago Ready kube-flannel-ds-g95ch kube-system 0 (default)
a61f142f706e2 5 minutes ago Ready node-resolver-nmlcz openshift-dns 0 (default)
4c4da0663246d 5 minutes ago Ready kubevirt-hostpath-provisioner-mch9x kubevirt-hostpath-provisioner 0 (default)
Using the CRI-O service on the host allows workload continuity during MicroShift upgrades. Clean up MicroShift by running the hack/cleanup.sh script referenced earlier.
- MicroShift Containerized All-In-One - The MicroShift binary and the CRI-O service run within a single Podman container, and data is stored in a podman volume, microshift-data. This should be used for “Testing and Development” only. The /usr/bin/microshift-aio script creates the podman volume and exports the KUBECONFIG from this volume in microshift-aio.conf.
curl -o /etc/systemd/system/microshift.service https://raw.githubusercontent.com/redhat-et/microshift/main/packaging/systemd/microshift-aio.service
curl -o /usr/bin/microshift-aio https://raw.githubusercontent.com/redhat-et/microshift/main/packaging/systemd/microshift-aio
# Replace the image in /etc/systemd/system/microshift.service with image we just built earlier quay.io/microshift/microshift-aio:4.8.0-0.microshift-unknown-linux-nft-amd64
sudo systemctl enable microshift --now
The /etc/systemd/system/microshift.service shows that podman runs microshift within a container. CRI-O also runs within the container.
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon --rm --replace -d --name microshift-aio --privileged -v /lib/modules:/lib/modules -v microshift-data:/var/lib --label io.containers.autoupdate=registry -p 6443:6443 quay.io/microshift/microshift:4.7.0-0.microshift-2021-08-31-224727-aio
The “podman volume ls” command shows the microshift-data volume:
DRIVER VOLUME NAME
local microshift-data
We can get inside the container:
sudo podman exec -ti microshift-aio bash
Inside the container, run the following to see the pods:
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
kubectl get pods -A
exit
We can execute oc and kubectl commands from the VM to see the resources within the podman aio container. On the VM, set the KUBECONFIG by sourcing the /etc/microshift-aio/microshift-aio.conf:
source /etc/microshift-aio/microshift-aio.conf # aio data-dir is a podman volume
echo $KUBECONFIG #/var/lib/containers/storage/volumes/microshift-data/_data/microshift/resources/kubeadmin/kubeconfig
kubectl get pods -A
We can clean up MicroShift by running the hack/cleanup.sh script referenced earlier. Also delete the volume:
podman volume rm microshift-data
The systemd unit file shows --label "io.containers.autoupdate=registry" in the ExecStart above. The “podman auto-update” command looks for containers with this label and a corresponding systemd service file; it checks for a new image, downloads it, and restarts the container service.
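For example, on the host VM you can preview and then apply such updates (--dry-run only reports which labelled containers have a newer image available):
podman auto-update --dry-run
podman auto-update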
Running MicroShift using Multipass
Multipass is a lightweight VM manager from Canonical that can launch and run virtual machines and configure them with cloud-init. The default backend on macOS is hyperkit, wrapping Apple’s Hypervisor.framework. Multipass is designed for developers who want a fresh Ubuntu environment with a single command. We launch an Ubuntu 20.04.3 VM with 3GB RAM and 2 CPUs. Note the IP address of the VM; we will add hostnames for the sample routes using this IP address to /etc/hosts later. The DNS set in the cloud-init with systemd-resolved.yaml prevents name resolution problems later. For this test, you may want to disable any VPN on your MacBook Pro to prevent routing issues, and in System Preferences uncheck "Enable stealth mode" in Firewall Options under Security & Privacy.
cat << EOF > systemd-resolved.yaml
#cloud-config
bootcmd:
- printf "[Resolve]\nDNS=8.8.8.8" > /etc/systemd/resolved.conf
- [systemctl, restart, systemd-resolved]
EOF
multipass launch --name microshift --cpus 2 --mem 3G --disk 20G --cloud-init systemd-resolved.yaml
multipass list # Note the IPv4 address
multipass info microshift
Login to the VM and install MicroShift using the install.sh script from github:
multipass shell microshift
sudo su -
hostnamectl set-hostname microshift.example.com # the host needs a fqdn domain for microshift to work well
apt-get update
git clone https://github.com/thinkahead/microshift.git
cd microshift
./install.sh
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
watch kubectl get pods -A
The install.sh script has been deprecated and moved into /hack as a precursor to removing it completely in favour of the rpm- and Podman-based deployments.
The following patch may be required if the dns-default pod in the openshift-dns namespace keeps restarting:
kubectl patch daemonset/dns-default -n openshift-dns -p '{"spec": {"template": {"spec": {"containers": [{"name": "dns","resources": {"requests": {"cpu": "80m","memory": "90Mi"}}}]}}}}'
You may also need to patch the service-ca deployment if it keeps restarting:
kubectl patch deployments/service-ca -n openshift-service-ca -p '{"spec": {"template": {"spec": {"containers": [{"name": "service-ca-controller","args": ["-v=4"]}]}}}}'
Download the oc, helm and odo:
# Install the oc client
ARCH=x86_64
wget -q https://mirror.openshift.com/pub/openshift-v4/$ARCH/clients/ocp/candidate/openshift-client-linux.tar.gz
mkdir tmp;cd tmp
tar -zxvf ../openshift-client-linux.tar.gz
mv -f oc /usr/local/bin
cd ..;rm -rf tmp
rm -f openshift-client-linux.tar.gz
# Install helm
curl -o helm-v3.5.2-linux-amd64.tar.gz https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz 2>/dev/null
tar -zxvf helm-v3.5.2-linux-amd64.tar.gz
mv -f linux-amd64/helm /usr/local/bin
rm -rf linux-amd64/
rm -f helm-v3.5.2-linux-amd64.tar.gz
# Install odo
OS="$(uname | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')"
curl -L https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/latest/odo-$OS-$ARCH -o odo
install odo /usr/local/bin/
rm -f odo
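A quick way to confirm the downloaded clients are on the PATH could be:
oc version --client
helm version
odo version
kubectl version --client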
Finally create the scripts and yaml to run the samples as we did previously in the Vagrantfile for VirtualBox:
# Mysql
cat << EOF > /root/hostpathpv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-provisioner
spec:
  #storageClassName: "kubevirt-hostpath-provisioner"
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/hpvolumes/mysql"
EOF
cat << EOF > /root/secpatch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  template:
    spec:
      containers:
        - name: mysql
          securityContext:
            privileged: true
      initContainers:
        - name: remove-lost-found
          securityContext:
            privileged: true
EOF
cat << EOF > /root/sql-sample.sh
oc project default
chmod 600 /var/lib/microshift/resources/kubeadmin/kubeconfig
rm -rf /var/hpvolumes/mysql
mkdir /var/hpvolumes/mysql
#chown systemd-oom:systemd-oom /var/hpvolumes/mysql
oc delete -f hostpathpv.yaml
helm repo add stable https://charts.helm.sh/stable
helm install mysql stable/mysql --set mysqlRootPassword=secretpassword,mysqlUser=my-user,mysqlPassword=my-password,mysqlDatabase=my-database --set persistence.enabled=true --set storageClass=kubevirt-hostpath-provisioner
oc apply -f hostpathpv.yaml
sleep 1
oc apply -f secpatch.yaml
EOF
chmod +x /root/sql-sample.sh
# OKD Web Console
cat << EOF > /root/okd-web-console-install.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-deployment
  namespace: kube-system
  labels:
    app: console
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console
  template:
    metadata:
      labels:
        app: console
    spec:
      containers:
        - name: console-app
          image: quay.io/openshift/origin-console:4.2
          env:
            - name: BRIDGE_USER_AUTH
              value: disabled # no authentication required
            - name: BRIDGE_K8S_MODE
              value: off-cluster
            - name: BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT
              value: https://kubernetes.default #master api
            - name: BRIDGE_K8S_MODE_OFF_CLUSTER_SKIP_VERIFY_TLS
              value: "true" # no tls enabled
            - name: BRIDGE_K8S_AUTH
              value: bearer-token
            - name: BRIDGE_K8S_AUTH_BEARER_TOKEN
              valueFrom:
                secretKeyRef:
                  name: console-token-ppfc2 # console serviceaccount token
                  key: token
---
kind: Service
apiVersion: v1
metadata:
  name: console-np-service
  namespace: kube-system
spec:
  selector:
    app: console
  type: NodePort # nodePort configuration
  ports:
    - name: http
      port: 9000
      targetPort: 9000
      nodePort: 30036
      protocol: TCP
...
EOF
cat << EOF > /root/okd-web-console-install.sh
kubectl create serviceaccount console -n kube-system
kubectl create clusterrolebinding console --clusterrole=cluster-admin --serviceaccount=kube-system:console -n kube-system
sa=\$(kubectl get serviceaccount console --namespace=kube-system -o jsonpath='{.imagePullSecrets[0].name}' -n kube-system)
tokenname=\$(kubectl get secret \$sa -n kube-system -o jsonpath='{.metadata.ownerReferences[0].name}')
sed -i "s/name: .* # console serviceaccount token/name: \$tokenname # console serviceaccount token/" okd-web-console-install.yaml
#oc get secret -n \$sa -o yaml
kubectl create -f okd-web-console-install.yaml
sleep 2
oc expose svc console-np-service -n kube-system
EOF
chmod +x /root/okd-web-console-install.sh
# Node Red
cat << EOF > /root/odo-sample.sh
odo preference set ConsentTelemetry false -f
odo project delete node-red -f -w
rm -rf node-red
git clone https://github.com/node-red/node-red.git && cd node-red
odo project create node-red
odo create nodejs
odo url create test --port 1880
odo url delete http-3000 -f
sed -i 's/npm start/npm install \&\& npm run build \&\& npm start/' devfile.yaml
odo push
odo url list
#oc expose \$(oc get svc -o name)
oc get po,svc,routes
oc logs \$(oc get deployments -o name) -f
EOF
chmod +x /root/odo-sample.sh
# watch the nodes, pods, pv, pvc, images
cat << EOF > /root/w.sh
watch "kubectl get nodes;kubectl get pods -A;kubectl get pv,pvc -n default;crictl images;crictl pods"
EOF
chmod +x /root/w.sh
# Run the sql server
./sql-sample.sh
# Run the Node Red
./odo-sample.sh
# Run the okd web console
./okd-web-console-install.sh
We can now access Node-RED and the OKD Web Console by adding the IP address of the VM (noted earlier with the multipass list command) to /etc/hosts on the MacBook so that test-app-node-red.cluster.local and console-np-service-kube-system.cluster.local resolve. Browse to http://console-np-service-kube-system.cluster.local/ to see the Dashboard and to http://test-app-node-red.cluster.local to access Node-RED on your MacBook Pro.
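For example, on the MacBook (192.168.64.2 below is only a placeholder; use the IPv4 address reported by multipass list):
sudo sh -c 'echo "192.168.64.2 test-app-node-red.cluster.local console-np-service-kube-system.cluster.local" >> /etc/hosts'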
After you are done testing MicroShift on Multipass, you may exit the VM and delete it as follows:
multipass delete microshift
multipass purge
Conclusion
OpenShift is a bundle of an OS (RHEL CoreOS), control plane services, and cluster operators that manage everything from the OS up. MicroShift comes as a single binary with the core components (etcd, kube-{apiserver,controller-manager,scheduler}, openshift-{apiserver,controller-manager}), but it requires an externally managed OS and FIDO/SDO for onboarding. It is managed by systemd, installed/updated/reverted with Image Builder and GreenBoot for disconnected deployments, and may use Red Hat Fleet Management, the Transmission agent for configuration sets using GitOps, and Ansible automation.
In this article we saw how to run MicroShift directly on the host VM and the two options to run it containerized. We looked at samples that allow us to use persistent volumes, projects, helm, and odo. Kubernetes works as a common platform for the edge as it does for the cloud because of the strengths of its community and because it provides a unified control plane/single pane of glass. MicroShift enables reuse of the established OpenShift ecosystem (consistent application runtimes, devops tools, security, etc.).
Hope you have enjoyed the article. Share your thoughts in the comments or engage in the conversation with me on Twitter @aakarve. I look forward to hearing about your MicroShift applications and if you would like to see something covered in more detail. In the following parts of this series, we will look at deploying MicroShift on the Jetson Nano (Part 2, Part 3) and the Raspberry Pi 4 (Part 4).
Abbreviations
- Defense Information Systems Agency (DISA) - DISA oversees the IT and technological aspects of organizing, delivering, and managing defense-related information
- Security Technical Implementation Guides (STIGs) - a configuration standard consisting of cybersecurity requirements for a specific product
- Federal Risk and Authorization Management Program (FedRAMP) - Standardizes security assessment and authorization for cloud products and services used by U.S. federal agencies. The goal is to make sure federal data is consistently protected at a high level in the cloud
- CVE score - Often used for prioritizing security Common Vulnerabilities and Exposures. CVE is a glossary that classifies vulnerabilities. The glossary analyzes vulnerabilities and then uses the Common Vulnerability Scoring System (CVSS) to evaluate the threat level of a vulnerability
- Remote attestation - A host (client) authenticates its hardware and software configuration to a remote host (server) to enable a remote system (challenger) to determine the level of trust in the integrity of the platform of another system (attestator)
- Multus is a Container Network Interface (CNI) plug-in designed to support the multi-networking feature in Kubernetes using Custom Resource Definition (CRD)-based network objects
- Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them. Unlike Pods that are managed by the control plane (for example, a Deployment), static Pods are watched by the kubelet itself, which restarts them if they fail. Static Pods are always bound to one kubelet on a specific node. The spec of a static Pod cannot refer to other API objects (e.g., ServiceAccount, ConfigMap, Secret, etc.). If you are running clustered Kubernetes and are using static Pods to run a Pod on every node, you should probably be using a DaemonSet instead
- Autonomous vehicle (AV) solutions are a tremendous undertaking and one of the grandest challenges of our time, requiring ingestion and labelling of petabytes of data and validation of difficult scenarios. They require GPUs, cameras, lidar, radar, ultrasonic sensors, and GPS. Lidar emits light pulses, radar transmits radio waves, and sonar uses sound echoes
- The upstream projects for ACM are collectively called the Open Cluster Management project (OCM)
- SoC (system on chip) and SBC (single board computer)
- Red Hat Universal Base Image (UBI) – Lightweight container images can allow you to build, share and collaborate on your containerized application where you want. UBI provides: a set of three base images (ubi, ubi-minimal, ubi-init), a set of language runtime images (nodejs, ruby, python, php, perl, etc.), a set of associated packages in a YUM repository which satisfy common application dependencies
References