It's an odd situation when giving an application more privileges can actually stop it from doing something, but that's what can happen when using the Red Hat OpenShift `anyuid` SCC with processes designed for the `restricted` SCC. Working on Cloud Pak for Integration, I have seen a number of cases where applying the `anyuid` SCC to a namespace affects how storage is allocated and prepared, leaving the containers without write access.
For those who don't want the details, here are some recommendations:
- If you need to run a pod as a specific user (not root) using a `securityContext`, use the `nonroot` SCC. Because it has no priority set, it will not affect other pods.
- If you need to run a pod as root using a `securityContext`, apply the `anyuid` SCC carefully so that it only affects the pods that need it, or even better, create a custom SCC that has default priority and sets the `runAsUser` strategy to `RunAsAny`.
- If you need to run a pod as the user specified in the image, without including that user in the `securityContext`, apply the `anyuid` SCC carefully so that it only affects the pods that need it.
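To illustrate the custom SCC option from the recommendations above, here is a sketch of what such an SCC could look like. The name `custom-anyuid` and the exact field values are assumptions for the example, not a tested configuration; base yours on the `restricted` SCC in your cluster and adjust as needed:

```yaml
# A hypothetical custom SCC: like restricted, but allowing any runAsUser.
# Leaving the priority field unset keeps the default priority, so this SCC
# will not be preferred over more restrictive SCCs for other pods.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: custom-anyuid          # assumed name for this example
allowPrivilegedContainer: false
runAsUser:
  type: RunAsAny               # the key difference from restricted
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs              # keep MustRunAs so an fsGroup is still assigned
supplementalGroups:
  type: RunAsAny
requiredDropCapabilities:
  - ALL
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret
```

You can then grant it to a specific service account with `oc adm policy add-scc-to-user custom-anyuid -z <serviceaccount> -n <namespace>`, so only the pods that need it are affected.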
Why does anyuid cause problems?
Red Hat OpenShift selects an SCC for a pod first by priority, and then by level of restrictiveness. The `anyuid` SCC is the only default SCC with a priority set, meaning that if a pod is eligible to run under `anyuid`, it will do so instead of using a more restrictive SCC. Firstly, this removes a layer of security from your system: OpenShift is designed to use the most restrictive SCC it can, and giving one SCC a higher priority means that pods that could run under tighter security no longer do. Secondly, pods that are designed and tested to run with the `restricted` SCC will then behave differently than expected when it comes to storage.
```shell
[jammy@ibm ~]$ oc get scc -o json | jq -r ".items[] | [.metadata.name, .priority] | @csv"
"anyuid",10
"hostaccess",
"hostmount-anyuid",
"hostnetwork",
"node-exporter",
"nonroot",
"privileged",
"restricted",
```
The storage problems are caused by the strategies used by the different SCCs, specifically the `fsGroup` strategy. In the `restricted` SCC this is set to `MustRunAs`, which means the value used must fall within the range configured for the namespace, and if none is set, the pod is assigned an `fsGroup` on admission. In the `anyuid` SCC it is set to `RunAsAny`. The `RunAsAny` strategy does not include a default, so on admission the pod is not assigned an `fsGroup` in its `securityContext` if it does not already have one.
```shell
[jammy@ibm ~]$ oc get scc -o json | jq -r ".items[] | [.metadata.name, .fsGroup.type] | @csv"
"anyuid","RunAsAny"
"hostaccess","MustRunAs"
"hostmount-anyuid","RunAsAny"
"hostnetwork","MustRunAs"
"node-exporter","RunAsAny"
"nonroot","RunAsAny"
"privileged","RunAsAny"
"restricted","MustRunAs"
```
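The range that `MustRunAs` draws from comes from annotations that OpenShift places on the namespace when it is created. A sketch of the relevant annotations (the namespace name and values shown here are made up for illustration, not taken from a real cluster):

```yaml
# Namespace annotations that the MustRunAs strategies draw defaults from.
# Values are illustrative only.
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace                                           # assumed name
  annotations:
    openshift.io/sa.scc.uid-range: 1000650000/10000            # range for runAsUser
    openshift.io/sa.scc.supplemental-groups: 1000650000/10000  # range for fsGroup
    openshift.io/sa.scc.mcs: s0:c26,c5                         # SELinux MCS labels
```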
This affects storage. When mounting a volume for a pod, Kubernetes will (caveat: not for all storage classes) `chmod` and `chown` the files on the volume to be suitable for the pod, using the `fsGroup` option from the `securityContext`. If an `fsGroup` is not set, it cannot set the permissions correctly, so the pod may or may not have write access, depending on whether the user in the image happens to match the permissions on the file system.
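One way to make this behaviour explicit is to set an `fsGroup` yourself in the pod's `securityContext`, so the volume is prepared regardless of which SCC is selected. A minimal sketch, where the pod name, image, claim name, and group ID 2000 are all assumptions for the example (note that under the `restricted` SCC a fixed value like this must fall within the namespace's assigned range, or the pod will be rejected):

```yaml
# Hypothetical pod spec with an explicit fsGroup.
# Kubernetes will chown/chmod supported volumes to this group on mount,
# whether or not the effective SCC would have assigned one.
apiVersion: v1
kind: Pod
metadata:
  name: storage-demo                           # assumed name
spec:
  securityContext:
    fsGroup: 2000                              # assumed group ID; must be valid for the SCC in use
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc                    # assumed claim name
```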
Why does this affect Cloud Pak for Integration specifically?
This nuance in the way SCCs work doesn't only affect Cloud Pak for Integration, but it is made worse because Cloud Pak for Integration is designed for the highest level of pod security.
Cloud Pak for Integration designs all pods to run under the `restricted` SCC. This allows OpenShift to assign project-specific users, fsGroups, and SELinux labels, providing the highest level of pod security. Because these values are assigned automatically by OpenShift based on the namespace the pod is in, Cloud Pak for Integration cannot include fixed configuration options; if it did, its pods would no longer match the `restricted` SCC. There is also no way for Cloud Pak for Integration to know which SCC will be applied to its pods when they are created, so it cannot provide different values based on the effective SCC.
If you want to know more about the technical details behind this, and explore a potential option to help storage always be writable, regardless of the SCC, see my other blog post:
The OpenShift anyuid SCC and its effects on storage.