Multitenancy Strategies in OpenShift: Balancing Security and Efficiency
OpenShift provides two main approaches to achieving multitenancy for our deployments:
1. Multiple OpenShift Clusters (Maximum Isolation, Higher Cost):
Each tenant is allocated a dedicated OpenShift cluster. This offers the strongest isolation, as tenants have no visibility into or
interaction with other tenants' resources or applications.
Advantages:
- Unmatched Security: Complete separation prevents unauthorized access and minimizes potential security risks between tenants.
- Independent Control: Tenants have full autonomy in managing their cluster configuration, resource quotas, and security policies.
Disadvantages:
- Increased Cost: Maintaining separate OpenShift clusters can be resource-intensive, especially for small deployments or those with variable workloads.
- Lower Resource Utilization: Individual clusters might not be fully utilized, leading to inefficiency.
- Complex Management: Managing multiple clusters requires more effort and dedicated resources for administration.
2. Single OpenShift Cluster with Isolation (Cost-Effective, Shared Resources):
Tenants share a single OpenShift cluster for their deployments, while isolation techniques such as namespaces, quotas, and
network policies ensure separation between tenants. This is the approach we explore in this blog.
Advantages:
- Cost Optimization: Sharing a single cluster maximizes resource utilization and reduces costs compared to dedicated clusters.
- Simplified Management: Managing one OpenShift cluster streamlines overall administration compared to multiple instances.
Disadvantages:
- Moderate Isolation: Isolation is weaker than with dedicated clusters because workloads share compute resources and the network. Strong isolation is a must when tenants should be unaware of each other's workloads.
- Security Considerations: Careful configuration and monitoring are essential to maintain strong security boundaries within the shared cluster.
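As a minimal sketch of the network-policy side of this isolation (the policy name and namespace are illustrative, matching the example used later in this post), the following NetworkPolicy restricts ingress in a tenant namespace to traffic from pods in that same namespace:

```yaml
# allow-same-namespace.yaml: permit ingress only from pods in the same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: user1-namespace
spec:
  podSelector: {}          # applies to every pod in user1-namespace
  ingress:
  - from:
    - podSelector: {}      # matches only pods from this same namespace
```

Because an empty podSelector under `from` is scoped to the policy's own namespace, traffic from other tenants' namespaces is denied once any NetworkPolicy selects the pods.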


Multi-tenancy in OpenShift Logging: Built-in Isolation for User Workloads
OpenShift provides inherent support for multitenancy in logging, eliminating the need for additional configuration within the operator context.
When we create a user, two roles are automatically assigned:
- admin: Grants administrative privileges within the user's namespace.
- user-setting: Provides basic user configuration options.
These predefined roles effectively enable multitenancy and isolate users from each other's namespaces. Due to these role restrictions, users cannot view or edit details of other user namespaces, ensuring data privacy and security.
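We can inspect these bindings and test access directly from the CLI. This is a quick sketch; the namespace and user names follow the example below, and the commands require a live cluster:

```shell
# Show the role bindings in user1's namespace (the admin role should be bound to user1)
oc get rolebindings -n user1-namespace

# Verify that user1 cannot list pods in another tenant's namespace (expected answer: no)
oc auth can-i list pods -n user2-namespace --as=user1
```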
Example: Demonstrating Multi-tenancy with User Workloads
The following scenario illustrates this functionality with two users, User1 and User2. This example demonstrates how OpenShift's built-in
multitenancy allows each user to have their own isolated logging environment within a shared cluster, ensuring data privacy and security.
1. User Creation:
[root@bastion CBI]# htpasswd -c htppaswd_file user1
Adding password for user user1
[root@bastion CBI]# cat htppaswd_file
user1:$apr1$Ss3k7bzN$xYJE8o3dtyum0O8Dhy0E7.
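Note that creating the htpasswd file alone is not enough: the file must be loaded into the cluster as a secret in the openshift-config namespace and referenced from the cluster OAuth configuration. A sketch follows; the secret and provider names (htpass-secret, htpasswd_provider) are illustrative, and this merge patch replaces the whole identityProviders list, so merge carefully if other providers already exist:

```shell
# Load the htpasswd file into the cluster as a secret
oc create secret generic htpass-secret \
  --from-file=htpasswd=htppaswd_file -n openshift-config

# Reference the secret from the cluster OAuth configuration
oc patch oauth cluster --type=merge \
  -p '{"spec":{"identityProviders":[{"name":"htpasswd_provider","mappingMethod":"claim","type":"HTPasswd","htpasswd":{"fileData":{"name":"htpass-secret"}}}]}}'
```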
2. Namespace Creation:
[root@bastion CBI]# oc create namespace user1-namespace
namespace/user1-namespace created
3. Sample Application Deployment:
- Define a YAML file app1.yaml that deploys a simple application:
  - Named counter-app1, running in the user1-namespace.
  - Uses the busybox image and continuously logs messages with timestamps.
- Apply the YAML file using oc apply to create the deployment.
[root@bastion CBI]# cat app1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: counter-app1
  namespace: user1-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: counter-app1
  template:
    metadata:
      labels:
        app: counter-app1
    spec:
      containers:
      - name: counter-app1
        image: docker.io/s390x/busybox:latest
        command: ['sh', '-c', 'i=0; while true; do echo "This is from sample logger app from user1-namespace counter-test-i-$i: $(date)"; i=$((i+1)); sleep 3; done']
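The container command is just a shell counter loop; it can be tried locally (outside the cluster, purely illustrative, with the infinite loop capped at three iterations) to see the log line format:

```shell
# Emit three sample log lines in the same format as the app
i=0
while [ "$i" -lt 3 ]; do
  echo "This is from sample logger app from user1-namespace counter-test-i-$i: $(date)"
  i=$((i+1))
done
```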
[root@bastion CBI]# oc apply -f app1.yaml
deployment.apps/counter-app1 created
4. Verification:
[root@bastion CBI]# oc get pods -n user1-namespace
NAME READY STATUS RESTARTS AGE
counter-app1-6d65dcdff4-jqnmc 1/1 Running 0 3m
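Before moving to Kibana, the application's output can also be sanity-checked directly from the CLI (requires a live cluster):

```shell
# Tail the last few log lines from the sample app's deployment
oc logs deployment/counter-app1 -n user1-namespace --tail=5
```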
5. Log in to Kibana and verify the logs:
6. Repeat the above steps (1-4) for another user, User2:
[root@bastion CBI]# htpasswd htppaswd_file user2
Adding password for user user2
[root@bastion CBI]# cat htppaswd_file
user1:$apr1$Ss3k7bzN$xYJE8o3dtyum0O8Dhy0E7.
user2:$apr1$158QPlmk$kcBVdCfyaP8bP0J..ePuf.
[root@bastion CBI]# oc create namespace user2-namespace
namespace/user2-namespace created
Now we have two users created with the same RBAC. Use the YAML below to create the sample logger app for User2:
[root@bastion CBI]# cat user2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: counter-app1
  namespace: user2-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: counter-app1
  template:
    metadata:
      labels:
        app: counter-app1
    spec:
      containers:
      - name: counter-app1
        image: docker.io/s390x/busybox:latest
        command: ['sh', '-c', 'i=0; while true; do echo "This is from sample logger app from user2-namespace counter-test-i-$i: $(date)"; i=$((i+1)); sleep 3; done']
[root@bastion CBI]# oc apply -f user2.yaml
deployment.apps/counter-app1 created
[root@bastion CBI]# oc get pods -n user2-namespace
NAME READY STATUS RESTARTS AGE
counter-app1-6ff6ddcf67-cm7b2 1/1 Running 0 33m
7. Verifying Isolation: No Access to Other User Namespaces
Let's confirm that we've successfully achieved isolation using two approaches:
1. Namespace Visibility: While logged in as one user (e.g., User2), try to view namespaces, deployments, and pods
belonging to another user's namespace (e.g., User1's "user1-namespace"). Ideally, we should not be able to see any resources from User1's namespace.
2. Log Access in Kibana: Access Kibana and attempt to view logs generated by the sample application deployed within
User1's namespace. If multitenancy is working correctly, each user should see only the logs from their own namespace (i.e., the namespace of the currently logged-in user).
These checks will help verify that OpenShift's multitenancy is effectively isolating users from accessing resources and
logs belonging to other users.
After logging in to the console as User1, check whether user2-namespace is visible:
After logging in to the console as User2, check whether user1-namespace is visible:
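The same verification can be done from the CLI: logged in as one user, reads against the other tenant's namespace should fail with a Forbidden error (the exact error text may vary by OpenShift version):

```shell
# As user2, attempt to list resources in user1's namespace
oc login -u user2
oc get pods -n user1-namespace
# Expected: an "Error from server (Forbidden)" message stating that
# user2 cannot list pods in user1-namespace
```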