In the first part of this article series, we explored the benefits of the Apache Flink® Web UI and provided an overview of the high-level architecture, focusing on securing access to the Web UI in enterprise environments.
This part focuses on securing access to the Flink Web UI through OIDC Authentication in Kubernetes (K8s) and OpenShift Container Platform (OCP). We’ll start by outlining the architecture, beginning with the creation of a Route in an OpenShift environment (or an Ingress resource in a Kubernetes environment) and a Service to route requests to the Nginx reverse proxy. Then, we’ll dive into the Nginx reverse proxy Deployment, covering related resources such as the Secret for managing certificates, the ConfigMap for configuring the Nginx reverse proxy, and the OIDC configuration for managing authentication.
If you don’t yet have an environment set up, you can follow the IBM Event Automation tutorial environment, which details how to install Flink with IBM Event Processing. If you already have an environment, you can proceed with the existing setup.
When a user attempts to access the Flink Web UI:

- User Request: The user's request passes through the Route/Ingress, which exposes the Nginx reverse proxy.
- Authentication: If the user is unauthenticated, the Nginx reverse proxy redirects them to the OIDC provider's login page.
- Token Validation: After the user logs in, the OIDC provider returns a short-lived authorization code, which the Nginx reverse proxy uses to validate the user's authentication.
- Request Forwarding: Once validated, the Nginx reverse proxy forwards the request to the JobManager service, which routes it to the JobManager, granting access to the Flink Web UI.
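Once the setup is complete, you can observe this flow directly: an unauthenticated request to the proxy endpoint is answered with a redirect to the OIDC provider's login page. A quick illustration using curl, where <hostname> is a placeholder and -k is only needed because of the self-signed certificate:

# Expect a 302 response whose Location header points to the OIDC provider
curl -k -I https://flink-ui.<hostname>/

We'll work through the following steps: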
1. Secure the internal cluster traffic - Configure Nginx TLS certificates
2. Expose a secure endpoint that the Web UI can use - Configure Route/Ingress
3. Secure the Flink JobManager endpoint with TLS
4. Determine the URL of the JobManager Service
5. Configure the OIDC provider
6. Configure Nginx with OIDC authentication
7. Connect to the Flink WebUI
Conclusion
1. Secure the internal cluster traffic - Configure Nginx TLS certificates
Our reverse proxy requires a TLS certificate to handle incoming requests from the Route/Ingress and to make outgoing requests to the JobManager service.
- First, generate the certificate to be used by the Nginx reverse proxy. In our case we generate a self-signed certificate, but in production you should use a certificate issued by a trusted certificate authority (CA).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -subj '/CN=localhost' -keyout nginx.key -out nginx.crt
The above command generates two files in the directory where you execute it: nginx.crt and nginx.key.
- Create a Secret named nginx-tls-secret to store the Nginx reverse proxy’s certificate files nginx.crt and nginx.key created in the previous step.
kubectl create secret generic nginx-tls-secret \
--from-file=./nginx.crt \
--from-file=./nginx.key
Output in the terminal:
secret/nginx-tls-secret created
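Optionally, you can sanity-check the generated certificate and the created Secret before moving on:

# Display the certificate's subject and validity period
openssl x509 -in nginx.crt -noout -subject -dates
# Confirm that the Secret contains both nginx.crt and nginx.key
kubectl describe secret nginx-tls-secret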
2. Expose a secure endpoint that the Web UI can use - Configure Route/Ingress
Let’s first define the Nginx reverse proxy Service. Our proxy will listen on port 8443. We create a Service whose selector matches the proxy Pod’s label (app: nginx) using the command below:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - protocol: TCP
      port: 8443
      targetPort: 8443
  selector:
    app: nginx
EOF
Output in the terminal:
service/nginx-service created
We want all communication between the browser and the environment to be encrypted, which means the Route/Ingress must be secured with TLS.
Depending on whether you're using OCP or Kubernetes, the resource for accessing the cluster differs.
OCP
To expose the Nginx reverse proxy Pod externally from within the OpenShift cluster, you'll need to create a Route.
In this article, we’ll configure re-encrypt termination, where encrypted traffic is first sent to the router. The router then establishes a new TLS connection to the Pod.
This setup requires configuring TLS certificates at both the Route and Pod levels to ensure secure communication.
In our case we rely on the router's default certificate, but in an enterprise environment you might want to use a custom CA-signed certificate.
In order for the router to establish a secure connection to the Nginx reverse proxy service, the “destinationCACertificate” property of the Route must be set to the content of the reverse proxy's certificate. Use the nginx.crt certificate, which you generated in the previous step, for this purpose.
Before running the following command to create the route, ensure you are in the directory containing the file nginx.crt:
cat <<EOF | oc apply -f -
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: flink-ui
spec:
  host: flink-ui.$(oc get ingresses.config/cluster -o jsonpath={.spec.domain})
  to:
    kind: Service
    name: nginx-service
    weight: 100
  port:
    targetPort: 8443
  tls:
    termination: reencrypt
    insecureEdgeTerminationPolicy: Redirect
    destinationCACertificate: |-
$(cat ./nginx.crt | sed 's/^/      /g')
  wildcardPolicy: None
EOF
Output in the terminal:
route.route.openshift.io/flink-ui created
K8s
On K8s, to make our Nginx reverse proxy Pod reachable from outside the cluster, we need to create an Ingress.
Prerequisite: the Ingress resource can only be created if the Nginx Ingress Controller is deployed in the Kubernetes cluster. For more information, please refer to the official documentation: Nginx Ingress Controller Deployment.
Determine the hostname of the Kubernetes cluster.
We want to access the Flink Web UI through a dedicated URL of the form: https://flink-ui.mycluster.mydomain. To determine the hostname of your cluster, you can inspect the control-plane node using the following command:
kubectl get nodes --selector=node-role.kubernetes.io/control-plane -o yaml
If your cluster is reachable through a hostname, the output of the previous command should include the cluster's hostname in the status section. Example:
status:
  addresses:
    - address: mycluster.mydomain.com
      type: Hostname
If you have the hostname of the K8s cluster, create the following Ingress resource, replacing the <hostname> placeholder with that hostname.
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flink-ui
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: flink-ui.<hostname>
      http:
        paths:
          - backend:
              service:
                name: nginx-service
                port:
                  number: 8443
            path: /
            pathType: Prefix
EOF
Output in the terminal:
ingress.networking.k8s.io/flink-ui created
In our case we rely on the Ingress controller's default certificate, but in an enterprise environment you might want to set the tls property to use a custom CA-signed certificate.
If you're unable to determine the cluster's hostname, create the Ingress resource without specifying a host. The user will access the Flink Web UI using the cluster's IP address.
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flink-ui
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - backend:
              service:
                name: nginx-service
                port:
                  number: 8443
            path: /
            pathType: Prefix
EOF
Output in the terminal:
ingress.networking.k8s.io/flink-ui created
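Whichever variant you created, you can confirm that the Ingress was admitted and see which host or address it was assigned:

# The HOSTS and ADDRESS columns show how the Flink Web UI will be reached
kubectl get ingress flink-ui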
3. Secure the Flink JobManager endpoint with TLS
In enterprise production environments, all communications must be secured using TLS, and the Flink JobManager is no exception. Flink documentation describes how to Configure SSL. The minimum required configuration property is security.ssl.rest.enabled. You can use the command below to check if the configuration property security.ssl.rest.enabled: true is set in your configuration:
kubectl get configMap flink-config-<flinkDeployment-name> -o jsonpath='{.data.flink-conf\.yaml}' | grep -E "security.ssl.rest.enabled|security.ssl.enabled"
If SSL is enabled, the output in the terminal will be as follows; if SSL is not enabled, the command will return nothing (no output).
security.ssl.rest.enabled: true
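If the command returns nothing, SSL is not enabled on the REST endpoint. Below is a minimal sketch of how the property could be set through the FlinkDeployment resource, assuming the Flink Kubernetes Operator and a deployment named my-deployment; keystore and truststore properties are also required, as described in the Flink "Configure SSL" documentation:

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: my-deployment
spec:
  flinkConfiguration:
    # Enables TLS on the REST endpoint that serves the Web UI
    security.ssl.rest.enabled: "true"
    # Keystore/truststore paths and passwords must also be configured here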
Use the following command to determine the name of your Flink deployment:
kubectl get FlinkDeployments
The output of this command should look like the following:
NAME            JOB STATUS   LIFECYCLE STATE
my-deployment   RUNNING      STABLE
4. Determine the URL of the JobManager Service
Once requests are authenticated, our reverse proxy will route them to the appropriate Service. To determine the URL of the JobManager Service, you need:
- deployment-name: the name of the deployment, as explained in the previous section.
- namespace: the current namespace, which you can retrieve with the following command:

kubectl config view --minify --output 'jsonpath={..namespace}'

This command outputs the current namespace.
Then, the URL of the JobManager Service is: https://<deployment-name>-rest.<namespace>.svc.cluster.local:8081
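You can optionally verify that the JobManager Service is reachable over TLS from inside the cluster. A quick check, assuming a deployment named my-deployment in namespace my-namespace (-k skips verification of the certificate, which the throwaway Pod does not trust):

# Runs a temporary curl Pod and queries the Flink REST API's /config endpoint
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -k https://my-deployment-rest.my-namespace.svc.cluster.local:8081/config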
5. Configure the OIDC provider
Before configuring Nginx, it’s essential to first set up your OIDC provider. In this guide, we’ll use Keycloak as the OIDC provider, but the steps can easily be adapted for other providers. If you’re already familiar with setting up Keycloak for OIDC, you can skip ahead to the Nginx reverse proxy configuration.
How to deploy Keycloak
If you haven’t deployed Keycloak yet, there are multiple installation methods available. You can follow the official Keycloak installation guide for step-by-step instructions tailored to different environments.
How to configure Keycloak
The Nginx reverse proxy application is going to send requests to the OIDC provider, Keycloak, to verify users' identities. In Keycloak, we need to configure a client to specify how to handle authentication requests.
To do that, start by navigating to the Manage section, then click on the Clients tab. From there, click the Create client button.
- In the General Settings section, enter a Client ID. For this example, we've chosen nginx-client as the Client ID. The client ID is used in OpenID Connect (OIDC) to uniquely identify an application to the OIDC provider, ensuring proper security, configuration, and authorization. It helps link requests and responses, validates the correct client, and ensures the right permissions are granted.
- In the Capability Configuration section, enable the Client Authentication option. Keep the default Standard flow enabled, which enables support for the OpenID Connect Authorization Code Flow.
- In the Login Settings section, set the Valid Redirect URIs to https://flink-ui.<hostname>/redirect_uri, replacing <hostname> with the value obtained in the “Expose a secure endpoint that the Web UI can use - Configure Route/Ingress” section.
Note down the Nginx client credentials for use in the Nginx configuration
After creating the client, you'll be redirected to the client details page. Here, you can retrieve the necessary information to configure Nginx.
Navigate to the Credentials tab and ensure that the Client Authenticator is set to Client ID and Secret. Then, click the copy link to copy the client's secret.
Make sure to note down the Client ID and Client Secret, as you’ll need them in the next step when configuring Nginx.
Additionally, record the Keycloak Discovery URL (as shown here). With these three pieces of information — the Client ID, Client Secret, and Discovery URL — you’ll be ready to configure Nginx.
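You can verify the Discovery URL by fetching the discovery document it points to. For Keycloak the URL typically follows the pattern below; the realm name is an assumption to adapt to your setup:

# Returns a JSON document listing the provider's OIDC endpoints
curl -k https://<oidc-provider-url>/realms/<realm>/.well-known/openid-configuration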
6. Configure Nginx with OIDC authentication
Step 1 - Configure Nginx OIDC provider SSL certificate
- You’ll need the OIDC provider’s certificate for Nginx to communicate with it. If you don’t have the certificate, you can extract it using the following command. Be sure to replace the placeholder <oidc-provider-url> with the hostname of your OIDC provider:

openssl s_client -connect <oidc-provider-url>:443 < /dev/null | openssl x509 > oidc.crt

The above command extracts the OIDC provider's certificate to a file named oidc.crt.
- Once you have the certificate, use the following command to create a Secret named oidc-tls-secret:

kubectl create secret generic oidc-tls-secret --from-file=./oidc.crt
Output in the terminal is:
secret/oidc-tls-secret created
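Before wiring the certificate into Nginx, you can optionally confirm that it really belongs to your OIDC provider:

# The subject (or issuer) should match your OIDC provider's hostname
openssl x509 -in oidc.crt -noout -subject -issuer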
Step 2 - Get the DNS resolution IP address
We need to determine the IP address for DNS resolution within the cluster, which is necessary for the Nginx reverse proxy to communicate with the OIDC provider. Note that using an internal OIDC provider URL will not work, as users won’t be able to access the login page.
Depending on whether you're using OCP or Kubernetes, the resource for getting the IP of the DNS resolution differs.
OCP
You can use the following command to retrieve the IP of the DNS resolution in an OCP cluster:
oc describe dns.operator/default | grep "Cluster IP:"
Output in the terminal is:
Cluster IP: 172.30.0.30
The IP is the value of <resolver_ip> used in Step 4.
K8s
You can use the following command to retrieve the IP of the DNS resolution in a K8s cluster:
kubectl describe svc kube-dns -n kube-system | grep "IP:"
Output in the terminal is:
IP: 10.96.0.10
The IP is the value of <resolver_ip> used in Step 4.
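If you prefer a script-friendly variant, the same value can be read directly from the Service's clusterIP field:

kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'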
Step 3 - Get the URL of the Route/Ingress Flink Web UI
To access the Flink Web UI, retrieve its URL using the commands below for your platform:
OCP
echo "https://$(oc get route flink-ui -o jsonpath='{.spec.host}')"
The output of the previous command should resemble the following:
https://flink-ui-mynamespace.mycluster.mydomain.com
K8s
If you created the Ingress with a host, you can retrieve the URL with the following command:
echo "https://$(kubectl get ingress flink-ui -o jsonpath='{.spec.rules[0].host}')"
The output of the previous command should resemble the following:
https://flink-ui.mycluster.mydomain.com
If you created the Ingress without a host, access the Flink Web UI through the cluster's IP address, for example:
https://9.30.100.30
Step 4 - Create the ConfigMap handling Nginx configuration
Now that you have all the necessary information, proceed by creating the Nginx configuration. First, create a ConfigMap to handle the configuration using the command below.
Be sure to replace the following placeholders with the corresponding values:
- flink-rest-ui: replace with the JobManager Service URL from the "Determine the URL of the JobManager Service" section.
- redirect_uri: replace with the URL from "Step 3 - Get the URL of the Route/Ingress Flink Web UI", suffixed with "/redirect_uri".
- discovery: replace with the Discovery URL noted in the "Configure the OIDC provider" section.
- client_id: replace with the Client ID noted in the "Configure the OIDC provider" section.
- client_secret: replace with the Client Secret noted in the "Configure the OIDC provider" section.
- resolver_ip: replace with the IP address from the "Step 2 - Get the DNS resolution IP address" section.
cat <<EOF | kubectl apply -f -
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  default.conf: |-
    server {
      listen 8443 ssl;
      server_name localhost;
      ssl_certificate /etc/nginx/tls/nginx.crt;
      ssl_certificate_key /etc/nginx/tls/nginx.key;

      access_by_lua '
        local opts = {
          redirect_uri = "<redirect_uri>",
          discovery = "<discovery>",
          client_id = "<client_id>",
          client_secret = "<client_secret>",
          logout_path = "/logout",
          accept_none_alg = true,
          session_contents = {id_token=true}
        }
        -- authenticate the user via OpenID Connect
        local res, err = require("resty.openidc").authenticate(opts)
        if err then
          ngx.status = 403
          ngx.say(err)
          ngx.exit(ngx.HTTP_FORBIDDEN)
        end
      ';

      location / {
        proxy_pass <flink-rest-ui>;
        proxy_set_header Host \$host;
      }
    }
  nginx.conf: |-
    #user nobody;
    worker_processes 1;
    error_log /dev/stderr error;
    #pid logs/nginx.pid;

    events {
      worker_connections 1024;
    }

    http {
      lua_package_path '/.luarocks/share/lua/5.1/?.lua;;';
      resolver <resolver_ip> valid=1s;
      lua_ssl_trusted_certificate /etc/nginx/oidc/tls/oidc.crt;

      include mime.types;
      default_type application/octet-stream;

      log_format main '\$remote_addr - \$remote_user [\$time_local] "\$request" '
                      '\$status \$body_bytes_sent "\$http_referer" '
                      '"\$http_user_agent" "\$http_x_forwarded_for"';
      access_log /dev/stdout main;

      # See Move default writable paths to a dedicated directory (#119)
      # https://github.com/openresty/docker-openresty/issues/119
      client_body_temp_path /var/run/openresty/nginx-client-body;
      proxy_temp_path /var/run/openresty/nginx-proxy;
      fastcgi_temp_path /var/run/openresty/nginx-fastcgi;
      uwsgi_temp_path /var/run/openresty/nginx-uwsgi;
      scgi_temp_path /var/run/openresty/nginx-scgi;

      sendfile on;
      #tcp_nopush on;

      #keepalive_timeout 0;
      keepalive_timeout 65;

      #gzip on;

      include /etc/nginx/conf.d/*.conf;
    }
EOF
Output in the terminal:
configmap/nginx-config created
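Before creating the Deployment, it's worth double-checking that no placeholder was left unreplaced in the applied configuration:

# Should print your actual URLs and client ID, not <...> placeholders
kubectl get configmap nginx-config -o jsonpath='{.data.default\.conf}' | grep -E 'redirect_uri|discovery|client_id'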
Step 5 - Create the Deployment
Now that you have all the required resources, you can proceed to create the Nginx deployment by running the command below:
cat <<EOF | kubectl apply -f -
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: 'openresty/openresty:alpine-fat'
          command:
            - /bin/sh
            - '-c'
            - |
              luarocks config local_by_default true
              luarocks install lua-resty-openidc
              /usr/local/openresty/nginx/sbin/nginx -g "daemon off;"
          ports:
            - containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: default.conf
            - name: nginx-config
              mountPath: /usr/local/openresty/nginx/conf/nginx.conf
              subPath: nginx.conf
            - name: nginx-tls
              mountPath: /etc/nginx/tls
            - name: oidc-tls
              mountPath: /etc/nginx/oidc/tls
            - name: shared-volume
              mountPath: /.luarocks
            - name: shared-volume
              mountPath: /.cache
            - name: shared-volume
              mountPath: /usr/local/openresty/nginx/logs
            - name: shared-volume
              mountPath: /usr/local/openresty/luajit/lib/luarocks
            - name: shared-volume
              mountPath: /var/run/openresty
      volumes:
        - name: shared-volume
          emptyDir: {}
        - name: nginx-config
          configMap:
            name: nginx-config
        - name: oidc-tls
          secret:
            secretName: oidc-tls-secret
        - name: nginx-tls
          secret:
            secretName: nginx-tls-secret
EOF
Output in the terminal:
deployment.apps/nginx-deployment created
Verification
You can do a quick check of the state of the deployment using the command below to ensure that it is ready. Note that it may take several minutes to become available:
kubectl get deployment nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           50s
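If the deployment does not become ready, inspect the Pod logs; the container installs lua-resty-openidc at startup, so the first start can take a few minutes:

# Startup output includes the luarocks installation followed by Nginx logs
kubectl logs deployment/nginx-deployment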
SecurityContext
In an enterprise environment, particularly in production, you should configure the securityContext of the Deployment to align with your organization's security policies.
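As an illustration, a restrictive container-level securityContext might look like the sketch below. Treat it as an assumption-laden starting point rather than a tested configuration; the shared emptyDir volumes mounted above are what allow the container to run as a non-root user:

# Example only; verify against your cluster's policies before use
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  seccompProfile:
    type: RuntimeDefault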
7. Connect to the Flink WebUI
Now that you have all the pieces in place, visit the URL from the “Step 3 - Get the URL of the Route/Ingress Flink Web UI” section with your browser.
Since we’re using a self-signed certificate, you might get a browser error page telling you that your connection is not private.
You can proceed, as the errors are likely caused by the self-signed certificate, whose CA is not recognized by the browser. You will then be redirected to the OIDC provider's login page (see figure below), where you will be prompted to enter a username and password, which are validated against the registered users.
Finally, you’ll land on the Flink Web UI authenticated, as shown in the figure below, and the experience remains the same as without authentication.
Conclusion
Securing the Flink Web UI allows it to be used in enterprise Flink deployments, where it must be both protected and accessible from outside the cluster. Whether you are using OpenShift or Kubernetes, the setup involves creating the appropriate resources (Secrets, ConfigMaps, Deployments, Services, Routes/Ingresses) and configuring Nginx as a reverse proxy.
If you’ve already secured IBM Event Processing using an OIDC provider, you can leverage the steps in this blog to secure the Flink Web UI with the same OIDC provider. This will enable Single Sign-On (SSO) between the IBM Event Processing authoring application and the Flink Web UI.
All the resources and configurations mentioned in this blog will be made available on GitHub for your convenience.
Contributors
- Mehdi Deboub
- Anu K T
- Sebastien Pereira
- David Radley