In the first part of this article series, we explored the benefits of the Apache Flink® Web UI and provided an overview of the high-level architecture, focusing on securing access to the Web UI in enterprise environments.
This part focuses on securing access to the Flink Web UI through Basic Authentication in Kubernetes (K8s) and OpenShift Container Platform (OCP). We’ll start by outlining the architecture, beginning with the creation of a Route in an OpenShift environment (or an Ingress resource in a Kubernetes environment) and a Service to route requests to the Nginx reverse proxy. Then, we’ll dive into the Nginx reverse proxy Deployment, covering related resources such as the Secret for managing certificates, the account configuration (.htpasswd), and the ConfigMap for configuring the Nginx reverse proxy.
If you don’t yet have an environment set up, you can follow the IBM Event Automation tutorial environment, which details how to install Flink with IBM Event Processing. If you already have an environment, you can proceed with the existing setup.
When a user attempts to access the Flink Web UI:
- Their request passes through the Route/Ingress, which exposes the Nginx reverse proxy,
- The Nginx reverse proxy then authenticates the user,
- The Nginx reverse proxy forwards the requests to a Service, which then routes them to the JobManager.
1. Expose a secure endpoint that the Web UI can use - Configure Route/Ingress
2. Secure the internal cluster traffic - Configure Nginx TLS certificates
3. Secure the Flink JobManager endpoint with TLS
4. Determine the URL of the JobManager Service
5. Configure Nginx with basic authentication
6. Connect to the Flink WebUI
Conclusion
1. Expose a secure endpoint that the Web UI can use - Configure Route/Ingress
Let’s first define the Nginx reverse proxy Service. Our proxy will listen on port 8443. We create a Service that targets the Nginx Pods through the app: nginx label selector, using the command below:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - protocol: TCP
      port: 8443
      targetPort: 8443
  selector:
    app: nginx
EOF
Output in the terminal:
service/nginx-service created
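Optionally, you can check that the Service exists and, later, that it has endpoints. Note that the endpoints will remain empty until the Nginx Deployment is created in a later step.
kubectl get service nginx-service
kubectl get endpoints nginx-service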
We want all communication between the browser and the environment to be encrypted, which means the Route/Ingress must be secured with TLS. We can manage the traffic using either passthrough termination or re-encryption:
- Re-encrypt: Encrypted traffic is sent directly to the router, where a new TLS connection is established between the router and the Pod. With this configuration, TLS certificates must be configured at both the Route and Pod levels.
- Passthrough: Encrypted traffic is sent directly to the destination without the router performing TLS termination. As a result, no key or certificate is needed at the Route level, and the TLS certificates must be configured at the Pod level instead.
Depending on whether you're using OCP or Kubernetes, the resource used to expose the proxy outside the cluster differs.
OCP
In this article, we’ll configure passthrough termination with a self-signed certificate. However, in an enterprise environment, it's advisable to use re-encrypt termination with CA-signed certificates for better security. The last part of this article series will explain how to configure re-encrypt termination.
On OCP, to make our Nginx reverse proxy Pod reachable from outside the cluster, we need to create a Route using the command below:
cat <<EOF | oc apply -f -
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: flink-ui
spec:
  to:
    kind: Service
    name: nginx-service
  port:
    targetPort: 8443
  tls:
    termination: passthrough
EOF
Output in the terminal:
route.route.openshift.io/flink-ui created
K8s
On K8s, to make our Nginx reverse proxy Pod reachable from outside the cluster, we need to create an Ingress.
Prerequisite: the Ingress resource can only be created if the Nginx Ingress Controller is deployed in the Kubernetes cluster. For more information, please refer to the official documentation: Nginx Ingress Controller Deployment.
Determine the hostname of the Kubernetes cluster.
You can inspect the control-plane (master) node of the cluster to identify the hostname. The following command gathers details from the control-plane node:
kubectl get nodes --selector=node-role.kubernetes.io/control-plane -o yaml
The output of the previous command should include the cluster's hostname in the status section. Example:
status:
  addresses:
    - address: mycluster.mydomain.com
      type: Hostname
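If you prefer to extract just the hostname, a jsonpath query like the one below should work; this is a convenience sketch and assumes the control-plane node reports an address of type Hostname, as in the example above.
kubectl get nodes --selector=node-role.kubernetes.io/control-plane -o jsonpath='{.items[0].status.addresses[?(@.type=="Hostname")].address}'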
Once you have the hostname of the K8s cluster, create the following Ingress resource, replacing the <hostname> placeholder with that hostname.
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flink-ui-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: flink-ui.<hostname>
      http:
        paths:
          - backend:
              service:
                name: nginx-service
                port:
                  number: 8443
            path: /
            pathType: Prefix
EOF
Output in the terminal:
ingress.networking.k8s.io/flink-ui-ingress created
If you're unable to determine the cluster's hostname, create the Ingress resource without specifying a host. The user will access the Flink Web UI using the cluster's IP address.
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flink-ui-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - backend:
              service:
                name: nginx-service
                port:
                  number: 8443
            path: /
            pathType: Prefix
EOF
Output in the terminal:
ingress.networking.k8s.io/flink-ui-ingress created
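In either case, you can verify the Ingress with the command below; the ADDRESS column may take a short while to be populated by the Ingress controller.
kubectl get ingress flink-ui-ingress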
2. Secure the internal cluster traffic - Configure Nginx TLS certificates
Our reverse proxy requires a TLS certificate to handle incoming requests from the Route/Ingress and outgoing requests to the JobManager Service.
- First, generate the certificate to be used by the Nginx reverse proxy. In our case we generate a self-signed certificate, but in production you should use a certificate issued by a trusted certificate authority (CA).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -subj '/CN=localhost' -keyout nginx.key -out nginx.crt
The above command generates two files in the directory where you execute it: nginx.crt and nginx.key.
- Create a Secret named nginx-tls-secret to store the Nginx reverse proxy’s certificate files nginx.crt and nginx.key created in the previous step.
kubectl create secret generic nginx-tls-secret \
--from-file=./nginx.crt \
--from-file=./nginx.key
Output in the terminal:
secret/nginx-tls-secret created
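If you want to double-check what was generated, the following optional commands display the certificate's subject and validity period, and the keys stored in the Secret.
openssl x509 -in nginx.crt -noout -subject -dates
kubectl describe secret nginx-tls-secret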
3. Secure the Flink JobManager endpoint with TLS
In enterprise production environments, all communications must be secured using TLS, and the Flink JobManager is no exception. The Flink documentation describes how to Configure SSL. The minimum required configuration property is security.ssl.rest.enabled. You can use the command below to check whether security.ssl.rest.enabled: true is set in your configuration:
kubectl get configMap flink-config-<flinkDeployment-name> -o jsonpath='{.data.flink-conf\.yaml}' | grep -E "security.ssl.rest.enabled|security.ssl.enabled"
If SSL is enabled, the output in the terminal will be as follows. If SSL is not enabled, the command returns nothing (no output).
security.ssl.rest.enabled: true
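If the property is not set, the sketch below shows where it belongs in the FlinkDeployment resource, assuming the Apache Flink Kubernetes Operator is managing your deployment. This is only an illustrative fragment to merge into your existing FlinkDeployment; the name my-deployment is an example, and the keystore/truststore properties needed for a complete SSL setup are described in the Flink "Configure SSL" documentation.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: my-deployment
spec:
  flinkConfiguration:
    # Enables TLS on the JobManager REST endpoint that serves the Web UI.
    security.ssl.rest.enabled: "true"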
Use the following command to determine the name of your Flink deployment:
kubectl get FlinkDeployments
The output of this command should look like this:
NAME            JOB STATUS   LIFECYCLE STATE
my-deployment   RUNNING      STABLE
4. Determine the URL of the JobManager Service
Once requests are authenticated, our reverse proxy will route them to the appropriate Service. To determine the URL of the JobManager Service, you need:
- deployment-name: the name of the deployment, as explained in the previous section,
- namespace: the current namespace, which you can retrieve with the following command:
kubectl config view --minify --output 'jsonpath={..namespace}'
This command outputs the current namespace. Example output:
sp
Then, the URL of the JobManager Service is: https://<deployment-name>-rest.<namespace>.svc.cluster.local:8081
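Optionally, you can check that this internal URL responds from inside the cluster. The command below is only a sketch: it starts a temporary Pod using the curlimages/curl image, you must substitute <deployment-name> and <namespace> yourself, and -k is used because the JobManager presents a certificate the test Pod does not trust.
kubectl run flink-rest-check --rm -it --restart=Never --image=curlimages/curl --command -- curl -k https://<deployment-name>-rest.<namespace>.svc.cluster.local:8081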
5. Configure Nginx with basic authentication
Basic authentication is natively supported by Nginx, which secures access using a .htpasswd file to manage user accounts (username/password) for the Flink Web UI. We need to create this file and store it as a Secret. The Nginx configuration is defined in a ConfigMap. Once these resources are created, we can deploy the reverse proxy.
Step1 - Generate htpasswd
From a working directory, generate the account configuration file .htpasswd containing the user/password pairs, since authentication is managed by Nginx.
The command below creates a user named shen with the password shenPassword9!. The password provided here is just an example and should not be used as an actual password.
htpasswd -bc .htpasswd shen shenPassword9!
Output in the terminal:
Adding password for user shen
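If you need more than one account, you can add further users to the same file; the -c flag is only used when creating the file, so omit it for subsequent users. The user name and password below are, again, just examples.
htpasswd -b .htpasswd anotherUser 'AnotherPassword7!'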
Step2 - Create Secret
Next, create a Secret named htpasswd-secret containing this .htpasswd file, to be mounted in the Nginx Deployment. You can use the command below (run it from the directory containing the .htpasswd file).
kubectl create secret generic htpasswd-secret --from-file=.htpasswd
Output in the terminal:
secret/htpasswd-secret created
Step3 - Create ConfigMap
You’ll need to create a ConfigMap named nginx-config carrying the Nginx configuration.
In this configuration we define a server block using the following directives:
- listen: the port and protocol to use.
- server_name: the primary server name.
- ssl_certificate/ssl_certificate_key: the certificate/key pair used for SSL encryption.
- location: the configuration applied to a requested URI. In our case it's a reverse proxy with basic authentication, so we set the proxy_pass directive to the URL of the Flink JobManager Service (which serves the Web UI), auth_basic to enable basic authentication, and auth_basic_user_file to specify the file that holds the user names and passwords.
Create it using the command below, replacing the <flink-rest-ui-url> placeholder with the JobManager Service URL from the section “Determine the URL of the JobManager Service”:
cat <<EOF | kubectl apply -f -
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 8443 ssl;
      server_name localhost;
      ssl_certificate /etc/nginx/tls/nginx.crt;
      ssl_certificate_key /etc/nginx/tls/nginx.key;
      location / {
        proxy_pass <flink-rest-ui-url>;
        auth_basic "Flink-UI";
        auth_basic_user_file /etc/nginx/.htpasswd;
      }
    }
EOF
Output in the terminal:
configmap/nginx-config created
Step4 - Create the Deployment
Now that you have all the elements needed to create the deployment, proceed with the creation of the Nginx Deployment using the command below:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: shared-volume
              mountPath: /var/cache/nginx
            - name: shared-volume
              mountPath: /var/run
            - name: nginx-config
              mountPath: /etc/nginx/conf.d
            - name: htpasswd-secret
              mountPath: /etc/nginx/.htpasswd
              subPath: .htpasswd
            - name: nginx-tls
              mountPath: /etc/nginx/tls
      volumes:
        - name: shared-volume
          emptyDir: {}
        - name: nginx-config
          configMap:
            name: nginx-config
        - name: nginx-tls
          secret:
            secretName: nginx-tls-secret
        - name: htpasswd-secret
          secret:
            secretName: htpasswd-secret
EOF
Output in the terminal:
deployment.apps/nginx-deployment created
Verification
You can do a quick check of the state of the deployment using the command below to ensure that it is ready. Note that it may take several minutes to become available:
kubectl get deployment nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           50s
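Once the Pod is ready, you can also ask Nginx to validate the mounted configuration; this optional check should report that the configuration syntax is ok.
kubectl exec deploy/nginx-deployment -- nginx -t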
SecurityContext
In an enterprise environment, particularly in production, you should configure the securityContext of the Deployment to align with your organization's security policies.
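As an illustration only, a restricted securityContext for the Nginx container could look like the fragment below. The exact settings depend on your cluster's policies and on the image you use; the writable emptyDir mounts at /var/cache/nginx and /var/run defined above are what allow Nginx to run without root privileges.
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            # On plain K8s, pin a non-root UID (101 is the nginx user in the
            # official image); on OCP, omit runAsUser and let the SCC assign one.
            runAsNonRoot: true
            runAsUser: 101
            seccompProfile:
              type: RuntimeDefault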
6. Connect to the Flink WebUI
To access the Flink Web UI, run the command below for your platform to retrieve its URL:
OCP
echo "https://$(oc get route flink-ui -o jsonpath='{.spec.host}')"
The output of the previous command should resemble the following:
https://flink-ui-mynamespace.mycluster.mydomain.com
K8s
echo "https://$(kubectl get ingress flink-ui-ingress -o jsonpath='{.spec.rules[0].host}')"
The output of the previous command should resemble the following:
https://flink-ui.mycluster.mydomain.com
echo "https://$(kubectl get ingress flink-ui-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
The output of the previous command should resemble the following:
https://9.30.100.30
Now that you have all the pieces in place, visit the URL retrieved above with your browser.
Since we’re using a self-signed certificate, you might get a browser error page telling you that your connection is not private.
You can proceed, as the error is caused by the self-signed certificate, whose CA is not known to the browser. The browser will then show a dialog (see figure below) prompting for a username and password, which will be validated against the content of the .htpasswd file.
Finally, you’ll land on the Flink Web UI authenticated, as shown in the figure below; the experience remains the same as without authentication.
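If you'd rather verify access from the command line, a curl request with basic authentication can serve as a quick check; replace the URL and credentials with your own, and note that -k is needed because of the self-signed certificate. A request without -u should be rejected with a 401 Unauthorized response, confirming that basic authentication is enforced.
curl -k -u 'shen:shenPassword9!' https://flink-ui.mycluster.mydomain.com/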
Conclusion
Securing the Flink Web UI with Basic Authentication ensures that your Flink deployment is both protected and accessible from outside the cluster. Whether you're using OpenShift or Kubernetes, the setup involves creating the necessary resources (Secrets, ConfigMaps, Deployments, Services, Routes/Ingresses) and configuring NGINX as a reverse proxy.
By following the steps outlined in this article, you'll be able to secure your Flink Web UI using Basic Authentication.
Basic authentication is not the ideal solution for production environments. In the last part of this blog series, we will demonstrate how to secure the Flink Web UI using OIDC authentication.
Updates
- 2025/01/24 remove unnecessary property in K8s yaml file in section 1.
Contributors
- Mehdi Deboub
- Anu K T
- Sebastien Pereira
- David Radley