Recreating legacy routes in IBM Cloud Pak for Business Automation 21.0.3 and later

By Jens Engelke posted Fri July 01, 2022 08:17 AM

  
When you upgrade from an earlier version to IBM Cloud Pak for Business Automation 21.0.3 or later, you will notice that your end users now interact with Cloud Pak services through new URLs: all Cloud Pak services share a single hostname and are distinguished by path prefixes.

For example, in a test system with namespace cp4ademo and the default router configured for *.apps.jesample.cp.fyre.ibm.com, your end users were accessing the Process Admin Console of IBM Business Automation Studio at https://bas-cp4ademo.apps.jesample.cp.fyre.ibm.com/ProcessAdmin. In 21.0.3 and later, the same application is available at the common hostname with a path prefix of /bas, for example https://cpd-cp4ba-demo.apps.jesample.cp.fyre.ibm.com/bas/ProcessAdmin.

The Process Admin Console example demonstrates well why the path prefix is required when a common hostname is shared: multiple services provide an application at /ProcessAdmin, namely Business Automation Studio and possibly multiple instances of Workflow Server. These applications must be disambiguated.

Assuming your end users bookmarked the previous URLs, this change is inconvenient: you need to send out a mass email to make all users aware, and some will miss it and raise support tickets.

As a simple work-around, you can expose your own web server at the previous URL and either
  • Let it redirect to the new URL right away, or 
  • Let it display an HTML page announcing the new URL and requiring the user to click a link or wait a few seconds before an automatic redirect navigates to the new URL (a minimal sketch follows this list).
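For the second option, which is not covered further in this post, a minimal static page could look like the following sketch. It is only an illustration; the target URL is the new Business Automation Studio address from the example above, and the five-second delay is arbitrary:

<html>
<head>
  <!-- automatic redirect after 5 seconds; adjust the URL to your environment -->
  <meta http-equiv="refresh" content="5; url=https://cpd-cp4ba-demo.apps.jesample.cp.fyre.ibm.com/bas/ProcessAdmin">
  <title>This application has moved</title>
</head>
<body>
  <p>This application has moved. If you are not redirected automatically, follow
  <a href="https://cpd-cp4ba-demo.apps.jesample.cp.fyre.ibm.com/bas/ProcessAdmin">this link</a>
  and update your bookmark.</p>
</body>
</html>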
This blog post provides a simple example of the first option:

nginx can serve as your own web server in this example. It is freely available and many administrators are familiar with its configuration. 

Create nginx configuration for multiple redirects

Before version 21.0.3, the many services in Cloud Pak for Business Automation were exposed on dedicated hostnames; the exact number depends on the capabilities you selected. Each of these hostnames may have served an application that your users bookmarked, so you need a server that can listen on multiple hostnames and redirect to new target URLs based on the incoming server name.
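If you are unsure which hostnames are in use, you can list the routes in the Cloud Pak namespace before the upgrade; cp4ademo is the namespace assumed throughout this example:

oc get routes -n cp4ademo -o custom-columns=NAME:.metadata.name,HOST:.spec.host

The HOST column contains the hostnames that the redirect server needs to handle.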

Because this configuration must be accessible from an nginx pod running in your cluster, it should be created as a config map:
Create a file 01_configmap.yml 
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    worker_processes  3;
    error_log  /var/log/nginx/error.log;
    events {
      worker_connections  10240;
    }
    http {
      server {
          listen 1080;
          server_name bas-cp4ademo.apps.jesample.cp.fyre.ibm.com;
          return 301 https://cpd-cp4ba-demo.apps.jesample.cp.fyre.ibm.com/bas$request_uri;
      }
      server {
          listen 1080;
          server_name icn-cp4ademo.apps.jesample.cp.fyre.ibm.com;
          return 301 https://cpd-cp4ba-demo.apps.jesample.cp.fyre.ibm.com/icn$request_uri;
      }
    }

and apply it to your cluster:
oc apply -f 01_configmap.yml

Reviewing the nginx.conf section above: it configures multiple server definitions that listen on port 1080 for either

  • server_name=bas-cp4ademo.apps.jesample.cp.fyre.ibm.com or
  • server_name=icn-cp4ademo.apps.jesample.cp.fyre.ibm.com 
TLS configuration is skipped to keep the sample simple. Of course, you can easily create a signer and certificate using cert-manager and later mount the key and certificate from a secret.
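As a sketch of that variant (not used in the rest of this post), a self-signed issuer and a certificate covering the legacy hostnames could be requested from cert-manager like this; the names selfsigned-issuer and nginx-redirect-tls are arbitrary:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nginx-redirect-tls
spec:
  secretName: nginx-redirect-tls
  dnsNames:
    - bas-cp4ademo.apps.jesample.cp.fyre.ibm.com
    - icn-cp4ademo.apps.jesample.cp.fyre.ibm.com
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer

cert-manager stores the key and certificate in the secret nginx-redirect-tls, which you would mount into the nginx pod and reference with ssl_certificate and ssl_certificate_key in a server block that listens with the ssl parameter.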
The only job for these server definitions is to return an HTTP 301 redirect (Moved Permanently) with the new hostname cpd-cp4ba-demo.apps.jesample.cp.fyre.ibm.com, the path prefix (icn or bas in the sample above), and whatever came in as the request URI.

The final result will be visible in the location response header via curl:
$ curl -sik https://bas-cp4ademo.apps.jesample.cp.fyre.ibm.com/ProcessPortal
HTTP/1.1 301 Moved Permanently
server: nginx/1.23.0
date: Fri, 01 Jul 2022 11:34:13 GMT
content-type: text/html
content-length: 169
location: https://cpd-cp4ba-demo.apps.jesample.cp.fyre.ibm.com/bas/ProcessPortal
set-cookie: c58f608a43f2ef2ed29751239f24252a=a68812c3d3d54aadda763c24d52c24fb; path=/; HttpOnly; Secure; SameSite=None
cache-control: private

<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.23.0</center>
</body>
</html>

Create nginx pod

There is a community image of nginx available on Docker Hub; however, this image requires elevated privileges. To run conveniently in OpenShift, the image must support non-root operation the way docker.io/nginxinc/nginx-unprivileged:latest does.

Create a file 02_deployment.yml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-conf
            items:
              - key: nginx.conf
                path: nginx.conf
            defaultMode: 420
        - name: log
          emptyDir: {}
        - name: varrun
          emptyDir: {}
      containers:
        - name: nginx
          image: 'docker.io/nginxinc/nginx-unprivileged:latest'
          ports:
            - containerPort: 1080
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: nginx-conf
              readOnly: true
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: log
              mountPath: /var/log/nginx
            - name: varrun
              mountPath: /var/run
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler


and apply it to your cluster:
oc apply -f 02_deployment.yml

Reviewing the deployment configuration above, 
  • The config map is mounted as volume nginx-conf at /etc/nginx/nginx.conf
  • Ephemeral storage is available for logging and other temporary files: volumes log and varrun
  • For simplicity, there is a single replica, no readiness or liveness probes etc.
  • The non-privileged image exposes port 1080 for plain text HTTP
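Before exposing anything, you can verify that the pod starts cleanly with the mounted configuration. The label selector app=nginx matches the deployment above; the last command inspects the annotation that OpenShift adds to show which security context constraint was assigned:

oc rollout status deployment/nginx
oc logs deployment/nginx
oc describe pod -l app=nginx | grep openshift.io/scc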

Expose the redirect to users

End users cannot access the new pod directly. Its port 1080 is exposed to other resources in the same cluster as a Kubernetes service, which in turn can be exposed as an OpenShift route to outside users.
Create a file 03_service.yml
kind: Service
apiVersion: v1
metadata:
  name: nginx-redirect
spec:
  ports:
    - protocol: TCP
      port: 1080
      targetPort: 1080
  type: LoadBalancer
  sessionAffinity: None
  selector:
    app: nginx

and apply it to your cluster:
oc apply -f 03_service.yml
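You can confirm that the service actually selects the nginx pod by checking its endpoints; an empty ENDPOINTS column would indicate a label mismatch:

oc get service nginx-redirect
oc get endpoints nginx-redirect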

Once the service is available, there must be one route per hostname that is exposed, basically recreating the hostnames that existed in the earlier version. Multiple resources can be concatenated in the same YAML file, delimited by a --- line.
Create a file 04_routes.yml
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: bas-redirect
spec:
  host: bas-cp4ademo.apps.jesample.cp.fyre.ibm.com
  to:
    kind: Service
    name: nginx-redirect
    weight: 100
  port:
    targetPort: 1080
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
---
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: icn-redirect
spec:
  host: icn-cp4ademo.apps.jesample.cp.fyre.ibm.com
  to:
    kind: Service
    name: nginx-redirect
    weight: 100
  port:
    targetPort: 1080
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None

and apply it to your cluster:
oc apply -f 04_routes.yml
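The recreated legacy hostnames can now be listed to confirm that they point to the redirect service:

oc get routes bas-redirect icn-redirect -o custom-columns=NAME:.metadata.name,HOST:.spec.host,SERVICE:.spec.to.name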

Because the nginx pod in this simple configuration exposes unencrypted HTTP only, TLS termination at the route (edge) with insecureEdgeTerminationPolicy: Redirect is a perfect choice for this scenario. Users attempting http:// URLs are redirected to the secure port on the same host before traffic is allowed into the cluster. Only then will nginx respond with its redirect. Omitting the certificate configuration at the route exposes the hostname using the default router's wildcard certificate.
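You can observe this two-step behavior with curl: the router first redirects the plain http:// request to https:// on the same legacy host, and that second request is answered by nginx with the 301 to the new URL. With -L curl follows both redirects, -w '%{url_effective}' prints the final URL, and -k skips validation of the wildcard certificate:

curl -skL -o /dev/null -w '%{url_effective}\n' http://bas-cp4ademo.apps.jesample.cp.fyre.ibm.com/ProcessAdmin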

End users can continue to use bookmarked URLs and are redirected to the current application endpoint. It is probably worthwhile to add an access log to the nginx configuration so that the volume of redirects can be reviewed. It may decrease over time, and this "migration help" can eventually be deprovisioned.
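A minimal way to do that is to extend the http block in the config map with a log_format and an access_log directive; the log file then ends up on the emptyDir volume that is already mounted at /var/log/nginx:

    http {
      # one line per request: timestamp, legacy hostname, request line, response code
      log_format redirects '$time_iso8601 $host "$request" $status';
      access_log /var/log/nginx/access.log redirects;
      # server definitions from above remain unchanged
      ...
    }

oc exec deployment/nginx -- tail /var/log/nginx/access.log then shows which legacy URLs are still being used.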

It is also worth noting that this redirect works well for end users and their bookmarks. If you have custom applications that consume REST APIs, whether the redirect is followed depends on the exact REST client. In particular, when POSTing data to APIs from custom browser-side user interface applications, the redirect may be insufficient.
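curl illustrates the caveat; the API path below is only a placeholder. By default, curl -L follows a 301 but downgrades the POST to a GET, and only re-sends the body as a POST when --post301 is given. Many HTTP client libraries have similar switches:

# redirect is followed, but by default the request arrives at the new URL as a GET
curl -skL -d '{"key":"value"}' https://bas-cp4ademo.apps.jesample.cp.fyre.ibm.com/some/api
# --post301 makes curl re-send the request as a POST instead
curl -skL --post301 -d '{"key":"value"}' https://bas-cp4ademo.apps.jesample.cp.fyre.ibm.com/some/api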