BPM, Workflow, and Case


Federating on-prem BAW from PFS running on containers

By Julien Carnec posted Fri March 17, 2023 10:35 AM


SSO and Federation of on-premise BAW from PFS running on containers

You have these two options to run IBM Business Automation Workflow (BAW) on containers:

  • on OCP, using IBM Cloud Pak for Business Automation (CP4BA)
  • on CNCF Kubernetes, using standalone BAW deployment on containers (BAW-on-container)

With both options, a Kubernetes statefulset is created to run IBM Process Federation Server (PFS). The pods of this statefulset form a PFS cluster which, out of the box, federates the BAW instances running in the same namespace. When using IBM Workplace, this PFS cluster is used as a backend.

It is also possible to use this PFS running on Kubernetes to federate BAW instances which are running on-premises. Doing so allows you to see in Workplace the tasks and process instances coming from both the BAW instances running in the same Kubernetes namespace and the federated on-premise BAW.

Here is a high-level view of the actions required to achieve this:

  • configure the on-premise BAW so that it can be federated
  • configure PFS to declare the on-premise BAW as a federated system
  • configure Workplace so that it can work on tasks and process apps from the on-premise BAW
  • ensure that Single-Sign-On (SSO) is effective between PFS, Workplace and the on-premise BAW

In-depth documentation of these 4 actions can be found here: https://github.com/icp4a/process-federation-server-containers/blob/22.0.2/documentation/Federating-on-premises-BAW.md . This documentation applies to both CP4BA and BAW-on-containers, and the actions mentioned in the first 3 bullets must be performed by following it.

The purpose of this blog post is to focus on the 4th bullet: SSO!

Let's dig into it!

Standalone BAW deployment on containers (CNCF Kubernetes)

First of all, let's talk about BAW-on-containers...

BAW-on-containers also comes with a deployment of IBM User Management Service (UMS), which provides user authentication/authorization and SSO within the BAW-on-containers deployment. As stated in the documentation, to federate an on-premise BAW from a PFS running as part of BAW-on-containers, you must ensure that the on-premise BAW is configured to rely on the same UMS instance as the one deployed with BAW-on-containers. As both the BAW instances running on Kubernetes and the federated on-premise BAW use the same UMS, SSO is guaranteed. Fine.

IBM Cloud Pak for Business Automation

Now, let's have a closer look at CP4BA....

On CP4BA, authentication/authorization and SSO are handled through Platform UI (Zen) tokens. These tokens are JWT auth tokens which:

  • are issued - when a user logs in - by the Zen instance which is deployed as part of the CP4BA deployment,
  • are validated against the same Zen instance that issued them, each time a CP4BA component receives an incoming query whose Authorization HTTP header contains such a JWT token (Bearer).

When you federate an on-premise BAW, the queries originating from CP4BA PFS and CP4BA Workplace will be sent to the on-premise BAW with an Authorization HTTP header containing a Zen JWT auth token, like: Authorization: Bearer <token>
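To make the token structure concrete, here is a small Python sketch that decodes the payload segment of a JWT. The issuer URL and claims below are made up for illustration, and decoding locally like this performs no validation; real validation is done by the OIDC TAI against the Zen instance that issued the token:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying it.
    Real validation is performed against the issuing Zen instance."""
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

def b64url(raw: bytes) -> str:
    """Base64url-encode without padding, as JWT segments are encoded."""
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Illustrative token: the claims are made up, this is not a real Zen token
token = ".".join([
    b64url(b'{"alg":"RS256"}'),
    b64url(b'{"iss":"https://cpd-cp4ba.example.com","sub":"user1"}'),
    "fake-signature",
])

print(jwt_payload(token)["iss"])  # https://cpd-cp4ba.example.com
```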

You must configure your on-premise BAW so that such queries from CP4BA are caught by an OpenId Connect Trust Association Interceptor (OIDC TAI), in order to validate the Zen JWT token from the Authorization header against the CP4BA Zen instance that issued it. This implies providing the Trust Association Interceptor with a filter that only matches queries coming from CP4BA.

Case 1: the existing on-premise BAW uses LDAP for authentication

If your existing on-premise BAW relies on LDAP for user authentication, then the situation is simple:

  • the existing on-premise BAW is configured to validate queries containing basic authentication. When a user logs in, the incoming request contains an Authorization HTTP header which starts with Basic, like: Authorization: Basic <encoded_credentials> . Subsequent queries performed by the same logged-in user contain an LTPA token which takes care of SSO.
  • once this on-premise BAW is federated by CP4BA, CP4BA PFS and CP4BA Workplace will perform requests against the on-premise BAW with an Authorization HTTP header which starts with Bearer, like: Authorization: Bearer <token> .

The filter configured on the OIDC TAI will be as follows: Authorization%=Bearer, meaning that all queries which hold a bearer token will be processed by the OIDC TAI, which in this situation corresponds to all queries from CP4BA. This approach is documented in this technote.
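To illustrate how such a filter discriminates requests, here is a simplified Python model of the filter matching. The real syntax is evaluated by WebSphere itself; only the two operators used in this post are modeled here, assuming '%=' means "header value contains" and '==' means "header value equals":

```python
def tai_filter_matches(headers: dict, rule: str) -> bool:
    """Simplified model of a TAI filter rule of the form '<header><operator><value>'.
    Models only '%=' (value contains) and '==' (value equals)."""
    for op in ("%=", "=="):
        name, sep, value = rule.partition(op)
        if sep:
            actual = headers.get(name, "")
            return value in actual if op == "%=" else actual == value
    return False

# Case 1 filter: any query carrying a bearer token is handed to the OIDC TAI...
print(tai_filter_matches({"Authorization": "Bearer eyJhbGci..."}, "Authorization%=Bearer"))  # True
# ...while a basic-authentication login is not
print(tai_filter_matches({"Authorization": "Basic dXNlcjpwd2Q="}, "Authorization%=Bearer"))  # False
# Case 2, 2nd TAI filter (described below): only queries carrying the custom header match
print(tai_filter_matches({"X-PFS-ID": "DefaultPFS"}, "X-PFS-ID==DefaultPFS"))  # True
```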

Case 2: the existing on-premise BAW already relies on OpenId Connect

If your existing on-premise BAW relies on OpenId Connect for user authentication, then the situation is more complex, because:

  • the existing on-premise BAW is already configured to validate queries containing a bearer token in the Authorization header
  • once this on-premise BAW gets federated by CP4BA, CP4BA PFS and CP4BA Workplace will perform requests against the on-premise BAW which will also contain a bearer token in the Authorization header

The complexity is as follows: how can we configure TAIs so that WebSphere can discriminate incoming queries coming from CP4BA from other queries which also contain a bearer token but should be validated against the original OpenId Connect provider?

The solution will involve 2 TAIs and 1 NGINX server.

1st OIDC TAI: leverage the 'Referer' HTTP header

Queries from CP4BA Workplace contain a Referer HTTP header which holds the hostname used to access Workplace. Therefore, a dedicated OIDC TAI should be created similarly to the one in this technote, except that the OIDC TAI filter should be set to Referer%=<workplace_hostname> .

Don't forget to also add the signer certificate of the Platform UI entry point to the truststore of the on-premise BAW, as described in the technote.

2nd OIDC TAI: set up NGINX to inject a custom HTTP header

PFS also sends queries to the on-premise BAW, but these queries do not contain any Referer HTTP header, so the OIDC TAI above will not catch them. For this purpose, we must:

  • create an NGINX proxy which PFS will use when sending requests to the on-premise BAW,
  • have this NGINX proxy inject a custom HTTP header named X-PFS-ID,
  • create a new OIDC TAI configuration on the on-premise BAW so that queries containing this X-PFS-ID header are filtered in.

Configuring the NGINX proxy

You can choose to run the NGINX proxy on Kubernetes, in the same namespace as the CP4BA deployment, but any deployment option works, as long as the PFS pods can reach the proxy and the proxy can reach the on-premise BAW.
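If you run the proxy on Kubernetes, a minimal sketch could look like the manifests below. All names here (nginx-pfs-proxy, nginx-conf, nginx-service) are hypothetical, and the mounting of the /ssl certificate files into the container is omitted for brevity:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    # Paste here the NGINX configuration shown in the next section
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pfs-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-pfs-proxy
  template:
    metadata:
      labels:
        app: nginx-pfs-proxy
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 1080
          volumeMounts:
            - name: conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            # TLS key and certificate must also be mounted under /ssl (omitted)
      volumes:
        - name: conf
          configMap:
            name: nginx-conf
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx-pfs-proxy
  ports:
    - port: 1080
      targetPort: 1080
```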

Here is the configuration to use for the NGINX proxy:

events { }

http {
  upstream baw_onprem {
    # Replace the following hostname and port number with the actual on-premise BAW hostname and port
    server onprembaw.mydomain.com:9443;
  }

  server {
    # The NGINX proxy is exposed on port 1080
    listen 1080 ssl;

    # Replace with the domain name
    server_name mydomain.com;

    # Example with self-signed certificates, use genuine certificates in production
    # To generate the self-signed cert and key:
    # openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ssl/nginx-selfsigned.key -out ./ssl/nginx-selfsigned.crt
    ssl_certificate /ssl/nginx-selfsigned.crt;
    ssl_certificate_key /ssl/nginx-selfsigned.key;

    location / {
      # Inject the custom HTTP header that the 2nd OIDC TAI filter will match on
      proxy_set_header X-PFS-ID DefaultPFS;
      proxy_set_header Host $host;

      # Forward the raw (still encoded) request URI to the backend;
      # the 'return 400' is only reached if the rewrite did not match
      rewrite ^ $request_uri;
      rewrite ^/(.*) $1 break;
      return 400;
      proxy_pass https://baw_onprem/$uri;

      # Replace:
      # 'onprembaw.mydomain.com:9443' with the actual on-premise BAW hostname and port
      # 'nginx-service' with the hostname to use to reach the NGINX proxy
      proxy_redirect https://onprembaw.mydomain.com:9443/ https://nginx-service:1080/;

      # Example only: skip backend certificate verification, enable it in production
      proxy_ssl_verify off;
    }
  }
}
When configuring PFS to federate the on-premise BAW, you will have to reference the NGINX proxy URL in the internalRestUrlPrefix property of the <ibmPfs_bpdRetriever> configuration tag, e.g. internalRestUrlPrefix="https://nginx-service:1080/rest/bpm/wle"
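As a sketch, the resulting fragment of the PFS Liberty configuration might look as follows. The other attributes of <ibmPfs_bpdRetriever>, which depend on your federation setup, are deliberately omitted here and must be taken from the federation documentation linked above:

```xml
<server>
    <!-- Sketch only: the other required attributes of this tag are omitted.
         internalRestUrlPrefix now points at the NGINX proxy instead of
         pointing directly at the on-premise BAW. -->
    <ibmPfs_bpdRetriever
        internalRestUrlPrefix="https://nginx-service:1080/rest/bpm/wle" />
</server>
```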

Configuring the OIDC TAI

Once the NGINX proxy is up and running, and PFS is configured to send requests through it, you can easily create a new OIDC TAI configuration identical to the first one, except for the filter, which will be as follows: X-PFS-ID==DefaultPFS


At this stage, if you followed the guidance from this blog post, SSO should be effective between your on-premise BAW and the PFS running on Kubernetes or OCP. You should now double-check with the official documentation that you did not miss any other configuration action. And if you are all set:

  • tasks and process instances from the on-premise BAW should now be visible in Workplace,
  • you should be able to start workflow applications which are hosted by the on-premise BAW from Workplace.