How to configure ODM on K8s OIDC with Azure AD

By XIAO HUA LIU posted Mon April 18, 2022 01:27 PM


For an ODM deployment in a public cloud environment, many companies will not expose their LDAP service outside the enterprise firewall for security reasons. Instead, user authentication is managed through an identity provider such as Okta or Azure AD. ODM can be configured to authenticate users against these providers using the OpenID Connect (OIDC) protocol. This feature is only available when ODM runs on WebSphere Liberty Server (WLP).

As a Java web application, ODM uses the OpenID Connect Client feature provided by WLP to authenticate users, acting as a relying party as defined in the OIDC standard. So the OIDC support in ODM is mostly centered on WLP server configuration. At the same time, when one ODM component needs to authenticate to another ODM component (such as Rule Designer to RES, or Decision Center to RES) for REST API calls, the calling component acts as an OIDC client, so there are also some non-Liberty settings to configure on the ODM side.

Although the concepts generally apply to all OIDC identity providers (IDPs), each provider can require a specific setup. In this document, we focus on Azure AD.

Configuration Steps

The integration involves setup on both the IDP side and the ODM (including WLP) side to enable users to work with the deployed services. You also need additional settings on the Rule Designer side to use OpenID; this document does not cover that part, and you can find more information in the product documentation.

Configuration in Azure AD

The way to prepare the IDP varies; most providers support both a UI and an API to do so. Generally, it involves the following parts:
1) Register the ODM app in your IDP, so the IDP will provide the authentication service for ODM.

Register the app in Azure AD. After this step you have:
  • a Client ID string,
  • a Client Secret string, and
  • a list of Azure endpoint URLs that will be used by the ODM configuration.
2) Specify a list of redirect URLs; the IDP will only perform the authentication if the authorization request's redirect URL matches one of the registered redirect URLs.
The ODM redirect URLs are listed in the product documentation on OpenID Connect.
Note that you should register several URLs for Rule Designer, to avoid port conflicts on the desktop that runs Rule Designer.
3) Configure the attributes (called claims in OIDC terms) included in an ID token. Besides the default claims, the token should contain two important claims that carry the user identity and group information; ODM uses this information to identify a user.
Also, Azure AD can return different versions of tokens; make sure you configure the app to always return V2 tokens, as a V1 token uses a different issuer name.
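The Azure AD v2.0 endpoint URLs handed out at registration time all follow a fixed pattern derived from your tenant ID. As a quick sketch (the tenant ID below is a placeholder), you can construct them yourself:

```shell
# Build the Azure AD v2.0 endpoint URLs from a tenant ID (placeholder value).
TENANT_ID="11111111-2222-3333-4444-555555555555"
BASE="https://login.microsoftonline.com/${TENANT_ID}"

echo "Discovery: ${BASE}/v2.0/.well-known/openid-configuration"
echo "Authorize: ${BASE}/oauth2/v2.0/authorize"
echo "Token:     ${BASE}/oauth2/v2.0/token"
```

The discovery URL is the most useful one: it returns a JSON document listing all the other endpoints and the token signing keys.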

Configuration for ODM Containers

For the ODM-side configuration, you can read the ODM on premise documentation on OpenID Connect to understand the complete ODM OIDC configuration.
Most of the setup is already provided in the ODM container; you will need to provide the following information to complete it.
All content in <> is a placeholder to be replaced with the actual values.
1) Create a secret to store the Client ID and Client Secret you obtained from the Azure ODM app registration in the previous step:
kubectl create secret generic <my_openid_clientid_secret> \
  --from-literal=clientId=<YOUR_CLIENT_ID> --from-literal=clientSecret=<YOUR_CLIENT_SECRET>
This secret should be passed as the oidc.clientRef parameter for the Helm installation.
2) Create the web secret, which contains the majority of the OpenID setup. This step requires creating three files to store in the secret.
a) Create openIdWebSecurity.xml, which defines two openidConnectClient elements:
 <variable name="ServerHost" value="<YOUR_TENANT_ID>"/>
 <!-- OpenID Connect -->
 <!-- Client with inbound propagation set to supported -->
 <openidConnectClient authFilterRef="browserAuthFilter" id="odm" scope="openid" accessTokenInLtpaCookie="true"
                      clientId="__OPENID_CLIENT_ID__" clientSecret="__OPENID_CLIENT_SECRET__"
                      discoveryEndpointUrl="https://login.microsoftonline.com/${ServerHost}/v2.0/.well-known/openid-configuration"
                      realmName="azure" signatureAlgorithm="RS256" inboundPropagation="supported"
                      userIdentifier="email" groupIdentifier="groups" audiences="ALL_AUDIENCES"/>

 <!-- Client with inbound propagation set to required -->
 <openidConnectClient authFilterRef="apiAuthFilter" id="odmapi" scope="openid"
                      clientId="__OPENID_CLIENT_ID__" clientSecret="__OPENID_CLIENT_SECRET__"
                      discoveryEndpointUrl="https://login.microsoftonline.com/${ServerHost}/v2.0/.well-known/openid-configuration"
                      realmName="azure" signatureAlgorithm="RS256" inboundPropagation="required"
                      userIdentifier="userid" groupIdentifier="groups" audiences="ALL_AUDIENCES"/>
In the above example configuration, we expect your Azure ID token to contain the userid and groups claims: userid is used as the user identifier for API calls (the browser client uses email) and groups as the group identifier. You can change these attributes if your Azure ID token uses other claims.
The two openidConnectClient elements are very similar except for the inboundPropagation value. When this value is supported, the OAuth 2.0 bearer access token in the request is optional; if the request does not contain an access token, or the access token is invalid, WLP redirects the user to the IDP. This configuration applies to all ODM browser URLs matched by the predefined browserAuthFilter authFilter element. When the value is required, the request must contain a bearer access token, as WLP will not redirect the user to the IDP. This configuration applies to all ODM REST API URLs matched by the predefined apiAuthFilter authFilter element.
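For reference, an authFilter in Liberty simply maps URL patterns to a client configuration. The two filters named above are predefined in the ODM container; the following is only a hypothetical sketch of what such definitions look like (the URL patterns here are illustrative, not the actual ones shipped with ODM):

```xml
<authFilter id="browserAuthFilter">
  <!-- Match interactive browser paths (illustrative pattern) -->
  <requestUrl id="browserRequestUrl" urlPattern="/decisioncenter|/res" matchType="contains"/>
</authFilter>

<authFilter id="apiAuthFilter">
  <!-- Match REST API paths (illustrative pattern) -->
  <requestUrl id="apiRequestUrl" urlPattern="/rest" matchType="contains"/>
</authFilter>
```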
b) Create openIdParameters.properties for the ODM REST API configuration.
c) Create webSecurity.xml for user and group mapping.
User/role mapping defines how to map the Azure users and groups to ODM roles. Note that you can define at most 6 users and 6 groups per ODM role.
A sample is:
      <basicRegistry id="basic" realm="basic">
        <user name="resExecutor" password="resExecutor"/>
        <group name="basicResExecutors">
          <member name="resExecutor"/>
        </group>
      </basicRegistry>

      <variable name="odm.resExecutors.group1" value="group:basic/basicResExecutors"/>
      <variable name="odm.resExecutors.group2" value="group:azure/TaskAdmins"/>
      <variable name="odm.resAdministrators.group1" value="group:azure/resAdmins"/>
      <variable name="odm.resAdministrators.user1" value="user:azure/uid=user1,dc=example,dc=com"/>
      <variable name="odm.rtsConfigmanagers.group1" value="group:azure/rtsConfigs"/>
      <variable name="odm.rtsAdministrators.group1" value="group:azure/rtsAdmins"/>
Here the users and groups from Azure are referred to using user:azure or group:azure; they correspond to the realmName="azure" attribute defined in the openidConnectClient elements in openIdWebSecurity.xml.
Including the basicRegistry definition is optional; it allows you to skip OpenID token validation when invoking the ODM DecisionService REST API, and can be used when rule execution performance is a concern.
d) Create the secret with the following command:
kubectl create secret generic <azure-websecurity-secret> \
  --from-file=webSecurity.xml=webSecurity.xml \
  --from-file=openIdWebSecurity.xml=openIdWebSecurity.xml \
  --from-file=openIdParameters.properties=openIdParameters.properties
This secret should be passed as the customization.authSecretRef parameter for the Helm installation.
3) Import the Azure AD server certificate

We use keytool to get the Azure HTTPS certificate and create a secret from it. At the time of this writing, the certificate used by Azure AD HTTPS communication varies depending on which actual server you are redirected to, but they all share the same CA certificate, so we will use the CA certificate of the Azure AD certificate chain instead.
a) Extract the CA certificate from Azure AD:
keytool -printcert -sslserver login.microsoftonline.com:443 -rfc > azure.crt
awk 'BEGIN {c=0;} /BEGIN CERT/{c++} { print > "azure-cert." c ".crt"}' < azure.crt
The azure-cert.2.crt file contains the CA cert.
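If you want to sanity-check how the awk one-liner splits a chain, here is a self-contained sketch using a synthetic two-block chain (the file names and PEM contents are dummies for illustration):

```shell
# Create a synthetic chain of two PEM blocks, then split it like the command above.
printf '%s\n' \
  '-----BEGIN CERTIFICATE-----' 'LEAF' '-----END CERTIFICATE-----' \
  '-----BEGIN CERTIFICATE-----' 'ISSUER' '-----END CERTIFICATE-----' > chain.crt

# The counter c increments on each "BEGIN CERT" line, so every block
# (including its BEGIN/END markers) lands in its own numbered file.
awk 'BEGIN {c=0;} /BEGIN CERT/{c++} { print > "demo-cert." c ".crt"}' < chain.crt

# demo-cert.1.crt holds the first (leaf) block, demo-cert.2.crt the second (issuer) block.
grep ISSUER demo-cert.2.crt
```

With the real azure.crt, the second block is the issuing CA certificate that the subsequent step stores in a secret.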

b) Create a secret to store the certificate:
kubectl create secret generic azure-root-cert --from-file=tls.crt=azure-cert.2.crt
Then add the secret to customization.trustedCertificateList (this parameter accepts a list of certificate secrets) when running helm install.
4) Set the OIDC flags
Last but not least, set oidc.enabled to true and oidc.provider="azure" when running helm install.

Helm parameters to set for OIDC enablement

Below is a summary of the Helm parameters you need to set when running helm install:
helm install $RELNAME ibmcharts/ibm-odm-prod --version $VERSION \
  --set oidc.enabled="true" \
  --set oidc.clientRef="<my_openid_clientid_secret>" \
  --set oidc.provider="azure" \
  --set customization.authSecretRef="<azure-websecurity-secret>" \
  --set customization.trustedCertificateList={"<azure-root-cert>"}

Using http(s) proxy

If your ODM runs in a private VPC and needs to reach any external network via an HTTPS proxy, you need to:
1) Define the HTTP proxy in the JVM options.
Create a jvm.options file with the following content:
# your HTTP proxy server host name
-Dhttps.proxyHost=<YOUR_PROXY_HOST>
# your proxy server port
-Dhttps.proxyPort=443
Create a config map with the file content:
kubectl create configmap odm-dsc-jvm-options-configmap --from-file=jvm.options
Then set the config map in the decisionCenter.jvmOptionsRef parameter when running helm install.
Similarly, you should create a config map and set the jvmOptionsRef parameter for each of the other ODM components.

2) Add the following attribute inside both openidConnectClient elements in openIdWebSecurity.xml, so that Liberty's OIDC client uses the JVM proxy system properties defined above:
useSystemPropertiesForHttpClientConnections="true"

Troubleshooting

For any issue in the login process, the following information can help you diagnose the cause. The WebSphere SSO must gather document (select Liberty and OpenID Connect in the doc) contains more details.
1) Use the Chrome dev tools network tab to verify that the authorization requests are sent to the right URLs.
2) Use a JWT decoder (for example, https://jwt.ms) to decode the ID token that is returned by the OIDC provider.

3) Gather server-side OIDC trace information. Here we use the DC pod as an example; the same applies to the other ODM pods.
During the Helm setup stage, you can enable the OIDC security trace.
First, create a logging.xml file with the following content:


<server>
  <!-- Typical OIDC trace specification; see the SSO must gather doc for the full recommended string -->
  <logging traceFileName="trace.log" traceFormat="BASIC"
           traceSpecification="*=info:com.ibm.ws.security.openidconnect.*=all:com.ibm.ws.security.oauth*=all"/>
</server>
Then create a config map from logging.xml:
kubectl create configmap odm-dc-logging-configmap --from-file=dc-logging=logging.xml
This config map should be passed as the decisionCenter.loggingRef parameter for the Helm installation.
As the OIDC trace string quickly generates a lot of output during the login process, the above server configuration sets the destination of the trace to a regular folder inside the pod instead of the standard Kubernetes log. You should revert to the default setting afterwards.
You can use the following commands to get the logs and server configs from an ODM pod:
rm -rf logs ||true
mkdir logs
kubectl cp $POD:/logs ./logs
ls -la ./logs
tar cvzf logs.tar.gz ./logs
rm -rf config ||true
mkdir config
kubectl exec $POD  -- tar cf - --exclude='apps' --exclude='resources' /opt/ibm/wlp/usr/servers/defaultServer/ | tar xf - -C ./config

ls -la ./config/opt/ibm/wlp/usr/servers/defaultServer
tar cvzf config.tar.gz ./config/opt/ibm/wlp/usr/servers/defaultServer
If you still cannot figure out the issue, you can open a support ticket with us and provide the above info, as well as a few screen captures covering the login process.


References

• OpenID Connect standard
• Open Liberty openidConnectClient configuration
• Element openidConnectClient in WLP server.xml
• ODM on premise documentation on OpenID Connect
• WebSphere SSO must gather