Decision Management (ODM, ADS)


Building Automated CI/CD Pipelines for IBM ODM

By Peter Warde posted Wed October 05, 2022 07:34 AM

  

IBM Operational Decision Manager (ODM) is one of the best products on the market today because it allows you to easily change the business logic of your applications and deploy it rapidly into Production.

In this article I look at how you can deliver those changes with Continuous Integration and Continuous Delivery (CI/CD). In the first part I give a brief overview of CI/CD, and in the second I outline an ODM pipeline. Finally I show why you should use ODM Decision Center with CI/CD, before wrapping up with some recommendations.

CI/CD Overview

Continuous Integration and Continuous Delivery (CI/CD) is a widely accepted practice that enables DevOps teams to assemble a pipeline of build, test and deployment stages, and deliver software in shorter cycles and quicker turn-around times to end-users.



A CI/CD pipeline typically consists of the following stages:

  • Continuous Integration - developers frequently merge and commit code on branches to a shared mainline in a Source Control System (SCS).
  • Continuous Build - a change to the mainline is detected by a CI Server and compiled and packaged using a build tool.
  • Continuous Test - after every successful build the compiled code is tested for bugs and quality.
  • Continuous Deployment - the build is deployed and tested across environments as part of a release plan.

When the CI part of the pipeline is automated, builds and tests happen very quickly. When a failure occurs, all relevant parties are notified via dashboards and messages. Developers must then use reports and logs to identify the problem and carry out the fixes.

When a build and test is successful, the compiled code is uploaded to a Build Repository as a binary. Testers then use tools to deploy the build as a release. At each environment it must pass tests to be promoted to the next and each promotion increases the likelihood it will be put into Production.

At any point in the pipeline a build must be traceable back to its source in the SCS. For example, the manifest created when packaging a Java archive (JAR) can carry entries that reference its history and source code in the SCS.
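For illustration, the traceability entries in a JAR manifest might look like this (attribute names and values are invented for the example; in practice your build tool stamps them in):

```text
Manifest-Version: 1.0
Built-By: jenkins
Build-Timestamp: 2022-09-30T14:22:05Z
SCM-Repository: https://git.example.com/loans/rules-app.git
SCM-Revision: 4f2c9a1
SCM-Branch: release/2.3
```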


Traceability is an essential feature of code development. Without it, when things go wrong, it is not possible to know where in the code to start an investigation and find the bugs to fix, and you cannot compare two versions of the code in production.

ODM and CI/CD

So can ODM ruleset extraction and deployment to Rule Execution Server be automated? And where and how does ODM fit into the CI/CD picture?


The good news is that CI/CD can be used to deliver ODM rulesets. You can design and build pipelines to automate ruleset extraction and deployment, but before you do, you should take the following features of ODM into consideration as part of your approach.

ODM rules are not code


ODM sources are ruleflows, business rules, business vocabulary and business models which are extracted from a rule repository as rulesets and then packaged as ruleApps.

It is hard for DevOps teams to get their heads round this as they are habituated to dealing with code, but to be absolutely clear, ODM RULES ARE NOT CODE. They are concise declarative statements about the business, written in business language using terms and facts, authored by rule authors, not developed by developers.



Only rules written in business language can be managed and approved by business users, making them business rules and vocabulary "under business jurisdiction" (OMG SBVR) and not code under IT jurisdiction.




ODM operating model


ODM has its own operating model with defined roles and activities:

  • Business Users - rule authors and policy managers who create, maintain and test business rule logic. They are the owners of the rules and deliver rulesets.
  • IT Users - architects and developers who create ODM rule projects and deliver rule applications.



IT users use the Eclipse-based development environment Rule Designer to create and publish rule projects, and deliver supporting applications. Business users use the online web application Decision Center to author and manage the rules, and Decision Runner to test them.



ODM rule changes


Business rules change over time in two ways:

  • Changes to rules, including creation and retirement.
  • Changes to data resulting from the creation of new Terms and Facts in the rules.

Changes involving only rules can be carried out in Decision Center and deployed as a business release. They have no impact on clients.


Changes to data impact all clients, because it is the clients that provide the data used to invoke the rules. These changes must be carried out by developers using Rule Designer and deployed as an IT release. They must also be synchronized with Decision Center by merging and resolving all differences in the rule repository.



ODM and CI/CD Design


A good starting point for your CI/CD pipeline design is to follow the operating model and create two pipelines:

  • A business release pipeline for the extraction, test and deployment of rulesets from Decision Center.
  • An IT release pipeline for the build and test of all application code, the extraction, test and deployment of rulesets and the synchronization between Rule Designer and Decision Center.

With the business pipeline, rulesets can be deployed directly to Staging as there is no impact on clients. With the IT pipeline, all builds must be deployed to SIT.

The Business Release Pipeline

Decision Center and Rule Execution Server have the following out-of-the-box (OOTB) features you can use to help you create the business pipeline.

Continuous Integration

For continuous integration Decision Center has an in-built rule repository with full version control. It allows Rule Authors and Policy Managers to safely plan and carry out changes to rules on separate branches before integrating them with a release branch.




Decision Center also has an OOTB automated governance framework to control and govern rule changes and releases.



To build each stage of the pipeline ODM offers a number of technologies. You can use the Decision Center Ant tasks or Java APIs, but Ant involves a lot of boilerplate and the Java APIs require specialist knowledge. The best choice is to use the Decision Center and Rule Execution Server REST APIs. They provide most of the functionality you need to build the pipeline.


Extract and Test Stage


To extract and test an ODM ruleset you can use the test suites operation of the Decision Center Build REST APIs.


The operation uses an existing Decision Center Test Suites configuration, which defines the ruleset to test, the test scenarios to run and the Decision Runner to run the tests.




In one request it extracts the ruleset, runs the setup, test and teardown cycle, and creates a test report.




You can parse the response to show the status of the operation on dashboards and as the input to subsequent actions. The test report can be used as the starting point for investigations and carrying out bug fixes.
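As a sketch of what this stage can look like when scripted, the snippet below triggers a test-suite run and reduces the report to the pass/fail signal the pipeline acts on. The host, resource path and report field names are illustrative assumptions, not the documented contract; check the REST API reference (Swagger UI) of your ODM version for the exact operation and payload.

```python
import json
import urllib.request

# Hypothetical Decision Center host -- replace with your own.
DC_API = "https://decision-center.example.com/decisioncenter-api/v1"

def run_test_suite(suite_id: str, token: str) -> dict:
    """Trigger a Decision Center test-suite run and return the JSON report.
    The resource path below is an assumption -- verify it for your version."""
    req = urllib.request.Request(
        f"{DC_API}/testsuites/{suite_id}/run",
        method="POST",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def verdict(report: dict) -> dict:
    """Reduce a test report to the pass/fail signal the pipeline acts on.
    The counter field names are assumptions for this sketch."""
    problems = report.get("numberOfFailures", 0) + report.get("numberOfErrors", 0)
    return {"passed": problems == 0, "problems": problems}
```

A CI server would publish the full report to its dashboard and gate the deployment stage on `verdict(report)["passed"]`.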

Deployment Stage


To automate the build and deployment of a ruleset from Decision Center to Rule Execution Server you can use the deployments operation of the Decision Center Build REST APIs.


The operation uses an existing Deployment Configuration which defines the ruleset to be extracted and one or more target Rule Execution Servers.


Decision Center provides permissions at server level, so you can safely control who can deploy where, and deployments are hot, so there is no need to restart anything.



You can parse the response to show the status of the deployment on dashboards and as the input to subsequent actions. The Deployment Report can be used as the starting point for investigations and carrying out fixes.
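Scripted, the deployment stage can follow the same request-and-parse pattern. In this sketch the resource path and the fields of the Deployment Report are assumptions for illustration; verify them against the Decision Center REST API reference for your version.

```python
import json
import urllib.request

# Hypothetical Decision Center host -- replace with your own.
DC_API = "https://decision-center.example.com/decisioncenter-api/v1"

def deploy(deployment_id: str, token: str) -> dict:
    """Run an existing Decision Center Deployment Configuration and
    return the Deployment Report (path is an assumption for this sketch)."""
    req = urllib.request.Request(
        f"{DC_API}/deployments/{deployment_id}/deploy",
        method="POST",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def deployment_ok(report: dict) -> bool:
    """True when the report shows a clean deployment (field names assumed)."""
    return report.get("status", "").upper() == "COMPLETED" and not report.get("errors")
```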

Note: For situations where a firewall blocks an HTTP request, you can use the download operation to get the ruleset onto the file system, and then move it across the firewall and deploy it as shown in the Re-Deploy Stage.



Re-Deploy Stage


Extracting a ruleset from Decision Center for each Rule Execution Server in the pipeline is not only inefficient, it also means creating a ruleset that is potentially different each time. It is therefore better to redeploy a ruleset from one Rule Execution Server to another.


You can achieve redeployment from one Rule Execution Server to another in two steps. First use the getRuleAppArchive operation of the Rule Execution Server REST APIs to download the ruleApp archive to the file system (or, for a single ruleset, getRulesetArchive), and then deploy the archive to the next Rule Execution Server using either the deployRuleApps or addRuleset operation.
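A minimal sketch of the two steps, with the Rule Execution Server management URL shape used purely as a placeholder (the real paths depend on your RES console context root and version):

```python
import urllib.request

def res_url(host: str, ruleapp: str, version: str) -> str:
    """Build a management URL for a ruleApp archive (path shape is assumed)."""
    return f"https://{host}/res/api/v1/ruleapps/{ruleapp}/{version}/archive"

def redeploy(src_host: str, dst_host: str, ruleapp: str, version: str) -> None:
    """Download the ruleApp archive from one RES and push it to another,
    mirroring the getRuleAppArchive -> deployRuleApps sequence."""
    with urllib.request.urlopen(res_url(src_host, ruleapp, version)) as resp:
        archive = resp.read()
    req = urllib.request.Request(
        res_url(dst_host, ruleapp, version),
        data=archive,
        method="POST",
        headers={"Content-Type": "application/octet-stream"},
    )
    urllib.request.urlopen(req)
```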



Undeploy Stage


You will need the capability in your pipeline to roll back a deployment when something goes wrong. For this you can use either the deleteRuleApp or deleteRuleset operation.
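As a sketch, a rollback step can be a single DELETE against the deployed resource, plus a small helper to decide which version to remove. The URL shape and version scheme are assumptions for illustration; whether you call deleteRuleApp or deleteRuleset depends on what was deployed.

```python
import urllib.request

def undeploy(host: str, ruleapp: str, version: str) -> None:
    """Remove a deployed ruleApp, mirroring the deleteRuleApp operation.
    The URL shape is a placeholder -- check your RES REST API reference."""
    req = urllib.request.Request(
        f"https://{host}/res/api/v1/ruleapps/{ruleapp}/{version}",
        method="DELETE",
    )
    urllib.request.urlopen(req)

def rollback_target(deployed):
    """Pick the most recent deployment to remove, comparing version
    strings numerically part by part ('1.10' ranks above '1.2')."""
    if not deployed:
        return None
    return max(deployed, key=lambda v: tuple(int(p) for p in v.split(".")))
```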


The IT Pipeline


The IT pipeline is employed for all changes to the business rules application. The most common changes are to rule project artifacts such as the BOM, ruleflows, ruleset variables and parameters. They may also involve a modification to a Java code component. These kinds of changes can only be made in Rule Designer, and they must be designed by architects and carried out by developers.

A change to the BOM will typically result from a change to the XOM. Changes to the XOM impact all integrations, as clients interact with a ruleset at runtime using either Hosted Transparent Decision Services (HTDS) or the IlrSessionRequest and IlrSessionResponse Java classes.




Synchronization Stage


When rule projects and the artifacts they contain are changed in Rule Designer, they must be synchronized with the Decision Center repository. Synchronization merges and resolves the differences between the two environments.

You can automate the synchronization stage using the following ODM Ant task:

ant synchronize -Ddata=<C:/workspace> -DprojectName=<Project name> -Daction=<publish|retrieve|disconnect> [-Doverride=true|false] [-Dselector=query] [-Dbranch=branch]

However, semantic conflicts, such as a renamed rule or business term, cannot be resolved by automation, so this step is perhaps better done manually.

After synchronization, an IT release can be deployed from Decision Center, together with all the Rule Designer code, to the SIT/UAT environments for testing.


Application Stage


The business rules application stage handles all code changes. Depending on your application design, these may include Java XOMs with helper methods and JAXB annotations to mould and shape the HTDS web service interfaces.

One of the great things about ODM is that with the right skills you can build rich, high-quality business rule applications. When you do so you will use more advanced ODM features such as exception handlers, interceptors, authoring extensions and asynchronous integration with Decision Warehouse. If you have runtime versioning requirements for ruleset execution you will almost certainly want to build a loosely-coupled rule integration component.

All these will be part of the application stage of your IT pipeline. And as they are code a CI Server can build them using standard Java tools such as Maven Plugins, and test them using JUnit. Where you have services and database connections they can be mocked and stubbed. You can use a Maven repository for dependencies and releases.

If you are already doing CI/CD then this should all be fairly standard to you so I won't go any further with this here.

Ruleset Deployment Stage


After synchronization the ruleset must be deployed for integration testing with the business rules application. For this you can re-use the Business Release pipeline, with SIT rather than Staging as the deployment target environment.


Putting it All Together

Building a pipeline is a development activity. You will need to create configurations, write scripts, invoke services, produce code to parse responses, handle all dependencies and test everything. You will need to install software across multiple environments. It is resource intensive and will take considerable effort to get it going.

For ease of use you should develop some tools. You should mavenize ODM from the outset by placing ODM dependencies in a Maven repository. You should wrap all REST invocations with Maven plugins or command-line (CLI) tools so that you can integrate each stage with a CI Server and deployment tools. ODM Accelerator provides a framework and a set of DevOps tools and Maven plugins for mavenizing ODM and creating ODM Maven plugins and CLI tools. It will greatly accelerate the development of your pipeline.

When designing your pipeline you should be aware of some of the pitfalls that can occur. One of the many moving parts of an ODM pipeline is ruleset versioning. How ruleApps, rulesets, resources, and libraries are versioned on deployment is determined by the Versioning Policy. At runtime, the ruleset version selected for execution is determined by the ruleset path. The interplay between the two can make deployment and re-deployment particularly complex when a ruleset has many versions and the client request uses a dynamic ruleset path, i.e. one with empty elements or a wildcard (*).
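To make that interplay concrete: a ruleset path has the shape /ruleAppName/ruleAppVersion/rulesetName/rulesetVersion, and when the client omits a version element the highest deployed version is selected. The helper below mimics that selection rule so you can reason about which version a dynamic path will resolve to (a sketch of the rule as described here, not the product's implementation):

```python
def highest_version(versions):
    """Return the highest version, comparing numerically part by part
    so that '1.10' ranks above '1.2' (as plain string sorting would not)."""
    return max(versions, key=lambda v: tuple(int(p) for p in v.split(".")))

def resolve(path, deployed):
    """Resolve a dynamic ruleset path like '/loanApp/1.0/eligibility'
    (ruleset version omitted) against a {rulesetName: [versions]} map."""
    parts = [p for p in path.strip("/").split("/") if p]
    app, app_ver, ruleset = parts[0], parts[1], parts[2]
    version = parts[3] if len(parts) > 3 else highest_version(deployed[ruleset])
    return f"/{app}/{app_ver}/{ruleset}/{version}"
```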

For ruleset traceability you should use the OOTB Decision Center snapshot facility. With each deployment a snapshot URL is created and added to the ruleset as a property (with a test it is added to the test report).



With these you can find the rules that were tested or the rules of a ruleset with failures or errors in the pipeline. You can also produce a Business Rule Report and audit a ruleset in production.

Finally you can improve your pipeline by using the Decision Center REST API webhooks. For example you can use a webhook to detect a merge between branches and trigger tests, deployments and notifications. Using webhooks you can create a highly automated pipeline.
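A minimal sketch of the webhook idea, assuming a JSON payload with event and branch fields (the real payload schema is defined by the Decision Center webhooks documentation for your version):

```python
import json

def should_trigger(payload: dict) -> bool:
    """Decide whether a Decision Center webhook event should kick off the
    pipeline: here, any merge into a release branch (field names assumed)."""
    return (payload.get("event") == "merge"
            and payload.get("targetBranch", "").startswith("release/"))

def handle(raw: bytes) -> str:
    """Entry point a small HTTP receiver would call with the request body."""
    payload = json.loads(raw)
    return "trigger-pipeline" if should_trigger(payload) else "ignore"
```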


Other Approaches

Over the years IBM has offered a number of ways to automate ODM outside of Decision Center. For example there is the Rule Designer build automation tool and the automated deployment of decision services. The most recent is the Build Command, which builds a ruleset directly from rule projects in a Git repository.

Side-stepping the ODM operating model as defined in the documentation is tempting to DevOps teams. They are habituated to coding in integrated development environments (IDE) such as Eclipse, and using GIT and JUnit. They want to standardize builds and deployments.

However these options are not good choices.

  • They do not support traceability. The Build Command does not add the Git SCM revision number to a ruleset. When things go wrong you will not be able to track down bugs in the rules and know what to fix.
  • There is no tool for automating Decision Runner tests outside of Decision Center.
  • If you store your rulesets in a Maven repository there are incompatibilities with Rule Execution Server.
  • Building from Rule Designer using tools such as the Build Command smells of using rules as code.


The last point is perhaps the most compelling reason not to use any of these approaches. As has previously been said, rules are authored by rule authors, not developed by developers, and if you use Rule Designer to create them you are sidestepping the ODM operating model. Put simply, if your developers are creating the rules in Rule Designer then you are not using ODM correctly.

This might sound controversial because many organizations are doing just that. But it is a simple fact that rules created and managed in Rule Designer by developers are under IT jurisdiction. They are IT rules, not business rules, and as such should be implemented in code. Only rules in ODM Decision Center can be "under business jurisdiction". It is online and available to the business community.


Note


IBM recommends that you set traceability either by using a property in Decision Center, or in Rule Execution Server using the UI or the RuleApp Management API.


Whilst Decision Center is a very good option, as the ruleset creation and traceability are done as a single action, using Rule Execution Server or the RuleApp Management API is definitely not, and should not be done. Creating the ruleset in Rule Designer and subsequently adding the SCM revision number in Rule Execution Server becomes a two-step process. Omission or failure to add the revision number will result in a ruleset going into production without traceability. And without traceability you cannot provide an audit of the rules in any stage of the pipeline, including production, and you cannot compare a rule or a decision table in different versions of a ruleset.

If you really need to create your ruleset from Rule Designer, use the ODM Accelerator Maven Build Command. It adds the Git SCM revision number to a ruleset as part of the ruleset creation in Rule Designer. It provides the missing piece of the deployment puzzle.

Recommendations

Use the standard ODM operating model

Use the standard operating model as defined in the documentation as the basis for your ODM CI/CD, with Decision Center for business users and Rule Designer for IT users. Empower the business to directly manage their business logic in Decision Center. They, after all, understand it best.


Build a set of tools

Do mavenize ODM from the outset by placing ODM dependencies in a Maven repository. Build all your tools around the ODM REST APIs. Future proof everything by versioning it with the ODM version. Doing so will enable you to safely upgrade.


Get the right expertise in

ODM is not just another technology. It requires expertise and experience. You will need good ODM experts who can guide your DevOps teams through the process of building, testing and deploying rulesets, and building business rule applications.


Be realistic about the effort required

Building a pipeline is a development activity. You will need to write scripts, invoke services, produce code, handle dependencies and test everything. It is resource-intensive and will take some considerable effort to get it going.

About

Peter Warde has created the ODM Accelerator for IBM Operational Decision Manager - a collection of resources for designing and building richer, better, more maintainable and higher quality ODM rule applications.

If you like this article, you can read his other ODM articles:


You can also find this article on LinkedIn with viewer comments.
