Hi Peter
Here is something I produced for a client recently that you might find useful.
-----------
In ODM, the Execution Trace and the Decision Runner are the features used to test and validate a ruleset.
The Execution Trace is the information dynamically recorded as a result of a ruleset execution. This information can include:
- All execution events, such as the executed rule tasks and rule instances, and statistics on the total number of rules executed / not executed;
- Information such as the execution start date and the execution duration;
- Ruleset meta-data such as the ruleset path and internet address.
You can find a full list of what can be recorded here. The trace can be used to understand how the rule engine computed the result of a ruleset execution.
ODM Decision Runner enables testers in Decision Center to create business-friendly test files in Excel format for a given ruleset. A test file is used to create sufficient scenarios to test combinations of rules for a given set of input values and expected results. The results can also include the values of an execution trace. Running the test in Decision Center produces a report showing any test errors and failures. If you need to test a rule stand-alone, i.e. unit test a rule, you can follow the post How to unit test individual rule to understand how to do so.
To understand, when testing and tracing, how combinations of rules work together, you need a good understanding of ODM rule engine mechanics, such as the execution algorithms (RetePlus, FastPath, and Sequential) and ruleset design. The order in which rules are evaluated and executed can determine the final result, and this can become complex where there are dependencies between rules, i.e. the action part of one rule changes the state of data and variables in such a way that it alters the value of the condition parts of other rules. Such a ruleset can be very hard to reason about and debug. Rules that are fully declarative are always easier to understand and maintain, and good design hides these complexities from business users.
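To make the order-dependence point concrete, here is a minimal toy sketch in plain Java (not the ODM API; the Facts class, rule names, and the threshold are invented for illustration). Two "rules" share one working memory: the first rule's action mutates a value that the second rule's condition reads, so swapping the evaluation order changes which rules fire.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Toy illustration (not ODM): two "rules" over shared working memory.
// "boost"'s action mutates the score that "reject"'s condition reads,
// so the order in which the engine evaluates them changes the outcome.
public class RuleOrderDemo {
    static class Facts { int score = 40; List<String> fired = new ArrayList<>(); }

    record Rule(String name, Predicate<Facts> condition, Consumer<Facts> action) {}

    static final Rule BOOST  = new Rule("boost",  f -> f.score < 50, f -> f.score += 20);
    static final Rule REJECT = new Rule("reject", f -> f.score < 50, f -> f.fired.add("REJECTED"));

    // Sequential, one-pass evaluation, loosely like ODM's Sequential algorithm.
    static List<String> run(List<Rule> agenda) {
        Facts f = new Facts();
        for (Rule r : agenda) {
            if (r.condition().test(f)) {
                f.fired.add(r.name());
                r.action().accept(f);
            }
        }
        return f.fired;
    }

    public static void main(String[] args) {
        // boost first: score becomes 60 before reject is evaluated, so reject never fires.
        System.out.println(run(List.of(BOOST, REJECT)));  // [boost]
        // reject first: it still sees score 40 and fires.
        System.out.println(run(List.of(REJECT, BOOST)));  // [reject, REJECTED, boost]
    }
}
```

The same input data produces two different results purely because of agenda order, which is exactly the kind of behaviour the execution trace helps you diagnose.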
There are a number of ways other than Decision Runner to get execution results and traces, including:
- Java APIs – you can use the ODM Java APIs to programmatically execute a ruleset and get the execution result and trace using an IlrSessionRequest, IlrExecutionTrace, and IlrSessionResponse. There is a sample here.
- REST APIs – you can use the ODM REST APIs to execute a ruleset and get the execution result and trace. There is a sample here, and a REST API Test Tool which gets a simplified trace.
- Decision Warehouse – Decision Warehouse can monitor ruleset executions in Rule Execution Server and store the execution trace in a database. This is useful not only for debugging but also for decision auditing. Using ruleset meta-data you can trace the deployed rules back to Decision Center and produce a report of the natural-language rules used in the ruleset. You can use the Decision REST API to get the execution trace from Decision Warehouse.
- Logs – you can also set the log level of Rule Execution Server to provide a trace of execution messages and events. You can add logging to a Java XOM using the JDK logging API and integrate the output into the Rule Execution Server console.
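As a rough sketch of the Java API route (not runnable stand-alone – it needs the ODM execution client jars on the classpath, and the ruleset path, parameter name, and exact trace accessor names below are from memory and should be checked against the linked sample):

```java
// Sketch only: execute a ruleset with tracing enabled via the ODM session API.
// "/MyRuleApp/1.0/MyRuleset/1.0" and the "loan" parameter are invented for illustration.
IlrSessionFactory factory = new IlrJ2SESessionFactory();
IlrSessionRequest request = factory.createRequest();
request.setRulesetPath(IlrPath.parsePath("/MyRuleApp/1.0/MyRuleset/1.0"));
request.setTraceEnabled(true);                     // record an execution trace
request.getTraceFilter().setInfoAllFilters(true);  // capture everything the trace supports
request.setInputParameter("loan", myLoan);

IlrStatelessSession session = factory.createStatelessSession();
IlrSessionResponse response = session.execute(request);

// Inspect the trace: which rules fired, how many did not, how long it took, etc.
IlrExecutionTrace trace = response.getRulesetExecutionTrace();
```

The trace filter is what lets you tune how much information is recorded, which matters for performance in production.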
ODM supports DMN using decision model services. This is a "top-down, decision-first" approach that decomposes a decision into sub-decisions and implements the decision in Decision Center using DMN notation. However, rules can also be organized and structured using ODM rule projects and packages, and orchestrated at runtime with ruleflows and tasks. ODM calls this the "standard" approach.
With decision model services you get a repository of decisions. With the standard approach you get a repository of rules. The disadvantage of decision model services is that each rule is tightly coupled to a decision, and the rule cannot be re-used, analysed, or refactored outside of the decision that contains it.
A better approach is to create a repository of rules with a single business vocabulary, and use the ODM feature of Ruleset Extractors, which are akin to getting a subset of data with a SQL SELECT and a WHERE clause, to select the rules used to build rulesets. With this approach rules are loosely coupled to decisions, the business vocabulary is consistent across decisions, and the ODM analysis and refactoring tools work correctly.
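To make the SQL analogy concrete, here is a minimal plain-Java sketch (the RuleArtifact type, tags, and rule names are all invented; ODM's real extractors are defined as queries in Decision Center, not in code): an extractor is essentially a predicate that selects a subset of a single shared rule repository from which a ruleset is built.

```java
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Toy model of a rule repository and a ruleset extractor (not the ODM API).
public class ExtractorDemo {
    record RuleArtifact(String name, Set<String> tags) {}

    // One shared repository of rules, with a single business vocabulary.
    static final List<RuleArtifact> REPOSITORY = List.of(
        new RuleArtifact("checkCreditScore",     Set.of("eligibility", "loan")),
        new RuleArtifact("checkIncome",          Set.of("eligibility", "loan")),
        new RuleArtifact("computePremium",       Set.of("pricing", "insurance")),
        new RuleArtifact("applyLoyaltyDiscount", Set.of("pricing", "loan")));

    // "SELECT name FROM repository WHERE <predicate>" – the extractor.
    static List<String> extract(Predicate<RuleArtifact> where) {
        return REPOSITORY.stream()
                         .filter(where)
                         .map(RuleArtifact::name)
                         .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Build a "loan pricing" ruleset from the shared repository.
        System.out.println(extract(r -> r.tags().contains("pricing")
                                     && r.tags().contains("loan")));  // [applyLoyaltyDiscount]
    }
}
```

Because every rule lives in the one repository, the same rule can be picked up by several extractors (decisions) without being duplicated, which is the loose coupling argued for above.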
--------
If you need any help do let me know. You can find me on LinkedIn together with articles that I have written on ODM.
------------------------------
Peter Warde
peterwarde@rulearchitect.com
------------------------------
Original Message:
Sent: Wed May 17, 2023 03:10 AM
From: Peter Brennan
Subject: Decision Assurance-Effectiveness Measurement Approaches
I'm looking to determine what approaches may be available or applicable to provide assurance or measure the effectiveness of a rule, or combination of rules, within a ruleset in ODM. For some background, we have an existing bespoke decision management tool that allows our analysts to see which rules matched, based on which specific rule criteria, and which values within those rule criteria matched the values in the input document (i.e. the input BoM). This visibility allows our analysts to determine the effectiveness/assurance of a specific rule in producing the correct/desired outcome for which the rule was developed.
We are looking to replace the bespoke decision management system with an ODM-based solution. As such, we need to be able to deliver and demonstrate a similar level of visibility of rule execution to our analysts, to determine effectiveness/assurance. It's our understanding that this granularity of rule, rule-criteria, and data-matching introspection is not available from within ODM. So I'm interested in what approaches other ODM users have in place to enable decision assurance/effectiveness measurement.
Thanks
Peter
------------------------------
Peter Brennan
------------------------------