In the decision engine, putting too many rules in a single rule task can cause slow performance the first time the task is executed. During the first execution, the classes that contain the bytecode for the rules are loaded. Once the classes are loaded, execution becomes very efficient. This slowdown is most noticeable when migrating dynamic selection that has no scope from the classic rule engine.
In the classic rule engine, when you use dynamic selection with the sequential or Fastpath execution modes, rule selection is performed when entering the rule task. At runtime, the rule task compiles the selected rules and creates a small engine that contains only those rules, and that small engine is used for execution.
In the decision engine, compilation happens when the ruleset is deployed, and there is no compilation at runtime. If the scope of a rule task includes all the rules in the ruleset, an engine containing all those rules is created, which can lead to the generation of large tasks.
A maximum of 5000 rules per rule task is recommended. If the number of rules in your project exceeds this threshold, move part of your selection logic into the ruleflow.
If you use a variable to filter rules based on their rule package name, you might need to group your rules into rule tasks according to their package name. For example, take the following packages:
Suppose that these rules belong to a single rule task and are selected with a rule selection expression as shown below:
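As a hypothetical illustration only (the original expression is not reproduced here, and the variable name `selectedPackage` is an assumption, not part of the example), a package-based selection expression might resemble:

```
// Illustrative sketch: the exact syntax depends on your rule selection language.
// 'selectedPackage' is an assumed ruleset variable holding the target package name.
select each rule such that
    the rule package of this rule is selectedPackage
```

With a single rule task covering the whole ruleset, this filter is evaluated at runtime, but the decision engine still generates one engine containing all the rules of the task at deployment time.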
You could update the ruleflow so that it looks like the following:
Note that you can continue to work with dynamic selection on the tasks that have a smaller scope.
Alternatively, you could keep the same dynamic filter and split the rules between tasks as shown below. For example, the rule task pck_A could contain only the rules of pck_A. If, for a given set of input data, only the rules of pck_B apply, no rule is activated in the pck_A task.
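As an illustrative sketch of this split (the task names reuse the package names pck_A and pck_B from the example; the overall layout is an assumption, not the original figure), the ruleflow could chain one smaller task per package, each keeping the same dynamic filter:

```
Start
  -> rule task pck_A   (scope: rules of package pck_A, same dynamic filter)
  -> rule task pck_B   (scope: rules of package pck_B, same dynamic filter)
  -> ...
End
```

Because each task's scope is limited to one package, the engine generated for each task at deployment time stays small, and tasks whose package does not match the input data simply activate no rules.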