| Symptom | Potential reasons | How to determine the cause | Possible actions or remediation |
| --- | --- | --- | --- |
| Low message throughput | Inadequate CPU limits allocated to the pod | OpenShift metrics, pod metrics | Check the pod metrics for CPU utilization; if it is near the upper limit, consider allocating more CPU to the pod (see the CPU limits sketch after the table) |
| | Poorly designed message flows | Collect message flow accounting and statistics and look for potential hotspots in the flow and node statistics (see the statistics sketch after the table) | Refer to the code design tips in the IBM documentation: https://www.ibm.com/docs/en/app-connect/12.0?topic=performance-code-design |
| | Large number of message flows deployed to a single pod | Review the number of message flows deployed to the pod | Deploy a small number of message flows per container, typically one application or a set of related applications |
| | Inadequate number of pod replicas | Check whether container CPU usage is approaching its limit | Increase the number of pod replicas for horizontal scaling (see the replicas sketch after the table) |
| | Insufficient JVM heap size tuning for Java-based workloads | Observe the JVM resource statistics to see whether there is excessive garbage collection or heap usage approaching the JVM maximum heap size | Increase the JVM maximum heap size if garbage collection occurs too frequently (see the JVM heap sketch after the table) |
| | Inadequate number of additional instances on the BAR file or message flow | Collect message flow accounting and statistics and check how often the maximum number of instances is reached | Tune the number of additional instances in combination with the other parameters described above |
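
Raising CPU limits: a minimal sketch of an `IntegrationServer` custom resource, assuming the integration server is deployed with the IBM App Connect Operator. The resource name and the request/limit values are hypothetical, and the exact field paths can vary between operator versions.

```yaml
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: my-integration-server        # hypothetical name
spec:
  pod:
    containers:
      runtime:
        resources:
          requests:
            cpu: "1"                 # baseline CPU request for scheduling
            memory: 1Gi
          limits:
            cpu: "2"                 # raise this if pod CPU usage sits near the current limit
            memory: 2Gi
```

Compare the observed usage from the OpenShift pod metrics against the configured limit before and after the change to confirm the pod was actually CPU constrained.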
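Enabling message flow accounting and statistics plus resource statistics: a sketch of the relevant `server.conf.yaml` sections, assuming ACE 12 property names; in a container this file is typically supplied through a `Configuration` object. The values shown are illustrative only.

```yaml
# server.conf.yaml (excerpt) - enable snapshot flow statistics and resource statistics
Statistics:
  Snapshot:
    publicationOn: 'active'      # publish snapshot accounting and statistics
    nodeDataLevel: 'basic'       # include per-node data to locate hotspot nodes
    threadDataLevel: 'none'
    outputFormat: 'json'         # emit JSON for easier inspection
  Resource:
    reportingOn: true            # resource statistics, including JVM heap and GC data
```

The per-node timings help locate poorly performing flow logic, and the flow statistics also report how often the maximum number of instances was reached.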
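Horizontal scaling: a sketch showing the replica count on the same hypothetical `IntegrationServer` resource; a similar effect can be achieved by scaling the underlying deployment.

```yaml
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: my-integration-server   # hypothetical name
spec:
  replicas: 3                   # run three pods behind the service for horizontal scaling
```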
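Tuning the JVM heap: a sketch of the `ResourceManagers` section of `server.conf.yaml`, assuming ACE 12 property names; sizes are specified in bytes and the values shown are illustrative only.

```yaml
# server.conf.yaml (excerpt) - JVM heap tuning for Java-based workloads
ResourceManagers:
  JVM:
    jvmMinHeapSize: 536870912    # 512 MiB initial heap (illustrative)
    jvmMaxHeapSize: 1073741824   # 1 GiB maximum heap; increase if GC runs too often
```

Keep the maximum heap comfortably below the container memory limit so the integration server process itself retains headroom.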
|
|
|