Authors:
Jayasankar Sreedharan, Software Engineer
Ghada Obaid, Senior Software Developer
Introduction: Transforming Risk Reporting with AI in IBM OpenPages
In today’s fast-paced regulatory landscape, organizations must continuously assess and manage risks to stay compliant and resilient. Risk assessments are foundational to Governance, Risk, and Compliance (GRC) programs—but distilling these complex evaluations into clear, actionable executive summaries remains a time-consuming challenge.
What if AI could do the heavy lifting?
By integrating Large Language Models (LLMs) into IBM OpenPages, organizations can now automate the generation of executive summaries—turning structured risk data into polished, decision-ready insights in seconds. This blog explores how this innovative solution works, why it matters for the industry, and how you can set it up using OpenPages and Watsonx.
Why Automating Risk Assessment Summaries Matters
Risk assessments are the backbone of any effective GRC program. They help organizations identify, evaluate, and mitigate threats that could impact operations, compliance, or strategic goals. But while the assessments themselves are rich in detail, the real challenge lies in communicating their essence to senior stakeholders—quickly and clearly.
That’s where Executive Summaries come in.

Figure 1 - LLMs for Automating Risk Assessment Summaries in OpenPages
These high-level overviews distill complex risk data into concise, actionable insights tailored for decision-makers. However, creating them manually is often:
- Time-consuming for risk managers juggling multiple assessments.
- Inconsistent in tone, structure, and completeness.
- Delayed, slowing down reporting and executive action.
Enter Large Language Models (LLMs).
By leveraging LLMs—such as those available through IBM Watsonx—organizations can automate the generation of executive summaries. These models:
- Understand the context and structure of risk data.
- Generate clear, coherent summaries in seconds.
- Ensure consistency across reports.
- Free up risk professionals to focus on strategy, not formatting.
This use case offers more than just a productivity boost—it provides a strategic advantage.
Risk Assessment Executive Summary in OpenPages
IBM OpenPages serves as a powerful platform for orchestrating Governance, Risk, and Compliance (GRC) processes. Its extensible APIs and seamless integration capabilities make it an ideal foundation for embedding AI-powered enhancements, such as large language models (LLMs), to streamline and elevate risk management workflows.
Integration Architecture Overview
The integration of LLMs for executive summary generation involves the following components:
- Risk Assessment Data Source: OpenPages APIs generate structured JSON data for the Risk Assessment object.
- LLM Prompting Layer: A tailored prompt template is crafted to guide the LLM in interpreting and summarizing the structured data.
- Watsonx LLM Endpoint: The prompt and corresponding data are transmitted to a deployed LLM hosted on IBM Watsonx.
- Summary Output: The generated summary is either persisted back into OpenPages or rendered in the user interface for review and validation.

Figure 2 - OpenPages - LLM Integration
Workflow Steps
- Model Registration: A custom model for executive summary generation is registered within OpenPages, pointing to the Watsonx deployment.
- UI Integration: A dedicated button is added to the Risk Assessment view, enabling users to invoke the summary generation process.
- Trigger: The process is initiated through a user action in the UI.
- Data Retrieval: The relevant Risk Assessment object is generated in JSON format using OpenPages APIs.
- Prompt Construction: The JSON data is embedded into a carefully designed prompt to ensure contextual accuracy.
- LLM Invocation: The prompt is sent to Watsonx for processing.
- Summary Generation: The LLM returns a concise executive summary.
- Integration: The summary is saved or displayed in OpenPages.
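The workflow steps above can be sketched in Python. The helper names (`build_prompt`, `generate_summary`) and the injected `invoke_llm` callable are illustrative placeholders, not OpenPages or Watsonx APIs; in a real deployment, `invoke_llm` would wrap the call to the Watsonx endpoint registered for the model.

```python
import json

# Illustrative template; the production prompt (see Figure 7) is more detailed.
PROMPT_TEMPLATE = (
    "Write a concise executive summary of the risk assessment "
    "described by this JSON:\n{objectJson}"
)

def build_prompt(object_json, template=PROMPT_TEMPLATE):
    """Embed the serialized GRC object into the prompt template."""
    return template.replace("{objectJson}", json.dumps(object_json, indent=2))

def generate_summary(object_json, invoke_llm):
    """Build the prompt and hand it to the model.

    invoke_llm is any callable that sends a prompt to the deployed LLM
    and returns the generated text; a stub is used here for illustration.
    """
    return invoke_llm(build_prompt(object_json))

# Stubbed example; the lambda stands in for the Watsonx call.
assessment = {"name": "Q3 Operational Risk Assessment", "status": "Open"}
summary = generate_summary(assessment, invoke_llm=lambda p: "Executive summary: ...")
print(summary)
```

The LLM invocation is deliberately injected as a callable so the prompt-construction logic can be exercised and validated independently of the deployed model.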
This end-to-end integration empowers risk professionals to focus on strategic analysis and decision-making, while the LLM automates the generation of high-quality summaries—enhancing both efficiency and insight.
Figure 3 - Demo Video for the Risk Assessment Executive Summary Generation in OpenPages
Input payload JSON Data Model & Summary Generation Prompt
To enable AI-driven executive summary generation, the data rendered in the OpenPages GRC object view is transformed into a structured JSON payload. This payload is designed to capture the hierarchical and nested nature of the GRC data model, making it suitable for consumption by a large language model (LLM).
JSON Data Model Structure
The JSON follows a clean key-value schema, where each section of the OpenPages view is serialized as a JSON object. Nested grids—used to represent hierarchical relationships such as Process → Risk → Control → Issue—are recursively modelled using the key "nestedObjects".

Figure 4 - Sample nested structure in input payload
Figure 4 above illustrates a process-risk-control hierarchy where relatedObjects encapsulates the nested GRC entities.
Each section in the view is represented as a JSON object, with the section name as the key. Subsections, including grids, are embedded as nested objects. This structure allows for a faithful representation of the UI layout in a machine-readable format.
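To make the serialization concrete, here is a minimal payload sketched as a Python literal. The section names, property names, and object names are invented for illustration and are not actual OpenPages field names; only the recursive relationshipType / objectTypeName / relatedObjects / nestedObjects shape follows the schema described in this section.

```python
import json

# Hypothetical payload mirroring a Process -> Risk -> Control hierarchy.
payload = {
    "General Information": {"name": "Q3 Operational Risk Assessment"},
    "Process, Risk, Control Information": {
        "relationshipType": "children",
        "objectTypeName": "SOXProcess",
        "relatedObjects": [
            {
                "name": "Transaction Processing",
                "nestedObjects": {
                    "relationshipType": "children",
                    "objectTypeName": "Risk",
                    "relatedObjects": [
                        {
                            "name": "Unauthorized Transactions",
                            "nestedObjects": {
                                "relationshipType": "children",
                                "objectTypeName": "SOXControl",
                                "relatedObjects": [
                                    {"name": "Dual Approval Control"}
                                ],
                            },
                        }
                    ],
                },
            }
        ],
    },
}

print(json.dumps(payload, indent=2))
```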
Nested Grid JSON Structure

Figure 5 - Nested Grid in View
Figure 5 above depicts the “Process, Risk, Control Information” section with a nested grid titled “Process-Risk-Control-Issue”.
The nested grid captures a multi-level hierarchy:
Process → Risk → Control → Issue
To represent this UI structure in JSON, each level in the hierarchy is modelled using a recursive schema called $gridHierarchyLevel. This schema defines how each object relates to its parent and how nested relationships are maintained.
gridHierarchyLevel Schema
{
  "relationshipType": "children",
  "objectTypeName": "SOXProcess",
  "relatedObjects": [
    {
      "name": "Transaction Processing",
      "nestedObjects": {
        "relationshipType": "children",
        "objectTypeName": "Risk",
        "relatedObjects": [
          // $relatedObject
        ]
      }
    }
  ]
}
- relationshipType: Describes the relationship to the parent object (e.g., "children").
- objectTypeName: Specifies the type of GRC object (e.g., "SOXProcess").
- relatedObjects: An array of objects at the current hierarchy level. Each item in the relatedObjects array follows the $relatedObject schema:
relatedObject Schema
{
  "PropertyName": "PropertyValue",
  "nestedObjects": {
    // Follows $gridHierarchyLevel schema
  }
}
- PropertyName: Represents a property of the GRC object (e.g., "name": "Transaction Processing").
- nestedObjects: Contains the next level of nested data, adhering to the $gridHierarchyLevel schema.
This recursive nesting allows the JSON structure to represent complex, multi-level relationships in a way that is both machine-readable and semantically rich for LLM processing.
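Because the $gridHierarchyLevel and $relatedObject schemas are mutually recursive, they can be traversed with a short recursive function. The sketch below (not part of OpenPages) tallies objects by objectTypeName, which is a convenient sanity check that a serialized payload actually carries the expected hierarchy:

```python
def count_objects(level, counts=None):
    """Walk a $gridHierarchyLevel dict and tally objects by objectTypeName.

    Each hierarchy level carries an objectTypeName and a relatedObjects
    array; each related object may carry the next level in nestedObjects.
    """
    if counts is None:
        counts = {}
    type_name = level.get("objectTypeName", "Unknown")
    for obj in level.get("relatedObjects", []):
        counts[type_name] = counts.get(type_name, 0) + 1
        nested = obj.get("nestedObjects")
        if nested:
            count_objects(nested, counts)
    return counts

# Example: one process containing one risk.
grid = {
    "relationshipType": "children",
    "objectTypeName": "SOXProcess",
    "relatedObjects": [
        {
            "name": "Transaction Processing",
            "nestedObjects": {
                "relationshipType": "children",
                "objectTypeName": "Risk",
                "relatedObjects": [{"name": "Unauthorized Transactions"}],
            },
        }
    ],
}
print(count_objects(grid))  # {'SOXProcess': 1, 'Risk': 1}
```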
Figure 6 below shows how the nested grid in the view is mapped to its JSON representation.

Figure 6 - Nested Grid and its JSON representation
Prompt Template
To guide the LLM in generating a context-aware executive summary, a prompt template is used. This template includes a placeholder {objectJson}, which is dynamically populated with the serialized JSON payload of the GRC object:

Figure 7 – Risk Assessment Prompt
This structured prompt ensures that the LLM receives all relevant context—including nested relationships—enabling it to produce accurate, insightful, and human-readable summaries.
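Populating the {objectJson} placeholder is a simple string substitution. The template text below is illustrative (the actual prompt in Figure 7 is not reproduced here); note that a plain str.replace is used rather than str.format, so any literal braces elsewhere in the prompt, such as embedded JSON examples, do not need escaping:

```python
import json

# Illustrative template; the real prompt (Figure 7) is more detailed.
template = (
    "You are a risk analyst. Write a concise executive summary of the "
    "risk assessment described by this JSON:\n{objectJson}\n"
    "Highlight key risks, controls, and open issues."
)

object_json = {"name": "Q3 Operational Risk Assessment", "status": "Open"}

# str.replace leaves any other literal braces in the template untouched.
prompt = template.replace("{objectJson}", json.dumps(object_json, indent=2))
print(prompt)
```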
Logging & Troubleshooting
To effectively monitor and debug the integration between IBM OpenPages and the LLM-based executive summary generation, OpenPages provides built-in logging capabilities. These logs can help trace the request and response payloads exchanged with the LLM, which is essential for troubleshooting and validation.
Steps to Enable Logging for LLM Integration
1. Enable System Trace for Machine Learning
Logging for the LLM integration is managed under the System Trace category labeled “Machine Learning.”
- Navigate to the System Trace settings in OpenPages and ensure this option is enabled.
Figure 8 - Enable Machine Learning Tracing
2. Disable Payload Obfuscation
By default, OpenPages obfuscates the request and response payloads for security. To view the actual JSON data in the logs:
a. Update the registry setting at:
/OpenPages/Applications/Common/Administration/Integrations/Logs/Obfuscation Disabled
b. Set the value to true.
3. Access and Review Logs
Once logging is enabled and obfuscation is disabled:
- Trigger the AI button to run the model.
- Launch the log capture and download the generated ZIP archive.
- Navigate to the following path within the archive to locate the relevant log file:
LogCollector_{date}_OPNodeServer1/OpenPages/aurora/logs/debug/opapp-OPNode1Server1-machinelearning.log
- This file contains detailed entries for each AI interaction, including the constructed prompt, the JSON payload, and the LLM’s response.
This logging setup provides transparency into the AI integration and is invaluable for debugging issues, validating prompt construction, and ensuring the quality of generated summaries.
Conclusion
Integrating large language models (LLMs) into IBM OpenPages through Watsonx unlocks significant value for enterprise risk and compliance functions. This approach enables organizations to:
- Enhance the efficiency and quality of risk reporting by automating the generation of executive summaries.
- Empower decision-makers with timely, context-aware insights derived from structured GRC data.
- Reduce operational overhead by minimizing manual effort in interpreting and summarizing complex risk hierarchies.
- Streamline compliance workflows, demonstrating a practical application of AI in enterprise governance.
This use case exemplifies how AI can be seamlessly embedded into existing GRC platforms to drive smarter, faster, and more informed decision-making across the organization.