Starting with BAMOE 9.3.0, we've added a generative AI task to our workflow tooling and runtime. It’s a task node you can drop into a BPMN model like any other. It runs prompts against an LLM and reads from and writes to your process variables. No side services or custom glue code needed.
What it is
The BAMOE Gen AI Task is a regular task in the process. It takes inputs from your existing variables, runs a prompt against a model and AI provider you pick, and stores the result where you tell it to. You can use it for drafting replies, summarizing documents, or lightweight decision support. It won’t replace your business logic, but it will handle the natural-language parts according to your instructions.
The BAMOE Gen AI Task is supported at runtime, via a custom Gen AI Task WorkItemHandler implementation, and in the BPMN Editor, both in Canvas and in the Developer Tools for VS Code.
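To give a feel for what happens at runtime, here is a minimal sketch of what a prompt-running work item handler could look like, written against the standard KIE WorkItemHandler interface. It is illustrative only: the actual BAMOE handler ships with the product, and the parameter names ("Prompt", "Temperature", "TokenLimit", "Result") and the LlmClient type are assumptions made for the example.

```java
import java.util.Map;

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

// Illustrative sketch only: not the BAMOE implementation. Parameter names and
// the LlmClient abstraction are assumptions.
public class GenAiTaskHandlerSketch implements WorkItemHandler {

    private final LlmClient client; // hypothetical client for the chosen provider (watsonx.ai, OpenAI, Ollama, ...)

    public GenAiTaskHandlerSketch(LlmClient client) {
        this.client = client;
    }

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Read the task configuration and the mapped process variables from the work item.
        String prompt = (String) workItem.getParameter("Prompt");
        Double temperature = (Double) workItem.getParameter("Temperature");
        Integer tokenLimit = (Integer) workItem.getParameter("TokenLimit");

        // Call the model and hand the answer back to the process as an output variable.
        String answer = client.generate(prompt, temperature, tokenLimit);
        manager.completeWorkItem(workItem.getId(), Map.of("Result", answer));
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        manager.abortWorkItem(workItem.getId());
    }

    // Hypothetical abstraction over the selected AI provider.
    public interface LlmClient {
        String generate(String prompt, Double temperature, Integer tokenLimit);
    }
}
```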
Providers and models
When authoring BPMN models, you can connect to your desired AI provider to list available models and preview prompts with simulated variables. We currently support IBM's watsonx.ai, OpenAI, and Ollama as providers.
Connecting to AI Providers from Canvas
- Click on the profile icon in the top right corner.
- Select "Connected accounts" from the pop-up menu.
- Click on "Connect to account".
- Choose your preferred AI provider from the list.
- Fill in the required parameters for your provider (API key, service URL, project ID, etc.).
Connecting to AI Providers from VS Code
- Click on the profile menu in the bottom left corner.
- Select your preferred AI provider from the list.
- Enter the required details (the same parameters listed above for each provider).
Adding a Gen AI Task node to your BPMN model
While using the BPMN Editor, click the Custom Nodes icon in the palette, select the Gen AI Task, drag it onto your workflow, and connect it to the existing nodes.
Select the desired AI provider and pick a model from the list (if you are connected to the provider) or type the model’s name. Other parameters, such as Temperature and Token limit, can be set to customize your task further.
In the Data mapping modal, add the process variables that should be available to your prompt during task execution. These variables can be referenced in the prompt with double curly braces, like “{{variableName}}”. The output can also be assigned to a process variable for later use.
Note: Only the variables in the Data mapping are available in the prompt.
Finally, type in a prompt, for example:
Generate an offer letter to {{candidate}} specifying their position
as {{position}}, their base salary as {{baseSalary}}, and a bonus
with the value of {{bonus}}.
Format these values in a table, using $ as the currency symbol.
Conclude the letter with a congratulatory and welcoming message.
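To make the substitution concrete, here is a small illustrative sketch of how {{variableName}} placeholders could be resolved from the mapped variables. The runtime does this for you; the helper below (PromptTemplateSketch) is hypothetical and only meant to show the mechanics.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: substitutes {{variableName}} placeholders with values
// from the Data mapping. Shown only to illustrate how the prompt is resolved.
public final class PromptTemplateSketch {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\{\\{(\\w+)}}");

    public static String render(String template, Map<String, Object> variables) {
        Matcher matcher = PLACEHOLDER.matcher(template);
        StringBuilder rendered = new StringBuilder();
        while (matcher.find()) {
            Object value = variables.get(matcher.group(1));
            // Leave unknown placeholders untouched so missing mappings are easy to spot.
            String replacement = value != null ? value.toString() : matcher.group(0);
            matcher.appendReplacement(rendered, Matcher.quoteReplacement(replacement));
        }
        matcher.appendTail(rendered);
        return rendered.toString();
    }
}
```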
Preview and testing
If you are connected to the selected AI provider, the Preview button should be enabled. The preview lets you mock data for the process variables and check the result of the prompt, giving you insight into how the task would behave during execution.
Fill in temporary values for the variables in your prompt, run it, and see the actual output from the provider and model you selected. This is useful for checking length, tone, and structure before deploying. Adjust the prompt, temperature, or token limit as needed, then re-run.
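If you want to reproduce a similar check outside the editor, the hypothetical PromptTemplateSketch from the previous section can be fed the same kind of mock values (here, offerLetterTemplate is assumed to hold the prompt text shown above):

```java
Map<String, Object> mockValues = Map.of(
        "candidate", "Jane Doe",
        "position", "Process Analyst",
        "baseSalary", 85000,
        "bonus", 5000);

// Render the prompt locally, then send it to your provider of choice (or just
// print it) to inspect length, tone, and structure before deploying.
String prompt = PromptTemplateSketch.render(offerLetterTemplate, mockValues);
```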
Why we built it this way
We wanted the Gen AI Task to behave like any other task: configurable, debuggable, and wired to process data. Being able to swap models, keep prompts next to the flow, and preview with sample values turned out to be the pieces that mattered. If you’re already modeling processes, this should feel familiar, just with a task that’s good at natural language.