We are excited to share a proof-of-concept demo showing how IBM MQ can be integrated into Large Language Model (LLM) agentic applications and Distributed Multi-Agent Systems (DMAS). Read more about what AI agents are here.
At the moment, we have only integrated with LangGraph, a popular open-source agentic AI framework, to demonstrate our vision for MQ as an asynchronous message broker. However, we hope to see this work lead to integrations with existing protocols (A2A, ACP, MCP), other agentic frameworks, and further use cases.
Why IBM MQ and Agentic AI?
The goal is not to present a finished solution, but to explore ideas, gather feedback, and engage with organizations interested in how IBM MQ might support asynchronous, agent-based architectures.
Existing agent protocols like ACP and A2A typically use JSON-RPC over HTTP or Server-Sent Events (SSE). These transports are effective for many agentic use cases, such as simple synchronous request/response or event streaming. However, as agentic systems grow in complexity—spanning distributed, asynchronous environments—certain limitations emerge:
- Limited support for retries, acknowledgments, or flow control.
- No built-in persistence or queuing.
- Best-effort delivery without durability guarantees.
- Potential for silent failures without additional engineering.
These protocols generally assume relatively simple, point-to-point agent conversations. Features like distributed coordination, multistreaming, backpressure, and resilience under high load require developers to extend the core protocols or add application-level solutions. In contrast, IBM MQ offers:
- Reliable state updates using Publish/Subscribe patterns.
- Durable asynchronous messaging for distributed agents.
- Proven security and transactional integrity.
Our goal is not to replace these approaches but to complement them. At IBM, we believe that as agent systems become more mission-critical, there is value in bringing proven messaging patterns—durability, retries, monitoring, and secure queuing—into the agentic space. IBM MQ delivers these foundational features as part of its core design, tested over decades in enterprise environments.
To support experimentation and development, IBM MQ is available free for developers (Developer Edition). It can also be easily deployed in containers, making local and cloud-based testing straightforward (Container tutorial).
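To make those messaging properties concrete, here is a minimal sketch (not taken from the PoC repository) of durable, point-to-point messaging with the pymqi client. It assumes a local Developer Edition container with the usual developer defaults (queue manager QM1, channel DEV.APP.SVRCONN, queue DEV.QUEUE.1, the app user); adjust the names and credentials for your own environment.

```python
import pymqi

# Connection details for a local MQ Developer Edition container; these are
# common developer defaults, not the PoC repository's configuration.
qmgr = pymqi.connect("QM1", "DEV.APP.SVRCONN", "localhost(1414)",
                     user="app", password="passw0rd")

# Mark the message persistent so it survives queue manager restarts and
# waits on the queue until the consuming agent connects to read it.
md = pymqi.MD()
md.Persistence = pymqi.CMQC.MQPER_PERSISTENT

queue = pymqi.Queue(qmgr, "DEV.QUEUE.1")
queue.put(b'{"task": "search_flights", "destination": "JFK"}', md)

# A consuming agent blocks until a message arrives (here, up to 5 seconds).
gmo = pymqi.GMO(Options=pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING,
                WaitInterval=5000)
message = queue.get(None, pymqi.MD(), gmo)
print(message)

queue.close()
qmgr.disconnect()
```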
The Proof of Concept
This proof of concept shows how those capabilities can support real-world agent systems.
The sample is intended purely as a proof of concept of IBM MQ as the asynchronous message broker for LLM agents; it is a starting point, and many improvements and enhancements can grow from this repo.
The repo includes two core scenarios:
- Agent State Updates via Pub/Sub: The primary agent uses MQ to receive real-time state updates from external events. A price emitter demonstrates how an agent can adjust its responses dynamically as its state changes mid-conversation (a sketch of the subscriber side follows this list).
- Distributed Multi-Agent Communication: The primary agent and a flight searcher agent operate in separate environments. They exchange messages over MQ queues, demonstrating resilient communication even if one agent becomes unavailable or is temporarily offline.
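As a rough illustration of the first scenario, an agent could hold a durable subscription to a state topic and poll it between conversation turns. The topic string, subscription name, and message format below are illustrative assumptions, not the repository's actual settings.

```python
import json
import pymqi

# Illustrative names only; the PoC defines its own topology in configuration files.
TOPIC_STRING = "agents/primary/state/price"
SUB_NAME = "primary-agent-price-sub"

qmgr = pymqi.connect("QM1", "DEV.APP.SVRCONN", "localhost(1414)",
                     user="app", password="passw0rd")

# Create (or resume) a durable, managed subscription so price updates published
# while the agent is busy or offline are still delivered on the next read.
sub_desc = pymqi.SD()
sub_desc["Options"] = (pymqi.CMQC.MQSO_CREATE | pymqi.CMQC.MQSO_RESUME |
                       pymqi.CMQC.MQSO_DURABLE | pymqi.CMQC.MQSO_MANAGED)
sub_desc.set_vs("SubName", SUB_NAME)
sub_desc.set_vs("ObjectString", TOPIC_STRING)

sub = pymqi.Subscription(qmgr)
sub.sub(sub_desc=sub_desc)

# Wait briefly for the next update; the agent can fold it into its state
# before generating its next response.
get_opts = pymqi.GMO(Options=(pymqi.CMQC.MQGMO_NO_SYNCPOINT |
                              pymqi.CMQC.MQGMO_WAIT |
                              pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING),
                     WaitInterval=5000)
update = json.loads(sub.get(None, pymqi.MD(), get_opts))
print("New state:", update)

sub.close(sub_close_options=0, close_sub_queue=True)
qmgr.disconnect()
```

On the other side, a price emitter would publish to the same topic string, for example by opening a pymqi.Topic for output and calling its pub() method.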
Agents define both outbound and inbound messaging networks through configuration files, making it straightforward to connect and scale additional agents.
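The configuration schema itself isn't reproduced here, but conceptually each agent declares the queues it writes to, the queues it reads from, and any topics it subscribes to. The field and queue names below are hypothetical, shown only to illustrate the shape such a configuration might take.

```python
import pymqi

# Hypothetical shape of an agent's messaging configuration (e.g. loaded from
# a YAML or JSON file); the PoC repository defines its own schema and names.
primary_agent_messaging = {
    "queue_manager": "QM1",
    "channel": "DEV.APP.SVRCONN",
    "conn_info": "localhost(1414)",
    "outbound": {"flight.requests": "AGENT.FLIGHT.SEARCH.REQUEST"},
    "inbound": {"flight.results": "AGENT.PRIMARY.REPLY"},
    "subscriptions": {"price.updates": "agents/primary/state/price"},
}

# Connect once, then open each named queue; adding another agent to the
# network is a matter of adding entries here and defining the queues in MQ.
qmgr = pymqi.connect(primary_agent_messaging["queue_manager"],
                     primary_agent_messaging["channel"],
                     primary_agent_messaging["conn_info"],
                     user="app", password="passw0rd")
outbound = {name: pymqi.Queue(qmgr, queue_name)
            for name, queue_name in primary_agent_messaging["outbound"].items()}

outbound["flight.requests"].put(b'{"origin": "LHR", "destination": "JFK"}')
```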
Our primary objective with this work is to start a conversation:
- How should state and message durability be handled in agentic systems?
- How can DMAS patterns be hardened for mission-critical environments?
- Where do MQ’s reliability and security offer the most value?
We are interested in speaking with teams working on agent-based systems, especially those where reliability, scalability, and secure messaging are priorities. We welcome feedback, ideas, and collaboration opportunities.
Let us know what you think!
- Open an issue or discussion thread in the GitHub repository.
- Or contact us directly at AskMessaging@uk.ibm.com
Acknowledgements
Many thanks to Francesco Rinaldi for leading this initiative and developing the samples.
Also, thank you to the following for their invaluable insights and review: Richard Coppen, Soheel Chughtai, and David Ware.