Introduction
When you run large-scale integrations, chances are you’ll eventually need to connect IBM App Connect Enterprise (ACE) with Microsoft Azure Service Bus. That sounds simple enough until reality hits. Between unsupported JMS clients, broken connections, and connector limitations, we ended up building our own generic solution.
In this post, I’ll share the journey: the problems we hit, why the official connectors weren’t good enough, and how we solved it with a custom JavaCompute implementation. Hopefully, this helps anyone else who finds themselves battling the same issues.
The First Problem: Unstable JMS Connections
It all started when Microsoft dropped support for the JMS client version we were using (0.9.0). Overnight, about half of our calls to Azure Service Bus began to fail, and messages piled up in our backout queue.
Microsoft’s advice was clear: upgrade to a supported version of the JMS client. We moved to 1.13.0, upgraded ACE to v12.0.11.0, and switched our Java runtime to version 11, following Trevor Dolby’s blog post ace-v12-with-java11-and-apache-qpid-jms.
That fixed the immediate instability, but we quickly hit another issue. After 300,000 ms of inactivity, the connection dropped:
“The connection was closed by the container because it did not have any active links in the past 300,000 milliseconds.”
Normally, the connection would restart when a new message arrived. Sometimes, though, it simply refused to reconnect. We had an integration that was only reliable when messages were constantly flowing, which is not acceptable in production.
Trying the Official Azure Service Bus Connector Nodes
Around this time, IBM introduced Azure Service Bus connector nodes in ACE v13.0.4.0 (see What is new in ace-13-0-4-0). We thought: great, maybe this solves our problem.
Unfortunately, testing them uncovered several limitations:
- Payload handling: Every message came in as a string, even if it was JSON or binary data.
- Application properties: These weren’t passed automatically; they had to be configured manually in the discovery wizard. That made a dynamic, multi-queue setup almost impossible.
- Sending JSON: Even when we explicitly set content-type: application/json, the connector sent the message as a plain string.
IBM did provide a fix for some of the property handling issues for v13.0.4.0, but it still wasn’t flexible enough for our use case. We needed something generic, dynamic, and capable of preserving both payload and properties.
Designing a Custom Generic Solution
So we went back to basics and built our own connector using the Azure Service Bus Java SDK inside JavaCompute nodes.
The design looked like this:
Receiving messages (Azure → ACE):
- A scheduler node triggers polling from Azure Service Bus queues.
- The JavaCompute node (ReceiveFromAzure) pulls messages, extracts headers + payload, and maps them into ACE messages.
- Messages are dynamically routed into the right ACE JMS queues.
Sending messages (ACE → Azure):
- Multiple ACE queues feed into a JavaCompute node (sendToAzure).
- The node determines the correct Azure Service Bus queue dynamically from headers.
- Application properties and binary/JSON payloads are preserved.
- A confirmation flow ensures messages were sent successfully.
The Receiver Implementation
The receiver polls messages in batches, copies properties, preserves payload, routes to the correct ACE JMS queue, and only completes (removes) messages from Azure after successful processing.
Flow Diagram

Dependencies


Setting things up (onStart)
When the node starts, it:
- Loads configuration values (endpoint, credentials, and queue list).
  - Endpoint: the Azure Service Bus namespace.
  - Credentials: SharedAccessKeyName and SharedAccessKey.
  - Queue list: a comma-separated list of queues it should handle.
- For each configured input queue:
  - Builds a ServiceBusReceiverClient (to receive messages).
  - Maps it to a target entity path (where ACE should forward the message).
All this is stored in a receiverMappings hashmap, so the node knows which receiver belongs to which destination.
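Here is a minimal sketch of that setup, assuming com.azure:azure-messaging-servicebus is on the classpath and following the onStart/onStop hooks used in this post. The connection-string literals, queue names, and the mapping convention are placeholders, not our production values:

```java
import java.util.HashMap;
import java.util.Map;

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.models.ServiceBusReceiveMode;

// Excerpt from the ReceiveFromAzure JavaCompute class.
private final Map<ServiceBusReceiverClient, String> receiverMappings = new HashMap<>();

public void onStart() {
    // In the real flow these values come from the UserDefined policy (see
    // "Configuration with ACE Policies" below); the literals are placeholders.
    String connectionString = "Endpoint=sb://<namespace>.servicebus.windows.net/;"
            + "SharedAccessKeyName=<keyName>;SharedAccessKey=<key>";
    String[] queues = "queue.a,queue.b".split(",");

    for (String queue : queues) {
        // PEEK_LOCK: the message is only removed once we explicitly complete() it.
        ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
                .connectionString(connectionString)
                .receiver()
                .queueName(queue.trim())
                .receiveMode(ServiceBusReceiveMode.PEEK_LOCK)
                .buildClient();

        // Map each receiver to the ACE JMS queue its messages should be
        // forwarded to (this naming convention is purely illustrative).
        receiverMappings.put(receiver, "ACE.IN." + queue.trim().toUpperCase());
    }
}
```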

Cleaning up (onStop)
When the flow stops, it closes all receiver clients, ensuring connections are released cleanly.
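In the sketch, that cleanup is just:

```java
// Excerpt from the ReceiveFromAzure JavaCompute class.
public void onStop() {
    // Close every receiver so the underlying AMQP links are released cleanly.
    for (ServiceBusReceiverClient receiver : receiverMappings.keySet()) {
        receiver.close();
    }
    receiverMappings.clear();
}
```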

Processing messages (evaluate)
Here’s the main logic:
1. Check all receivers
   - For each configured input queue, it polls for up to 100 messages (with a 10-second wait).
2. For each message received:
   - Creates a new ACE message (MbMessage).
   - Copies application properties into a JMS-like structure inside the ACE message.
   - Adds the message body (payload) as a BLOB.
   - Sets the JMSDestinationList so ACE knows where to route it.
   - Propagates the message downstream in the flow.
   - Marks the message as completed in Azure (removing it from the source queue).
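A sketch of that loop, using the SDK's synchronous batch receive. Where our real code builds a JMS-like properties structure inside the message, this sketch carries the application properties in the local environment for simplicity; treat that, and the folder names, as illustrative:

```java
import java.time.Duration;
import java.util.Map;

import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.ibm.broker.plugin.*;

// Excerpt from the ReceiveFromAzure JavaCompute class.
public void evaluate(MbMessageAssembly inAssembly) throws MbException {
    for (Map.Entry<ServiceBusReceiverClient, String> entry : receiverMappings.entrySet()) {
        ServiceBusReceiverClient receiver = entry.getKey();
        String targetQueue = entry.getValue();

        // Poll up to 100 messages, waiting at most 10 seconds.
        for (ServiceBusReceivedMessage sbMessage
                : receiver.receiveMessages(100, Duration.ofSeconds(10))) {

            // Preserve the payload byte-for-byte in the BLOB domain.
            MbMessage outMessage = new MbMessage();
            outMessage.getRootElement()
                    .createElementAsLastChild(MbBLOB.PARSER_NAME)
                    .createElementAsLastChild(MbElement.TYPE_NAME_VALUE, "BLOB",
                            sbMessage.getBody().toBytes());

            // Destination list for the JMSOutput node, plus the application
            // properties (carried in the local environment in this sketch).
            MbMessage localEnv = setJmsDestinationList(inAssembly, targetQueue);
            MbElement props = localEnv.getRootElement()
                    .createElementAsLastChild(MbElement.TYPE_NAME, "AzureProperties", null);
            for (Map.Entry<String, Object> p : sbMessage.getApplicationProperties().entrySet()) {
                props.createElementAsLastChild(MbElement.TYPE_NAME_VALUE, p.getKey(), p.getValue());
            }

            getOutputTerminal("out").propagate(new MbMessageAssembly(
                    inAssembly, localEnv, inAssembly.getExceptionList(), outMessage));

            // Only now remove the message from the Azure queue; if propagate
            // threw, the message stays in Azure for retry.
            receiver.complete(sbMessage);
        }
    }
}
```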
Supporting method (setJmsDestinationList)
Creates a destination list dynamically based on the queue that was linked to the received message.
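A sketch of that helper, assuming the flow routes through a JMSOutput node that reads its destination from the local environment. The Destination/JMS/DestinationData path follows the JMSOutput node's local-environment override; verify the exact tree shape against your ACE version:

```java
// Excerpt from the ReceiveFromAzure JavaCompute class.
private MbMessage setJmsDestinationList(MbMessageAssembly inAssembly, String targetQueue)
        throws MbException {
    // Work on a copy of the local environment so the input assembly stays untouched.
    MbMessage localEnv = new MbMessage(inAssembly.getLocalEnvironment());

    MbElement destinationData = localEnv.getRootElement()
            .createElementAsLastChild(MbElement.TYPE_NAME, "Destination", null)
            .createElementAsLastChild(MbElement.TYPE_NAME, "JMS", null)
            .createElementAsLastChild(MbElement.TYPE_NAME, "DestinationData", null);

    // The JMSOutput node picks up this destination name at runtime.
    destinationData.createElementAsLastChild(
            MbElement.TYPE_NAME_VALUE, "destinationName", "queue://" + targetQueue);
    return localEnv;
}
```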

Error handling
If processing fails, the message is not completed, meaning it stays in Azure Service Bus for retry/reprocessing.
The Sender Implementation
The sender dynamically resolves the target Azure queue from JMS headers, preserves application properties, and sends payloads (binary or JSON) correctly without forcing them into strings.
Flow Diagram

Dependencies


Setting things up (onStart)
When the node starts, it:
- Loads configuration values (endpoint, credentials, and queue list).
  - Endpoint: the Azure Service Bus namespace.
  - Credentials: SharedAccessKeyName and SharedAccessKey.
  - Queue list: a comma-separated list of queues it should handle.
For each queue, it builds a ServiceBusSenderClient (a sender object from Azure’s SDK). These are stored in a map, ready for use later.
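The sender setup mirrors the receiver side; again a sketch with placeholder values:

```java
import java.util.HashMap;
import java.util.Map;

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

// Excerpt from the sendToAzure JavaCompute class.
private final Map<String, ServiceBusSenderClient> senders = new HashMap<>();

public void onStart() {
    // Placeholder values; the real ones come from the UserDefined policy.
    String connectionString = "Endpoint=sb://<namespace>.servicebus.windows.net/;"
            + "SharedAccessKeyName=<keyName>;SharedAccessKey=<key>";
    for (String queue : "queue.a,queue.b".split(",")) {
        senders.put(queue.trim(), new ServiceBusClientBuilder()
                .connectionString(connectionString)
                .sender()
                .queueName(queue.trim())
                .buildClient());
    }
}
```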

Cleaning up (onStop)
When the node stops, it closes all the sender clients so there are no resource leaks.

Processing a message (evaluate)
Here’s the main logic:
1. Copy the input message
2. Find the target queue
- It looks at the JMS headers to find the destination (JMSDestination).
- Extracts the queue name (removing extra bits like queue:/// or query params).
3. Get the right sender client
- It matches the queue name with the sender client created earlier.
4. Extract the payload
- The actual message body (in BLOB format).
5. Build an Azure message
- Add the payload.
- Copy any application properties from the original message.
6. Send it to Azure Service Bus.
Finally, the message is passed down the output terminal so ACE can continue its flow.
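Put together, the evaluate logic looks roughly like this. The JMSTransport tree path for JMSDestination is an assumption about how the message arrives from a JMSInput node, and the property-copy step is left as a comment, so adjust both to your actual message tree:

```java
import com.azure.core.util.BinaryData;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;
import com.ibm.broker.plugin.*;

// Excerpt from the sendToAzure JavaCompute class.
public void evaluate(MbMessageAssembly inAssembly) throws MbException {
    MbMessage inMessage = inAssembly.getMessage();
    MbMessage outMessage = new MbMessage(inMessage); // 1. copy the input message

    // 2. Find the target queue in the JMS headers (tree path is an assumption).
    MbElement jmsDestination = inMessage.getRootElement()
            .getFirstElementByPath("/JMSTransport/Transport_Folders/Header_Values/JMSDestination");
    if (jmsDestination == null) {
        throw new MbUserException(this, "evaluate()", "", "", "No JMSDestination header", null);
    }
    String queueName = jmsDestination.getValueAsString()
            .replace("queue:///", "")   // strip the JMS URI prefix
            .split("\\?")[0];           // drop any query parameters

    // 3. Match the queue name with the sender client built in onStart.
    ServiceBusSenderClient sender = senders.get(queueName);
    if (sender == null) {
        throw new MbUserException(this, "evaluate()", "", "", "No sender for " + queueName, null);
    }

    // 4. Extract the BLOB payload (add null checks as needed).
    byte[] payload = (byte[]) inMessage.getRootElement()
            .getFirstElementByPath("/BLOB/BLOB").getValue();

    // 5. Build the Azure message: payload plus application properties.
    ServiceBusMessage sbMessage = new ServiceBusMessage(BinaryData.fromBytes(payload));
    // ... copy application properties from the ACE message tree into
    //     sbMessage.getApplicationProperties() here ...

    // 6. Send, then let the flow continue.
    sender.sendMessage(sbMessage);
    getOutputTerminal("out").propagate(new MbMessageAssembly(inAssembly, outMessage));
}
```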

Error handling
If something goes wrong (like missing configuration, a queue that isn’t found, or an Azure error), the code throws an MbUserException. This rolls the message back so it lands on the backout queue of the input queue it was read from.
Configuration with ACE Policies
To keep connection details out of the Java code, we store all Azure Service Bus settings in a UserDefined policy. This way, we can update connection strings, keys, and queue mappings without touching code or redeploying flows.
Example policy:
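A minimal sketch of what such a UserDefined policy descriptor can look like. The property names are our own convention, not fixed by ACE; the Java code reads them by name, for example via MbPolicy.getPolicy("UserDefined", "{PolicyProject}:AzureServiceBus"):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<policies>
  <!-- Illustrative UserDefined policy; property names match what the Java code looks up. -->
  <policy policyType="UserDefined" policyName="AzureServiceBus" policyTemplate="UserDefined">
    <endpoint>sb://mynamespace.servicebus.windows.net</endpoint>
    <sharedAccessKeyName>RootManageSharedAccessKey</sharedAccessKeyName>
    <sharedAccessKey>*** keep out of source control ***</sharedAccessKey>
    <queueList>queue.a,queue.b</queueList>
  </policy>
</policies>
```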

Deployment Tips
A few additional notes for deployment:
- Place the Azure Service Bus Java SDK libraries in your ACE classpath.
- Make sure you’re running on a supported ACE version (12.0.11.0+ with Java 11).
- Deploy the UserDefined policy alongside your BAR file so each environment can override connection details.
- For high-throughput scenarios, tune the batch size and receive timeout in the receiver flow.
- Monitor connection health: Azure Service Bus SDK is generally more resilient than the old JMS client, but network hiccups can still happen.
Lessons Learned
This approach gave us a solution that is:
- Generic and dynamic, supporting multiple queues without static configuration.
- Faithful to headers and payloads, with no more strings-only messages.
- Stable, built on the supported Azure SDK.
The downside:
- Custom code needs to be maintained and tested against new ACE and SDK versions.
Conclusion
What started as a simple need to connect ACE with Azure Service Bus turned into a journey through broken JMS clients, idle connection failures, and half-working connectors. In the end, rolling our own solution with the Azure SDK gave us the flexibility and reliability we needed.
For more integration tips and tricks, visit Integration Designers and check out our other blog posts.