Explore the new features in App Connect Enterprise 13.0.5.0

By Ben Thompson posted 15 days ago

  
We are very pleased to announce the delivery of IBM App Connect Enterprise 13.0.5.0 software. We aim to provide quarterly mod releases for ACE 13 that contain both new features and regular maintenance.

  • IBM App Connect Enterprise 13.0.1.0 was released in September 2024 - more information here
  • IBM App Connect Enterprise 13.0.2.0 was released in December 2024 - more information here
  • IBM App Connect Enterprise 13.0.3.0 was released in March 2025 - more information here
  • IBM App Connect Enterprise 13.0.4.0 was released in June 2025 - more information here
  • IBM App Connect Enterprise 13.0.5.0 has just been released - more information below

This blog post summarizes all the latest and greatest capabilities which are made available in IBM App Connect Enterprise 13.0.5.0:

  • Kafka node enhancements for Transactions
  • Kafka node enhancements for Scaling
  • Context Trees in the Toolkit Visual Debugger and Flow Exerciser
  • Using the BLOB domain for large data handling with Microsoft Azure Blob Storage Request node
  • New Toolkit Discovery Request Nodes:
    • Microsoft Azure Event Hubs, Google Gemini, IBM Aspera, Redis, Splunk, and Vespa
  • New Toolkit Discovery Input Nodes
    • Microsoft Azure Event Hubs
  • Toolkit AI Pattern for RAG using Pinecone and Watsonx.ai
  • Designer Account renaming at creation time

Kafka node enhancements for Transactions

ACE 13.0.5.0 includes some structural enhancements to our Kafka nodes in the area of transactionality. These new capabilities broaden the functional use cases we can support and also offer the option of improved performance.
 
  • Transactionality when producing Kafka messages:  Prior to ACE 13.0.5.0, the ACE KafkaProducer node acted non-transactionally. When a message is published to Kafka non-transactionally, it is immediately available to all consumers and the publication cannot be undone. However, when a message is published by a transactional Kafka producer (now possible via new properties added in 13.0.5.0), it does not become available to consumers until the transaction under which it was published is committed (see the code sketch below).

  • Transactionality when consuming Kafka messages:  Prior to ACE 13.0.5.0, the ACE KafkaConsumer node acted non-transactionally. Users could choose to commit the message offset in Kafka (the behaviour most users adopt, so that messages are not reprocessed), and optionally wait for confirmation that the commit of the message offset had completed before propagating the data down the flow. These properties have been replaced in 13.0.5.0 with new, more detailed options on the new Transactionality tab of the KafkaConsumer node. A new property called Isolation level controls which messages are delivered to the KafkaConsumer node:
    • read_uncommitted means that the KafkaConsumer node will receive messages as soon as they are published, without waiting for the publisher's transaction to be committed. This means that the KafkaConsumer node might receive a message that is subsequently discarded if the transaction is rolled back.
    • read_committed means that the KafkaConsumer node won't receive messages that are not yet committed. If the node reaches an uncommitted message in the topic partition, that message will block others from being read by the KafkaConsumer until it is committed or rolled back.
 
Given the changes described above, the new Transactionality settings for both Kafka consumers and producers mean that the nodes can operate together transactionally, allowing message flows that use Kafka nodes to provide exactly-once messaging semantics.
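
To make the semantics concrete, here is a minimal, non-ACE sketch of the standard Apache Kafka consume-process-produce transaction that these settings correspond to, written against the plain Java kafka-clients API. The broker address, topic names, group ID, and transactional.id are placeholder values, and the uppercase transform stands in for your message flow logic:

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ExactlyOnceSketch {
    public static void main(String[] args) {
        Properties c = new Properties();
        c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        c.put(ConsumerConfig.GROUP_ID_CONFIG, "flow-group");
        c.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed"); // skip uncommitted messages
        c.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");       // offsets committed in the transaction
        c.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        c.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        p.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "flow-tx-1");     // makes the producer transactional
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c);
             KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            consumer.subscribe(List.of("in-topic"));
            producer.initTransactions();
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (ConsumerRecord<String, String> rec : records) {
                        producer.send(new ProducerRecord<>("out-topic", rec.key(), rec.value().toUpperCase()));
                        offsets.put(new TopicPartition(rec.topic(), rec.partition()),
                                    new OffsetAndMetadata(rec.offset() + 1));
                    }
                    // The consumed offsets commit in the SAME transaction as the
                    // published messages: both become visible together, or neither does.
                    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                    producer.commitTransaction();
                } catch (RuntimeException e) {
                    producer.abortTransaction(); // published messages are discarded
                    throw e;
                }
            }
        }
    }
}
```

A read_committed consumer downstream of out-topic will only ever see messages from committed transactions, which is what makes the end-to-end exactly-once guarantee possible.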

Kafka node enhancements for Scaling

ACE 13.0.5.0 also brings improvements to the Kafka message flow nodes' scaling options when consuming messages from Kafka, which now enable the distribution of Kafka messages across multiple Kafka connections, multiple message flows, and additional instances.
 
  • Increasing additional instances:  As existing users will be well aware, ACE has always exhibited market-leading performance, and the product's ability to scale in a consistent way has always been one of our strengths. When looking to increase the throughput of a message flow, users will often begin by increasing the number of additional instances assigned to the flow. Changing this setting, either at the message flow level or by using a dedicated pool of additional instances for a particular input node, assigns a greater number of concurrent threads in the integration server operating system process, allowing ACE to process more messages in parallel.
 
You can increase the number of additional instances for a flow by setting the value of Additional instances on the Instances tab of the KafkaConsumer node. Increasing the number of additional instances enables parallel processing of Kafka messages: messages received from Kafka are distributed across these instances, allowing concurrent processing. This technique works well, but in prior ACE versions parallel execution with the KafkaConsumer node meant sacrificing message ordering. With the introduction of the new transactional settings on the KafkaConsumer node, using additional instances no longer means sacrificing message ordering, provided the Commit message offset to Kafka property has been configured for transactionality.
 
  • Deploying multiple message flows: You can deploy multiple flows that use KafkaConsumer nodes that are configured with the same Consumer group ID and subscribed to the same Topic name on the Basic tab of the KafkaConsumer node. These message flows will collectively form a Kafka consumer group, and with ACE 13.0.5.0 each flow will establish its own connection to Kafka to pull messages independently, increasing parallelism in message consumption.
 
  • Kafka Connections Node Property:  In ACE 13.0.5.0, on the Advanced tab of the KafkaConsumer node's properties, there is a new Kafka connections property. 
By default, this property is set to 1. Increasing this value allows a single flow to establish multiple consumer connections to Kafka, all within the same consumer group. Each Kafka connection receives its own set of partitions, as assigned by Kafka, so increasing the number of connections can improve parallelism in message consumption. This method can be combined with additional instances for maximum scalability (a code sketch illustrating the idea follows the list below). For example:
 
    • Increase Kafka Connections to pull more messages concurrently.
    • Increase Additional Instances to process those messages in parallel.
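
As a rough, non-ACE illustration of what raising the Kafka connections value does, the sketch below uses the plain Java kafka-clients API to open several consumers in the same consumer group, each on its own thread, so that Kafka assigns each connection its own subset of partitions. The broker, topic, and group names are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MultiConnectionSketch {
    static final int KAFKA_CONNECTIONS = 3; // analogous to the new node property

    public static void main(String[] args) {
        for (int i = 0; i < KAFKA_CONNECTIONS; i++) {
            final int id = i;
            new Thread(() -> {
                Properties props = new Properties();
                props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
                props.put(ConsumerConfig.GROUP_ID_CONFIG, "flow-group"); // same group for every connection
                props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
                props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                    consumer.subscribe(List.of("orders"));
                    while (true) {
                        // Kafka's group protocol gives each connection a disjoint partition set.
                        for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                            System.out.printf("connection %d: partition %d offset %d%n",
                                              id, rec.partition(), rec.offset());
                        }
                    }
                }
            }, "kafka-connection-" + id).start();
        }
    }
}
```

Each thread here plays the role of one Kafka connection; additional instances then correspond to the worker threads that process what those connections pull in.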

Context Trees in the Toolkit Visual Debugger and Flow Exerciser

In our last release, ACE 13.0.4.0, we introduced a read-only Context tree that holds output data from each discovery connector node and is propagated forward through the message flow, so that the data is available for mapping tasks in later nodes. Our release blog described the tree and how it is accessible in a Compute node.

Experienced users of ACE will be familiar with the shape of the logical tree structure which is held in memory and passed from one message flow node to the next as data travels through a message flow. We often call this tree structure the Message Assembly. The Message Assembly has historically been made up of four separate sub-trees:

  • Message tree: The message tree is a part of the logical tree (message assembly) in which the internal representation of the message body is stored.
  • Environment tree: The environment tree is a part of the logical tree (message assembly) in which you can store information while the message passes through the message flow.
  • Local environment tree: The local environment tree is a part of the logical tree (message assembly) in which you can store information while the message flow processes the message.
  • Exception list tree: The exception list tree is a part of the logical tree (message assembly) in which the message flow writes information about exceptions that occur when a message is processed.

The new section of the Message Assembly that was added in 13.0.4.0 is called the Context tree:

  • Context tree: The context tree is another way of carrying information about the actions executed by individual message flow nodes. It is a read-only structure and is intended to be available for all downstream nodes in your flow so that the message flow developer doesn't need to be concerned with copying forward relevant information for use later in the message flow.  Unlike the other logical trees, the context tree is not captured by default; in order for it to be made available, it must be referenced in the message flow.  Initially we intend the Context tree to be used in conjunction with the Discovery Connector nodes, but over time we expect developers may wish to use it more generally.

New in ACE 13.0.5.0 we have added further enhancements in this area:

  • The Context Tree is now visible when you pause a message flow at a breakpoint in the Toolkit Visual Debugger.
  • The Context Tree is also now visible when you click on a wire showing the execution path of a message flow in the Toolkit Flow Exerciser.

  • Compute node ESQL has been enhanced with a new CONTEXTINVOCATIONNODE function, which saves the user from having to know the name of the input node in the common case of wanting to access the context tree payload that came into the message flow. As a reminder, ACE 13.0.4.0 provided a CONTEXTREFERENCE function in ESQL, which provides access to a section of the Context tree from a specific prior node in the message flow propagation path. This function returns a REFERENCE to the requested subfolder of the context tree.
  • The Java Compute node has been enhanced with classes named MbContextTreeNode and MbContextTreeNodePayload, which provide new methods for navigating the context tree and retrieving information for use within a Java Compute node transformation (see the sketch below).
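
The exact method names on these new classes aren't listed in this post, so the following Java Compute skeleton only marks where the context tree access would go; the commented-out calls are assumed names for illustration, not the documented API:

```java
// Skeleton of a Java Compute node transformation. The IBM Integration API
// classes used here (MbJavaComputeNode, MbMessageAssembly, and so on) are
// real; the commented-out context tree calls are hypothetical placeholders.
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessageAssembly;
import com.ibm.broker.plugin.MbOutputTerminal;

public class ContextTreeSketch extends MbJavaComputeNode {
    @Override
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        MbOutputTerminal out = getOutputTerminal("out");

        // Hypothetical placeholders: the real method names may differ.
        // MbContextTreeNode ctx = <navigate to the input node's context entry>;
        // MbContextTreeNodePayload payload = ctx.getPayload();
        // ... use the payload in the transformation ...

        out.propagate(assembly);
    }
}
```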

Using the BLOB domain for large data handling with Microsoft Azure Blob Storage Request node

Typically, when flow developers use the ACE Discovery Request nodes in a message flow, the input data structure which the node expects to receive and the output data structure which is sent further down the message flow are in a JSON format, presented to the flow developer in the logical tree in the JSON message domain. This approach is consistent with the fact that the vast majority of endpoint applications which our discovery connectors are designed to interact with are accessible through JSON payloads sent to OpenAPI endpoints. However, there are situations where JSON is not the most appropriate way of describing a message payload. Traditional ACE message flows in the Toolkit solve this problem by providing integration specialists with a variety of built-in parsers (and in some cases the relevant associated message models) out of the box, such as BLOB, XMLNSC, and DFDL.
 
ACE 13.0.5.0 introduces the ability to use BLOB messages directly (without the need to transform via a JSON domain message) when pushing data into the Microsoft Azure Blob Storage Request node. As a reminder, the BLOB message domain describes a bit stream as a single string of bytes: although you can manipulate it within a message flow using positional functions (such as ESQL functions to count or substring a certain number of bytes), you cannot identify specific pieces of the byte string using a modelled field reference, in the way that you can with messages in other domains. Consider a simple message flow which has an HTTP Input node configured to receive data in the BLOB domain, wired (after initially echoing the data back to the requesting client) to a Microsoft Azure Blob Storage Request node.
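
Before walking through the discovery steps, here is a minimal Java Compute sketch of what positional handling in the BLOB domain looks like. The Root / BLOB / BLOB tree navigation is the standard BLOB domain layout; the 16-byte slice is purely illustrative:

```java
// Slices the first bytes off a BLOB domain payload positionally, since the
// BLOB domain has no modelled fields to reference.
import java.util.Arrays;
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbElement;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessage;
import com.ibm.broker.plugin.MbMessageAssembly;
import com.ibm.broker.plugin.MbOutputTerminal;

public class BlobSlice extends MbJavaComputeNode {
    @Override
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        MbOutputTerminal out = getOutputTerminal("out");
        MbMessage inMessage = assembly.getMessage();

        // In the BLOB domain the tree is Root -> BLOB -> BLOB, where the
        // innermost element's value is the raw byte array.
        MbElement blobElement = inMessage.getRootElement().getLastChild().getFirstChild();
        byte[] payload = (byte[]) blobElement.getValue();

        // "Substring" the first 16 bytes positionally (illustrative offset).
        byte[] header = Arrays.copyOfRange(payload, 0, Math.min(16, payload.length));

        MbMessage outMessage = new MbMessage(inMessage);
        MbMessageAssembly outAssembly = new MbMessageAssembly(assembly, outMessage);
        outMessage.getRootElement().getLastChild().getFirstChild()
                  .setValue(header); // replace the body with the sliced bytes
        out.propagate(outAssembly);
    }
}
```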
After launching Connector Discovery, as with other discovery connector nodes, you specify connection details and choose the relevant action you wish to perform; in this case, an action to Update or create blob. When assigning the value to be placed into the Content field, you will now find an option in the Available mappings to take BLOB data from the Payload of the incoming logical message tree.

Similarly, the reverse scenario is possible. Imagine you are receiving data back into the Toolkit message flow for manipulation downstream of a Microsoft Azure Blob Storage Request node, and you would like this data to be represented in the BLOB domain. At discovery time you would select the Download blob content action. On the following panel, if you expand the Controls section, you now have a new way of setting the Response binary data handling option to BLOB domain. This results in the message flow node having its Response Message Parsing tab set up for the BLOB message domain.
 

New Toolkit Discovery Request Nodes

Continuing our mission to expand the available Toolkit message flow nodes for easy connection to third-party applications, this quarter ACE 13.0.5.0 adds six new Discovery Request message flow nodes:

  • Microsoft Azure Event Hubs Request node: Microsoft Azure Event Hubs is a high-throughput data streaming platform that ingests and processes millions of events every second, enabling real-time analytics across enterprise systems. Use the Microsoft Azure Event Hubs Request node to connect to Microsoft Azure Event Hubs and issue requests to create, update, delete, or retrieve consumer groups, messages, event hubs, and partitions.
  • Google Gemini Request node: Google Gemini is a family of multimodal AI models that process data types such as text, code, audio, images, and video. Use the Google Gemini Request node to connect to Google Gemini and issue requests to perform actions on objects in the Google Gemini application.
  • IBM Aspera Request node:  IBM Aspera enables high-speed data transfer of large files and data sets across global networks, regardless of file size, transfer distance, or network conditions. Use the IBM Aspera Request node to connect to IBM Aspera and issue requests to perform actions on objects such as files, node configuration, permissions, tokens and transfers.
  • Redis Request node: Redis (REmote DIctionary Server) is an open source data store, which holds data in memory and functions as a database, cache, and message broker. As a NoSQL database, Redis stores key/value pairs in data structures (such as string, hash, and list data types) rather than within tables and other database objects, which are used by SQL-based relational databases. Redis also provides a set of commands that can be used to execute operations on the Redis server (for example, SET and GET) in much the same way as SQL-based statements such as INSERT or SELECT. Use the Redis Request node to connect to Redis and issue requests to create, update, delete, or retrieve items and run custom SQL queries.
  • Splunk Request node: Splunk is a software platform used for searching, monitoring, and analyzing machine-generated data in real time. Use the Splunk Request node to connect to Splunk and issue requests to perform actions on objects such as applications, HTTP Event Collector (HEC), search and users.
  • Vespa Request node:  Vespa is a platform for low-latency computation on large datasets. It stores and indexes structured text and vector data, allowing for fast queries, data processing, and machine learning model inference at scale. Use the Vespa Request node to connect to Vespa and issue requests to perform actions on objects such as documents.

Each new type of connector also has a corresponding new policy type, which helps Toolkit users define configuration properties for easy connection to the applications. These policies also link to credential information that can be encrypted and stored in an ACE vault, enabling the ACE runtime to safely and securely connect to your applications. For example, a new Microsoft Azure Event Hubs policy type is provided.

New Toolkit Discovery Input Nodes

In ACE 13.0.5.0 we have also added one new Discovery Input Message Flow node:

  • Microsoft Azure Event Hubs Input node: Microsoft Azure Event Hubs is a high-throughput data streaming platform that ingests and processes millions of events every second, enabling real-time analytics across enterprise systems. Use the Microsoft Azure Event Hubs Input node in a message flow to monitor Microsoft Azure Event Hubs and accept input when a new message arrives.

Toolkit AI Pattern for RAG using Pinecone and Watsonx.ai

An increasingly popular method of improving the quality of responses from Large Language Models is Retrieval-Augmented Generation, commonly shortened to RAG. Retrieval refers to pulling relevant information from internal or external data sources (which, in the case of the ACE feature described below, are plain text documents), and Generation refers to the fact that a language model is used to generate responses based on the retrieved content. RAG methods are popular because they can:

  • Reduce hallucinations by grounding responses in actual data
  • Keep answers up to date with live or recent information (versus potentially out-of-date data which may have been used to train the model)
  • Work with existing knowledge, removing the need to retrain or fine-tune models
  • Accelerate decision making by surfacing the right insights

ACE 13.0.5.0 introduces a new category in the Patterns Gallery: AI Patterns. The initial pattern provided in this category is our RAG pattern, which generates two separate message flows: an indexing flow that loads text data into a Pinecone vector database, and a query API flow that retrieves relevant information from the database. The retrieved information is used as additional context for the Large Language Model running in IBM watsonx, which enhances the quality and relevance of the model's responses.
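
To make the retrieve/augment/generate sequence concrete, here is a purely conceptual sketch of what the query side does; every class and method in it is a hypothetical placeholder, not the pattern's generated code or the Pinecone / watsonx.ai client APIs:

```java
// Conceptual RAG query sketch: retrieve relevant chunks, augment the prompt,
// then generate. Both helper methods are hypothetical stand-ins.
import java.util.List;

public class RagQuerySketch {
    // Placeholder for an embedding lookup plus Pinecone top-k similarity search.
    static List<String> retrieveRelevantChunks(String question, int topK) {
        return List.of("chunk about refund policy", "chunk about shipping times");
    }

    // Placeholder for a watsonx.ai text-generation call.
    static String generate(String prompt) {
        return "LLM answer grounded in the supplied context";
    }

    public static void main(String[] args) {
        String question = "What is the refund policy?";
        // 1. Retrieval: fetch the most relevant indexed text chunks.
        List<String> context = retrieveRelevantChunks(question, 3);
        // 2. Augmentation: prepend the retrieved chunks so the model answers
        //    from supplied data rather than from its training memory alone.
        String prompt = "Answer using only this context:\n"
                + String.join("\n", context)
                + "\nQuestion: " + question;
        // 3. Generation: the model produces a grounded response.
        System.out.println(generate(prompt));
    }
}
```

In the generated pattern, the indexing flow populates the vector database up front, and the query API flow performs the equivalent of steps 1 to 3 for each incoming request.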

Once you have clicked on the pattern and started it (which involves a very quick "installation" of some files downloaded to your Toolkit from our pattern store, hosted in a publicly accessible GitHub repository), you can create an instance of the pattern with configuration values tailored to your individual circumstances. You can then set the variables that influence the behaviour of the generated message flows.
When you click the Generate button, the pattern instance creates the required projects, including the Indexing.msgflow and the REST API Query subflow.

Designer Account renaming at creation time

In ACE 13.0.4.0 and prior mod releases, when you are in the Designer flow authoring tool doing connector discovery and you connect to an application for the first time, you are asked to provide connection information and credentials. These details are saved as an account configuration, and Designer generates a name for that account based on the name of the application you are connecting to, followed by a number. If you don't like this name, you can later return to the created account and rename it to a name of your own choosing.

In ACE 13.0.5.0 we have enhanced this experience so that you can choose an alternative name for the account at the time it is initially created, via a new field in the account creation dialog.
