
Explore the new features in App Connect Enterprise 13.0.2.0

By Ben Thompson posted Sun December 08, 2024 08:05 PM

  


We are very pleased to announce the delivery of IBM App Connect Enterprise 13.0.2.0. We aim to provide regular quarterly mod releases for ACE 13, each containing both new features and regular maintenance.

  • IBM App Connect Enterprise 13.0.1.0 was released in September 2024 - more information here
  • IBM App Connect Enterprise 13.0.2.0 has just been released - more information below

This blog post summarizes all the latest and greatest capabilities which are made available in IBM App Connect Enterprise 13.0.2.0:

  • Using Avro and Schema Registries with the Kafka nodes
  • Templates for use in the IBM App Connect Enterprise Designer authoring tool
  • Support for MS SQL Server and PostgreSQL databases with Business Transaction Monitoring
  • PostgreSQL - Support for stored procedures with dynamic result sets
  • TCPIP message flow nodes with timeouts in fractions of a second
  • Dynamic credentials for SFTP and Local Security Profiles
  • New Couchbase Request message flow node
  • Support for IPv6 network addresses
  • New SSL options for the ibmint deploy command
  • Securing OpenTelemetry messages with an authentication header and a new credential type
  • Auto-complete for ibmint commands in a bash shell on Linux and UNIX

Using Avro and Schema Registries with the Kafka nodes

Apache Avro is an open source method for serializing data as it is exchanged between different systems and applications, and it is frequently used in conjunction with Kafka messages. An Avro schema defines the structure of the data in the Kafka message. Starting in ACE 13.0.2.0, the KafkaProducer node has been enhanced so that a JSON message passed into the node can be serialized into Avro format, using a referenced Avro schema, before it is published to the topic. You can define a schema identifier directly in the message flow node's properties or, if you prefer, define the schema identifier dynamically in the LocalEnvironment section of the logical tree which is passed into the message flow node. The KafkaProducer node must also be given the name of a policy which describes how to locate the required Avro schema. A new Schema Registry policy type is provided for this purpose.

The KafkaConsumer and KafkaRead nodes have similar equivalent settings (although in those situations there is no need for the Schema Id property).  The Schema Registry policy contains the connection information for the schema registry from which the Avro schema is retrieved. If you redeploy a Schema Registry policy, all message flows that are using the policy will be stopped and restarted. The graphic below shows the properties which are available in this new policy type:
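By way of illustration, a Compute node upstream of the KafkaProducer node could set the schema identifier dynamically in the LocalEnvironment. The element path used below is an assumption made for the purpose of this sketch; check the ACE documentation for the exact LocalEnvironment structure used by the Kafka nodes:

CREATE COMPUTE MODULE SetKafkaSchemaId
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    -- Copy the incoming message through unchanged
    SET OutputRoot = InputRoot;
    -- Hypothetical LocalEnvironment path for the Avro schema identifier;
    -- the node's Compute mode must be set to include LocalEnvironment
    SET OutputLocalEnvironment.Destination.Kafka.Output.schemaId = 'customer-value-v1';
    RETURN TRUE;
  END;
END MODULE;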

For more information about Avro serialization, check out the ACE documentation page.

Templates for use in the IBM App Connect Enterprise Designer authoring tool

The latest mod release, ACE 13.0.2.0, enhances the Designer authoring tool with a large catalog containing nearly 400 flow templates. Templates provide users with a quick way to get started when authoring event-driven or API flows in the Designer tool. As shown in the screenshot below, a filter is provided to help you quickly find a template that meets your requirements.

Support for MS SQL Server and PostgreSQL databases with Business Transaction Monitoring

One of the popular features of App Connect Enterprise is Business Transaction Monitoring, which helps users monitor the lifecycle of a message payload through a business transaction, potentially travelling through multiple message flows. In this context, a Business Transaction means a set of events or actions that form a self-contained business use case, such as booking an airline ticket or the order, dispatch and delivery of a parcel. A Business Transaction can incorporate several different technical transactions or units of work, and can include data being exchanged across multiple protocols between message flows.

Business Transaction Monitoring is based upon the ability of ACE to publish monitoring messages over MQ, and this feature also uses a relational database to store the progress of the transactions. ACE 13.0.2.0 extends the set of database types which can be used by this feature to include MS SQL Server and PostgreSQL, alongside the previously supported Db2 and Oracle. ACE makes connections to the database using ODBC.

  • On Windows, the ODBC driver for MS SQL Server is provided by the operating system.
  • On the other ACE platforms, ACE provides a DataDirect ODBC driver for connecting to MS SQL Server.
  • On Windows and xLinux, ACE provides a DataDirect ODBC driver for connecting to PostgreSQL.

The Record and Replay feature is now also supported for use with PostgreSQL on Windows and xLinux. When configuring Business Transaction Monitoring and/or Record and Replay, ACE provides DDL files for the creation of the relevant database tables in the directory <ACE Product Installation Directory>\server\ddl.
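As a hedged illustration, on xLinux the supplied DDL could be applied to a PostgreSQL database using the psql client; the host, database, and user names below are placeholders, and the specific DDL file name should be taken from the ddl directory:

psql -h dbhost -U aceuser -d btmdb -f "<ACE Product Installation Directory>/server/ddl/<relevant DDL file>"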

PostgreSQL - Support for stored procedures with dynamic result sets

The first version of IBM App Connect Enterprise to provide a built-in ODBC database driver for communicating with PostgreSQL databases was 12.0.10.0. The latest mod release, ACE 13.0.2.0, extends this support to enable users to invoke stored procedures on PostgreSQL which return dynamic result sets. Users familiar with Compute node ESQL for invoking stored procedures on other database types may notice a slight difference in the syntax we are promoting for PostgreSQL: a dummy cursor value must be supplied in the ESQL CALL statement for each intended result set.

For this example, consider a PostgreSQL database table which is created and populated with some rows of data like this:

CREATE TABLE mix_table ( id SERIAL PRIMARY KEY, name VARCHAR(50), email VARCHAR(50), ts TIMESTAMP, luckynumber NUMERIC );
INSERT INTO mix_table VALUES (4, 'Ali', 'ali@example.com','1999-01-08 04:05:06', 40 );
INSERT INTO mix_table VALUES (5, 'Bobby', 'bobby@example.com', '2000-01-08 04:05:06', 50 );
INSERT INTO mix_table VALUES (6, 'Charles', 'charles@example.com','2001-01-08 04:05:06', 60 );

The resulting populated table would look like this:

id   name      email                 ts                    luckynumber
======================================================================
4    Ali       ali@example.com       1999-01-08 04:05:06   40
5    Bobby     bobby@example.com     2000-01-08 04:05:06   50
6    Charles   charles@example.com   2001-01-08 04:05:06   60

Consider further a stored procedure called return_mixed_results, which is defined like this:

CREATE OR REPLACE PROCEDURE return_mixed_results (
   IN username1 character varying,
   IN username2 character varying,
   IN username3 character varying,
   INOUT user_cursor1 refcursor,
   INOUT user_cursor2 refcursor,
   INOUT user_cursor3 refcursor
)
language plpgsql
as $$
begin
   OPEN user_cursor1 FOR
      SELECT id, name, email, luckynumber
      FROM mix_table
      WHERE name = username1;

   OPEN user_cursor2 FOR
      SELECT id, name, email
      FROM mix_table
      WHERE name = username2;

   OPEN user_cursor3 FOR
      SELECT id, name, email, ts
      FROM mix_table
      WHERE name = username3;
end;$$;

A Compute node's ESQL would then define a CREATE PROCEDURE statement that specifies three IN parameters, three INOUT parameters, and three dynamic result sets:

CREATE PROCEDURE my_return_mix(IN username1 CHAR, IN username2 CHAR, IN username3 CHAR, INOUT ResultSet01 CHAR, INOUT ResultSet02 CHAR, INOUT ResultSet03 CHAR)
  LANGUAGE DATABASE
  DYNAMIC RESULT SETS 3
EXTERNAL NAME "return_mixed_results";

Consider then invoking the my_return_mix procedure and then using its results to construct an output message using Compute node ESQL like this:

DECLARE ResultSet01 ROW;
DECLARE ResultSet02 ROW;
DECLARE ResultSet03 ROW;
DECLARE username1 CHAR 'Charles';
DECLARE username2 CHAR 'Bobby';
DECLARE username3 CHAR 'Ali';
DECLARE dummy_cursor1 CHAR;
DECLARE dummy_cursor2 CHAR;
DECLARE dummy_cursor3 CHAR;

CALL my_return_mix(username1, username2, username3, dummy_cursor1, dummy_cursor2, dummy_cursor3, ResultSet01.data[], ResultSet02.data[], ResultSet03.data[]);
            
SET OutputRoot.XMLNSC.Results.Data1.UserID = ResultSet01.data.id;
SET OutputRoot.XMLNSC.Results.Data1.UserName = ResultSet01.data.name;
SET OutputRoot.XMLNSC.Results.Data1.UserEmail = ResultSet01.data.email;
SET OutputRoot.XMLNSC.Results.Data1.ts = ResultSet01.data.ts;
SET OutputRoot.XMLNSC.Results.Data1.luckynumber = ResultSet01.data.luckynumber;

SET OutputRoot.XMLNSC.Results.Data2.UserID = ResultSet02.data.id;
SET OutputRoot.XMLNSC.Results.Data2.UserName = ResultSet02.data.name;
SET OutputRoot.XMLNSC.Results.Data2.UserEmail = ResultSet02.data.email;
SET OutputRoot.XMLNSC.Results.Data2.ts = ResultSet02.data.ts;
SET OutputRoot.XMLNSC.Results.Data2.luckynumber = ResultSet02.data.luckynumber;

SET OutputRoot.XMLNSC.Results.Data3.UserID = ResultSet03.data.id;
SET OutputRoot.XMLNSC.Results.Data3.UserName = ResultSet03.data.name;
SET OutputRoot.XMLNSC.Results.Data3.UserEmail = ResultSet03.data.email;
SET OutputRoot.XMLNSC.Results.Data3.ts = ResultSet03.data.ts;
SET OutputRoot.XMLNSC.Results.Data3.luckynumber = ResultSet03.data.luckynumber;   

This would generate an output message like the one shown below. Note that columns which are not selected by a given result set (for example, ts is not selected for Data1) evaluate to NULL, so the corresponding elements do not appear in the output message:

<Results>
      <Data1>
            <UserID>6</UserID>
            <UserName>Charles</UserName>
            <UserEmail>charles@example.com</UserEmail>
            <luckynumber>60</luckynumber>
      </Data1>
      <Data2>
            <UserID>5</UserID>
            <UserName>Bobby</UserName>
            <UserEmail>bobby@example.com</UserEmail>
      </Data2>
      <Data3>
            <UserID>4</UserID>
            <UserName>Ali</UserName>
            <UserEmail>ali@example.com</UserEmail>
            <ts>1999-01-08T04:05:06</ts>
      </Data3>
</Results>

TCPIP message flow nodes with timeouts in fractions of a second

App Connect Enterprise 13.0.2.0 provides new support for timeout values expressed as fractions of a second. App Connect Enterprise has six message flow nodes for communicating with applications across raw TCP/IP sockets:

  • TCPIPClientInput: Use this node to create a client connection to a raw TCP/IP socket, and to receive data over that connection.
  • TCPIPClientOutput: Use this node to create a client connection to a raw TCP/IP socket, and to send data over that connection to an external application.
  • TCPIPClientReceive: Use this node to receive data over a client TCP/IP connection.
  • TCPIPServerInput: Use this node to create a server connection to a raw TCP/IP socket, and to receive data over that connection.
  • TCPIPServerOutput: Use this node to create a server connection to a raw TCP/IP socket, and to send data over the connection to an external application.
  • TCPIPServerReceive: Use this node to receive data over a server TCP/IP connection.

Prior to this enhancement, these message flow nodes only allowed timeout values to be expressed in whole numbers of seconds, with a minimum value of 1 second. This was restrictive in situations where users would like the TCPIP nodes to time out more quickly and precisely, in order to cope with target systems that do not respond fast enough, where the flow is then required to carry out some other action (such as replying to another application).

You can now specify timeout values on TCPIP nodes to up to three decimal places (to maintain backward compatibility, the units in which timeout values are expressed are still seconds). So, for example, to set the timeout to 250 milliseconds you would specify 0.250. The default timeout value is still 60 seconds. Note that 0.100 (100 milliseconds) is the shortest timeout that can be specified.

Dynamic credentials for SFTP and Local Security Profiles

Users of ACE 13 may have noticed a new command, ibmint display credential-types, which was added a few months ago in the 13.0.1.0 release in September. This command lists the available credential types and, for each one, shows whether the type is static or dynamic. A dynamic credential can be updated without the server requiring a restart for the update to take effect. A good proportion of the available credential types are already dynamic, in particular the credential types used by most of the discovery connectors. Users of the more traditional ACE feature set may be particularly interested to know that the ODBC credential type was also made dynamic in the 13.0.1.0 release.

In ACE 13.0.2.0 we have further extended the set of dynamic credentials to include credentials for communicating with SFTP servers and credentials used by a local security profile (for example, to carry out basic authentication for a flow invoked over HTTP).
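For example, an SFTP credential might be created (and later updated without a server restart) using the mqsicredentials command from a command console; the work directory, credential name, and values below are hypothetical placeholders:

mqsicredentials --create --work-dir /home/aceuser/ace-server --credential-type sftp --credential-name mySftpServer --username sftpuser --password "********"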

New Couchbase Request message flow node

Over the last two and a half years (since App Connect Enterprise 12.0.5.0 in June 2022) the product has introduced over 150 message flow nodes known as discovery connectors. These connectors come with an easy guided experience in the Toolkit for connecting to online applications and gathering the required configuration.

The latest of these new message flow nodes, added in this 13.0.2.0 release, is the Couchbase Request node, which is available on Windows and Linux systems (if you attempt to deploy to an AIX server you will receive BIP4639, stating that the flow cannot be initialized because the required discovery connector couchbase is not installed).

You can use the Couchbase Request node to connect to Couchbase and issue requests to perform actions on objects such as buckets, collections, custom SQL, documents, and scopes. The Couchbase Request node also has a corresponding new policy type, which helps Toolkit users define configuration properties for easy connection to the application. The policy also links to a Couchbase credential which is encrypted and stored in an ACE vault, enabling the runtime to connect to Couchbase safely and securely.

Support for IPv6 network addresses

App Connect Enterprise 13.0.2.0 introduces support for connecting to TCP/IP addresses which are expressed in Internet Protocol version 6 (IPv6) format, in addition to IPv4. IPv4 is still used by default. There are various alternative syntax options for expressing an IPv6 address. The template server.conf.yaml file provides examples for some of these options when expressing a ListenerAddress for the HTTPConnector, as shown below:

  HTTPConnector:
    #ListenerPort: 0              # Set non-zero to set a specific port, defaults to 7800
    #ListenerAddress: '0.0.0.0'      # Set the IP address for the listener to listen on all IPv4 addresses (Default)
    #ListenerAddress: '::'           # Set the IP address for the listener to listen on all IPv6 and all IPv4 addresses
    #ListenerAddress: 'ipv6:::'      # Set the IP address for the listener to listen on all IPv6 and all IPv4 addresses
    #ListenerAddress: '127.0.0.1'    # Set the IP address for the listener to listen on to the localhost IPv4 address
    #ListenerAddress: '::1'          # Set the IP address for the listener to listen on to the localhost IPv6 address
    #ListenerAddress: '2001:DB8::1'  # Set the IP address for the listener to listen on a specific IPv6 address
    #ListenerAddress: '192.168.0.1'  # Set the IP address for the listener to listen on a specific IPv4 address

There are various other locations where a user can specify IPv6 style addresses including direct properties of message flow nodes, BAR file overrides and properties in a policy file.

New SSL options for the ibmint deploy command

ACE 13.0.2.0 introduces some new SSL command options. When communicating with a remote integration node or server, the ibmint deploy command has been extended with new parameters that make it easier to use SSL. For example, if you always want to use SSL for the connection you can specify the --https parameter, and if you do not want to use SSL you can specify the --no-https parameter. If you specify neither of these options, the command will first attempt to connect using HTTPS, and then with HTTP if the first attempt fails.

There are lots of examples provided on the ACE documentation page for ibmint deploy, but for quick convenience a summary of some of the new options is provided below.

--output-uri URI

URI for a remote integration server in the form tcp://[user[:password]@]host:port or in the form ssl://[user[:password]@]host:port.

--https

Specifies that HTTPS will be used for the connection to the integration node or server. If neither --https nor --no-https is specified, the connection is tried first with HTTPS and then without using HTTPS if the first attempt fails. The --https parameter is valid only if the --output-host parameter is specified, or if the --output-uri parameter is specified with a URI that starts with ssl://.

--no-https

Specifies that HTTPS will not be used for the connection to the integration node or server. If neither --https nor --no-https is specified, the connection is tried first with HTTPS and then without using HTTPS if the first attempt fails. The --no-https parameter is valid only if the --output-host parameter is specified, or if the --output-uri parameter is specified with a URI that starts with ssl://.

--cacert cacertFile

Specifies the path to the certificate file (in either PEM, P12, or JKS format) to be used to verify the integration node or server. If no cacert file is specified and default admin-ssl is enabled, the cacert file defaults to the default pem file for admin-ssl.  The --cacert parameter is valid only if HTTPS is used for the connection, so it cannot be set together with the --no-https parameter. You can set --cacert when the --https parameter has been set or when neither the --https nor --no-https parameter has been set (in which case SSL is used by default).  The --cacert parameter can be set only if the --output-host parameter is specified, or if the --output-uri parameter is specified with a URI that starts with ssl://.

--cacert-password cacertPassword

The password for password-protected cacert files. The --cacert-password parameter is valid only if HTTPS is used for the connection and if the --cacert parameter has been set. You cannot set it together with the --no-https parameter.  The --cacert-password parameter can be set only if the --output-host parameter is specified, or if the --output-uri parameter is specified with a URI that starts with ssl://.

--insecure

Specifies that the certificate that is returned by the integration node or server will not be verified.  The --insecure parameter is valid only if HTTPS is used for the connection, so it cannot be set together with the --no-https parameter. You can set --insecure when the --https parameter has been set or when neither the --https nor --no-https parameter has been set (in which case SSL is used by default).  The --insecure parameter can be set only if the --output-host parameter is specified, or if the --output-uri parameter is specified with a URI that starts with ssl://.
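Bringing some of these options together, the following hedged examples show possible invocations; the BAR file name, host, port, user, password, and certificate paths are all placeholders:

ibmint deploy --input-bar-file MyFlows.bar --output-uri ssl://aceadmin:passw0rd@acehost:7600 --https --cacert /path/to/ca.pem --cacert-password myStorePass
ibmint deploy --input-bar-file MyFlows.bar --output-uri ssl://acehost:7600 --insecure

The first command forces an HTTPS connection and verifies the server against the supplied CA certificate file; the second uses HTTPS but skips verification of the certificate returned by the server.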

Securing OpenTelemetry messages with an authentication header and a new credential type

Since ACE 12.0.7.0 it has been possible to emit OpenTelemetry tracing information, as documented here. Subsequent mod releases have rounded out wider platform coverage, and in this latest 13.0.2.0 release we have also added the ability to carry a Basic Auth security identity in the header of the OpenTelemetry messages which are propagated to an external OpenTelemetry Collector. Accordingly, you will find two new optional properties in the OpenTelemetryManager stanza of the server.conf.yaml file:

The referenced security identity should be of the new opentelemetry credential type. Given the earlier topic in this blog about dynamic credentials, it's worth noting that this new credential type is also dynamic!  You can create such credentials from a command console session or from the Toolkit:
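As a hedged illustration of the command console route, such a credential might be created with the mqsicredentials command; the work directory, credential name, and values shown are placeholders:

mqsicredentials --create --work-dir /home/aceuser/ace-server --credential-type opentelemetry --credential-name myOtelCollector --username otel-user --password "********"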

Auto-complete for ibmint commands in a bash shell on Linux and UNIX

To make it easier to use ibmint commands in a Bash shell on Linux and UNIX platforms, ACE 13.0.2.0 provides a new auto-completion mechanism to help you build up valid ibmint commands. In an ACE installation, if you navigate to the directory <ACE Product Installation Directory>/server/bash_autocompletion you will find a shell script called ibmintcomplete.sh. If you are using a Bash shell, the ibmint auto-complete feature is enabled automatically when you run the mqsiprofile command.

Since the first release of ACE 13, it has also been possible to run a terminal session inside the Toolkit. You can launch a terminal session using the shortcut button on the top taskbar, or using the keyboard shortcut Ctrl+Alt+Shift+T. A small word of warning relating to this feature: in some circumstances, depending on exactly how the Eclipse Toolkit is launched, when you use a terminal session within the Toolkit it may be necessary to first source the mqsiprofile command and then apply ibmintcomplete.sh before auto-complete springs into action!
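In such a Toolkit terminal session, the two steps might look like this (the installation path shown is a hypothetical example):

source /opt/IBM/ace-13.0.2.0/server/bin/mqsiprofile
source /opt/IBM/ace-13.0.2.0/server/bash_autocompletion/ibmintcomplete.sh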



Comments

21 days ago

Hello, Ben!

Thanks for sharing such an important article with us!

Let me ask you a question: with the release of App Connect v13, will we also see the Healthcare version available in v13?