App Connect

Explore the new features in App Connect Enterprise version 11.0.0.10

By Ben Thompson posted Wed August 26, 2020 10:30 AM

  
IBM App Connect Enterprise v11.0.0.10 is the tenth fix pack released for App Connect Enterprise software. We release fix packs regularly, approximately once per quarter, and intend to continue that cadence throughout the year. Fix packs provide regular maintenance for the product as well as new functional content. This blog summarizes the latest and greatest capabilities:

  • A new Welcome!
  • A new Administration Log in the Web UI and to file for audit purposes
  • Enhanced protection against excessive authentication requests
  • ODM message flow node dynamic responses to rule changes
  • ACE and IBM MQ Uniform Clusters
  • Support for using IBM MQ replicated data queue managers (RDQM) for ACE High Availability
  • Kafka - wait for message offset commit on KafkaConsumer
  • New Data Studio drivers for graphical data mapper DB2 database discovery
  • Dynamic HTTP URL Redirection

A new Welcome!

Although our product identity changes every now and again, it has been a while since we updated the main Welcome screen which is presented to new users when the App Connect Enterprise Toolkit is launched for the first time. So, with fix pack 10 we have provided an updated welcome screen which brings the Toolkit welcome in line with our latest branding on both our Cloud Pak for Integration platform and the public IBM Cloud. We hope you like it! As part of this change, if your Toolkit is launched on a machine which has an internet connection, the content of this welcome page is sourced online (if you're offline, we simply fall back to a local page shipped as part of the product installation). This way of working will enable us to alter the page much more easily in future (for example, to highlight new features).

A new Administration Log in the Web UI and to file for audit purposes

App Connect Enterprise v11 provides an incredibly rich administrative REST API which the product describes using an OpenAPI document. This is provided with the product installation (but for convenience we also place it on GitHub here). This makes it very straightforward to create client applications which control and configure App Connect Enterprise, in an array of different languages. To learn about the administrative REST API, you can use a web browser to navigate to the /apidocs URL where we serve the OpenAPI document - either from an independent integration server or an integration node.
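As a quick sketch of scripting against /apidocs, the snippet below (Python standard library only) builds the endpoint URL and shows how you might download the OpenAPI document. The host and port are illustrative assumptions - check the RestAdminListener settings in your own server.conf.yaml for the real values.

```python
# Sketch: fetch the OpenAPI document an integration server serves at /apidocs.
# The host and port below are assumptions for illustration - confirm them
# against the RestAdminListener section of your server.conf.yaml.
import json
import urllib.request

def apidocs_url(host: str, port: int) -> str:
    """Build the URL of the /apidocs endpoint for a given server."""
    return f"http://{host}:{port}/apidocs"

def fetch_apidocs(host: str, port: int) -> dict:
    """Download and parse the OpenAPI document describing the admin REST API."""
    with urllib.request.urlopen(apidocs_url(host, port)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Build (but do not fetch) the URL for a local server:
url = apidocs_url("localhost", 7600)
print(url)
```

Calling fetch_apidocs against a running server returns the parsed document, whose "paths" object lists every administrative operation the server exposes.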

As administrative changes are applied to ACE, you may wish to have an audit record of when particular changes were made, what happened, and who applied each configuration update. It has always been possible to piece together similar information from system logs, but fix pack 10 provides, for the first time, a new and dedicated administration log for this purpose. The Admin Log data includes details such as the timestamp, BIP message number, message text, and the user name and authorized role of the user who carried out each action. Users of the ACE web UI can browse these log entries, which are sourced from an in-memory store held by the product. If you wish, you can also choose to have ACE persist all entries to the file system. If you choose to write to file, you should carefully consider whether you have sufficient disk space available. More information to help with sizing is provided further down this article, but first let's consider the web browser view of the Admin Log:

The Admin Log tab is shown at the integration node level of the web UI (picture above), and there are also specific entries for particular integration servers that are shown at the integration server level:



Admin Logs are generated for administration activities against independent integration servers and against integration nodes (including the integration servers that they manage). The Admin Logs contain information such as the date and time of an action, the description and result of the action (success or failure), and the user name and authorized role of the user who initiated the action.

If you have chosen to persist the Admin Log to files, then by default each daily log file is stored for 30 days before it is deleted. Also by default, up to 10 files can be created for each of those 30 days (if there is sufficient activity to require it), with a maximum size of 100MB per file. This applies to each server (and if you're running node-owned servers, you should account for a log file for the integration node itself as well). You can specify an alternative retention period, number of files per day, and maximum log file size by setting the fileLogRetentionPeriod, fileLogCountDaily, and fileLogSize properties in the node.conf.yaml or server.conf.yaml file for the integration node or server. Given these details, you should carefully consider the disk requirements for the retention settings that match your situation. Here's an example of the configuration settings:

AdminLog:
  enabled: true               # Control logging admin log messages. Set to true or false, default is true.
  # When enabled the maximum amount of disk space required for admin log files is
  # fileLogRetentionPeriod * fileLogCountDaily * fileLogSize
  fileLog: false              # Control writing admin log messages to file. Set to true or false, default is false.
  fileLogRetentionPeriod: 30  # Sets the number of days to record admin log.
                              # After this, old files are deleted as new ones are created. Default is 30 days.
  fileLogCountDaily: 10       # Maximum number of admin log files to write per day, default is 10 per day.
  fileLogSize: 100            # Maximum size in MB for each admin log file. Maximum size is 2000MB, default size is 100MB.
  consoleLog: false           # Control writing admin log messages to standard out. Set to true or false, default is false.
  consoleLogFormat: 'text'    # Control the format of admin log messages written to standard out, default is text.
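The worst-case disk usage per server is simply the product of the three settings, as noted in the comment above. A quick sanity check of the defaults:

```python
# Worst-case Admin Log disk usage for one server, using the default
# settings shown above:
#   fileLogRetentionPeriod * fileLogCountDaily * fileLogSize
retention_days = 30   # fileLogRetentionPeriod
files_per_day = 10    # fileLogCountDaily
max_file_mb = 100     # fileLogSize (MB)

worst_case_mb = retention_days * files_per_day * max_file_mb
print(worst_case_mb)                   # worst case in MB
print(round(worst_case_mb / 1024, 1))  # roughly 29.3 GB per server
```

Remember that this figure is per server, so a node with several managed servers (plus the node's own log) multiplies accordingly.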

For more information, including all the details about where the logs are written, please check out the Knowledge Center.

Enhanced protection against excessive authentication requests

App Connect Enterprise optionally allows the configuration of administration security authentication (sometimes referred to as basic authentication). If this has been enabled, then client applications communicating with the ACE REST API must log in with an acceptable userid and password. The ACE administrative web UI uses a pop-up login screen for the purpose of providing these details:

The combination of userid and password is either checked locally by App Connect Enterprise against credentials which have been previously defined using the mqsiwebuseradmin command, or checked against a third-party LDAP server. Once authenticated, the logged-in user is authorised for particular permissible actions that are associated with their role. These security capabilities have been part of the product for a long time, but new in fix pack 10 is the ability to protect the product against excessive authentication requests after repeated failures. You can now specify a maximum number of failed login attempts that can be made within a specified time period before the client is locked out. There are three settings controlling this new behaviour, which are made in the server.conf.yaml file for an integration server, or node.conf.yaml for an integration node:

  • authMaxAttempts - the maximum number of login attempts that can be made during the specified period before the user is blocked from logging in (default is 5)
  • authAttemptsDuration - the time (in seconds) during which the maximum number of login attempts can be made before the user is blocked from logging in (default is 300)
  • authBlockedDuration - the time (in seconds) for which the client is blocked from logging in when the maximum number of login attempts has been reached without success (default is 300).
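In YAML these settings might look like the fragment below. The placement of the properties within the file is an assumption for illustration - confirm it against the commented template shipped in your own server.conf.yaml or node.conf.yaml.

```yaml
# Sketch only - check the commented template in your own server.conf.yaml
# or node.conf.yaml for the exact placement of these properties.
RestAdminListener:
  authMaxAttempts: 5         # failed login attempts allowed in the window (default 5)
  authAttemptsDuration: 300  # length of the window, in seconds (default 300)
  authBlockedDuration: 300   # how long the client stays blocked, in seconds (default 300)
```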
If you supply an incorrect userid / password combination, the web UI reports it as shown below:

If you continue to supply incorrect values and breach the limits described above, the block is reported by the web UI as shown below:

ACE and IBM MQ Uniform Clusters

Background:

The continuous delivery release of IBM MQ v9.1.2 introduced a new feature known as Uniform Clusters. Uniform clusters are a specific pattern of an IBM MQ cluster that provides a highly available and horizontally scaled small collection of queue managers. These queue managers are configured almost identically, so that an application can interact with them as a single group. This makes it easier to ensure each queue manager in the cluster is being used, by automatically ensuring application instances are spread evenly across the queue managers. In the uniform cluster, client connections are grouped together based on the application name. Applications that connect to any member of the uniform cluster using the same application name are considered to be equivalent to any other applications using the same application name. The objective of a uniform cluster deployment is that applications can be designed for scale and availability, and can connect to any of the queue managers within the uniform cluster. This removes any dependency on a specific queue manager, resulting in better availability and workload balancing of messaging traffic.

ACE and Uniform clusters:

App Connect Enterprise utilises MQ in many different ways: most obviously we have MQInput, MQOutput and MQGet message flow nodes which enable message flows to move messages to and from MQ queues, but there are also other message flow nodes which rely on MQ to store state. For this reason, although it may be tempting to think "ACE is just an example of an application using MQ, so ACE should support the use of MQ Uniform Clusters", there are important details preventing such a succinct statement from covering all potential scenarios. Consider these general points:

  1. ACE is used in some MQ request/reply scenarios, with a design pattern whereby reply messages must be returned to the same App Connect Enterprise integration server. For example, when ACE is invoked by an inbound HTTP request and then calls out to an endpoint application via MQ, the reply from the MQ application needs to be returned to the correct integration server in order to return information to the HTTP client connection. This kind of design pattern doesn't work well with an MQ connection that could be moved at any time, because there is no function to move in-flight messages to follow ACE to the new queue manager, or to ensure that any reply messages also go to the new queue manager. So for situations like this one, MQ Uniform Clusters currently remain an inappropriate choice, until further enhancements are made to the products.
  2. We also have the problem of stateful nodes (Aggregation, Sequence, Resequence, Collector and Timeout), which need to access previously stored messages and so will also fail to work if the MQ connection is moved. Whilst in theory it may become possible in future to instruct MQ to move the connection only when there are no live aggregations in flight (for example), for now we recommend not using a uniform cluster to provide queues for these message flow nodes.
  3. Last but not least, note that Uniform clusters are expected to have IBM MQ applications connecting as client applications, rather than locally bound applications. Locally bound applications are not prevented from connecting to uniform cluster members, but uniform clusters cannot achieve even workload distribution with locally bound applications, because they cannot connect to any other member of the cluster.

To try and help navigate when to use this relatively new technology, IBM MQ have also published a specific page all about Limitations and considerations for uniform clusters.

How to connect ACE to a Uniform cluster using MQ configuration:

So, having understood the exceptions above and noted that uniform clusters are not suitable for everyone, there are still some ACE use cases where uniform clusters can be helpful - specifically those related to one-way messaging, where it is absolutely supported to use queues defined on a Uniform Cluster. If you are in this situation, it is possible to configure ACE to utilise a connection to an MQ Uniform Cluster using configuration entirely outside the ACE product - by configuring a CCDT (Client Channel Definition Table), and using either the MQCLNTCF environment variable or configuration changes in the mqclient.ini file to control the MQ client code.
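As an illustration of this MQ-side approach, a minimal mqclient.ini pointing the MQ client at a CCDT might look like the fragment below. The directory and file name are assumptions for illustration; set the MQCLNTCF environment variable to the location of the ini file, and consult the IBM MQ client configuration file documentation for the full set of CHANNELS stanza attributes.

```ini
# Minimal sketch of an mqclient.ini locating a CCDT for the MQ client.
# The path and CCDT file name are illustrative assumptions.
CHANNELS:
   ChannelDefinitionDirectory=/var/mqm/ccdt
   ChannelDefinitionFile=AMQCLCHL.TAB
```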

How to connect ACE to a Uniform cluster using ACE configuration:

In fix pack 10 we have also made it even easier to apply these configurations through an ACE interface.

A new Reconnect option property is provided on the MQEndpoint policy, which enables you to control whether the IBM MQ client automatically attempts to reconnect to the queue manager if the connection is lost. When set to default, the IBM MQ client's own reconnection strategy applies, which can be modified using the IBM MQ client configuration described above. The Reconnect property is only used for client connections and is ignored for server connections. The property takes the following possible values:

  • default (the default) [which corresponds to the MQ option MQCNO_RECONNECT_AS_DEF]
  • enabled [which corresponds to the MQ option MQCNO_RECONNECT]
  • disabled [which corresponds to the MQ option MQCNO_RECONNECT_DISABLED]
  • queueManager [which corresponds to the MQ option MQCNO_RECONNECT_Q_MGR]

These values are passed to the IBM MQ client on the MQCONNX call, which is made, for example, by an MQInput node at the start of a message flow. The advantage of this configuration approach is that it becomes possible to use different reconnection settings for different deployments to the same integration server, by configuring each to utilise a different MQEndpoint policy (an improvement over the global effect of providing the reconnect setting through an MQ environment variable).

How does connection to a Uniform cluster affect flow behaviour?


Question: If a connection is moved when a message flow is idle (polling on MQGET without receiving a message), does the integration server notice?
Answer: Yes, the integration server reconnects to the new queue manager and then continues to poll.

Question: When messages are running through a non-transactional message flow, what happens if the connection is moved part of the way through the flow?
Answer: The flow will continue to operate. Messages which are sent before and after the rebalancing reconnect may be delivered to different queue managers. This could even be the case for separate message flow node instances in the same flow.

Question: For a transactional message flow, what happens if the connection is moved part of the way through the flow?
Answer: The transaction is rolled back, an exception is thrown from the next MQ call, and the backed-out message will still be on the original queue manager. Once the connection has been re-established, the backed-out message is available to be processed again somewhere (most likely by another instance of the application). If you wish, you can catch and ignore the exception mentioned, at which point your message flow may carry on and do more work on the re-established connection, committing only the work done after the connection was re-established!

What's next with Uniform Clustering?

Whilst we're very pleased to announce the changes above, which make ACE much easier to configure to use an MQ Uniform Cluster, we also recognise there could be further enhancements you would like us to consider to exploit uniform clusters further. For example, you might wish to have ACE configuration to set MQAPPLNAME, in order to distinguish different ACE applications from an MQ rebalancing perspective when you are using an ACE integration server as a multi-tenant process. Another potential future enhancement, for ACE applications that use multiple MQ connections against a uniform cluster, would be the ability to move those connections as a single unit. As usual, if you would like to see these ideas or others like them implemented, please get in touch through our Early Experience Program to discuss further.

Support for using IBM MQ replicated data queue managers (RDQM) for ACE High Availability

Fix pack 10 introduces formal documented support for using an App Connect Enterprise v11 integration node with an IBM MQ replicated data queue manager (RDQM), in order to provide a high availability (HA) solution.

The MQ RDQM configuration consists of three servers configured in a high availability (HA) group, each with an instance of the queue manager. One instance is the running queue manager, which synchronously replicates its data to the other two instances. If the server running this queue manager fails, another instance of the queue manager starts and has current data to operate with. The three instances of the queue manager share a floating IP address, so clients only need to be configured with a single IP address. Only one instance of the queue manager can run at any one time, even if the HA group becomes partitioned due to network problems. More information on MQ RDQM is available here.

This page in the ACE Knowledge Center describes the detailed configuration steps you should go through in order to place the ACE files in the correct location on the filesystem so that MQ replicates the data successfully on ACE's behalf.

ODM message flow node dynamic responses to rule changes

Fix pack 10 marks the third fix pack in a row with enhancements for executing ODM Business Rules locally within the integration server. As you may recall, you can use ODMRules or JavaCompute nodes in a message flow to execute a set of business rules. These nodes use an ODMServer policy to configure access to the rulesets that are to be run; either to the IBM Operational Decision Manager Rule Execution Server that hosts the rulesets or to a locally-stored ruleset archive that has been downloaded previously from the ODM Rule Execution Server.

The ODMServer policy has been enhanced in fix pack 10 with new options which control the way in which ACE applies updates to refresh its cache of rulesets, which can now be achieved dynamically without needing to restart deployed applications.

  • Ruleset load strategy: This property specifies when a ruleset is to be compiled for the first time, and can be set to one of the following values:
    onFirstMessage (default): The referenced rulesets are loaded when the first message tries to execute them, which can delay the first message.
    onFlowStart: The referenced rulesets are loaded when the flow starts, which can delay the startup of the flows.
  • Ruleset refresh mode: This property specifies the method to be used for updating rulesets, and can be set to one of the following values:
    manual (default): In manual ruleset refresh mode, rulesets are not updated automatically, but you can update them manually by using the administration REST API.
    polling: In polling ruleset refresh mode, the rule execution server (RES) is polled for updates at the frequency that is specified by the Ruleset refresh interval property.
  • Ruleset refresh interval: This property specifies how often (in seconds) the rule execution server (RES) is polled for updates to rulesets. This property can be set to any positive integer value, and the default is 600 seconds (10 minutes).
  • Ruleset refresh update strategy: This property specifies how a ruleset update is applied, and can be set to one of the following values:
    immediately (default): If this property is set to immediately, flows are paused immediately when a ruleset is updated, to wait for the new or updated ruleset to be compiled. Updates are then applied in sync across all ODM runtime engines.
    whenReady: If this property is set to whenReady, updated flows continue to run with the old version of the ruleset until the updated ruleset is compiled. If a ruleset is associated with multiple ODM runtime engines, which each have to compile the updated ruleset, the update might be applied to different flows at slightly different times.

If you choose to refresh rulesets manually, you can do so by making requests using the ACE admin REST API. A new method named refresh-ruleset-cache is provided for this purpose, which you can see below in our OpenAPI documentation, from where you can also try it out!



The REST call starts a ruleset refresh in the background, and a message confirms that the update request has been started. The ruleset update is therefore asynchronous from the REST call. After the request has been made, messages are written to the syslog when the rulesets are updated.

Kafka - wait for message offset commit on KafkaConsumer

Fix pack 10 adds a new option to the KafkaConsumer node to wait for the message offset commit. If this property is selected, the KafkaConsumer node waits for the message offset to be committed to Kafka before delivering the message to the message flow, which provides an at-most-once level of message reliability. If this option is not selected, the KafkaConsumer node does not wait for the response from the commit operation before delivering the message to the flow. As a result, message throughput is increased in exchange for a reduction in message reliability, as messages can be redelivered to the message flow if the request to commit the consumer offset subsequently fails.
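The trade-off can be illustrated with a toy simulation (plain Python, no Kafka client involved): if the offset is committed before delivery, a crash after the commit can lose the in-flight message (at-most-once); if delivery happens before the commit, a crash before the commit causes that message to be redelivered after restart (at-least-once).

```python
# Toy simulation of the two commit orderings: which messages get processed
# if the consumer crashes at a given offset, and where a restarted consumer
# would resume from (the last committed offset).

def run(messages, commit_first: bool, crash_after_msg: int):
    """Process messages until a crash; return (processed, restart_offset)."""
    processed = []
    committed = 0  # next offset a restarted consumer would read from
    for offset, msg in enumerate(messages):
        if commit_first:
            committed = offset + 1   # commit the offset before delivery
            if offset == crash_after_msg:
                break                # crash after commit, before delivery
            processed.append(msg)
        else:
            processed.append(msg)    # deliver to the flow first
            if offset == crash_after_msg:
                break                # crash after delivery, before commit
            committed = offset + 1
    return processed, committed

msgs = ["m0", "m1", "m2"]
# Commit-first (at-most-once): the crashed-on message is never processed,
# and a restart skips past it.
print(run(msgs, commit_first=True, crash_after_msg=1))
# Deliver-first (at-least-once): the crashed-on message was processed, but
# a restart re-reads it, so it can be processed twice.
print(run(msgs, commit_first=False, crash_after_msg=1))
```

This is only a sketch of the ordering semantics, not of the node's actual implementation.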

New Data Studio drivers for graphical data mapper DB2 database discovery

With ACE v11.0.0.10, following an upgrade of the embedded Data Studio drivers (which are part of the ACE Toolkit) to v4.13, it is now possible to connect your Toolkit to a DB2 v11 database in order to discover metadata about the database. This enables you to use information about the database as part of a graphical data map. We're pleased to announce this small but important enhancement, which improves the compatibility and usability of the product.

Dynamic override for HTTP URL redirection

Fix pack 10 also includes a small enhancement that lets a message flow dynamically override whether HTTPRequest nodes should follow HTTP(S) redirection, using a Local Environment override. Prior to this feature, the 'Follow HTTP(S) redirection' option could only be enabled or disabled through configuration of the message flow node's properties:



Now, the Local Environment override can be activated using ESQL code (for example) like this:

SET OutputLocalEnvironment.Destination.HTTP.Request.FollowRedirection = TRUE;


