
IBM Integration Bus Advanced Logging Configuration

Wed July 01, 2020 10:10 AM

Logging is one of the essential components of system monitoring and diagnosis: it is the practice of recording a series of events so that you can later determine what steps a program took. It is therefore important to configure logging in a way that helps you monitor application processing.

The Syslog and Event Viewer

Linux/Unix and Windows both have logging abilities by default, although they use different terms and programs. The syslog is the Linux/Unix term for the main log generated by the operating system; in Windows it is called the Event Viewer. The Windows Event Viewer is enabled by default, whereas the syslog on Linux/Unix is not.

The syslog is a plain text file with each event taking a single line (normally). The Event Viewer is an application with a logging hierarchy (System->Application->Event).

Here is a sample of the syslog:

Syslog entries showing IIB starting

Here is a sample of the Event Viewer:

Windows Event Viewer showing IIB has stopped

To enable the syslog, follow the basic syslog configuration available here:
Configuring the syslog daemon

However, this just outputs the messages; it does not cover dedicated logging or, more often needed, rotating and archiving the logs when they get too big.

All syslog configuration is outside the scope of IBM Integration Bus (IIB), as IIB just uses your OS's logging hooks. You must therefore configure your syslog at the OS level. Here are some free options you have.

Linux/Unix

syslog-ng

This allows fairly complex configuration of the messages. You need to do some extra work to configure it, but the results tend to be great.

# Get a hook into the raw syslog data stream
source src { internal(); unix-dgram("/dev/log"); };

# Set up a filter to extract only the desired messages
filter f_iiblog { match("IBM Integration Bus") and match("BIP"); };

# Define where you want to log messages
destination iiblog { file("/home/mqm/iiblog/iib.log" owner(mqm) group(mqbrkrs) perm(0644)); };

# Put it all together to get the output
log { source(src); filter(f_iiblog); destination(iiblog); };

rsyslog

This program is very complex with a ton of functionality. It can probably do more than you ever want with your syslog. Order matters in this configuration, so be careful how you order your statements. For IIB, you would need something like:

# Set the permissions on the file to be created
$fileCreateMode 0644

# Write the IIB messages to a log
:msg, contains, "IBM Integration Bus" /var/log/iib.log

# Remove all IIB messages from subsequent rules later in the config file
:msg, contains, "IBM Integration Bus" ~

# Write all non-IIB messages to a different log
*.* /var/log/syslog

This program is great for complex logging. For example, you can have it send the log entries over an encrypted channel to another machine if you desire, and it even has some built-in log rotation functions.
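As an illustration of the remote option, here is a minimal sketch of forwarding the IIB entries to another machine over TLS using rsyslog's legacy directives. The hostname, port, and certificate path are assumptions, and the certificates themselves must already be in place:

```
# Use the TLS network stream driver (certificate path is an assumption)
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/ca.pem
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer logs.example.com

# Forward the IIB messages to the remote collector (@@ means TCP)
:msg, contains, "IBM Integration Bus" @@logs.example.com:6514
```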

Windows

Event Viewer

The provided event viewer can be configured to generate better results.

Open the event viewer. In the left hand pane, expand Windows Logs and select the Application log. In the right hand pane (not the file menu) is the Actions section.

The first action you will likely want to take is to increase the log size limit and/or enable archiving and rotation. To do this, choose the Properties action. In the properties, you should see the default log size of 20MB. Increase that size as needed. Below it are the options for archiving/rotation.

Once your log can hold more than a few events, create a custom view to extract the desired messages. Use the Create Custom View action. This will act more like a permanent filter that will appear in the left hand pane under Custom Views. To log all IIB messages, select Logged = Anytime, Event level = All, select the “By source” radio button, and in the drop down list, select the IIB version and fix pack. Here you will see there is an entry for each installed fix pack level. Thus if you install a new fix pack, you will need to update your view later.

You can add more filters if desired, such as user id or specific event messages.

Once you have a custom view, you have a couple more options that may be of use. Open your custom view by selecting it in the left pane. In the right pane are two actions you can set. You can attach a task to specific events or to the entire view. You could, for instance, have it send an email whenever an integration node (broker) or integration server (EG) starts so that you can monitor for unexpected abends. Or you could start a script to take an action on a specific event (restart the integration node automatically, clear disk space, etc.).

Splitting / Filtering

Splitting is taking a single input and dividing it into several outputs. Filtering is the removal of data from an input so that only the desired data is left in the output. The examples above are simple examples of both. You filtered out only the IIB records and then split IIB from the rest of the data.

Linux

Syslog-ng and rsyslog both allow for splitting and filtering as demonstrated before. Using the same approach, you can extend the sample to allow greater granularity in your logs. For instance, you could have a log for each integration node or each integration server. All you need to know is the command and the syntax used by the input. The inputs are in this format, where {} denotes a variable:

IBM Integration Bus v{####}[{#*, optional}]: ({integration node name}.{integration server name, if applicable})[{Thread}] (Msg {#}/{#}{optional})BIP####{I, W, E}: {message}

Examples:
IBM Integration Bus v9001[4718858]: (NODE)[1]BIP8873I: Starting the component verification for component 'SERVER'.
IBM Integration Bus v9003 (NODE.SERVER) [Thread 5743] (Msg 1/6) BIP2628E: Exception condition detected on input node 'MyFlow.MQInput Node'.

To filter by integration server, you would search for the integration server's name in the entry:

syslog-ng:

filter f_serverlog { match("IBM Integration Bus") and match("SERVER"); };

rsyslog:

:msg, contains, "SERVER" /var/log/server.log

Windows

Windows doesn't have a true splitting feature. All entries are routed to the same location, and filters are applied on top. By saving a filter as a custom view, you get a pseudo-split, as each view is filtered separately, but the filter reruns on each invocation. To build one, it helps to select an event in the viewer first, then select the 'Details' tab followed by the 'XML view'. Next, create a filter as described above, stopping before clicking the 'OK' button. Once ready, switch to the 'XML' tab. This will be populated with the details of the 'Filter' tab. At the bottom is a check box to 'Edit query manually'. Check this and accept the warning if it pops up. If you have the event pulled up in the background, it should look something like this:

Side-by-side view of event details and query editor

Your event may have more information in the ‘EventData’ depending on the message output. The only field that will be there for sure is the integration node name.

To complete the filter, you need to extract the integration node name, called 'node_w_q' in this image. In the query, the 'Select' element is an XPath 1.0 statement, which has some limitations but is usable for our needs. Before the end tag of the 'Select' element, add the additional filter options. The filter to add looks like:

and *[EventData[Data and (Data='node_w_q')]]

The final query looks like this:

<QueryList>
   <Query Id="0" Path="Application">
     <Select Path="Application">*[System[Provider[@Name='IBM Integration v10006']]] and *[EventData[Data and (Data='node_w_q')]]</Select>
   </Query>
</QueryList>

Here it is important to note that the event is generated using both the integration node and integration server name in one field if applicable. Thus the above query would only pull out messages for the integration node, but not the integration servers within that integration node. To get the integration servers, you would need a different filter as there is no wildcard support nor are string functions (such as contains) supported:

<QueryList>
  <Query Id="0" Path="Application">
    <Select Path="Application">*[System[Provider[@Name='IBM Integration v10006']]] and (*[EventData[Data and (Data='node_w_q.default')]] or *[EventData[Data and (Data='node_w_q')]]) </Select>
  </Query>
</QueryList>

In this example, the integration server is called ‘default’. You would need to update the query every time you add a new integration server to your integration node.

More information:
Consuming Events

Log Rotation

Log rotation and archiving are common topics brought up by admins as a way to manage disk space. IIB will generate several logs beyond the above syslog. Common ones are:

  • Integration node stdout and stderr (console.txt on Windows)
  • Integration server stdout and stderr (console.txt on Windows)
  • httplistener stdout and stderr (console.txt on Windows)

There are no limits to the file size and some traces (class loader trace, JSSE trace) can write to the logs as well. If you don’t turn off the traces or handle the errors, your logs can fill quickly.

Linux/Unix

Built-in to the OS you should find the logrotate command: https://linux.die.net/man/8/logrotate

This command does rotation and archiving in one step (if desired). A sample rotation configuration would look like:

/var/mqsi/components/<integration node>/<integration server>/stderr {
    size 100m
    compress
    missingok
    create 644 mqbrkrs mqbrkrs
}

This rotates the log when it reaches 100MB (size 100m), compresses the rotated copy (compress), creates the new log with permissions 644 and mqbrkrs owner/group (create), and does not throw an error if the log is currently missing (missingok).

The command has other functionality you can use: for example, keeping only a set number of archives, rotating based on time instead of size, emailing archives, or moving the archives to a different directory.
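As a sketch of those options combined (the schedule, archive count, directory, and email address are all assumptions to adjust to taste):

```
/var/mqsi/components/<integration node>/<integration server>/stderr {
    weekly                        # rotate on a time basis instead of size
    rotate 8                      # keep only the 8 most recent archives
    compress
    missingok
    olddir /var/log/iib-archive   # move archives to a different directory
    mail admin@example.com        # mail a log when it rotates out of existence
    create 644 mqbrkrs mqbrkrs
}
```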

Windows

Event Viewer

The event viewer by default overwrites itself when the log reaches 20MB. You can change this limit as per above. In the same location you can enable the archiving.

Generating Custom Log Entries

The above information is useful in setting up your logs and will show the entries generated automatically by IIB such as unhandled errors and configuration changes. But this is not always enough, especially when debugging or developing a flow. To overcome this, you need to generate dynamic user messages. IIB provides several ways to produce your own log entries:

  • Trace node
  • Throw node
  • THROW command in ESQL & Throw in Java
  • DECLARE HANDLER command in ESQL

By using these in the flow, you can log a vast amount of data. These can help with issues encountered in flows and can also be used to return useful errors to clients.

Trace Node

The trace node is the best logging tool available to you. These nodes do not affect your message trees, they are dynamic, and you can turn all of them on/off from the command line, so that they are only enabled when you need to investigate an issue. You can have them write to traces, files, or the syslog. For instance, the following pattern outputs all the message trees and uniquely identifies the exact integration node, integration server, application, flow, and node that wrote the message:

============================== S T A R T   O F   T R A C E =============================
                  Time:     ${CURRENT_TIMESTAMP}
   IntegrationNodeName:     ${BrokerName}  
      QueueManagerName:     ${QueueManagerName}
IntegrationNodeVersion:     ${BrokerVersion}  
              Platform:     ${Family}
             ProcessID:     ${ProcessId} 
 IntegrationNodeUserId:     ${BrokerUserId}
IntegrationServerLabel:     ${ExecutionGroupLabel}
      ApplicationLabel:     ${ApplicationLabel}
      MessageFlowLabel:     ${MessageFlowLabel}
             NodeLabel:     ${NodeLabel}
==============================     E X C E P T I O N S     =============================
${ExceptionList}     
========================   L O C A L    E N V I R O N M E N T   ========================
${LocalEnvironment}      
==============================    E N V I R O N M E N T    =============================
${Environment}    
==============================          R O O T         ================================
${Root}
==============================   E N D   O F   T R A C E   =============================

You can find a list of all properties that are available here:
Integration node properties that are accessible from ESQL, the Mapping node, and Java

Placing a trace node on every failure terminal and catch terminal ensures that all exceptions are logged, along with where in the flow the issue occurred and what message was being processed. Use the node between nodes in the flow to confirm that the data leaving the prior node and entering the next node is well formed and contains the expected data.

Trace nodes can be enabled or disabled by using this command:
mqsichangetrace
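As a sketch (the integration node name MYNODE and integration server name default are assumptions), enabling and later disabling the trace nodes for an integration server might look like:

```
# Turn trace nodes on for the flows in integration server 'default'
mqsichangetrace MYNODE -n on -e default

# Turn them off again once the investigation is complete
mqsichangetrace MYNODE -n off -e default
```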

Caveat: When you select the user trace or local system log as the output destination, you may find that data is truncated. For instance, the user trace usually only logs 16KB of data per entry, and on Linux/Unix a syslog entry is usually only one line long. The direct file write option is best if you have a lot of data per entry to log.

More information:
Trace node

Throw Node

Throw nodes are used primarily to throw exceptions, which cause a rollback and failure or catch processing to be executed. In addition, when the error is thrown, you can choose the error number and error message. This can be used to log a more user-friendly error to the log and/or return it to the client.

Throw nodes work best if preceded by a trace node. If you connect a trace node to each failure terminal, then all exceptions will be caught; however, if there are no subsequent nodes, processing will end and the transaction will commit. Use a throw node after the trace to invoke your failure or catch terminal processing. For instance, this simple flow receives a SOAP input, makes a SOAP request, and then replies. With thorough logging added it looks more complicated, but the additions help identify every problem that could occur and automatically inform the correct team members to investigate, since the ideal flow would have no errors:

SOAP flow with an abundance of error handling

The four failures that are re-thrown at the top roll back to the SOAP Input node where the catch processing is then invoked.

More information:
Throw node

THROW in ESQL and Throw in Java

These commands allow you to throw dynamic errors and record log entries at the exact moment they occur within your code, rather than letting a faulty message pass to other nodes. They work much in the same manner as the Throw node, but are more dynamic: the Throw node always outputs the same static message you configure, whereas the throw statements let you change that message as needed. You can thus personalize the errors or include additional debugging information not available to other nodes, for example database query results that affected the outcome.
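As a minimal ESQL sketch (the message text, field names, and inserts are assumptions for illustration), a dynamic throw built from run-time data might look like:

```
-- Throw a user exception whose inserts are computed at run time
-- (the text and field names here are illustrative, not from the article)
THROW USER EXCEPTION MESSAGE 2951 VALUES
    ('Backend lookup failed', InputRoot.XMLNSC.Order.OrderId, SQLSTATE, SQLERRORTEXT);
```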

More information:
THROW statement
JavaCompute node exception handling

DECLARE HANDLER in ESQL

This command is a bit different from those we've discussed thus far. The above all either write log entries and/or throw errors; this command does neither. Instead, it is used as part of the error-handling process. The throw commands stop the current processing and cause a rollback when executed. However, you may not want to stop processing, but only perform the logging, or roll back only so far. For example, if you alter the above flow, you could remove the SOAPRequest fault and failure processing. To handle those errors, you could instead add a DECLARE HANDLER in the Transform node. The handler code could alter the SOAPRequest configurable properties to point to a different backend, then reissue the request to the new target. If that failed, it could then use the THROW statement to throw a different error, such as "Attempted all backend servers and did not get a successful response. See the following errors from each server attempted". This makes the flow more robust while simplifying the logging process.
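A minimal sketch of such a handler in ESQL (the SQLSTATE pattern and the environment field name are assumptions):

```
-- Catch database errors (SQLSTATE beginning with 'D'), record them, and continue
DECLARE CONTINUE HANDLER FOR SQLSTATE LIKE 'D%'
BEGIN
    -- Record the failure for later logging instead of rolling back
    SET Environment.Variables.LastDbError = SQLERRORTEXT;
END;
```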

Additional information:
DECLARE HANDLER statement

Log4j

Log4j is a set of Java libraries from Apache used for logging. As it is Java code, it can be integrated into IIB. Not only can you use the libraries in a JavaCompute node, but there is also a SupportPac that adds a logging node built from the Log4j libraries. The downside is that it writes to its own log file, not to a syslog or trace, unlike the throw statements.

The best use of Log4j is inline logging. While primarily used from Java, it can also be called from ESQL if you create a simple procedure that calls a Java method which does the actual logging.
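A minimal sketch of that wrapper, assuming a hypothetical class com.example.Log4jBridge with a static void log(String) method that calls the Log4j API:

```
-- Bind an ESQL procedure to the (hypothetical) static Java logging method
CREATE PROCEDURE log4jInfo(IN msg CHARACTER)
    LANGUAGE JAVA
    EXTERNAL NAME "com.example.Log4jBridge.log";

-- Then call it anywhere in your ESQL, for example:
-- CALL log4jInfo('Order ' || CAST(orderId AS CHARACTER) || ' processed');
```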

The Log4j node is less sophisticated than the built-in trace node and is limited in where it can pull data from. For example, in the trace node sample above, you can extract information about the integration node, integration server, application, flow, and node; you can also access multiple trees simultaneously and write custom text. Log4j cannot do that: it can output a single message tree (or part of one), or a line of custom text.

More information:
Apache Log4j 1.2
IAM3: WebSphere Message Broker – Node for log4j
Configuring the Log4j node in WebSphere Message Broker

I hope this helps extend the logging functionality in IIB.

2 comments on"IBM Integration Bus Advanced Logging Configuration"

  1. Lucas Pereira January 21, 2019

    Is there an integration of Logback with Integration Bus?

    • Matthew SeGall January 25, 2019

      There is no built-in integration at this time, but it can still be used with IIB in the same way Log4j can be: add the JARs to IIB shared-classes and call the API as needed.


#IIB
#monitoring
#logging

Comments

Fri August 06, 2021 05:09 PM

Thank you very much for your answer!

Fri August 06, 2021 04:33 AM

Hello Francisco, Matthew's reply to you.....

There is very rarely an impact when using trace nodes. The trace node outputs, typically, parts of the message tree. If the message tree is already in memory, then no CPU or memory is needed by the node. If however the message tree is not currently in memory, like right after an input or request node, then the Trace node can parse the message in its entirety. This parsing may use CPU and memory and can also cause parsing exceptions. If your flow does not perform a full parse, then this Trace node action could result in additional CPU or memory usage that the flow does not see, or could result in parsing exceptions not seen.

Yes, IIB did allow you to enable/disable the trace nodes on a per flow basis. ACE removed this so that it can only be performed on the integration server level. This was done so the property could reside in the server.conf.yaml. When running ACE in containers, changing the integration server trace state is more or less the same as setting it at the flow level as you should only have one application per container. This problem is mostly noted on on-prem installations. We do not currently have plans to change this.

Regards, Matthew

Thu July 29, 2021 01:21 PM

Hello Mr. SeGall,

After reading your interesting article, I have two questions about the Trace Node:

Is there any consideration about the behaviour of the Trace Nodes in IBM App Connect Enterprise? (memory, cpu, etc.)

In IBM App Connect Enterprise, the mqsichangetrace -n command option enables the trace nodes of all the flows that are deployed in the Integration Server, but in IIB there was the -k command option to enable trace nodes by application criteria, is there any way to do the same with IBM ACE?

Thanks in advance.