WebSphere Application Server

Centralized Logging with WAS Traditional

By Kirby Chin posted Fri January 03, 2020 09:42 AM

By Kirby Chin and Don Bourne


Centralized logging is important for any system administrator or developer who maintains multiple servers, such as WebSphere Application Server instances. Consolidating server logs relieves the challenging task of monitoring a large number of deployments. And since services often depend on each other, a centralized logging system helps quickly pinpoint which services are failing and determine the cause of problems occurring in other services.

JSON Logging

Being able to search your logs effectively is important. While tools such as grep can get you pretty far, they have limitations and may not handle more sophisticated queries. For instance, you may want to see only error- or warning-level log entries, logs from all servers with a particular message ID, log/trace entries from the same thread, or all trace entries from the same class on a server. Building up multiple regular expressions by hand to pull that information out of plain text quickly becomes a hassle. For this reason, it is important to shift focus from plain-text search to the field-based search supported by formats such as JSON.
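To illustrate why field-based search beats regex juggling, here is a minimal Python sketch that filters JSON log records by field value. The sample records are hypothetical; field names such as loglevel and ibm_messageId follow the conventions used in the WASdev logstash sample and may differ in your own output.

```python
import json

# Hypothetical HPEL-style JSON records, similar in shape to what
# `logViewer.sh -format json` emits; field names are assumptions.
sample_lines = [
    '{"ibm_datetime":"2020-01-03T09:42:00.000-0500","loglevel":"INFO","ibm_messageId":"WSVR0001I","message":"Server server1 open for e-business"}',
    '{"ibm_datetime":"2020-01-03T09:42:05.000-0500","loglevel":"WARNING","ibm_messageId":"SRVE0274W","message":"Error while adding servlet mapping"}',
    '{"ibm_datetime":"2020-01-03T09:42:07.000-0500","loglevel":"SEVERE","ibm_messageId":"SRVE0777E","message":"Uncaught exception in application class"}',
]

def filter_logs(lines, **fields):
    """Parse JSON log lines and keep records matching all given field values."""
    records = (json.loads(line) for line in lines)
    return [r for r in records if all(r.get(k) == v for k, v in fields.items())]

for record in filter_logs(sample_lines, loglevel="WARNING"):
    print(record["ibm_messageId"], record["message"])
```

The same filter_logs call works for any field combination (message ID, thread, class), which is exactly the kind of query that gets unwieldy with grep.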

WAS traditional has a basic logging facility which produces the SystemOut.log file. In addition, it has the High Performance Extensible Logging (HPEL) facility, which stores logs more efficiently in a binary format. LogViewer is a tool for viewing logs from the HPEL facility that provides a variety of output formats, including JSON.

 

Run LogViewer by following the steps below:

  • Navigate to your Application Server's bin folder.
    cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin
  • Start LogViewer.
    ./logViewer.sh -repositoryDir ../logs/server1 -monitor 1 -maxFileSize 1048576 -maxFiles 5 -resumable -resume -format json -outLog /server-logs/hpelOutput.log &
  • After running for a few seconds, ensure that the JSON file has been created properly at the outLog location.

The command above directs LogViewer to do a few important things:

  • Emit logs in JSON format. -format json
  • Write output to a rolling set of 5 files to limit the total space consumption and give time for log forwarders to ingest data before data is deleted. -maxFiles 5 -maxFileSize 1048576
  • Keep track of the position searched in any log file, so it knows where to resume when restarted.
    -resumable -resume
  • Decide where the JSON file is written. -outLog /server-logs/hpelOutput.log
  • Check for updates in the log/trace repository every second. -monitor 1 
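As a quick sanity check that the rolled files contain one well-formed JSON record per line, you can run a small script like the sketch below against the -outLog location. The /server-logs/hpelOutput*.log pattern assumes the -outLog path used above; adjust it to your own.

```python
import glob
import json

def check_json_log(path):
    """Count well-formed and malformed JSON lines in one log file."""
    ok = bad = 0
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                json.loads(line)
                ok += 1
            except json.JSONDecodeError:
                bad += 1
    return ok, bad

if __name__ == "__main__":
    # Matches the rolling set of files written for -outLog /server-logs/hpelOutput.log
    for path in sorted(glob.glob("/server-logs/hpelOutput*.log")):
        ok, bad = check_json_log(path)
        print(f"{path}: {ok} valid JSON lines, {bad} malformed")
```

A nonzero malformed count usually means something other than LogViewer is writing to the same file, or the output format flag was omitted.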

Elastic Stack

The Elastic stack is a set of open source tools used to build up a real-time centralized logging system. An important component of the stack is Elasticsearch, which provides a distributed search index. Servers configured with an Elasticsearch, Kibana, and Logstash instance can consolidate log resources from multiple servers/nodes. A log forwarder, such as Filebeat, is required on each endpoint to monitor and forward logs.

Sample configuration files are available to help you get started at: https://github.com/WASdev/sample.logstash.websphere-traditional

Set up centralized logging by following the steps below:

  • Start Elasticsearch, Kibana, and Logstash.
  • On your WAS deployment, start Filebeat, observing the JSON file generated by LogViewer.
    • Download the was_filebeat.yml file from the repository link above.
    • Add an entry under the paths heading to let Filebeat know where the JSON files are located. Since LogViewer rotates through up to 5 files (per -maxFiles), each with a timestamp appended to its name, an asterisk wildcard matches all of them:
      - /server-logs/hpelOutput*.log
  • Open Kibana. Click Management > Index Patterns > Create Index Pattern > Type logstash-* > Next > Under Time Filter field name, select ibm_datetime.
  • (Optional) Download the was-kibana.json file from the repository link above to view the dashboard and visualizations. Import it by navigating to Settings > Kibana > Saved Objects > Import.
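Once logs are flowing, you can also query Elasticsearch directly rather than only through Kibana. The sketch below builds and sends a query for the ten most recent WARNING-level records. It assumes Elasticsearch is listening on localhost:9200 and that, as in the sample Logstash configuration, records land in logstash-* indices with loglevel and ibm_datetime fields; treat those names as assumptions to verify against your setup.

```python
import json
from urllib import request

# Query body: the ten most recent WARNING-level records, newest first.
query = {
    "query": {"match": {"loglevel": "WARNING"}},
    "sort": [{"ibm_datetime": {"order": "desc"}}],
    "size": 10,
}

def search_warnings(es_url="http://localhost:9200"):
    """POST the query to the logstash-* indices and return the parsed response."""
    req = request.Request(
        es_url + "/logstash-*/_search",
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

The matching documents come back under hits.hits in the response, each with the original JSON record in its _source field.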

WAS Traditional on Docker Hub

WebSphere Application Server traditional comes in Docker images that are pre-configured to emit JSON logs to stdout using LogViewer. The image automatically starts an Application Server instance and a LogViewer process, with no extra setup required.

  • Run the image.
    docker run --name was-server -h was-server -p 9043:9043 -p 9443:9443 -d ibmcom/websphere-traditional:latest
  • View the JSON output generated by LogViewer.
    docker logs was-server
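The raw JSON lines can be hard to scan during debugging. A small filter script (a sketch; prettylogs.py is a hypothetical name, and the field names are assumptions based on LogViewer's JSON output) can be piped after docker logs, e.g. docker logs was-server | python3 prettylogs.py:

```python
# prettylogs.py (hypothetical name): print one readable line per JSON record.
import json
import sys

def format_record(line):
    """Render a JSON log line as 'timestamp level message'; pass other lines through."""
    try:
        r = json.loads(line)
    except json.JSONDecodeError:
        return line.rstrip()
    if not isinstance(r, dict):
        return line.rstrip()
    return f"{r.get('ibm_datetime', '-')} {r.get('loglevel', '-'):8} {r.get('message', '')}"

def main():
    for line in sys.stdin:
        if line.strip():
            print(format_record(line))

if __name__ == "__main__":
    main()
```

Non-JSON lines (for example, startup banners) are passed through unchanged, so nothing is lost from the stream.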

Viewing WAS Traditional Logs in Kibana

In the Discover tab, data can be viewed as events over time and zoomed to any time interval.

Kibana Discover Tab of Events Over Time


Log and trace information can be narrowed down as desired. For example, filtering by the devops.example.com hostname or the message field.

Kibana Discover Tab of Graph and Text Events Over Time


The provided sample dashboard helps visualize log and trace data with colour-coded events to distinguish different message IDs, trace levels, and log levels.

Dashboard displaying log and trace messages using WAS traditional-specific fields.


Similar to the Discover tab, any field can be filtered to be viewed in closer detail. For instance, filtering messages by the WARNING log level.

Dashboard displaying events queried by the WARNING log level.


Related Links

Learn more about LogViewer's capabilities here: https://www.ibm.com/support/knowledgecenter/en/SS7K4U_8.5.5/com.ibm.websphere.zseries.doc/ae/rtrb_logviewer.html
Learn how to enable HPEL mode for logging and tracing here: https://www.ibm.com/support/knowledgecenter/en/SSHR6W/com.ibm.websphere.wdt.doc/topics/thpel.html
Learn more about the Elastic Stack here:
https://www.elastic.co/products/
Learn more about WAS Traditional Docker images here:
https://hub.docker.com/r/ibmcom/websphere-traditional/

 
