Overview
The IBM® Cloud Pak® System Log Management feature allows you to export Cloud Pak System log files to log manager software.
These logs are sent by group, not individually by file or by selection. You cannot choose to send or receive a specific log.
In this article, I explain how to split IBM Cloud Pak System logs so you can use and store only the logs you need – for instance, the secure log – as shown in this picture:
How to configure IBM Cloud Pak System Log Management
The simplest way to configure Log Management on IBM Cloud Pak System is to use the graphical console. For this article, I used one IBM Cloud Pak System machine, version 2.3.3.0.
Step 1 – Access the Log Management feature
The Log Management feature is accessible from the System menu, under the System Settings entry.
On the System Settings page, expand the Log Management entry.
The Log Management settings are now available:
Step 2 – Log Management settings
There is one optional setting and two mandatory settings to configure.
The optional setting is the Maximum number of days to retain log files check box. Use it to set the number of days logs are kept on your IBM Cloud Pak System before the system removes them. If enabled, the default value is 90.
As Destination address, set the IP address (or FQDN) of the server that receives the log files.
Then select the log file groups you want to collect.
On IBM Cloud Pak System 2.3.3.0, the IBM Cloud Pak System log files check box enables:
- "/var/log/purescale/*.log" files
- "/var/log/purescale/*/*.log" files
- "/var/log/purescale/*/*.txt" files
- "/var/log/purescale/*/*/*.log" files
- "/var/log/purescale/*/*/*/*.log" files
- "/var/log/purescale/*/*_log" files
- "/var/log/purescale/db2/db2eventlog.*" files
- "/var/log/purescale/db2/db2inst/db2eventlog.*" files
- "/var/log/purescale/db2/db2inst/db2inst.*.nfy" files
- "/var/log/purescale/db2/db2ipas.*.nfy" files
- "/var/log/purescale/ipas.async/VGENTask" file
- "/var/log/purescale/ipas.events/CleanOrphanEvents.*.xml" files
- "/var/log/purescale/nmon/nmonlog.nmon" file
- "/var/log/purescale/nmon/psm-capture-data.out" file
- "/var/log/purescale/purescale.log.prekickstart.*" files
- "/var/log/purescale/tsa/*-failed" files
- "/var/log/purescale/tsa/*-started" files
- "/var/log/purescale/tsa/*-stopped" files
On IBM Cloud Pak System 2.3.3.0, the Security and audit log files check box enables:
- "/drouter/ramdisk2/mnt/raid-volume/raid0/logs/access.log" file
- "/drouter/ramdisk2/mnt/raid-volume/raid0/logs/audit/lic-audit.*.log" files
- "/var/log/audit/audit.log" file
- "/var/log/purescale/*/access.log" files
- "/var/log/purescale/audit" file
- "/var/log/secure" file
On IBM Cloud Pak System 2.3.3.0, the System log files check box enables:
- "/var/log/boot.log" file
- "/var/log/cron" file
- "/var/log/maillog" file
- "/var/log/messages" file
- "/var/log/secure" file
- "/var/log/spooler" file
On IBM Cloud Pak System 2.3.3.0, the Workload Manager log files check box enables:
- "/drouter/ramdisk2/mnt/raid-volume/raid0/dumps/verbosegc.*.txt.*" files
- "/drouter/ramdisk2/mnt/raid-volume/raid0/logs/*.log" files
- "/drouter/ramdisk2/mnt/raid-volume/raid0/logs/*/*.csv" files
- "/drouter/ramdisk2/mnt/raid-volume/raid0/logs/*/*.log" files
- "/drouter/ramdisk2/mnt/raid-volume/raid0/usr/servers/*/*/*.log.*" files
⇒ So, for each log file group, IBM Cloud Pak System sends a set of log files.
Unless you want to keep all these files as one log, you need to split them before using them.
Step 3 – Configure Log Management settings
For this article, I enable the Security and audit log files check box, so I receive three kinds of log files (access, audit, and secure):
⇒ The configuration is done on the IBM Cloud Pak System side.
How to split IBM Cloud Pak System log files
The IBM Cloud Pak System Log Management feature uses the syslog protocol (https://tools.ietf.org/html/rfc5424). This means logs are sent on port 514 over TCP (not UDP, the default transport for syslog).
Each log record contains:
- The record header
- The log pathname
- The log message based on syslog format
The header contains:
- The application name: “IPAS”, which stands for “IBM PureApplication System” (the previous name of IBM Cloud Pak System)
- The application ID between “[ ]”.
- The message timestamp: when the message is sent
- The host IP. This IP can be the CPS floating IP, the CPS primary IP (used by the PSM leader), or the CPS secondary IP (used by the PSM non-leader).
So, to split log files, I analyze the header and the log pathname.
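For illustration, here is what such a record can look like (the values are hypothetical, but the layout – header, log pathname, then log message – is the one the Grok pattern later in this article expects):
IPAS[-]: 2021-01-15T10:23:45.123456Z 10.0.0.1 /var/log/secure Jan 15 10:23:45 cpshost sshd[2345]: Accepted password for admin from 192.168.1.10 port 52314 ssh2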
For that, I use Logstash (https://www.elastic.co/logstash/), a free and open data processing pipeline that ingests data in different formats – including syslog – transforms it, and then writes it the way you want.
Step 4 – Logstash installation
Download Logstash from https://www.elastic.co/downloads/logstash. Logstash installation is straightforward. For this article, I used Logstash version 7.10.1, with the RPM installation file.
To install it, just run:
rpm --install <filename>.rpm
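For example, with version 7.10.1 (assuming the standard file name of the RPM package downloaded from the Elastic site):
rpm --install logstash-7.10.1-x86_64.rpm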
Like on this sample:
⇒ Logstash is installed and ready to be configured
Step 5 – Logstash configuration
Startup configuration file
The startup configuration file is located at /etc/logstash/startup.options.
First, we change the default user and group ID – logstash – used at startup to root:
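In that file, this amounts to editing two variables (excerpt only; LS_USER and LS_GROUP are part of the standard Logstash startup options):
LS_USER=root
LS_GROUP=root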
Indeed, IBM Cloud Pak System sends its syslog flow to port 514 (the default syslog port). As port 514 is a privileged port (below 1024), only the root user can bind to it.
After that, I run the installation script /usr/share/logstash/bin/system-install again.
Note: If you don’t want to use the root user ID, see other approaches at https://discuss.elastic.co/t/logstash-bind-to-port-514/44022/7 and https://dev.to/bidhanahdib/binding-privileged-port-514-to-logstash-7-10-0-4og2.
Logstash configuration file
By default, Logstash configuration files are located in /etc/logstash/conf.d/ and are suffixed with “.conf”.
A Logstash configuration file contains the stages of Logstash processing pipeline:
- The Input section enables a specific source of events to be read by Logstash.
- The Filter section performs processing on an event.
- The Output section sends event data to a particular destination.
Each section uses one or several plugins. The matrix of available plugins is defined here: https://www.elastic.co/support/matrix#matrix_logstash_plugins.
Let's create the split_cps_files.conf configuration file with a text editor:
vi /etc/logstash/conf.d/split_cps_files.conf
Input Section
This section listens for the incoming flow:
# Input section: listen for the IBM Cloud Pak System syslog flow on port 514
input {
  syslog {
    port => 514
  }
}
As this flow is based on the syslog protocol, I simply use the Syslog input plugin with the port setting.
Filter section
This section analyzes each record received:
# Filter section: analyze each message received
filter {
  mutate {
    remove_tag => ["cps_secure"]
  }
  grok {
    match => { "message" => "(IPAS\[-\]\:)%{SPACE}%{TIMESTAMP_ISO8601:cps_logmsg_timestamp}%{SPACE}%{IPV4:cps_logmsg_ip}%{SPACE}(/var/log/secure)%{SPACE}%{GREEDYDATA:cps_logmsg_secure}" }
    add_tag => ["cps_secure"]
  }
}
For each record, I look at the log pathname to decide whether to capture it. For this article, I only want to capture records from the /var/log/secure log file.
To perform the analysis, I use the Grok filter plugin with regular expressions based on the Oniguruma library (https://github.com/kkos/oniguruma/blob/master/doc/RE).
Before “/var/log/secure”, I analyze the header to capture the host IP into the “cps_logmsg_ip” field, so I can reuse it in the Output section.
When a record matches the secure log file pattern, I add a tag with a value (here, “cps_secure”) to the record. A tag is a piece of information linked to an event but not added to the record; it is not a field.
As the flow contains records from different IBM CPS log files, for each record I first remove the tag set on the previous record, using the Mutate filter plugin.
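Applied to the hypothetical record shown earlier, this filter would produce the following fields and tag (values are illustrative):
cps_logmsg_timestamp => 2021-01-15T10:23:45.123456Z
cps_logmsg_ip        => 10.0.0.1
cps_logmsg_secure    => Jan 15 10:23:45 cpshost sshd[2345]: Accepted password for admin from 192.168.1.10 port 52314 ssh2
tags                 => ["cps_secure"]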
Output section
In this section, I write records to the output log files:
# Output section: split each message into a dedicated log file
output {
  if "cps_secure" in [tags] {
    file {
      path => "/cps_syslog/secure/secure_%{cps_logmsg_ip}.log"
      codec => line { delimiter => "" format => "%{cps_logmsg_secure}" }
      flush_interval => 0
    }
  }
}
I use the “cps_secure” tag to write only the secure records to the output file.
In the output log filename, I use the “cps_logmsg_ip” field to separate the secure log files of different IBM Cloud Pak System machines. For this article I used only one machine, but you can configure several IBM CPS machines to send their log files to the same Logstash server, so splitting the log files per CPS machine is useful. The “cps_logmsg_ip” field can also be used to split logs from the PSM leader (the primary IP) and the PSM non-leader (the secondary IP).
In the output record, I use the “cps_logmsg_secure” field to write only the log message, without the header and log pathname. In addition, the line codec adds – by default – a line-feed delimiter after each message, which creates an empty line between messages. To avoid that, I set the delimiter setting to “” (no value).
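With this configuration, the hypothetical record from earlier would be appended to /cps_syslog/secure/secure_10.0.0.1.log as:
Jan 15 10:23:45 cpshost sshd[2345]: Accepted password for admin from 192.168.1.10 port 52314 ssh2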
How to debug
To debug your configuration file, you can read this blog post: https://logz.io/blog/debug-logstash/.
To debug the Grok syntax, I use the Grok debugger at http://grokdebug.herokuapp.com/.
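You can also ask Logstash to validate a configuration file without starting the pipeline, with the --config.test_and_exit flag:
cd /usr/share/logstash/bin
./logstash --config.test_and_exit -f /etc/logstash/conf.d/split_cps_files.conf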
Step 6 – Start Logstash
I start Logstash with the configuration file:
cd /usr/share/logstash/bin
./logstash -f /etc/logstash/conf.d/split_cps_files.conf
Logstash opens the output file as soon as it processes a record for this output file:
As you can see, Logstash can open and close an output file several times. Logstash does that when it does not receive – and so does not process – records for that output file for a while. This behavior is normal because IBM Cloud Pak System continuously sends records coming from different log files.
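Alternatively, since system-install registered Logstash as a service, you can run it in the background (assuming a systemd-based distribution); by default, the service picks up every “.conf” file under /etc/logstash/conf.d/:
systemctl start logstash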
Step 7 – Check Log files split
I split the IBM Cloud Pak System log files to keep only the records from the secure log file, so I check the content of the “secure_*.log” file (I used only one CPS machine, so there is only one file, named with its IP 10.x.y.z):
Conclusion
In this article, I explained how to split IBM Cloud Pak System log files by using the log pathname.
With the Logstash filter plugins – I used the Grok plugin with a pattern and regular expressions, but you can define your own – you can filter IBM Cloud Pak System log files as you want.
This is useful to capture specific events like user login.