The Getting started with Watson for AIOps Event Manager blog mini-series covers the deployment, configuration, and set-up of an Event Manager system, to get you off to a fast start and help you get quick value from your investment.
This second module focuses on configuring and connecting an on-premise Netcool/Probe to the Event Manager. In most demonstration or POC scenarios, there will be an existing on-premise Netcool deployment, and a need to connect that environment to the system running on OpenShift. Typically this involves connecting Probes or uni-directional ObjectServer Gateways into the Event Manager to provide a source of events.
NOTE: This module assumes you have an existing on-premise Netcool/OMNIbus deployment that you want to connect into your Event Manager system. If this is not the case, and you are using only webhook-based event feeds (configured via the GUI), you can skip this module and move on to module 3.
This deployment scenario assumes you are deploying onto IBM Cloud; however, the steps generally apply to an OpenShift cluster deployed on another cloud provider or on-premise.
By the end of this module, you will have enabled the ObjectServer nodeports on your cluster, configured your on-premise Netcool/OMNIbus system to connect to the ObjectServer embedded within Event Manager, and connected a Probe.
This module should take you about 50 minutes to complete and includes the following steps:
Step 1: Activate the ObjectServer nodeports on your OpenShift cluster (15 minutes)
Step 2: Gather the cluster connection information (15 minutes)
Step 3: Configure Netcool/OMNIbus to connect to the cluster (15 minutes)
Step 4: Configure and connect a Netcool/Probe (5 minutes)
Step 1: Activate the ObjectServer nodeports on your OpenShift cluster
Behind the scenes, the event stores in Watson for AIOps Event Manager are Netcool/OMNIbus ObjectServers. These are running in containers and are not accessible outside the cluster by default. This first step involves modifying your Event Manager deployment to activate the nodeports.
Log in to your OpenShift UI and navigate to:
Operators → Installed Operators → IBM Cloud Pak for Watson AIOps Event Manager → NOI.
Next, click on your deployment - for example, evtmanager - and then click on the YAML tab.
Add a new sub-section to the spec: section in your configuration with the following text:
helmValuesNOI:
  global.service.nodePort.enable: true
NOTE: The spacing and indentation are important here.
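For reference, a minimal sketch of how this sub-section sits within the spec: section is shown below; the surrounding properties are omitted and will differ in your own deployment:

spec:
  ...
  helmValuesNOI:
    global.service.nodePort.enable: true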
After you have added this sub-section and clicked Save, the resulting configuration should look something like the following:
NOTE: In this screenshot, other properties have also been added. The key added lines are lines 196 and 198.
Next, you need to modify the StatefulSet configuration for evtmanager-ncoprimary and evtmanager-ncobackup.
Navigate to: Workloads → StatefulSets and search for primary in the search bar. Click on evtmanager-ncoprimary and then click on the YAML tab.
Scroll down to approximately line 415 and remove the .<namespace>.svc suffix from the NCO_IDUC_LISTENING_HOSTNAME value.
For example:
- name: NCO_IDUC_LISTENING_HOSTNAME
  value: evtmanager-objserv-agg-primary-nodeport.netcool.svc
...becomes:
- name: NCO_IDUC_LISTENING_HOSTNAME
  value: evtmanager-objserv-agg-primary-nodeport
After you have made this change and clicked Save, the resulting configuration should look something like the following:
After you have saved the configuration, OpenShift will detect the change, and redeploy the relevant services.
Repeat this step for the evtmanager-ncobackup StatefulSet.
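Once OpenShift picks up both changes, you can confirm from the command line that the StatefulSets roll out cleanly (logging in to the cluster CLI is covered in the next paragraph). A check along these lines should work, assuming the evtmanager release name and the noi namespace used in this example:

$ oc rollout status statefulset/evtmanager-ncoprimary -n noi
$ oc rollout status statefulset/evtmanager-ncobackup -n noi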
Next, log in to the OpenShift cluster via the command line, as you did in the previous module. To check that the nodeports have been successfully deployed, you can query the proxy service and check its output for the nodeport values:
$ oc get service -o yaml evtmanager-proxy -n noi
apiVersion: v1
kind: Service
metadata:
...
  ports:
  - name: aggp-proxy-port
    nodePort: 32767
    port: 6001
    protocol: TCP
    targetPort: 6001
  - name: aggb-proxy-port
    nodePort: 31280
    port: 6002
    protocol: TCP
    targetPort: 6002
  selector:
    app.kubernetes.io/instance: evtmanager
    app.kubernetes.io/name: proxy
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
$
From the output above, you can see that the nodeports have been deployed on port 32767 for the primary ObjectServer and port 31280 for the backup. These are the port numbers that are accessible from outside the cluster, and they are what your Netcool Probe (or in-bound Netcool Gateway) will use to connect.
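If you prefer not to scan the full YAML, a jsonpath query can pull out just the port names and nodeport values. A sketch, using the same service and namespace as above (your output will show your own port numbers):

$ oc get service evtmanager-proxy -n noi -o jsonpath='{range .spec.ports[*]}{.name}{" "}{.nodePort}{"\n"}{end}'
aggp-proxy-port 32767
aggb-proxy-port 31280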
NOTE: Make a note of these port numbers as you'll need them in subsequent steps.
For more information on this step, see the following documentation link:
https://www.ibm.com/docs/en/noi/1.6.4?topic=service-identifying-proxy-listening-port
Step 2: Gather the cluster connection information
This step involves identifying the cluster's certificate common name (CN) and acquiring the cluster certificate for import into the keystore on the Probe server.
In the previous module, you identified the Ingress subdomain for your cluster from the IBM Cloud UI.
Ping the Ingress subdomain to identify the IP address to use to communicate with the cluster:
$ ping swat01-4693fb98e216d694995fd035c18ac049-0000.us-east.containers.appdomain.cloud
PING swat01-4693fb98e216d694995fd035c18ac049-0000.us-east.containers.appdomain.cloud (52.117.99.214) 56(84) bytes of data.
64 bytes from evtmanager-proxy.noi.svc (52.117.99.214): icmp_seq=1 ttl=46 time=89.2 ms
64 bytes from evtmanager-proxy.noi.svc (52.117.99.214): icmp_seq=2 ttl=46 time=96.10 ms
^C
--- swat01-4693fb98e216d694995fd035c18ac049-0000.us-east.containers.appdomain.cloud ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 89.216/93.100/96.985/3.896 ms
$
Two things to note from the example output above are the proxy service name (evtmanager-proxy.noi.svc) and the IP address (52.117.99.214). These are the hostname and IP address we will use to communicate with the cluster.
NOTE: Make a note of the proxy service name and IP address as you'll need them shortly.
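If ICMP is blocked between your workstation and the cluster, a plain DNS lookup will also return the IP address (the output may list more than one address or an intermediate CNAME, depending on your environment). For example:

$ dig +short swat01-4693fb98e216d694995fd035c18ac049-0000.us-east.containers.appdomain.cloud
52.117.99.214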
Next, log in to the server where the Netcool/Probe is installed. Use the openssl command to retrieve the X.509 certificate returned by the proxy and verify the certificate common name (CN). You again need to use the Ingress subdomain value, this time in conjunction with the nodeport value you discovered in Step 1:
$ openssl s_client -connect swat01-4693fb98e216d694995fd035c18ac049-0000.us-east.containers.appdomain.cloud:32767
CONNECTED(00000003)
depth=1 CN = openshift-service-serving-signer@1645115959
verify return:1
depth=1 CN = openshift-service-serving-signer@1645115959
verify return:1
depth=0 CN = evtmanager-proxy.noi.svc
verify return:1
---
Certificate chain
0 s:CN = evtmanager-proxy.noi.svc
...
Here you can see the proxy service name associated with the certificate presented by the server, and that it matches the hostname returned by the ping command earlier. This matters because the hostname we use to connect to the cluster must match the name in the certificate for the SSL connection to work correctly.
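If you only want to see the common name, the same openssl s_client call can be piped through openssl x509. A sketch, reusing the Ingress subdomain and nodeport from above (the exact formatting of the subject line varies between OpenSSL versions):

$ openssl s_client -connect swat01-4693fb98e216d694995fd035c18ac049-0000.us-east.containers.appdomain.cloud:32767 </dev/null 2>/dev/null | openssl x509 -noout -subject
subject=CN = evtmanager-proxy.noi.svc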
Finally, download the certificate from the cluster using the oc utility:
$ oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data "tls.crt"}}' | base64 --decode > cluster-ca-cert.pem
$ ls -l cluster-ca-cert.pem
-rw-rw-r--. 1 zane zane 1212 Apr 14 12:06 cluster-ca-cert.pem
$
Copy this file (cluster-ca-cert.pem) to the Probe server in preparation for the next step.
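How you copy the file will depend on your environment; a typical approach is scp, shown here as a sketch in which netcool@probe-host.example.com is a placeholder for your own Probe server user and hostname:

$ scp cluster-ca-cert.pem netcool@probe-host.example.com:/tmp/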
For more information on this step, see the following documentation link:
https://www.ibm.com/docs/en/noi/1.6.4?topic=service-configuring-tls-encryption-red-hat-openshift
Step 3: Configure Netcool/OMNIbus to connect to the cluster
Now that we have the proxy service hostname, the nodeport of the primary ObjectServer, and the cluster's certificate, we are ready to configure the Probe to connect to the Event Manager.
Log in to the Probe server and add an entry to the /etc/hosts file with the cluster IP address and the proxy service hostname obtained previously:
52.117.99.214 evtmanager-proxy.noi.svc
This is the hostname that will be used in the Netcool interfaces file.
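A quick way to confirm the entry is being picked up is to resolve the hostname locally and check that it returns the cluster IP address, for example (on Linux):

$ getent hosts evtmanager-proxy.noi.svc
52.117.99.214   evtmanager-proxy.noi.svc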
If you haven't already created one, create a keystore on your Probe server to import the certificate into:
$NCHOME/bin/nc_gskcmd -keydb -create -db "$NCHOME/etc/security/keys/omni.kdb" -pw password -stash -expire 1000
Copy the certificate you downloaded in Step 2 to the Probe server (if you haven't already), and import it into your newly created keystore:
$NCHOME/bin/nc_gskcmd -cert -add -file cluster-ca-cert.pem -db $NCHOME/etc/security/keys/omni.kdb -stashed
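To confirm the import worked, you can list the certificates in the keystore; something like the following should show the newly added certificate (the exact options may vary slightly with your GSKit version):

$NCHOME/bin/nc_gskcmd -cert -list -db "$NCHOME/etc/security/keys/omni.kdb" -stashed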
Create an interfaces file entry in $NCHOME/etc/omni.dat representing the primary ObjectServer running in OpenShift:
[ROKS_AGG_P]
{
    Primary: evtmanager-proxy.noi.svc ssl 32767
}
NOTE: The entry contains the string ssl, which indicates that an encrypted connection should be used.
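If you also want the Probe to be able to fail over to the backup ObjectServer, you can define a virtual pair entry that references both nodeports. A sketch, assuming the backup nodeport 31280 identified in Step 1 (the pair name ROKS_AGG_V is an arbitrary choice):

[ROKS_AGG_V]
{
    Primary: evtmanager-proxy.noi.svc ssl 32767
    Backup: evtmanager-proxy.noi.svc ssl 31280
}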
Run $NCHOME/bin/nco_igen to update the interfaces file information.
Use the nco_ping utility to test the connection to the ObjectServer:
$ $OMNIHOME/bin/nco_ping ROKS_AGG_P
NCO_PING: Server available.
$
You are now ready to connect your Probe.
For more information on this step, see the following documentation link:
https://www.ibm.com/docs/en/noi/1.6.4?topic=service-configuring-tls-encryption-red-hat-openshift
Step 4: Configure and connect a Netcool/Probe
The fourth and final step is to configure the Probe to connect to the primary ObjectServer running in OpenShift.
Using the Simnet Probe as an example, run the Probe in debug mode to ensure that the Probe can connect to the ObjectServer:
$OMNIHOME/probes/nco_p_simnet -server ROKS_AGG_P -messagelevel debug -messagelog stdout
After verifying a successful connection to the ObjectServer, you can run the Probe outside of debug mode, normally under Process Agent control.
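Rather than passing these settings on the command line each time, you can set the equivalent generic properties in the Probe's props file. A sketch for the Simnet Probe, assuming a default installation (the props file location and defaults vary by platform and Probe):

Server       : 'ROKS_AGG_P'
MessageLevel : 'warn'
MessageLog   : '$OMNIHOME/log/simnet.log'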
--
You have now completed this module and are ready for module 3: Set up a webhook integration.