Connect an IDUC client to an ObjectServer running in OCP

By Zane Bray posted Wed April 19, 2023 11:20 AM

If you deploy the Cloud Pak for Watson AIOps Event Manager (also known as Netcool Operations Insight, or NOI) onto OpenShift, the deployment includes a number of containerised Netcool components, including a fail-over pair of Netcool/OMNIbus 8.1 ObjectServers. By default, these ObjectServers are inaccessible from outside the OpenShift cluster.

This blog post outlines how you can expose the ObjectServer ports and connect to them, in particular how to connect an IDUC client such as a Netcool/Gateway Reader connection or a Netcool/OMNIbus Native Event List. Netcool/Gateway Reader connections are of particular importance, as they are the type used by the JDBC Gateway (for historic event archiving) and by the ticketing Gateways, such as the Netcool/Gateway for Remedy or ServiceNow.

The scenario is one where you have a VM with Netcool/OMNIbus 8.1 installed and you want to connect an out-bound Netcool/Gateway to the ObjectServers running in your Cloud Pak for Watson AIOps Event Manager instance.

Below is a high-level overview of the steps required to expose the ObjectServer ports:

1. Gather a list of your worker node IP addresses
2. Configure load-balancer external IP services to map the internal primary and backup ObjectServer ports to external ports
3. Modify the IDUC port used by your backup ObjectServer so it doesn't clash with that of the primary
4. Gather the IDUC listening host names used by your ObjectServers
5. Create a local hosts file entry that binds these listening host names to one of your worker IP addresses
6. Create your local Netcool/OMNIbus interfaces file entry using the listening host names
7. Test and verify your connection

The following sections go through each step in detail, guiding you through the process of opening the ports, then testing and validating the connection.

GATHER A LIST OF YOUR WORKER NODE IP ADDRESSES

Open a command prompt and authenticate to your OpenShift cluster. Next, run the following command to get a list of your worker node IP addresses:

oc get nodes --selector='node-role.kubernetes.io/worker' -o jsonpath="{.items[*].status.addresses[?(@.type=='InternalIP')].address}"
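
The output is a space-separated list of addresses, for example (illustrative values, matching those used in the YAML below):

10.1.1.1 10.1.1.2 10.1.1.3

These are the addresses to list under externalIPs in the next step.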


CONFIGURE LOAD-BALANCER SERVICES

Log into your OpenShift console and select the namespace where you deployed your Netcool Operations Insight instance (e.g. "noi").

Click on the "Import YAML" button (the plus button in the top-right of the screen) to open a new YAML import entry screen. Use the following as an example to create your three load-balancer services: one for the proxy, one for the primary ObjectServer, and one for the backup ObjectServer:

apiVersion: v1
kind: Service
metadata:
  name: evtmanager-proxy-externalip
spec:
  ports:
  - name: aggp-tds
    port: 6001
  - name: aggb-tds
    port: 6002
  externalTrafficPolicy: Cluster
  externalIPs:
  - 10.1.1.1
  - 10.1.1.2
  - 10.1.1.3
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: proxy
---
apiVersion: v1
kind: Service
metadata:
  name: evtmanager-ncoprimary-externalip
spec:
  ports:
  - name: aggp-tds
    port: 4100
  - name: aggp-iduc
    port: 4101
  externalTrafficPolicy: Cluster
  externalIPs:
  - 10.1.1.1
  - 10.1.1.2
  - 10.1.1.3
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ncoprimary
---
apiVersion: v1
kind: Service
metadata:
  name: evtmanager-ncobackup-externalip
spec:
  ports:
  - name: aggb-tds
    port: 4102
    targetPort: 4100
  - name: aggb-iduc
    port: 4103
  externalTrafficPolicy: Cluster
  externalIPs:
  - 10.1.1.1
  - 10.1.1.2
  - 10.1.1.3
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ncobackup


Notes:
- Replace references to evtmanager with your instance name
- Replace the IP addresses listed in the externalIPs section with your worker node IP addresses
- The load-balancer services will ensure requests for the primary ObjectServer go to the worker where that pod is running
- Likewise the load-balancer services will ensure requests for the backup ObjectServer go to its respective worker
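
Once the services are created, you can confirm them and their external IPs from the command line (assuming your namespace is "noi"):

oc get services -n noi | grep externalip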

MODIFY BACKUP OBJECTSERVER IDUC PORT NUMBER

From the OpenShift GUI, select: Operators → Installed Operators → IBM Cloud Pak for Watson AIOps Event Manager → NOI

Click on your deployed instance (e.g. "evtmanager") and select the YAML tab.

Scroll down to the spec section and add the following two lines:

  helmValuesNOI:
    ncobackup.objserv.internal.iducPort: 4103

Note: it is essential to ensure the space characters are correct. There are two space characters before helmValuesNOI and four space characters before ncobackup.

Click Save to save your changes to your NOI deployment instance.
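
Alternatively, you can make the same change from the command line with a merge patch. A sketch, assuming the NOI custom resource kind is noi and your instance is named evtmanager in the noi namespace:

oc patch noi evtmanager -n noi --type merge -p '{"spec":{"helmValuesNOI":{"ncobackup.objserv.internal.iducPort":4103}}}'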

If you now go to: Workloads → Pods and search for "backup", you'll see the backup ObjectServer pod restarting, as it updates with the new settings.
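
You can also watch the restart from the command line, using the same app.kubernetes.io/name=ncobackup label that the load-balancer service selector above relies on:

oc get pods -n noi -l app.kubernetes.io/name=ncobackup -w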

GATHER IDUC LISTENING HOSTNAMES

Navigate to: Workloads → Pods and search for "primary" and select the primary ObjectServer pod.

Click on the Environment tab and make a note of the value of the NCO_IDUC_LISTENING_HOSTNAME environment variable.

Do the same for the backup ObjectServer pod.

They will be something like: evtmanager-objserv-agg-primary and evtmanager-objserv-agg-backup respectively, depending on your deployment instance name.
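
If you prefer the command line, you can read the variable directly from each pod's environment; a sketch, where the pod names are the ones found in your search above (if a pod has multiple containers, add -c to target the ObjectServer container):

oc exec -n noi <primary-pod-name> -- env | grep NCO_IDUC_LISTENING_HOSTNAME
oc exec -n noi <backup-pod-name> -- env | grep NCO_IDUC_LISTENING_HOSTNAME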

CREATE LOCAL HOSTS FILE ENTRY

On your VM where you have Netcool/OMNIbus 8.1 installed, edit your local hosts file and add an entry that resolves the IDUC listening host names to the IP address of one of your worker nodes - for example:

10.1.1.1 evtmanager-objserv-agg-primary evtmanager-objserv-agg-backup


This will ensure that when the ObjectServer responds to the IDUC client connection request, the client will know where to connect.

Note: This binds the target IDUC listening host names to just one of your worker node IP addresses. To ensure continued service if that worker node goes offline, you may wish to resolve the names to additional addresses or place an external load balancer in front. In a POC or demo environment, however, one worker node IP address should suffice.

CREATE INTERFACES FILE ENTRIES

On your VM where you have Netcool/OMNIbus 8.1 installed, edit your interfaces file, $NCHOME/etc/omni.dat, and add entries for your ObjectServer pair, using the listening host names as the targets and the port numbers used in your load-balancer services:

[AGG_P_OCP]
{
    Primary: evtmanager-objserv-agg-primary 4100
}
[AGG_B_OCP]
{
    Primary: evtmanager-objserv-agg-backup 4102
}
[AGG_V_OCP]
{
    Primary: evtmanager-objserv-agg-primary 4100
    Backup: evtmanager-objserv-agg-backup 4102
}


Run $NCHOME/bin/nco_igen afterwards to update the interfaces file used by Netcool/OMNIbus processes.
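
Note: nco_igen writes the connection definitions to a platform-specific file, for example $NCHOME/etc/interfaces.linux2x86 on Linux, which you can inspect to confirm the new entries are present.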

TEST AND VERIFY CONNECTION

Use $OMNIHOME/bin/nco_ping to verify your connection to AGG_P_OCP and AGG_B_OCP.
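
For example:

$OMNIHOME/bin/nco_ping AGG_P_OCP
$OMNIHOME/bin/nco_ping AGG_B_OCP

Each command should report that the ObjectServer is available.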

Next, navigate to: Workloads → Secrets and search for "omni-secret" to locate your ObjectServer's root password.
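
If you prefer to retrieve it from the command line, something like the following should work, assuming the default secret name evtmanager-omni-secret and key OMNIBUS_ROOT_PASSWORD (adjust for your instance name and namespace):

oc get secret evtmanager-omni-secret -n noi -o jsonpath='{.data.OMNIBUS_ROOT_PASSWORD}' | base64 -d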

Execute the following to launch a Native Event List that will connect to your primary ObjectServer. Since the Native Event List is an IDUC client, it will test your connection to the ObjectServer's main port as well as the IDUC port:

$OMNIHOME/bin/nco_event -server AGG_V_OCP -user root -password <copied-secret>


All going well, you should now be logged in to your primary ObjectServer and able to open event views.


FINAL NOTES

You have now exposed your ObjectServers' main and IDUC ports outside the cluster and can connect any IDUC client to the ObjectServer pair, including out-bound Netcool/Gateways.

The ObjectServers running within OpenShift do not support TLS connections; however, this may be acceptable in many deployment scenarios, for example demonstration, test, or POC environments. If an encrypted connection is required, consider using the TLS proxy to connect to the ObjectServers instead:

https://www.ibm.com/docs/en/noi/1.6.8?topic=tp-installing-configuring-completing-common-usage-scenarios-tls-proxy
