Co-authored by Martin Smithson and Jagadeeswar Gangaraju.
WebSphere Automation "How To" Series #19 : How to install WebSphere Automation to Red Hat® OpenShift® Local
Previous blogs in this WebSphere Automation "How To" series :
WebSphere Automation "How To" Series #1 : How to get WebSphere Automation UI URL
WebSphere Automation "How To" Series #2 : How to specify user roles and permissions
WebSphere Automation "How To" Series #3 : How to configure WebSphere Automation with an Enterprise LDAP
WebSphere Automation "How To" Series #4 : How to register WebSphere Application Server traditional servers using configuretWasUsageMetering.py script
WebSphere Automation "How To" Series #5 : How to register WebSphere Liberty servers
WebSphere Automation "How To" Series #6 : How to configure email server and email addresses for notifications
WebSphere Automation "How To" Series #7 : How to setup Instana to send alerts to WebSphere Automation
WebSphere Automation "How To" Series #8 : How to setup secure access to Linux or UNIX servers
WebSphere Automation "How To" Series #9 : How to trigger a memory leak health investigation when used heap is over 80 percent
WebSphere Automation "How To" Series #10 : How to view WebSphere Automation REST APIs using Swagger UI
WebSphere Automation "How To" Series #11 : How to get and delete assets using APIs
WebSphere Automation "How To" Series #12 : How to get security bulletins using APIs
WebSphere Automation "How To" Series #13 : How to retrieve a list of vulnerabilities using APIs
WebSphere Automation "How To" Series #14 : How to get CVE impact summaries using APIs
WebSphere Automation "How To" Series #15 : How to install interim fixes and fix packs using WebSphere Automation UI
WebSphere Automation "How To" Series #16 : How to retrieve and delete installations using APIs
WebSphere Automation "How To" Series #17 : How to retrieve fixes using APIs
WebSphere Automation "How To" Series #18 : How to register WebSphere Liberty servers running in containers
This post will focus on how to install WebSphere Automation to an instance of Red Hat OpenShift Local (formerly Red Hat CodeReady Containers) for the purpose of evaluating the offering or performing a proof of concept.
WebSphere Automation helps to free operations teams from the routine "care and feeding" of their WebSphere environments. With proactive CVE protection for WebSphere, and integration with Instana to help reduce time to resolution on disruptive memory leaks, WebSphere Automation gives time back to operations teams to focus on strategic initiatives like Liberty adoption, moving workloads to cloud and containers, and supporting the organization's digital transformation initiatives.
However, WebSphere Automation must be installed on Red Hat OpenShift Container Platform (OCP) 4.6 or later on Linux® x86_64, and the prospect of deploying an OpenShift cluster in order to evaluate WebSphere Automation can be a little daunting to anyone who is not familiar with the underlying technology.
One option is to deploy WebSphere Automation to an instance of Red Hat OpenShift Local. Red Hat OpenShift Local provides a minimal OCP cluster on your local computer for development and testing purposes. The OCP cluster runs in a virtual machine known as an instance and uses a single node that behaves as both a control plane and a worker node.
The remainder of this post walks through the steps required to install and configure WebSphere Automation on an instance of Red Hat OpenShift Local.
System Requirements
For this example, we will be deploying Red Hat OpenShift Local to a Linux virtual machine running CentOS 9. The Getting Started Guide for Red Hat OpenShift Local describes the presets that can be specified when configuring the instance. The configured preset determines the managed container runtime that will be used to run OpenShift Local. The presets provided by OpenShift Local are as follows:
- OpenShift Container Platform
- Podman container platform
We will be using the OpenShift Container Platform preset when configuring the instance.
Minimum System Requirements
The minimum system requirements that are specified in the Getting Started Guide for the OpenShift Container Platform preset are as follows:
- 4 physical CPU cores
- 9 GB of free memory
- 35 GB of storage space
However, the Getting Started Guide also states the following:
The OpenShift Container Platform cluster requires these minimum resources to run in the Red Hat OpenShift Local instance. Some workloads may require more resources.
Because we will be deploying WebSphere Automation on the OpenShift Local instance, the virtual machine that we will be using needs to be configured with more resources than those described above. The virtual machine that we will be using is configured with the following resources:
- 24 physical CPU cores
- 64 GB of memory
- 200 GB of storage space
NOTE: It is possible to configure the OpenShift Local instance with more CPUs than are actually available on the host that it is running on, although this will obviously impact the performance since the OpenShift Local instance will be resource constrained. We have tested deploying OpenShift Local to a virtual machine with 16 CPU cores.
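Before starting, it is worth confirming the resources that are actually available on the virtual machine. The following standard Linux commands report the number of CPU cores, the amount of memory and the available disk space:
nproc
free -h
df -h /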
User Requirements
The OpenShift Local executable (crc) cannot be run as the root user or as an administrator, so we will create a new user to run OpenShift Local on the virtual machine. To do this, run the following commands as the root user, entering a suitable password for your environment when prompted:
useradd -m crcuser
passwd crcuser
Depending on the Linux distribution, creating a new user with the useradd command may not create a home directory for the user by default. The -m (or --create-home) option ensures that the user's home directory is created as /home/crcuser.
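As a quick check, the following commands (run as the root user) confirm that the crcuser account and its home directory have been created:
id crcuser
ls -ld /home/crcuser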
Add Sudo Capabilities To The New User
On Linux or macOS, the user account that is used to run OpenShift Local must have permission to use the sudo command. Perform the following steps to configure the crcuser user to use the sudo command:
- As the root user, execute the following command to edit the sudoers file:
visudo
- Insert the following line in the relevant section of the file:
crcuser ALL=(ALL) NOPASSWD:ALL
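To verify that passwordless sudo is working for the new user, the following command can be run as the root user; it should print the id of the root user without prompting for a password:
su - crcuser -c 'sudo id'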
Required Software Packages For Linux
Red Hat OpenShift Local requires the libvirt and NetworkManager packages to run on Linux. Use the following command to install these packages on the CentOS 9 virtual machine:
su -c 'yum install libvirt NetworkManager'
NOTE: If you are using a different Linux distribution, please refer to the Getting Started Guide for the relevant command.
The process of installing WebSphere Automation also requires a GUI to be installed on the virtual machine. If a GUI is not installed on the virtual machine, use the following command to install the relevant packages:
dnf groupinstall "Server with GUI"
Install And Configure NFS
Various components in WebSphere Automation make use of persistent volumes in order to store files. For example, fix files that are downloaded from Fix Central are stored in a persistent volume so that they can be used to install the relevant fix to servers that are registered with WebSphere Automation in order to resolve vulnerabilities that have been detected.
For a typical deployment of WebSphere Automation, the storage provider that is installed and configured in the OCP cluster would need to comply with the storage requirements specified in the WebSphere Automation documentation. However, for this example, we want to use a more lightweight storage provider that will not consume many resources from the OCP instance or the underlying virtual machine.
For this reason, we will install and configure an NFS server on our virtual machine and will configure the OCP instance with an automatic provisioner that can use this NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Perform the following steps as the root user to install and configure the NFS server on the virtual machine:
- Install the nfs-utils package:
yum install nfs-utils
- Enable and start the rpcbind service:
systemctl --now enable rpcbind
- Enable and start the NFS server:
systemctl --now enable nfs-server
- Create the directory that will be exposed by the NFS server:
mkdir /var/nfsshare
chmod -R 755 /var/nfsshare
- If you are using a firewall, you may need to update the configuration to allow access to the NFS server. For example, if you are using the firewalld daemon, you would use the following commands:
firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --permanent --zone=public --add-service=rpcbind
firewall-cmd --reload
- Configure the NFS server to allow all clients access by adding the following line to the /etc/exports file, creating the file if it does not exist:
/var/nfsshare *(rw,sync,insecure,no_root_squash)
- Restart the NFS server:
systemctl restart nfs-server
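Once the NFS server has been restarted, the export can be verified using the following commands, which should show /var/nfsshare being exported to all clients:
exportfs -v
showmount -e localhost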
Install Red Hat OpenShift Local
The installation instructions in the Getting Started Guide direct you to a download URL for the latest version of OpenShift Local. However, at the time of writing, WebSphere Automation does not support the OCP version that is embedded in the latest version of OpenShift Local (4.11.1). As a result, we need to download and install an older version of OpenShift Local that embeds a version of OCP that is supported by WebSphere Automation. Perform the following steps to do this:
- From the command line, ssh into your virtual machine as the crcuser user.
- Change to the Downloads directory:
cd ~/Downloads
- Download version 2.6.0 of OpenShift Local:
wget https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/crc/2.6.0/crc-linux-amd64.tar.xz
NOTE: This version of OpenShift Local embeds version 4.10.22 of the OpenShift Container Platform.
- Extract the contents of the archive that has been downloaded:
tar xvf crc-linux-amd64.tar.xz
- Create the ~/bin directory if it does not exist and copy the crc executable to it:
mkdir -p ~/bin
cp ~/Downloads/crc-linux-*-amd64/crc ~/bin
- Add the ~/bin directory to the $PATH environment variable:
export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
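At this point, the crc executable should be available on the PATH. The following command confirms the version that has been installed; for the download above, it should report version 2.6.0 of OpenShift Local and version 4.10.22 of OCP:
crc version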
Configure Red Hat OpenShift Local
The resources that are configured for the OpenShift Local instance default to the same values specified for the minimum system requirements, that is, 4 CPUs and 9 GiB of memory. However, as discussed above, we need to configure the OpenShift Local instance with more resources in order to be able to deploy and run WebSphere Automation on the OCP cluster that it hosts. The crc config command is used to configure the Red Hat OpenShift Local instance. Perform the following steps as the crcuser user to configure the resources for the OpenShift Local instance:
- Configure the number of vCPUs available to the instance:
crc config set cpus 24
- Configure the memory available to the instance:
crc config set memory 53248
- Configure the size of the disk for the instance:
crc config set disk-size 200
- On Linux, the OpenShift Container Platform preset is selected by default. If you are not running on Linux, you can specify the preset using the following command:
crc config set preset openshift
NOTE: You cannot change the preset of an existing Red Hat OpenShift Local instance. Preset changes are only applied when a Red Hat OpenShift Local instance is created. To enable preset changes, you must delete the existing instance and start a new one.
- Red Hat OpenShift Local collects anonymous usage data to assist with development. Disable data collection using the following command:
crc config set consent-telemetry no
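The configured values can be reviewed at any time using the following command:
crc config view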
Before the OpenShift Local instance can be started, a number of operations need to be performed to set up the environment on the host machine. The crc setup command performs these setup operations:
crc setup
The crc setup command creates the ~/.crc directory if it does not already exist.
NOTE: The steps described in this section only need to be performed once. Once they have been run, the OpenShift Local instance can be started/stopped as required without needing to execute them again.
NOTE: You cannot change the configuration of a running Red Hat OpenShift Local instance. To enable configuration changes, you must stop the running instance and start it again.
Start Red Hat OpenShift Local
Once the Red Hat OpenShift Local instance has been configured, it can be started using the crc start command. Perform the following steps as the crcuser user in order to start the OpenShift Local instance:
- Start the Red Hat OpenShift Local instance:
crc start
- Because we are using the OpenShift Container Platform preset, we need to supply a user pull secret when prompted. This can be obtained by opening a browser to the following URL, logging in, clicking the Copy pull secret link and then pasting it into the prompt:
https://console.redhat.com/openshift/create/local
NOTE: The cluster takes a minimum of four minutes to start the necessary containers and Operators before serving a request.
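The state of the instance can be checked at any time using the following command, which reports whether the OpenShift cluster is running together with its disk and cache usage:
crc status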
Install The NFS Subdir External Provisioner
The NFS Subdir External Provisioner is an automatic provisioner that uses an existing NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned in OpenShift with names of the form ${namespace}-${pvcName}-${pvName}, and the directories that are created in the underlying directory shared via the NFS server use the same naming convention. The GitHub repository lists various mechanisms that can be used to deploy the NFS Subdir External Provisioner. The simplest mechanism for deploying the provisioner to the OpenShift Local instance is to use the provided Helm charts.
Install Helm
Perform the following steps as the crcuser user to install the Helm CLI on the virtual machine:
- Download Helm:
cd ~/Downloads
wget https://get.helm.sh/helm-v3.6.0-linux-amd64.tar.gz
- Unpack the archive using the tar command:
tar xvf helm-v3.6.0-linux-amd64.tar.gz
- Move the executable to the /usr/local/bin directory:
sudo mv linux-amd64/helm /usr/local/bin
- Clean up the downloaded and unpacked files:
rm helm-v3.6.0-linux-amd64.tar.gz
rm -rf linux-amd64
Install The Provisioner
Perform the following steps as the crcuser user to install the NFS Subdir External Provisioner to the OCP cluster running in the OpenShift Local instance using Helm:
- Configure the OpenShift CLI oc command in your shell:
eval $(crc oc-env)
- Retrieve the credentials for the kubeadmin user for the OpenShift Local instance:
crc console --credentials
To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p e98bx-iBaRK-JxVFH-GueHJ https://api.crc.testing:6443'
- Copy the oc login command for logging in as an admin and execute it in the shell, for example:
oc login -u kubeadmin -p e98bx-iBaRK-JxVFH-GueHJ https://api.crc.testing:6443
- Add the Helm repository:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
- Install the provisioner, replacing
<HOSTNAME>
with the hostname of the virtual machine:
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=<HOSTNAME> --set nfs.path=/var/nfsshare
- Set the NFS Subdir External Provisioner as the default storage class in the OCP cluster:
oc patch storageclass nfs-client -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
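To confirm that dynamic provisioning is working, check that the nfs-client storage class is marked as the default and, optionally, create a small test PersistentVolumeClaim; it should reach the Bound state within a few seconds. The test-pvc name used below is just an example and the claim can be deleted again once it has bound:
oc get storageclass
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
oc get pvc test-pvc -n default
oc delete pvc test-pvc -n default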
Install The WebSphere Automation Operator
Although the WebSphere Automation documentation provides extensive instructions on how to install the WebSphere Automation Operator to an OCP cluster, we will walk through them step-by-step to ensure that it is installed correctly on the OpenShift Local instance.
Add The IBM Operator Catalog
The IBM Operator Catalog is an index of operators available to automate deployment and maintenance of IBM Software products into OCP clusters. Operators within this catalog have been built following Kubernetes best practices and IBM standards to provide a consistent, integrated set of capabilities. The catalog can be added to any OCP 4.6 or newer cluster by creating a CatalogSource resource. Perform the following steps to create the CatalogSource resource for the IBM Operator Catalog in the OCP cluster:
- Create the IBM Operator Catalog:
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-operator-catalog
  namespace: openshift-marketplace
  annotations:
    olm.catalogImageTemplate: "icr.io/cpopen/ibm-operator-catalog:v{kube_major_version}.{kube_minor_version}"
spec:
  displayName: IBM Operator Catalog
  publisher: IBM
  sourceType: grpc
  image: docker.io/ibmcom/ibm-operator-catalog:latest
  updateStrategy:
    registryPoll:
      interval: 45m
EOF
- Check that the CatalogSource has been created:
oc get CatalogSources ibm-operator-catalog -n openshift-marketplace
The command output lists the CatalogSource resources. Ensure that ibm-operator-catalog is listed:
NAME DISPLAY TYPE PUBLISHER AGE
ibm-operator-catalog IBM Operator Catalog grpc IBM 28s
- Check that the pods for the CatalogSource are running:
oc get pods -n openshift-marketplace
We want the command output to show a Running status for the pods. Ensure that the ibm-operator-catalog pod is listed and reaches Running status:
NAME READY STATUS RESTARTS AGE
ibm-operator-catalog-r96r2 1/1 Running 0 1s
Get An Entitlement Key To IBM Entitled Container Fulfillment Registry
In order to be able to pull the WebSphere Automation images from the IBM Entitled Container Fulfillment Registry, we need to obtain an entitlement key. Perform the following steps to get an entitlement key:
- Log in to MyIBM Container Software Library with an IBMid and password that are associated with the entitled software.
- In the Entitlement keys section, select Copy key to copy the entitlement key to the clipboard.
If WebSphere Automation is not listed in the Container software library, you can register for a WebSphere Automation trial.
Configure The Global Pull Secret With Entitled Registry Credentials
We need to configure the global pull secret in the OCP cluster with the credentials that will be used to access the entitled registry.
Perform the following steps to configure the global pull secret:
- Open the OpenShift Console in the default web browser:
crc console
- Log in as the kubeadmin user, using the credentials obtained from the crc console --credentials command.
- In the OpenShift Console, expand Workloads in the left hand navigation menu and click Secrets.
- Select openshift-config in the Project drop down and then select the pull-secret secret from the list:
- Select Edit Secret from the Actions drop down:
- Scroll to the bottom of the list of existing credentials and click Add Credentials.
- Enter cp.icr.io in the Registry server address field.
- Enter cp in the Username field.
- Paste the entitlement key obtained above into the Password field.
- Click Save:
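If you prefer to work from the command line, the global pull secret can also be updated using the oc CLI. The following is a sketch of one approach: extract the existing pull secret to a local .dockerconfigjson file, merge in an entry for cp.icr.io (username cp, password set to the entitlement key) using a text editor or a tool such as jq, and then upload the merged file:
oc extract secret/pull-secret -n openshift-config --keys=.dockerconfigjson --to=. --confirm
# edit .dockerconfigjson to add the cp.icr.io credentials, then:
oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.dockerconfigjson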
Create The Subscription To The WebSphere Automation Operator
The final step that we need to perform is to create a subscription to the WebSphere Automation Operator. This is the step that effectively installs the WebSphere Automation Operator on the OCP cluster.
Perform the following steps to create the subscription:
- Create the subscription:
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ibm-websphere-automation
  namespace: openshift-operators
spec:
  channel: v1.4
  installPlanApproval: Automatic
  name: ibm-websphere-automation
  source: ibm-operator-catalog
  sourceNamespace: openshift-marketplace
  startingCSV: ibm-websphere-automation.v1.4.2
EOF
- WebSphere Automation builds on top of a number of other common services that are provided by the IBM Automation Foundation (IAF) layer. As a result, installing the WebSphere Automation Operator will also install a number of other operators that WebSphere Automation depends on. We need to make sure that the pods for all of these operators are running before we can continue to the next step in the process. Use the following command to check that all of the operator pods are running:
oc get pod -n openshift-operators
The output should be similar to the following:
NAME READY STATUS RESTARTS AGE
iaf-core-operator-controller-manager-975ffb65d-h2wbm 1/1 Running 0 11h
iaf-eventprocessing-operator-controller-manager-695cdd548-vgt47 1/1 Running 2 (11h ago) 11h
iaf-flink-operator-controller-manager-ffc9cd8b-vcrtm 1/1 Running 0 11h
iaf-operator-controller-manager-6845f7b5b8-xlnmb 1/1 Running 1 (11h ago) 11h
ibm-common-service-operator-7c94b974fc-9wmvh 1/1 Running 0 11h
ibm-elastic-operator-controller-manager-6457d5cf8d-wmkf9 1/1 Running 1 (11h ago) 11h
websphere-automation-operator-controller-manager-5d6cbcf88pjcnc 1/1 Running 1 (11h ago) 11h
NOTE: It may take several minutes for all of the pods to reach the Running state.
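The installation status of the operators themselves can also be checked by listing the ClusterServiceVersion resources; each entry should eventually report a Succeeded phase:
oc get csv -n openshift-operators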
Create And Configure The WebSphere Automation Project
We will deploy WebSphere Automation into its own project (namespace) in the OCP cluster. Depending on the WebSphere Automation functionality that we want to evaluate, we may also need to create some additional resources in the project in order to enable that functionality.
Create The Project
Use the following command to create the project for WebSphere Automation:
oc new-project websphere-automation
Create The Fix Central Secret (Optional)
In order for WebSphere Automation to be able to fetch security fixes, we need to configure WebSphere Automation with the credentials that will be used to access IBM Fix Central. The steps required to create the secret are described in the Setting up credentials for Fix Central section of the WebSphere Automation documentation.
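As an illustration only, Fix Central credentials are typically stored in a Kubernetes secret in the websphere-automation project using a command of the following form. The secret name and key names shown here are placeholders; use the exact names given in the documentation section linked above:
oc create secret generic <FIX_CENTRAL_SECRET_NAME> --from-literal=username=<IBM_ID> --from-literal=password=<IBM_ID_PASSWORD> -n websphere-automation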
Setup Secure Remote Access (Optional)
WebSphere Automation requires remote access to managed servers in order to apply security fixes or to collect heap dump information. WebSphere Automation and the managed servers must be properly configured to allow for remote access. The steps required to configure both WebSphere Automation and the managed servers are described in the Setting up secure remote access section of the WebSphere Automation documentation.
Create The WebSphere Automation Instance
By default, the production profile is used when creating an instance of WebSphere Automation in the OCP cluster. The production profile configures three replicas of the underlying services and makes the operator highly available in the OCP cluster.
However, because the OpenShift Local instance is running on a virtual machine with limited resources, we need to use the starter profile when creating the WebSphere Automation instance. The starter profile consumes less CPU and memory from the OCP cluster than the production profile because it does not configure multiple replicas of the underlying services.
Execute the following command to create an instance of WebSphere Automation that is based on the starter profile. Please note that this instance will include both the server health and security monitoring components of WebSphere Automation.
cat <<EOF | oc apply -f -
apiVersion: base.automation.ibm.com/v1beta1
kind: AutomationBase
metadata:
  name: starter
  namespace: websphere-automation
spec:
  license:
    accept: true
  tls: {}
  version: v1.2
  kafka:
    kafka:
      config:
        offsets.topic.replication.factor: 1
        transaction.state.log.min.isr: 1
        transaction.state.log.replication.factor: 1
      replicas: 1
      resources:
        limits:
          cpu: '1'
          memory: 1Gi
        requests:
          cpu: '1'
          memory: 1Gi
      storage:
        type: persistent-claim
        size: 10Gi
    zookeeper:
      replicas: 1
      resources:
        limits:
          cpu: '1'
          memory: 1Gi
        requests:
          cpu: '1'
          memory: 1Gi
      storage:
        type: persistent-claim
        size: 2Gi
---
apiVersion: automation.websphere.ibm.com/v1
kind: WebSphereAutomation
metadata:
  name: wsa
  namespace: websphere-automation
spec:
  replicas: 1
  dataStore:
    replicas: 1
  license:
    accept: true
---
apiVersion: automation.websphere.ibm.com/v1
kind: WebSphereSecure
metadata:
  name: wsa-secure
  namespace: websphere-automation
spec:
  license:
    accept: true
  replicas: 1
---
apiVersion: automation.websphere.ibm.com/v1
kind: WebSphereHealth
metadata:
  name: wsa-health
  namespace: websphere-automation
spec:
  license:
    accept: true
  replicas: 1
EOF
NOTE: The process of installing WebSphere Automation on the virtual machine will take some time due to the limited resources available. Installation times of up to 90 minutes have been observed when deploying to a virtual machine with 16 CPU cores.
We can check the progress of the installation by inspecting the status of the WebSphereAutomation custom resource that we created. In order to do this, execute the following command:
oc describe websphereautomation wsa
We want to make sure that the Status that is reported for each of the conditions is True and that the message All prerequisites and installed components are ready is displayed. The Status section in the output should look as follows:
Status:
  Conditions:
    Message:  All prerequisites and installed components are ready
    Status:   True
    Type:     Ready
    Status:   True
    Type:     CartridgeReady
    Status:   True
    Type:     AutomationBaseReady
    Status:   True
    Type:     CartridgeRequirementsReady
    Message:  Kafka cluster is ready
    Status:   True
    Type:     KafkaReady
    Message:  Kafka resources are ready
    Status:   True
    Type:     KafkaResourcesReady
    Message:  Data store is ready
    Status:   True
    Type:     DataStoreReady
    Status:   True
    Type:     ActivityRecordManagerReady
    Status:   True
    Type:     WebSphereAutomationAPIsReady
    Message:  All prerequisites and WebSphere Secure components are ready
    Status:   True
    Type:     WebSphereSecureReady
    Message:  All prerequisites and WebSphere Health components are ready
    Status:   True
    Type:     WebSphereHealthReady
    Status:   True
    Type:     RunbookManagerReady
    Message:  All updates to WebSphereAutomation instance have been processed
    Status:   True
    Type:     Reconciled
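Rather than repeatedly running the oc describe command, you can also wait for the Ready condition to be reported on the custom resource and keep an eye on the pods as they are created, for example:
oc wait --for=condition=Ready websphereautomation/wsa -n websphere-automation --timeout=90m
oc get pods -n websphere-automation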
Network Configuration
The Networking chapter of the Red Hat OpenShift Local Getting Started Guide describes the DNS domain names that are used by OpenShift Local and how the DNS configuration of the virtual machine is modified by the crc setup command so that these domains can be resolved on the machine that is hosting the OpenShift Local instance:
The OpenShift Container Platform cluster managed by Red Hat OpenShift Local uses 2 DNS domain names, crc.testing and apps-crc.testing. The crc.testing domain is for core OpenShift Container Platform services. The apps-crc.testing domain is for accessing OpenShift applications deployed on the cluster.
For example, the OpenShift Container Platform API server is exposed as api.crc.testing while the OpenShift Container Platform console is accessed as console-openshift-console.apps-crc.testing. These DNS domains are served by a dnsmasq DNS container running inside the Red Hat OpenShift Local instance.
The crc setup command detects and adjusts your system DNS configuration so that it can resolve these domains. Additional checks are done to verify DNS is properly configured when running crc start.
However, in order to be able to evaluate WebSphere Automation, we need to be able to communicate with the instance from remote servers. We need to install haproxy on the virtual machine in order to forward incoming requests to the services running in the OpenShift Local instance.
Install HAProxy On The Virtual Machine
Perform the following steps as the crcuser user to install and configure haproxy on the virtual machine:
- Start the OpenShift Local instance (if it is not already running):
crc start
NOTE: We need to ensure that the cluster remains running during this procedure.
- Install the haproxy package and other utilities:
sudo dnf install haproxy /usr/sbin/semanage
- If you are using a firewall, you will need to update the configuration to allow communication with the cluster:
sudo systemctl enable --now firewalld
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-service=https --permanent
sudo firewall-cmd --add-service=kube-apiserver --permanent
sudo firewall-cmd --reload
- For SELinux, allow haproxy to listen on TCP port 6443 to serve kube-apiserver on this port:
sudo semanage port -a -t http_port_t -p tcp 6443
- Create a backup of the default haproxy configuration:
sudo cp /etc/haproxy/haproxy.cfg{,.bak}
- Retrieve the IP address of the running OpenShift cluster:
export CRC_IP=$(crc ip)
- Configure haproxy for use with the cluster:
sudo tee /etc/haproxy/haproxy.cfg &>/dev/null <<EOF
global
    log /dev/log local0

defaults
    balance roundrobin
    log global
    maxconn 100
    mode tcp
    timeout connect 5s
    timeout client 500s
    timeout server 500s

listen apps
    bind 0.0.0.0:80
    server crcvm $CRC_IP:80 check

listen apps_ssl
    bind 0.0.0.0:443
    server crcvm $CRC_IP:443 check

listen api
    bind 0.0.0.0:6443
    server crcvm $CRC_IP:6443 check
EOF
- Start the haproxy service:
sudo systemctl start haproxy
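To ensure that haproxy is started automatically whenever the virtual machine is rebooted, and to confirm that it is listening on the expected ports, the following commands can also be run:
sudo systemctl enable haproxy
sudo systemctl status haproxy
sudo ss -tlnp | grep -E ':(80|443|6443)'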
Configure Remote Servers To Communicate With The OpenShift Local Instance
The Getting Started Guide includes a chapter that describes how dnsmasq can be installed and configured on remote servers to allow them to communicate with the remote OpenShift Local instance. However, we decided to take the simpler approach of manually modifying the /etc/hosts file on the machines hosting the WebSphere and Liberty servers that we wanted to register with our WebSphere Automation instance running in OpenShift Local.
The line that needs to be added to the /etc/hosts file is as follows, where <VIRTUAL_MACHINE_IP> is replaced with the IP address of the virtual machine that the OpenShift Local instance is running on:
<VIRTUAL_MACHINE_IP> console-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing cpd-websphere-automation.apps-crc.testing api.crc.testing cp-console.apps-crc.testing
Once the /etc/hosts file has been modified on a remote machine:
- The OpenShift console can be accessed using a browser on the remote machine using the following URL:
https://console-openshift-console.apps-crc.testing
- The WebSphere Automation UI can be accessed using a browser on the remote machine using the following URL:
https://cpd-websphere-automation.apps-crc.testing
- The URL of the usage metering service in WebSphere Automation that is used when configuring WebSphere and Liberty servers to register with WebSphere Automation is:
https://cpd-websphere-automation.apps-crc.testing/websphereauto/meteringapi
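A quick way to confirm that a remote machine can reach the cluster through haproxy is to issue requests to these URLs from that machine using curl. The -k option is used because the cluster presents self-signed certificates, and even a redirect response confirms that the request reached the cluster:
curl -k -I https://console-openshift-console.apps-crc.testing
curl -k -I https://cpd-websphere-automation.apps-crc.testing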
Once you are able to access the WebSphere Automation UI, you can retrieve the information required to register servers with WebSphere Automation using the UI as follows:
- In the Secure view of the WebSphere Automation UI, click the Servers tab.
- Click the Register + button:
- In the Register server fly-out, click either the WebSphere Liberty or WebSphere traditional radio button, depending on the type of server that you want to register:
- Copy the configuration from the relevant text fields as required
Conclusion
In this blog post, we have described how WebSphere Automation can be deployed to an instance of OpenShift Local running on a suitably configured virtual machine for the purpose of evaluating the offering as part of a proof of concept.
You can find more IBM Docs related to WebSphere Automation at https://www.ibm.com/docs/en/ws-automation.