
Maximo Application Suite Automation Tooling

By Neil Patterson posted Mon November 13, 2023 09:05 AM


Authored by Kathleen Ho Sang and Neil Patterson


IBM Maximo Application Suite (MAS) has automation for its management and operations, powered by Red Hat’s Ansible and OpenShift Pipelines. This automation is available in public repositories, maintained by IBM Development. This document will summarize how to use and contribute to this automation.

The automation has three components that work together:

1. The CLI utility, mascli

mascli is a command line interface built to help manage MAS and its components. It is powered by Ansible and OpenShift Pipelines, which are described in more detail below. The public repository can be viewed here. See the available mascli commands below.

2. Ansible
Ansible is an open source, command-line IT automation software application written in Python. It can configure systems, deploy software, and orchestrate advanced workflows to support application deployment, system updates, and more. Ansible's main strengths are simplicity and ease of use. Ansible is at the core of the automation that was built for MAS. It can be executed independently, and it also works under the hood of mascli and OpenShift Pipelines. The Ansible collection for MAS, ibm.mas_devops, is available from Ansible Galaxy here, and the source code is available in GitHub here.

Ansible Galaxy is a platform and community hub for finding, sharing, and managing collections of Ansible roles. It serves as a repository for Ansible roles, which are reusable sets of tasks and configurations that can be applied to multiple hosts. This simplifies the process of using and managing roles, which are fundamental building blocks in Ansible automation.
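To illustrate that structure, a minimal playbook that applies a single role from the collection might look like the sketch below. This is illustrative only; the cluster_monitoring role shown here is a real role in the collection, but each role has its own required variables, which are covered in the collection documentation.

```yaml
# Illustrative sketch: a playbook applying one role from the
# ibm.mas_devops collection using the standard hosts/roles structure.
- hosts: localhost
  any_errors_fatal: true
  roles:
    - ibm.mas_devops.cluster_monitoring
```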


3. OpenShift Pipelines
Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces several standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions. When using mascli, the commands will launch an OpenShift Pipeline to execute the underlying Ansible code that completes the management task. An example of the pipeline used to install MAS is shown later in this document.
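To give a flavour of those building blocks, here is a minimal standalone Tekton Task. This is not one of the MAS tasks; the names and image are illustrative, showing only the general shape of the CRD.

```yaml
# Illustrative Tekton Task: one step that runs a small script in a
# container image. The MAS pipeline tasks follow this same pattern.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-task
spec:
  steps:
    - name: say-hello
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "Hello from a Tekton step"
```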

Using the MAS Command Line Interface

The public repository for the command line interface for MAS can be viewed here. Each version of the mascli maps to an ibm.mas_devops Ansible version.

Documentation to use the mascli can be viewed here.

There are two main ways to execute mascli commands: interactive mode or non-interactive mode. In interactive mode, mascli will prompt the user for all the information required to execute. In non-interactive mode, the user passes all the required information as arguments; for any optional parameters, the defaults will be used unless otherwise specified. See further documentation on non-interactive mode here.


Let’s use a step-by-step example to understand how mascli works.

1. Pull the mascli Docker image. It can be pulled directly or, in air-gapped environments, mirrored to the internal image registry, since there is no access to public repositories. The image needs to be pulled from the machine with connectivity to the OpenShift cluster, with the Docker requirements installed. More information can be found here.

By default, the user will start in the mascli directory. The ansible-devops directory is copied into the container's file system; this is the Ansible content that mascli will use.

2. Next, execute the desired CLI command. You can simply run mas install and the CLI will prompt you for the required information in interactive mode, or you can pass installation variables as part of the command in non-interactive mode. Non-interactive mode is recommended for creating identical environments, since all installation variables can be passed in one command, which can be cleanly saved and re-used.

The command below will install MAS and MAS Manage, using OpenShift Container Storage for persistent storage and Db2 for the MAS Manage database. Below is the non-interactive mode command to execute, along with the first few prompts from mas install in interactive mode. Either may be run for the same result. When using non-interactive mode, the recommendation is to set an environment variable for any keys, passwords, or other sensitive data, as shown below.

oc login

mas install -i mas1 -w ws1 -W "My Workspace" -c v8-230929-amd64 --mas-channel 8.10.x \
  --manage-channel 8.6.x --db2u-channel v110508.0 --db2u-manage --manage-components base=latest,health=latest \
  --ibm-entitlement-key $IBM_ENTITLEMENT_KEY \
  --license-id 1e0920e8 --license-file /mnt/home/entitlement.lic \
  --uds-email --uds-firstname Katie --uds-lastname H \
  --storage-rwo ocs-storagecluster-ceph-rbd --storage-rwx ocs-storagecluster-cephfs \
  --storage-pipeline ocs-storagecluster-cephfs --storage-accessmode ReadWriteMany \
  --no-confirm


Both interactive mode and non-interactive mode will launch an OpenShift Pipeline where you can monitor progress. Some other mascli commands execute the automation without using OpenShift Pipelines; for these you will see log files upon execution, rather than an OpenShift Pipelines link.

Follow the progress of the mascli command using the OpenShift Pipelines tooling. Look at the task logs, and check error messages or logs if the automation is unable to complete.

3. View pipeline progress in the OpenShift console.

Navigate to the mas-install pipeline and click on the last run.

You should see a screen like this:

Look at the logs as tasks are executed and monitor progress.

How the Command Line Interface Works

In this section we will demonstrate the inner workings of the MAS automation using the install command as an example. The focus here is on the mechanics of the tooling and not necessarily the functional details.

The following sequence diagram shows how the key components of the solution interact for the install process.

The details of the steps are as follows:

1. From a bastion node running Docker or Podman, the user runs the version of the command line interface (CLI) container required for the install. A configuration directory, MAS_CONFIG_DIR, needs to be passed as a volume to the container; this directory will contain any files needed for installation, such as the MAS license file. CLI_VERSION is the version of the CLI container to run. This example will use version 7.2.0.

podman run -dit --rm --pull always --name mas-install -v ${MAS_CONFIG_DIR}:/home/local quay.io/ibmmas/cli:${CLI_VERSION}
podman exec -it mas-install bash

2. The user will need to log in to the OpenShift cluster from the CLI container. The login details for the cluster can be obtained from the OpenShift console as shown below:

Execute the command in the container:

oc login --token=<your token> --server=<your server:port>

3. From the CLI, the user invokes the mas install program, a Linux shell script. As discussed earlier, this can be interactive or not; passing parameters to the install will cause it to run in non-interactive mode. We then run the installer with the following command. Note that you will have to replace the --ibm-entitlement-key, --license-id, and --license-file values with your own; the license file should be available in the /home/local directory of the container, configured earlier via the MAS_CONFIG_DIR volume. Also, the UDS environment variables should be replaced with something more appropriate. Note that in this example we are assuming that storage is provided by OpenShift Container Storage:

mas install -i inst1 \
-w wkspc1 \
-W Workspace_1 \
-c v8-230829-amd64 \
--mas-channel 8.10.x \
--ibm-entitlement-key $IBM_ENTITLEMENT_KEY \
--license-id <your license id> \
--license-file /home/local/<your license file> \
--uds-email \
--uds-firstname Neil \
--uds-lastname Patterson \
--storage-rwo ocs-storagecluster-ceph-rbd \
--storage-rwx ocs-storagecluster-cephfs \
--storage-pipeline ocs-storagecluster-cephfs \
--storage-accessmode ReadWriteMany

Regardless of the way the install is run, the results are the same. Environment variables are set as part of the interaction, or they are set from the arguments passed to the mas shell script. As an example, the value passed for the -i (--mas-instance-id) parameter is exported as the MAS_INSTANCE_ID environment variable.
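That mapping can be sketched as a fragment of shell. This is a simplification for illustration only, not the actual parser in the mas script, which handles many more options.

```shell
# Simplified sketch: install flags are exported as environment variables.
set -- -i inst1 -w wkspc1          # simulate arguments passed to mas install
while [ $# -gt 0 ]; do
  case "$1" in
    -i|--mas-instance-id)  export MAS_INSTANCE_ID="$2";  shift 2 ;;
    -w|--mas-workspace-id) export MAS_WORKSPACE_ID="$2"; shift 2 ;;
    *) shift ;;
  esac
done
echo "MAS_INSTANCE_ID=${MAS_INSTANCE_ID} MAS_WORKSPACE_ID=${MAS_WORKSPACE_ID}"
```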

4. If Tekton pipelines are not installed in the cluster, they will be installed into the openshift-pipelines namespace.

5. The pipeline run is prepared. The PipelineRun custom resource is created locally from a template and all environment variables in the template are replaced with those set up in the environment of the CLI.
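The substitution in step 5 can be illustrated with a small sketch. The file names and template content here are hypothetical; the CLI performs this rendering internally.

```shell
# Hypothetical sketch: placeholders in a PipelineRun template are replaced
# with values from the CLI's environment before the resource is applied.
export MAS_INSTANCE_ID=inst1
cat > /tmp/pipelinerun.tpl <<'EOF'
namespace: mas-${MAS_INSTANCE_ID}-pipelines
EOF
sed "s/\${MAS_INSTANCE_ID}/${MAS_INSTANCE_ID}/g" /tmp/pipelinerun.tpl > /tmp/pipelinerun.yaml
cat /tmp/pipelinerun.yaml
```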

6. If the MAS pipelines are not installed, the installer will install them into the mas-<instance-id>-pipelines project. The Pipeline contains the definition of which Tasks should be run and the parameters that should be passed to each Task. It also specifies the sequencing of the Tasks through the use of the runAfter field. The example below shows the pipeline entry for the ibm-catalogs task, which, as can be seen, will run after the pre-install-check task.

- name: ibm-catalogs
  params:
    - name: image_pull_policy
      value: $(params.image_pull_policy)
    - name: devops_suite_name
      value: setup-ibm-catalogs
    - name: mas_catalog_version
      value: $(params.mas_catalog_version)
    - name: artifactory_username
      value: $(params.artifactory_username)
    - name: artifactory_token
      value: $(params.artifactory_token)
    - name: mas_catalog_digest
      value: $(params.mas_catalog_digest)
  runAfter:
    - pre-install-check

There is a single instance of a Pipeline on a cluster. Execution of the pipeline is conducted through the use of PipelineRuns.

7. A PipelineRun allows you to instantiate and execute a Pipeline on-cluster. A Pipeline specifies one or more Tasks in the desired order of execution. A PipelineRun executes the Tasks in the Pipeline in the order they are specified, until all Tasks have executed successfully or a failure occurs. The PipelineRun custom resource that was prepared earlier is applied to the cluster; the operator sees this new custom resource and starts to execute the pipeline run.
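A skeleton of such a PipelineRun is shown below. The field values are illustrative (instance id inst1), not the full resource the CLI generates, but the pipelineRef matches the mas-install pipeline seen in the console earlier.

```yaml
# Illustrative skeleton of a PipelineRun referencing the mas-install
# Pipeline in the instance's pipelines namespace.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: mas-install-
  namespace: mas-inst1-pipelines
spec:
  pipelineRef:
    name: mas-install
  params:
    - name: mas_instance_id
      value: inst1
```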

8. For each Task in the Pipeline that is in scope for the pipeline run, a new instance of the task is created as a new TaskRun custom resource. This custom resource is populated with the parameters needed to run the task, taken from the PipelineRun. For each step defined in the task, a new container is instantiated from the supplied image parameter, along with the name of the role the step is to run.

- command:
    - /opt/app-root/src/
    - ibm-catalogs
  image: ''

9. This new container then runs the script, passing as a parameter the name of the role to run. The script sets the role name in the environment and then invokes the Ansible DevOps playbook run_role:

export ROLE_NAME=$1
ansible-playbook ibm.mas_devops.run_role

10. The run_role playbook gets the role name from the environment and constructs the name of the role to run, as shown below:

- hosts: localhost
  any_errors_fatal: true
  vars:
    role_name: "{{ lookup('env', 'ROLE_NAME') }}"
  roles:
    - "ibm.mas_devops.{{ role_name }}"

Using an Ansible Playbook

In the section above we looked at how the CLI works, and how the CLI makes use of a single playbook from the Ansible DevOps collection. But there are many more playbooks shipped with this code.

The CLI is designed to offer a simplified, guided installation; it makes certain decisions for the user and does not expose every possibility available when using the underlying Ansible collection directly. The power afforded by the Ansible collection comes with increased complexity and the potential to misconfigure an installation. The CLI delivers reliable, proven MAS installs based on recommendations from the MAS development team, while the Ansible collection offers users the ability to tweak almost every aspect of an install, and the sample playbooks included provide a starting point to build upon.

The instructions for installing the Ansible collection can be found here. Note that there are a couple of options: installing it locally, or using the CLI container where all the required tools are preinstalled. We will use the container, as we did in the previous example. We will add the Manage component to our solution using the Ansible playbook oneclick_add_manage, which is documented here. We will use the documented approach to show how the playbook can support a Manage installation using an external database.

As with the previous example, run the CLI container and exec into it.

podman run -dit --rm --pull always --name mas-install -v ${MAS_CONFIG_DIR}:/home/local quay.io/ibmmas/cli:${CLI_VERSION}
podman exec -it mas-install bash

We then log in to our target OpenShift cluster:

oc login --token=<your token> --server=<your server:port>

To run a playbook, we are responsible for setting up the environment variables first. The following code sets up the variables we need to add manage to our installation. Note that some environment variables will need to be populated with values for your system.

export MAS_CONFIG_DIR=/home/local
export MAS_INSTANCE_ID=<instance id>
export MAS_WORKSPACE_ID=<workspace id>
export MAS_WORKSPACE_NAME=<workspace name>
export CATALOG=v8-230111-amd64
export LICENSE_ID=<license id>
export LICENSE_FILE=/home/local/license.dat
export MAS_APP_CHANNEL=8.6.x
export MAS_APP_ID=manage
export DB_INSTANCE_ID=<database instance id>
export MAS_JDBC_USER=<database user>
export MAS_JDBC_PASSWORD=<database password>
export MAS_JDBC_URL=jdbc:db2://<server and port of the database>/<database name>
export MAS_APP_SETTINGS_DB2_SCHEMA=<database schema>
export MAS_CONFIG_SCOPE=wsapp
export MAS_APPWS_BINDINGS_JDBC=workspace-application
export MAS_APPWS_COMPONENTS="base=latest"
export SSL_ENABLED=false
export STORAGE_RWO=ocs-storagecluster-ceph-rbd
export STORAGE_RWX=ocs-storagecluster-cephfs

To install Manage, the playbook is executed with the following command:

ansible-playbook ibm.mas_devops.oneclick_add_manage

The playbook will run a number of Ansible roles to deploy the Manage application. Examples of the roles that are executed are:

  • ibm.mas_devops.gencfg_jdbc
  • ibm.mas_devops.suite_config
  • ibm.mas_devops.suite_app_install
  • ibm.mas_devops.suite_app_config

Note that unlike the CLI install documented above, these roles are executed not via the pipelines but directly by the playbook. When the playbook has finished, a play recap will be provided in the CLI shell:

PLAY RECAP *********************************************************************************************************************************************************************************
localhost : ok=51 changed=1 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0

The logs from running the playbook can also be found in the ${HOME}/.ansible.log file of the CLI container instance.

Using a Role

The documentation for the Ansible DevOps collection, which can be accessed here, describes how an individual role can be executed. Two options are given: invoking the role directly, or using the run_role playbook (the same playbook that the pipelines code uses, as shown earlier in this document). The documentation describes the mandatory and optional parameters that each role supports. We will continue our installation by adding cluster monitoring to our cluster using the run_role playbook. The documentation for the role can be found here. We will run the role from the same container in which we ran the playbook. As with the playbook, set up the parameters needed by the role (note that some of these would be defaulted, but we set them anyway as part of this example):

export PROMETHEUS_STORAGE_CLASS=ocs-storagecluster-cephfs
export PROMETHEUS_ALERTMGR_STORAGE_CLASS=ocs-storagecluster-cephfs
export GRAFANA_INSTANCE_STORAGE_CLASS=ocs-storagecluster-cephfs
export ROLE_NAME=cluster_monitoring

The role can then be executed with the following command:

ansible-playbook ibm.mas_devops.run_role

As with the playbook execution, a RECAP is provided and the logs are available in the same location within the CLI container.


There is a robust set of automation scripts to aid in the installation, operation, and management of Maximo Application Suite. Feedback is encouraged, and can be raised as tickets in the relevant GitHub repositories: simply navigate to the relevant repository, click “Issues”, then click “New Issue”. The ansible-devops issue page can be viewed here, and the mascli issue page can be viewed here.

Both positive and negative feedback are welcome, no matter how small. User feedback drives the development of existing and new capabilities.

In addition to raising issues or providing feedback, both the Ansible collection and the CLI accept direct contributions. Documentation on how to do so can be viewed here and here, respectively.




mascli Documentation

mas install Documentation

mascli Public Repository

mascli Issues Page

Contributing to mascli

ibm.mas_devops Ansible Galaxy

ibm.mas_devops source code (GitHub)

Ansible Usage Documentation

Install Manage using Ansible Documentation

Cluster Monitoring Ansible Role Documentation

ansible-devops Issues Page

Contributing to ansible-devops


Special thanks to David Parker and Jenny Wang from the development team for their help in putting this together.