In today’s fast-paced digital landscape, APIs are the backbone of modern applications. Managing them efficiently is critical for delivering secure, scalable, and reliable services. IBM API Connect (APIC) not only simplifies API lifecycle management but also integrates seamlessly with CI/CD pipelines to automate deployment and reduce manual overhead.
IBM API Connect supports automation through its Developer Toolkit CLI (apic) and the platform REST APIs, enabling integration with popular CI/CD tools such as Jenkins, GitHub Actions, and Azure DevOps.
You can wrap the CLI commands or REST API calls in bash scripts and hand them to your DevOps pipeline of choice.
Before running any apic commands, you first need to download the toolkit. This can be done directly from the API Connect instance using wget or curl; then extract the archive and make the apic binary executable:
curl https://${apimanagerendpoint}/client-downloads/toolkit-linux.tgz -k -o toolkit-linux.tgz
tar zxvf toolkit-linux.tgz
chmod 755 apic
To run any further apic commands, you need to connect to the API Connect instance by logging in with the toolkit login command:
./apic --accept-license login -s ${apimanagerendpoint} --realm $realm --username $user --password $password
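As an alternative to the toolkit login, the platform REST APIs mentioned above can issue a bearer token for use in plain curl calls. A minimal sketch, assuming the /api/token endpoint of the management server; TOOLKIT_CLIENT_ID and TOOLKIT_CLIENT_SECRET are placeholders for the client credentials registered for your instance:

```shell
#!/bin/bash
# Build the JSON body for the platform-API token request. The
# client_id/client_secret values come from the environment and are
# placeholders for the credentials registered for your instance.
make_token_payload() {
  printf '{"username":"%s","password":"%s","realm":"%s","client_id":"%s","client_secret":"%s","grant_type":"password"}' \
    "$1" "$2" "$3" "$TOOLKIT_CLIENT_ID" "$TOOLKIT_CLIENT_SECRET"
}

# Request a token from the management server and extract access_token.
# -k skips TLS verification, as in the toolkit download above; prefer a
# trusted CA bundle in production.
get_token() {
  curl -sk -X POST "https://${apimanagerendpoint}/api/token" \
    -H 'Content-Type: application/json' \
    -d "$(make_token_payload "$user" "$password" "$realm")" \
    | sed -n 's/.*"access_token": *"\([^"]*\)".*/\1/p'
}
```

The returned token can then be sent as an Authorization: Bearer header on subsequent platform REST API calls.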
Once logged in to the API Connect instance, you can perform the actions your pipeline requires using the various CLI commands.
Deleting a draft product
./apic -s ${apimanagerendpoint} -o ${porgname} draft-products:delete ${productname}:${productversion}
Creating a new draft product
./apic -s ${apimanagerendpoint} -o ${porgname} draft-products:create ${productname}
Staging and Publishing the products
./apic -s ${apimanagerendpoint} -o ${porgname} products:publish -c ${catalog} --stage ${productname}
./apic -s ${apimanagerendpoint} -o ${porgname} products:publish -c ${catalog} ${productname}
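The delete, create, and publish commands above can be combined into a single idempotent republish step. A minimal sketch, assuming the same ${apimanagerendpoint}, ${porgname}, and ${catalog} variables and a hypothetical product file passed as the first argument:

```shell
#!/bin/bash
# Replace any existing draft of the product, then publish it to the
# catalog. The delete is tolerated to fail when the draft does not
# exist yet, so the step also works on a first deployment.
republish() {
  local product_file=$1 name=$2 version=$3
  ./apic -s "${apimanagerendpoint}" -o "${porgname}" \
    draft-products:delete "${name}:${version}" || true
  ./apic -s "${apimanagerendpoint}" -o "${porgname}" \
    draft-products:create "${product_file}"
  ./apic -s "${apimanagerendpoint}" -o "${porgname}" \
    products:publish -c "${catalog}" "${product_file}"
}
```

For example: republish myproduct_1.0.0.yaml myproduct 1.0.0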
These are a few example API lifecycle commands. You can combine the different CLI commands to set up your API CI/CD pipeline with additional tasks, such as validating specifications with APIC governance (using the apic compliance:validate command) before the publish task.
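The validation step can be sketched as a small script that discovers product files the same way the sample task below does (grepping for product: 1.0.0) and fails fast on the first non-compliant product. APIC_SERVER, PORG, and RULESET are assumed stand-ins for your pipeline's configuration:

```shell
#!/bin/bash
# List files under a directory that look like product definitions
# (the same discovery the sample tasks use: files containing
# "product: 1.0.0").
list_products() {
  grep -ril 'product: 1.0.0' "$1" | sort
}

# Validate every discovered product against the governance ruleset and
# exit non-zero on the first failure, so the pipeline never reaches the
# publish task with a non-compliant product.
validate_all() {
  local p
  for p in $(list_products "$1"); do
    if ! ./apic -m governance compliance:validate \
        --server "$APIC_SERVER" --org "$PORG" --rulesets "$RULESET" "$p" \
        | grep -q 'errors: \[\]'; then
      echo "product validation failed: $p" >&2
      return 1
    fi
  done
}
```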
Once published, you can add test tasks to your pipeline, using curl commands or Postman scripts, and incorporate GitOps practices to enhance it further.
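A post-publish test task can be as simple as a curl call asserting the expected status code. A minimal sketch with a hypothetical health endpoint; GATEWAY_HOST, CLIENT_ID, and the /myapi/v1/health path are all placeholders for your own API:

```shell
#!/bin/bash
# Tiny assertion helper: fail with a message when the observed HTTP
# status differs from the expected one.
expect_status() {  # expect_status <expected> <actual>
  [ "$1" = "$2" ] || { echo "expected HTTP $1, got $2" >&2; return 1; }
}

# Call the published API through the gateway and assert a 200 response.
# -k skips TLS verification; use a trusted CA bundle in production.
smoke_test() {
  local code
  code=$(curl -sk -o /dev/null -w '%{http_code}' \
    -H "X-IBM-Client-Id: ${CLIENT_ID}" \
    "https://${GATEWAY_HOST}/${PORG}/${CATALOG}/myapi/v1/health")
  expect_status 200 "$code"
}
```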

An example APIC pipeline created with OpenShift Tekton Pipelines, with three tasks (git-clone, apic-governance, and apic-publish), is shown below.

APIC pipeline sample spec:
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: apic-pipeline
  namespace: apic-pipelines
spec:
  tasks:
    - name: apic-governance
      params:
        - name: baseimage
          value: 'fedora:latest'
        - name: hostname
          value: small-mgmt-api-manager-cp4i.intranet.ibm.com
        - name: platformapihost
          value: small-mgmt-platform-api-cp4i.intranet.ibm.com
        - name: subdirectory
          value: ./dir
        - name: user
          value: user1
        - name: password
          value: password1
        - name: realm
          value: provider/default-idp-2
        - name: catalog
          value: sandbox
        - name: providerorg
          value: porgname
        - name: productruleset
          value: product_rule1
      runAfter:
        - git-clone
      taskRef:
        kind: Task
        name: apic-governance
      workspaces:
        - name: workdir
          workspace: test
    - name: apic-publish
      params:
        - name: baseimage
          value: 'fedora:latest'
        - name: hostname
          value: small-mgmt-api-manager-cp4i.intranet.ibm.com
        - name: subdirectory
          value: ./dir
        - name: user
          value: user1
        - name: password
          value: password1
        - name: realm
          value: provider/default-idp-2
        - name: catalog
          value: sandbox
        - name: providerorg
          value: amit-porg
      runAfter:
        - apic-governance
      taskRef:
        kind: Task
        name: apic-publish
      workspaces:
        - name: workdir
          workspace: test
    - name: git-clone
      params:
        - name: url
          value: 'https://github.com/user/apitest'
        - name: submodules
          value: 'true'
        - name: depth
          value: '1'
        - name: sslVerify
          value: 'true'
        - name: crtFileName
          value: ca-bundle.crt
        - name: deleteExisting
          value: 'true'
        - name: verbose
          value: 'true'
        - name: gitInitImage
          value: 'registry.redhat.io/openshift-pipelines/pipelines-git-init-rhel8@sha256:a538c423e7a11aae6ae582a411fdb090936458075f99af4ce5add038bb6983e8'
        - name: userHome
          value: /tekton/home
      taskRef:
        kind: Task
        name: git-clone
      workspaces:
        - name: output
          workspace: test
  workspaces:
    - name: test
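To run the pipeline, create a PipelineRun that binds the test workspace to a volume. A minimal sketch, assuming an existing PersistentVolumeClaim named apic-pipeline-pvc (a placeholder):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: apic-pipeline-run-
  namespace: apic-pipelines
spec:
  pipelineRef:
    name: apic-pipeline
  workspaces:
    - name: test
      persistentVolumeClaim:
        claimName: apic-pipeline-pvc  # placeholder PVC name
```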
Tasks used in the apic-pipeline sample spec:
git-clone:

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: git-clone
spec:
  description: |-
    These Tasks are Git tasks to work with repositories used by other tasks in your Pipeline.

    The git-clone Task will clone a repo from the provided url into the output Workspace. By default the repo will be cloned into the root of your Workspace. You can clone into a subdirectory by setting this Task's subdirectory param. This Task also supports sparse checkouts. To perform a sparse checkout, pass a list of comma separated directory patterns to this Task's sparseCheckoutDirectories param.
  params:
    - description: Repository URL to clone from.
      name: url
      type: string
    - default: ''
      description: 'Revision to checkout. (branch, tag, sha, ref, etc...)'
      name: revision
      type: string
    - default: ''
      description: Refspec to fetch before checking out revision.
      name: refspec
      type: string
    - default: 'true'
      description: Initialize and fetch git submodules.
      name: submodules
      type: string
    - default: '1'
      description: 'Perform a shallow clone, fetching only the most recent N commits.'
      name: depth
      type: string
    - default: 'true'
      description: Set the `http.sslVerify` global git config. Setting this to `false` is not advised unless you are sure that you trust your git remote.
      name: sslVerify
      type: string
    - default: ca-bundle.crt
      description: File name of the mounted crt using the ssl-ca-directory workspace. Default value is ca-bundle.crt.
      name: crtFileName
      type: string
    - default: ''
      description: Subdirectory inside the `output` Workspace to clone the repo into.
      name: subdirectory
      type: string
    - default: ''
      description: Define the directory patterns to match or exclude when performing a sparse checkout.
      name: sparseCheckoutDirectories
      type: string
    - default: 'true'
      description: Clean out the contents of the destination directory if it already exists before cloning.
      name: deleteExisting
      type: string
    - default: ''
      description: HTTP proxy server for non-SSL requests.
      name: httpProxy
      type: string
    - default: ''
      description: HTTPS proxy server for SSL requests.
      name: httpsProxy
      type: string
    - default: ''
      description: Opt out of proxying HTTP/HTTPS requests.
      name: noProxy
      type: string
    - default: 'true'
      description: Log the commands that are executed during `git-clone`'s operation.
      name: verbose
      type: string
    - default: 'gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.40.2'
      description: The image providing the git-init binary that this Task runs.
      name: gitInitImage
      type: string
    - default: /home/git
      description: |
        Absolute path to the user's home directory.
      name: userHome
      type: string
  results:
    - description: The precise commit SHA that was fetched by this Task.
      name: commit
      type: string
    - description: The precise URL that was fetched by this Task.
      name: url
      type: string
    - description: The epoch timestamp of the commit that was fetched by this Task.
      name: committer-date
      type: string
  steps:
    - env:
        - name: HOME
          value: $(params.userHome)
        - name: PARAM_URL
          value: $(params.url)
        - name: PARAM_REVISION
          value: $(params.revision)
        - name: PARAM_REFSPEC
          value: $(params.refspec)
        - name: PARAM_SUBMODULES
          value: $(params.submodules)
        - name: PARAM_DEPTH
          value: $(params.depth)
        - name: PARAM_SSL_VERIFY
          value: $(params.sslVerify)
        - name: PARAM_CRT_FILENAME
          value: $(params.crtFileName)
        - name: PARAM_SUBDIRECTORY
          value: $(params.subdirectory)
        - name: PARAM_DELETE_EXISTING
          value: $(params.deleteExisting)
        - name: PARAM_HTTP_PROXY
          value: $(params.httpProxy)
        - name: PARAM_HTTPS_PROXY
          value: $(params.httpsProxy)
        - name: PARAM_NO_PROXY
          value: $(params.noProxy)
        - name: PARAM_VERBOSE
          value: $(params.verbose)
        - name: PARAM_SPARSE_CHECKOUT_DIRECTORIES
          value: $(params.sparseCheckoutDirectories)
        - name: PARAM_USER_HOME
          value: $(params.userHome)
        - name: WORKSPACE_OUTPUT_PATH
          value: $(workspaces.output.path)
        - name: WORKSPACE_SSH_DIRECTORY_BOUND
          value: $(workspaces.ssh-directory.bound)
        - name: WORKSPACE_SSH_DIRECTORY_PATH
          value: $(workspaces.ssh-directory.path)
        - name: WORKSPACE_BASIC_AUTH_DIRECTORY_BOUND
          value: $(workspaces.basic-auth.bound)
        - name: WORKSPACE_BASIC_AUTH_DIRECTORY_PATH
          value: $(workspaces.basic-auth.path)
        - name: WORKSPACE_SSL_CA_DIRECTORY_BOUND
          value: $(workspaces.ssl-ca-directory.bound)
        - name: WORKSPACE_SSL_CA_DIRECTORY_PATH
          value: $(workspaces.ssl-ca-directory.path)
      image: $(params.gitInitImage)
      name: clone
      script: |
        #!/usr/bin/env sh
        set -eu

        if [ "${PARAM_VERBOSE}" = "true" ] ; then
          set -x
        fi

        if [ "${WORKSPACE_BASIC_AUTH_DIRECTORY_BOUND}" = "true" ] ; then
          cp "${WORKSPACE_BASIC_AUTH_DIRECTORY_PATH}/.git-credentials" "${PARAM_USER_HOME}/.git-credentials"
          cp "${WORKSPACE_BASIC_AUTH_DIRECTORY_PATH}/.gitconfig" "${PARAM_USER_HOME}/.gitconfig"
          chmod 400 "${PARAM_USER_HOME}/.git-credentials"
          chmod 400 "${PARAM_USER_HOME}/.gitconfig"
        fi

        if [ "${WORKSPACE_SSH_DIRECTORY_BOUND}" = "true" ] ; then
          cp -R "${WORKSPACE_SSH_DIRECTORY_PATH}" "${PARAM_USER_HOME}"/.ssh
          chmod 700 "${PARAM_USER_HOME}"/.ssh
          chmod -R 400 "${PARAM_USER_HOME}"/.ssh/*
        fi

        if [ "${WORKSPACE_SSL_CA_DIRECTORY_BOUND}" = "true" ] ; then
          export GIT_SSL_CAPATH="${WORKSPACE_SSL_CA_DIRECTORY_PATH}"
          if [ "${PARAM_CRT_FILENAME}" != "" ] ; then
            export GIT_SSL_CAINFO="${WORKSPACE_SSL_CA_DIRECTORY_PATH}/${PARAM_CRT_FILENAME}"
          fi
        fi
        CHECKOUT_DIR="${WORKSPACE_OUTPUT_PATH}/${PARAM_SUBDIRECTORY}"

        cleandir() {
          # Delete any existing contents of the repo directory if it exists.
          #
          # We don't just "rm -rf ${CHECKOUT_DIR}" because ${CHECKOUT_DIR} might be "/"
          # or the root of a mounted volume.
          if [ -d "${CHECKOUT_DIR}" ] ; then
            # Delete non-hidden files and directories
            rm -rf "${CHECKOUT_DIR:?}"/*
            # Delete files and directories starting with . but excluding ..
            rm -rf "${CHECKOUT_DIR}"/.[!.]*
            # Delete files and directories starting with .. plus any other character
            rm -rf "${CHECKOUT_DIR}"/..?*
          fi
        }

        if [ "${PARAM_DELETE_EXISTING}" = "true" ] ; then
          cleandir || true
        fi

        test -z "${PARAM_HTTP_PROXY}" || export HTTP_PROXY="${PARAM_HTTP_PROXY}"
        test -z "${PARAM_HTTPS_PROXY}" || export HTTPS_PROXY="${PARAM_HTTPS_PROXY}"
        test -z "${PARAM_NO_PROXY}" || export NO_PROXY="${PARAM_NO_PROXY}"

        git config --global --add safe.directory "${WORKSPACE_OUTPUT_PATH}"
        /ko-app/git-init \
          -url="${PARAM_URL}" \
          -revision="${PARAM_REVISION}" \
          -refspec="${PARAM_REFSPEC}" \
          -path="${CHECKOUT_DIR}" \
          -sslVerify="${PARAM_SSL_VERIFY}" \
          -submodules="${PARAM_SUBMODULES}" \
          -depth="${PARAM_DEPTH}" \
          -sparseCheckoutDirectories="${PARAM_SPARSE_CHECKOUT_DIRECTORIES}"
        cd "${CHECKOUT_DIR}"
        RESULT_SHA="$(git rev-parse HEAD)"
        EXIT_CODE="$?"
        if [ "${EXIT_CODE}" != 0 ] ; then
          exit "${EXIT_CODE}"
        fi
        RESULT_COMMITTER_DATE="$(git log -1 --pretty=%ct)"
        printf "%s" "${RESULT_COMMITTER_DATE}" > "$(results.committer-date.path)"
        printf "%s" "${RESULT_SHA}" > "$(results.commit.path)"
        printf "%s" "${PARAM_URL}" > "$(results.url.path)"
      securityContext:
        runAsNonRoot: true
        runAsUser: 65532
  workspaces:
    - description: The git repo will be cloned onto the volume backing this Workspace.
      name: output
    # The following optional workspaces are declared so the workspace
    # variables referenced in the script above resolve.
    - description: A .ssh directory with private key, known_hosts, config, etc.
      name: ssh-directory
      optional: true
    - description: A Workspace containing a .gitconfig and .git-credentials file for basic auth.
      name: basic-auth
      optional: true
    - description: A Workspace containing CA certificates for securely connecting to the git remote.
      name: ssl-ca-directory
      optional: true
apic-governance:

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: apic-governance
  namespace: apic-pipelines
spec:
  description: Validate a Product before Publish
  params:
    - default: 'fedora:latest'
      description: Base container image
      name: baseimage
      type: string
    - default: apiui.apic.com
      description: API Connect hostname
      name: hostname
      type: string
    - description: Platform API hostname
      name: platformapihost
      type: string
    - default: ./dir
      description: Subdirectory containing the products to be validated
      name: subdirectory
      type: string
    - description: User to log in to the catalog with
      name: user
      type: string
    - description: Password for the user account
      name: password
      type: string
    - default: provider/default-idp-2
      description: User registry realm
      name: realm
      type: string
    - default: sandbox
      description: Target catalog
      name: catalog
      type: string
    - default: demo
      description: Target provider org
      name: providerorg
      type: string
    - default: ' '
      description: Target space (leave blank if spaces are not used)
      name: space
      type: string
    - description: Product rulesets to be used for validation
      name: productruleset
      type: string
  steps:
    - computeResources: {}
      env:
        - name: dir
          value: $(params.subdirectory)
        - name: url
          value: $(params.hostname)
        - name: platformurl
          value: $(params.platformapihost)
        - name: user
          value: $(params.user)
        - name: password
          value: $(params.password)
        - name: realm
          value: $(params.realm)
        - name: target_catalog
          value: $(params.catalog)
        - name: target_porg
          value: $(params.providerorg)
        - name: target_space
          value: $(params.space)
        - name: productruleset
          value: $(params.productruleset)
      image: $(params.baseimage)
      name: ansible-builder-create
      script: |
        #!/bin/bash
        if [ "$target_space" == " " ] ; then
          scope="catalog"
        else
          scope="space"
        fi
        rm -rf toolkit*
        echo Downloading CLI tool
        curl https://$url/client-downloads/toolkit-linux.tgz -k -o toolkit-linux.tgz
        tar zxvf toolkit-linux.tgz
        chmod 755 apic
        echo Login
        ./apic --accept-license login -s $platformurl --realm $realm --username $user --password $password
        product_list=$(grep -ri product:\ 1.0.0 $dir | sed -e s/:.\*//)
        for i in $product_list ; do
          echo "validating ./apic -m governance compliance:validate --server $platformurl --org $target_porg --rulesets $productruleset $i"
          response=$(./apic -m governance compliance:validate --server $platformurl --org $target_porg --rulesets $productruleset $i)
          if ! [[ $response == *"errors: []"* ]]; then
            echo "Product validation failed : $response"
            exit 1
          fi
        done
      workingDir: $(workspaces.workdir.path)
  workspaces:
    - name: workdir
apic-publish:

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: apic-publish
  namespace: apic-pipelines
spec:
  description: Publishes a Product to API Connect
  params:
    - default: 'fedora:latest'
      description: Base container image
      name: baseimage
      type: string
    - default: apiui.apic.com
      description: API Connect hostname
      name: hostname
      type: string
    - default: ./dir
      description: Subdirectory containing the products to be published
      name: subdirectory
      type: string
    - description: User to log in to the catalog with
      name: user
      type: string
    - description: Password for the user account
      name: password
      type: string
    - default: provider/default-idp-2
      description: User registry realm
      name: realm
      type: string
    - default: sandbox
      description: Target catalog
      name: catalog
      type: string
    - default: demo
      description: Target provider org
      name: providerorg
      type: string
    - default: ' '
      description: Target space (leave blank if spaces are not used)
      name: space
      type: string
    - default: ' '
      description: Auto migrate subscriptions (leave blank if not needed)
      name: replace_subscriptions
      type: string
  steps:
    - computeResources: {}
      env:
        - name: dir
          value: $(params.subdirectory)
        - name: url
          value: $(params.hostname)
        - name: user
          value: $(params.user)
        - name: password
          value: $(params.password)
        - name: realm
          value: $(params.realm)
        - name: target_catalog
          value: $(params.catalog)
        - name: target_porg
          value: $(params.providerorg)
        - name: target_space
          value: $(params.space)
        - name: replace_subscriptions
          value: $(params.replace_subscriptions)
      image: $(params.baseimage)
      name: ansible-builder-create
      script: |
        #!/bin/bash
        if [ "$replace_subscriptions" == " " ] ; then
          migrate_subscriptions=""
        else
          migrate_subscriptions="--migrate_subscriptions"
        fi
        if [ "$target_space" == " " ] ; then
          scope="catalog"
        else
          scope="space"
        fi
        rm -rf toolkit*
        echo Downloading CLI tool
        curl https://$url/client-downloads/toolkit-linux.tgz -k -o toolkit-linux.tgz
        tar zxvf toolkit-linux.tgz
        chmod 755 apic
        echo Login
        ./apic --accept-license login -s $url --realm $realm --username $user --password $password
        product_list=$(grep -ri product:\ 1.0.0 $dir | sed -e s/:.\*//)
        echo Publishing to $url
        for i in $product_list ; do
          echo "publishing ./apic products:publish $i -s $url -o $target_porg -c $target_catalog --scope $scope $migrate_subscriptions"
          ./apic products:publish $i -s $url -o $target_porg -c $target_catalog --scope $scope $migrate_subscriptions
        done
      workingDir: $(workspaces.workdir.path)
  workspaces:
    - name: workdir
Reference - https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.12/html/about_openshift_pipelines/understanding-openshift-pipelines