ACE containers: choosing a base image

By Trevor Dolby posted Mon February 12, 2024 09:02 AM

  

App Connect Enterprise has been supported in containers since IBM Integration Bus v10, but changes in the technology landscape since then have led to a variety of container images being created for different purposes, and some existing demos and blog posts have become out of date as a result.

Having multiple options provides more flexibility, but the knowledge needed to choose the correct image and build approach has grown along with that flexibility. This blog post attempts to explain some of the factors to consider when making that choice, starting with a common scenario: a delivery pipeline that creates application images.

TL;DR: use ace-server-prod with the ACE operator, and ace or a bespoke image for everything else.

Container base images and application pipelines

Application pipelines that produce container images require the choice of base image to be made almost immediately: as well as including the application itself, the containers being built will normally have the ACE product installed in the image, and that install must come from somewhere. Not only must the containers be built from a base image of some kind (ACE application pipelines would rarely use "FROM scratch"), but they will also be run by container technology of some kind (Docker, Kubernetes, OpenShift, etc) both at runtime and during testing of the container in the pipeline:

The base image might seem to be independent of the container management technology, but this is only true up to a point: as described below, the container startup code (the entrypoint and associated scripts) may depend on very specific initial setup (volumes, environment variables, etc) provided by the infrastructure that starts the container, so in some cases the two aspects are more tightly coupled than they appear.
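
As a minimal sketch of why the choice comes up so early, an application Dockerfile in such a pipeline typically looks something like the following; the base image reference, user name, and work directory path are placeholders rather than real values:

# Hypothetical application Dockerfile used by a pipeline; the base image must be chosen up front
FROM registry.example.com/chosen-ace-base-image:12.0.11.0
# Copy in an ACE work directory built earlier in the pipeline (for example with ibmint)
COPY --chown=aceuser:aceuser ace-server/ /home/aceuser/ace-server/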

Note that there is another ACE operator use case (not covered in the picture above) where there is a BAR file built in the pipeline instead of an application container (using the "Fried" approach described below) and yet a custom image is still needed to hold libraries or other artifacts that are too large to fit in a "generic files" configuration. In that case, however, there is very little choice in the base image and it only needs to be built once per ACE container release, so it is not the main focus of this article.

Images and repos background

Before describing more of the details on which base image to use, it's worth considering some of the possible approaches to building ACE container images for use as base images, and where they are (or have been) published as pre-built containers or as source. There are often trade-offs between ease of getting going (where it's simplest to just use an IBM-built image) and capabilities or features (where additional libraries or products might be needed), and so images can be categorized as follows:

"Bespoke" images, created by users from the ACE software installation package in a custom container build

These could use IBM-provided Dockerfiles as a starting point or could be completely custom, but are independent of the IBM-built images. This approach provides maximum flexibility, including reducing the image size by leaving out various components, using Ubuntu or other (ACE-supported) distros instead of Red Hat UBI, etc (see the sketch after the support notes below).

For these images, ACE itself is fully supported in container environments, but the Dockerfiles are not supported by IBM because they are customer-written:

    • Issues with ACE itself such as ESQL code not working as documented, flow attributes not working correctly, etc, would be accepted as valid support cases by IBM.
    • Base image issues such as vulnerabilities in operating system packages would not be valid ACE support cases.
    • Problems installing Maven or other tools would also not be valid ACE support cases.
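
As a rough sketch of the bespoke approach described above, the following assumes an ACE install package has already been downloaded into the build context; the package name, distro, packages, and license-acceptance step are illustrative and should be checked against the IBM-provided Dockerfiles and documentation for the release in use:

# Hypothetical bespoke Dockerfile; entrypoint, user setup, and component trimming are omitted
FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
RUN microdnf install -y tar gzip && microdnf clean all
COPY ace-12.0.x.y.tar.gz /tmp/
RUN mkdir -p /opt/ibm/ace-12 \
 && tar -xzf /tmp/ace-12.0.x.y.tar.gz --directory /opt/ibm/ace-12 --strip-components 1 \
 && /opt/ibm/ace-12/ace make registry global accept license deferred \
 && rm /tmp/ace-12.0.x.y.tar.gz
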
"General-purpose" images created by IBM for use in any container environment

Bespoke images require expertise in writing Dockerfiles and building container images, so it's often easier to build on top of a pre-built container image provided by IBM. The current example of this is the ace image (others have existed in the past - see below), and these images can be used in any container environment including Docker running on a VM, Kubernetes, etc. This is a common approach when not using the ACE operator.
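
As an illustration of that flexibility, a general-purpose image can be started directly with Docker; the tag, ports, and environment variable below are assumptions based on the ot4i/ace-docker documentation rather than a definitive invocation:

# Log in to cp.icr.io with an IBM entitlement key first, then run the server locally
docker run --name aceserver -p 7600:7600 -p 7800:7800 --env LICENSE=accept cp.icr.io/cp/appc/ace:latest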

"ACE certified container" (ACEcc) images created by IBM for use with the ACE operator

General-purpose images require scripting to configure the ACE server (adding credentials, setting up database definitions, etc), and ACEcc provides this out-of-the-box in conjunction with the ACE operator. This relies on various configurations being created up front, but the actual work of creating (for example) odbc.ini files and setting environment variables is then handled by the "runaceserver" command at container startup, so no scripting is required. The ACEcc ace-server-prod image can be used in one of two ways:

    • "Fried", where one of more BAR files are downloaded at container start time. This is the simplest way, requiring no container builds at all, but doesn't handle solutions that require large dependent libraries (among other things). Note that this approach doesn't require choosing a base image or any knowledge of container builds, as all of this is provided by IBM, but does normally require a highly-available BAR file repository, as otherwise containers may fail to start correctly.
    • Baked”, where ACEcc is used as a base image and a "custom image" is created by users with applications, config, etc, added in along with any other prereqs such as database drivers. This is the fastest-starting approach because the ACE work directory can be optimized during the container build process, but still benefits from ACEcc configuration handling at container startup.
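
A minimal sketch of a "baked" custom image follows; the tag, directory layout, and the idea of copying in a pre-built work directory are assumptions, and the operator documentation and demo pipelines are the definitive reference:

# Hypothetical "baked" image: ACEcc plus application content, still started by the ACE operator
FROM cp.icr.io/cp/appc/ace-server-prod:12.0.11.0-r1
# Work directory built earlier in the pipeline; runaceserver still applies configurations at startup
COPY --chown=aceuser:aceuser ace-server/ /home/aceuser/ace-server/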

For more details, the following blog posts provide background on the ACE operator and what it does, "baked" versus "fried", and more:

What is an Operator and why did we create one for IBM App Connect?
Exploring the IntegrationServer resource of the IBM App Connect Operator
Comparing styles of container deployment for IBM App Connect (a.k.a baked vs fried!)

These images can all be used as base images in ACE application build pipelines, with commands such as ibmint used to create ACE work directories and container startup scripting then merging in credentials and other configuration information at startup. ACEcc greatly reduces the need for container startup scripting, but customization is still possible through ACE mechanisms such as startup scripts.
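
For example, a pipeline stage might populate a work directory from source along these lines; the paths and project name are hypothetical, and the exact ibmint options should be verified with "ibmint help" for the fixpack in use:

# Build an ACE work directory from source checked out by the pipeline
ibmint deploy --input-path /work/source --output-work-directory /work/ace-server --project MyApplication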

As well as several different images for different use cases, there have also been several public repositories and container registries for Dockerfiles and pre-built containers. It seems likely that many readers of this article will have used ACE/IIB in containers for some time, so the historical (no longer used) information is included to help older solutions move forward.

  • ot4i/iib-docker (source) and ibmcom/iib[-server] (Docker Hub image registry) for IIBv10 integration nodes in containers. Original location for Dockerfiles and built images, which could be used as general-purpose containers (with Helm charts for Kubernetes). No longer used due to ACE being a better fit for containerization.
  • ot4i/ace-docker (source) for ACE (v11 and v12) Dockerfiles. Could originally be used for building general-purpose images or operator- and helm-compatible images. Now changed to focus on general-purpose and bespoke use cases, with samples showing how to extend the containers to include pre-reqs, applications, etc. The images built from the Dockerfiles in this repo cannot be used with the ACE operator, but can be used in all other scenarios.
  • ibmcom/ace[-server] (Docker Hub image registry) was the original location for all pre-built ACE container images built from the ace-docker repo. No authentication required, but the license must still be accepted. No longer used, with new pre-built images published via cp.icr.io.
  • cp.icr.io/cp/appc (IBM-provided image registry) is the current location for both ACEcc (ace-server-prod) and the general-purpose ace image built from the main Dockerfile in ot4i/ace-docker. Authentication is required to access the images.

Timeline

The timeline for all of the options and scenarios (including some that are no longer used) listed above is as follows:

As can be seen, one of the key inflection points is at 12.0.4, where several things happened:

Part of the reason for the changes happening at 12.0.4 was the increasing level of container-friendly capability in the ACE v12 product itself: options such as external credentials providers and startup scripts were available in the base product by then, so virtually all the capabilities provided by the older container-provided scripting were now available out-of-the-box.

Runtime base image choices

For the runtime container base image, the main split is between the ACE operator (with the certified container) and other use cases:

Operator: ACEcc is binary only, using cp.icr.io/cp/appc/ace-server-prod

  • This is the only image that can be used with the ACE operator.
  • Other images do not work, and the ACEcc image is not intended to be run locally.
  • Can be extended for the "fried" use case (additional libraries added in), and the "baked" case with whole applications included (see the sketch after this list).
  • Images built from ot4i/ace-docker cannot be used.
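
As an illustration of how a custom image built on ace-server-prod is referenced, a heavily simplified (and partly hypothetical) IntegrationServer custom resource might look like the following; the field names follow the App Connect operator CRD, but the license values and image reference are placeholders:

apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: my-application
spec:
  license:
    accept: true
    license: <license-id>
    use: <license-use>
  version: '12.0'
  replicas: 1
  pod:
    containers:
      runtime:
        image: image-registry.example.com/my-application:1.0.0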

All others: ot4i/ace-docker used to create the cp.icr.io/cp/appc/ace image

  • Used as a base for general-purpose images and cannot be used with the operator.
  • Can include MQ client, etc, and can be run locally for test or other purposes.
  • Can also be built by customers, and the various Dockerfiles modified as needed.

Bespoke container builds continue to work as before, and are still supported for use cases other than with the ACE operator. The ACE product is supported in containers in general as described at https://www.ibm.com/docs/en/app-connect/12.0?topic=docker-support-linux-systems for bespoke containers (or those built on the ace image).

Detecting different base images

While it is usually obvious which base image is in use simply from looking at the FROM line in a Dockerfile, this may not be sufficient if the image has been shadowed to a private registry and tagged something like official-ace-image:12.0.11, as the tag does not say whether it is ace or ace-server-prod. Although a detailed inspection of the image would show which it is (the presence of the runaceserver binary indicates ace-server-prod, for example), a quick check of the container log will usually provide enough information.

For ace-server-prod, the container startup would look as follows, starting with the runaceserver output:

2024-02-27T20:37:56.335Z Image created: 2024-01-25T17:09:10+00:00
2024-02-27T20:37:56.336Z ACE version: 12.0.11.0
2024-02-27T20:37:56.336Z ACE level: S1200S-L240112.10122
2024-02-27T20:37:56.336Z Go Version: go1.21.6
2024-02-27T20:37:56.336Z Go OS/Arch: linux/amd64
2024-02-27T20:37:56.336Z Checking for valid working directory
...

while the ace image starts with the IntegrationServer itself:

2024-02-27 20:44:57.284948: BIP1990I: Integration server 'ace-server' starting initialization; version '12.0.11.0' (64-bit)
2024-02-27 20:44:57.486924: BIP9905I: Initializing resource managers.
...
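
A quick way to trigger that log output without any other infrastructure is to run the shadowed image directly and look at the first few lines; this is illustrative rather than definitive, the tag is the hypothetical one from above, and the LICENSE variable may or may not be needed depending on the image:

# Start the image with its default entrypoint and inspect the opening log lines
docker run --rm --env LICENSE=accept official-ace-image:12.0.11 2>&1 | head -n 5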

ACE operator and "ace-server-prod" as one component

Despite the current "ACE operator or not" split, earlier versions of the ace-server-prod container image could be run without using the ACE operator, and the ot4i/ace-docker repo provided instructions on how to set environment variables (such as ACE_ENABLE_METRICS, FORCE_FLOW_HTTPS, etc) and provide content in /home/aceuser/initial-config to achieve the desired configuration at runtime.

Although this worked well for a while, it proved difficult to maintain the script-based approach while also enhancing the ACE operator (which relied on the same images), and so comments like "NOTE: The current dockerfiles are tailored towards use by the App Connect Operator and as a result may have function removed from it if we are no longer using it in our operator" appeared in the repo at 12.0.2. The split was formalized at 12.0.4, with ot4i/ace-docker carrying on as the public repo for ACE Dockerfiles and becoming the source for the ace images built by IBM.

From an ACE operator and ace-server-prod point of view, it may be helpful to think of them as two parts of the same component:

The ACE operator code and the runaceserver code in the container must work together closely enough that they are effectively one component, which is why using other images with the operator is not supported. Using a container image built on top of the ace image would not integrate correctly with the ACE operator:

Using methods other than the ACE operator to start the ace-server-prod container (or any image built using it as a base) is also not supported:

Given how closely the two parts work together, it would be difficult to maintain a public source repo that would work with the operator and also be stable enough for others to build on top of: if the operator use cases (often driven by CP4i) required startup changes, then the public repo would have to change, potentially breaking existing images when they upgraded to the latest fixpack (as happened in the past).

Pipeline considerations

Returning to the initial pipeline scenario, the choice of runtime container is clear, and comes down to whether or not the ACE operator is used to run the application container:

  • If the ACE operator is used at runtime, then ace-server-prod must be the base image.
  • If the ACE operator is not used then ace-server-prod cannot be used and another container (such as ace) must be used as the base image.

However, this may present difficulties for testing within the pipeline: if ace-server-prod is used as the base image, then the container cannot be run using Docker for test purposes because that container image expects the operator to be the one starting the container. While it may be possible to reverse-engineer how the operator starts the container at a given fixpack and operator level, that interface may change at the next fixpack and is not designed to be stable for customer use (see previous section).

Two possible approaches can help in this case, either individually or in combination:

  • Testing can be shifted left out of the container and into unit tests at an earlier stage. This is more easily achieved with ACE v12 than any previous release, and allows the use of mocks and other standard techniques as part of ACE application testing. Tests that cannot be run that way could be run on the build system (VM or container), which would not be an exact runtime match but might be close enough for the majority of testing (see the sketch after this list).
  • For tests that need to be run in the actual runtime container, the pipeline can create test containers via the ACE operator. An example of this approach may be found in the ACE demo pipeline at https://github.com/ot4i/ace-demo-pipeline/tree/main/tekton/os/cp4i where tests are run in a transient ACE container to ensure database connectivity works as expected in the runtime container.
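
As a sketch of the build-system option mentioned in the first bullet, ACE v12 test projects can be run directly against a work directory; the work directory path and test project name here are hypothetical, and the option spellings should be checked against the documentation for the release in use:

# Run the JUnit-based tests in an ACE test project on the build machine or build container
IntegrationServer --work-dir /work/ace-server --test-project MyApplication_Tests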

Integration solutions will always need some level of testing beyond unit testing (connectivity is usually the main purpose of ACE flows), but creating test containers via the operator may not always be needed: much depends on how reliant a solution is on the runtime container configuration, and how much of the risk lies in the ACE code itself rather than the connectivity. It may be sufficient to rely on automated testing using only external interfaces (calling actual HTTP endpoints, etc) once the pipeline has deployed the application container to a test cluster, and other approaches might also suffice.

MQ client and the "ace" image

One other aspect of the move from ace-server-prod to ace in ot4i/ace-docker is the absence of an MQ client in the pre-built ace image. The ace-server-prod image shipped with an MQ client, while the ace equivalent requires the MQ client to be installed on top (see https://github.com/ot4i/ace-docker/tree/main/samples/mqclient). While this keeps the ace image smaller by default, it means users who want an MQ client must build an intermediate image.
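
A sketch of such an intermediate image is shown below; the MQ redistributable client version, download URL, install location, and user name are assumptions based on the linked sample (and on curl being available in the image), so they will need adjusting to the versions actually in use:

# Hypothetical intermediate image adding the MQ redistributable client on top of the ace image
FROM cp.icr.io/cp/appc/ace:latest
USER root
RUN mkdir -p /opt/mqm \
 && curl -LO https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/messaging/mqdev/redist/9.3.4.0-IBM-MQC-Redist-LinuxX64.tar.gz \
 && tar -xzf 9.3.4.0-IBM-MQC-Redist-LinuxX64.tar.gz -C /opt/mqm \
 && rm 9.3.4.0-IBM-MQC-Redist-LinuxX64.tar.gz
# The aceuser account name is an assumption; check the base image documentation
USER aceuser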

This can be inconvenient because it often requires two pipelines: one that runs every few months (when a new ACE fixpack is released) to build a version with an MQ client on top, and another that builds applications and runs a lot more often and has very different inputs. There isn't a good pre-built alternative to this, unfortunately, but raising an "idea" at https://integration-development.ideas.ibm.com/ideas would ensure IBM product management are aware of the need.

Conclusion

The current ACE container landscape presents a simple "ACE operator or not" choice for runtime base images, but the pipeline implications require careful consideration. Due to the extensive history of ACE and IIB in containers, some existing blogs, forum comments, demos, etc, also complicate the picture by referring to solutions that no longer work in a post-12.0.4 world. The hope is that this blog post explains which approaches used to work, what works now, and how to think about ACE images and operators, allowing solutions to be built on stable foundations.

Acknowledgements

Thanks to Rob Convery and Tim Dunn for their assistance in the preparation of this post.
