App Connect


App Connect Enterprise Container Image Hierarchies - A Practical Example

By Aiden Gallagher posted Wed November 09, 2022 10:22 AM

  

This blog is part of a series. For the whole series list see here.

In our previous article we explored what content could be baked into an App Connect Enterprise container image, compared to what could be added at deployment time. In this article we will work through a specific practical example, building several different container images, each with different content, and showing how each would be used at deployment time.

We will use a pre-built integration that provides a simple REST API for retrieving address data based on an address identifier.

We will show how this simple integration can be deployed using three different container images, each built using the previous one as a foundation:

  • The “level-1” container image is “integration runtime specific”. It could be used to deploy any integration in any environment. It will include only the ACE product binaries and an example of a configuration that we might want to be common to all integrations we deploy. In our example, we will fix the tracing format to be JSON. The integration code and any environment specific configuration will be applied at deployment time. This image type is ideal when you want to provide an image template for all integrations, but you would prefer not to include an image build in the CICD pipeline.
  • The “level-2” container image is “integration specific”. It deploys a specific integration but could be used for any environment. It will build on the level-1 image and include the integration code itself. Only environment specific configuration needs to then be applied at deployment time. This image type ensures consistency of deployment from one environment to the next since the integration code is baked into the image alongside the version of the runtime it was tested against. 
  • The “level-3” image is “environment specific”. It deploys a specific integration, configured for a specific environment. It will build on the level-2 image by including the environment specific configuration. For our demonstration we will include the certificates used for HTTPS communication in the image, which might well vary from one environment to another. This image type ensures specific environments can be re-created with the highest level of consistency as the environment specific configuration is burnt into the image.

 

Figure 1 - Image Types
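
Condensed to just their FROM lines, the three Dockerfiles used later in this article chain together as follows (the full files are shown in the sections below):

FROM acebase:latest   # level-1: common server configuration on top of the ACE product binaries
FROM level-1          # level-2: adds the (unpacked) integration BAR file
FROM level-2          # level-3: adds the environment specific SSL certificates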

Pre-requisites

For simplicity, and to allow us to focus on the building and running of images, in this demo we will use Podman to run our containers, but you could also use Docker with similar commands. The example would of course work on Kubernetes, although the commands and approach would be rather different.  

We will build our container images using the open-source example at https://github.com/ot4i/ace-docker.  

As such, you will need Git and Podman (or Docker) installed locally.

Note that the instructions are based on Unix environments such as MacOS. The command lines would need to be appropriately translated if you are using a Windows environment, and Macs with the M1 chip need the adjustments described below.

IMPORTANT NOTE for MacOS users. If your computer has the new M1 chip (introduced November 2020), you will need to force backward compatibility by inserting --platform=linux/amd64 immediately after the FROM keyword in each Dockerfile (including the ‘ace-docker’ Dockerfile), and by adding it as an additional parameter to each podman build and podman run command.

For example, the Dockerfile should look like this:

FROM --platform=linux/amd64 registry.access.redhat.com/ubi8/ubi-minimal as builder 

and Podman commands should look like this: 

podman build . -t acebase:latest --platform=linux/amd64
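
and each podman run command similarly, for example (using the acebase image we build in the next section; --rm simply removes the container on exit):

podman run --platform=linux/amd64 --rm acebase:latest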

Level-1: Build a Base Image with Common Server Configuration

In this first practical section, we are going to create an image based on the IBM App Connect Enterprise product binaries using the provided ace-docker example, and tag it so we can reference it easily. We’ll then create our own level-1 Dockerfile in which we replace the default server.conf.yaml with our own server configuration template. This enables us to have reusable patterns for our underlying server configuration (e.g., an MQ server config, or an SAP server config) that remain consistent across all integrations, or across groups of similar integrations. We will use the image to deploy an integration, with the integration code itself and environment specifics such as certificates added via mounted folders at deployment time.

  1. Start in the home folder:
    cd $HOME
  2. Clone the ACE Docker image from GitHub:
    git clone https://github.com/ot4i/ace-docker
  3. Change into the ace-docker folder using:
    cd ace-docker
  4. Edit the file “Dockerfile” and ensure that we are using at least version 12.0.4.0 of the product. For example, at the time of writing this article you would adjust the line
    ARG DOWNLOAD_URL=http://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/integration/12.0.2.0-ACE-LINUX64-DEVELOPER.tar.gz
    to say
    ARG DOWNLOAD_URL=http://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/integration/12.0.4.0-ACE-LINUX64-DEVELOPER.tar.gz
    This is because later in the example we will be making use of the newly introduced “StartupScripts” feature. 
  5. Build a base image based on the default ace-docker build file
    podman build . -t acebase:latest --platform=linux/amd64
    The image is relatively large, so may take some minutes to complete depending on your bandwidth. However, once done, you will not need to do it again for the subsequent builds.
    Note: recall the earlier comment for MacOS users with an M1 chip regarding changes to the Dockerfile and the Podman command line. Those changes will have to be applied throughout these examples.
  6. Check your Podman images for a “localhost/acebase” image:
    podman images
  7. Come back to the home folder:
    cd $HOME
  8. Clone the resources for this example from GitHub:
    git clone https://github.com/GallagherAiden/ace-image-example
  9. When the container later mounts files such as the BAR file and shell scripts, we need to make sure they are accessible by the ACE admin user. This may mean you need to change their permissions on your local file system such that all users have permissions (not just root) before running the container, for example:
    cd $HOME/ace-image-example 
    chmod 777 *
  10. Take a look at the level-1 Dockerfile:
    cat level-1.dockerfile
    It should look like the following:
     
    FROM acebase:latest
    COPY server.conf.yaml /home/aceuser/ace-server/server.conf.yaml
    COPY ace_config_bars.sh /home/aceuser/scripts/ace_config_bars.sh
    COPY ace_config_ssl.sh /home/aceuser/scripts/ace_config_ssl.sh
    Notice that it builds from the image we just created, then adds a server.conf.yaml file and two script files into the image.
  11. Take a look at that server.conf.yaml file that will be copied into the image. Although it’s a big file, most of it just describes configurations you “could” use, but they are commented out. The only ones that are important for this example are the ones we have uncommented and changed from the default. 
    1. Look at the log format settings:
      grep -A 2 "^Log:" server.conf.yaml
      and you will notice that the logging format is forced to JSON
    2. You will also notice that we have forced HTTPS for all HTTP input nodes:
      grep forceServerHTTPS server.conf.yaml
      This is good practice when exposing an API.
    3. Look at the "StartupScripts" section:
      grep -A 4 "^StartupScripts:" server.conf.yaml
      and you will see the two scripts we introduced earlier. These will be called when the integration server starts up. (A condensed sketch of all three of these settings appears just after this list.)
  12. Take a look at the ace_config_bars.sh file that will be copied into the image:
    cat ace_config_bars.sh
    It processes any BAR files present in the initial-config/bars directory, placing their content into the server’s run directory. (A hypothetical sketch of this style of script appears below, after this list.)
  13. Take a look at the ace_config_ssl.sh file that will be copied into the image:
    cat ace_config_ssl.sh
    It copies any SSL files present in the initial-config/ssl directory, placing their content into the server’s SSL directory. Arguably we could simply mount them directly as the SSL directory in the “podman run” command. However, it is perhaps clearer to have one place (the initial-config directory) where we put all files to be processed on start up. Furthermore, it also ensures any existing files in the current SSL directory of the image will be preserved.
  14. Create the level-1 container image:
    podman build -f level-1.dockerfile . -t level-1
  15. Now we have our generic level-1 container image that can be used to deploy whatever integration we want to whichever environment we want. We will now use it to deploy our integration “AddressLookupproject.bar”, with a particular set of self-signed certificates found in the SSL folder. Run the level-1 image:
     
    podman run --name level1app -p 7600:7600 -p 7843:7843 \
      --env ACE_SERVER_NAME=L1ACESERVER \
      --mount type=bind,src=$(pwd)/ssl,dst=/home/aceuser/initial-config/ssl \
      --mount type=bind,src=$(pwd)/AddressLookupproject.bar,dst=/home/aceuser/initial-config/bars/AddressLookupproject.bar \
      level-1
  16. Test the application:
    1. Navigate to http://localhost:7600 to check the L1ACESERVER is up and running
    2. Select the “AddressLookup API”
    3. Select the “GET /userAddress”
    4. From here, copy the example request curl command, which should look something like:
      curl --request GET --url 'https://39997c9ff9b9:7843/AddressLookup/v1/userAddress?id=REPLACE_THIS_VALUE' --header 'accept: application/json'
    5. Change the container ID to ‘localhost’, replace ‘REPLACE_THIS_VALUE’ with ‘1’, and add ‘-k’ to the end, since a self-signed certificate is being used.
    6. Run the command in a new terminal:
      curl --request GET --url 'https://localhost:7843/AddressLookup/v1/userAddress?id=1' --header 'accept: application/json' -k

      This should result in:
      {"Message":" Address: Pudding Lane, London, SW1"}
  17. Stop the container (Ctrl+C).
  18. Delete the stopped container:
    podman rm {containerId}
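
Pulling together the settings we inspected in steps 11.1 to 11.3, the uncommented parts of our server.conf.yaml look roughly like the sketch below. The keys are taken from the product’s sample server.conf.yaml, but treat this as an approximation: the StartupScripts entry names are arbitrary labels, and you should check the file in the example repository for the authoritative contents.

Log:
  outputFormat: 'ibmjson'        # console log entries are emitted as JSON

forceServerHTTPS: true           # all HTTP input nodes are served over HTTPS only

StartupScripts:
  BarsScript:
    command: '/home/aceuser/scripts/ace_config_bars.sh'
  SslScript:
    command: '/home/aceuser/scripts/ace_config_ssl.sh'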

We made the UI (port 7600) available in this example to make it easier to confirm the server is up and running and for general diagnostics. However, this Web UI would not normally be exposed in a real cloud native deployment. It would enable users to make changes to deployed integrations on specific containers with the obvious risks of configuration drift. In a true GitOps approach, changes should only be made to the source code in version control, then pushed out via a pipeline, likely resulting in new images, or at the very least, restarting of containers. This way, what we have in source control always matches what we have deployed.  
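
To give a flavour of what a startup script such as ace_config_bars.sh does, here is a minimal hypothetical sketch (not the exact contents of the repository’s script): it loads the ACE command environment, then unpacks any mounted BAR files into the server’s work directory.

#!/bin/bash
# Load the ACE command environment so mqsibar is on the PATH.
source /opt/ibm/ace-12/server/bin/mqsiprofile
# Unpack every BAR file mounted into initial-config/bars into the
# integration server's work directory (its content lands in 'run').
for bar in /home/aceuser/initial-config/bars/*.bar; do
  [ -e "$bar" ] || continue    # no BAR files mounted; nothing to do
  mqsibar -a "$bar" -c -w /home/aceuser/ace-server
done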

Level-2: Embed an “Unpacked” BAR File Into the Image

In this section, we will use the level-1 image as a starting point and add our integration (AddressLookupproject.bar) on top to create a new level-2 image. This is therefore an image specific to our integration but it can still be deployed into any environment as we will pull in the SSL certificates at deployment time. 

  1. Start in the example folder:
    cd $HOME/ace-image-example
  2. Take a look at the level-2 Dockerfile:
    cat level-2.dockerfile
    It should look like the following:
    FROM level-1
    COPY AddressLookupproject.bar /home/aceuser/ace-server/
    RUN . /opt/ibm/ace-12/server/bin/mqsiprofile && mqsibar -a /home/aceuser/ace-server/AddressLookupproject.bar -c -w /home/aceuser/ace-server
    It will begin from the level-1 image we created in the previous section, copy our BAR file into the image, then unpack the BAR file into the run directory of the image.
  3. Build the level-2 image
    podman build -f level-2.dockerfile . -t level-2
  4. We now have an image that is specific to our integration, but can be used in any environment. It still requires environment specific properties to be configured on start up, which in our case means mounting the set of SSL certificates. Start the new image:
    podman run --name level2app -p 7600:7600 -p 7843:7843 --env ACE_SERVER_NAME=L2ACESERVER --mount type=bind,src=$(pwd)/ssl,dst=/home/aceuser/initial-config/ssl level-2
  5. Test the application as you did in the previous section.
  6. Stop the container (Ctrl+C).
  7. Delete the stopped container:
    podman rm {containerId}

Note that the ace_config_bars.sh script we introduced in the level-1 image will still run, but it will no longer find any BAR files to process in the initial-config/bars folder, so it is actually no longer needed. Instead, during the image build we copy the BAR file into the image and unpack it into the run directory, so the unpacked content is already prepared within the level-2 image. This improves the startup time of the container as there is one less thing for it to do.

Note that the Dockerfile RUN commands actually execute inside the image we are building from, which of course contains the ACE product binaries. This is why the image build has access to the mqsibar unpacking command.
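
If you want to convince yourself that the unpacked BAR content really is baked into the level-2 image, one way (assuming the standard work directory layout used above) is to list the server’s run directory without starting the server:

podman run --rm --entrypoint ls level-2 /home/aceuser/ace-server/run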

Level-3: Add Environment Specific Values to the Image 

In this section we are going to add the SSL certificates such that the resulting container image is specific to the environment we want to deploy to. This approach might be valuable if you want increased certainty around exactly what you deployed into an environment. Everything is in the image – it has no external dependencies. The downside is that you have to create a new image as you move from one environment to another, and so you have more images to manage and move around at deployment time.

  1. Start in the example folder:
    cd $HOME/ace-image-example
  2. Take a look at the level-3 Dockerfile:
    cat level-3.dockerfile
    It should look like the following:
    FROM level-2 
    COPY ssl /home/aceuser/initial-config/ssl/
  3. Build the level-3 image
    podman build -f level-3.dockerfile . -t level-3
  4. We now have an image that is specific to our integration in a specific environment (e.g. dev). Run it:
    podman run --name level3app -p 7600:7600 -p 7843:7843 --env ACE_SERVER_NAME=L3DEVACESERVER level-3
  5. Test the application just as you did for level-1. 
  6. Stop the container (Ctrl+C).
  7. Delete the stopped container:
    podman rm {containerId} 

This can be repeated for each environment you want to build, e.g. sit, nft, prod, etc.
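
If you would rather not maintain a separate Dockerfile per environment, one option is a build argument that selects the certificate folder to copy in. The sketch below is a hypothetical variation on level-3.dockerfile; the ssl-dev/ssl-sit folder naming is an assumption for illustration only.

FROM level-2
ARG TARGET_ENV=dev
COPY ssl-${TARGET_ENV} /home/aceuser/initial-config/ssl/

Each environment’s image is then built by passing the argument:

podman build -f level-3.dockerfile --build-arg TARGET_ENV=sit . -t level-3-sit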

  

How would this look in Kubernetes?

In Kubernetes you would not be able to use the technique of mounting local folders to add configuration at deployment time. You would not know which Kubernetes worker node the container would be deployed to. You could in theory use persistent volumes, but these are more complex than we need as they are more for data that is updated at runtime and those updates need to be replicated across worker nodes. The more logical fit for adding configuration data would be Kubernetes ConfigMaps (and Secrets for confidential items). You would then need to configure your container to mount those into its filesystem on deployment. 
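
As a minimal sketch of that idea (plain Kubernetes, not the ACE Operator, and all names are illustrative), a Deployment of our level-2 image could mount a Secret holding the certificates into the same initial-config/ssl path our startup scripts already watch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: addresslookup
spec:
  replicas: 1
  selector:
    matchLabels:
      app: addresslookup
  template:
    metadata:
      labels:
        app: addresslookup
    spec:
      containers:
        - name: ace
          image: registry.example.com/level-2:latest   # hypothetical registry location
          ports:
            - containerPort: 7843
          volumeMounts:
            - name: ssl
              mountPath: /home/aceuser/initial-config/ssl
      volumes:
        - name: ssl
          secret:
            secretName: addresslookup-ssl              # Secret holding this environment's certificates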

It's worth noting at this point that IBM provides a pre-built container image that works in conjunction with a Kubernetes Operator, shielding you from having to know how to manipulate ConfigMaps and Secrets. Note that this image can only be used in conjunction with the IBM ACE Operator, which therefore also necessitates running on Kubernetes. However, there are many benefits to using the ACE Operator, regardless of which level of image you choose to create, such as:

  • A built-in mechanism for pulling in BAR files at runtime from a remote URL, simplifying its use as a level-1 image.
  • The concept of a “Configuration” custom resource that provides a standardised way to supply specific configurations at runtime, such as setdbparms credentials (see the sketch after this list).
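
For illustration, an IntegrationServer custom resource using that remote BAR mechanism might look roughly like the sketch below. Treat this as an approximation: the field names reflect the Operator’s schema at the time of writing, and the URL, license identifier and Configuration name are placeholders; consult the ACE Operator documentation for the current schema.

apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: addresslookup
spec:
  version: '12.0'
  license:
    accept: true
    license: L-XXXX-XXXXXX                  # placeholder license identifier
    use: AppConnectEnterpriseProduction     # placeholder license use value
  barURL: 'https://my-bar-repo.example.com/AddressLookupproject.bar'
  configurations:
    - addresslookup-setdbparms              # a "Configuration" resource supplying credentials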

The Operator provides many other benefits too as described in greater detail in the article “What is an Operator and why did we create one for IBM App Connect?”.

 

Cloud Native Deployment Principles

There’s one final point we should make. Did you notice that throughout the above examples, in keeping with cloud native deployment principles, all configuration was either within the image, or performed as part of deployment? No configuration actions were performed on an image after it was deployed. This is a fundamental principle of cloud native deployment for the following reasons: 

  • It ensures that a container orchestration environment such as Kubernetes can be given everything it needs to deploy the integration on any node. 
  • It encourages a “GitOps” approach whereby the entire definition of the deployment is present in the deployment files within source control, simplifying the creation of CICD pipelines, DR procedures, and operational automation in general.
  • There is significantly reduced danger of “configuration drift” when deploying to different environments, as no changes are made to them once deployment has occurred.

 

For more examples on deploying IBM App Connect in containers, please do take a look at http://ibm.biz/iib-ace.  

 

Acknowledgement and thanks to Kim Clark and Rob Convery for providing valuable input to this article.


Comments

Tue August 01, 2023 05:42 AM

Update on my last comment: I tried creating the image with Docker instead of Podman, and it started fine!

Mon July 31, 2023 03:30 PM

Hello,

My container is stuck at the step below. I am running on a MacBook Pro M1.

{"type":"ace_message","ibm_product":"IBM App Connect Enterprise","ibm_recordtype":"log","host":"77b213d8b07c","module":"integration_server.ace-server","ibm_serverName":"ace-server","ibm_processName":"","ibm_processId":"1","ibm_threadId":"1","ibm_datetime":"2023-07-31T19:24:10.844242","loglevel":"INFO","message":"9905I: Initializing resource managers. ","ibm_message_detail":"9905I: Initializing resource managers. ","ibm_messageId":"9905I","ibm_sequence":"1690831450844_0000000000001"}

Nothing is happening after this. Any ideas?

Thu November 17, 2022 09:22 AM

Hi @SRINIVAS GORLE, the image is quite large due to the number of capabilities and the functionality that make ACE able to integrate with so many other systems. This is usually a one-time download of the image (per version).

One way to improve this is to optimise the integration server to speed up start-up time, which offsets that to a degree and makes the deployment element more cloud native. More info here: https://community.ibm.com/community/user/integration/blogs/ben-thompson1/2022/05/12/ace-12-0-4-0 and a video here: https://www.youtube.com/watch?v=12t-tP8XKBQ

Thu November 10, 2022 11:00 AM

The image size is huge when compared to comparable products in the market. OpenShift itself does not recommend using large images. What are your thoughts on this?