
Building custom image of ACEcc for troubleshooting memory leak issues

By AMAR SHAH posted Mon December 11, 2023 06:32 AM

  

The App Connect Enterprise certified container (ACEcc) images are built on top of the ubi-minimal image. These images typically do not include the debugging packages that you may need to diagnose different types of issues when running an application in a container.

For example, to investigate a memory leak in a containerized application you may need tools such as gdb and gcore, while a network communication issue may call for tools such as tcpdump or netstat.

One approach to troubleshooting such issues in ACEcc is to build a custom image that includes the packages needed to debug a particular type of issue.

Here is an example of a typical Dockerfile that you can use to extend the ACE certified container image with additional packages. The FROM instruction specifies the base ACE server image version that we want to extend. The list of ACE server images and their registry locations can be found in IBM Docs here: https://www.ibm.com/docs/en/app-connect/containers_cd?topic=obtaining-app-connect-enterprise-server-image-from-cloud-container-registry

FROM cp.icr.io/cp/appc/ace-server-prod@sha256:7417b6c460e2d2f9add4ec22cd5df6a3f71ef26d5112c9c0b93ebba514aa8e89

USER root

# Install ping, netstat, gdb, and gcore for deeper diagnostics within the container

RUN microdnf update -y && microdnf install -y shadow-utils perl iputils gdb net-tools && microdnf clean all

USER 1000

Next, build the custom image with the following command:

docker build -t ace-debug-image -f Dockerfile .

Once the image is built, verify it by running it locally, confirming that it launches correctly and that the tools from the newly installed packages are available.

For example,

docker run --name aceserver -p 7600:7600 -p 7800:7800 -p 7843:7843 --env LICENSE=accept --env ACE_SERVER_NAME=ACESERVER ace-debug-image

After you have verified the image, tag it appropriately and push it either to the private image registry on your OCP cluster or to an external repository from which your cluster can pull images.

For example, here the debug image is hosted in a public repository on quay.io:

docker tag localhost/ace-debug-image quay.io/amaribm/demoregistry/ace-debug-image

docker login quay.io/amaribm/demoregistry -u uid -p password

docker push quay.io/amaribm/demoregistry/ace-debug-image
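If you push the image to the OpenShift internal registry instead, the commands follow the same tag-login-push pattern. Here is a minimal sketch, assuming the internal registry's default route and a namespace called ace-demo (both placeholders, substitute your own values):

```shell
# Hypothetical registry route and namespace -- substitute your own values.
REGISTRY=default-route-openshift-image-registry.apps.example.com
NAMESPACE=ace-demo
TARGET="$REGISTRY/$NAMESPACE/ace-debug-image"
echo "$TARGET"

# Then, with docker available and an OpenShift (oc) session logged in:
# docker tag localhost/ace-debug-image "$TARGET"
# docker login -u "$(oc whoami)" -p "$(oc whoami -t)" "$REGISTRY"
# docker push "$TARGET"
```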

Deploying IntegrationServer/Runtime using custom image

To deploy an IntegrationServer / IntegrationRuntime under the ACE Operator, a couple of settings are required in the custom resource (CR).

1) Runtime Container Image name/tag

As shown in the image below, provide the complete name of the image, including its tag, in the text box provided.

2) Version and License

Specify the exact version of the ACEcc image that was used as the base image for the custom image, along with the associated license ID. In the example below, we selected ACE version 12.0.9.0-r3 because that is the base image we used in the FROM instruction of our Dockerfile. Note that you cannot specify a channel name when using a custom image.
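The same two settings can also be made directly in the CR YAML rather than the form view. The following is a minimal sketch of an IntegrationServer CR based on the v1beta1 schema; the metadata name and license values are placeholders, and the license ID and use value must match your own entitlement:

```yaml
apiVersion: appconnect.ibm.com/v1beta1
kind: IntegrationServer
metadata:
  name: ace-debug-is                  # placeholder name
spec:
  license:
    accept: true
    license: <license-id>             # the license ID matching version 12.0.9.0-r3
    use: CloudPakForIntegrationNonProduction   # adjust to your entitlement
  version: 12.0.9.0-r3                # the base ACEcc version from the FROM instruction
  pod:
    containers:
      runtime:
        image: quay.io/amaribm/demoregistry/ace-debug-image   # the custom image
```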

Capturing a native core inside the ACE container using the gcore command

After you have deployed the IntegrationServer using the custom image that includes the gdb packages, you can generate a native core using the gcore command. These core files are useful for extracting eyecatchers and subsequently analyzing memory-leak-related issues.

Inside the container, run the ‘ps -eaf’ command to obtain the PID of the IntegrationServer process.

Run the ‘gcore’ command against that PID to generate the core file.
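The two steps above can be combined into a small shell sketch. The grep pattern and output path below are assumptions for illustration, and the simulated ps output stands in for what you would see inside a real ACEcc container:

```shell
# Simulated 'ps -eaf' output for illustration; inside the container, run: ps -eaf
PS_OUT='UID   PID PPID C STIME TTY TIME     CMD
1000  123    1 0 10:00 ?   00:00:05 IntegrationServer --name ACESERVER'

# Extract the IntegrationServer PID ([I] keeps grep from matching itself on live output)
PID=$(printf '%s\n' "$PS_OUT" | grep '[I]ntegrationServer' | awk '{print $2}')
echo "$PID"   # → 123

# Inside the container, dump a native core for that PID:
# gcore -o /tmp/aceserver-core "$PID"
# This writes /tmp/aceserver-core.<PID>, which you can copy out with 'oc cp'.
```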
