Credentials for databases and other resources have always been a critical part of integration flows in App Connect Enterprise (ACE), going back to the original MQSeries Integrator product, and these secrets were managed by the product itself to ensure they were stored securely and available when needed.
More recently, the widespread adoption of general-purpose secrets managers ("vaults") from HashiCorp (now part of IBM), Microsoft, CyberArk, and many others has changed how many organizations want to manage credentials: the previous product-specific approaches (for example, using mqsisetdbparms or mqsicredentials) are no longer required for secure credential storage, as that can be handled by the vault. Credentials can now be provided at runtime (possibly generated on demand by vault plugins) and targeted so that ACE servers only have access to the credentials they actually need.
This is most apparent in the container world, where Kubernetes stores secrets and provides them to applications generically (there isn't a Python- or Node.js-specific secret format, for example), but it applies on premises as well. As described below, ACE has credential read capabilities that allow it to work well in this new world, removing the need for product-specific credential storage.
Quick summary: Use ACE credentials scripts to read secrets from vault-provided credentials files.
ACE-managed credentials overview
On-prem ACE installations usually have ACE in charge of credentials:
ACE servers load credentials into an in-memory store at startup so they can be used by running applications and flows. The on-disk storage of the credentials is in an ACE format, managed by ACE, and can be one of two options:
- mqsisetdbparms, which is obfuscated using a private algorithm but not encrypted.
- mqsivault/mqsicredentials, which uses industrial-strength encryption.
These are the original credential management solutions for ACE (including the older IIB/WMB releases) and work extremely well when ACE is running integration nodes on VMs, especially since ACE 11.0.0.6, when the encrypted mqsivault solution became available.
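As a hedged sketch (the work directory, credential name, and values below are examples, and the mqsivault/mqsicredentials flags should be verified against the docs for your ACE version), storing a JDBC credential for an independent integration server looks something like this:
# Obfuscated storage in the work directory
mqsisetdbparms -w /home/aceuser/ace-server -n jdbc::tea -u dummyUser -p dummyPassword
# Or encrypted storage: create the vault, then add the credential
mqsivault --work-dir /home/aceuser/ace-server --create --vault-key myVaultKey
mqsicredentials --work-dir /home/aceuser/ace-server --vault-key myVaultKey --create \
  --credential-type jdbc --credential-name tea --username dummyUser --password dummyPassword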
Note on the word "vault": the terminology is slightly confusing, as both ACE and HashiCorp have a "vault" while referring to separate concepts, and many other products (such as Azure Key Vault) also use the word. In this article, mqsivault refers to the ACE-specific format, while vault means one or more of the security infrastructure providers.
Secrets Manager credentials
In the container world, credential storage is normally managed by the platform itself (for example, Kubernetes secrets) or else by a separate security infrastructure such as HashiCorp Vault or CyberArk Conjur (plus many others).
The desired (simplified) picture in the container case looks more like this:
The security infrastructure is responsible for the storage and management of the secrets, and hands the credentials to the ACE server on startup. This allows central management of secrets without the security admins needing ACE-specific knowledge or commands, and also enables "dynamic secrets" (see the HashiCorp docs at Understand static and dynamic secrets) to be created as needed for database access and other uses. Dynamic secrets do not exist until read, and are destroyed once the application is no longer running, thereby reducing the exposure if the secrets are stolen. (Note that this is different from the ACE concept of "dynamic credentials", which refers to whether or not a server must be restarted to pick up changed credentials.)
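As a hedged illustration of the dynamic secrets idea (the database, role, and connection details below are placeholders, and the creation statements vary by database type), HashiCorp's database secrets engine can be configured to mint short-lived database users on demand:
# Enable the database secrets engine and register a (hypothetical) PostgreSQL database
vault secrets enable database
vault write database/config/teadb \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@db.example.com:5432/tea" \
    allowed_roles=tea-role \
    username=vault-admin password=vault-admin-password
# Each read of database/creds/tea-role then creates a new database user that expires automatically
vault write database/roles/tea-role db_name=teadb \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
    default_ttl=1h max_ttl=24h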
HashiCorp Vault as a credential provider for ACE
As an example, HashiCorp Vault can be configured to provide secrets to ACE. Several ways to configure Vault are described in the HashiCorp docs, with two of the main approaches being Vault Agent Injection and the Vault Secrets Operator (VSO). Both methods keep credential storage managed outside of ACE, with VSO using Kubernetes secrets as an intermediate stage.
The Vault Agent Injector avoids intermediate storage entirely, with the credentials provided to the application by an in-memory volume mount from a sidecar container, and the credentials read in using an ACE ExternalCredentialsProviders script configured in server.conf.yaml.
The details of both this and the VSO approach (with Vault in charge of secrets) are described at https://github.com/ot4i/ace-demo-pipeline/blob/main/tekton/README-vault.md, but for this article the focus is on the Agent Injector approach. For the injector, the ACE Kubernetes pods pick up the data using annotations:
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-inject-secret-tea: secret/tea
        vault.hashicorp.com/agent-inject-template-tea: |
          {{- with secret "secret/tea" -}}
          type=jdbc
          username={{ .Data.data.username }}
          password={{ .Data.data.password }}
          {{- end }}
        vault.hashicorp.com/role: teaapp
This causes the Vault Agent Injector to create files on the in-memory shared filesystem, from where they can be read by ACE.
Although this looks like magic (adding YAML annotations causes secrets to appear!), it works only because the Kubernetes service account is used for authentication (as described at Authenticating with Vault). The platform itself provides the identity of the pod to Vault:
- Kubernetes is in charge of the containers, and knows the identity (service account) associated with the pod.
- Vault is told to trust Kubernetes to say which service account is used by a pod.
- The Vault access control lists decide which secrets can be accessed by a given service account in a namespace.
In effect, the Kubernetes and Vault administrators configure their systems to trust each other, and the result is that the ACE containers can securely receive credentials on startup without ACE developers having direct access to the Vault.
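A hedged sketch of the Vault side of that trust relationship (the role name matches the annotations above, while the service account, namespace, and policy names are assumptions, and real clusters may need extra settings such as CA certificates):
# Tell Vault to trust the cluster's Kubernetes API for service account verification
vault auth enable kubernetes
vault write auth/kubernetes/config kubernetes_host="https://kubernetes.default.svc"
# Bind the 'teaapp' role used in the pod annotations to a service account,
# attaching a policy that allows reading only the secrets this app needs
vault write auth/kubernetes/role/teaapp \
    bound_service_account_names=tea-sa \
    bound_service_account_namespaces=tea-ns \
    policies=tea-policy ttl=1h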
Files and scripts
Once the annotations are in place, the resulting in-memory file looks like this:
sh-5.1$ cat /vault/secrets/tea
type=jdbc
username=dummyUser
password=dummyPassword
At this point, the credentials are available to ACE, but ACE needs to be configured to read them. This is achieved by configuring ACE to call a script at startup using a server.conf.yaml stanza like this:
Credentials:
  ExternalCredentialsProviders:
    LoadJDBCcreds:
      loadAllCredentialsCommand: '/home/aceuser/ace-server/read-hashicorp-creds-jdbc.sh'
      loadAllCredentialsFormat: 'yaml'
The script reads the credentials in /vault/secrets and converts them to YAML, and it is at this point that people are likely to think "Scripting? That sounds complicated!", but in fact the scripts can be very simple. The script to read JDBC credentials (full source here) has only seven lines of code:
echo "---"
echo "Credentials:"
echo " jdbc:"
for credfile in /vault/secrets/*; do
# YAML spacing is important - two spaces before the type, four in front
# of the name, and six for username/password/etc.
echo " $(basename $credfile):"
# The original lines look like 'username=USERNAME' and we need to convert
# to ' username: "USERNAME"' with sed. Also need to exclude "type".
cat ${credfile} | tr -d '\r' | grep -v 'type=' | sed 's/=/: "/' | sed 's/^/ /' | sed 's/$/"/g'
done
This script only handles JDBC credentials, but other scripts (like this one) can handle any type and are still quite small. The critical part is that the script must produce valid YAML (JSON and XML are also options) in order to provide the credentials to the ACE server. The YAML produced by the JDBC script would look something like this:
---
Credentials:
  jdbc:
    tea:
      username: "dummyUser"
      password: "dummyPassword"
The server will read the YAML and load the credentials into memory, with the logs showing how many credentials were loaded:
2025-08-22 15:51:23.426560: BIP1990I: Integration server 'creds-test-work-dir' starting initialization; version '13.0.4.0' (64-bit)
2025-08-22 15:51:23.445748: BIP9530I: External credentials provider 'LoadJDBCcreds' is about to load credentials using command '/home/aceuser/ace-server/read-hashicorp-creds-jdbc.sh'.
2025-08-22 15:51:23.459178: BIP9534I: External credentials provider 'LoadJDBCcreds' has loaded '1' credentials.
At this point, ACE flows will be able to use the loaded credentials to connect to JDBC endpoints.
Configuration alternatives
The example above shows one file because there is only one secret, and it also uses a simple "key=value" format for the in-memory file. Most real-world scenarios would need more than one credential, and might use Kubernetes secrets instead of in-memory files from sidecar containers, but all of these cases can be handled with scripts that pick up the correct data and convert it into the right format, as sketched below; see the examples here and here for two possibilities.
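As a hedged sketch of a more generic script (it assumes every file uses the same key=value layout with a type= line, and that no two files share a type; the linked examples handle the general case properly):
#!/bin/bash
# Generic variant (sketch): use each file's 'type=' line to choose the
# Credentials subsection; assumes at most one credential file per type.
echo "---"
echo "Credentials:"
for credfile in /vault/secrets/*; do
  credtype=$(tr -d '\r' < "$credfile" | grep '^type=' | cut -d= -f2)
  echo "  ${credtype}:"
  echo "    $(basename "$credfile"):"
  tr -d '\r' < "$credfile" | grep -v '^type=' | sed 's/=/: "/' | sed 's/^/      /' | sed 's/$/"/'
done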
It would also be possible to create the YAML (or JSON/XML) directly using the Vault Agent Injector templating, at which point the server.conf.yaml stanza can just do
loadAllCredentialsCommand: 'cat /vault/secrets/allcredentials'
and no longer require an actual script to be added to the container. This could be useful in some cases where server.conf.yaml is the only tool readily available.
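A hedged sketch of such a template, reusing the secret/tea example from earlier (the file name matches the cat command above, and the indentation inside the template has to be exactly what ACE expects):
vault.hashicorp.com/agent-inject-secret-allcredentials: secret/tea
vault.hashicorp.com/agent-inject-template-allcredentials: |
  {{- with secret "secret/tea" -}}
  ---
  Credentials:
    jdbc:
      tea:
        username: "{{ .Data.data.username }}"
        password: "{{ .Data.data.password }}"
  {{- end }}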
However, adding a script into a container during the image build (COPY) or at runtime (ConfigMap) is not normally difficult, whereas using the annotations requires careful crafting of the template syntax to ensure the output is correct; the same level of care would be needed for any later additions.
Using a simple "key=value" format allows for local testing (by creating dummy files locally) and the script writing can be aided by generative AI (watsonx Code Assistant, GitHub Copilot, etc.), since both the input (key/value) and the output (YAML/JSON/XML) are well-known formats.
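For example, a quick local test of the JDBC script above might look like this (assuming the script's /vault/secrets path is writable locally or swapped for a scratch directory):
# Create a dummy key=value credentials file and eyeball the generated YAML
mkdir -p /vault/secrets
printf 'type=jdbc\nusername=dummyUser\npassword=dummyPassword\n' > /vault/secrets/tea
./read-hashicorp-creds-jdbc.sh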
Despite the advantages of the simple format, organizations with greater skill in the security area (so that template creation is easier than shell scripting) may well choose to create the destination ACE formats directly and eliminate the need for an actual script. The only requirement is that the output of loadAllCredentialsCommand be in one of the ACE formats described in the ACE docs at https://www.ibm.com/docs/en/app-connect/13.0.x?topic=cis-configuring-integration-server-use-security-credentials-from-external-source .
Another related alternative is to supply the credentials to mqsisetdbparms at container startup, before the IntegrationServer starts. This was one of the only ways to provide credentials before ACE 12.0.3 (when credentials scripts became available), and was achieved by modifying the ENTRYPOINT in a Dockerfile to run a script before starting the server.
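A minimal sketch of that approach, assuming the injected key=value file from earlier and an ACE 13 install path (both assumptions; adjust for the actual image):
#!/bin/bash
# Hypothetical entrypoint wrapper: push the injected credentials into the
# ACE-managed store, then start the server as usual.
source /opt/ibm/ace-13/server/bin/mqsiprofile
username=$(grep '^username=' /vault/secrets/tea | cut -d= -f2 | tr -d '\r')
password=$(grep '^password=' /vault/secrets/tea | cut -d= -f2 | tr -d '\r')
mqsisetdbparms -w /home/aceuser/ace-server -n jdbc::tea -u "$username" -p "$password"
exec IntegrationServer --work-dir /home/aceuser/ace-server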
It is also possible to add the credentials directly to server.conf.yaml, which could (in theory) be generated entirely from annotations. This relies on the ServerCredentials section of server.conf.yaml and is similar in effect to using a credentials script.
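A hedged sketch of that section, mirroring the external-credentials YAML shown earlier (check the server.conf.yaml shipped with your ACE version for the exact layout):
ServerCredentials:
  jdbc:
    tea:
      username: 'dummyUser'
      password: 'dummyPassword'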
Issues around deleting the credential files
For organizations that don't control access to containers very tightly, it may seem like a good plan to remove the plaintext secrets files after they have been read, in order to prevent users from seeing them. However, Kubernetes does not restart a whole pod if one container crashes, and so the secrets files will not be restored if the ACE server container is restarted after a crash or for any other reason. This means that although the initial credential load will succeed, any subsequent load attempt (after the files have been deleted) will fail.
This is also noted in the CyberArk Conjur docs, and even though it might be technically possible to force the pod to restart by getting it evicted (by exceeding the ephemeral storage limit, for example), this is not an ideal way to solve a credentials problem. In general, preventing users from accessing the containers is a more container-native way of solving the problem.
HashiCorp Vault CSI Provider
There is also a Vault CSI provider that achieves similar goals to the Agent Injector, but through different means. The demo ignores the CSI provider because HashiCorp say in the docs:
We recommend using the Vault agent injector on Openshift instead of the Secrets Store CSI driver. OpenShift does not recommend using hostPath mounting in production or certify Helm charts using CSI objects because pods must run as privileged. Pods will have elevated access to other pods on the same node, which OpenShift does not recommend.
and although the ACE demo pipeline works with any Kubernetes target (and non-container targets also!), it achieves wider coverage by using OpenShift-friendly technology.
IWHI, ACEaaS, and on-premises
While the description above has focused on containers due to their widespread adoption, ACE can also run as a managed service (ACE-as-a-Service on AWS and IBM webMethods Hybrid Integration (IWHI)) and on premises as integration nodes. These other form factors can run credentials scripts to load arbitrary credentials at startup, but there is no simple equivalent of the pod annotations or Kubernetes secrets that would provide the data to be read.
The best solution in these cases is likely to be automation that synchronizes credentials to the ACE-managed storage, pulling secrets from the vault and pushing them into ACE using mqsisetdbparms/mqsicredentials (on-prem) or via the admin API for ACEaaS and IWHI (see https://community.ibm.com/community/user/blogs/adam-roberts/2023/07/27/introducing-the-app-connect-public-api and the ACEaaS docs linked from there).
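A hedged sketch of the on-prem case (the node name, secret path, and Vault address are assumptions, and a real job would also need authentication and error handling):
#!/bin/bash
# Hypothetical sync job: pull a secret from Vault and push it into an
# integration node's ACE-managed credential store.
export VAULT_ADDR=https://vault.example.com
username=$(vault kv get -field=username secret/tea)
password=$(vault kv get -field=password secret/tea)
mqsisetdbparms TESTNODE -n jdbc::tea -u "$username" -p "$password"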
While it is technically feasible to use curl to access the Vault API from a credentials script running in an ACEaaS/IWHI server, any such script would require a Vault token to succeed, and that token would have to have been provided somehow. "Infinite lifetime" tokens are not advisable, so the token would need to be refreshed, and any solution that can provide tokens could just as easily provide the credentials themselves, avoiding the need for tokens at all.
ACE Dynamic Credentials
Credentials scripts are currently run only at startup, so any changes after that point are not picked up by the ACE server. This means ACE containers must be restarted to pick up new credentials rather than dynamically reading in the new values, and this is true even for ACE credential types that are considered "dynamic": while the server could accept new credentials, the scripts are never re-run to provide new values.
The ACE team do accept new feature requests at https://integration-development.ideas.ibm.com/ so it may be worth creating a new idea (after searching for any existing ones!) if this behavior is problematic. There are many ways this problem could be solved, including:
- Re-running the credentials scripts on a timer
- Monitoring specific directories and re-running scripts if the files in them change
- Running a new loadOneCredentialCommand to read credentials when a connection is made
Conclusion
Secrets managers are quite common across the container landscape, and containerized applications in most languages can easily interact with them using simple file operations; ACE likewise provides the capabilities needed to use credentials provided that way. The key piece is the ability to call scripts to load the credentials from whatever format is presented, and this has the potential to be extended to include reloading modified credentials should the ACE product team see enough interest in such an enhancement.