I am having another problem with the ISVD server container and don't understand why. Perhaps someone else has encountered the same problem.
The ISVD server container was running fine, but I had to add another suffix to the server configuration YAML under server.suffixes.
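For reference, the change was roughly like the snippet below (the suffix DNs are placeholders, not the customer's real values):

    server:
      suffixes:
        - "o=existing,c=de"      # suffix that was already configured
        - "o=newsuffix,c=de"     # newly added suffix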
To test the change in the customer's test environment, the persistent volume was deleted and recreated empty so that the container would rebuild everything from scratch.
The containers are running in a Kubernetes and Rancher environment.
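Concretely, the PVC backing the server's data directory was removed and a fresh one was created, roughly like this (claim name, storage class, and size are placeholders, not the real values):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: isvd-server-data        # placeholder claim name
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: standard    # placeholder storage class
      resources:
        requests:
          storage: 20Gi             # placeholder size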
However, the ISVD server container now ends up in a CrashLoopBackOff when it tries to start the database manager.
The log says:
    Containers with unready status: [isvd-server]
    "ibmslapd", "message": {"id": "GLPCCH026I", "text": "Adding change log to directory server instance: 'idsldap'."}
    "ibmslapd", "message": {"id": "GLPCTL017I", "text": "Cataloging database instance node: 'idsldap'."}
    "ibmslapd", "message": {"id": "GLPCTL018I", "text": "Cataloged database instance node: 'idsldap'."}
    "ibmslapd", "message": {"id": "GLPCTL008I", "text": "Starting database manager for database instance: 'idsldap'."}
The "Starting database manager" message (GLPCTL008I) is the last one logged, and the container never gets past that point.
The problem persists even after reverting the additional suffix and recreating the persistent volume from scratch. The seed container now gets stuck at the same point as well.
Does anyone have any idea what could be causing this?