Hi,
We are trying to tackle ISVA container reloads after an LMI publish in a controlled way. Our requirements are the following:
- avoid any toil - in this case, needing to get a Kubernetes admin to do pod restarts
- avoid service unavailability - e.g. reload/restart only the affected containers, one by one when there is more than one replica
Configuration changes can happen in three scenarios:
- the LMI UI
- Ansible playbooks running as Kubernetes jobs
- a "block" application which sets backend apps to maintenance mode via REST API calls that modify the authzrules attached to junctions
Originally, we had AUTO_RELOAD_FREQUENCY set, but we came to the conclusion that there is no way to control it well enough. It would reload/restart all pods at pretty much the same time, causing an outage.
Our evolved solution for this was to attach another Kubernetes job running a script which, with the help of kubectl commands, loops through the pods, runs isva_cli reload check, and if the outcome indicates a restart is needed, runs isva_cli reload all (a sketch follows the problem list below). This works fine and we can run it periodically as a cronjob. There are certain minor problems with this approach:
- the script goes through the pods of a deployment one by one. In our prod environment, currently with 12 WebSEALs, a DSC and 2 runtimes, it takes roughly 30 minutes for the script to complete; the time is spent mostly on isva_cli command executions. This means our cronjob can run only at a 30-minute interval
- the block application mentioned above can trigger a publish whenever admins set maintenance modes. A publish may therefore land while the cronjob reload job is already executing. Potentially this could leave some pods in an inconsistent state for up to 30 minutes, until the job runs again
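For reference, the reload job is essentially the following loop (a simplified sketch: the namespace, labels and deployment names are placeholders, and the string matched from the isva_cli reload check output is an assumption, adjust it to whatever your containers actually print):

    #!/bin/bash
    # Loop through the ISVA pods one by one and reload only those that
    # report a pending configuration change.
    NAMESPACE="isva"                       # placeholder namespace

    for deploy in webseal dsc runtime; do  # placeholder deployment names
      pods=$(kubectl -n "$NAMESPACE" get pods -l app="$deploy" \
             -o jsonpath='{.items[*].metadata.name}')
      for pod in $pods; do
        # Ask the container whether the published configuration needs a reload.
        out=$(kubectl -n "$NAMESPACE" exec "$pod" -- isva_cli reload check)
        if echo "$out" | grep -qi "reload"; then
          # Reload this pod only, then continue to the next one.
          kubectl -n "$NAMESPACE" exec "$pod" -- isva_cli reload all
        fi
      done
    done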
It does look like isva_cli reload check does not understand whether the modifications actually affect containers in different roles. It just sees that a configuration has been published and then basically triggers a reload for everything, one by one.
Thinking further, we could attach the same image that currently runs as a job to our LMI deployment as a sidecar, make it sniff for published configurations and then trigger the same pod reloads/restarts. This might be a better approach than having separate cronjobs.
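As a rough sketch, the sidecar entrypoint could just be the same loop on a short poll interval (simplified; the script path and the 30-second interval are assumptions, and a real version would want an actual signal from the LMI that a publish happened rather than blind polling):

    #!/bin/bash
    # Sidecar entrypoint: reuse the reload loop from the job above,
    # but run it continuously instead of from a 30-minute cronjob.
    while true; do
      /scripts/reload-pods.sh   # hypothetical path to the loop shown earlier
      sleep 30                  # assumed poll interval
    done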
We are also aware that isva_cli reload is likely not an option later on. At the moment we do not have slimmed-down WebSEAL containers in use; when we do, the cronjob-based approach becomes useless. The script could just restart pods every 30 minutes, but that does not sound good. If we moved the script to a sidecar it would be a bit better, but still, restarting all pods, even one by one, does not sound very good, albeit something we could live with.
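If it comes to that, a plain rolling restart would at least let Kubernetes handle the one-by-one part for us (standard kubectl; namespace and deployment name are placeholders):

    # Kubernetes replaces pods one at a time per the deployment's rollout
    # strategy, so the service stays available during the restart.
    kubectl -n isva rollout restart deployment/webseal
    kubectl -n isva rollout status deployment/webseal   # wait until done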
I wonder if there are plans in future ISVA versions to handle this kind of scenario? Some sort of orchestrator which would trigger the configuration updates on the pod instances?
------------------------------
Jan Lindstam
------------------------------