Once we update the spec.version and spec.license, the subsystem will move to Pending status with the reason 'PreUpgradeCheckInProgress' and the message 'Preupgrade check job initiated. See status.PreUpgradeCheck for more details'.
status:
  conditions:
  - lastTransitionTime: "2024-10-25T08:10:04Z"
    message: 'Management instance being upgraded. Preupgrade check job initiated. See status.PreUpgradeCheck for more details. Not all services are ready, next pending services: consumer-catalog, s3proxy, analytics-ui, preupgrade, postgresDb'
    reason: PreUpgradeCheckInProgress
    status: "True"
    type: Pending
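As a rough sketch, the spec update that triggers this flow could be applied with kubectl patch. The CR kind, CR name and target version below are placeholders for illustration, not values taken from this walkthrough:

```shell
# Hypothetical example: bump spec.version (and, if needed, spec.license)
# on the management subsystem CR. Substitute your own CR name, namespace,
# target version and license ID.
kubectl patch managementcluster management \
  --type merge \
  -p '{"spec": {"version": "<target-version>", "license": {"license": "<license-id>"}}}'
```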
A status.preUpgradeCheck condition is added and will remain in the Pending state with the message 'preupgrade check job is in progress'.
status:
  preUpgradeCheck:
  - lastTransitionTime: "2024-10-25T08:10:08Z"
    message: preupgrade check job is in progress
    reason: Running
    status: "True"
    type: Pending
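This condition can also be read directly with a jsonpath query. The CR kind and name here are assumptions for illustration:

```shell
# Hypothetical: print only the preUpgradeCheck status block of the
# management subsystem CR named "management".
kubectl get managementcluster management \
  -o jsonpath='{.status.preUpgradeCheck}'
```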
Here we can see the management-preupgrade job has been created.
> kubectl get job
NAME                    COMPLETIONS   DURATION   AGE
management-preupgrade   0/1           53s        53s
> kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
management-preupgrade-v6c6f   1/1     Running   0          63s
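While the job is running, its progress can be followed from the pod logs, for example:

```shell
# Stream the logs of the pod backing the preupgrade job
kubectl logs -f job/management-preupgrade
```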
If the pre-upgrade check fails, the management-preupgrade job will automatically retry until the failing checks are rectified. The subsystem CR will remain in the 'Pending' status with the reason 'PreUpgradeCheckInProgress'.
status:
  conditions:
  - lastTransitionTime: "2024-10-25T08:10:04Z"
    message: 'Management instance being upgraded. Preupgrade check failed, retrying. See status.PreUpgradeCheck for more details. Not all services are ready, next pending services: consumer-catalog, s3proxy, analytics-ui, preupgrade, postgresDb'
    reason: PreUpgradeCheckInProgress
    status: "True"
    type: Pending
The 'status.preUpgradeCheck' block will contain more details about the failing checks. The message field includes the number of times the job has been retried, the name of the config map where the full output can be viewed, and up to the first 3 errors returned from the apicops command.
status:
  preUpgradeCheck:
  - lastTransitionTime: "2024-10-25T08:10:08Z"
    message: 'Retrying preupgrade check job (attempt 2). Previous check failed with errors, see config map "management-preupgrade" for more details. Rectify the failing checks for the upgrade to proceed. To abort the upgrade set the spec.Version value back to "10.0.5.7". Set the spec.License back to the previous value as well if that was also changed. Preupgrade check errors: APIC_APICOPS_0060E - Oplock configmap with the name "test-oplock" is not owned by "ManagementCluster". Actual owner: "cluster", name: "undefined".; Error: not all pre-upgrade-check command checks passed'
    reason: Retrying
    status: "True"
    type: Pending
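As the condition message notes, the upgrade can be aborted by reverting the spec to the previously installed version. A sketch, again with a placeholder CR kind and name:

```shell
# Hypothetical: abort the upgrade by setting spec.version back to the
# previous version quoted in the condition message (here 10.0.5.7).
# Revert spec.license as well if it was also changed.
kubectl patch managementcluster management \
  --type merge \
  -p '{"spec": {"version": "10.0.5.7"}}'
```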
The config map 'management-preupgrade' contains 'errors' and 'output' fields. The 'errors' field holds any error messages returned during execution of the 'apicops system:pre-upgrade-check' command, and the 'output' field holds the entire log output of the command. The config map retains the errors and output of up to the last 3 executions of the preupgrade job.
> kubectl get configmap management-preupgrade
NAME                    DATA   AGE
management-preupgrade   7      22h
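Individual fields can be read from the config map with a jsonpath query. The key name below follows the description above; the exact keys per execution may differ, so treat it as an assumption:

```shell
# Hypothetical: print the 'errors' data field from the config map
kubectl get configmap management-preupgrade \
  -o jsonpath='{.data.errors}'
```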
Once all failing checks have been rectified, the job will automatically retry and complete successfully. The 'status.preUpgradeCheck' condition transitions to Complete and the operand upgrade proceeds.
status:
  preUpgradeCheck:
  - lastTransitionTime: "2024-10-25T08:14:28Z"
    message: ""
    reason: Complete
    status: "True"
    type: Complete
Once the upgrade is complete, the 'status.preUpgradeCheck' condition block is removed from the subsystem CR.
With the introduction of automatic pre-upgrade checks across subsystems, we hope to have taken another positive step toward improving the upgrade experience of API Connect.