Author: James Ressler
Date: May 3, 2024
Updated: August 2024
Audience: Everyone
Environment: Self-hosted, Replicated - KOTS
Issue
When changing the hostname and deploying the application, the oauth and saml deployments can become stuck: Kubernetes keeps the old replica sets running while also trying to spin up the new ones, so the rollout never completes and the old replica sets are never scaled down. In the KOTS Admin Console, this shows up as an application status of "Updating" instead of "Ready." Under the Details tab, there will be two (2) entries each for deployment/oauth and deployment/saml: one pair shows Ready, and the other shows Missing or Waiting.
From the command line on the application server, running kubectl get pods will show two (2) saml pods and two (2) oauth pods, as in the example below.
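The output looks similar to the following; the pod names, hash suffixes, and ages here are illustrative:

    NAME                     READY   STATUS    RESTARTS   AGE
    oauth-5d8f7c9b6d-x2kqp   1/1     Running   0          42d
    oauth-7f6b8d4c5f-9wzrt   0/1     Pending   0          3m
    saml-6c9d8b7a4e-k8mvn    1/1     Running   0          42d
    saml-8e7f6a5b3c-p4jxd    0/1     Pending   0          3m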
Describing the saml or oauth deployment with kubectl describe deployment/saml or kubectl describe deployment/oauth will reveal the OldReplicaSets and NewReplicaSet fields:
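In the stuck state, both fields point at a replica set, similar to the truncated describe output below (the replica-set names are illustrative):

    OldReplicaSets:  saml-6c9d8b7a4e (1/1 replicas created)
    NewReplicaSet:   saml-8e7f6a5b3c (0/1 replicas created)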
Solution
Delete the current deployments with kubectl delete deployment/saml deployment/oauth, and then redeploy the application from the KOTS Admin Console.
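A minimal recovery sequence, assuming both deployments run in the namespace your kubectl context defaults to (add -n <namespace> otherwise):

    # Delete both stuck deployments in one command
    kubectl delete deployment/saml deployment/oauth

    # Confirm the duplicate pods have terminated before redeploying
    kubectl get pods

Once the duplicate pods are gone, redeploy the current version from the KOTS Admin Console; the deployments are recreated with a single replica set each, and the application status should return to "Ready."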