Redeploy Trying to Create Old Replica Sets - KOTS


Author: James Ressler

Date: May 3, 2024

Audience: Everyone

Environment: Self-hosted, Replicated - KOTS


When the hostname is changed and the application is redeployed, the oauth and saml services may try to spin up their old replica sets alongside the new ones, typically because the deployment did not fully roll over to the updated configuration. In the admin console this appears as an application status of "Updating" instead of "Ready." Under the Details tab there will be two (2) entries each for deployment/oauth and deployment/saml: one pair showing Ready and the other showing Missing or Waiting.

From the command line on the application server, running kubectl get pods will show two (2) saml pods and two (2) oauth pods instead of one of each.
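A quick check, assuming the application runs in the current namespace (add -n <namespace> otherwise); the pod names and hashes below are illustrative, not taken from a real cluster:

```shell
# Two oauth pods and two saml pods indicate a stale replica set
# alongside the new one (names/hashes are illustrative)
kubectl get pods
# NAME                     READY   STATUS    RESTARTS   AGE
# oauth-6d4f9c7b8d-x2kqp   1/1     Running   0          2d
# oauth-7f8b5d9c4e-m1wzr   0/1     Pending   0          5m
# saml-5c6d8e7f9a-k3jtn    1/1     Running   0          2d
# saml-8a9b7c6d5e-p4vqs    0/1     Pending   0          5m
```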

Describing either deployment with kubectl describe deployment/saml or kubectl describe deployment/oauth will reveal the OldReplicaSets and NewReplicaSet fields:
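On a healthy deployment, OldReplicaSets reads <none>; a named replica set there confirms the stuck rollout. A sketch of the check, with illustrative replica set names:

```shell
# Show only the replica-set fields of the describe output
kubectl describe deployment/saml | grep -E 'OldReplicaSets|NewReplicaSet'
# OldReplicaSets:  saml-5c6d8e7f9a (1/1 replicas created)
# NewReplicaSet:   saml-8a9b7c6d5e (1/1 replicas created)
```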


Delete the current deployments with kubectl delete deployment/saml deployment/oauth and then redeploy the application from the KOTS Admin Console.
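The cleanup step above can be run as follows, assuming the deployments live in the current namespace (add -n <namespace> otherwise). Deleting a deployment garbage-collects its replica sets and pods, and the subsequent deploy from the KOTS Admin Console recreates both deployments cleanly:

```shell
# Remove the stuck deployments; their replica sets and pods go with them
kubectl delete deployment/saml deployment/oauth

# Verify the old pods are gone before redeploying from the Admin Console
kubectl get pods
```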

