We scaled down and then scaled back up the master and node-pool autoscaling groups for our kops cluster running v1.15.8. The master node is Ready and I am able to run kubectl commands, but `kubectl get nodes` still lists the old nodes that were terminated during the scale-down. When I run `kops validate cluster`, validation fails and reports the new nodes as "machine not yet joined cluster". I suspect an issue with etcd, since the old nodes are still showing. What steps can I take to bring my cluster back?

Update: I ran `kops rolling-update cluster --force --yes`, but the nodes still did not join the cluster.
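For context, here is a rough diagnostic sketch of the commands I have been using (the instance-group and node names are placeholders, and the exact etcd-manager pod name will differ per cluster):

```shell
#!/bin/sh
# Diagnostic sketch for a kops cluster where old nodes linger and new
# nodes never join. Names like NODE_NAME are placeholders, not real values.

# 1. Compare what the API server believes vs. what kops expects.
kubectl get nodes -o wide
kops validate cluster

# 2. Stale Node objects for terminated instances can be deleted manually;
#    the kubelet on a live machine would re-register itself, a dead one won't.
# kubectl delete node NODE_NAME

# 3. Check control-plane health, including the etcd-manager static pods
#    (pod names look like etcd-manager-main-<master-hostname>).
kubectl get pods -n kube-system
# kubectl logs -n kube-system etcd-manager-main-MASTER_HOSTNAME

# 4. Force-replace all instances once the control plane looks healthy.
# kops rolling-update cluster --force --yes
```

The destructive commands are commented out; I only ran them after confirming the affected node names against the cloud provider's console.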