Active failover mode cannot be auto-upgraded #667
Comments
Hello! The agent should be restarted with this flag by the Operator. Did the agent fail during the first restart? If the Operator did its job properly, even a restart during the upgrade should be fine. We test this during the QA phase (it is one of our standard scenarios). Can you share your ArangoDeployment with its current status, plus the ArangoDeployment events and the operator logs (grep -i action)? Best Regards
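For reference, that information can usually be gathered with commands along these lines (the deployment name is a placeholder, and the operator deployment name arango-deployment-operator is an assumption that depends on how the operator was installed):

    # <deployment-name> is a placeholder for your ArangoDeployment resource name
    kubectl get arangodeployment <deployment-name> -o yaml        # full spec plus current status
    kubectl describe arangodeployment <deployment-name>           # includes recent events
    # operator deployment name is an assumption; adjust to your installation
    kubectl logs deployment/arango-deployment-operator | grep -i action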
Hi Adam, yes, the agent failed with an error. It seems the operator is not adding the --database.auto-upgrade flag.
Hi, thanks for the response. I tried a scenario with only a DB update (NO operator update); this is what I did:
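For context, a DB-only upgrade is typically triggered by changing spec.image on the ArangoDeployment; a minimal sketch of that step (the deployment name is a placeholder, and this is not necessarily the exact procedure used above):

    # <deployment-name> is a placeholder; 3.7.3 is the target version from this issue
    kubectl patch arangodeployment <deployment-name> --type merge \
      -p '{"spec":{"image":"arangodb/arangodb:3.7.3"}}'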
Hi @ajanikow, we changed imageDiscoveryMode to
May I have some help?
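As a side note, that field lives at spec.imageDiscoveryMode on the ArangoDeployment (as far as I know the documented values are kubelet and direct), and the active setting can be checked with something like:

    # <deployment-name> is a placeholder
    kubectl get arangodeployment <deployment-name> -o jsonpath='{.spec.imageDiscoveryMode}'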
I think this is caused by upgrading the operator and the DB at the same time. The operator upgrade triggers a rolling update, and if we do a DB upgrade at the same time, a pod (which could be an agent or a DB pod) is left in an error state. I will need to make sure the DB upgrade happens after the rolling update. So what would be the recommended way to know that a rolling update is done?
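One heuristic for this (an assumption on my part, not necessarily the officially recommended check) is to wait until the deployment's status.plan is empty and all member pods are ready, for example:

    # <deployment-name> is a placeholder; an empty result suggests no pending operator actions
    kubectl get arangodeployment <deployment-name> -o jsonpath='{.status.plan}'
    # the arango_deployment label key is an assumption; adjust to the labels on your pods
    kubectl get pods -l arango_deployment=<deployment-name>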
Dear kube-arangodb team,
Our cluster is running in active failover mode, and we hit an issue when upgrading from 3.6.5 to 3.7.3.
Here are some details:
Thus, we are missing the --database.auto-upgrade flag.
From the ArangoDB documentation here: https://www.arangodb.com/docs/stable/deployment-kubernetes-upgrading.html
So I believe --database.auto-upgrade should be added by the kube-arangodb operator. May I have some help, please? Thank you.
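A quick way to check whether the operator actually passed the flag to the agent is to grep the pod spec (assuming the flag would appear there; the pod name is a placeholder):

    # <agent-pod-name> is a placeholder for the failing agent pod
    kubectl get pod <agent-pod-name> -o yaml | grep -- --database.auto-upgrade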