hold startup until ConfigMaps are ready or die #1172
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #1172      +/-   ##
==========================================
- Coverage   81.12%   81.04%   -0.08%
==========================================
  Files          18       18
  Lines        1462     1456       -6
==========================================
- Hits         1186     1180       -6
  Misses        219      219
  Partials       57       57

☔ View full report in Codecov by Sentry.
/retest
func ensureCtxWithConfigOrDie(ctx context.Context) context.Context {
    var err error
    var cfg *store.Config
    if pollErr := wait.PollUntilContextTimeout(ctx, time.Second, 60*time.Second, true, func(context.Context) (bool, error) {
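For context, a minimal sketch of how the complete function might read, assuming hypothetical store.Load and store.ToContext helpers and an assumed import path for the config store (the actual store package in this PR may expose different names):

import (
    "context"
    "log"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"

    "example.com/project/pkg/store" // assumed import path for the config store
)

func ensureCtxWithConfigOrDie(ctx context.Context) context.Context {
    var err error
    var cfg *store.Config
    if pollErr := wait.PollUntilContextTimeout(ctx, time.Second, 60*time.Second, true, func(context.Context) (bool, error) {
        // Retry every second until the ConfigMap-backed configuration loads.
        cfg, err = store.Load(ctx) // store.Load is an assumed helper
        return err == nil, nil
    }); pollErr != nil {
        // The ConfigMaps never became ready within the timeout (or the context
        // was cancelled): die so the pod restarts and gets another attempt.
        log.Fatalf("failed to load configuration: %v (last error: %v)", pollErr, err)
    }
    return store.ToContext(ctx, cfg) // store.ToContext is an assumed helper
}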
Do we want to make the timeout configurable just in case someone has a slow env?
Hm, I don't think that's necessary. It only matters until all the YAMLs are applied during installation (from kubectl or the operator). I don't think that should ever take longer than 60 seconds, and if it does, the pod just restarts and you get another 60s.
I don't know what value is good enough, tbh; 1m might be enough.
Restarting is an option, but you probably don't want people to have to do that when they have a big, slow cluster (API server under pressure).
What I mean is: if they know they have a slow cluster, they can configure the timeout high enough (e.g. via an env var) and forget about it; otherwise they have to monitor the pod and restart it themselves.
Thinking out loud here.
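If an override were added, one minimal sketch could look like this (the STARTUP_CONFIG_TIMEOUT variable name and the 60s fallback are illustrative assumptions, not something this PR implements):

import (
    "os"
    "time"
)

// startupConfigTimeout returns the poll timeout, optionally overridden via an
// environment variable so operators with slow clusters can raise it.
func startupConfigTimeout() time.Duration {
    if v := os.Getenv("STARTUP_CONFIG_TIMEOUT"); v != "" {
        if d, err := time.ParseDuration(v); err == nil {
            return d
        }
    }
    return 60 * time.Second // current hard-coded default
}

The returned duration would then replace the hard-coded 60*time.Second argument to wait.PollUntilContextTimeout.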
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ReToCode, skonto

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Changes
- configmap.Watcher (a rough sketch of the watcher wiring is below)

Fixes #968

/assign @skonto
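For readers unfamiliar with the watcher, this is roughly the knative.dev/pkg InformedWatcher pattern the change builds on; the function name, the UntypedStore type of cfgStore, and the wiring here are assumptions for illustration and may differ from the actual code in this PR:

import (
    "context"
    "log"

    "k8s.io/client-go/kubernetes"
    "knative.dev/pkg/configmap"
    "knative.dev/pkg/configmap/informer"
    "knative.dev/pkg/system"
)

// setupConfigWatcherOrDie is an assumed helper name, not taken from this PR.
func setupConfigWatcherOrDie(ctx context.Context, kubeClient kubernetes.Interface, cfgStore *configmap.UntypedStore) {
    watcher := informer.NewInformedWatcher(kubeClient, system.Namespace())
    // Register the ConfigMaps the component cares about before starting.
    cfgStore.WatchConfigs(watcher)
    // Start waits for the informer cache to sync and errors if a watched
    // ConfigMap does not exist, so failing hard here keeps the pod from
    // running with missing configuration.
    if err := watcher.Start(ctx.Done()); err != nil {
        log.Fatalf("failed to start configmap watcher: %v", err)
    }
}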