Adjust controller lock to be based on controller name #258
Conversation
Signed-off-by: Dan Pock <[email protected]>
Force-pushed from 7a8ca5c to 08ac400
# This is the cluster scoped example - in k3s/RKE2 this is unnecessary as it will be deployed out of the box.
# To use this example on k3s/RKE2 you should exclude this part, or disable the embedded controller.
I'm confused about what's going on with this example - we deploy one controller to the default namespace that watches all namespaces, and another to the helm-controller namespace that watches only its own namespace?
Unless I'm missing something this doesn't seem like a valid configuration. You could run one that watches all namespaces, or multiple that watch specific namespaces, but the two should not be mixed otherwise the global one will still step on the namespaced ones.
> You could run one that watches all namespaces, or multiple that watch specific namespaces, but the two should not be mixed otherwise the global one will still step on the namespaced ones.
That is essentially the behavior this is seeking to modify - expanding on the prior work around the ManagedBy field to allow co-existence of multiple controllers, whether that's a helm-controller-style deployment or the controller re-used as a library, as with PrometheusFederator.
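Purely for illustration, here is a minimal sketch of how a managedBy-style gate might let differently named controllers co-exist. The annotation key (`example.io/managed-by`), the `chart` type, and the `shouldReconcile` helper are all hypothetical stand-ins, not helm-controller's actual code:

```go
// Sketch of gating reconciliation on a managedBy-style annotation, so that two
// differently named controllers never process each other's charts.
package main

import "fmt"

// chart is a minimal stand-in for a HelmChart object's metadata.
type chart struct {
	Name        string
	Annotations map[string]string
}

const managedByAnnotation = "example.io/managed-by" // hypothetical key

// shouldReconcile returns true only when the chart is unclaimed or claimed by
// this controller's name.
func shouldReconcile(c chart, controllerName string) bool {
	owner, ok := c.Annotations[managedByAnnotation]
	if !ok || owner == "" {
		return true // unclaimed charts are handled by whichever controller sees them
	}
	return owner == controllerName
}

func main() {
	c := chart{Name: "demo", Annotations: map[string]string{managedByAnnotation: "prometheus-federator"}}
	fmt.Println(shouldReconcile(c, "helm-controller"))      // false: owned by another controller
	fmt.Println(shouldReconcile(c, "prometheus-federator")) // true
}
```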
controllerLockName := "helm-controller-lock"
if controllerName != "helm-controller" {
	klog.Infof("Starting helm controller using alias `%s`", controllerName)
	controllerLockName = strings.Join([]string{controllerName, controllerLockName}, "-")
}
just to be clear, this is only necessary if you're running multiple in the same namespace, right? I guess it was envisioned that the controllers would run in separate namespaces, and watch whatever namespace they're deployed to. It sounds like you're trying to deploy multiple controllers to the same ns, while watching other namespaces?
> this is only necessary if you're running multiple in the same namespace, right?
I would clarify this to say: "multiple in the same scope".
PrometheusFederator exists (by default) as a cluster-scoped controller. However, it sets a managedBy annotation matching its own name, which means any controller named helm-controller will never deploy those charts.
And similarly, the PrometheusFederator controller will never get the lock, because an embedded k3s helm-controller will always acquire the lock first.
The example manifest is just an (admittedly contrived) example to try to show that dynamic without needing to directly involve PrometheusFederator images/charts in testing/fixing this.
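To make that lock dynamic concrete, here's a minimal sketch that feeds a per-controller lock name into client-go leader election. The alias, namespace, and bootstrap wiring are assumptions for the example; helm-controller's real startup path may differ:

```go
// Sketch: derive a per-controller Lease name and use it for leader election, so
// controllers with different names never contend for the same lock.
package main

import (
	"context"
	"os"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// lockNameFor mirrors the logic in this PR: the default controller keeps the
// historical lock name, while any alias gets its own prefixed lock.
func lockNameFor(controllerName string) string {
	lockName := "helm-controller-lock"
	if controllerName != "helm-controller" {
		lockName = strings.Join([]string{controllerName, lockName}, "-")
	}
	return lockName
}

func main() {
	controllerName := "prometheus-federator" // example alias
	namespace := "cattle-monitoring-system"  // hypothetical namespace

	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      lockNameFor(controllerName), // "prometheus-federator-helm-controller-lock"
			Namespace: namespace,
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the controller's informers/handlers here
			},
			OnStoppedLeading: func() {
				os.Exit(1)
			},
		},
	})
}
```

With distinct Lease names per controller, an alias and the default controller never contend for the same lock, even when they share a namespace.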
Hiya @brandond - just realized that I think I overlooked some important details when crafting this fix and considering the "bug" I thought I saw. I'm seeing your point now about why this change doesn't make much sense. My initial understanding overlooked that the leases were already namespaced.

So even when using multiple namespaced controllers, it doesn't matter if the leases/locks all get the same name, since each lives in its own namespace. That's also assuming that the embedding package implements the top-level setup in the same way.

Beyond my confusion, the only other valid potential concern is that CRDs are (understandably) installed outside of the controller logic. However, that also (potentially sub-optimally) means that every instance updates CRDs every time it starts up. I'm just less clear on whether there's a simple/obvious solution to that aspect.

So out of all of that, before I close this I have two questions:
Yeah, that seems useful.
IIRC, if the controller's RBAC doesn't allow it access to manage CRDs, it will just skip syncing them on startup. So maybe in your case, just create the CRDs externally (in the embedding controller, or so on) and ensure that the running controllers' RBAC does not include write access to CRDs.
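As an aside on that skip behavior: a controller could make it explicit by asking the API server whether it is allowed to touch CRDs before attempting the sync. Here is a sketch using a SelfSubjectAccessReview; the `canManageCRDs` helper is hypothetical, and whether helm-controller performs exactly this check is an assumption:

```go
// Sketch: ask the API server whether this service account may update CRDs,
// and skip the CRD sync step entirely if it may not.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// canManageCRDs reports whether the current identity is allowed to update
// CustomResourceDefinitions, using a SelfSubjectAccessReview.
func canManageCRDs(ctx context.Context, client kubernetes.Interface) (bool, error) {
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Group:    "apiextensions.k8s.io",
				Resource: "customresourcedefinitions",
				Verb:     "update",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, review, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ok, err := canManageCRDs(context.Background(), client)
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("no CRD write access; skipping CRD sync (CRDs must be installed externally)")
		return
	}
	// ...apply/update CRDs here...
}
```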
Appreciate the feedback here - I'll rebuild this PR to the smaller scope of adjusting the name. Then at some point soon, but not now, I may still explore options for a CRD-focused lease/lock.
A simpler PR to update the remaining static lock name.
This PR was created while exploring solutions to an issue with Prometheus Federator's usage of helm-controller. More context here: rancher/prometheus-federator#141 (comment)

From what I've found, the lock mechanism is one of the remaining changes needed to more elegantly support - for lack of a better term - "multiple helm-controllers in a single cluster". Our use case is specific to PromFed; however, I suspect it's better to generalize the idea of supporting this, since generalizing has the same challenges and doesn't involve the external project.