Adjust controller lock to be based on controller name #258
@@ -0,0 +1,60 @@

```yaml
# This is the cluster-scoped example - on k3s/RKE2 this part is unnecessary, as the controller is deployed out of the box.
# To use this example on k3s/RKE2, exclude this part or disable the embedded controller.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helm-controller
  # Note: this will still deploy to the default namespace
  labels:
    app: helm-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm-controller
  template:
    metadata:
      labels:
        app: helm-controller
    spec:
      containers:
        - name: helm-controller
          image: rancher/helm-controller:v0.12.1
          command: ["helm-controller"]
          # Alternatively, on k3s/RKE2, you can deploy an example with 3 instances of helm-controller using:
          # (the 3 being: the k3s/RKE2 built-in helm-controller, this second cluster-scoped instance with a unique name, and the final namespace-scoped one)
          # args: ["--controller-name", "second-cluster-scoped-instance"]
# This section and below are the namespace-scoped examples.
---
apiVersion: v1
kind: Namespace
metadata:
  name: helm-controller
  labels:
    name: helm-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helm-controller
  namespace: helm-controller
  labels:
    app: helm-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm-controller
  template:
    metadata:
      labels:
        app: helm-controller
    spec:
      containers:
        - name: helm-controller
          image: rancher/helm-controller:v0.12.1
          command: ["helm-controller"]
          # Note: only one `helm-controller` can hold the lock at a time, so you must set a unique name.
          # The default controller-name must be overridden for both controllers to work at the same time.
          args: ["--namespace", "helm-controller", "--controller-name", "helm-controller-namespaced"]
```
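A minimal sketch of why the unique `--controller-name` matters: leader election is first-come-first-served per lock name, so two controllers sharing a lock name contend for one lease, while distinct names coexist. The `acquire` helper and the in-memory map below are hypothetical stand-ins for the Kubernetes lease mechanism, not the real client-go API:

```go
package main

import "fmt"

// acquire is a toy stand-in for Kubernetes leader election: the first
// caller to claim a given lock name wins; later callers are refused.
func acquire(leases map[string]string, lockName, holder string) bool {
	if _, taken := leases[lockName]; taken {
		return false
	}
	leases[lockName] = holder
	return true
}

func main() {
	leases := map[string]string{}

	// Both controllers use the default lock name: only the first one wins.
	fmt.Println(acquire(leases, "helm-controller-lock", "embedded-k3s"))   // true
	fmt.Println(acquire(leases, "helm-controller-lock", "second-instance")) // false

	// With a unique --controller-name, the second controller derives its
	// own lock name and can run alongside the embedded one.
	fmt.Println(acquire(leases, "helm-controller-namespaced-helm-controller-lock", "second-instance")) // true
}
```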
```diff
@@ -2,6 +2,7 @@ package controllers

 import (
 	"context"
+	"strings"
 	"time"

 	"github.com/k3s-io/helm-controller/pkg/controllers/chart"

@@ -106,7 +107,13 @@ func Register(ctx context.Context, systemNamespace, controllerName string, cfg c
 		klog.Infof("Starting helm controller in namespace %s", systemNamespace)
 	}

-	leader.RunOrDie(ctx, systemNamespace, "helm-controller-lock", appCtx.K8s, func(ctx context.Context) {
+	controllerLockName := "helm-controller-lock"
+	if controllerName != "helm-controller" {
+		klog.Infof("Starting helm controller using alias `%s`", controllerName)
+		controllerLockName = strings.Join([]string{controllerName, controllerLockName}, "-")
+	}
+
+	leader.RunOrDie(ctx, systemNamespace, controllerLockName, appCtx.K8s, func(ctx context.Context) {
 		if err := appCtx.start(ctx); err != nil {
 			klog.Fatal(err)
 		}
```

Reviewer comment (on the lock-name change):
Just to be clear, this is only necessary if you're running multiple in the same namespace, right? I guess it was envisioned that the controllers would run in separate namespaces, and watch whatever namespace they're deployed to. It sounds like you're trying to deploy multiple controllers to the same namespace, while watching other namespaces?

Author reply:
I would clarify this to say "multiple in the same scope", since PrometheusFederator exists (by default) as a cluster-scoped controller. However it sets a
And similarly, the PrometheusFederator controller will never get the lock, because an embedded k3s helm-controller will always get the lock first. The example manifest is just an (admittedly contrived) example to try and show that dynamic without needing to directly involve PrometheusFederator images/charts in testing/fixing this.
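The lock-name derivation from the diff can be sketched in isolation. `deriveLockName` is a hypothetical helper name for illustration; in the PR the logic is inline in `Register`:

```go
package main

import (
	"fmt"
	"strings"
)

// deriveLockName mirrors the PR's logic: the default controller keeps the
// historical lock name, while any aliased controller prefixes the lock
// name with its own controller name so the two never contend.
func deriveLockName(controllerName string) string {
	lockName := "helm-controller-lock"
	if controllerName != "helm-controller" {
		lockName = strings.Join([]string{controllerName, lockName}, "-")
	}
	return lockName
}

func main() {
	fmt.Println(deriveLockName("helm-controller"))            // helm-controller-lock
	fmt.Println(deriveLockName("helm-controller-namespaced")) // helm-controller-namespaced-helm-controller-lock
}
```

Note that because only non-default names are prefixed, existing single-controller deployments keep their old lease name and upgrades are seamless.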
Reviewer comment (on the example manifest):
I'm confused what's going on with this example - we deploy one controller to the default namespace that watches all namespaces, and another to the helm-controller namespace that watches only its own namespace? Unless I'm missing something this doesn't seem like a valid configuration. You could run one that watches all namespaces, or multiple that watch specific namespaces, but the two should not be mixed, otherwise the global one will still step on the namespaced ones.
Author reply:
That is essentially the behavior this change seeks to modify - expanding on the prior work around the ManagedBy field to allow co-existence of multiple controllers, either for a `helm-controller`-type deployment, or when the controller is re-used as a library as with PrometheusFederator.