Logs and organization cleanup #208
Conversation
Force-pushed e2977d4 to ba15c1e
pkg/controller/cluster/cluster.go (outdated)

}

func (c *ClusterReconciler) reconcileCluster(ctx context.Context, cluster *v1alpha1.Cluster) error {
    log := ctrl.LoggerFrom(ctx)
    log.Info("reconcileCluster")
Can we add a more informative message instead of just the function name? Maybe "reconciling cluster", plus a field with the cluster name/namespace.
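One possible reading of that suggestion, as a sketch (the key name and message are illustrative, not a concrete proposal from this thread):

log := ctrl.LoggerFrom(ctx).WithValues("cluster", cluster.Namespace+"/"+cluster.Name)
log.Info("reconciling cluster")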
I haven't added any more information because when you get the logger from controller-runtime it is already filled with a lot of information, so you just need to take care of the message:
{
"level": "info",
"timestamp": "2025-01-27T23:04:59.854Z",
"msg": "reconcileCluster",
"controller": "cluster",
"controllerGroup": "k3k.io",
"controllerKind": "Cluster",
"Cluster": {
"name": "cluster1",
"namespace": "default"
},
"namespace": "default",
"name": "cluster1",
"reconcileID": "eb32a4f6-ca66-49e1-8bf5-2946f83f5e49"
}
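For context, a self-contained sketch of the call site from the diff above (names simplified; this is not the exact k3k code):

package cluster

import (
    "context"

    ctrl "sigs.k8s.io/controller-runtime"
)

type ClusterReconciler struct{}

// By the time reconcileCluster runs, controller-runtime has already stored a
// logger in ctx carrying the controller name, controllerGroup/Kind, the
// object's namespace/name, and a reconcileID, so the call site only needs to
// supply the message.
func (c *ClusterReconciler) reconcileCluster(ctx context.Context) error {
    log := ctrl.LoggerFrom(ctx)
    log.Info("reconcileCluster")

    return nil
}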
Also, the next log entry is the one from the underlying function that gets called (reconcileCluster):
{
"level": "info",
"timestamp": "2025-01-27T23:04:59.854Z",
"msg": "reconcileCluster",
"controller": "cluster",
"controllerGroup": "k3k.io",
"controllerKind": "Cluster",
"Cluster": {
"name": "cluster1",
"namespace": "default"
},
"namespace": "default",
"name": "cluster1",
"reconcileID": "eb32a4f6-ca66-49e1-8bf5-2946f83f5e49"
}
If the object were being finalized, we would instead see the other entry, from log.Info("finalizeCluster").
I agree that the message should be more readable. What about adding a new key-value pair matching the calling function, to identify it easily? Something like:
log := ctrl.LoggerFrom(ctx).WithValues("caller", "finalizeCluster")
log.Info("starting cluster deletion")
I think I like this approach!
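A sketch of that approach applied to the finalize path; everything except the two logger lines from the suggestion above is an assumption about the surrounding code:

func (c *ClusterReconciler) finalizeCluster(ctx context.Context, cluster *v1alpha1.Cluster) error {
    // On top of the fields controller-runtime injects, tag every entry from
    // this function with a "caller" key so its origin is easy to identify.
    log := ctrl.LoggerFrom(ctx).WithValues("caller", "finalizeCluster")

    log.Info("starting cluster deletion")

    // ... cleanup logic unchanged ...
    return nil
}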
yeah that works!
pkg/controller/cluster/cluster.go (outdated)

    podList v1.PodList
)
- log := c.logger.With("Cluster", req.NamespacedName)
+ log := ctrl.LoggerFrom(ctx)
So, the reason I pass the logger is that it is initialized in main for this specific controller. Since we have many controllers in the same process, I initialize the logger as follows:

logger: logger.Named(clusterController)

This basically initializes the logger with the name of the controller, as well as adding fields related to that controller.
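Roughly, that wiring looks like this; a sketch only, where the constant value and struct shape are assumptions based on the snippet above:

package main

import "go.uber.org/zap"

const clusterController = "k3k-cluster-controller" // illustrative name

type ClusterReconciler struct {
    logger *zap.SugaredLogger
}

func main() {
    base, _ := zap.NewProduction()

    // One process-wide zap logger; each controller receives a named child,
    // so its entries carry the controller name in the "logger" field.
    _ = &ClusterReconciler{logger: base.Sugar().Named(clusterController)}
}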
I actually forgot about the other controllers we have. Not using the controller-runtime logger will result in even less info though, and a very long logger name:
{
"level": "info",
"timestamp": "2025-01-27T23:04:59.792Z",
"logger": "k3k-pod-controller.k3k-clusterset-controller.k3k-node-controller",
"msg": "Handle etcd server pod",
"Pod": "default/k3k-cluster1-server"
}
Changing the current implementation to use the one from controller-runtime instead:
log := ctrl.LoggerFrom(ctx)
// log := p.logger.With("Pod", req.NamespacedName)
will result in this log:
{
"level": "info",
"timestamp": "2025-01-27T23:55:17.564Z",
"msg": "Handle etcd server pod",
"controller": "k3k-pod-controller",
"namespace": "default",
"name": "k3k-cluster1-server",
"reconcileID": "06cc4bdf-4e7a-4f92-9606-01ccdf0384a9"
}
As you can see, the controller is already injected ("controller": "k3k-pod-controller"), and a very nice thing is the reconcileID: with that you can follow all the events that are part of the same reconciliation request.
Sorry if I am not very knowledgeable about this, but if I understand correctly LoggerFrom is using logr? Will that result in two sets of loggers in the k3k controllers?
Basically, LoggerFrom fetches the logger set in controller-runtime. In main.go we were setting it like this:

ctrlruntimelog.SetLogger(zapr.NewLogger(logger.Desugar().WithOptions(zap.AddCallerSkip(1))))

so we get a zap logger. logr is just an API (or "interface"?) and works with whichever logger implementation you set; as you can see we use zapr.NewLogger to plug zap into the logr interface, but the underlying logger is still the zap one. The only difference is that now we can get it directly from the controllers, already filled with some information about the reconciliation loop.
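A minimal, self-contained version of that setup (error handling simplified; this mirrors the line above rather than the exact k3k main.go):

package main

import (
    "github.com/go-logr/zapr"
    "go.uber.org/zap"
    ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
    // Build the concrete zap logger...
    zapLog, err := zap.NewProduction()
    if err != nil {
        panic(err)
    }

    // ...and hand it to controller-runtime through the logr adapter. From then
    // on, ctrl.LoggerFrom(ctx) in any reconciler returns a logr.Logger backed
    // by this zap instance.
    ctrl.SetLogger(zapr.NewLogger(zapLog))
}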
That's cool, thank you!
@galal-hussein thanks for the review! As a wrap-up: using the logger injected in the context already provides all the core information about the controller, namespace, and name.

I think I will need to find out where the … What do you think about this?
Force-pushed ba15c1e to a7672ee
Okay, as long as we have structured logging with informative log messages, I am okay with it. One of the reasons I went with the first approach, though, is a plan to add more structured loggers behind the interface: basically we would have the option to use, for example, zap, logrus, or logr. As for the cluster name/namespace, I just missed adding it, but it is being added in other controllers, for example https://github.com/rancher/k3k/blob/main/pkg/controller/clusterset/clusterset.go#L124
Force-pushed a7672ee to 3b99994
@galal-hussein I've implemented the requested changes, PTAL! One thing: I have not added the "caller" value, for two reasons. The first is that adding the calling function every time would give the log multiple values if we re-inject the logger into the context. But the main reason is that this value is already provided by default in the Config, it just needs to be enabled. Should I enable it?
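Assuming the "Config" here is zap's zap.Config (my reading of the comment), the relevant knob looks like this:

import "go.uber.org/zap"

func buildLogger() (*zap.Logger, error) {
    cfg := zap.NewProductionConfig()

    // The caller annotation is controlled by DisableCaller; it is on by
    // default in the production config. When enabled, every entry carries a
    // "caller" field with the file:line of the logging call site.
    cfg.DisableCaller = false

    return cfg.Build()
}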
pkg/controller/clusterset/node.go (outdated)

var setNetworkPolicy *networkingv1.NetworkPolicy

for _, cs := range clusterSetList.Items {
    if cs.Spec.DisableNetworkPolicy {
        continue
    }

    clusterSetLog := log.WithValues("clusterset", cs.Namespace+"/"+cs.Name)
Just a nit here: we can override the log variable instead of creating a new one:
log = log.WithValues(....
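That is, something like the following sketch. One thing that may be worth double-checking: logr's WithValues appends key-value pairs, so reassigning log inside the loop means later iterations also carry the "clusterset" pairs added by earlier ones.

for _, cs := range clusterSetList.Items {
    if cs.Spec.DisableNetworkPolicy {
        continue
    }

    // Reuse the existing variable instead of introducing clusterSetLog.
    log = log.WithValues("clusterset", cs.Namespace+"/"+cs.Name)
}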
Yep! I thought about this and I wasn't sure. Happy to change it :)
Force-pushed 3b99994 to c9442df
Trying to follow some best practices from this article, I tried to clean up the reconciliation logic a bit, applying the following pattern:
From the snippet I've seen, we can use the logger set in controller-runtime (ctrl.LoggerFrom(ctx)), and I moved the finalize/cleanup logic into a different file (cluster_finalize.go). All the logic was kept as before.

This was done to try to simplify the flow a bit before adding some fixes related to #170.
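The resulting shape of Reconcile, as I understand the pattern (a sketch, not the literal k3k code; the helper names come from this thread, and it assumes the reconciler embeds a client.Client):

func (c *ClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := ctrl.LoggerFrom(ctx)

    var cluster v1alpha1.Cluster
    if err := c.Client.Get(ctx, req.NamespacedName, &cluster); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // Object marked for deletion: run the cleanup path (cluster_finalize.go).
    if !cluster.DeletionTimestamp.IsZero() {
        log.Info("finalizing cluster")
        return ctrl.Result{}, c.finalizeCluster(ctx, &cluster)
    }

    // Normal path.
    log.Info("reconciling cluster")
    return ctrl.Result{}, c.reconcileCluster(ctx, &cluster)
}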