Add k8s-daemonset helm-chart - initial cut #134
Conversation
Looks like a good start! I understand this is still WIP, so I focused on the bigger picture in my review. Here are a few high level issues I spotted in the current bundle:
- AFAIK, `Service` and `Ingress` are atypical resources for a `DaemonSet`. I think we can omit them, at least in the initial implementation.
  - Typically, `DaemonSet`s are used for deploying node-specific managers. What this means is that in most use cases, it is important for Pods on a node to stream to the instance of the `DaemonSet` on that same node. For example, the nodelocal-dns cache is deployed as a `DaemonSet`, and all Pods on a node want to talk to the instance on that node.
  - On the flip side, `Service` and `Ingress` provide a common routing endpoint for accessing any Pod in the target set. So using a `Service` or `Ingress` endpoint to access a `DaemonSet` means you could end up routing to another instance of the Pod on a different node.
- You can also omit `ServiceMonitor` and the Google `ManagedCertificate`, since they relate to `Service` and `Ingress`; once those are removed, these are moot.
- Similarly, `PodDisruptionBudget` doesn't quite make sense for a `DaemonSet` either, and can be removed. `PodDisruptionBudget` is used to ensure a minimum number of Pods for the app is up at a time during voluntary disruptions (e.g., rotating worker nodes). This concept doesn't make sense for a `DaemonSet`, because there is typically one replica of the Pod per node, leading to a situation where the disruption budget is really a budget of the number of nodes that can be up at a time. But Kubernetes can't manage node disruptions - it can only manage Pods, so the scope is incorrect.
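To make the per-node pattern concrete, here is a minimal `DaemonSet` template sketch (the resource names and values keys are illustrative, not taken from this chart): one Pod per node, with no `Service` or `Ingress` in front of it.

```yaml
# Illustrative sketch only - not the chart's actual template.
# A DaemonSet schedules exactly one replica per eligible node, so
# Pods on a node talk to the local instance directly rather than
# through a cluster-wide routing endpoint.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: {{ .Release.Name }}-daemonset
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: agent
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```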
NOTE: I focused on the chart code and ignored the README in this initial review. It looks like the README is still a WIP, so will hold off on going through it until you've got a chance to groom it a little.
```yaml
- name: SYSLOG_HOST
  value: "sysloghost"
- name: SYSLOG_PORT
  value: "514"
- name: SYSLOG_PROTOCOL
  value: "udp"
```
These appear to be specific to the fluentd example, and should be managed in the `values.yaml` for that example rather than hardcoded here.
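As a sketch of what that could look like (the key names below are assumptions, not taken from the chart), the example's `values.yaml` would hold the syslog settings:

```yaml
# Illustrative values.yaml fragment for the fluentd example.
# Key names are hypothetical; the chart may structure these differently.
syslog:
  host: sysloghost
  port: 514
  protocol: udp
```

The template would then reference them with something like `value: {{ .Values.syslog.host | quote }}`, so each example can override the syslog endpoint without editing the template.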
@yorinasub17 @rhoboat - I've just refactored and simplified the chart based on your feedback. The README.md has been updated accordingly, and I've tested the example chart with minikube. Please let me know if you have any further input.
It's a good start! I was going to leave very similar comments to Yori's, so I will refrain from restating them.
Description
Added initial helm-chart for k8s-daemonset.
Documentation
TODOs
Please ensure all of these TODOs are completed before asking for a review.
`feature/new-vpc-endpoints-955` or `bug/missing-count-param-434`.
Related Issues
#23