
Add a description to the kube-state-metrics sharding example #1091

Merged · 1 commit · Jan 13, 2025
@@ -2,7 +2,34 @@
(NOTE: Do not edit README.md directly. It is a generated file!)
( To make changes, please modify values.yaml or description.txt and run `make examples`)
-->
# Sharded kube-state-metrics

This example demonstrates how to [shard kube-state-metrics](https://github.com/kubernetes/kube-state-metrics#scaling-kube-state-metrics)
to improve scalability. This is useful when your Kubernetes cluster has a large number of objects and kube-state-metrics
is struggling to keep up. The symptoms of this might be:

* It takes longer than 60 seconds to scrape kube-state-metrics, which is longer than the scrape interval.

* The sheer amount of metric data coming from kube-state-metrics is causing Alloy to spike its required resources.

* kube-state-metrics itself might not be able to keep up with the number of objects in the cluster.


By increasing the number of replicas and enabling [automatic sharding](https://github.com/kubernetes/kube-state-metrics#automated-sharding),
kube-state-metrics will automatically distribute the resources on the cluster across the shards.
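As a minimal sketch, enabling this in the chart's `values.yaml` might look like the following. The key names (`replicas`, `autosharding.enabled`) are assumed from the upstream kube-state-metrics Helm chart that this chart wraps; verify them against the values reference for your chart version.

```yaml
# values.yaml (sketch, key names assumed from the upstream kube-state-metrics chart)
kube-state-metrics:
  # Number of shards; each pod serves a subset of the cluster's objects.
  replicas: 3
  autosharding:
    # Each pod discovers its shard index from its position in the StatefulSet.
    enabled: true
```

Each shard then exposes only its own slice of the metrics, so every replica must be scraped for a complete picture of the cluster.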

## Changing replicas

Whenever the number of replicas changes, there are two scenarios to consider. Your requirements will dictate which one
is best for you.

### RollingUpdate

If the deployment strategy is set to `RollingUpdate`, it is possible for two kube-state-metrics instances to run at the same time for a short period while an update rolls out. This means there shouldn't be a gap in metrics, but it could lead to duplicate metrics for a short period.


### Recreate

However, if the deployment strategy is set to `Recreate`, the old kube-state-metrics pod is terminated before the new
one is started. This means that there will be a gap in metrics while the new pod is starting.
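Choosing `Recreate` might look like the fragment below. This is a sketch only: the `updateStrategy` key is an assumption based on the upstream kube-state-metrics chart, and its exact name and shape may differ in your chart version, so check the chart's values reference before using it.

```yaml
# values.yaml (sketch; `updateStrategy` is a hypothetical key, verify against your chart)
kube-state-metrics:
  # Terminate the old pod before starting the new one: no duplicate
  # metrics, at the cost of a brief gap while the new pod starts.
  updateStrategy: Recreate
```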

## Values
