Describe the bug
After following these instructions to enable FIPS mode on my EKS cluster in eu-west-1, every service in the cluster works as expected except the kube-prometheus-stack-operator pod, which constantly restarts. If I disable FIPS mode, it works as expected.
What's your helm version?
v3.16.2
What's your kubectl version?
v1.31.1
Which chart?
Kube Prometheus Stack
What's the chart version?
v67.9.0
What happened?
Building a cluster from scratch in Terraform, the apply stage gets stuck creating the kube-prometheus-stack release. Once it times out and I inspect the cluster, the kube-prometheus-stack-operator pod is constantly restarting.
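For reference, this is how I confirm the crash loop once the apply times out (standard kubectl; the `monitoring` namespace and the operator's `app` label are assumptions from my install):

```shell
# Confirm the operator pod is crash-looping (namespace and label assumed).
kubectl -n monitoring get pods -l app=kube-prometheus-stack-operator

# Logs from the previous (crashed) container instance.
kubectl -n monitoring logs --previous deploy/kube-prometheus-stack-operator
```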
What you expected to happen?
I'd expect the pod to come up successfully.
How to reproduce it?
Enable FIPS mode with Bottlerocket nodes following these instructions.
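Concretely, the FIPS-enabled Bottlerocket AMI is selected roughly like this (a hedged sketch: the `aws-k8s-1.31-fips` SSM parameter path is an assumption based on Bottlerocket's usual `/aws/service/bottlerocket/<variant>/<arch>/latest/image_id` naming; arm64 matches the `platform=linux/arm64` in the logs below):

```shell
# Look up the Bottlerocket FIPS AMI for Kubernetes 1.31 via SSM.
# NOTE: the "-fips" variant path is an assumption, not confirmed.
aws ssm get-parameter \
  --region eu-west-1 \
  --name /aws/service/bottlerocket/aws-k8s-1.31-fips/arm64/latest/image_id \
  --query Parameter.Value --output text
```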
Enter the changed values of values.yaml?
No response
Enter the command that you execute that is failing/misfunctioning.
I'm running this through Terraform.
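The Terraform helm_release boils down to roughly this helm command (a sketch; the release name and namespace are assumptions from my setup):

```shell
helm upgrade --install kube-prometheus-stack \
  prometheus-community/kube-prometheus-stack \
  --version 67.9.0 \
  --namespace monitoring --create-namespace
```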
Anything else we need to know?
The pod logs show the following:

```json
{"ts":"2025-01-21T14:19:47.817259022Z","level":"info","caller":{"function":"main.run","file":"/workspace/cmd/operator/main.go","line":214},"msg":"Starting Prometheus Operator","version":"(version=0.79.2, branch=, revision=1d2dca5)","build_context":"(go=go1.23.4, platform=linux/arm64, user=, date=20241218-17:22:57, tags=unknown)","feature_gates":"PrometheusAgentDaemonSet=false"}
{"ts":"2025-01-21T14:19:47.817574355Z","level":"info","caller":{"function":"github.com/prometheus-operator/prometheus-operator/internal/goruntime.SetMaxProcs.func1","file":"/workspace/internal/goruntime/cpu.go","line":27},"msg":"Updating GOMAXPROCS=1: determined from CPU quota"}
{"ts":"2025-01-21T14:19:47.817656727Z","level":"info","caller":{"function":"main.run","file":"/workspace/cmd/operator/main.go","line":227},"msg":"namespaces filtering configuration ","config":"{allow_list=\"\",deny_list=\"\",prometheus_allow_list=\"\",alertmanager_allow_list=\"\",alertmanagerconfig_allow_list=\"\",thanosruler_allow_list=\"\"}"}
{"ts":"2025-01-21T14:19:47.827052298Z","level":"info","caller":{"function":"main.run","file":"/workspace/cmd/operator/main.go","line":268},"msg":"connection established","kubernetes_version":"1.31.4-eks-2d5f260"}
{"ts":"2025-01-21T14:19:47.83824024Z","level":"info","caller":{"function":"main.run","file":"/workspace/cmd/operator/main.go","line":353},"msg":"Kubernetes API capabilities","endpointslices":true}
{"ts":"2025-01-21T14:19:47.927613389Z","level":"warn","caller":{"function":"github.com/prometheus-operator/prometheus-operator/pkg/server.(*TLSConfig).Convert","file":"/workspace/pkg/server/server.go","line":164},"msg":"server TLS client verification disabled","client_ca_file":"/etc/tls/private/tls-ca.crt","err":"stat /etc/tls/private/tls-ca.crt: no such file or directory"}
{"ts":"2025-01-21T14:19:47.928419316Z","level":"info","caller":{"function":"github.com/prometheus-operator/prometheus-operator/pkg/server.(*Server).Serve","file":"/workspace/pkg/server/server.go","line":301},"msg":"starting secure server","address":"[::]:10250","http2":false}
{"ts":"2025-01-21T14:19:47.928499563Z","level":"info","caller":{"function":"k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicServingCertificateController).Run","file":"/go/pkg/mod/k8s.io/[email protected]/pkg/server/dynamiccertificates/tlsconfig.go","line":243},"msg":"Starting DynamicServingCertificateController"}
{"ts":"2025-01-21T14:19:47.928724573Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":313},"msg":"Waiting for caches to sync for prometheus"}
{"ts":"2025-01-21T14:19:47.928788844Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":313},"msg":"Waiting for caches to sync for prometheusagent"}
{"ts":"2025-01-21T14:19:47.928994901Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":313},"msg":"Waiting for caches to sync for alertmanager"}
{"ts":"2025-01-21T14:19:47.929201326Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":313},"msg":"Waiting for caches to sync for thanos"}
{"ts":"2025-01-21T14:19:47.929238126Z","level":"info","caller":{"function":"github.com/prometheus-operator/prometheus-operator/pkg/kubelet.(*Controller).Run","file":"/workspace/pkg/kubelet/controller.go","line":207},"msg":"Starting controller","component":"kubelet_endpoints","kubelet_object":"kube-system/kube-prometheus-stack-kubelet"}
{"ts":"2025-01-21T14:19:47.929891151Z","level":"info","caller":{"function":"k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicCertKeyPairContent).Run","file":"/go/pkg/mod/k8s.io/[email protected]/pkg/server/dynamiccertificates/dynamic_serving_content.go","line":135},"msg":"Starting controller","name":"servingCert::/cert/tls.crt::/cert/tls.key"}
{"ts":"2025-01-21T14:19:48.028858781Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":320},"msg":"Caches are synced for prometheusagent"}
{"ts":"2025-01-21T14:19:48.028928427Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":313},"msg":"Waiting for caches to sync for prometheusagent"}
{"ts":"2025-01-21T14:19:48.028946733Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":320},"msg":"Caches are synced for prometheusagent"}
{"ts":"2025-01-21T14:19:48.028995586Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":313},"msg":"Waiting for caches to sync for prometheusagent"}
{"ts":"2025-01-21T14:19:48.029012973Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":320},"msg":"Caches are synced for prometheusagent"}
{"ts":"2025-01-21T14:19:48.029028079Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":313},"msg":"Waiting for caches to sync for prometheusagent"}
{"ts":"2025-01-21T14:19:48.029039082Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":320},"msg":"Caches are synced for prometheusagent"}
{"ts":"2025-01-21T14:19:48.02906071Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":313},"msg":"Waiting for caches to sync for prometheusagent"}
{"ts":"2025-01-21T14:19:48.029084899Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":320},"msg":"Caches are synced for prometheusagent"}
{"ts":"2025-01-21T14:19:48.029100702Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":313},"msg":"Waiting for caches to sync for prometheusagent"}
{"ts":"2025-01-21T14:19:48.029113183Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":320},"msg":"Caches are synced for prometheusagent"}
{"ts":"2025-01-21T14:19:48.029128477Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":313},"msg":"Waiting for caches to sync for prometheusagent"}
{"ts":"2025-01-21T14:19:48.029146307Z","level":"info","caller":{"function":"k8s.io/client-go/tools/cache.WaitForNamedCacheSync","file":"/go/pkg/mod/k8s.io/[email protected]/tools/cache/shared_informer.go","line":320},"msg":"Caches are synced for prometheusagent"}