Replies: 2 comments
-
You can check which etcd keys have been written to most frequently by running a SQL query against the database:

```sql
SELECT COUNT(*), name FROM kine GROUP BY name ORDER BY COUNT(*) DESC LIMIT 25;
```

You shouldn't see the count for any of these being more than a few tens, perhaps into the low hundreds. Are you seeing 5 MB/sec in disk IO, or just network throughput against the server?
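One way to confirm whether those writes are actually reaching the disk (rather than just being network throughput) is to sample the sectors-written counter in `/proc/diskstats` on the Linux host running MariaDB. A minimal sketch, assuming a Linux host; the device name `sda` and the 10-second interval are placeholders:

```python
import time

def sectors_written(diskstats_text, dev):
    """Return the sectors-written counter (field 10 of /proc/diskstats)
    for a block device. The kernel counts these in 512-byte units
    regardless of the device's physical sector size."""
    for line in diskstats_text.splitlines():
        parts = line.split()
        if len(parts) > 9 and parts[2] == dev:
            return int(parts[9])
    return None

def write_rate_mb_s(dev="sda", interval=10):
    """Sample /proc/diskstats twice and return MB/s written in between."""
    def sample():
        with open("/proc/diskstats") as f:
            return sectors_written(f.read(), dev)
    before = sample()
    time.sleep(interval)
    after = sample()
    return (after - before) * 512 / interval / 1e6

# Usage (on the database host):
# print(f"{write_rate_mb_s('sda'):.2f} MB/s written")
```

Tools like `iostat` or `iotop` report the same counters; this just shows where the numbers come from.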
-
I seem to have quite a few things in the few hundreds, so I wonder if something is unhappy. Some of those items (e.g. Longhorn) have been removed. It's around 3-5 MB/sec in disk IO, reported by the NAS itself running MariaDB. I may re-install the cluster from scratch to see if that helps; perhaps my fiddling has caused something strange to happen. The kine table has 4169 rows. Unsure if that's high or low. I don't feel like I'm running too many workloads.

EDIT: Completely reset the cluster against a new database, with only CoreDNS, metrics, and Argo CD installed.
So I wonder if this is just 'normal' for kine behind MariaDB?
-
From
k3s-io/kine#105 (comment)
I have a 2 node, 2 worker MariaDB-backed cluster. According to https://rancher.com/docs/k3s/latest/en/installation/installation-requirements/resource-profiling/ it looks like it should see around 500 KiB/sec.
I am experiencing 5 MB/sec. I know this setup is bigger than the single node in those docs, but it seems like a large increase.
Is there any way I can inspect or debug this? I'm worried that this might hurt the lifespan of the SSD backing the database.
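On the lifespan worry, the arithmetic is easy to sanity-check: 5 MB/sec sustained works out to roughly 158 TB written per year. A back-of-the-envelope sketch (the 600 TBW endurance rating is a placeholder; check the drive's datasheet for the real figure):

```python
# Rough SSD wear estimate from a sustained write rate.
write_rate_mb_s = 5                       # observed disk write rate
seconds_per_year = 365 * 24 * 3600
tb_written_per_year = write_rate_mb_s * 1e6 * seconds_per_year / 1e12
print(f"{tb_written_per_year:.0f} TB written per year")   # ~158 TB/year

endurance_tbw = 600                        # placeholder TBW rating
years_to_rated_endurance = endurance_tbw / tb_written_per_year
print(f"~{years_to_rated_endurance:.1f} years to rated endurance")
```

So at 5 MB/sec the concern is reasonable, while the documented ~500 KiB/sec would give roughly ten times the lifespan.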