Summary
Kubecost, deployed from the Google Cloud Marketplace on a GKE cluster, grants excessive authority to the Service Accounts named "kubecost-1-cost-analyzer-serviceaccount-name-dff5", "kubecost-1-cost-analyzer-prometheus-serviceaccounts-server-name-e82d", and "kubecost-1-deployer-sa". Moreover, these Service Accounts are mounted into pods, which makes it possible for an attacker to escalate privileges to cluster administrator.
Detailed Analysis
We deployed Kubecost from the marketplace of Google's GKE cluster with the default configuration.
The ClusterRole named "default:kubecost-1:cost-analyzer.serviceAccount.name-r0" grants the "*" verb on pods, deployments, replicationcontrollers, and nodes. This ClusterRole is bound to the Service Account named "kubecost-1-cost-analyzer-serviceaccount-name-dff5", which is mounted into the pod named "kubecost-1-cost-analyzer-789fc48778-xgpkg".
The ClusterRole named "default:kubecost-1:cost-analyzer.prometheus.serviceAccounts.server.name-r0" grants the "*" verb on pods, jobs, deployments, statefulsets, replicationcontrollers, and nodes. This ClusterRole is bound to the Service Account named "kubecost-1-cost-analyzer-prometheus-serviceaccounts-server-name-e82d", which is mounted into the pod named "kubecost-1-prometheus-server-6f9d5c9989-l972j".
The ClusterRole named "default:kubecost-1:deployerServiceAccount-r0" grants the "*" verb on clusterroles and clusterrolebindings. This ClusterRole is bound to the Service Account named "kubecost-1-deployer-sa", which is mounted into the pod named "kubecost-1-deployer-kvsqj".
Attack Strategy
If a malicious user controls a worker node that hosts one of the pods mentioned above, or steals one of the SA tokens mentioned above, they can raise their permissions to administrator level and control the whole cluster.
For example:
With the "*" verb of "clusterroles and clusterrolebindings", attacker can elevate privileges by creating a clusterrolebinding resource and binding cluster-admin to their own Service Account.
With the "*" verb of "pods, jobs, deployments, statefulsets, replicationcontrollers", attacker can elevate privileges by creating a pod to mount and steal any Service Account he/she want.
With the "*" verb of nodes, attacker can hijack other components and steal token by adding a "NoExecute" taint to other nodes.
Mitigation Discussion
Developers could use a RoleBinding instead of a ClusterRoleBinding to restrict the permissions to a single namespace.
Developers could define precise permissions for workload resources (pods, deployments, jobs, statefulsets, replicationcontrollers) rather than using the wildcard verb ("*"), as sketched below.
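A minimal sketch combining both suggestions, assuming the cost-analyzer only needs read access to workloads in its own namespace; the names, namespace, and verb set are assumptions that would need to be confirmed against Kubecost's actual API usage, and nodes are cluster-scoped, so read access to them would still require a separate, narrowly scoped ClusterRole:

    # Hypothetical least-privilege replacement: a namespaced Role with
    # explicit verbs instead of the wildcard.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubecost-cost-analyzer-readonly       # hypothetical name
      namespace: default                          # namespace assumed
    rules:
    - apiGroups: ["", "apps"]
      resources: ["pods", "deployments", "replicationcontrollers"]
      verbs: ["get", "list", "watch"]             # explicit verbs, no "*"
    ---
    # Bind the namespaced Role (instead of a ClusterRole) to the existing SA.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubecost-cost-analyzer-readonly-binding
      namespace: default
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubecost-cost-analyzer-readonly
    subjects:
    - kind: ServiceAccount
      name: kubecost-1-cost-analyzer-serviceaccount-name-dff5
      namespace: default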
The "kubecost-1-deployer" appears to be used for initialization, and developers can delete resources such as the corresponding pod or Service Account after they are no longer needed.
A few questions
Is this a real issue in Kubecost?
If it is, can Kubecost mitigate the risks by following the suggestions in the "Mitigation Discussion" section above?
If it is, does Kubecost plan to fix it?
Reporter list
Xingyu Liu

Looking forward to your reply.

Regards,
Xingyu Liu