Multi-tenancy support #6175
The definition of "multi-tenancy" is actually a bit confusing. Some users use the term to mean complete isolation/separation of resources, while others, including the Knative community, use it to mean sharing resources while serving multiple tenants. I think we first need to address this.
A similar discussion is happening for (1) at Serving here: knative/serving#12533.
They cannot do that if the dataplane is handling resources from multiple tenants.
Users are not sure whether they need complete separation of dataplane pods or whether they are OK with some sub-level separation (like separate threads).
One option might be to provide some abstraction on top of Istio, where we define things like:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: blah
  namespace: knative-eventing
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["default", "knative-eventing"]
    to:
    - operation:
        methods: ["POST"]
```

For the different components, we could that way ensure that only a given set of namespaces can send requests to them.
Here is an issue to document the Istio case: knative/docs#4823
Tenants could be users. Another issue for documentation: knative/docs#4824
This issue is stale because it has been open for 90 days with no activity.
/triage accepted
Another option we're exploring in Serving for making NetworkPolicy work is to use different destination ports on the same Pod. You can then use L4 (TCP) NetworkPolicy to control access to specific ports on the Broker. Note that this still doesn't provide any type of authn/authz about which identities can send which types of events, but it at least ensures that entities not in the same Namespace aren't authorized to send to that endpoint. Kubernetes Services can point to a specific target port on the backing Pods.
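As a sketch of this approach (the names, labels, and port number below are hypothetical, not from the actual Broker implementation), an L4 NetworkPolicy could restrict a tenant-specific port so that only Pods in the tenant's own Namespace may connect:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-broker-ingress   # hypothetical name
  namespace: tenant-a             # hypothetical tenant namespace
spec:
  podSelector:
    matchLabels:
      app: broker-ingress         # hypothetical label on the Broker ingress Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Only allow traffic originating from the tenant's own namespace.
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: tenant-a
    ports:
    # L4 rule: only the tenant-specific destination port is reachable.
    - protocol: TCP
      port: 8080                  # hypothetical per-tenant port
```

Since NetworkPolicy only operates at L3/L4, this gates which Pods can reach the port at all; any finer-grained control over event types or identities would still need something like the Istio AuthorizationPolicy approach above.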
We're going to have some kind of traffic limiting instead with the internal TLS work we're doing. @creydr can you link any related issues here?
Problem
We have seen requests like these from users:
These are the things we have heard from our users over the past couple of months.
There are additional requests like these but I would like to skip them for now:
Persona:
Which persona is this feature for?
Exit Criteria
A measurable (binary) test that would indicate that the problem has been resolved.
Time Estimate (optional):
How many developer-days do you think this may take to resolve?
Additional context (optional)
Add any other context about the feature request here.