Complex-filters recap #2182
Here is a first recap of the relevant points, covering how they are currently implemented in v2.11. EDIT: we updated this comment after #2187, #2190 and #2191. A review would be most welcome - cc @jluethi

Which objects have filters
Valid attribute filters / typical cases

Standard valid examples:
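For illustration, a few hypothetical examples of the shape under discussion, assuming the v2.11 convention mentioned later in this thread (a dict mapping each attribute name to a non-empty list of allowed scalar values); the attribute names here are made up, not taken from an actual dataset:

```python
# Illustrative valid attribute filters (attribute names are hypothetical):
# each key maps to a non-empty list of allowed scalar values.
valid_attribute_filters = [
    {"plate": ["plate_1.zarr"]},                   # single allowed value
    {"plate": ["plate_1.zarr", "plate_2.zarr"]},   # multiple allowed values
    {"key1": [1], "key2": ["a", "b"]},             # several attributes at once
]
```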
Valid attribute filters / unusual cases (TBD)

There are two edge cases which are currently considered valid. First:
This one is formally correct but meaningless, and we could turn it into an invalid case. Second:
This is due to:
The latter reason no longer holds, and I guess the first one is not very relevant in real-life usage. Thus I'd be in favor of deprecating this option. Either way, we'll have to review this (because it's currently not fully backwards compatible in the API schemas) and update the data-migration script.

Invalid attribute filters

The following attribute filters are currently considered invalid:
Net effect of running a job on dataset filters
Which filters are consumed within the runner
Which filters are produced/updated within the runner

After a task has run, there is a single possible source for updates to the dataset.
We edited the comment above (#2182 (comment)) as per the latest discussions and updates. @jluethi: a review would be useful (this can go together with actual tests on the staging server). A minor point which is still to be confirmed is the one in the "Valid attribute filters / unusual cases" section. (cc @ychiucco)
@tcompa Agreed to all! Looks good to me. I'll review additional type filter questions on that issue separately
I have no strong opinion on this. In general, we should not allow people to actually set such filters in the interface.
For job attribute filters, we can remove this without an issue. Given that the only source of defaults for job attribute filters is the dataset attribute filters, I don't see a use for keeping support for None = "unset the filter" either.
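As an illustration of the None = "unset the filter" behavior being dropped here, a minimal sketch, assuming job attribute filters default to the dataset attribute filters and a None value previously removed a key from the effective filters; the function and its name are illustrative, not the actual fractal-server implementation:

```python
# Sketch of the legacy merge semantics discussed above (illustrative only).
def merge_filters(dataset_filters: dict, job_filters: dict) -> dict:
    effective = dict(dataset_filters)
    for key, value in job_filters.items():
        if value is None:
            # Legacy "None = unset the filter" behavior, now being deprecated.
            effective.pop(key, None)
        else:
            effective[key] = value
    return effective

# merge_filters({"key1": [1]}, {"key1": None})   -> {}
# merge_filters({"key1": [1]}, {"key2": ["a"]})  -> {"key1": [1], "key2": ["a"]}
```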
That was the old data structure, right? As long as we transition all existing dataset attribute filters that may have been {"key1": 1} to {"key1": [1]}, I'm fine with that. I don't think they will have been used very often.
Yes.
Yes, this is part of the data-migration script for v2.11.0.
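For reference, a minimal sketch of that migration step, assuming it only needs to wrap scalar attribute-filter values (old {"key1": 1} shape) into one-element lists (new {"key1": [1]} shape); the function name is illustrative and not taken from the actual v2.11.0 migration script:

```python
# Wrap any scalar filter value into a one-element list; leave lists untouched.
def migrate_attribute_filters(filters: dict) -> dict:
    return {
        key: value if isinstance(value, list) else [value]
        for key, value in filters.items()
    }

# migrate_attribute_filters({"key1": 1, "key2": ["a", "b"]})
# -> {"key1": [1], "key2": ["a", "b"]}
```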
This was a recap issue for 2.11 work. Closing.
This placeholder issue will collect all remaining open questions or TBD from: