
Reporting options which blend data across organizations for a given shared schema #84

Open · Ardaglash opened this issue on Apr 27, 2022 · 2 comments

Labels: Complexity - 3 Large (estimated to require a significant amount of dev effort, potentially broken into sub-issues) · enhancement (new feature or request) · Importance - 2 Moderate (an issue that makes the app more difficult to use)

Comments

@Ardaglash (Collaborator)

Reporting which blends data across organizations for a given “core” schema

@Ardaglash added the enhancement label on Apr 27, 2022
@MatPersson

See also comment in #91

@Ardaglash (Collaborator, Author)

Bringing your comment from #91 across to here...

I am not familiar enough with the app to know if this is a way to assess inter-rater reliability or just another statistic. My concern is that, as scouting orgs are added, several scouts may score the same robot in the same match. How is variance in scouting results managed in such cases? This might be a new enhancement.

Teams will still have all the "our-org-data-only" reports they have today. This is to add the option to also generate reports combining data from other organizations who have agreed to use the same data schema.
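
As a rough sketch of what such a blended report query could look like (not the app's actual schema — the database name, the matchscouting collection, and fields like org_key, event_key, team_key, and data.totalPoints are placeholders for illustration), using the MongoDB Node driver:

```ts
import { MongoClient } from 'mongodb';

// Minimal sketch, assuming placeholder collection/field names; the real
// schema may differ. The only change from a single-org report is matching
// on a *list* of org keys instead of one.
async function crossOrgTeamAverages(
  client: MongoClient,
  eventKey: string,
  orgKeys: string[], // orgs that have agreed to share the same data schema
) {
  const db = client.db('scouting');
  return db
    .collection('matchscouting')
    .aggregate([
      // Pull in every participating org's entries for this event,
      // instead of filtering to a single org as today's reports do.
      { $match: { event_key: eventKey, org_key: { $in: orgKeys } } },
      // Same per-team metrics as the single-org reports (average,
      // standard deviation), just computed over the blended data set.
      {
        $group: {
          _id: '$team_key',
          avgTotalPoints: { $avg: '$data.totalPoints' },
          stdDevTotalPoints: { $stdDevSamp: '$data.totalPoints' },
          sampleCount: { $sum: 1 },
        },
      },
      { $sort: { avgTotalPoints: -1 } },
    ])
    .toArray();
}
```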

In the case of scouting alliances where multiple teams attend the same event, we would expect multiple scouts from different teams to scout the same robot. Ideally, combining the data would let each team's values act as a sanity check on the others', but it does depend on a level of trust within the scouting alliance that everyone's scouts are making a best effort to do a reasonably good job (and that if a scouting lead sees a glitch, they will go fix it or remove it, etc.).

Mathematically & in terms of database queries, the most straightforward way is just extend the DB-based metrics we're using today - averages, standard deviations, and so forth. One DB-based metric which will need some extra attention, though, will be the exponential moving average calculations, as those can't just be bolted on to a dataset where you name more than 1 data point per time slice... so my first thought would be to pass the values through a stage where you do simple averages grouped by timestamp, then do EMA calculations on the resulting 1-data-point-per-timestamp set.

@JL102 added the Complexity - 3 Large and Importance - 2 Moderate labels on Mar 15, 2024
No branches or pull requests · 3 participants