I feel it could be similar to checkcif, i.e., highlight potential issues for an expert reviewer (and be required by journals).
I think one could check off many of the items on the JOSS reviewer checklist automatically.
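A minimal sketch of what such automated checks could look like, assuming a local clone of the repository; the file and directory names below are common conventions I'm assuming, not the actual JOSS criteria:

```python
"""Rough heuristics for a few JOSS-style checklist items (not an official check)."""
from pathlib import Path


def check_repository(repo: Path) -> dict:
    """Return a checklist-item -> passed mapping for a handful of items."""
    files = {p.name.lower() for p in repo.iterdir() if p.is_file()}
    dirs = {p.name.lower() for p in repo.iterdir() if p.is_dir()}
    return {
        # plain-text license file present
        "license": any(name.startswith("license") for name in files),
        # basic documentation entry point / statement of need
        "readme": any(name.startswith("readme") for name in files),
        # automated tests -- only checks that a test directory exists
        "tests": bool({"tests", "test"} & dirs),
        # community guidelines for contributors
        "contributing": any(name.startswith("contributing") for name in files),
        # packaging metadata (installation instructions are harder to verify)
        "packaging": bool({"setup.py", "setup.cfg", "pyproject.toml"} & files),
    }


if __name__ == "__main__":
    import sys

    for item, passed in check_repository(Path(sys.argv[1])).items():
        print(f"[{'x' if passed else ' '}] {item}")
```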
Additionally, one might try:
As you suggested, one might also want to add subject-specific checks. For instance, for ML, a quite domain-specific check would be whether the train and test sets are indeed independent.
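A rough sketch of that last check, assuming the splits are available as pandas DataFrames; it only catches verbatim row overlap, while near-duplicates or group leakage would need domain-specific logic:

```python
"""Sketch of a train/test independence check (assumes pandas DataFrames)."""
import pandas as pd


def check_split_independence(train: pd.DataFrame, test: pd.DataFrame) -> dict:
    """Flag exact duplicate rows shared between the train and test splits."""
    # inner merge on all shared columns finds rows that appear in both splits
    overlap = train.drop_duplicates().merge(test.drop_duplicates(), how="inner")
    return {
        "n_train": len(train),
        "n_test": len(test),
        "n_overlapping_rows": len(overlap),
        "independent": overlap.empty,
    }


if __name__ == "__main__":
    train = pd.DataFrame({"x": [1, 2, 3], "y": [0, 1, 0]})
    test = pd.DataFrame({"x": [3, 4], "y": [0, 1]})
    print(check_split_independence(train, test))
```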
Here's a tool that might already address some of the checklist items: https://github.com/fair-software/howfairis
I saw that. It doesn’t do what I want, and I don’t like how it does it. With this kind of stuff I would much rather rebuild it from scratch.