Update PR template #37
Would it help a reviewer if the justification for not running a test was under a subheader?
My read is no, just in the interest of keeping things streamlined.
If I did run the tests, the "test skip justification" header is just one more chunk of text that I need to remove.
If I didn't run the tests, the existence of that header would remove a little bit of the social pressure to go back and do so, because clearly other people skip the tests too.
The fact that the CHANGELOG checkbox says "I have added the changes included in this pull request to the CHANGELOG.md" without any weasel words ("I have added the major changes", "I have added the important changes", etc.) has made me go back and update it at the last second. That's the tone I'm going for here with test running.

Totally a debatable topic, though! Thoughts?
I also prefer the current single-heading format. In my mind, it's really streamlined for review because you can look in one place and see either the tests or the absence of tests with justification. They're both the same step of "checking tests."
I think the note about "How the results demonstrate the changes in this PR" is great to include, Nick. It's good to know why any additional cases outside of the standard NFTest checks were important.

Have you considered adding this justification as one of the explicit fields below 'Additional Case 1'?
I couldn't think of a good formalization of this that would fit into the existing bulleted list - every prompt I could come up with was just repeating this comment again. Do you have any suggestions?
Yeah, it's a bit tough to pin down. In general, NFTest would (or should) catch any changes that break the pipeline or corrupt the output, so additional tests would have to be for changes that don't change the final output but change how the final output is produced, right? Or am I missing a scenario where additional tests would be used?

Candidate field names:

- 'Why' -- succinct, but maybe too undirected.
- 'Necessity of test' -- straightforward, but wordy.
- 'Signal tested' -- if it's not a change to the output, what was the signal used to see whether the test passed?

Am I on the right track here?
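For concreteness, here is one way a candidate field could slot into the existing bulleted list under the case heading. This is just a hypothetical sketch -- the heading style and the other bullet shown are assumptions about the current template, and the field wording is illustrative only:

```markdown
### Additional Case 1

- How the results demonstrate the changes in this PR:
- Signal tested (if the final output is unchanged, what was checked to
  confirm the test passed?):
```

The parenthetical prompt is doing the work here: it only asks for extra detail in the non-obvious case, so it shouldn't just repeat the comment thread verbatim for every test.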