Update PR template #37
@@ -1,37 +1,43 @@
> This is a template for UCLA-CDS pipeline developers to create a GitHub pull request template. Things should be adjusted for individual pipelines, including:
> 1. additional checklist items specific to the pipeline
> 2. a description of how testing is expected to be done
> 3. a template list or table for testing results
> 4. additional notes wrapped in \<!--- ---\> (or \<!-- --\> for inline comments) that help PR submitters fill in the template
> 5. deleting this block of instructional text.

# Description
<!--- Briefly describe the changes included in this pull request and the paths to the test cases below
!--- starting with 'Closes #...' if appropriate --->
<!-- Briefly describe the changes included in this pull request, starting with 'Closes #...' if appropriate. -->

### Closes #...

## Testing Results

- Case 1
    - sample: <!-- e.g. A-mini S2.T-1, A-mini S2.T-n1 -->
    - input csv: <!-- path/to/input.csv -->
    - config: <!-- path/to/xxx.config -->
    - output: <!-- path/to/output -->
- Case 2
    - sample: <!-- e.g. A-mini S2.T-1, A-mini S2.T-n1 -->
    - input csv: <!-- path/to/input.csv -->
    - config: <!-- path/to/xxx.config -->
    - output: <!-- path/to/output -->
<!-- If you did not run NFTest, please justify _why_ you feel your changes
don't need to be tested. -->

- NFTest
    - log: /hot/software/pipeline/pipeline-recalibrate-BAM/Nextflow/development/unreleased/\<branchname\>/log-nftest-\<datestamp\>.log
    - cases: default set <!-- update this if you made any changes to nftest.yml or ran default disabled test cases explicitly -->

<!--
Include any non-NFTest test details here, using the "Additional Case"
template below as appropriate. Please make sure that a reviewer can
understand:
* How you tested the pipeline
* How others can test it (including paths to config and YAML files)
* How the results demonstrate the changes in this PR. Examples of this
  might include:
  * The run completed successfully, so nothing was broken
  * An output file changed (this might require two pipeline runs, before
    and after your changes, to demonstrate the difference)
-->

Inline review discussion on this testing section:

> **Comment:** Think the note about "How the results demonstrate the changes in this PR" is great to include, Nick. It's good to know why any additional cases outside of the standard NFTest checks were important. Have you considered adding this justification as one of the explicit fields below 'Additional Case 1'?
>
> **Reply:** I couldn't think of a good formalization of this that would fit into the existing bulleted list - every prompt I could come up with was just repeating this comment again. Do you have any suggestions?
>
> **Reply:** Yeah, it's a bit tough to pin down. In general NFTest would (should) catch any changes that would break the pipeline or corrupt the output, so additional tests would have to be for changes that don't change the final output but change how the final output is produced, right? Or am I missing a scenario where additional tests would be used? Candidate field names: am I on the right track here?
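
An editorial aside on the reviewer's point above that NFTest should catch changes which break the pipeline or corrupt its output: NFTest cases live in the repository's `nftest.yml`, and a case can assert that the files a run produces match known-good copies, which is what turns output corruption into a test failure. The fragment below is only a sketch of that idea; the field names (`name`, `nf_script`, `nf_config`, `asserts`, `method`) and all paths are assumptions, not a verbatim copy of the NFTest schema or of this pipeline's `nftest.yml`.

```yaml
# Hypothetical nftest.yml fragment. Field names and paths are assumptions,
# shown only to illustrate a case that asserts on pipeline output.
cases:
  - name: a_mini-standard              # one case from the "default set"
    message: Run the pipeline end-to-end on an A-mini sample
    nf_script: ./main.nf               # pipeline entry point
    nf_config: test/config/a_mini.config
    skip: false                        # a default-disabled case would set this to true
    verbose: true
    asserts:
      - actual: output/recalibrated.bam               # file produced by this run
        expect: /path/to/known_good/recalibrated.bam  # known-good copy to compare against
        method: md5                    # checksum comparison flags corrupted output
```

Changes that leave such asserted outputs byte-identical but alter how they are produced are exactly the situations where the "Additional Case" entries below become useful.
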
- Additional Case 1
    - sample: <!-- e.g. A-mini S2.T-1, A-mini S2.T-n1 -->
    - input csv: <!-- path/to/input.csv -->
    - config: <!-- path/to/xxx.config -->
    - output: <!-- path/to/output -->

# Checklist
<!--- Please read each of the following items and confirm by replacing the [ ] with a [X] --->
<!-- Please read each of the following items and confirm by replacing the [ ] with a [X] -->

- [ ] I have read the [code review guidelines](https://uclahs-cds.atlassian.net/wiki/spaces/BOUTROSLAB/pages/3187646/Code+Review+Guidelines) and the [code review best practice on GitHub check-list](https://uclahs-cds.atlassian.net/wiki/spaces/BOUTROSLAB/pages/3189956/Code+Review+Best+Practice+on+GitHub+-+Check+List).

- [ ] I have reviewed the [Nextflow pipeline standards](https://uclahs-cds.atlassian.net/wiki/spaces/BOUTROSLAB/pages/3193890/Nextflow+pipeline+standardization).

- [ ] The name of the branch is meaningful and well formatted following the [standards](https://uclahs-cds.atlassian.net/wiki/spaces/BOUTROSLAB/pages/3189956/Code+Review+Best+Practice+on+GitHub+-+Check+List), using \[AD_username (or 5 letters of AD if AD is too long)]-\[brief_description_of_branch].
- [ ] The name of the branch is meaningful and well formatted following the [standards](https://uclahs-cds.atlassian.net/wiki/spaces/BOUTROSLAB/pages/3189956/Code+Review+Best+Practice+on+GitHub+-+Check+List), using \[AD\_username (or 5 letters of AD if AD is too long)]-\[brief\_description\_of\_branch\].

- [ ] I have set up or verified the branch protection rule following the [github standards](https://uclahs-cds.atlassian.net/wiki/spaces/BOUTROSLAB/pages/3190380/GitHub+Standards#GitHubStandards-Branchprotectionrule) before opening this pull request.

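As a purely illustrative aside on the branch-name checklist item above: a branch following the \[AD_username]-\[brief_description_of_branch] pattern might look like the snippet below, where `jdoe` stands in for a hypothetical AD username.

```text
jdoe-update-pr-template
```
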
@@ -42,4 +48,4 @@ already, or do not wish to be listed. (*This acknowledgement is optional.*)

- [ ] I have updated the version number in the `metadata.yaml` and `manifest` block of the `nextflow.config` file following [semver](https://semver.org/), or the version number has already been updated. (*Leave it unchecked if you are unsure about the new version number and discuss it with the infrastructure team in this PR.*)

- [ ] I have tested the pipeline on at least one A-mini sample.
- [ ] I have tested the pipeline using NFTest, _or_ I have justified why I did not need to run NFTest above.

Inline review discussion on the NFTest checklist item:

> **Comment:** Would it help a reviewer if the justification for not running a test was under a subheader?
>
> **Reply:** My read is no, just in the interest of keeping things streamlined.
> If I did run the tests, the "test skip justification" header is just one more chunk of text that I need to remove.
> If I didn't run the tests, the existence of that header would remove a little bit of the social pressure to go back and do so, because clearly other people skip the tests too.
> The fact that the CHANGELOG checkbox says "I have added the changes included in this pull request to the `CHANGELOG.md`" without any weasel words ("I have added the major changes", "I have added the important changes", etc.) has made me go back and update it at the last second. That's the tone I'm going for here with test running. Totally a debatable topic though! Thoughts?
>
> **Reply:** I also prefer the current single heading format. In my mind, it's really streamlined for review because you can look in one place and see either the tests or the absence of tests with justification. They're both the same step of "checking tests".
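
Editor's note on the version-number checklist item in the template above: the release version of a Nextflow pipeline normally lives in the `manifest` block of `nextflow.config`, which is standard Nextflow configuration, while the parallel entry in `metadata.yaml` is a UCLA-CDS convention whose exact layout is not shown in this PR. The block below is therefore a minimal sketch with placeholder values, not this pipeline's actual configuration.

```groovy
// Minimal illustrative manifest block for nextflow.config.
// All values are placeholders; only the field names are standard Nextflow.
manifest {
    name            = 'pipeline-recalibrate-BAM'   // pipeline name, taken from the log path above
    author          = 'UCLA-CDS'                   // placeholder author
    description     = 'BAM recalibration pipeline' // placeholder description
    version         = '1.0.1'                      // semver bump, e.g. 1.0.0 -> 1.0.1 for a fix
    nextflowVersion = '>=20.10.0'                  // illustrative minimum Nextflow version
}
```

The same version string would also be updated in `metadata.yaml`, as the checklist item requires, so the two files stay in sync.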