
Start investigating test coverage #124

Closed
1 of 4 tasks
gomezzz opened this issue Aug 20, 2021 · 11 comments · Fixed by #167
Labels
documentation (Improvements or additions to documentation), good first issue (Good for newcomers), help wanted (Extra attention is needed)

Comments


gomezzz commented Aug 20, 2021

Feature

Desired Behavior / Functionality

Currently, there are several tests for a multitude of torchquad components. However, we have never investigated test coverage. This would be desirable, especially because, as the project grows over time, we are bound to face regressions if a large number of parts remain untested.

What Needs to Be Done

  • Find a good way to investigate the test coverage of the current tests (see the sketch after this list).
  • Implement it and add to the docs how it can be used.
  • Add some feedback on test coverage to the CI if it makes sense.
  • Improve test coverage (optional).
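
For the first item, a minimal sketch of one possible approach, assuming coverage.py and pytest are installed; the equivalent CLI would be coverage run -m pytest followed by coverage report / coverage html:

    # Sketch only (not the setup used in the repo): run the pytest suite under
    # coverage.py's Python API. With the current test layout this would be
    # started from inside the torchquad/tests/ directory.
    import coverage
    import pytest

    cov = coverage.Coverage()             # optionally restrict with source=[...]
    cov.start()
    exit_code = pytest.main(["."])        # run all tests in the current directory
    cov.stop()
    cov.save()
    cov.report()                          # per-file summary on stdout
    cov.html_report(directory="htmlcov")  # line-by-line HTML report
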
gomezzz added the documentation, good first issue, and help wanted labels on Aug 20, 2021
@wirrell
Copy link

wirrell commented Apr 13, 2022

Hi Pablo,

I have read your current test suite and would be interested in picking up this issue.

Would you be open to a pytest implementation of the tests? This would make investigating coverage much easier, and all tests could be run within pytest instead of being called in an if __name__ == "__main__" section at the end of each test script.

Cheers,

George


gomezzz commented Apr 14, 2022

Hi @wirrell,

great to hear! Go ahead :)

Would you be open to a pytest implementation of the tests?

We are already using pytest, in fact! The if __name__ == "__main__" blocks are just a convenience in case you want to try out an individual test without pytest. You can run pytest in the tests folder, and that is what we use in the CI as well.
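
For illustration only, a hypothetical test module following this convention (the test name and assertion are made up, not taken from the actual suite):

    # pytest discovers and runs the test_* functions; the trailing block is the
    # convenience entry point for running this one file without pytest.
    def test_trapezoid_1d():
        result = 2.0  # placeholder for a real integration result
        assert abs(result - 2.0) < 1e-6

    if __name__ == "__main__":
        test_trapezoid_1d()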


wirrell commented Apr 15, 2022

OK, great. I'll get started.


wirrell commented Apr 15, 2022

Here is the coverage report for the main branch, as generated by coverage.py.

[screenshot: coverage.py report for the main branch]

And here is the coverage for the current develop branch with all backends (some tests are currently failing).

[screenshot: coverage.py report for the develop branch]

Attached are HTML reports which show coverage line by line.
coverage_developbranch.zip
coverage_mainbranch.zip


wirrell commented Apr 15, 2022

@gomezzz What did you have in mind for a docs addition? Something like an additional .rst page with a quick tutorial on using coverage.py to investigate coverage?


gomezzz commented Apr 15, 2022

Oh, nice reports! Thanks! :) Seems we lost quite a bit of coverage in some integrators in develop 🙈

What did you have in mind for a docs addition? Something like an additional .rst page with a quick tutorial on using coverage.py to investigate coverage?

Maybe we could add a GitHub Action to compute this automatically? I just found a nice example that runs it and automatically creates a comment on new PRs about changes in coverage, similar to py-cov-action/python-coverage-comment-action-v2-example#2 (comment)?

Alternatively, short instructions in the docs would also work, but automating it may be more convenient? :)


wirrell commented Apr 21, 2022

Sounds good. I'll look into implementing a Github Action.


wirrell commented Jun 5, 2022

@gomezzz Here's an update on this.

I have two actions running that generate coverage reports and post a comment on the PR. There was a clash between the way the current tests import the source code, how coverage.py writes coverage reports, and the way python-coverage-comment-action reads those reports; it needed to be circumvented using a .cfg file. Details on that are below.

The new actions are
get_test_coverage_on_push_or_PR.yml
post_coverage_comment.yml

There is also a new setup.cfg with config options for coverage.py and pytest that are required to run the tests in a way that can interact with python-coverage-comment.

Here is an example PR with coverage comment.

One final note:
The coverage generator runs all the tests that the existing run_tests.yml action runs. There could be some scope to merge these two, but I have kept them separate for now.

Coverage Reports clash info

The clash comes from the fact that, currently, pytest must be run in the torchquad/tests/ directory, because the tests are written to import the source code using sys.path.append, e.g. in vegas_test.py:

import sys

sys.path.append("../")

This way of doing things (versus full imports, e.g. from torchquad.integration.vegas) means that we have to run pytest and generate reports in the tests directory, but the python-coverage-comment action only reads .coverage files in the top-level directory, without any option to change that behavior.
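
For illustration, a rough sketch of the two import styles being contrasted here (the VEGAS class name and exact module paths are assumptions based on the package layout, not copied from the tests):

    # Current style: only works when pytest is started inside torchquad/tests/,
    # because the parent directory is put on sys.path first.
    import sys
    sys.path.append("../")
    from integration.vegas import VEGAS            # assumed module path

    # Full-package style: would work from the repository root once the package
    # is importable (e.g. installed in editable mode), which would let the
    # .coverage file be written in the top-level directory.
    from torchquad.integration.vegas import VEGAS  # assumed module path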

To get around this, I have added a step in the new action that installs conda build and adds the source directory to the path, so that we can run pytest in the top-level directory. There is also a new setup.cfg file which directs pytest and coverage to run in a way conducive to python-coverage-comment.


wirrell commented Jun 5, 2022

@gomezzz Also, please ignore the PR I just submitted. It was a mistake made while creating an example PR in my fork.

Let me know your thoughts on the use of setup.cfg, or if you think I could structure this any better.


gomezzz commented Jun 7, 2022

Hi @wirrell,

first off, thanks for the nice work, it looks pretty neat!

post_coverage_comment.yml

that link doesn't work for me, 404 :)

The setup.cfg solution is fine; I would just maybe rename the file, if possible, to make clear it is only for the tests, so maybe test_setup.cfg?

Here is an example PR with coverage comment: wirrell#5

Looks good! I would just change some strings, e.g. call it 'Test coverage report'.

The coverage rate is 75%
The branch rate is 68%

I presume the coverage rate refers to the original code before the PR? So maybe call it 'Old overall test coverage rate' or something? Similarly, I reckon the branch rate refers to the current branch? So maybe 'PR branch test coverage'?

The coverage generator runs all the tests that the existing run_tests.yml action runs. Could be some scope to merge these two but I have kept them separate for now.

If you have an elegant solution in mind that avoids running the tests twice (does the coverage action fail if the tests fail? that would be easiest, I guess?), we could go for that?

The clash comes from the fact that, currently, pytest must be run in the torchquad/tests/ directory, because the tests are written to import the source code using sys.path.append, e.g. in vegas_test.py

Yeah, this is a bit of a mistake we made early on with torchquad. If you have an easy fix for it in mind, we could also go for that. Otherwise, your solution is also OK! (But maybe we should create a separate issue to rework the imports in the tests so that this can be remedied? 🤔)

Overall, feel free to create a PR for this, then I will have a look and give more concrete feedback!

Thanks!

@wirrell
Copy link

wirrell commented Jun 15, 2022

@gomezzz OK, great. I'll look into implementing these string changes and the file rename and then create a PR.

Re: tests running twice and imports - separate issues for both of those sounds good. I'd be happy to get to them after finishing this one.
