Start investigating test coverage #124
Comments
Hi Pablo, I have read your current test suite and would be interested in picking up this issue. Would you be open to a pytest implementation of the tests? This would make investigation of coverage much easier, and all tests could be run within pytest as opposed to being called in an `if __name__ == "__main__"` section at the end of each test script. Cheers, George
Hi @wirrell, great to hear! Go ahead :)
We are already using pytest, in fact! The `if __name__ == "__main__"` stuff is just for convenience, in case you want to try out an individual test without pytest. You can run pytest in the tests folder, and that is what we use in the CI as well.
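For readers skimming the thread, here is a minimal sketch of the pattern being discussed; the test name and body are hypothetical and not taken from torchquad:

```python
# test_example.py - hypothetical illustration of a test file that works
# both under pytest and as a standalone script.

def test_addition():
    # pytest auto-discovers functions named test_* in files named test_*.py
    assert 1 + 1 == 2

if __name__ == "__main__":
    # Convenience entry point: allows `python test_example.py`
    # for trying out this one test without invoking pytest.
    test_addition()
```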
OK, great. I'll get started.
Here is the coverage report for the `main` branch, as generated by coverage.py, and the coverage for the current `develop` branch with all backends (some tests currently failing). Attached are HTML reports which show coverage line by line.
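For reference, reports like these can be reproduced locally with coverage.py's standard command-line workflow (the working directory reflects the current test layout described later in this thread):

```sh
pip install coverage pytest
cd torchquad/tests     # tests currently assume this working directory
coverage run -m pytest # run the suite under coverage measurement
coverage report        # print a per-file summary to the terminal
coverage html          # write the line-by-line HTML report to htmlcov/
```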
@gomezzz What did you have in mind for a docs addition? Something like an additional `.rst` page with a quick tutorial on using coverage.py to investigate coverage?
Oh, nice reports! Thanks! :) Seems we lost quite some coverage in some integrators in `develop` 🙈
Maybe we could add a GitHub Action to automatically compute this? I just found a nice example that runs it and automatically creates a comment on new PRs about changes in coverage, similar to py-cov-action/python-coverage-comment-action-v2-example#2 (comment). Alternatively, a short instruction in the docs would also work, but automating may be more convenient? :)
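As a rough sketch of what such a workflow could look like (versions, branch names, and the permissions block are assumptions to be checked against the python-coverage-comment-action README, not taken from this thread):

```yaml
# .github/workflows/coverage.yml - hypothetical sketch
name: Coverage
on:
  pull_request:
  push:
    branches: [main, develop]

jobs:
  coverage:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write  # needed to post the PR comment
      contents: write       # needed to store coverage data for comparison
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - run: pip install coverage pytest
      - run: coverage run -m pytest
      - name: Post coverage comment
        uses: py-cov-action/python-coverage-comment-action@v3
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```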
Sounds good. I'll look into implementing a GitHub Action.
@gomezzz Here's an update on this. I have two actions running that generate coverage reports and post a comment on PRs. There was a clash between the way the current tests import the source code, the way coverage.py writes coverage reports, and the way python-coverage-comment-action reads those reports, which needed to be circumvented using a `.cfg` file. Details on that below. Alongside the new actions there is a new `setup.cfg`, which has the config options for coverage.py and pytest required to run the tests in a way that can interact with python-coverage-comment. Here is an example PR with the coverage comment.

One final note on the coverage-reports clash: it comes from the fact that currently pytest must be run in the `torchquad/tests/` directory, because the tests import the source code via paths relative to that directory rather than as full package imports. This means coverage.py records file paths that python-coverage-comment cannot match against the repository layout. To get around this, I have added a step in the new action that installs conda-build and adds the source directory to the Python path, so that we can run pytest in the top directory. The new `setup.cfg` file directs pytest and coverage to run in a way conducive to python-coverage-comment.
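A minimal sketch of what such a `setup.cfg` could contain (section contents are assumptions based on the requirements described above, not copied from the PR):

```ini
# setup.cfg - hypothetical sketch of the test/coverage configuration
[tool:pytest]
# let pytest be launched from the repository root instead of torchquad/tests/
testpaths = torchquad/tests

[coverage:run]
# python-coverage-comment-action requires coverage.py to record
# file paths relative to the repository root
relative_files = True
```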
@gomezzz Also, please ignore the PR I just submitted. It was a mistake while creating an example PR in my fork. Let me know your thoughts on the use of `setup.cfg`, or if you think I could structure this any better.
Hi @wirrell, thanks first off for the nice work, looks pretty neat! That link doesn't work for me, 404 :) The `setup.cfg` solution is fine; I would just maybe rename the file, if possible, to make clear that it is only for the tests.
Looks good! I would just change some strings:
I presume the coverage rate is on the original code, before the PR? So maybe call it 'Old overall test coverage rate' or something? Similarly, 'branch', I reckon, refers to the current branch? So maybe 'PR branch test coverage'?
If you have an elegant solution in mind that avoids running the tests twice (does the coverage action fail if the tests fail? That would be easiest, I guess?), we could go for that.
Yeah, this is a bit of a mistake we made early on with torchquad. If you have an easy fix for it in mind, we could also go for that. Otherwise, your solution is also OK! (But maybe we create a separate issue to rework the imports in the tests so that this can be remedied? 🤔) Overall, feel free to create a PR for this, then I will have a look and give more concrete feedback! Thanks!
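To make the proposed import rework concrete, a hypothetical before/after sketch (the module path is an assumption about torchquad's layout, not quoted from the repository):

```python
# Current style: resolvable only when pytest runs inside torchquad/tests/,
# since the module is found via the local directory:
#   from monte_carlo import MonteCarlo
#
# Full-package style: works from the repository root once torchquad is
# installed or its source directory is on sys.path:
from torchquad.integration.monte_carlo import MonteCarlo  # assumed path

mc = MonteCarlo()  # instantiate just to show the import resolves
```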
@gomezzz OK, great. I'll look into implementing these string changes and the file rename, and then create a PR. Re: tests running twice and imports, separate issues for both of those sounds good. I'd be happy to get to them after finishing this one.
Feature
Desired Behavior / Functionality
Currently, there are several tests for a multitude of torchquad's components. However, we have never investigated test coverage. This may be desirable, especially since, as the project grows over time, we are bound to face regressions if there is a large number of untested parts.
What Needs to Be Done