Add verification tests #428
base: main
Conversation
Unit tests are failing because of #427. I need some guidance on the pre-commit failures. Most seem to be a result of path arguments being accepted as strings, which are converted in the post_init, but the conversion isn't being recognized.
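For context, here is a minimal sketch of the pattern described above, assuming a dataclass-style arguments object (the class and field names are illustrative, not the PR's actual code):

from dataclasses import dataclass
from pathlib import Path

@dataclass
class ExampleArguments:
    # Annotated as Path, but callers may pass a plain string.
    input_catalog_path: Path

    def __post_init__(self):
        # Coerce a string argument to a Path. Static analysis tools generally do not
        # track this runtime conversion, so they may still flag the string argument
        # at the call site, or downstream Path-only usage of the field.
        self.input_catalog_path = Path(self.input_catalog_path)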
Branch force-pushed from b0b464a to 1950217.
Branch force-pushed from 1950217 to 7b2fb2f.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #428      +/-   ##
==========================================
+ Coverage   94.59%   95.09%   +0.50%
==========================================
  Files          28       28
  Lines        1627     1794     +167
==========================================
+ Hits         1539     1706     +167
  Misses         88       88

View full report in Codecov by Sentry.
@delucchi-cmu, this is finally ready for your review. I tested it on the ZTF DR20 Lightcurves dataset (29176 partitions; 10TB) several times. It requires about 60GB RAM with a bit over half of that coming from
It does not currently use Dask. If the times above are reasonably representative for other large catalogs (which they may or may not be; idk), then I don't think it needs it. Let me know what you think there.
# check user-supplied total, if provided
if self.args.truth_total_rows is not None:
There is also a catalog.catalog_info.total_rows attribute that should match the user-provided value. And that should match the _metadata-derived value.
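A minimal sketch of that three-way consistency check, with illustrative names and signature (the PR's actual Verifier attributes and return types may differ):

def check_total_rows(metadata_total: int, catalog_info_total: int, user_total: int | None) -> list[str]:
    """Compare the _metadata-derived row count against catalog_info.total_rows and,
    if supplied, the user-provided truth value; return any mismatch messages."""
    problems = []
    if catalog_info_total != metadata_total:
        problems.append(f"catalog_info.total_rows ({catalog_info_total}) != _metadata total ({metadata_total})")
    if user_total is not None and user_total != metadata_total:
        problems.append(f"user-supplied total ({user_total}) != _metadata total ({metadata_total})")
    return problems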
INPUT_CATALOG_DIR = DATA_DIR / "small_sky_object_catalog"

def run(input_catalog_dir: Path = INPUT_CATALOG_DIR, data_dir: Path = DATA_DIR) -> None:
In general, test data is generated via this notebook, so that if we make other format changes, we can quickly re-generate all unit test data in one go.
https://github.com/astronomy-commons/hats-import/blob/main/tests/data/generate_data.ipynb
So there are a few options for this PR (in no particular order):
1 - incorporate into the existing notebook
2 - create a second notebook in that same directory so folks can easily find it when re-generating unit test data
3 - skip for this PR, and maybe move it in the future.
This kind of thing is generally the trickiest part of writing verification (making data that's bad enough to trip the checks), so I really appreciate that you took the time to do it, and make it reproducible!!
Co-authored-by: Melissa DeLucchi <[email protected]>
str(COMMON_DIR / "Npix=11.missing_file.parquet"),

expected_bad_file_names = {"Npix=11.extra_file.parquet", "Npix=11.missing_file.parquet"}
actual_bad_file_names = {
    file_name.replace("\\", "/").split("/")[-1] for file_name in verifier.results_df.bad_files.squeeze()
}
I think you can use os.path.basename here.
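For illustration, the suggested simplification could look like the snippet below; the file names here are made-up stand-ins for the values coming out of verifier.results_df.bad_files:

import os

bad_files = ["tests/data/Npix=11.extra_file.parquet", "common_dir/Npix=11.missing_file.parquet"]
actual_bad_file_names = {os.path.basename(file_name) for file_name in bad_files}
# {'Npix=11.extra_file.parquet', 'Npix=11.missing_file.parquet'}
# Note: on POSIX systems os.path.basename splits only on "/", so the explicit
# backslash replacement in the original test is what handles Windows-style separators there.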
Change Description
Closes #118, closes #373, closes #374
Adds the following:
- Verifier class
- hats.io.validation.is_valid_catalog, schemas, file sets, row counts

Still to-do:
Code Quality
New Feature Checklist