Add verification tests #428

Open · troyraen wants to merge 14 commits into main from raen/add/verification
Conversation

@troyraen (Collaborator) commented Nov 5, 2024

Change Description

  • My PR includes a link to the issue that I am addressing

Closes #118, closes #373, closes #374

Adds the following (a usage sketch follows this list):

  • Verifier class
    • Includes these tests: hats.io.validation.is_valid_catalog, schemas, file sets, row counts
    • Outputs this file: verifier_results.csv
  • Unit tests
  • Test data
    • Two malformed datasets: 'bad_schemas' and 'wrong_files_and_rows'
    • Code to regenerate those datasets in case that is needed in the future.
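For orientation, here is a minimal, hypothetical usage sketch of the new pieces. The module paths, `Verifier.from_args`, and the four `test_*` method names come from this PR description; the `VerificationArguments` class name, its parameter names, and `write_results` are illustrative guesses, not the PR's actual API:

```python
# Hypothetical sketch only -- argument names and write_results are assumptions.
from hats_import.verification.arguments import VerificationArguments
from hats_import.verification.run_verification import Verifier

args = VerificationArguments(
    input_catalog_path="path/to/catalog",  # assumed parameter name
    output_path="results/",                # assumed parameter name
)
verifier = Verifier.from_args(args)   # load datasets
verifier.test_is_valid_catalog()      # wraps hats.io.validation.is_valid_catalog
verifier.test_file_sets()
verifier.test_num_rows()
verifier.test_schemas()
verifier.write_results()              # assumed; produces verifier_results.csv
```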

Still to-do:

  • Test a large catalog and determine whether the tests would benefit from parallelization with Dask.

Code Quality

  • I have read the Contribution Guide and LINCC Frameworks Code of Conduct
  • My code follows the code style of this project
  • My code builds (or compiles) cleanly without any errors or warnings
  • My code contains relevant comments and necessary documentation

New Feature Checklist

  • I have added or updated the docstrings associated with my feature using the NumPy docstring format
  • I have updated the tutorial to highlight my new feature (if appropriate)
  • I have added unit/End-to-End (E2E) test cases to cover my new feature
  • My change includes a breaking change
    • My change includes backwards compatibility and deprecation warnings (if possible)

github-actions bot commented Nov 5, 2024

| Before [57aaa9d] | After [0e37cc0] | Ratio | Benchmark (Parameter) |
|------------------|-----------------|-------|------------------------|
| failed | failed | n/a | benchmarks.BinningSuite.time_read_histogram |

Click here to view all benchmarks.

@troyraen mentioned this pull request Nov 5, 2024
@troyraen (Collaborator, Author) commented Nov 5, 2024

Unit tests are failing because of #427.

I need some guidance on the pre-commit failures. Most seem to stem from path arguments being accepted as strings and converted to Path objects in __post_init__, but the pre-commit checks don't recognize the conversion.
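For reference, a minimal sketch of the pattern being described (the class and field names here are illustrative, not the PR's actual code):

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class VerificationArguments:
    # Callers may pass a str; it is normalized to a Path after init.
    input_catalog_path: str | Path

    def __post_init__(self):
        # Static analyzers often don't track this narrowing, so Path-only
        # usage downstream can be flagged even though it is safe at runtime.
        self.input_catalog_path = Path(self.input_catalog_path)
```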

@troyraen force-pushed the raen/add/verification branch from 1950217 to 7b2fb2f on January 4, 2025
codecov bot commented Jan 4, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.09%. Comparing base (c3fb952) to head (a7053a2).
Report is 4 commits behind head on main.

@@            Coverage Diff             @@
##             main     #428      +/-   ##
==========================================
+ Coverage   94.59%   95.09%   +0.50%     
==========================================
  Files          28       28              
  Lines        1627     1794     +167     
==========================================
+ Hits         1539     1706     +167     
  Misses         88       88              

☔ View full report in Codecov by Sentry.

@troyraen (Collaborator, Author) commented Jan 4, 2025

@delucchi-cmu, this is finally ready for your review.

I tested it on the ZTF DR20 Lightcurves dataset (29,176 partitions; 10 TB) several times. It requires about 60 GB of RAM, a bit over half of which comes from hats.io.validation.is_valid_catalog. Total runtime is about 15 minutes, with this breakdown:

  • 30 sec : Verifier.from_args (load datasets)
  • 11.5 min : test_is_valid_catalog (just calls hats.io.validation.is_valid_catalog)
  • 0.1 sec : test_file_sets
  • 3 min : test_num_rows
  • 2 sec : test_schemas

It does not currently use Dask. If the times above are reasonably representative of other large catalogs (which they may or may not be), I don't think Dask is needed. Let me know what you think.
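If parallelization does turn out to be worthwhile, here is a minimal sketch of one option for the slow row-count pass, using dask.delayed over per-partition parquet footers. This is not code from the PR, and `partition_paths` is an assumed list of the catalog's data file paths:

```python
import dask
import pyarrow.parquet as pq


@dask.delayed
def partition_num_rows(path):
    # The row count comes from the parquet footer, so no data pages are read.
    return pq.read_metadata(path).num_rows


# partition_paths: assumed list of the catalog's Npix=*.parquet file paths
total_rows = sum(dask.compute(*[partition_num_rows(p) for p in partition_paths]))
```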

@troyraen requested a review from delucchi-cmu on January 4, 2025
src/hats_import/verification/arguments.py (review comments resolved)
src/hats_import/verification/run_verification.py (review comments resolved)
```python
# check user-supplied total, if provided
if self.args.truth_total_rows is not None:
```
Contributor commented:

There is also a catalog.catalog_info.total_rows attribute that should match the user-provided value, and that should match the _metadata-derived value.
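A minimal sketch of that three-way check, assuming it runs inside the verifier method shown above with a loaded `catalog` object and a `metadata_total` already summed from _metadata (names other than catalog_info.total_rows and args.truth_total_rows are illustrative):

```python
# Hypothetical check; not code from this PR.
totals = {
    "catalog_info": catalog.catalog_info.total_rows,
    "_metadata": metadata_total,
}
if self.args.truth_total_rows is not None:
    totals["user-supplied"] = int(self.args.truth_total_rows)
if len(set(totals.values())) != 1:
    raise ValueError(f"total_rows mismatch: {totals}")
```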

```python
INPUT_CATALOG_DIR = DATA_DIR / "small_sky_object_catalog"


def run(input_catalog_dir: Path = INPUT_CATALOG_DIR, data_dir: Path = DATA_DIR) -> None:
```
Contributor commented:

In general, test data is generated via this notebook, so that if we make other format changes, we can quickly re-generate all unit test data in one go.

https://github.com/astronomy-commons/hats-import/blob/main/tests/data/generate_data.ipynb

So there are a few options for this PR (in no particular order):

1 - incorporate into the existing notebook
2 - create a second notebook in that same directory so folks can easily find it when re-generating unit test data
3 - skip for this PR, and maybe move it in the future.

Contributor commented:

This kind of thing is generally the trickiest part of writing verification (making data that's bad enough to trip the checks), so I really appreciate that you took the time to do it, and make it reproducible!!

```python
str(COMMON_DIR / "Npix=11.missing_file.parquet"),

expected_bad_file_names = {"Npix=11.extra_file.parquet", "Npix=11.missing_file.parquet"}
actual_bad_file_names = {
    file_name.replace("\\", "/").split("/")[-1] for file_name in verifier.results_df.bad_files.squeeze()
}
```
Contributor commented:

I think you can use os.path.basename here
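For illustration, that suggestion would reduce the comprehension to something like the sketch below. One caveat: os.path.basename splits only on the host OS separator, so the explicit backslash replacement in the original may still be needed to handle Windows-style paths on POSIX runners:

```python
import os

actual_bad_file_names = {
    os.path.basename(file_name) for file_name in verifier.results_df.bad_files.squeeze()
}
```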
