Refactor to make all algorithms use the same formats, simplify parallelism, direct pre-conditioning and produce uniformly blurred images #111

Merged (235 commits) on Aug 30, 2024

landmanbester (Collaborator)
  • Mostly remove reliance on Dask collections, except for the init and degrid applications, which interact with measurement-set-like objects via dask-ms
  • Simplify the parallelism parameters to --nworkers and --nthreads, where the former controls the number of Dask workers (and therefore the number of chunks processed in parallel) and the latter the number of (vertical) threads available to each worker
  • Homogenize data structures between the distributed and non-distributed applications. In particular, introduce xds_from_url, which returns a lazy view of all datasets contained in the zarr store, and xds_from_list, which eagerly loads a list of zarr datasets into memory in parallel (optionally dropping unnecessary variables)
  • Add --bda-decorr and --max-field-of-view options to perform time-based BDA in the init application
  • Introduce a new form of pre-conditioning which assumes periodic boundary conditions, so that the (regularized) Hessian can be inverted directly without sub-iterations, and make this the default. This removes the need to compute an oversized PSF, but it introduces a tapering border, since the image edges need to be smoothly tapered to zero to avoid edge effects
  • Add an option to produce uniformly blurred images using the restore application. This is still a work in progress; the aim is to homogenize resolution across frequency for MAP estimates, which do not have an intrinsic resolution of zero (as is assumed for CLEAN-based models, for example). The main idea is to derive the intrinsic resolution of the model and of the update separately, by inferring the natural response to point sources under different forms of regularization, and then to correct for this by homogenizing the "convolution kernels". This is not perfect, but it potentially allows using more naturally weighted images without sacrificing resolution
  • Remove the attempt to horizontally parallelize wavelet transforms over bases, since this results in sub-optimal resource utilization (differing kernel supports leave threads idle for the wavelets with smaller kernel support)
  • Add an option to output a noise image from the grid application by imaging a realization of random noise drawn with variance set by the "corrected weights"
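The eager, parallel loading behind xds_from_list can be pictured as a thread pool mapping an open-and-load step over a list of dataset paths. The sketch below is illustrative only: the loader, the drop_vars argument, and the dict-based "datasets" are stand-ins, not the actual pfb-imaging signatures.

```python
from concurrent.futures import ThreadPoolExecutor


def load_one(path, drop_vars=()):
    # Stand-in for opening a zarr group and loading it into memory;
    # here we just fabricate a small dict keyed by variable name.
    ds = {"DATA": f"data@{path}", "WEIGHT": f"weight@{path}"}
    for v in drop_vars:
        ds.pop(v, None)
    return ds


def xds_from_list_sketch(paths, drop_vars=(), nthreads=4):
    # Eagerly load all datasets in parallel, optionally dropping
    # variables that are not needed downstream.
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        return list(pool.map(lambda p: load_one(p, drop_vars), paths))
```

The lazy counterpart (xds_from_url) would instead defer the load step until a dataset is actually accessed.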
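Why periodic boundaries allow direct inversion: convolution with the PSF is diagonalized by the FFT under periodic boundary conditions, so applying the inverse of the regularized Hessian reduces to an element-wise division in Fourier space, with no CG sub-iterations. A minimal numpy sketch under the simplifying assumption that the Hessian is PSF convolution plus sigma*I (the function names are illustrative, not the actual pfb-imaging API):

```python
import numpy as np


def taper(n, width):
    """Cosine ramp taking the image edges smoothly to zero, which is
    the role of the tapering border mentioned above."""
    t = np.ones(n)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(width) / width))
    t[:width] = ramp
    t[-width:] = ramp[::-1]
    return t


def direct_precondition(x, psf, sigma):
    """Apply (H + sigma*I)^{-1} to image x, where H is convolution
    with the (centered) PSF, assuming periodic boundary conditions."""
    # Diagonalize the convolution: divide element-wise in Fourier space.
    psf_hat = np.fft.rfft2(np.fft.ifftshift(psf))
    return np.fft.irfft2(np.fft.rfft2(x) / (psf_hat + sigma), s=x.shape)
```

Because the inversion is exact per Fourier mode, no oversized PSF is needed; the price is the wrap-around assumption, which the taper mitigates.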
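One way to homogenize resolution, if the kernels are approximated as Gaussian: convolve each frequency plane with the Gaussian whose width is the quadrature difference between the common target width and that plane's intrinsic width. A numpy-only sketch using FFT-based convolution (the Gaussian assumption and function names are mine; the restore application infers its kernels rather than prescribing them):

```python
import numpy as np


def _gaussian_blur_fft(img, sigma):
    """Gaussian convolution via the FFT (assumes periodic boundaries)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    g_hat = np.exp(-2.0 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(img) * g_hat).real


def blur_to_common_resolution(cube, intrinsic_sigmas, target_sigma):
    """Convolve each frequency plane so all end up at target_sigma.

    For Gaussian kernels, the kernel taking width sigma_i to sigma_t
    has width sqrt(sigma_t**2 - sigma_i**2)."""
    out = np.empty_like(cube, dtype=float)
    for i, s in enumerate(intrinsic_sigmas):
        if target_sigma < s:
            raise ValueError("target must be coarser than intrinsic resolution")
        out[i] = _gaussian_blur_fft(cube[i], np.sqrt(target_sigma**2 - s**2))
    return out
```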
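The noise image rests on the fact that a visibility with weight w has noise variance 1/w; drawing complex noise with that variance and imaging it with the same gridding step as the data yields an image whose statistics reflect the expected noise floor. A sketch of just the draw (the function name is illustrative; the grid application then images this realization in place of the data):

```python
import numpy as np


def draw_vis_noise(weights, rng):
    """Draw complex visibility noise with E[|n|^2] = 1/w per visibility.

    Each of the real and imaginary parts gets variance 1/(2w) so the
    total complex variance is 1/w, as implied by the corrected weights."""
    scale = 1.0 / np.sqrt(2.0 * weights)
    return rng.normal(scale=scale) + 1j * rng.normal(scale=scale)
```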
