This is where the 'setup' scripts live. These generate runfiles containing sequential calls to various radio astronomy packages, as well as other Python tools that form part of `oxkat`. Executing the resulting script will run these calls in order, or, if you are using the cluster at IDIA or CHPC, submit an interdependent batch of jobs to the queue.
The computing infrastructure must be specified when a setup script is run, for example:

```
$ python setups/1GC.py idia
```

where `idia` can be replaced with `chpc` or `node`, the latter being for when you want to execute the jobs on your own machine or a standalone node.
Rather than having a single go-pipeline script, the processing is partitioned into stages (1GC, FLAG, 2GC, 3GC), after each of which it is prudent to pause and examine the state of the processing before continuing. If you are gung-ho and don't pay the electricity bill then you can just execute them in order and collect your map at the end.
To process your MeerKAT data, they must be in a Measurement Set (MS) containing your target scans as well as suitable calibrator scans. Copy (or place a symlink to) this Measurement Set in an empty folder, then copy (or make symlinks to) the contents of the root `oxkat` folder, and you will be ready to go; a minimal sketch of this layout is given below. There follows a description of the setup scripts in this folder, in the order in which they should be run.
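For illustration, here is a minimal Python sketch of that working-folder layout. All paths are hypothetical placeholders for your own MS, working area, and `oxkat` copy:

```python
# Hypothetical paths throughout -- substitute your own.
import glob
import os

workdir = '/scratch/me/myproject'      # empty working folder (assumed)
ms = '/archive/myproject/mydata.ms'    # your Measurement Set (assumed)
oxkat_root = '/software/oxkat'         # root of your oxkat copy (assumed)

os.makedirs(workdir, exist_ok=True)

# Symlink the MS into the working folder.
os.symlink(ms, os.path.join(workdir, os.path.basename(ms)))

# Symlink the contents of the root oxkat folder alongside it.
for item in glob.glob(os.path.join(oxkat_root, '*')):
    os.symlink(item, os.path.join(workdir, os.path.basename(item)))
```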
The 1GC script (`setups/1GC.py`) will perform the following steps. This stage mostly involves using `CASA` to execute the provided processing scripts.
- Duplicate your source MS, averaging it down to 1,024 channels (if necessary).
- Examine the contents of the MS to identify target and calibrator fields. The preferred primary calibrator is either PKS B1934-608 or PKS B0408-65, but others should work as long as `CASA` knows about them. Targets are paired with the secondary calibrator that is closest to them on the sky.
- Rephase the visibilities of the primary calibrator to correct for erroneous positions that were present in the open-time data (this has no effect on observations that did not have this issue).
- Apply basic flagging commands to all fields.
- Run autoflaggers on the calibrator fields.
- Split the calibrator fields into a `*.calibrators.ms` with 8 spectral windows.
- Derive intrinsic spectral models for the secondary (or secondaries) based on the primary calibrator, using the above MS.
- Derive delay (K), bandpass (B), and gain (G) calibrations from the primary and secondary calibrators, and apply them to all calibrators and targets. The K, B, and G corrections are derived iteratively, with rounds of residual flagging in between (see the sketch after this list).
- Plot the gain tables using `ragavi-gains`.
- Plot visibilities of the corrected calibrator data using `shadeMS`.
- Split the target data out into individual Measurement Sets, with the reference-calibrated data in the `DATA` column of the MS. Note that only basic flagging commands will have been applied to the target data at this stage.
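The actual solves live in oxkat's bundled CASA scripts; what follows is only a minimal sketch of one K/B/G round using the modular `casatasks` API. The field IDs, table names, reference antenna, and solution intervals are all illustrative assumptions, not oxkat's settings:

```python
# Minimal sketch of one delay/bandpass/gain round (illustrative values only).
from casatasks import gaincal, bandpass, applycal, flagdata

myms = 'mydata.ms'        # assumed MS name
bpcal = '0'               # primary calibrator field (assumed)
gcal = '1'                # secondary calibrator field (assumed)
ref_ant = 'm000'          # assumed reference antenna

# Delay (K) calibration on the primary.
gaincal(vis=myms, caltable='cal.K0', field=bpcal, refant=ref_ant,
        gaintype='K', solint='inf')

# Bandpass (B) calibration on the primary, applying K on the fly.
bandpass(vis=myms, caltable='cal.B0', field=bpcal, refant=ref_ant,
         solint='inf', gaintable=['cal.K0'])

# Gain (G) calibration on primary and secondary, applying K and B.
gaincal(vis=myms, caltable='cal.G0', field=','.join([bpcal, gcal]),
        refant=ref_ant, gaintype='G', solint='inf', calmode='ap',
        gaintable=['cal.K0', 'cal.B0'])

# Apply to the secondary and flag residual outliers before re-solving.
applycal(vis=myms, field=gcal,
         gaintable=['cal.K0', 'cal.B0', 'cal.G0'])
flagdata(vis=myms, field=gcal, mode='rflag', datacolumn='residual')
```

In practice such a round is repeated, with the residual flagging informing the next set of solves.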
Flagging operations are saved to the `.flagversions` tables at every stage. Products for examination are stored in the `GAINTABLES` and `VISPLOTS` folders.
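Since the flag state is checkpointed to `.flagversions` at every stage, CASA's `flagmanager` task can be used to inspect it or roll it back. A small illustration (the version name here is made up, not one that oxkat writes):

```python
# List, save, and restore flag versions (illustrative version name).
from casatasks import flagmanager

myms = 'mydata.ms'                      # assumed MS name
flagmanager(vis=myms, mode='list')      # show available flag versions
flagmanager(vis=myms, mode='save', versionname='my_checkpoint')
flagmanager(vis=myms, mode='restore', versionname='my_checkpoint')
```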
The FLAG script (`FLAG.py`) will perform the following steps for every target in the source MS:
- Autoflag the target data using `tricolour`.
- Image the targets, performing unconstrained deconvolution using `wsclean` (a sketch follows this list).
- Generate a FITS mask of the field using local RMS thresholding (sketched after the next paragraph).
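oxkat's runfiles assemble the `wsclean` calls as command strings; a rough equivalent, built and run from Python, might look like the following. The image size, pixel scale, weighting, and thresholds are illustrative values, not oxkat's defaults:

```python
# Sketch of an unconstrained (blind) wsclean deconvolution call.
# All numerical parameters are illustrative, not oxkat's defaults.
import subprocess

myms = 'mydata_target.ms'   # assumed per-target MS name
subprocess.run([
    'wsclean',
    '-name', 'img_blind',          # output image prefix
    '-size', '6144', '6144',       # image dimensions in pixels
    '-scale', '1.1asec',           # pixel scale
    '-weight', 'briggs', '-0.3',   # robust weighting
    '-niter', '60000',             # maximum clean iterations
    '-mgain', '0.9',               # major-cycle gain
    '-auto-threshold', '1.0',      # stop cleaning at ~1 sigma
    '-data-column', 'DATA',        # reference-calibrated data from 1GC
    myms,
], check=True)
```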
If you are running on IDIA or CHPC hardware then the steps above will be submitted for each field in parallel. The resulting image(s) will be available for examination in the `IMAGES` folder.
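oxkat uses its own mask-making tool for the final step; purely to illustrate the local-RMS-thresholding idea, here is a minimal sketch. The input filename, box size, and sigma cut are arbitrary assumptions:

```python
# Build a FITS clean mask by thresholding against a local RMS estimate.
import numpy as np
from astropy.io import fits
from scipy.ndimage import uniform_filter

image_fits = 'img_blind-image.fits'   # assumed wsclean output name
box, sigma = 100, 5.0                 # sliding-box size (pix) and cut

with fits.open(image_fits) as hdul:
    header = hdul[0].header
    img = np.squeeze(hdul[0].data).astype(np.float64)

# Local RMS via a sliding box: sqrt(E[x^2] - E[x]^2).
mean = uniform_filter(img, size=box, mode='reflect')
meansq = uniform_filter(img**2, size=box, mode='reflect')
rms = np.sqrt(np.clip(meansq - mean**2, 0.0, None))

# Mask pixels brighter than sigma times the local RMS.
mask = (img > sigma * rms).astype(np.float32)
fits.writeto('img_blind.mask.fits', mask, header, overwrite=True)
```

Estimating the RMS in a sliding box lets the threshold track the position-dependent noise across the field.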
The 2GC script will perform the following steps for every target in the source MS:
- Masked deconvolution using `wsclean` and the FITS mask generated by `FLAG.py`.
- Predict model visibilities based on the resulting clean component model using `wsclean`. Note that this is a separate step, as baseline-dependent averaging is applied during imaging by default to speed up the process.
- Self-calibrate the data (see the sketch after this list).
- Masked deconvolution of the `CORRECTED_DATA` using `wsclean`.
- Refine the FITS mask based on the self-calibrated image.
- Predict model visibilities based on the refined model from the `CORRECTED_DATA` image.
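A hedged sketch of the predict-then-self-calibrate pair: a `wsclean -predict` call fills `MODEL_DATA` from the clean-component model, and a phase-only `casatasks` solve is applied against it. The image prefix, solution interval, reference antenna, and table name are illustrative assumptions, not oxkat's settings:

```python
# Sketch: predict model visibilities, then phase-only self-calibration.
import subprocess
from casatasks import gaincal, applycal

myms = 'mydata_target.ms'   # assumed per-target MS name

# Fill MODEL_DATA from the clean-component model written by the
# preceding masked deconvolution (same '-name' prefix assumed).
subprocess.run(['wsclean', '-predict', '-name', 'img_masked', myms],
               check=True)

# Phase-only gain solve against MODEL_DATA.
gaincal(vis=myms, caltable='cal.Gphase', refant='m000',
        gaintype='G', calmode='p', solint='64s')

# Write self-calibrated visibilities to CORRECTED_DATA.
applycal(vis=myms, gaintable=['cal.Gphase'])
```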
If you are running on IDIA or CHPC hardware then the steps above will be submitted for each field in parallel. The resulting image(s) will be available for examination in the `IMAGES` folder.
Big caveat: the software required for the 3GC (direction-dependent calibration) stage cannot generally be run via the IDIA Slurm queue, and it will not run at all on CHPC's Lengau cluster due to the configuration of the worker nodes. Using `salloc` to acquire a HighMem worker node at IDIA and then running the resulting jobs in `node` mode is probably the best approach if you do not have access to a suitable machine of your own. Note that DD calibration and imaging are very resource-heavy. Note also that there generally isn't a single approach to DD calibration (or even DI calibration) that works for every field.
[PENDING]