The im tool handles synthesis imaging operations.
This chapter describes how to make and deconvolve images starting from calibrated interferometric data, possibly supplemented with single-dish data or an image made from single-dish data. This data must be available in CASA (see § 2 on importing data). See § 4 for information on calibrating synthesis data. In the following sections, the user will learn how to make various types of images from synthesis data, reconstruct images of the sky using the available deconvolution techniques, include single-dish information in the imaging process, and to prepare to use the results of imaging for improvement of the calibration process (“self-calibration”).
The current imaging and deconvolution tasks are:
There are also tasks that help you set up the imaging or interface imaging with calibration:
The full “tool kit” that allows expert-level imaging must still be used if you do not find enough functionality within the tasks above.
Information on other useful tasks and parameter setting can be found in:
The im.setimage method is used to set many of the common image parameters. The im.advise method gives helpful advice for setting up for imaging.
We now describe some parameters that are common to the imaging
tasks. These should behave the same way in any imaging task
that they are found in. These are in alphabetical order.
ALERT: clean tries to use up to four cores on the computer that it is running on. If this is not desired, the environment variable OMP_NUM_THREADS can be set to a lower value.
The cell parameter defines the pixel size in the x and y axes for the output image. If given as floats or integers, the cell size is in arcseconds, e.g.

cell = [0.5, 0.5]

makes 0.5″ pixels. You can also give the cell size as quantity strings, e.g.

cell = ['0.5arcsec', '0.5arcsec']

If a single value is given, then square pixels of that size are assumed.
The field parameter selects the field indexes or names to be used in imaging. Unless you are making a mosaic, this is usually a single index or name:
field = '0'         # First field (index 0)
field = '1331+305'  # 3c286
field = '*'         # all fields in dataset
The syntax for field selection is given in § 2.3.2.
The value of the imagename parameter is used as the root name of the output image. Depending on the particular task and the options chosen, one or more images with names built from that root will be created. For example, running the clean task with imagename='ngc5921' creates a series of output images with the names ngc5921.clean, ngc5921.residual, ngc5921.model, etc.
If an image with that name already exists, it will in general be overwritten. Beware of using the names of existing images, however. If clean is run with an imagename for which <imagename>.residual and <imagename>.model already exist, then clean will continue starting from these (effectively restarting from the end of the previous clean). Thus, if clean is run multiple times consecutively with the same imagename, the cleaning is incremental (as in the difmap package).
The output image may also have a different beam per plane. For datasets with very large fractional bandwidth, clean will use a different PSF for each channel when the PSF changes by more than half a pixel as a function of frequency. To smooth to a common resolution, one can use the resmooth parameter (§ 5.2.6) to smooth to the smallest possible common beam, the restoringbeam parameter (§ 5.3.11) for an arbitrary, larger beam, or the task imsmooth (§ 6.18) after cleaning. Data analysis tasks such as immoments in CASA support changing beams per plane.
The image size in numbers of pixels on the x and y axes is set by imsize. For example,
imsize = [640, 640]
makes a square image 640 pixels on a side. If a single value is given, then a square image of that dimension is made. The underlying algorithms work best for certain image sizes; if you pick a size for which the algorithm will be particularly slow, the logger will print a warning suggesting the nearest optimal values. In general, the best performance is obtained with image sizes that are even and factorizable into 2, 3, 5, and 7 only. An easy rule of thumb is 2^n × 10 with integer n, e.g. 160, 320, 640, 1280, 2560, etc.
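The "even and factorizable into 2, 3, 5, 7" rule above can be checked with a few lines of code. This is a minimal sketch (not CASA code); the function names are invented for illustration.

```python
# Sketch: test whether an image size is "optimal" in the sense described
# above (even, with no prime factors other than 2, 3, 5, 7), and find the
# nearest such size at or above a requested value.

def is_optimal_imsize(n):
    """True if n is even and factorizable into 2, 3, 5 and 7 only."""
    if n % 2 != 0:
        return False
    for p in (2, 3, 5, 7):
        while n % p == 0:
            n //= p
    return n == 1

def next_optimal_imsize(n):
    """Smallest optimal image size >= n."""
    while not is_optimal_imsize(n):
        n += 1
    return n
```

For example, 640 = 2^7 × 5 passes, while 650 = 2 × 5^2 × 13 does not, and the nearest optimal size at or above 1030 is 1050 = 2 × 3 × 5^2 × 7.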
The mode parameter defines how the frequency channels in the synthesis MS are mapped onto the image. The allowed values are: mfs, channel, velocity, frequency. The mode parameter is expandable, with some options uncovering a number of sub-parameters, depending upon its value.
mode = 'mfs'   # Spectral gridding type (mfs, channel,
               # velocity, frequency)
nterms = 1     # Number of terms used to model the sky
               # frequency dependence (Note: nterms>1
               # is under development)
reffreq = ''   # Reference frequency for MFS (relevant
               # only if nterms > 1); '' defaults to
               # central data-frequency
The default mode=’mfs’ emulates multi-frequency synthesis in that each visibility-channel datum k with baseline vector Bk at wavelength λk is gridded into the uv-plane at uk = Bk/λk. The result is one or more images (depending on nterms), regardless of how many channels are in the input dataset. The first image plane is at the frequency given by the midpoint between the highest and lowest frequency channels in the input spw(s). Currently, there is no way to choose the center frequency of the output image plane independently.
WideBand imaging (mfs with nterms>1) is available in CASA. This algorithm models the wide-band sky brightness as a linear combination of Gaussian-like functions whose amplitudes follow a Taylor-polynomial in frequency. The output images are a set of Taylor-coefficient images, from which spectral index and curvature maps are derived. The reffreq parameter sets the reference frequency ν0 about which the Taylor expansion is done. The Taylor expansion is a polynomial in frequency:
I_nu^sky = Sum_t w_t I_t^sky,  with  w_t = ( (nu - nu0) / nu0 )^t,    (5.1)

where I_t^sky is an image of the t-th coefficient of the Taylor-polynomial expansion. When Eq. 5.1 is applied to a source with a power-law spectrum of spectral index α and curvature β,

I_nu^sky = I_nu0^sky ( nu / nu0 )^( α + β log(nu/nu0) ),

the Taylor terms I_t^sky can be used to constrain the sky brightness I_nu0^sky, α, and β through

I_0^sky = I_nu0^sky,   I_1^sky = I_nu0^sky α,   I_2^sky = I_nu0^sky ( α(α−1)/2 + β ).
For more information, please see Rau, U. & Cornwell, T. J. 2011, “A multi-scale multi-frequency deconvolution algorithm for synthesis imaging in radio interferometry”, A&A, 532, 71
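The connection between Taylor coefficients and spectral index can be illustrated numerically: fitting a low-order polynomial in (ν − ν0)/ν0 to a power-law spectrum recovers α as the ratio of the first two coefficients. This is a minimal sketch (not CASA code); the frequencies and source parameters are invented for illustration.

```python
import numpy as np

# Sketch: fit a Taylor polynomial in x = (nu - nu0)/nu0 to a power-law
# spectrum I(nu) = I0 * (nu/nu0)**alpha and recover the spectral index
# as the ratio of the first two Taylor coefficients, alpha ~ I1/I0.

nu0 = 1.5e9                          # reference frequency (Hz), like reffreq
nu = np.linspace(1.0e9, 2.0e9, 64)   # a 2:1 band (invented numbers)
I0_true, alpha_true = 1.0, -0.7

I = I0_true * (nu / nu0) ** alpha_true
x = (nu - nu0) / nu0                 # the Taylor variable used by w_t
coeffs = np.polyfit(x, I, deg=2)     # like nterms=3: [I2, I1, I0]

I2, I1, I0 = coeffs
alpha_est = I1 / I0                  # approximate spectral index
```

The recovered alpha_est is close to, but not exactly, −0.7: a truncated Taylor polynomial only approximates a power law across a wide band, which is why the choice of nterms matters.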
Alert: The MS-MFS (multiscale-multifrequency) algorithm in the current release is new and is still being developed/tested/debugged. Its basic operation has been tested on wide-band JVLA data for Stokes I imaging.
Explanation of the Parameters:
nterms: The number of terms in the Taylor polynomial used to model the frequency structure. nterms>1 triggers MS-MFS; nterms=1 triggers standard point-source clean or multi-scale clean. Note: The choice of nterms follows the same rules used when fitting a polynomial to a 1D set of noisy data points. To prevent overfitting, the order of the polynomial needs to depend on the available signal-to-noise in the data. A very rough rule of thumb is as follows: for high-SNR data (single-channel SNR>100) and fields dominated by point sources with spectral indices around −1.0 across a 2:1 bandwidth, choose nterms=3 or 4. For lower-SNR data (5<SNR<100), flatter spectra, or significant extended emission, nterms=2 is a much safer option. For very low-SNR data (SNR<5), choose nterms=1.
reffreq: The reference frequency used to compute the Taylor functions [ (freq − reffreq)/reffreq ]^t. If left blank (reffreq=''), it defaults to the middle frequency of the selected data. Note: for the current release, the use of reffreq='' is recommended.
multiscale: The MS-MFS algorithm always uses scale sizes set via the multiscale parameter. For point-source deconvolution, set multiscale=[] (also the default). Note: Unlike standard msclean (multiscale = [0,6,10,....] with nterms=1), with higher nterms the largest specified scale size must lie within the sampled range of the interferometer. If not, there can be an ambiguity in the spectral reconstruction at very large spatial scales.
gridmode: Wideband W-Projection is supported, and can be triggered via gridmode=’widefield’.
modelimage: Supply a list of Taylor-coefficient images, to start the deconvolution from. If only one image is specified, it will be used as the model for the ’tt0’ image.
Output images: [xxx.image.tt0, xxx.image.tt1,... ] : Images of Taylor coefficients that describe the frequency-structure. The "tt0" image is the total-intensity image at the reference frequency, and is equivalent to "xxx.image" obtained via standard imaging.
[xxx.image.alpha, xxx.image.beta] : Spectral index and spectral curvature at the reference frequency. These are computed from tt0, tt1, tt2 only for regions of the image where there is sufficient signal-to-noise to compute them. These regions are chosen via a threshold on the intensity image (tt0), computed as MAX( userthreshold×5 , peakresidual/10 ). This threshold is reported in the logger. Elsewhere, the values are currently set to zero.
[xxx.image.alpha.error] contains the errors of the spectral index solutions.
The following is a list of differences between MS-MFS (nterms>1) and standard imaging, in the current CASA release.
ALERT: Note that mode=’channel’ is intended as a shortcut to produce a cube based on the input MS channelization. It will be in the frame of the input MS. We recommend that users instead use the ’velocity’ and ’frequency’ modes which will produce cubes in other frames with more control of the cube spacing. These modes have defaults that will work from the MS spacing, reproducing the action of mode=’channel’.
If mode=’channel’ is chosen, then an image cube will be created. This is an expandable parameter, with dependent parameters:
mode = 'channel'     # Spectral image definition (mfs,
                     # channel, velocity, frequency)
nchan = -1           # Number of channels (planes) in output image
start = 0            # first input channel to use
width = 1            # Number of input channels to average
interpolation = 'nearest'  # Spectral interpolation (nearest, linear, cubic)
The default nchan=-1 will automatically produce a cube with the number of channels needed to span the (regridded) spectral windows of the MS. If multiple MSs are used, their spectral frames need to be identical, e.g. LSRK. ALERT: This often results in extra blank channels at the beginning and end of the image cube, so it is usually more precise to specify nchan and start to get exactly what you want. For best results, we also recommend 'nearest' interpolation for mode='channel'.
The channelization of the resulting image is determined by the channelization in the MS of vis of the first spw specified (the “reference spw”). The actual channels to be gridded and used in the clean are selected via the spw parameter as usual. The resulting image cube will have nchan channels spaced evenly in frequency. The first output channel will be located at the frequency of channel start in the (first) reference spw (independent of what channels are selected using spw). If width > 1, then input MS channels with centers within a frequency range given by (width+1)/2 times the reference spw spacing will be gridded together (as in mode = ’mfs’ above) into the channels of the output image cube. The output channel spacing is thus given by width channels in the reference spw of the MS.
The interpolation sub-parameter (described below) sets how channels are gridded into the image cube planes. For 'nearest', the channels in spws beyond the first are mapped into the nearest output image channel within half a channel (if any). Otherwise, the chosen interpolation scheme is used. Image channels that lie outside the MS frequency range or have no data mapped to them will be blank in the output image, but will still be present in the cube.
mode = 'channel'
nchan = 46
start = 5
width = 1
which will produce a 46-channel cube starting with channel 5 of the MS with the same channel width as the MS. Note: the start channel is in reference to the channels in the MS, not the subset selected by spw.
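The channel-to-frequency bookkeeping above can be sketched in a few lines. This is a minimal sketch (not CASA code), assuming a hypothetical reference spw whose first channel and spacing are NGC5921-like invented numbers.

```python
import numpy as np

# Sketch: output channel frequencies for mode='channel', assuming output
# channel j is centered on MS channel (start + j*width) of the reference
# spw.  The frequencies below are assumptions for illustration.

ms_ch0 = 1412.665e6      # frequency of MS channel 0 (assumed, Hz)
ms_dnu = 24.4e3          # MS channel spacing (assumed, Hz)
nchan, start, width = 46, 5, 1

out_freqs = ms_ch0 + (start + width * np.arange(nchan)) * ms_dnu
out_width = width * ms_dnu   # output channel spacing
```

With width=1 the output cube simply reproduces 46 consecutive MS channels starting at channel 5; a larger width would space the output channels further apart and average the input channels in between.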
For mode=’frequency’, an output image cube is created with nchan channels spaced evenly in frequency.
mode = 'frequency'   # Spectral image definition (mfs,
                     # channel, velocity, frequency)
nchan = -1           # Number of channels (planes) in output image
start = ''           # Frequency of first image channel:
                     # e.g. '1.4GHz' ('' = default)
width = ''           # Image channel frequency width:
                     # e.g. '1.0kHz' ('' = default)
interpolation = 'linear'  # Spectral interpolation (nearest, linear, cubic)
outframe = ''        # velocity frame of output image
The frequency of the first output channel is given by start and spacing by width. Output channels have width also given by width. The sign of width determines whether the output channels ascend or descend in frequency. Data from the input MS with centers that lie within one-half an input channel overlap of the frequency range of ±width/2 centered on the output channels are gridded together.
The defaults are designed to safely choose output cube channels to span the input MS(s). The default nchan=-1 will choose the number of channels needed to span the frequencies of the channels in the MS. The defaults start=’’ and width=’’ will use the channel frequency and width of the first channel of the first specified spectral window selected in spw. ALERT: As in “channel” mode, this is currently the first channel of the first spw, not the first channel selected from that spw.
The interpolation sub-parameter (described below) sets how channels are gridded into the image cube planes.
Using the NGC5921 dataset as an example:
mode = 'frequency'
nchan = 21
start = '1412.830MHz'
width = '50kHz'
outframe = 'LSRK'
would produce a 21-channel output cube with 50 kHz wide channels rather than the default channelization of the MS (24.4 kHz).
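The channel grid implied by this example is easy to write down explicitly. This is a minimal sketch (not CASA code) using only the values from the example above.

```python
import numpy as np

# Sketch: channel grid for the mode='frequency' example above:
# 21 channels, 50 kHz wide, starting at 1412.830 MHz.  A negative
# width would make the channels descend in frequency instead.

start = 1412.830e6   # Hz
width = 50e3         # Hz
nchan = 21

chan_freqs = start + width * np.arange(nchan)
span = chan_freqs[-1] - chan_freqs[0]   # 20 steps of 50 kHz = 1 MHz
```

Each 50 kHz output channel is about twice as wide as the native 24.4 kHz MS channels, so roughly two input channels are gridded into each output channel.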
If mode=’velocity’ is chosen, then an output image cube with nchan channels will be created, with channels spaced evenly in velocity. Parameters are:
mode = 'velocity'    # Spectral image definition (mfs,
                     # channel, velocity, frequency)
nchan = -1           # Number of channels (planes) in output image
start = ''           # Velocity of first image channel:
                     # e.g. '0.0km/s' ('' = default)
width = ''           # Image channel velocity width:
                     # e.g. '-1.0km/s' ('' = default)
interpolation = 'linear'  # Spectral interpolation (nearest,
                     # linear, cubic)
outframe = ''        # velocity reference frame of output
                     # image; '' = input
veltype = 'radio'    # velocity definition
Note that velocities are calculated with respect to the rest frequency in the MS or specified through the restfreq parameter (§ 5.2.8).
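The default veltype='radio' uses the radio convention, v = c (1 − ν/ν0), relative to the rest frequency. A minimal sketch (not CASA code), using the HI 21 cm rest frequency:

```python
# Sketch: the 'radio' velocity definition, v = c * (1 - nu/nu0),
# with the HI 21 cm rest frequency as nu0.

C_KMS = 299792.458            # speed of light (km/s)
restfreq = 1420.405752e6      # HI rest frequency (Hz)

def radio_velocity(nu, nu0=restfreq):
    """Radio-convention velocity (km/s) of a frequency nu (Hz)."""
    return C_KMS * (1.0 - nu / nu0)
```

A frequency below the rest frequency gives a positive (receding) velocity; e.g. 1412.830 MHz corresponds to roughly 1600 km/s, consistent with the NGC5921 examples in this section.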
The velocity of the first output channel is given by start and the spacing by width. Averaging is as in mode='frequency'. The interpolation sub-parameter (described below) sets how channels are gridded into the image cube planes.
The defaults are designed to safely choose output cube channels to span the input MS(s). The default nchan=-1 will choose the number of channels needed to span the velocities of the channels in the MS. The defaults start=’’ and width=’’ will use the channel velocity and width of the first channel of the first specified spectral window selected in spw. ALERT: As in “channel” mode, this is currently the first channel of the first spw, not the first channel selected from that spw.
Again, using the NGC5921 dataset as an example:
mode = 'velocity'
nchan = 21
start = '1383.0km/s'
width = '10km/s'
outframe = 'LSRK'
Note that in this case the velocity axis runs forward, as opposed to the default channelization for ’channel’ or ’frequency’.
The interpolation sub-parameter controls how spectral channels in the MS are gridded into the output image cube. This is available in all modes except ’mfs’. The options are: ’nearest’, ’linear’, ’cubic’.
For ’nearest’, the channels in spw beyond the first are mapped into the nearest output image channel within half a channel (if any).
For ’linear’, the channels are gridded into the planes using weights given by a linear function of the frequency of the MS channel versus the plane. Each input channel will be mapped to 1 or 2 output planes. For most users, this is the best choice.
For ’cubic’, the channels are gridded using a cubic interpolation function.
'Linear' and 'cubic' interpolation require two data points that sandwich the new, regridded bin. This can introduce edge effects, e.g. in the first or last channel or adjacent to flagged channels, where data are only available on one side. interpolation='nearest' avoids such edge effects but may not work as well for data with overlapping spws. For mode='velocity' or 'frequency', 'linear' interpolation usually works best, and for mode='channel' the 'nearest' method is superior. But this could be different for your dataset, and you should carefully check your results.
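The 'linear' case, where each output channel is a weighted combination of the two bracketing input channels, behaves like ordinary 1D linear interpolation. A minimal sketch (not CASA code) with invented numbers:

```python
import numpy as np

# Sketch: 'linear' spectral interpolation maps each output channel to a
# weighted combination of the two bracketing input channels, as np.interp
# does for a 1D spectrum.

in_freq = np.array([1.0, 2.0, 3.0, 4.0])      # input channel centers (GHz)
in_amp = np.array([10.0, 20.0, 30.0, 40.0])   # spectrum values (invented)

out_freq = np.array([1.5, 2.5, 3.5])          # output channel centers
out_amp = np.interp(out_freq, in_freq, in_amp)
```

Note that an output channel outside the input range has data on only one side; np.interp clamps to the edge value, which mirrors the edge effects described above for the first and last image channels.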
For large cubes, the PSF will change as a function of frequency, and clean will produce cubes with a different synthesized beam per plane. All CASA analysis tasks can deal with such cubes. If one would like a common PSF for all planes, typically the smallest possible common beam, one can invoke the resmooth Boolean parameter. Alternatively, the cube can be convolved to the smallest common beam in a separate step via imsmooth (see § 6.18).
The phasecenter parameter indicates which of the field IDs should be used to define the phase center of the mosaic image, or what that phase center is in RA and Dec. The default action is to use the first one given in the field list.
phasecenter = '5'                        # field 5 in a multi-source MS
phasecenter = 'J2000 19h30m00 -40d00m00' # specify a position
Note that the format for angles prefers h and m as separators for RA/time units and d and m for Dec/angle units. The colon (:) separator is interpreted as RA/time even if it is used for the Dec, so be careful when copying and pasting from other sources.
The value of the restfreq parameter, if set, will over-ride the rest frequency in the header of the first input MS to define the velocity frame of the output image.
ALERT: The restfreq parameter accepts transitions and frequencies in the same way as the corresponding plotxy parameter, but the frame information is controlled under the mode parameter (§ 5.2.5).
For example, restfreq = '115.2712GHz' will set the rest frequency to that of the CO 1−0 line.
ALERT: Setting restfreq explicitly here in clean is good practice, and may be necessary if your MS has been concatenated from different files for different spectral windows (§ 2.2.14).
The spw parameter selects the spectral windows that will be used to form the image, and possibly a subset of channels within these windows.
The spw parameter is a string with an integer, list of integers, or a range, e.g.
spw = '1'        # select spw 1
spw = '0,1,2,3'  # select spws 0,1,2,3
spw = '0~3'      # the same thing using a range
You can select channels in the same string with a : separator, for example
spw = '1:10~30'         # select channels 10-30 of spw 1
spw = '0:5~55,3:5;6;7'  # chans 5-55 of spw 0 and 5,6,7 of spw 3
The standard syntax for spw selection is given in § 2.3.3; see that section for more options.
Note that the order in which multiple spws are given is important for mode = ’channel’, as this defines the origin for the channelization of the resulting image.
The stokes parameter specifies the Stokes parameters for the resulting images. Note that forming Stokes Q and U images requires the presence of cross-hand polarizations (e.g. RL and LR for circularly polarized systems such as the VLA) in the data. Stokes V requires both parallel hands (RR and LL) for circularly polarized systems, or the cross-hands (XY and YX) for linearly polarized systems such as ALMA and ATCA.
This parameter is specified as a string of up to four letters and can indicate stokes parameters themselves, Right/Left hand polarization products, or linear polarization products (X/Y). For example,
stokes = 'I'     # Intensity only
stokes = 'IQU'   # Intensity and linear polarization
stokes = 'IV'    # Intensity and circular polarization
stokes = 'IQUV'  # All Stokes imaging
stokes = 'RR'    # Right hand polarization only
stokes = 'XXYY'  # Both linear polarizations
are common choices (see the inline help of clean for a full range of possible options). The output image will have planes (along the “polarization axis”) corresponding to the chosen Stokes parameters.
If, as input to deconvolution tasks such as clean, the stokes parameter includes polarization planes other than I, then choosing psfmode='hogbom' or psfmode='clarkstokes' will clean (search for components in) each plane sequentially, while psfmode='clark' will deconvolve them jointly.
Alert: As of Release 3.2, clean expects that all input polarizations are present. E.g. if you have RR and LL dual polarization data and you flagged parts of RR but not LL, clean will ignore both polarizations in that slice of data. It is possible to split out a polarization product with split and image it separately, but you will not be able to combine these part-flagged data in the uv-domain. We will remove that restriction in a future CASA release.
This controls the radial weighting of visibilities in the uv-plane (see § 5.2.12 below) through the multiplication of the visibilities by the Fourier transform of an elliptical Gaussian. This is itself a Gaussian, and thus the visibilities are “tapered” with weights decreasing as a function of uv-radius.
The uvtaper parameter expands the menu upon setting uvtaper=True to reveal the following sub-parameters:
uvtaper = True   # Apply additional uv tapering of visibilities
outertaper = []  # uv-taper on outer baselines in the uv-plane
innertaper = []  # uv-taper in the center of the uv-plane (not implemented)
The sub-parameters specify the size and optionally shape and orientation of this Gaussian in the uv-plane or optionally the sky plane. The outertaper refers to a Gaussian centered on the origin of the uv-plane.
outertaper = []                                  # no outer taper applied
outertaper = ['5klambda']                        # circular uv-taper FWHM = 5 kilo-lambda
outertaper = ['5klambda', '3klambda', '45.0deg'] # elliptical Gaussian
outertaper = ['10arcsec']                        # on-sky FWHM 10"
outertaper = ['300.0']                           # 300 m in the aperture plane
Note that if no units are given for the taper, the values are assumed to be in meters in the aperture plane.
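The taper weight applied to each visibility is just a Gaussian of the uv-radius. A minimal sketch (not CASA code), for the outertaper=['5klambda'] case:

```python
import numpy as np

# Sketch: a uv-taper multiplies each visibility by a Gaussian of its
# uv-distance.  For outertaper=['5klambda'] the Gaussian has a FWHM of
# 5 kilo-lambda in the uv-plane.

def taper_weight(uvdist_klambda, fwhm_klambda=5.0):
    """Gaussian taper weight at a given uv-distance (kilo-lambda)."""
    sigma = fwhm_klambda / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * (uvdist_klambda / sigma) ** 2)
```

The weight is 1 at the uv-origin and falls to 0.5 at a uv-distance of half the FWHM, so long baselines are downweighted and the restored beam is correspondingly broadened.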
ALERT: The innertaper option is not yet implemented.
The im.weight method has more weighting options than available in the imaging tasks. See the User Reference Manual for more information on imaging weights.
In order to image your data, we must have a map from the visibilities to the image. Part of that map, which is effectively a convolution, is the weights by which each visibility is multiplied before gridding. The first factor in the weighting is the “noise” in that visibility, represented by the data weights in the MS (which is calibrated along with the visibility data). The weighting function can also depend upon the uv locus of that visibility (e.g. a “taper” to change resolution). This is actually controlled by the uvtaper parameter (see § 5.2.11). The weighting matrix also includes the convolution kernel that distributes that visibility onto the uv-plane during gridding before Fourier transforming to make the image of the sky. This depends upon the density of visibilities in the uv-plane (e.g. “natural”, “uniform”, “robust” weighting).
The user has control over all of these.
ALERT: You can find a description of the imaging weights in the online User Reference Manual.
The weighting parameter expands the menu to include various sub-parameters depending upon the mode chosen:
For weighting=’natural’, visibilities are weighted only by the data weights, which are calculated during filling and calibration and should be equal to the inverse noise variance on that visibility. Imaging weight wi of sample i is given by
wi = ωi = 1/σi²

where σi is the rms noise on visibility i. When data are gridded into the same uv-cell for imaging, the weights are summed, and thus a higher uv density results in higher imaging weights. No sub-parameters are linked to this mode choice. It is the default imaging weight mode, and it should produce the "optimum" image with the lowest noise (highest signal-to-noise ratio). Note that it generally produces images with the poorest angular resolution, since the density of visibilities falls off radially in the uv-plane.
For weighting = ’uniform’, the data weights are calculated as in ’natural’ weighting. The data is then gridded to a number of cells in the uv-plane, and after all data is gridded the uv-cells are re-weighted to have “uniform” imaging weights. This pumps up the influence on the image of data with low weights (they are multiplied up to be the same as for the highest weighted data), which sharpens resolution and reduces the sidelobe level in the field-of-view, but increases the rms image noise. No sub-parameters are linked to this mode choice.
For uniform weighting, we first grid the inverse variance ωi for all selected data onto a grid with uv cell-size given by 2/FOV, where FOV is the specified field of view (defaults to the image field of view). This forms the gridded weights Wk. The weight of the i-th sample is then:

wi = ωi / Wk
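The contrast between natural and uniform weighting can be shown with a tiny gridded example. This is a minimal sketch (not CASA code); the weights and cell assignments are invented for illustration.

```python
import numpy as np

# Sketch: natural vs uniform imaging weights.  The gridded weights W_k
# are the sums of the data weights omega_i falling into each uv-cell;
# uniform weighting divides each sample by its cell's W_k.

omega = np.array([1.0, 1.0, 1.0, 1.0])  # data weights (1/sigma_i**2), invented
cell = np.array([0, 0, 0, 1])           # uv-cell index of each sample

W = np.bincount(cell, weights=omega)    # gridded weights W_k per uv-cell

w_natural = omega                       # natural: w_i = omega_i
w_uniform = omega / W[cell]             # uniform: w_i = omega_i / W_k
```

Here cell 0 holds three samples and cell 1 holds one: with natural weighting the crowded cell dominates, while with uniform weighting each sample in the crowded cell is downweighted to 1/3 so every occupied uv-cell contributes equally.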
The weighting = ’superuniform’ mode is similar to the ’uniform’ weighting mode but there is now an additional npixels sub-parameter that specifies a change to the number of cells on a side (with respect to uniform weighting) to define a uv-plane patch for the weighting renormalization. If npixels=0 you get uniform weighting.
The weighting = ’radial’ mode is a seldom-used option that increases the weight by the radius in the uv-plane, i.e.
wi = ωi · √(ui² + vi²)
Technically, I would call that an inverse uv-taper since it depends on uv-coordinates and not on the data per-se. Its effect is to reduce the rms sidelobes for an east-west synthesis array. This option has limited utility.
The weighting = ’briggs’ mode is an implementation of the flexible weighting scheme developed by Dan Briggs in his PhD thesis. See:
This choice brings up the sub-parameters:
weighting = 'briggs'  # Weighting to apply to visibilities
robust = 0.0          # Briggs robustness parameter
npixels = 0           # number of pixels to determine uv-cell size
                      # (0 => field of view)
The actual weighting scheme used is:

wi = ωi / ( 1 + Wk f² )

where Wk is defined as in uniform and superuniform weighting, and

f² = (5 × 10^−R)² / ( Σk Wk² / Σi ωi )

and R is the robust parameter.
The key sub-parameter is robust, which sets R in the Briggs equations. The scaling of R is such that R = 0 gives a good trade-off between resolution and sensitivity. R takes values between −2.0 (close to uniform weighting) and +2.0 (close to natural weighting).
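The way robust interpolates between uniform and natural weighting can be demonstrated numerically. This is a minimal sketch (not CASA code), assuming the Briggs form wi = ωi / (1 + Wk f²) with f² = (5 × 10^−R)² / (Σk Wk² / Σi ωi); the sample weights and cell assignments are invented.

```python
import numpy as np

# Sketch of Briggs weighting: for large negative R the weights approach
# uniform weighting (uv-cells equalized); for large positive R they
# approach natural weighting (w_i ~ omega_i).

def briggs_weights(omega, cell, R):
    W = np.bincount(cell, weights=omega)   # gridded weights W_k
    f2 = (5.0 * 10.0 ** (-R)) ** 2 / ((W ** 2).sum() / omega.sum())
    return omega / (1.0 + W[cell] * f2)

omega = np.ones(4)              # equal data weights (invented)
cell = np.array([0, 0, 0, 1])   # three samples in one uv-cell, one in another

w_robust_neg = briggs_weights(omega, cell, -2.0)  # ~ uniform
w_robust_pos = briggs_weights(omega, cell, 2.0)   # ~ natural
```

With R = +2 the weights are nearly the raw data weights, while with R = −2 the total weight per uv-cell is nearly equalized, just as uniform weighting would do.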
Superuniform weighting can be combined with Briggs weighting using the npixels sub-parameter, which works as in 'superuniform' weighting above.
For weighting='briggsabs', a slightly different Briggs weighting is used, with

wi = ωi / ( Wk R² + 2 σR² )

where R is the robust parameter and σR is the noise parameter.
This choice brings up the sub-parameters:
weighting = 'briggsabs'  # Weighting to apply to visibilities
robust = 0.0             # Briggs robustness parameter
noise = '0.0Jy'          # noise parameter for briggs weighting
                         # when rmode='abs'
npixels = 0              # number of pixels to determine uv-cell size
                         # (0 => field of view)
Otherwise, this works as weighting='briggs' above.
The value of the vis parameter is either the name of a single MS, or a list of strings containing the names of multiple MSs, that should be processed to produce the image. The MS referred to by the first name in the list (if more than one) is used to determine properties of the image such as channelization and rest frequency.
vis = 'ngc5921.ms'
sets a single input MS, while
vis = ['ngc5921_day1.ms', 'ngc5921_day2.ms', 'ngc5921_day3.ms']
points to three separate measurement sets that will be gridded together to form the image. This means that you do not have to concatenate datasets, for example from different configurations, before imaging.
For the multiple-MS case, selection parameters like field, spw, etc. can be given as lists whose entries correspond to the list of input MSs: the first entry of each selection criterion is applied to the first dataset ('ngc5921_day1.ms'), the second entry to the second dataset, and so on.
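For example, a hypothetical per-MS selection (the field and spw values here are invented for illustration) might look like:

```python
vis   = ['ngc5921_day1.ms', 'ngc5921_day2.ms', 'ngc5921_day3.ms']
field = ['0', '0', '1']  # field '0' in day1 and day2, field '1' in day3
spw   = ['0', '0', '0']  # spw '0' in each dataset
```

Each list entry is matched by position to the corresponding MS in vis.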
The CASA imaging task and tools use primary beams based on models for each observatory’s antenna types. In addition to different antenna diameters, different functions may be used.
The voltage patterns are based on the following antenna primary beams, selected according to the TELESCOPE_NAME keyword in the OBSERVATION table:
If the telescope name is unknown, or is CARMA or ALMA, then the DISH_DIAMETER in the ANTENNA table is used with a scaled VLA pattern.
In mosaicking mode, clean will use frequency-dependent primary beams. It also appears that Airy or spheroidal beams are best behaved for mosaics (see § 5.3.15).
To create an image and then deconvolve it with the CLEAN algorithm, use the clean task. This task will work for single-field data, or for multi-field mosaics (§ 5.3.15), in both narrow and wide-field imaging modes.
ALERT: For large fractional bandwidths the psf in clean may vary considerably with frequency in data cubes. To accommodate this fact we have introduced a per-plane psf (dirty beam) when the change is larger than half the size of a pixel. Analysis tasks in CASA can deal with such beam variation. If a single beam size is requested, imsmooth can be invoked on the clean products to smooth to a common, uniform beam for all channels.
Toolkit Note: MEM is not included in clean, but is available in the toolkit.
clean will use the CORRECTED_DATA column from your measurement set if it exists. If that column is not available, it will use DATA. The clean task utilizes many of the common imaging parameters. These are described above in § 5.2. There are also a number of parameters specific to clean. These are listed and described below.
The default inputs to clean are:
# clean :: Deconvolve an image with selected algorithm
vis           = ''         # name of input visibility file
imagename     = ['']       # Pre-name of output images
outlierfile   = ''         # Text file with image names, sizes, centers
field         = ''         # Field Name
spw           = ''         # Spectral windows:channels: '' is all
selectdata    = False      # Other data selection parameters
mode          = 'mfs'      # Type of selection (mfs, channel, velocity, frequency)
nterms        = 1          # Number of Taylor terms to use for modeling
                           # the sky frequency dependence
reffreq       = ''         # Reference frequency for MFS
                           # (relevant only if nterms > 1)
gridmode      = ''         # The kind of gridding kernel to be used
                           # for FFT-based transforms
niter         = 500        # Maximum number of iterations
gain          = 0.1        # Loop gain for cleaning
threshold     = '0.0mJy'   # Flux level to stop cleaning. Must include units
psfmode       = 'clark'    # method of PSF calculation to use during minor cycles
imagermode    = ''         # Use csclean or mosaic. If '', use psfmode
multiscale    = []         # deconvolution scales (pixels);
                           # [] = default standard clean
interactive   = False      # use interactive clean (with GUI viewer)
mask          = []         # cleanbox(es), mask image(s),
                           # and/or region(s) used in cleaning
imsize        = [256, 256] # x and y image size in pixels,
                           # symmetric for single value
cell          = ['1.0arcsec', '1.0arcsec']  # x and y cell size. default unit arcsec
phasecenter   = ''         # Image phase center: position or field index
restfreq      = ''         # rest frequency to assign to image (see help)
stokes        = 'I'        # Stokes params to image (e.g. I, IV, QU, IQUV)
weighting     = 'natural'  # Weighting of uv (natural, uniform, briggs, ...)
uvtaper       = False      # Apply additional uv tapering of visibilities
modelimage    = ''         # Name of model image(s) to initialize cleaning
restoringbeam = ['']       # Output Gaussian restoring beam for CLEAN image
pbcor         = False      # Output primary beam-corrected image
minpb         = 0.1        # Minimum PB level to use
usescratch    = False      # True to save model visibilities
                           # in the MODEL_DATA column
The clean task will produce a number of output images based on the root name given in imagename. These include:
<imagename>.clean.image           # the restored image
<imagename>.clean.flux            # the effective response (e.g. for pbcor)
<imagename>.clean.flux.pbcoverage # the PB coverage (ftmachine='mosaic' only)
<imagename>.clean.model           # the model image
<imagename>.clean.residual        # the residual image
<imagename>.clean.psf             # the synthesized (dirty) beam
The mode, psfmode, imagermode, and weighting parameters open up other sub-parameters. These are detailed in the common imaging task parameters section (§ 5.2). The gridmode parameter (§ 5.3.13) is available to select more advanced imaging options such as widefield imaging and beam squint correction.
A typical setup for clean on the NGC5921 dataset, after setting parameter values, might look like:
vis           = 'ngc5921.usecase.ms.contsub'  # Name of input visibility file
imagename     = 'ngc5921.usecase.clean'       # Pre-name of output images
field         = '0'               # Field Name
spw           = ''                # Spectral windows:channels; '' is all
selectdata    = False             # Other data selection parameters
mode          = 'channel'         # Type of selection (mfs, channel, velocity, frequency)
nchan         = 46                # Number of channels (planes) in output image
start         = 5                 # First input channel to use
width         = 1                 # Number of input channels to average
interpolation = 'linear'          # Spectral interpolation (nearest, linear, cubic)
gridmode      = ''                # The kind of gridding kernel to be used
                                  #  for FFT-based transforms
niter         = 6000              # Maximum number of iterations
gain          = 0.1               # Loop gain for cleaning
threshold     = 8.0               # Flux level to stop cleaning (mJy if unitless)
psfmode       = 'clark'           # Method of PSF calculation to use during minor cycles
imagermode    = ''                # Use csclean or mosaic, or image-plane only if ''
multiscale    = []                # Set deconvolution scales (pixels)
interactive   = False             # Use interactive clean (with GUI viewer)
mask          = [108, 108, 148, 148]  # Cleanbox(es), mask image(s),
                                      #  and/or region(s)
imsize        = [256, 256]        # x and y image size in pixels
cell          = [15.0, 15.0]      # x and y cell size; default unit arcsec
phasecenter   = ''                # Image phase center: position or field index
restfreq      = ''                # Rest frequency to assign to image (see help)
stokes        = 'I'               # Stokes params to image (e.g. I, IV, QU, IQUV)
weighting     = 'briggs'          # Weighting to apply to visibilities
robust        = 0.5               # Briggs robustness parameter
npixels       = 0                 # uv-cell size in pixels; 0 => field of view
uvtaper       = False             # Apply additional uv tapering of visibilities
modelimage    = ''                # Name of model image(s) to initialize cleaning
restoringbeam = ['']              # Output Gaussian restoring beam for CLEAN image
pbcor         = False             # Output primary beam-corrected image
minpb         = 0.1               # Minimum PB level to use
An example of the clean task to create a continuum image from many channels is given below:
clean(vis='ggtau.1mm.split.ms',   # Use data in ggtau.1mm.split.ms
      imagename='ggtau.1mm',      # Name output images 'ggtau.1mm.*' on disk
      psfmode='clark',            # Use the Clark CLEAN algorithm
      imagermode='',              # Do not mosaic or use csclean
      mask='',                    # Do not use clean box or mask
      niter=500, gain=0.1,        # Iterate 500 times using gain of 0.1
      mode='mfs',                 # Multi-frequency synthesis (combine channels)
      spw='0~2:2~57',             # Combine channels from 3 spectral windows
      field='0',                  # Image field 0
      stokes='I',                 # Image Stokes I polarization
      weighting='briggs',         # Use Briggs robust weighting
      rmode='norm', robust=0.5,   # with robustness parameter of 0.5
      cell=[0.1, 0.1],            # Use 0.1 arcsec pixels
      imsize=[256, 256])          # Set image size = 256x256 pixels
This example will clean the entire inner quarter of the primary beam. However, if you want to limit the region over which you allow the algorithm to find clean components then you can make a deconvolution region (or mask). To use a deconvolution region, box, or mask, set the mask parameter (§ 5.3.6).
The im.clean method is used for CLEANing data. There are a number of methods used to set up the clean, including im.setoptions.
For example, you can set up a simple ’cleanbox’ region. To do this, make a first cut at the image and clean the inner quarter. Then use the viewer to look at the image and get an idea of where the emission is located. You can use the viewer adjustment panel to view the image in pixel coordinates and read out the pixel locations of your cursor.
Then, you can use those pixel read-outs to define a clean box region with the CASA region format described in Chapter D. For example, if you have a continuum source near the center of your image between the pixel coordinates [80,80] and [120,120], you may use the rectangular region:
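Using the CASA region syntax described in Chapter D, such a box could be written as:

```python
# Rectangular clean region covering the pixel corners read from the viewer
mask = 'box[[80pix,80pix],[120pix,120pix]]'
```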
For more complicated and multiple clean regions, it is best to use the viewer to create them interactively, or to create a region file (Chapter D) and use that file as an input, like:
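For instance (the file name here is illustrative):

```python
# Point mask at a region file created beforehand with the viewer or a text editor
mask = 'myregions.rgn'
```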
The following are the clean specific parameters and their allowed values, followed by a description of carrying out interactive cleaning.
The psfmode parameter chooses the algorithm used to calculate the synthesized beam for use during the minor cycles in the image plane. The value type is string. Allowed choices are ’clark’ (default), ’hogbom’, and ’clarkstokes’.
In the ’clark’ algorithm, the cleaning is split into minor and major cycles. In the minor cycles only the brightest points are cleaned, using a subset of the point spread function. In the major cycle, the points thus found are subtracted correctly by using an FFT-based convolution. This algorithm is reasonably fast. Also, for polarization imaging, Clark searches for the peak in I²+Q²+U²+V².
The hogbom algorithm is the “Classic” image-plane CLEAN, where model pixels are found iteratively by searching for the peak. Each point is subtracted from the full residual image using the shifted and scaled point spread function. In general, this is not a good choice for most imaging problems (clark or csclean are preferred) as it does not calculate the residuals accurately. But in some cases, with poor uv-coverage and/or a PSF with bad sidelobes, the Hogbom algorithm will do better as it uses a smaller beam patch. For polarization cleaning, Hogbom searches for clean peak in I, Q, U, and V independently.
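As an illustration of the image-plane minor cycle just described, here is a minimal 1-D Hogbom-style loop. This is a pedagogical sketch only, not CASA's implementation; the function name and interface are invented for illustration.

```python
import numpy as np

# Minimal 1-D sketch of a Hogbom minor cycle: find the peak residual,
# subtract gain * peak times the shifted PSF, record the component, repeat.
def hogbom_1d(dirty, psf, gain=0.1, niter=100, threshold=0.0):
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    half = len(psf) // 2
    for _ in range(niter):
        peak = int(np.argmax(np.abs(residual)))
        if abs(residual[peak]) <= threshold:
            break
        comp = gain * residual[peak]
        model[peak] += comp
        # subtract the scaled PSF centred on the peak (clipped at the edges)
        lo, hi = max(0, peak - half), min(len(dirty), peak + half + 1)
        residual[lo:hi] -= comp * psf[half - (peak - lo): half + (hi - peak)]
    return model, residual
```

For a delta-function PSF and an isolated point source, the model converges geometrically to the true flux while the residual shrinks by (1 − gain) each iteration.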
In the ’clarkstokes’ algorithm, the Clark psf (§ 5.3.1) is used, but for polarization imaging the Stokes planes are cleaned sequentially for components instead of jointly as in ’clark’. This means that this is the same as ’clark’ for Stokes I imaging only. This option can also be combined with imagermode=’csclean’ (§ 5.3.4).
The im.setscales method sets the multi-scale Gaussian widths. In addition to choosing a list of sizes in pixels, you can just pick a number of scales and get a geometric series of sizes.
To activate multi-scale mode, specify a non-blank list of scales in the multiscale parameter. A good rule of thumb for starters is [0, 2×beam, 5×beam] (in pixels), perhaps adding larger scales up to the maximum scale the interferometer can image. E.g., for a 2 arcsecond synthesized beam, one might use:

multiscale = [0,6,10,30]   # Four scales, including point sources

These are given in numbers of pixels, and specify the FWHM of the Gaussians used to compute the filtered images.
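The rule of thumb above can be turned into pixel scales as follows. The helper below is hypothetical (not a CASA function) and assumes the beam FWHM and cell size are both in arcseconds:

```python
# Hypothetical helper applying the [0, 2x beam, 5x beam] rule of thumb,
# converting from arcseconds to pixels via the cell size.
def rule_of_thumb_scales(beam_arcsec, cell_arcsec):
    bpix = beam_arcsec / cell_arcsec       # beam FWHM in pixels
    return [0, round(2 * bpix), round(5 * bpix)]
```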
Setting the multiscale parameter to a non-empty list opens up the sub-parameter:
multiscale = [0, 6, 10, 30]   # Set deconvolution scales (pixels)
     negcomponent   = -1      # Stop cleaning if the largest scale finds
                              #  this number of negative components
     smallscalebias = 0.6     # A bias to give more weight toward smaller scales
The negcomponent sub-parameter sets the point at which clean terminates because of negative components. For negcomponent > 0, the component search will cease when this number of negative components has been found at the largest scale. If negcomponent = -1, the component search will continue even if the largest component is negative.
Increasing smallscalebias gives more weight to small scales. A value of 1.0 weighs the largest scale to zero and a value <0.2 weighs all scales nearly equally. The default of 0.6 is usually a good number as it corresponds to a weighting that approximates the normalization of each component by its area. Depending on the image, however, it may be necessary to tweak the smallscalebias for a better convergence of the algorithm. Note that currently, this parameter is ignored by the MS-MFS algorithm. It will be available in a future release.
Multi-scale cleaning is also not as sensitive to the loop gain as regular cleaning algorithms. A loop gain of 0.3 may still work fine and will considerably speed up the processing time. Increasing the cyclefactor by a few (e.g. 5) may provide better stability in the solution, in particular when the data exhibit a severely non-Gaussian dirty beam.
The CASA multi-scale algorithm uses “Multi-scale CLEAN” to deconvolve using delta-functions and circular Gaussians as the basis functions for the model, instead of just delta-functions or pixels as in the other clean algorithms. This algorithm is still in the experimental stage, mostly because we are working on better algorithms for setting the scales for the Gaussians. The sizes of the Gaussians are set using the scales sub-parameter.
We are working on defining a better algorithm for scale setting. In the toolkit, there is an nscales argument which sets the scales as
θ_i = θ_bmin × 10^{(i−N/2)/2}        (5.12)
where N=nscales and θbmin is the fitted FWHM of the minor axis of the CLEAN beam.
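Eq. 5.12 can be sketched in a few lines. The 0-based indexing of i below is an assumption for illustration; the toolkit may index the scales differently:

```python
# Sketch of the nscales -> Gaussian-width mapping of Eq. 5.12.
# theta_bmin is the fitted FWHM of the CLEAN beam's minor axis;
# 0-based indexing of i is assumed here for illustration.
def nscale_widths(nscales, theta_bmin):
    return [theta_bmin * 10 ** ((i - nscales / 2) / 2) for i in range(nscales)]
```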
The gain parameter sets the fraction of the flux density in the residual image that is removed and placed into the clean model at each minor cycle iteration. The default value is gain = 0.1 and is suitable for a wide-range of imaging problems. Setting it to a smaller gain per cycle, such as gain = 0.05, can sometimes help when cleaning images with lots of diffuse emission. Larger values, up to gain=1, are probably too aggressive and are not recommended.
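For an isolated point source, the peak residual after n minor cycles falls roughly as (1 − gain)^n, which is why a smaller gain cleans more gently (this simple model ignores sidelobe interactions between components):

```python
# Approximate fraction of the initial peak remaining after n minor cycles
# on an isolated point source (each iteration removes gain * peak).
def remaining_peak_fraction(gain, n_iter):
    return (1.0 - gain) ** n_iter
```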
This parameter chooses the mode of operation of clean: single-field deconvolution using image-plane major and minor cycles only (imagermode=’’), single-field deconvolution using Cotton-Schwab (CS) residual visibilities for major cycles (imagermode=’csclean’), or multi-field mosaics using CS major cycles (imagermode=’mosaic’).
The default imagermode=’csclean’ choice specifies the Cotton-Schwab algorithm. This opens up the sub-parameters
imagermode = 'csclean'     # Options: 'csclean', 'mosaic', or '' (uses psfmode)
     cyclefactor  = 1.5    # Controls how often major cycles are done
                           #  (e.g. 5 for frequently)
     cyclespeedup = -1     # Cycle threshold doubles in this number of iterations
These options are explained below. In the CS mode, cleaning is split into minor and major cycles. For each field, a minor cycle is performed using the PSF algorithm specified in psfmode (§ 5.3.1). At major-cycle breakpoints, the components thus found are subtracted from the original visibilities; a fast variant does this convolution using an FFT, which is faster for large numbers of visibilities. If you want to be extra careful, double the image size from that used for the Clark clean and set a mask to clean only the inner quarter or less (this is not done by default). This is probably the best choice for high-fidelity deconvolution of images without lots of large-scale structure.
Note that when using the Cotton-Schwab algorithm with a threshold (§ 5.3.12), there may be strange behavior when you hit the threshold with a major cycle. In particular, it may be above threshold again at the start of the next major cycle. This is particularly noticeable when cleaning a cube, where different channels will hit the threshold at different times.
In the empty mode (imagermode=’’), the major and minor clean cycles work off of the gridded FFT dirty image, with residuals updated using the PSF calculation algorithm set by the psfmode parameter (§ 5.3.1). This method is not recommended for high dynamic range or high fidelity imaging applications, but can be significantly faster than CS clean (the default). Note that for this option only, if mask=’’ (no mask or box set) then it will clean the inner quarter of the image by default.
ALERT: You will see a warning message in the logger, similar to this:
Zero Pixels selected with a Flux limit of 0.000551377 and a maximum Residual of 0.00751239
whenever clean finds zero pixels above the threshold. This is normal, and not a problem, if you’ve specified a non-zero threshold. On the other hand, if you get this warning with the threshold set to the default of ’0Jy’, then you should look carefully at your inputs or your data, since this usually means that the masking is bad.
The option imagermode=’mosaic’ is for multi-field mosaics. This choice opens up the sub-parameters
imagermode = 'mosaic'      # Use csclean or mosaic; if '', use psfmode
     mosweight    = False     # Individually weight the fields of the mosaic
     ftmachine    = 'mosaic'  # Gridding method for the image
     scaletype    = 'SAULT'   # Controls scaling of pixels in the image plane
     cyclefactor  = 1.5       # Change depth in between csclean cycles
     cyclespeedup = -1        # Cycle threshold doubles in this number of iterations
These options are explained below.
The im.setmfcontrol method sets the parameters that control the cycles and primary beam used in mosaicing.
This sub-parameter is activated for imagermode=’csclean’ and ’mosaic’.
The cyclefactor parameter allows the user to change the threshold at which the deconvolution cycle will stop and then degrid and subtract the model from the visibilities to form the residual. This is with respect to the breaks between minor and major cycles that the clean part would normally force. Larger values force a major cycle more often.
This parameter in effect controls the threshold used by CLEAN to test whether a major cycle break and reconciliation occurs:
cycle threshold = cyclefactor * max sidelobe * max residual
If mosaic or csclean diverges on your data, try a larger cyclefactor. A larger value typically increases the robustness of your deconvolution. The price, however, will be a slower algorithm. On the other hand, if you find that the cleaning is slow due to taking too many major cycle breaks, then reduce cyclefactor.
Note that currently the cycle_threshold will saturate at a maximum value of 0.80 even when you set cyclefactor to a very high value or you have very high PSF sidelobes. This means that with a gain = 0.1 you will get 3 minor cycles per major cycle when hitting the limit.
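The trigger level described above can be sketched as follows. This is an illustration of the stated formula and ceiling, not CASA's internal code, and it assumes the 0.80 saturation applies to the sidelobe product:

```python
# Sketch of the major-cycle trigger threshold:
#   cycle_threshold = min(cyclefactor * max_psf_sidelobe, 0.8) * max_residual
# (the exact internal bookkeeping in clean may differ).
def cycle_threshold(cyclefactor, max_psf_sidelobe, max_residual):
    fraction = min(cyclefactor * max_psf_sidelobe, 0.8)
    return fraction * max_residual
```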
Some rules of thumb:
If you have data taken with a small number of antennas, for example from ALMA in the commissioning and early-science phase, then you will have high sidelobes in the PSF. In this case, you will have to reduce cyclefactor considerably, likely into the range 0.25 to 0.5, if you want efficient cleaning of simple source structures (e.g. point sources). You can use the viewer to look at your PSF image and see what the maximum sidelobe level is and judge accordingly.
However, if your uv-coverage results in a poor PSF and you have complex source structure, then you should reconcile often (a cyclefactor of 4 or 5). For reasonable PSFs, use cyclefactor in the range 1.5 to 2.0. For good PSFs, or for faster cleaning at the expense of some fidelity, we recommend trying a lower value, e.g. cyclefactor = 0.25, which at least in some of our mosaicing tests led to a speedup of around a factor of two with little change in residuals.
This sub-parameter is activated for imagermode=’csclean’ and ’mosaic’.
The cyclespeedup parameter allows the user to let clean raise the threshold at which a major cycle is forced if it is not converging to that threshold. To do this, set cyclespeedup to the integer number of iterations after which, if the threshold has not been reached, the threshold will be doubled. See cyclefactor above for more details. By default this is turned off (cyclespeedup = -1). In our tests, a value like cyclespeedup = 50 has been used successfully.
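The doubling behavior can be sketched as below. This is an illustration of the description above, not CASA's internal logic:

```python
# Cycle-threshold doubling when convergence stalls: every `cyclespeedup`
# iterations without reaching the threshold, the threshold is doubled.
# cyclespeedup = -1 (or any value <= 0) disables the speedup.
def speedup_threshold(base_threshold, iterations_done, cyclespeedup):
    if cyclespeedup <= 0:
        return base_threshold
    doublings = iterations_done // cyclespeedup
    return base_threshold * (2 ** doublings)
```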
This sub-parameter is activated for imagermode=’mosaic’. Note: The actual “ftmachine” used may be overridden by choices made to other parameters, such as gridmode.
The ftmachine parameter controls the gridding method and kernel to be used to make the image. A string value type is expected. Choices are: ’ft’, ’sd’, ’both’, or ’mosaic’ (the default).
The ’ft’ option uses the standard gridding kernel (as used in clean).
The ’sd’ option forces gridding as in single-dish data.
For combining single-dish and interferometer MS in the imaging, the ’both’ option will allow clean to choose the ‘ft’ or ’sd’ machines as appropriate for the data.
The im.setoptions method sets the parameters relevant to mosaic imaging, such as the ftmachine.
The ’mosaic’ option (the default) uses the Fourier transform of the frequency-dependent primary beam (the aperture cross-correlation function in the uv-plane) as the gridding kernel. This allows the data from the multiple fields to be gridded down to a single uv-plane, with a significant speed-up in performance in most (non-memory limited) cases. The effect of this extra convolution is an additional multiplication (apodization) by the primary beam in the image plane. This can be corrected for, but does result in an image with optimal signal to noise ratio across it.
The primary beams used in CASA are described in § 5.2.14.
ALERT: Note that making a non-square image (e.g. using unequal values in imsize) for ftmachine=’mosaic’ will grid the data into a uv-plane with correspondingly non-square cells. This has not been extensively tested, and may result in undesired image artifacts. We recommend that the user make square mosaic images when possible, although in principle non-square images should work.
If mosweight=False (default) then the data will be weighted for optimal signal to noise ratio across the mosaic image. This should be used for most cases.
If mosweight=True then individual mosaic field relative weights will be readjusted on a per visibility basis (much like uniform gridding weights). This may give better performance in cases where one or a few fields in the mosaic have drastically different weights and/or integration time, and it is desired that the noise be more “uniform” across the mosaic image. Use this with care, we have not explored its use fully.
The im.setmfcontrol method gives more options for controlling the primary beam and noise across the image.
The scaletype parameter controls weighting of pixels in the image plane. This sub-parameter is activated for imagermode=’mosaic’.
If scaletype=’PBCOR’, the image is scaled to have the correct flux scale across it (out to the beam level cutoff minpb). This means that the noise level will vary across the image, being increased by the inverse of the weighted primary beam responses that are used to rescale the fluxes. This option should be used with care, particularly if your data have very different exposure times (and hence intrinsic noise levels) between the mosaic fields.
The default scaletype=’SAULT’ scales the image so as to have constant noise across it. This means that the point source response function varies across the image, attenuated by the weighted primary beam(s). However, this response is output in the .flux image and can later be used to correct for it.
Note that this scaling as a function of position in the image occurs after the weighting of mosaic fields specified by mosweight and implied by the gridding weights (ftmachine and weighting).
For mosaics, the specification of the threshold is not straightforward, as it is in the single field case. This is because the different fields can be observed to different depths, and get different weights in the mosaic. We now provide internal rescaling (based on scaletype) so clean does its component search on a properly weighted and scaled version of the sky.
For ftmachine=’ft’, the minor cycles of the deconvolution are performed on an image that has been weighted to have constant noise, as in ’SAULT’ weighting (see the scaletype sub-parameter above). This is equivalent to making a dirty mosaic by coadding dirty images made from the individual pointings, with the mosaic contributions to a given pixel weighted so as to give constant noise across the image. This means that the flux scale can vary across the mosaic depending on the effective noise (higher-weighted regions have lower noise, and thus will have higher “fluxes” in the ’SAULT’ map). Effectively, the flux scale that threshold applies to is that at the center of the highest-weighted mosaic field, with higher-noise regions down-scaled accordingly. Compared to the true sky, this image has a factor of the PB in it, plus a scaling map (returned in the .flux image). You will preferentially find components in the low-noise regions near mosaic centers.
When ftmachine=’mosaic’ and scaletype=’SAULT’, the deconvolution is also performed on a “constant noise image”, as detailed above for ’ft’.
ALERT: The intrinsic image made using ftmachine=’mosaic’ is equivalent to a dirty mosaic that is formed by coadding dirty images made from the individual fields after apodizing each by the PB function. Thus compared to the true sky, this has a factor of PB² in it. You would thus preferentially find components in the centers of the mosaic fields (even more so than in the ’ft’ mosaics). We now rescale this image internally at major-cycle (and interactive) boundaries based on scaletype, and do not have a way to clean on the raw unscaled dirty image (as was done in previous released versions).
If interactive=True is set, then an interactive window will appear at various “cycle” stages while you clean, so you can set and change mask regions. These breakpoints are controlled by the npercycle sub-parameter which sets the number of iterations of clean before stopping.
interactive = True     # Use interactive clean (with GUI viewer)
     npercycle  = 100  # Number of iterations before interactive prompt
Note that npercycle is currently the only way to control the breakpoints in interactive clean.
For spectral cube imaging, it is often easier to deal with each channel in turn rather than cleaning all channels in each cycle. We therefore provide the chaniter=True option under mode, which cleans each channel fully before moving to the next one. You can then set masks for each channel in turn.
See the example of interactive cleaning in § 5.3.14.
The mask parameter takes a list of elements, each of which can be a list of coordinates specifying a box, or a string pointing to the name of a cleanbox file, mask image, or region file. These are used by CLEAN to define a region to search for components.
Note that for imagermode=’’ (§ 5.3.4) the default with mask=’’ is to restrict clean to the inner quarter of the image.
mask can be a list of CASA regions. For example,
mask = 'box[[80pix, 80pix],[120pix,120pix]],circle[[150pix,150pix],10pix]'
defines a box and a circle. They will be applied to all channels. To define different regions for different channel ranges, it is best to use interactive mode in clean, the viewer (note that the viewer still creates old-format regions; these are still supported in CASA 3.3), or to create a CASA region file that contains the different regions. Chapter D describes the syntax of CASA regions. Region files can be specified as:
mask = 'regionfile.rgn, regionfile2.rgn'
You can give the mask parameter a string containing the name of a mask image to be used for CLEAN to search for components. You can use interactive=True to create such a mask for your image (§ 5.3.5).
The minpb parameter sets the level down to which the primary beam (or, more correctly, the voltage patterns in the array) may fall for a given pixel to be included in the image. This is important, as it defines the edge of the visible image or mosaic. The default is 0.1, equivalent to the 10% response level. If there is a lot of emission near the edge, set this lower if you want to be able to clean it out.
NOTE: The minpb parameter is the level in the “primary beam” (PB) at which the cut is made. If you are using ftmachine=’mosaic’ (§ 5.3.4), this will show up in the .flux.pbcoverage image (new in version 2.4.0). See the discussion of threshold (§ 5.3.12) for related issues.
The modelimage parameter specifies the name(s) of one or more input starting image(s) to use to calculate the first residual before cleaning. These are used in addition to any image with a name defaulting from the imagename root (e.g. on a restart). The output model will contain this model plus clean components found during deconvolution.
If the units of the image are Jy/pixel, then this is treated as a model image.
If the units of the image are Jy/beam or Jy per solid angle, then this is treated as a “single-dish” image and rescaled by the resolution (in the ’beam’ image header keyword). Inclusion of the SD image here is superior to feathering it in later. See § 5.6 for more information on feathering.
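The Jy/beam to Jy/pixel rescaling implied above follows from the ratio of pixel area to beam area. The helper below is a sketch (not a CASA function) assuming a Gaussian beam, whose area is π/(4 ln 2) × bmaj × bmin:

```python
import math

# Sketch of the Jy/beam -> Jy/pixel conversion for a Gaussian beam.
# bmaj, bmin (beam FWHMs) and cell (pixel size) must share the same unit.
def jybeam_to_jypixel(value_jybeam, bmaj, bmin, cell):
    beam_area = math.pi / (4.0 * math.log(2.0)) * bmaj * bmin
    pixel_area = cell * cell
    return value_jybeam * pixel_area / beam_area
```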
The niter parameter sets the maximum total number of minor-cycle CLEAN iterations to be performed during this run of clean. If restarting from a previous state, it will carry on from where it was. Note that the threshold parameter can cause the CLEAN to be terminated before the requested number of iterations is reached.
The pbcor parameter controls whether the final .image is scaled to correct for the Primary Beam of the array or not.
If pbcor=False (the default), then no such scaling is done and the image is in whatever “raw” scaling used by the imagermode algorithm underneath. For single-field cleaning with imagermode=’’ or ’csclean’, this is the standard constant-noise image. If imagermode=’mosaic’, then this is the ’SAULT’ scaled image (regardless of what scaletype is set to).
If pbcor=True, then at the end of deconvolution and imaging the “raw” image is rescaled by dividing by the noise and PB correction image. This is what is output by clean as the .flux image.
Note that regardless of what you set pbcor to, you can recover the other option using immath (§ 6.7) to either multiply or divide by the .flux image.
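For example, to apply the PB correction after the fact inside CASA (the image and output names here are illustrative):

```python
# Divide the flat-noise restored image by the .flux response image
# to obtain a PB-corrected image.  immath is a CASA task; run this
# inside a CASA session.
immath(imagename=['ngc5921.usecase.clean.image',
                  'ngc5921.usecase.clean.flux'],
       expr='IM0/IM1',
       outfile='ngc5921.usecase.clean.image.pbcor')
```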
The restoringbeam parameter allows the user to set a specific Gaussian restoring beam to make the final restored .image from the final .model and residuals.
If restoringbeam=’’ (the default), then the restoring beam is calculated by fitting to the PSF (e.g. the .psf image). For a mosaic, this is at the center of the field closest to the phasecenter.
The restoring beam can also be used to establish a single beam for large fractional bandwidths. If the PSF changes more than half a pixel across all channels in a cube, the PSF itself will be stored in the form of a cube, changing size from channel to channel. A specified restoring beam will output all planes at the same resolution and thus collapse to a single PSF (note that this can also be done in hindsight using imsmooth).
To specify a restoring beam, give restoringbeam a list [bmaj, bmin, bpa] of the parameters of an elliptical Gaussian. The default units are arc seconds for the bmaj and bmin components and degrees for the bpa component.
restoringbeam=['10arcsec']               # Circular Gaussian FWHM 10"
restoringbeam=['10.0','5.0','45.0deg']   # 10"x5" at PA=45 degrees
The threshold parameter instructs clean to terminate when the maximum absolute residual reaches this level or below. Note that it may not reach this residual level due to the value of the niter parameter which may cause it to terminate early.
If threshold is given a floating-point number, then this is the threshold in milli-Jansky.
You can also supply a flux density quanta to threshold, e.g.
threshold = '8.0mJy'
threshold = '0.008Jy'
(these do the same thing).
The gridmode parameter is provided to access more advanced imaging capabilities. The default gridmode=’’ is recommended for most cases.
The gridmode=’widefield’ option allows imaging in the wide-field regime where the W-term is not negligible. The CASA implementation allows both standard uv-plane faceting and the W-Projection algorithm, or a combination of the two. Its sub-parameters are:
gridmode = 'widefield'     # Gridding kernel for FFT-based transforms; default ''
     wprojplanes = 1       # Number of w-projection planes for convolution
     facets      = 1       # Number of facets along each axis (main image only)
The wprojplanes parameter sets the number of pre-computed w-planes used for the W-Projection algorithm (wprojplanes=1 disables w-projection). The facets parameter sets the number of facets used. W-Projection, if used, is done for each facet. See § 5.3.18 below for more on wide-field imaging.
gridmode=’aprojection’: A-Projection is an algorithm to account for the effects of the antenna primary beam (PB) during imaging. The time-dependent effects of the PB are projected out during the imaging phase, and the PB is included in the prediction phase of the iterative image deconvolution (see Bhatnagar, Cornwell, Golap & Uson 2008, A&A, 487, 419, for more details). Please also cite this publication in your papers if this algorithm is used for imaging. The narrow-band A-Projection can be used by setting gridmode=’aprojection’ in the clean task. This opens up the following new parameters:
gridmode = 'aprojection'      # Gridding kernel for FFT-based transforms; default ''
     cfcache  = 'cfcache.dir' # Convolution function cache directory
     rotpainc = 5.0           # Parallactic angle increment (degrees)
                              #  for OTF A-term rotation
     painc    = 360.0         # Parallactic angle increment (degrees)
                              #  for computing the A-term
The cfcache directory is used to cache the convolution functions required by the A-Projection algorithm. The PB is rotated on the fly when a parallactic angle change of more than rotpainc is detected; the PB is re-computed if the parallactic angle changes by more than painc.
Note that this code is still in the development and testing stage and should be used on a shared-risk basis. Note also that the cost of imaging will be higher when using A-Projection, so evaluate carefully whether you need to invoke it.
If interactive=True is set, then an interactive window will appear at various “cycle” stages while you clean, so you can set and change mask regions. These breakpoints are controlled by the npercycle sub-parameter which sets the number of iterations of clean before stopping.
The window controls are fairly self-explanatory. It is basically a form of the viewer. A close-up of the controls is shown in Figure 5.1, and an example can be found in Figures 5.2–5.4. You assign one of the drawing functions (rectangle or polygon, default is rectangle) to the right-mouse button (usually), then use it to mark out regions on the image. Zoom in if necessary (standard with the left-mouse button assignment). Double-click inside the marked region to add it to the mask. If you want to reduce the mask, click the Erase radio button (rather than Add), then mark and select as normal. When finished setting or changing your mask, click the green clockwise arrow “Continue Cleaning” Next Action button. If you want to finish your clean with no more changes to the mask, hit the blue right arrow “Apply mask edits and proceed with non-interactive clean” button. If you want to terminate the clean, click the red X “Stop deconvolving now” button.
While stopped in an interactive step, you can change a number of control parameters in the boxes provided at the left of the menu bar. The main use of this is to control how many iterations before the next breakpoint (initially set to npercycle), how many cycles before completion (initially equal to niter/npercycle), and to change the threshold for ending cleaning. Typically, the user would start with a relatively small number of iterations (50 or 100) to clean the bright emission in tight mask regions, and then increase this as you get deeper and the masking covers more of the emission region. For extended sources, you may end up needing to clean a large number of components (10000 or more) and thus it is useful to set niter to a large number to begin with — you can always terminate the clean interactively when you think it is done. Note that if you change iterations you may also want to change cycles or your clean may terminate before you expect it to.
For strangely shaped emission regions, you may find using the polygon region marking tool (the second from the right in the button assignment toolbar) the most useful.
The sequence of cleaning starting with the “raw” externally calibrated data is shown in Figures 5.2 – 5.4.
The final result of all this cleaning for Jupiter is shown in Figure 5.5. The viewer (§ 7) was used to overplot the polarized intensity contours and linear polarization vectors calculated using immath (§ 6.7) on the total intensity. See the following chapters on how to make the most of your imaging results.
For spectral cube images you can use the tapedeck to move through the channels. You can also use the panel with radio buttons to choose whether the mask you draw applies to the Displayed Plane or to All Channels. See Figure 5.6 for an example. Note that currently the Displayed Plane option is set by default. This toggle is unimportant for single-channel images or mode='mfs'.
Advanced Tip: Note that while in interactive clean, you are using the viewer. Thus, you have the ability to open and register other images in order to help you set up the clean mask. For example, if you have a previously cleaned image of a complex source or mosaic that you wish to use to guide the placement of boxes or polygons, just use the Open button or menu item to bring in that image, which will be visible and registered on top of your dirty residual image that you are cleaning on. You can then draw masks as usual, which will be stored in the mask layer as before. Note you can blink between the new and dirty image, change the colormap and/or contrast, and carry out other standard viewer operations. See § 7 for more on the use of the viewer.
ALERT: Currently, interactive spectral line cleaning is done globally over the cube, with halts for interaction after searching all channels for the requested npercycle total iterations. It is more convenient for the user to treat the channels in order, cleaning each in turn before moving on. This will be implemented in an upcoming update.
The clean task contains the capability to image multiple pointing centers together into a single “mosaic” image. This ability is controlled by setting imagermode=’mosaic’ (§ 5.3.4).
The key parameter that controls how clean produces the mosaic is the ftmachine sub-parameter (see below). For ftmachine='ft', clean will perform a weighted combination of the images produced by transforming each mosaic pointing separately. This can be slow, as the individual sub-images must be recombined in the image plane. NOTE: this option is preferred for data taken with sub-optimal mosaic sampling (e.g. fields too far apart, on a sparse irregular pattern, etc.).
The primary beams used in CASA are described in § 5.2.14.
If ftmachine='mosaic', then the data are gridded onto a single uv-plane which is then transformed to produce the single output image. This is accomplished by using a gridding kernel that approximates the transform of the primary beam pattern. Note that for this mode the <imagename>.flux image includes this convolution kernel in its effective weighted response pattern (needed to “primary-beam correct” the output image). For this mode only, an additional image <imagename>.flux.pbcoverage is produced that contains the primary-beam coverage alone, which is used to compute the minpb cutoff (§ 5.3.7).
The flatnoise parameter determines whether the minor cycle operates on the residual with or without a primary beam correction. Whereas the former has the correct fluxes, the latter has uniform noise, which allows for a simpler deconvolution, in particular at the edges of the mosaic where the primary beam correction is largest.
ALERT: In order to avoid aliasing artifacts for ftmachine=’mosaic’ in the mosaic image, due to the discrete sampling of the mosaic pattern on the sky, you should make an image in which the desired unmasked part of the image (above minpb) lies within the inner quarter. In other words, make an image twice as big as necessary to encompass the mosaic.
It is also important to choose an appropriate phasecenter for your output mosaic image (see the phasecenter parameter description). The phase center should not be at the edge of an image with pointings around it; in that case, FFT aliasing may creep into the image.
An example of a simple mosaic clean call is shown below:
clean(vis='n4826_tboth.ms', imagename='tmosaic', mode='channel', nchan=30,start=46, # Make the output cube 30 chan width=4, # start with 46 of spw 0, avg by 4 chans spw='0~2', field='0~6', cell=[1.,1.], imsize=[256,256], stokes='I', psfmode='clark', niter=500, imagermode='mosaic', scaletype='SAULT', cyclefactor=0.1)
The clean task and underlying tools can handle cases where there are multiple dish sizes, and thus voltage patterns and primary beams, in the array. This is effected by using the dish sizes stored in the ANTENNA sub-table of the MS. Depending on how the data was written and imported into CASA, the user may have to manually edit this table to insert the correct dish sizes (e.g. using browsetable or the tb table tool).
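If the ANTENNA sub-table does need editing, a session along the following lines (a sketch; the MS name is hypothetical) will rewrite the dish diameters with the tb table tool:

```
tb.open('my.ms/ANTENNA', nomodify=False)   # open the ANTENNA sub-table writable
diams = tb.getcol('DISH_DIAMETER')         # one entry per antenna, in meters
diams[:] = 25.0                            # e.g. set all dishes to 25 m
tb.putcol('DISH_DIAMETER', diams)          # write the corrected values back
tb.close()
```

Only change entries you know to be wrong; the primary beam models depend directly on these values.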
The clean task handles full and partial Stokes polarization imaging through the setting of the stokes parameter (§ 5.2.10). The subsequent deconvolution of the polarization planes of the image and the search for clean components is controlled by the psfmode parameter (§ 5.3.1). If the stokes parameter includes polarization planes other than I, then choosing psfmode='hogbom' or psfmode='clarkstokes' will clean (search for components) each plane sequentially, while psfmode='clark' will deconvolve the planes jointly.
The interactive clean example given above (§ 5.3.14) shows a case of polarization imaging.
When imaging sufficiently large angular regions, the sky can no longer be treated as a two-dimensional plane and the use of the standard clean task will produce distortions around sources that become increasingly severe with increasing distance from the phase center. In this case, one must use a “wide-field” imaging algorithm such as w-projection or faceting.
When is wide-field imaging needed? The number of required facets N depends on the maximum baseline Bmax, the dish diameter D and the wavelength λ approximately as N ≈ Bmax λ / D²,
and w-projection is required when N>1. (For details, see “Synthesis Imaging in Radio Astronomy II”, ed. Taylor, G., Carilli, C., Perley, R. 1999). With the 25 m diameter JVLA dishes, and assuming that imaging is requested out to the primary beam FWHM, w-projection is required for the array configurations listed in Table 5.1.
Receiver Band    Wavelength [cm]    Array configurations
4                430                A/B/C/D
L                20                 A/B/C
S                10                 A/B
C                5                  A
X                3                  A
Ku               2                  A
K                1.4                –
Ka               0.9                –
Q                0.7                –
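The rule of thumb above is easy to evaluate yourself. The sketch below (the Bmax value for JVLA A configuration is an assumed approximate figure) estimates N for a given baseline, wavelength, and dish size:

```python
# Rough facet-count estimate, using the relation N ~ Bmax * lambda / D**2
# described above.  Wide-field corrections are needed when N > 1.
def n_facets(bmax_m, wavelength_m, dish_m):
    """Approximate number of facets per axis when imaging out to the PB FWHM."""
    return bmax_m * wavelength_m / dish_m ** 2

# JVLA A configuration (Bmax ~ 36.4 km, an assumed value) at L band (20 cm):
n = n_facets(36.4e3, 0.20, 25.0)
print(round(n, 1))  # 11.6 -> wide-field imaging is needed
```

For a compact configuration at high frequency the same estimate drops below 1 and the standard gridder suffices.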
The relevant inputs for clean for wide-field imaging are:
gridmode    = 'widefield' # The kind of gridding kernel to be used
                          # for FFT-based transforms
wprojplanes = 1           # Number of w-projection planes for convolution
facets      = 1           # Number of facets along each axis (main image only)
Most of the clean parameters behave as described previously.
Wide-field imaging can be carried out using two major modes. First, the w-projection mode, as chosen with ftmachine, deals with the w-term (the phase associated with the sky/array curvature) internally. Second, the image can be broken into many facets, each small enough that the w-term is not significant. These two basic methods can be combined, as discussed below.
When using wide-field imaging, the position and image size of any independent outlier images must be specified. Those positions are used to clean strong sources that may reside in those areas and influence the central image.
There are two options to specify the outlier fields:
Direct listing of fields: The outlier field directions are provided via their centers (phasecenter parameter), and their sizes as additional entries in the imsize parameter, e.g. 128 pixels in the example below. clean will then produce the additional images; their names are provided in the imagename parameter, which becomes a list of the main field name plus the outlier field names:
vis = 'wfield.ms'                          # Name of input visibility file
imagename = ['n5921','outlier1','outlier2'] # Pre-names of output images
outlierfile = ''                           # Text file with image names, sizes, centers
mask = [['image_setup.rgn'],[''],['']]
imsize = [[2048,2048],[128,128],[128,128]] # Image sizes in pixels (nx,ny)
cell = '1.0arcsec'                         # The image cell size in arcseconds [x,y]
phasecenter = ['',
               'J2000 13h27m20.98 43d26m28.0',
               'J2000 13h30m52.158 43d23m08.00']
Outlier file: For many outlier fields, it may be easier to set up the main interface to clean for the main field only and list the outlier fields in an additional outlierfile:
imagename = 'n5921'
outlierfile = 'outliers.txt'
imsize = [1024,1024]
phasecenter = ''
outliers.txt provides all outlier fields with a syntax that is similar to the direct input, but separated by field. Below is an example of an outlier file:
#content of outliers.txt
#
#outlier field1
imagename='outlier1'
imsize=[512,512]
phasecenter = 'J2000 13h30m52.15 43d23m08.0'
mask='box[[245pix,245pix],[265pix,265pix]]'
#
#outlier field2
imagename='outlier2'
imsize=[512,512]
phasecenter = 'J2000 13h24m08.16 43d09m48.0'
The syntax rules for the outlier files are:
The older AIPS-style convention (and box definition) that was used in CASA 3.2 and earlier is still supported in CASA 3.3 but will be deprecated for CASA 3.4 and higher.
The w-projection mode is controlled using the wprojplanes sub-parameter, e.g.
gridmode    = 'widefield' # The kind of gridding kernel to be used
                          # for FFT-based transforms
wprojplanes = 64          # Number of w-projection planes for convolution
facets      = 1           # Number of facets along each axis (main image only)
will construct 64 w-projection planes.
The w-projection algorithm is much faster than faceting, but it does consume a lot of memory. On most 32-bit machines with 1 or 2 Gbytes of memory, images larger than about 4000×4000 cannot be made.
Faceting will break the image into many small parts. This is invoked using facets:
gridmode    = 'widefield' # The kind of gridding kernel to be used
                          # for FFT-based transforms
wprojplanes = 1           # Number of w-projection planes for convolution
facets      = 7           # Number of facets along each axis (main image only)
In this example the image is broken into 49 (7×7) facets.
A reasonable value of facets is one for which the image width of each facet does not need the w-term correction. Pure faceting is computationally slow, so w-projection is recommended.
You can also use a combination of w-projection and faceting:
gridmode    = 'widefield' # The kind of gridding kernel to be used
                          # for FFT-based transforms
wprojplanes = 32          # Number of w-projection planes for convolution
facets      = 3           # Number of facets along each axis (main image only)
This hybrid method allows for a smaller number of wprojplanes in order to conserve memory if the image size approaches the memory limit of the computer. However, there is a large penalty in execution time.
tclean is a refactored version of clean with a better interface, more possible combinations of algorithms, and new algorithms. tclean also allows the imaging and deconvolution to be run with parallelized computing methods. tclean modes that have been commissioned include mfs with nterms=1 and 2 for both the standard and mosaic gridders, and spectral line mode cube with the hogbom deconvolver in the LSRK frame. We also tested the common restoringbeam option as well as multi-scale clean. Commissioning of additional modes is currently underway, including support for parallel computing. We refer to the inline help for a detailed task description. More detailed documentation will be available in CASA 5.0 and later.
Note that eventually tclean will replace the current clean task.
The primary beam correction can be applied during imaging with clean. It is also possible to correct after imaging using the task impbcor for 'regular' data sets, or widebandpbcor for those that used the Taylor-term expansion in clean (nterms>1). impbcor has the following inputs:
# impbcor :: Construct a primary beam corrected image from an image and a primary beam pattern.
imagename  = ''       # Name of the input image
pbimage    = ''       # Name of the primary beam image, which must exist,
                      # or an array of values for the pb response. Default ""
outfile    = ''       # Output image name. If empty, no image is written. Default ""
box        = ''       # One or more boxes to use for fit region(s). Default is
                      # to use the entire directional plane.
region     = ''       # The region to correct. Default is entire image. If both
                      # box and region are specified, box is used and region is not.
chans      = ''       # The frequency planes to correct. Default is all frequencies.
stokes     = 'I'      # The correlations to correct. Default is all.
mask       = ''       # Boolean LEL expression or mask region. Default is none.
mode       = 'divide' # Divide or multiply the image by the primary beam image.
                      # Minimal match supported. Default "divide"
cutoff     = -1.0     # PB cutoff. If mode is "d", all values less than this will
                      # be masked. If "m", all values greater will be masked.
                      # Less than 0, no cutoff. Default no cutoff
wantreturn = False    # Return an image tool referencing the corrected image?
The main inputs are the input image and the image of a primary beam (usually your “image.flux” output image from clean) in the pbimage parameter. The mode parameter will typically be ’divide’ but it is also possible to multiply with the beam pattern.
widebandpbcor has the following options:
# widebandpbcor :: Wideband PB-correction on the output of the MS-MFS algorithm
vis        = ''      # Name of measurement set.
imagename  = ''      # Name-prefix of multi-term images to operate on.
nterms     = 2       # Number of Taylor terms to use
threshold  = ''      # Intensity above which to re-calculate spectral index
action     = 'pbcor' # PB-correction (pbcor) or only calc spectral-index (calcalpha)
reffreq    = ''      # Reference frequency (if specified in clean)
pbmin      = 0.2     # PB threshold below which to not correct
field      = ''      # Fields to include in the PB calculation
spwlist    = ''      # List of N spw ids
chanlist   = []      # List of N channel ids
weightlist = []      # List of N weights (relative)
action='pbcor' computes Taylor-coefficient images that represent the primary beam spectrum and applies them to the input Taylor-coefficient images. action='calcalpha' will recalculate spectral index maps based on the primary beam correction polynomials.
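The reason a correction to the spectral index is needed is that the primary beam itself has a frequency dependence: the beam shrinks with frequency, so a source away from the pointing center appears artificially steep. The toy calculation below (assuming an idealized Gaussian primary beam, not the actual JVLA beam model) shows the size of the bias:

```python
import math

# For an assumed Gaussian primary beam PB(theta, nu) = exp(-k * (theta*nu)**2),
# the beam's own spectral index is d ln PB / d ln nu = 2 ln PB.  The apparent
# source index is steepened by this amount, so it must be subtracted off.
def alpha_pb_gaussian(pb):
    """Spectral index contributed by a Gaussian PB at response level pb."""
    return 2.0 * math.log(pb)

pb = 0.5                   # source sits at the half-power point of the beam
alpha_obs = -2.0           # apparent (uncorrected) spectral index
alpha_src = alpha_obs - alpha_pb_gaussian(pb)
print(round(alpha_src, 3)) # -0.614: much flatter than the apparent value
```

Even at the half-power point the bias is about -1.4 in spectral index, which is why widebandpbcor (or pbcor=True in wideband clean) is essential for spectral index maps of anything but the field center.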
The term “feathering” is used in radio imaging to describe how to combine or “feather” two images together by forming a weighted sum of their Fourier transforms in the (gridded) uv-plane. Intermediate size scales are down-weighted to give interferometer resolution while preserving single-dish total flux density.
The feathering technique does the following:
The feather task is analogous to the AIPS IMERG task and the MIRIAD immerge task with option ’feather’. The term feathering derives from the tapering or down-weighting of the data in this technique; the overlapping, shorter spacing data from the deconvolved interferometer image is weighted down compared to the single dish image while the overlapping, longer spacing data from the single-dish are weighted down compared to the interferometer image.
The tapering uses the transform of the low resolution point spread function. This can be specified as an input image or the appropriate telescope beam for the single-dish. The point spread function for a single dish image may also be calculated using clean.
Advice: If you are feathering large images, choose the number of pixels along the X and Y axes to be composite numbers (and definitely not prime numbers). In general, FFTs work much faster on even and composite numbers. You may use the subimage function of the image tool to trim the number of pixels to something desirable.
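A small helper like the following (a hypothetical utility, not a CASA task) picks the next FFT-friendly size, i.e. the next integer whose only prime factors are 2, 3 and 5:

```python
# Find the next image size >= n whose only prime factors are 2, 3 and 5,
# which FFT libraries handle efficiently (hypothetical helper function).
def next_good_size(n):
    def good(m):
        for p in (2, 3, 5):
            while m % p == 0:
                m //= p
        return m == 1
    while not good(n):
        n += 1
    return n

print(next_good_size(509))   # 512  (509 is prime, a poor FFT size)
print(next_good_size(1000))  # 1000 (2^3 * 5^3, already a good size)
```

Trimming a 509-pixel axis up (by padding) or down (with subimage) to a size like 480 or 512 can speed the transforms up considerably.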
The inputs for feather are:
# feather :: Combine two images using their Fourier transforms imagename = '' # Name of output feathered image highres = '' # Name of high resolution (interferometer) image lowres = '' # Name of low resolution (single dish) image sdfactor = 1.0 # Scale factor to apply to Single Dish image effdishdiam = -1.0 # New effective SingleDish diameter to use in m lowpassfiltersd = False # Filter out the high spatial frequencies of the SD # image
The single-dish data cube is specified by the lowres and the interferometric data cube by the highres keyword. The combined, feathered output cube name is given by the imagename parameter. sdfactor can be used to adjust the flux calibration of the images. Since single-dish processing typically involves fitting and removing a baseline level, the single-dish data may have the more uncertain flux calibration; sdfactor multiplies the single-dish image values to apply any needed correction.
The weighting functions for the data are usually the Fourier transform of the single-dish beam, FFT(PBSD), for the single-dish data, and the inverse, 1-FFT(PBSD), for the interferometric data. It is possible, however, to change the weighting functions by pretending that the single dish is smaller in size via the effdishdiam parameter. This tapers the high spatial frequencies of the SD data and gives more weight to the interferometric data. The lowpassfiltersd parameter can take out artifacts at very high spatial frequencies that are often present, but non-physical, in SD data.
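A toy 1-D version of this weighted uv-plane combination (a sketch assuming a Gaussian FFT(PBSD) and random stand-in images, not the CASA implementation) makes the behavior at the zero spacing explicit:

```python
import numpy as np

# Toy 1-D feather: the SD transform is weighted by w = FFT(PB_SD) and the
# interferometer transform by (1 - w), as described above.  Illustrative only.
n = 256
k = np.fft.fftfreq(n)
w = np.exp(-0.5 * (k / 0.02) ** 2)      # assumed Gaussian FFT(PB_SD); w(0) = 1

rng = np.random.default_rng(0)
sd_img = rng.normal(size=n)             # stand-in for the single-dish image
int_img = rng.normal(size=n)            # stand-in for the deconvolved int. image

sdfactor = 1.0
F = sdfactor * w * np.fft.fft(sd_img) + (1.0 - w) * np.fft.fft(int_img)
feathered = np.real(np.fft.ifft(F))

# At the zero spacing w = 1, so the total flux comes entirely from the SD data:
print(np.isclose(F[0], sdfactor * np.fft.fft(sd_img)[0]))  # True
```

This is the sense in which feathering "preserves single-dish total flux density": the interferometer contributes nothing at the zero spacing, and its weight grows smoothly toward high spatial frequencies.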
Note that the only inputs are for images and feather will attempt to regrid the images to a common shape, i.e. pixel size, pixel numbers, and spectral channels. feather does not do any deconvolution but combines presumably deconvolved images after the fact. This implies that the short spacings extrapolated by the deconvolution process will be those that are down-weighted the most. The single dish image must have a well-defined beam shape and the correct flux units for a model image (Jy/beam instead of Jy/pixel) so use the tasks imhead and immath first to convert if needed.
Starting with a cleaned synthesis image and a low resolution image from a single dish telescope, the following example shows how they can be feathered:
feather(imagename='feather.im',   # Create an image called feather.im
        highres='synth.im',       # The synthesis image is called synth.im
        lowres='single_dish.im')  # The SD image is called single_dish.im
CASA also provides a visual interface to the feather task. The interface is run from a command line outside CASA by typing casafeather in a shell. Fig. 5.7 shows an example. As a first step, one needs to specify a high and a low resolution image, typically an interferometric and a single dish map. Note that the single dish map needs to be in units of Jy beam−1. An output image name is usually specified as well, and optionally an additional image, such as a non-deconvolved (dirty) interferometric image. On the main GUI, press “Feather” to start the feathering process, which includes regridding the low resolution image to the high resolution image.
“casafeather” has the ability to show two major rows of displays (see Fig. 5.7) that can be turned on or off. A good visualization is usually obtained by making both axes logarithmic. This can be specified in the “Customize menu”, the toothed wheel symbol at the top of the panel. The two rows of displays are: 1) “Original Data Slice”: Cuts through the u and v directions of the Fourier transformed input images. A vertical line shows the location of the effective dish diameter(s). 2) “Feathered Data Slice”: The same cuts, but scaled by the “low resolution scale factor” and weighted by the weighting functions (see § 5.6). In this display, the weighting functions themselves are shown, too.
At the top of the display, the effective dish diameter (effdishdiam) for u and v and the sdfactor can be provided in the “Effective Dish Diameter” and “Low Resolution Scale Factor” input boxes.
The data can be visualized in different forms. The data type to be displayed can be selected in the “Color Options” menu. The data can be the unmodified, original data, or data that have been convolved with the high or low resolution beams. One can also select to display data that were weighted and scaled by the functions discussed above.
The data can also be displayed in the form of a “scatter plot” (Fig. 5.8). This allows one to check for differences in flux. In particular, the scaling parameter should be adjusted such that the flux of the Low-resolution data, convolved with the High beam, weighted and scaled, is the same as the Dirty data, convolved with the Low beam, weighted (use the High resolution data instead of the Dirty data if the latter are not available). If that can be achieved, the flux adjustments should be roughly correct. The “scatter plot” can display any two data sets on the two axes, selected from the “Color Preferences” menu.
The “Customize” button at the top (toothed wheel), allows one to set the display parameters as seen in Fig. 5.9. Options are to show the slice plot, the scatter plot, or the legend. One can also select between logarithmic and linear axes, and whether the x-axis for the slices are in the u, or v, or both directions, or, alternatively a radial average in the uv-plane can be used. For data cubes one can also select a particular velocity plane, or to average the data across all velocity channels.
For most careful imaging, you will want to restrict the region over which you allow CLEAN components to be found. To do this, you can create a 'deconvolution region' or 'mask' image using the boxit task or the viewer. Note that clean can take simple boxes or box files as direct input to its mask parameter, so these tasks are most useful when direct input to clean (or use of interactive clean) will not suffice.
There are two ways to construct region files or mask images for use in deconvolution: interactively with the viewer, or automatically with the boxit task, which finds a set of box regions based upon an input image and control parameters.
The boxit task creates “cleanbox” deconvolution regions automatically from an image. It searches the image to find “islands”: all contiguous sets of pixels above the given threshold. The extreme x- and y-pixels of the island are used to determine the corners of a rectangular box that covers each island. The set of boxes are written out into a single region file with extension .rgn. Boxit works on single-plane images as well as multi-channel images: in the latter case, the thresholding and boxing is done separately in each plane of the image. The output region file from boxit can be used as the mask input parameter for the clean task (§ 5.3).
The parameter inputs for boxit are:
# boxit :: Box regions in image above given threshold value.
imagename  = ''       # Name of image to threshold
regionfile = ''       # Output region file
threshold  = '0.0mJy' # Threshold value. Must include units.
minsize    = 2        # Minimum number of pixels for a boxable island
diag       = False    # Count diagonal connections?
boxstretch = 1        # Increase box sizes by this many pixels beyond
                      # thresholded pixels.
overwrite  = False    # Overwrite existing region file?
The regionfile parameter specifies the root name of the region file. It will automatically be given .rgn as the file extension. The minsize parameter specifies the smallest island that qualifies to be boxed. It refers to the total number of pixels in the island. To include pixels connected only on the diagonal as being part of the same island, set the diag parameter to True. The boxstretch parameter increases the size of the boxes beyond the extent of the island, and can range from -1 to 5. For a value of 1 (the default), the box is stretched by one pixel in each outward direction; therefore, each side of the box lengthens by two pixels. Finally, the parameter overwrite specifies whether an existing region file can be overwritten.
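The island search itself is a straightforward connected-component pass. The sketch below (an illustrative re-implementation on a plain 2-D list, not the boxit code; boxes are not clipped to the image edge here) shows how threshold, minsize, diag and boxstretch interact:

```python
# Sketch of the boxit island search: threshold the image, group contiguous
# pixels into islands (4- or 8-connectivity via `diag`), drop islands smaller
# than `minsize`, and box each island stretched by `boxstretch` pixels.
def find_boxes(image, threshold, minsize=2, diag=False, boxstretch=1):
    ny, nx = len(image), len(image[0])
    seen = [[False] * nx for _ in range(ny)]
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if diag:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    boxes = []
    for y in range(ny):
        for x in range(nx):
            if seen[y][x] or image[y][x] <= threshold:
                continue
            stack, island = [(y, x)], []
            seen[y][x] = True
            while stack:                       # flood-fill one island
                cy, cx = stack.pop()
                island.append((cy, cx))
                for dy, dx in steps:
                    py, px = cy + dy, cx + dx
                    if (0 <= py < ny and 0 <= px < nx
                            and not seen[py][px] and image[py][px] > threshold):
                        seen[py][px] = True
                        stack.append((py, px))
            if len(island) < minsize:
                continue                       # too small to box
            ys = [p[0] for p in island]
            xs = [p[1] for p in island]
            boxes.append((min(xs) - boxstretch, min(ys) - boxstretch,
                          max(xs) + boxstretch, max(ys) + boxstretch))
    return boxes

img = [[0, 0, 0, 0],
       [0, 5, 6, 0],
       [0, 4, 0, 0],
       [0, 0, 0, 0]]
print(find_boxes(img, 1.0))  # [(0, 0, 3, 3)]
```

The three bright pixels form one island; its bounding box (1,1)-(2,2) is stretched by one pixel on each side, giving the (0,0)-(3,3) box.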
ALERT: The boxit task is a prototype under active development and coded in Python. Eventually we will add functionality to deal with the creation of non-rectangular regions and with multi-plane masks, as well as efficiency improvements.
The ft task will add a source model (units should be Jy/pixel) or a clean component list to a Measurement Set. This is especially useful if you have a resolved calibrator and you want to start with a model of the source before you derive accurate gain solutions. This is also helpful for self-calibration (see § 5.11 below).
The inputs for ft are:
# ft :: Insert a source model into a visibility set:
vis         = ''    # Name of input visibility file (MS)
field       = ''    # Field selection
spw         = 'all' # Spw selection
model       = ''    # Name of input model image(s)
nterms      = 1     # Number of terms used to model the sky frequency dependence
complist    = ''    # Name of component list
incremental = False # Add to the existing model visibility?
usescratch  = False # If True, predicted visibility is stored in MODEL_DATA column
An example of how to do this:
ft(vis='n75.ms',             # Start with the visibility dataset n75.ms
   field='1328',             # Select field name '1328+307' (minimum match)
   model='1328.model.image') # Name of the model image you have already
This example will add the source model '1328.model.image' to all entries that match the field name '1328'. If the parameter usescratch is set to True, ft will Fourier transform the source model and fill the MODEL_DATA column with the data. This, however, is only needed in special applications, and usescratch=False is the default.
Alternatively, one can add a clean component list to be used as a model to the MS. The following procedure is an example:
# For a point source with no spectral index
cl.addcomponent(flux=0.39, fluxunit='Jy', shape='point',
                dir='J2000 19h33m09s 15d01m20s')
# For a Gaussian with a spectral index
cl.addcomponent(flux=1.25, fluxunit='mJy', polarization='Stokes',
                dir='J2000 19h30m00s 15d00m00s', shape='gaussian',
                majoraxis='10arcsec', minoraxis='6arcsec',
                positionangle='0deg', freq='1.25GHz',
                spectrumtype='spectral index', index=-0.8)
# You can add more components if you wish by calling addcomponent
# repeatedly with different parameters
# Save it to disk
cl.rename('my_component.cl')
cl.close()
# Write the model into the measurement set ('myms')
ft(vis='myms', complist='my_component.cl')
The uv-data can also be gridded with msuvbin before imaging with clean, in order to average large datasets. In particular, multi-epoch observations with similar array configurations and frequency setups can be gridded onto the same grid as the data are obtained. msuvbin is currently experimental and at this time has the following parameters:
# msuvbin :: grid the visibility data onto a defined uniform grid (in the
#            form of an MS); multiple MSs can be gridded onto the same grid
vis         = ''        # Name of input visibility file (MS)
field       = ''        # Field selection of input ms
spw         = ''        # Spw selection
taql        = ''        # TaQL string for data selection
outvis      = ''        # Name of output uvgrid
phasecenter = ''        # Phase center of uv grid
nx          = 1000      # Number of pixels of grid along the x-axis
ny          = 1000      # Number of pixels of grid along the y-axis
cell        = '1arcsec' # Pixel cell size defined in sky dimension
ncorr       = 1         # Number of correlations to store in grid
nchan       = 1         # Number of spectral channels in grid
fstart      = '1GHz'    # Frequency of first spectral channel
fstep       = '1kHz'    # Spectral channel width
wproject    = False     # Do wprojection correction while gridding
memfrac     = 0.5       # Limit how much of memory to use
where field, spw, and taql are data selection parameters. The grid is defined by nx, ny, and cell, as well as nchan, fstart, and fstep for the spectral axis. The phasecenter defines the reference phase center for the grid.
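The essence of uv binning is accumulating weighted visibilities into cells and normalizing. The toy example below (nearest-cell accumulation with made-up numbers; msuvbin itself uses a spheroidal or W-projection kernel) shows why multiple epochs can share one grid:

```python
# Toy uv binning: map each (u, v) sample to its nearest grid cell,
# accumulate weight * visibility and the weights, then normalize to the
# weighted-mean visibility per cell.  Illustrative sketch only.
def grid_vis(uvs, vis, wts, nx, ny, du, dv):
    grid = [[0.0] * nx for _ in range(ny)]
    wgrid = [[0.0] * nx for _ in range(ny)]
    for (u, v), d, w in zip(uvs, vis, wts):
        ix = int(round(u / du)) + nx // 2     # nearest cell in u
        iy = int(round(v / dv)) + ny // 2     # nearest cell in v
        if 0 <= ix < nx and 0 <= iy < ny:
            grid[iy][ix] += w * d
            wgrid[iy][ix] += w
    for iy in range(ny):
        for ix in range(nx):
            if wgrid[iy][ix] > 0:
                grid[iy][ix] /= wgrid[iy][ix] # weighted mean per cell
    return grid, wgrid

# Two samples (e.g. from two epochs) landing in the same cell are averaged,
# and their weights accumulate:
g, w = grid_vis([(0.2, 0.1), (0.3, -0.2)], [1.0, 3.0], [1.0, 1.0],
                nx=8, ny=8, du=1.0, dv=1.0)
print(g[4][4], w[4][4])  # 2.0 2.0
```

Data added later to the same grid simply keep accumulating into the weight and data sums, which is exactly what makes pre-gridding an effective averaging scheme for very large, multi-epoch datasets.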
When wproject=False, a spheroidal function is used to map the visibilities onto the grid; wproject=True will apply W-projection (see § 5.3.18) for wide-field imaging.
msuvbin will create the specified grid and map the visibilities onto it. If outvis already exists, it will use that grid instead and add the visibilities selected from vis to the existing gridded data. This allows one to pre-grid, and thus average, very large datasets, in particular those observed over many epochs.
If you have only an image (obtained from some telescope) and an image of its point spread function, then you can attempt a simple image-plane deconvolution. Note that for interferometer data, full uv-plane deconvolution using clean or a similar algorithm is superior!
The default inputs for deconvolve are:
# deconvolve :: Deconvolve a point spread function from an image
imagename = ''      # Name of image to deconvolve
model     = ''      # Name of output image to which deconvolved components
                    # are stored
psf       = ''      # Name of psf, or gaussian parameters if psf is assumed
                    # gaussian
alg       = 'clark' # Deconvolution algorithm to use
niter     = 10      # Number of iterations to use in deconvolution process
gain      = 0.1     # CLEAN gain parameter
threshold = '0.0Jy' # Level below which sources will not be deconvolved
mask      = ''      # Name of image that has mask to limit region of
                    # deconvolution
The algorithm (alg) options are: ’clark’, ’hogbom’, ’multiscale’ or ’mem’. The ’multiscale’ and ’mem’ options will open the usual set of sub-parameters for these methods.
Once you have a model image or set of model components reconstructed from your data using one of the deconvolution techniques described above, you can use it to refine your calibration. This is called self-calibration as it uses the data to determine its own calibration (rather than observations of special calibration sources).
In principle, self-calibration is no different than the calibration process we described earlier (§ 4). In effect, you alternate between calibration and imaging cycles, refining the calibration and the model as you go. The trick is that you have to be careful, as defects in early stages of the calibration can get into the model, and thus prevent the calibration from improving. In practice, it is best not to clean very deeply early on, so that the CLEAN model contains only correct components.
One important thing to keep in mind is that self-calibration relies upon having the most recent source model inside the MS. This is indeed the case if you follow the imaging (using clean) directly by the self-calibration. If you have done something strange in between and have lost or overwritten the source model (for example, done some extra cleaning that you do not want to keep), then use the ft task (see § 5.8 above), which adds a source model image or clean component list to an MS.
Likewise, during self-calibration (once you have a new calibration solution) the imaging part relies upon having the CORRECTED_DATA column contain the self-calibrated data. This is done with the applycal task (§ 4.6.1).
The clearcal command can be used during the self-calibration if you need to clear the CORRECTED_DATA column and revert to the original DATA. If you need to restore the CORRECTED_DATA to any previous stage in the self-calibration, use applycal again with the appropriate calibration tables.
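A typical cycle therefore alternates the tasks described above. The following is only a schematic sketch (MS name, table names, solint, calmode and refant are all hypothetical placeholders to be adapted to your data):

```
clean(vis='my.ms', imagename='selfcal0', niter=500, ...)    # shallow first clean;
                                                            #  model goes into the MS
gaincal(vis='my.ms', caltable='selfcal1.gcal',              # solve against that model
        solint='30s', calmode='p', refant='VA02')           #  (phase-only at first)
applycal(vis='my.ms', gaintable=['selfcal1.gcal'])          # fill CORRECTED_DATA
clean(vis='my.ms', imagename='selfcal1', niter=2000, ...)   # image again, deeper
# ... repeat, solving for amplitude+phase (calmode='ap') only once the
#     phase-only solutions have converged
```

Each pass should improve the model only as far as the current calibration supports; cleaning too deeply too early freezes calibration errors into the model.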
ALERT: In later patches we will change the tasks so that users need not worry what is contained in the MS scratch columns and how to fill them. CASA will handle that underneath for you!
For now, we refer the user back to the calibration chapter for a reminder on how to run the calibration tasks.
Parallelized cleaning on multi-core systems will be integrated into the new task tclean, which is currently experimental.
The data reduction tutorials on casaguides.nrao.edu provide walkthroughs for high and low frequency, spectral line and polarization imaging techniques.