Imaging Algorithms
Imaging is the process of converting a list of calibrated visibilities into a raw or 'dirty' image. There are three stages to interferometric image formation: weighting, convolutional resampling, and a Fourier transform.
Data Weighting
During imaging, the visibilities can be weighted in different ways to alter the instrument's natural response function in ways that make the image reconstruction tractable.
Natural
The natural weighting scheme gives equal weight to all samples. Since lower spatial frequencies are usually sampled more often than higher ones, the inner uv-plane will have a significantly higher density of samples, and hence signal-to-noise, than the outer uv-plane. The resulting density-upweighting of the inner uv-plane produces the largest (coarsest) angular resolution and can sometimes result in undesirable structure in the PSF, which reduces the accuracy of the minor cycle. However, at the location of a source, this method preserves the natural point-source sensitivity of the instrument.
Uniform
Uniform weighting gives equal weight to each measured spatial frequency, irrespective of sample density. The resulting PSF has the narrowest possible main lobe (i.e. the smallest possible angular resolution) and suppressed sidelobes across the entire image; it is best suited to sources with high signal-to-noise ratios, to minimize sidelobe contamination between sources. However, the peak sensitivity is significantly worse than optimal (typically ~20% worse for interferometers with a reasonably large number of antennas), since data points in densely sampled regions have been weighted down to make the weights uniform. Also, isolated measurements can get artificially high relative weights, which may introduce further artifacts into the PSF.
Briggs
Briggs or robust weighting [1] creates a PSF that varies smoothly between natural and uniform weighting, based on the signal-to-noise ratio of the measurements and a tunable parameter that defines a noise threshold. High signal-to-noise samples are weighted by sample density to optimize the PSF shape, while low signal-to-noise data are naturally weighted to optimize sensitivity.
[1] Briggs D., 1995, PhD Thesis, New Mexico Institute of Mining and Technology
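The three weighting schemes above can be illustrated with a toy calculation. This is a hypothetical helper (not a CASA API): it bins the data weights onto a uv grid and reweights each sample by the local gridded weight density, using the standard form of the Briggs robustness factor.

```python
import numpy as np

def imaging_weights(u, v, data_wt, npix, cell, scheme="briggs", robust=0.5):
    """Toy imaging-weight calculator for a list of uv samples.

    Hypothetical sketch (not the tclean implementation): bins natural
    weights onto a regular uv grid, then reweights each sample by the
    local gridded weight density.
    """
    du = 1.0 / (npix * cell)                         # uv cell size
    iu = np.floor(u / du + npix / 2).astype(int)
    iv = np.floor(v / du + npix / 2).astype(int)
    density = np.zeros((npix, npix))
    np.add.at(density, (iu, iv), data_wt)            # gridded weight density

    Wk = density[iu, iv]                             # local density at each sample
    if scheme == "natural":
        return data_wt                               # equal weight to all samples
    if scheme == "uniform":
        return data_wt / Wk                          # divide out local density
    # Briggs/robust: smoothly interpolates between the two extremes.
    f2 = (5.0 * 10.0 ** (-robust)) ** 2 * np.sum(data_wt) / np.sum(density ** 2)
    return data_wt / (1.0 + Wk * f2)
```

A large positive robust value approaches natural weighting; a large negative value approaches uniform.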
uv-Taper
uv-tapering applies a multiplicative Gaussian taper to the spatial-frequency grid, to weight down high spatial-frequency measurements relative to the rest. This suppresses artifacts arising from poorly sampled regions near and beyond the maximum spatial frequency, which is important for deconvolution because visibilities estimated for these regions would have poor or no constraints from the data. The natural PSF is also smoothed out; by increasing the width of the main lobe, this tunes the sensitivity of the instrument to scale sizes larger than its angular resolution. There are limits to how much uv-tapering is desirable, however, because the sensitivity decreases as more and more data are weighted down. uv-tapering is usually applied in combination with one of the above weighting methods.
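A minimal sketch of such a taper (the function name and FWHM parameterization are illustrative, not a tclean interface): weights are multiplied by a Gaussian that falls off with baseline length in the uv-plane.

```python
import numpy as np

def gaussian_uvtaper(u, v, weights, fwhm_uv):
    """Multiply weights by a Gaussian taper in the uv-plane.

    Illustrative sketch: fwhm_uv is the full width at half maximum of
    the taper, in the same units as u and v. Longer baselines (larger
    |uv|) are weighted down smoothly.
    """
    sigma = fwhm_uv / np.sqrt(8.0 * np.log(2.0))   # convert FWHM to sigma
    r2 = u**2 + v**2                               # squared uv radius
    return weights * np.exp(-r2 / (2.0 * sigma**2))
```

At a uv radius of half the FWHM, a sample's weight is halved; at the phase-center spacing (u = v = 0) it is unchanged.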
Gridding + FFT
Imaging weights and weighted visibilities are first resampled onto a regular uv-grid (convolutional resampling), using a prolate-spheroidal function as the gridding convolution function (GCF). The result is then Fourier-inverted and grid-corrected to remove the image-domain effect of the GCF. The PSF and residual image are then normalized by the sum of weights.
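The pipeline can be sketched in a deliberately simplified form. A real imager convolves each sample onto the grid with a prolate-spheroidal kernel and applies a grid correction in the image domain; the toy version below uses nearest-cell ("pillbox") gridding, so no grid correction is needed, and then applies the FFT and sum-of-weights normalization.

```python
import numpy as np

def grid_and_image(u, v, vis, wt, npix, cell):
    """Toy gridder: nearest-cell gridding, FFT, and normalization.

    Simplified sketch (not the tclean implementation): real gridders
    use a prolate-spheroidal GCF plus an image-domain grid correction.
    """
    du = 1.0 / (npix * cell)                        # uv cell size
    iu = np.round(u / du + npix / 2).astype(int)
    iv = np.round(v / du + npix / 2).astype(int)
    grid = np.zeros((npix, npix), dtype=complex)
    np.add.at(grid, (iu, iv), wt * vis)             # accumulate weighted vis
    sumwt = np.sum(wt)
    # Fourier-invert the uv grid and shift the image center to the middle.
    dirty = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid))).real
    return dirty * npix * npix / sumwt              # normalize by sum of weights
```

With unit visibilities (a point source at the phase center), the normalized dirty image peaks at 1.0 in the center pixel, which is the PSF peak.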
Direction-dependent corrections
Basic gridding methods use prolate spheroidals for gridding (gridder='standard'), along with image-domain operations to correct for direction-dependent effects. More sophisticated, and computationally intensive, methods (gridder='wproject', 'widefield', 'mosaic', 'awproject') apply direction-dependent, time-variable and baseline-dependent corrections during gridding in the visibility domain, by choosing or computing the appropriate gridding convolution kernel to use along with the imaging weights.
The figure below shows examples of kernels used for the following gridding methods: Standard, W-Projection, and A-Projection. Combinations of wide-field corrections are made by convolving these kernels together. For example, AW-Projection convolves W-kernels with baseline aperture functions, possibly including a prolate spheroidal as well for its anti-aliasing properties. Mosaicing is implemented as a phase gradient across the gridding convolution kernel, calculated at the uv-cell resolution dictated by the full mosaic image size.
In tclean, gridder='mosaic' uses Airy-disk or polynomial models to construct azimuthally symmetric beams per antenna, which are transformed into baseline convolution functions and used for gridding. gridder='awproject' uses ray-traced models of antenna aperture illumination functions to construct GCFs per baseline and time (including azimuthal asymmetry, beam squint, and rotation with time). More details are given in the Wide-Field Imaging page.
Computing costs during gridding scale directly with the number of pixels needed to accurately describe each convolution kernel. The standard gridding kernel (prolate spheroid) typically spans 3x3 pixels. W-Projection kernels can range from 5x5 to a few hundred pixels on a side. A-Projection kernels typically range from 8x8 to 50x50 pixels. When effects are combined by convolving different kernels together (for example A- and W-Projection), the kernel sizes increase accordingly.
Memory (and one-time computing costs) also scale with the number of distinct kernels one must operate with: for example, a large number of different W-Projection kernels, or an array whose antenna illumination patterns differ enough between antennas that they must be treated separately. In a heterogeneous array, each baseline illumination function can be different. Additionally, if any of these aperture-illumination-based kernels are rotationally asymmetric, they must be rotated (or recomputed at different parallactic angles) as a function of time.
Normalization
After gridding and the FFT, images must be normalized (by the sum of weights, and optionally by some form of the primary-beam weights) to ensure that the flux in the images represents sky-domain flux.
Sum-of-Weights and Weight Images
The tclean task produces a number of output images used for normalization. The primary reason these are explicit images on disk (and not just internal temporary files in memory) is that, for continuum parallelization, numerators and denominators need to be accumulated separately before the normalization step. For the most part, end users can safely ignore the output .weight, .sumwt and .gridwt images; however, their contents are documented here.
.sumwt
A single-pixel image containing the sum of weights (or, equivalently, the peak of the PSF). For natural weighting this is just the sum of the data weights; for other weighting schemes it contains the effect of the weighting algorithm. For instance, uniform weighting will typically produce a smaller sum of weights than natural weighting. An approximate theoretical sensitivity can be computed as sqrt( 1/sumwt ); a more accurate estimate requires a different calculation (LINK to some docs from GM on this). In tclean, facetted imaging will produce one value of sumwt per facet, as the normalizations are done separately per facet. For cube imaging, .sumwt contains one value per image channel and can be used to visualize the relative weights across the spectrum (and therefore the expected image rms). This theoretical sensitivity information is printed to the logger after the PSF generation stage.
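As a worked example of the approximate sensitivity estimate, with hypothetical per-channel sum-of-weights values (the numbers below are invented for illustration):

```python
import numpy as np

# Hypothetical per-channel values read from a .sumwt cube.
sumwt = np.array([4.0e6, 3.6e6, 2.5e6])

# Approximate theoretical rms per channel, under ideal assumptions;
# as noted above, an accurate estimate requires a different calculation.
rms = np.sqrt(1.0 / sumwt)
```

Channels with smaller sum-of-weights (e.g. those with more flagged data) yield a proportionally higher expected rms.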
.weight
Projection gridders such as 'mosaic' and 'awproject' use baseline aperture illumination functions for gridding. The quantity in the .weight image represents the square of the PB, accumulated over baselines, time and frequency. For mosaics, it includes a combination across pointings as well (although, as can be seen from the equations in the mosaicing section, this is not accurate when weights between pointings differ considerably).
.gridwt
A series of temporary images (within the parallel .workdirectory) used to accumulate binned natural weights before the calculation of imaging weights. These are not used for normalization anywhere after the initial image-weighting stage.
Normalization Steps
Standard Imaging
For gridders other than 'mosaic' and 'awproject', normalization of the image formed after gridding and the FFT is simply division by the sum of weights (read from the .sumwt image). This suffices to transform the image into units of sky brightness, and is the typical flat-noise normalization (see below).
Imaging with primary beams (and mosaics)
For gridder='mosaic' and 'awproject', which use baseline aperture illumination functions during gridding, the result contains an additional instance of the PB, which needs to be divided out. Normalization involves three steps: (a) division by the sum of weights, (b) division by an average PB given by sqrt(weight image), and (c) a scaling to move the peak of the PB = sqrt(weight image) to 1.0. This ensures that fluxes in the dirty image (and therefore those seen by the minor cycle) represent true sky fluxes in regions where the primary beam is at its peak value, or where the mosaic has a relatively constant flat sensitivity pattern. The reverse operations of (b) and (c) are done before predicting a model image in the major cycle. (This description refers to flat-noise normalization; corresponding changes are made for the other options.)
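The three steps can be sketched as follows. This is a hypothetical helper, not CASA code; the pblimit-style mask and the exact thresholding convention are assumptions for illustration.

```python
import numpy as np

def normalize_flat_noise(residual, sumwt, weight_image, pblimit=0.2):
    """Sketch of flat-noise normalization for projection gridders.

    (a) divide by the sum of weights,
    (b) divide by the average PB = sqrt(weight image),
    (c) rescale so the PB peak maps to 1.0.
    Regions where the PB falls below the (assumed) pblimit fraction of
    its peak are left at zero, since they cannot be normalized reliably.
    """
    img = residual / sumwt                     # (a) sum-of-weights division
    pb = np.sqrt(weight_image)                 # average primary beam
    out = np.zeros_like(img)
    mask = pb > pblimit * pb.max()
    out[mask] = img[mask] / pb[mask]           # (b) divide out average PB
    return out * pb.max()                      # (c) PB peak -> 1.0
```

At the PB peak, steps (b) and (c) cancel, so fluxes there directly represent sky fluxes.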
Types of normalization
There are multiple ways of normalizing the residual image before beginning minor-cycle iterations: one is to divide out the primary beam before deconvolution, and another is to divide it out of the deconvolved image. Both approaches are valid, so it is important to clarify the difference between the two. A third option is included for completeness.
For all options, the 'pblimit' parameter controls the regions of the image where PB-correction is actually computed. Regions below the pblimit cannot be normalized and are set to zero. For standard imaging, this applies only to the pb-corrected output image. For gridder='mosaic' and 'awproject' it applies to the residual, restored and pb-corrected images. A small value (e.g. pblimit=0.01) can be used to increase the region of the sky actually imaged.
Flat-noise
The dirty image represents $I^{dirty} = I^{psf} \star \left( I^{PB} \cdot I^{sky} \right)$
Primary-beam correction is not done before the minor-cycle deconvolution. The dirty image is the instrument's response to the product of the sky and the primary beam, and therefore the model image will represent the product of the sky brightness and the average primary beam. The noise in the image relates directly to the measurement noise of the interferometer and is the same across the image, so the minor cycle can give equal weight to all flux components that it finds. At the end of deconvolution, the primary beam must be divided out of the restored image. This form of normalization is useful when the primary beam is the dominant direction-dependent effect, because the images going into the minor cycle satisfy a convolution equation. It is also more appropriate for single-pointing fields of view.
Imaging with the prolate-spheroidal gridder automatically gives flat-noise images.
Flat-sky
The dirty image represents $I^{dirty} = \frac{1}{I^{PB}} \cdot \left[I^{psf} \star \left( I^{PB} \cdot I^{sky} \right) \right]$
Approximate primary-beam correction is done on the dirty image before the minor-cycle iterations. The amplitudes of the flux components found during deconvolution will be free of the primary beam and will represent the true sky. However, the image going into the minor cycle will not satisfy a convolution equation, and the noise in the dirty image will be higher in regions where the primary-beam gain is low; the minor cycle therefore needs to account for this while searching for flux components (a signal-to-noise-dependent CLEAN). This form of normalization is particularly useful for mosaic imaging, where the sky brightness can extend across many pointings, or where the distribution of weights across pointings is uneven. Joint mosaics are usually done for sources with spatial scales larger than the field of view of each antenna, scales that are not explicitly present in the measured data. In this situation, allowing the minor cycle to use flux components that span the beams of adjacent pointings is likely to provide a better constraint on the reconstruction of these unmeasured spatial frequencies, and to produce smoother large-scale emission.
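The flat-sky division, with the pblimit cutoff described earlier, can be sketched as follows (a hypothetical helper, not a CASA interface):

```python
import numpy as np

def flat_sky(flat_noise_img, pb, pblimit=0.2):
    """Convert a flat-noise image to flat-sky by dividing out the PB.

    Minimal sketch: regions where the PB gain falls below pblimit
    cannot be normalized reliably and are set to zero.
    """
    out = np.zeros_like(flat_noise_img)
    mask = pb > pblimit
    out[mask] = flat_noise_img[mask] / pb[mask]   # PB division inside mask
    return out
```

Note that the noise in the output rises toward the masked edges, which is why the minor cycle must search for components in a signal-to-noise-aware way.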
PB-square normalization
The dirty image represents $I^{dirty} = I^{PB} \cdot \left[ I^{psf} \star \left( I^{PB} \cdot I^{sky} \right) \right]$
This third option (not currently available for use, but supported internally) is to perform no PB-based divisions after the gridding and FFT (using gridder='mosaic' or 'awproject'), and to let the minor cycle proceed as-is. The advantage of this approach is the elimination of error-inducing divisions by the primary beam, especially in low-gain regions and near the PB cutoff edges.