
Chapter 8  Single-dish Data Processing

This chapter describes the single-dish toolsets of CASA. The goal of CASA single-dish development is to produce usable and intuitive software on the CASA platform to calibrate, reduce and image data taken with the ALMA single-dish telescopes, the Nobeyama telescope, and the ASTE single-dish telescope.

The CASA single-dish tool suite is built separately from the interferometer suite, having an early development history based on the ATNF Spectral Analysis Package (ASAP), which operated on a 'Scantable' format. Work has been ongoing to transition from the Scantable format to the "Measurement Set" format used by interferometer CASA. In doing so, both software and development effort are shared more openly between CASA single-dish and interferometer developers, and longer-term development is more straightforward, without requiring very specialised knowledge.

The basic reduction and imaging steps for single-dish data can be accomplished entirely in Measurement Set format since CASA 4.5, and we continue to extend the functionality of many of the basic steps in CASA 4.7: fitting of sinusoid families to baselines (to remove standing waves), min/max clipping in the imaging stage (to improve robustness to spurious data), and automated line-finding and characterisation.

We anticipate that CASA 4.7 will be the last CASA version in which support for the Scantable format is maintained. Although CASA single dish will continue to maintain a 'filler' task that converts Scantable format to Measurement Set (importasap), the remaining Scantable-based tasks will not be explicitly supported beyond CASA 4.7.

We also notify users of our intention to re-organise the naming of the CASA-SD tasks (to be implemented in CASA 5.0): tasks prefixed with 'tsd*' will be renamed to simply 'sd*', and any tasks superseded by these new tasks will be renamed to 'oldsd*'.

In this chapter, we detail the current status of CASA SD implementation and functionality, covering CASA SD-specific environments, recent progress in design development, and a brief overview of reducing ALMA single-dish data with CASA 4.7, followed by a more detailed description of the operation of the CASA single-dish Measurement Set tasks. Finally, we describe, somewhat more briefly, the operation of the ASAP-based tasks.

For details on ASAP – including the User Guide, Reference Manual, and tutorial – see the ASAP home page at ATNF:

For trouble-shooting, please be sure to first check the list of known issues and features of ASAP and the SD tasks presented in Sect. 1.2.1.

8.1  CASA-SD status, setup and current-issues

8.1.1  Transition from ASAP scantable format

All ALMA raw data, including data from the Single-dish (also 'Total Power'; TP) array of ALMA, are in the ALMA Science Data Model (ASDM) format. For reasons of efficiency, prior to CASA 4.4, single-dish ASDMs were processed partly with the ASAP toolkit, which required data conversion to Scantable format. The differences between the formats mainly arise from the richness of ancillary data, the way pointing and antenna information is stored, and the way the integrations are stored and sorted in the data itself. As mentioned previously, the steps for basic reduction and calibration can be completed without conversion to Scantable format since CASA 4.5, and we continue to build towards operating completely in Measurement Set format for the coming cycles, with a view to dispensing with Scantable format in approximately CASA 5.0.

8.1.2  Correlator non-linearity: single-dish data before Cycle 3

In 2014 it was determined that ASDM products of the Atacama Compact Array correlator must be scaled to compensate for non-linear terms. From Cycle 3 onwards, the scaling is done at the point of data-taking (i.e. online calibration). Data taken before Cycle 3 should be checked to see whether it is necessary to scale the data in CASA. The adjustment to compensate for the non-linearity can be applied at any stage after calibration, simply by multiplying the calibrated data by a factor of 1.25. This can be achieved using the gencal and applycal tasks.
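
The factor of 1.25 cannot be passed to gencal directly: gencal applies the square of the inverse of the supplied parameter (see item 5 of Section 8.4.1), so the value supplied must be the inverse square root of the desired factor. A minimal sketch of the arithmetic, with hypothetical file names in the commented CASA calls:

```python
import math

# Non-linearity factor for pre-Cycle-3 ACA correlator data (from the text).
factor = 1.25

# gencal internally applies the square of the inverse of the supplied value,
# so pass the inverse square root of the desired scaling factor.
gencal_parameter = 1.0 / math.sqrt(factor)

# Within CASA, the correction would then be generated and applied roughly as
# (vis/caltable names here are hypothetical placeholders):
#   gencal(vis='uid_sd.ms', caltable='uid_sd.nonlinearity',
#          caltype='amp', parameter=[gencal_parameter])
#   applycal(vis='uid_sd.ms', gaintable=['uid_sd.nonlinearity'])
print(round(gencal_parameter, 4))
```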

8.2  SD data-taking in brief

Here we briefly overview the way ALMA obtains single-dish observational data. This overview is not intended to be exhaustive; the reader should refer to the Technical Handbook for a more complete description.

Both interferometer and single-dish data from ALMA are initially in a format called the ALMA Science Data Model (ASDM). These ASDMs must be converted at least to Measurement Set for many single-dish tasks (some CASA single-dish tasks also require conversion to Scantable format, but this happens transparently to the operator, and is not necessary throughout the conventional data reduction process). Other observatories (i.e. Nobeyama, ASTE) have their own native formats, which are converted to MS format using fillers (e.g. importnro).

All single-dish target observations need to be 'referenced' with measurements of 'blank' sky (that is, a measurement pointed away from any astronomical source). We refer to the target and blank-sky observations as ON and OFF (or REFERENCE) respectively.

The native unit for properly referenced and calibrated single-dish observations is brightness temperature, TB, in units of K. ALMA single-dish data are always associated with interferometer observations, and the combination of the single-dish and interferometer data requires that the single-dish units of brightness temperature be converted into flux density. The factor for conversion from brightness temperature to flux density is computed empirically, using observations of an 'amplitude calibrator': a quasar for Band 3, or otherwise Uranus. It is currently ALMA policy that the measurements of the Jy/K calibrator be taken within a few days of the science observations for all bands (although Bands 3 and 6 may have a more relaxed constraint, of approximately a week). Both the calibration and science observations also have their own pointing, focus, Tsys and reference (short-term calibration) observations, taken in the following order (with the exceptions noted):

The observing modes for science target and flux calibrator observations are shown schematically in Figure 8.1.

Figure 8.1: Observation modes for science observations (A, left), where the OFF position is distant from the target field, and calibrator observations (B, right), where the OFF positions are extracted from the edges of the scanned area.

  1. Focus optimisation (data discarded from data reduction)
  2. Pointing optimisation (data discarded from data reduction)
  3. Atmosphere measurement (Tsys calibration)
  4. Measurement of Reference (OFF) position (science target observations only)
  5. Atmosphere measurement (Tsys calibration)
  6. OTF Measurement of science target.
  7. Repeat from #3 until integration time is achieved.

The OFF position is not observed during the observations of the calibrator (quasar/Uranus), because the calibrators are necessarily compact, and so the mapped area is much larger than the source size. Calibrator observation data are calibrated relative to "edge of map" data, rather than explicit OFF data.

8.3  Overview of SD tools and tasks

In the following, we describe the core single-dish tasks, used most frequently in the contexts of reduction and analysis. Non-core tasks (i.e. tasks non-essential for basic data reduction, image generation and analysis), as well as tasks that are expected to be deprecated in the near future, are briefly described in Section 8.5, and the remainder of the ASAP-based SD tool set is detailed in Section 8.7.
Core CASA SD reduction task descriptions

Core CASA SD analysis task descriptions

Other CASA tasks relevant for SD data reduction and analysis

8.4  Overview for Reducing ALMA Single-dish Data

As described in Section 8.2, single-dish science data taken in Cycle 3 onwards are calibrated with separate observations of a quasar (Band 3 observations) or Uranus (all other bands). The science and calibrator datasets are processed separately (i.e. they have their own measurements of Tsys separately applied). The calibrator observations are compared with the ALMA databases to determine a factor to convert the native Kelvin units of the single-dish observations into the Jansky units of the measurements held in the ALMA database. This conversion step is necessary to properly scale the single-dish data for combination with the interferometer datasets.

The entire reduction workflow is shown in Figure 8.2, and each step is detailed below. The tasks applied for the reduction are described in the following sections, which are organised by coarsely grouping the tasks into levels of functionality.

Figure 8.2: General steps for processing both calibrator and science data, and applying calibrations to science data. The right-most column describes the CASA 4.7 tasks called (or keywords set to True, in the case of bdflags2MS) during each stage.

8.4.1  Brief Description of SD reduction

The following briefs each of the steps towards fully calibrated image datasets, as shown in Figure 8.2. Note that this process occurs entirely in Measurement Set format. For more details on the actual steps and their purpose, please refer to the CASA guide.

  1. importasdm, import ASDM flags
    The conversion from ASDM to Measurement Set is done with the task
    importasdm (see Section 2.2.1). This task also imports, from the ASDM to the Measurement Set, various flags necessary for processing single-dish data. This is accomplished by setting bdfflags=True in importasdm.
  2. Inspection of data and Tsys quality
    To obtain an overview, and to determine spectral window identification numbers etc., we use the
    listobs task, which has superseded the sd-based task sdlist. To examine the quality of the Tsys (used to calibrate the data to the Kelvin scale), we use the task gencal to extract the Tsys data as a calibration table, then use plotbandpass (or plotms) to examine the stability.
  3. 'a priori' flagging (band edges, any atmospheric lines)
    Typically, the edges of the bandpasses dive to zero power. This is a symptom of all tuned radio telescopes, and is simply a manifestation of the frequency-sampling function across the baseband. This step is only important if the spectral window also samples the edge of the baseband; in this case we typically flag a few percent of each band (approximately 5-10% of the affected edges of the spectral windows) using the
    flagdata task (which has replaced the task sdflag). The unflagged data should be examined with plotms to ensure the edges of the bands are removed.

    If a spectral line of scientific value falls close to the edge of the band, this step should of course be omitted, and a more careful baseline correction effort will be needed at later stages.

  4. Applying sky and Tsys calibration
    Applying the sky and Tsys calibration is done using the tsdcal task in the Measurement Set format. This task completes the bandpass correction and calibration step, with the output in antenna temperature units, TA*; note, however, that additional calibrations need to be applied (described below) before the data are correctly represented in TA* units.
  5. Correction for correlator non-linearity (only data taken before Cycle 3)
    The correction is applied by passing the correction value (simply a factor of 1.25) via a
    gencal call, which generates the correction as a calibration table, and an applycal call, which applies the calibration. (Note that gencal internally applies the square of the inverse of the supplied calibration value; therefore the value passed into gencal must be the inverse square root of the desired value, i.e. 1/√x.)
  6. Baseline subtraction
    Significant work has gone into improving the functionality of
    tsdbaseline, used to compute and remove a residual baseline feature in the calibrated spectra. It is typical to attempt to remove a rather low-order polynomial or spline, although higher-frequency observations often suffer more significantly from high-order baseline perturbations. Band 3 is almost immune to atmospheric fluctuations, and so orders 1 or 2 are sufficient at those bands; higher orders are likely to be needed for higher bands. tsdbaseline in CASA 4.7 features a new sinusoid-removal algorithm, which operates on the Fourier transform of the spectrum, enabling the user to remove specific lags, or to auto-detect the lag numbers of the most powerful components. This feature is most useful for mitigating the baseline ripples manifest in many single-dish data.

    After this stage is complete, the data will be calibrated in units of Kelvin, in antenna temperature units, TA*.

  7. Imaging of calibration data - calibration data only
    The imaging step is quite involved, and will be described in the last stage below. The difference when imaging the calibrator is that a single continuum image is generated, in contrast to the science data, where a spectral
    cube is created. Note that this step is not critical for generating the required product datasets (in units of Kelvin). It is only important if the single-dish data are to be combined with interferometer data, in which case they must be converted into Janskys to match the units of the interferometer data. If this calibration step is not completed, the science data will remain in units of TA*, i.e. not corrected for primary-beam forward efficiency.
  8. Determination of K to Jy conversion - calibration data only
    After imaging the calibration data, the peak brightness in the calibration image (in antenna temperature, in Kelvin) is extracted using
    imstat. The process differs a little depending on whether the calibrator is a planet or a quasar, in that planet observations must also be deconvolved (since the planet is a significant size relative to the beam). To determine the Jy-per-K value, the known flux density (in Jy) is compared with the extracted peak brightness temperature (in K), and a scaling factor is determined simply as the ratio flux (Jy) / brightness (K). This value is used to scale the science data in the next step. It is envisaged that this process will be done by JAO in the near future, and ALMA users will then need to invoke only a single task to apply the conversion to the science data (details to be determined).
  9. Application of K to Jy conversion - science data only
    As for the application of the non-linearity correction, the conversion from Kelvin to Jy is simply multiplication by a factor. Again, we pass the correction factor via a
    gencal call and an applycal call, to generate and apply the calibration tables, respectively.
  10. Imaging of science data
    For this step (and also for the step to image the calibrator, above), the analysisUtils package needs to be loaded.

    Many of the imaging parameters are not yet automatically determined by default, and so additional tasks must be called (spatial sampling rates are determined with aU.getTPSampling, and the beam FWHM with aU.primaryBeamArcsec) to allocate them to variables which are then passed into the main task, sdimaging. Note that sdimaging can process multiple calibrated datasets simultaneously, and there is no need to split the data into the respective antennas. That is, multiple calibrated datasets with multiple single-dish antennas can be processed transparently with sdimaging.

    Further baselining is possible with imcontsub, and moment images can be generated from the resulting calibrated data with immoments.
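
The K to Jy determination of step 8, and its application in step 9, amount to a simple ratio and rescaling. A sketch of the arithmetic, with hypothetical numbers standing in for the database flux density and the imstat output:

```python
import math

# Hypothetical values: known calibrator flux density from the ALMA database,
# and the peak brightness extracted from the calibrator image with imstat.
flux_density_jy = 2.34     # known flux density of the calibrator (Jy)
peak_brightness_k = 0.52   # peak in the calibrator image (K)

# The scaling factor is simply the ratio flux (Jy) / brightness (K).
jy_per_k = flux_density_jy / peak_brightness_k

# As with the non-linearity correction, this factor would be applied to the
# science data via gencal/applycal, passing the inverse square root of the
# desired factor (see step 5 above).
gencal_parameter = 1.0 / math.sqrt(jy_per_k)
print(round(jy_per_k, 2))
```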

8.5  Brief Description of functionality for relevant SD tasks

In the following, we describe tasks that are exclusive to, and critical for, reducing data with CASA single-dish. Tasks common to interferometer processing (e.g. flagmanager, importasdm) are not repeated here.

8.5.1  Importing and flagging:
importnro, importasap, sdflag and sdflagmanager

Tasks importnro and importasap handle specialised importing of data from different formats (specifically, the Nobeyama NOSTAR format (see Section 8.6) and the Scantable format, respectively). The fundamental structural differences between the Scantable and Measurement Set formats mean that measurements of Tsys in the Scantable format are interpolated (and/or extrapolated, according to some specified algorithm) into the Measurement Set format.
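
The Tsys interpolation mentioned above can be illustrated with a simple linear scheme: Tsys is measured at a few calibration times and must be evaluated at each integration's timestamp. A conceptual sketch of the idea (not the actual filler internals), with arbitrary illustrative numbers:

```python
import numpy as np

# Tsys measured at a few calibration scans (times in seconds, Tsys in K).
tsys_times = np.array([0.0, 600.0, 1200.0])
tsys_values = np.array([80.0, 90.0, 100.0])

# Timestamps of the science integrations needing a Tsys value.
data_times = np.array([300.0, 900.0, 1500.0])

# Linear interpolation between measurements; np.interp extrapolates by
# holding the end values constant (one possible choice of algorithm).
tsys_on_data = np.interp(data_times, tsys_times, tsys_values)
print(tsys_on_data)
```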

sdflag and sdflagmanager operate on Scantable format (and are replaced by flagdata and flagmanager in the case of flagging Measurement Set format), and allow the setting of flags for data in a wide variety of dimensions (time, channel, correlation, etc.). Note that sdflag has an interactive option which is not ported into the Measurement Set equivalent, flagdata, though the functionality of the plotlevel parameter can be almost completely accessed using other extant CASA tasks (e.g. plotms).
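
For Measurement Set data, the a priori edge flagging described in Section 8.4.1 can be done with flagdata and a channel-range spw selection. A sketch of how the selection string might be built (the spectral window ID and channel count here are hypothetical, and would come from listobs):

```python
# Flag ~5% of the channels at each edge of a spectral window.
spw_id = 17               # hypothetical spectral window ID (from listobs)
nchan = 4080              # hypothetical number of channels in the window
edge = int(nchan * 0.05)  # channels to flag at each edge

# CASA spw selection string covering both band edges.
spw_sel = '%d:0~%d;%d~%d' % (spw_id, edge - 1, nchan - edge, nchan - 1)
print(spw_sel)

# Within CASA (vis name hypothetical):
#   flagdata(vis='uid_sd.ms', mode='manual', spw=spw_sel)
```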

8.5.2  Calibration and baselining:
sdcal, sdcal2, tsdcal, tsdbaseline, sdbaseline, sdbaseline2 and sdgaincal

Tasks sdcal, sdcal2 and tsdcal are all calibration tasks that complete the basic bandpass correction and calibration: sdcal operates on Scantable format, sdcal2 operates on Scantable format but uses an interferometry style of generating and applying calibration tables (which it can also produce), and tsdcal operates on the Measurement Set. All three share the same basic formalism, bandpass-correcting the data and converting into brightness temperature in units of K, using the equation

TA* = Tsys × (ON − OFF) / OFF      (8.1)

where ON and OFF are the data on-source (i.e., during the raster scanning) and off-source (on a reference position where only background emission exists), respectively.
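
Equation 8.1 can be sketched directly on arrays of spectra; this is the per-channel operation the calibration tasks perform (the values below are arbitrary illustrations):

```python
import numpy as np

# Arbitrary illustrative spectra (raw correlator units) and Tsys (K).
on = np.array([110.0, 121.0, 110.0])    # on-source data
off = np.array([100.0, 100.0, 100.0])   # off-source (reference) data
tsys = 100.0                            # system temperature in K

# TA* = Tsys * (ON - OFF) / OFF  (equation 8.1)
ta_star = tsys * (on - off) / off
print(ta_star)
```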

Of significance is the way in which 'OFF' observations are identified. The three calibration modes are position-switched ('ps'), on-the-fly ('otf') and OTF-raster ('otfraster'). The OFF positions in 'ps' mode are explicit in the observing mode: an isolated part of the sky free from emission is observed, and the corresponding data are automatically labelled as 'OFF' by the online software at the point of data-taking. Note that calmode='ps' applies even to raster-mapped observations which have an explicit 'OFF' position observed during data-taking.

Calmode='otfraster' is used to calibrate raster-mapped regions of sky, where the edges of the map in the raster direction are statistically combined on a per-raster basis and used to calibrate the rest of the raster row. Exactly how much of the 'edge' of the map is used as 'OFF' positions is under the control of the user (using the 'fraction' or 'noff' keywords). Note that calmode='otfraster' applies only to square or rectangular fields.

Calmode='otf' is a much more general mode which does not require (but is happy with) rectangular fields. Areas (fast-)mapped with Lissajous or double-circle observational modes without an explicit 'OFF' target can really only be processed with calmode='otf'. It is a more general extension of 'otfraster', in which the edge of the field in any direction can be considered as an 'OFF'; having said that, this calmode may give some strange results when applied to data raster-mapped in one direction.
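
The 'otfraster' edge selection can be sketched conceptually as follows: for each raster row, a fraction of the samples at both ends is treated as OFF and averaged, and that average calibrates the rest of the row. This is an illustration of the idea, not the tsdcal internals:

```python
import numpy as np

def edge_off(row, fraction=0.1):
    """Average the leading and trailing `fraction` of a raster row as OFF."""
    n_off = max(1, int(len(row) * fraction))  # samples per edge ('noff' analogue)
    return np.mean(np.concatenate([row[:n_off], row[-n_off:]]))

# One hypothetical raster row: emission in the middle, flat sky at the edges.
row = np.array([100.0, 100.0, 105.0, 120.0, 105.0, 100.0, 100.0])
off = edge_off(row, fraction=0.3)  # 0.3 * 7 samples -> 2 samples per edge
print(off)
```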

New to CASA 4.7 is the task sdgaincal, specific to the single-dish ALMA observing pattern "double circle"; however, this task is still under development, and the application stage is buggy! Please submit an ALMA helpdesk ticket if you wish to use this task. In this mode, the telescope beam moves in fast-mapping mode, in smaller half-circles, with the circle centre rotating slowly around the target centre. The end result is a large, circular image, with the beam having passed through the centre of the field a number of times equal to the number of sub-circles. This gain-calibration task assumes the brightness of the centre of the target does not vary quickly with time, so that a time-varying gain calibration can be determined by normalising the brightness from each pass through the centre. At this point, the task is developed specifically for ALMA solar observations, although it can be used for any target observed with the double-circle observation mode.

Tasks tsdbaseline, sdbaseline and sdbaseline2 fit (via least-squares fitting of 'poly', 'chebyshev', 'cspline' or 'sinusoid' functions) and remove a spectral baseline, and can optionally correct for atmospheric opacity (via tau) or perform unit conversion from Kelvin (the native single-dish brightness unit) to Janskys (via fluxunit). Both sdbaseline and tsdbaseline have an automated line-finding algorithm, which can be controlled and optimised by the user.

Note that 'baseline' is typical parlance in single-dish processing, referring to what is commonly low-order structure in the frequency domain. The structure arises primarily from the time-variable turbulent and wet atmosphere in the (near-field) beam of the single dish, and is therefore different for each single-dish telescope. Much of the operation of these tasks has to do with identifying parts of the spectrum with or without emission, and the parameters can be set according to a number of different strategies.
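
The essence of these baseline tasks can be sketched with numpy: fit a low-order polynomial to the line-free channels and subtract it from the whole spectrum. The real tasks add masking controls, spline/sinusoid bases and automated line finding; this is only the core idea, on synthetic data:

```python
import numpy as np

nchan = 64
x = np.arange(nchan, dtype=float)

# Synthetic spectrum: linear baseline plus a 'line' in channels 30-33.
spectrum = 0.1 * x + 2.0
spectrum[30:34] += 5.0

# Mask out the line channels, fit a 1st-order polynomial to the rest
# (cf. orders 1-2 sufficing at Band 3), and subtract it everywhere.
mask = np.ones(nchan, dtype=bool)
mask[30:34] = False
coeffs = np.polyfit(x[mask], spectrum[mask], deg=1)
residual = spectrum - np.polyval(coeffs, x)
print(round(float(residual[31]), 2))
```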

Task sdreduce is implemented as an all-in-one package that can accomplish calibration, smoothing and baselining (i.e. sdcal, sdsmooth and sdbaseline) all together.

Task sdgaincal is a specialised task implemented for the purpose of calibrating the 'doublecircle' observation mode in ALMA. The beam sweeps out a large number of smaller circles rotating about, and overlapping at, a central position. The sky brightness (including the target) is assumed to be constant at that central position for the duration of the observation, and any variance is therefore a gain variation injected by sky+system. This task is useful in particular for fast-mapped solar observations at higher ALMA bands (B8+). Please note that this task is buggy for the CASA 5.0 release, and users should contact ALMA via the helpdesk for the latest development information.

8.5.3  Gridding and imaging:
sdimaging, sdgrid, sdimprocess

Gridding data follows a few steps which themselves comprise a series of sampling functions; the fundamental steps here are common to the two gridding tasks, sdimaging and sdgrid (sdimprocess is an image-optimisation task; see below). sdgrid operates on the ASAP format, while sdimaging operates on the Measurement Set format.

  1. Generate a grid whose parameters are characterised from metadata and from user inputs.
  2. Iteratively populate the grid with values extracted and weighted from the actual data.
  3. Convolve the gridded data with a smoothing kernel.

The user has significant control over the various gridding functions, and should bear in mind that the actual achieved resolution in the final data is necessarily a convolution of a number of sampling functions, with the end result that the effective beam size of the data is larger than the actual beam size of the telescopes.

A number of smoothing kernels are available in sdimaging and sdgrid, including spherical, Gaussian and Gauss-Jinc (GJinc) functions.

The different functions are somewhat (but not completely) akin to the robust parameter in interferometric processing, where the weights are distributed across the function to optimise sensitivity or resolution. The actual shape of the gridding kernel is fully constrained in combination with the kernel truncation scales, specified with convsupport for spherical kernels and truncate for Gaussian and GJinc kernels; the kernel width scales, specified with gwidth for Gaussian and GJinc kernels; and the jinc function parameter, specified with jwidth.

The user is advised to start with a spherical function (SF), and then to determine, on the basis of their own science goals, which functions are most appropriate for their purpose, by comparison of the resulting RMS (obtained with imstat) with the theoretical value.

When building the calibrator continuum image, use sdimaging with mode='channel', nchan=1 and width=nchan, where nchan is equal to the number of channels in the spectral window (as shown in the listobs output).
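
This parameter relationship simply collapses all channels of the window into one output channel. A sketch of the parameter set (the channel count below is a hypothetical value taken from listobs):

```python
# Hypothetical channel count for the calibrator spectral window (from listobs).
nchan_in_spw = 4080

# One output channel spanning the whole window.
imaging_params = dict(mode='channel', nchan=1, start=0, width=nchan_in_spw)

# Within CASA (infiles and the remaining grid parameters omitted):
#   sdimaging(infiles=['calibrator.ms'], mode='channel', nchan=1,
#             start=0, width=nchan_in_spw)
print(imaging_params['width'])
```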

The phasecenter parameter automatically defaults to the centre of the observed field.

Note that the sdgrid task has a means to modulate the weighting of contributing spectra that is not explicitly available in sdimaging; however, weights can now be initialised in single-dish data according to Tsys, integration time, and a number of other formalisms, with the CASA task initweights, before generating an image.

Note also that in CASA 4.7 and earlier, the weights for stokes='I' (sdimaging only) are computed by convention, assuming that Stokes I is a straightforward average of Stokes XX and YY. While the computation I=(XX+YY)/2 is mathematically equivalent to an average of XX and YY, it should not be regarded as such (since it ignores the presence of the Q terms), and the weights should not in general be computed as if that were the case. Future versions of CASA (5.0+) will handle the weights for Stokes I correctly by default, but will preserve the option of the (formally incorrect) convention.

New to CASA 4.7 is a clipminmax mode. This option rejects the maximum and minimum values contributing to each gridded pixel value, and is a very trimmed implementation of median smoothing; it is nonetheless more robust to spurious data points. Note that the benefit of clipping is lost when the number of integrations contributing to each gridded pixel is small, or where the incidence of spurious data points is comparable to or greater than the number of beams (in area) encompassed by the expected image.
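
The clipminmax operation can be sketched as: for each gridded pixel, discard the single maximum and minimum contributing values before averaging. A minimal illustration of why this rejects an isolated spurious datapoint:

```python
import numpy as np

def clipped_mean(values):
    """Average the contributions to one grid pixel after dropping the
    single minimum and maximum value (a very trimmed median smoothing)."""
    v = np.sort(np.asarray(values, dtype=float))
    return v[1:-1].mean()

# Contributions to one pixel, including one spurious datapoint (99.0).
contributions = [1.0, 1.2, 0.9, 1.1, 99.0]
print(clipped_mean(contributions))
```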

Task sdimprocess is not a gridding task, but an image-optimisation task that attempts a Fourier-based approach to removing scanning noise, by either substituting Fourier components from data obtained over a variety of scanning directions (Emerson & Gräve 1988), or extrapolation in the Fourier domain if only one scanning direction is used (Sofue & Reich 1979). It operates on either CASA or FITS images, which are assumed in this case to be significantly dominated by gain variations in the telescope data (which manifest as stripes in the direction of scanning), and returns data in CASA image format. While it has the advantage that scanning noise can be effectively removed, the penalty is a loss of Fourier components, which is likely to modify the representation of structure in the final image in a way unique to each dataset, and difficult to quantify without simulations.

8.5.4  Dataset mathematics and manipulation:
sdcoadd, sdaverage, tsdsmooth, sdmath, sdscale

It is anticipated that many of these tasks will be unsupported in the near future, since the migration to Measurement Set format will make them redundant, and ongoing development has enabled alternatives.

sdcoadd operates on both Measurement Set and Scantable format data, and simply returns a single merged dataset from multiple input scantables. Some tolerances have to be applied when attempting to keep the spectral windows aligned in the frequency domain. In the migration to Measurement Set format, sdcoadd is redundant with the task concat, which achieves a similar function, and there are few occasions, if any, where it need be invoked in standard Measurement Set reduction.

Note that, due to the operation of the underlying ASAP libraries, averaging by polarization (polaverage=True) automatically averages in time too. There is no way to avoid this in sdaverage; the workaround is to leave polarization averaging until the last step, and apply it during gridding. Smoothing kernels include hanning, gaussian, boxcar and regrid.

The weighting parameters for averaging ('tintsys', 'tsys', 'tint', 'var' or 'median') are attempts to normalise the noise level between accumulated spectra.

When averaging integrations with, for example, different Tsys, dν and/or dT, normalising the data correctly is important. For data where the channel widths are identical but the integration times differ, the averaged spectrum is found with:

Tav(ν) = (T1(ν)× w1 + T2(ν)× w2)/(w1+w2)     (8.2)

where the ratio of weighting parameters, w1:w2 is found with the ratio of integration times, dT1:dT2.

The other weighting schemes ('tsys', 'tintsys') are analogues of the 'tint' weighting, but instead capture the variation of Tsys; e.g. normalising under 'tsys' weighting uses 1/Tsys2. These terms are naturally derived from the radiometer equation, where the variance in a signal is a function of the system temperature, channel width and integration time:

Var(Tsys, dν, dT) = Tsys2 / (dν × dT)

where Tsys is the measured system temperature in K, dν is the channel width in frequency, and dT is the integration time in seconds.

The alternatives CASA SD considers, and some broad use cases, are:

  1. normalise by integration time and system temperature (tintsys): dT/Tsys2, when dν is known to be invariant;
  2. normalise by system temperature only (tsys): 1/Tsys2, for when dν and dT are known to be invariant;
  3. normalise by integration time only (tint): dT, when Tsys and dν are known to be invariant;
  4. normalise by variance (var): an empirical calculation from the spectra, which should implicitly capture variations of any of the terms.
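
Equation 8.2 combined with the 'tintsys' weights (w = dT/Tsys2) can be sketched as follows; the spectra and metadata are arbitrary illustrative values:

```python
import numpy as np

# Two spectra with identical channel widths but different Tsys and/or
# integration times (arbitrary illustrative values).
t1 = np.array([1.0, 2.0, 3.0]); tsys1 = 100.0; dt1 = 10.0
t2 = np.array([2.0, 4.0, 6.0]); tsys2 = 200.0; dt2 = 10.0

# 'tintsys' weighting: w = dT / Tsys**2 (from the radiometer equation).
w1 = dt1 / tsys1**2
w2 = dt2 / tsys2**2

# Equation 8.2: weighted average of the spectra.
t_av = (t1 * w1 + t2 * w2) / (w1 + w2)
print(t_av)
```

Note how the spectrum with the higher Tsys (and hence higher variance) contributes less to the average, as expected.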

Task tsdsmooth applies a standard smoothing in the spectral domain, using a Gaussian or boxcar kernel. Note that the smoothing is done by convolution in the Fourier domain, and therefore masking and blanking data can affect the outcome; please smooth masked data with care. For best results, baseline the data before smoothing, to ensure there are few large excursions to zero in the spectral domain which might contaminate the transformed data.

The task sdmath can perform basic arithmetic operations on Scantable or Measurement Set format datasets. This task will be unsupported in the near future. The expr parameter is formed using combinations of operators and, optionally, datasets or variables. For two datasets containing, for example, "on" data and "off" data (i.e. science and reference observations), the data can be calibrated via the standard (on−off)/off algorithm:
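
A sketch of what such an expr string might look like (the dataset names are hypothetical placeholders, and the exact quoting convention should be checked against the sdmath task documentation):

```python
# Hypothetical 'on' and 'off' dataset names.
on_name, off_name = 'target_on.asap', 'target_off.asap'

# The standard (on - off) / off calibration expressed as an sdmath expr
# string (file names quoted inside the expression).
expr = '("%s" - "%s") / "%s"' % (on_name, off_name, off_name)
print(expr)

# Within CASA (sketch):
#   sdmath(expr=expr, outfile='target_cal.asap')
```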


Task sdscale simply scales the data by a user-supplied constant value. This functionality is redundant with sdmath, and also with gencal and applycal; like sdmath, it will be unsupported in the near future.

8.5.5  Fitting:
sdfit, tsdfit

Fitting of Gaussian or Lorentzian profiles to spectral data is optimised with the Levenberg-Marquardt algorithm. The fitting is done on a per-spectrum basis, using user-supplied first guesses for the requested number of components and their positions. The guesses of peak intensity, position and width provided to the fitting algorithm are determined by dividing the user-provided range into N sub-ranges (where N is the number of components, also provided by the user); the peak value and its position are determined for each sub-range directly from the data, and the width estimate is computed as 70% of the equivalent width of the range.
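
The initial-guess scheme described above can be sketched as: split the fit range into N sub-ranges and take each sub-range's peak value and position directly from the data (the width heuristic is omitted here for brevity). This is an illustration of the idea, not the task internals:

```python
import numpy as np

def initial_guesses(spectrum, n_components):
    """Peak value and channel position per sub-range, as first guesses."""
    guesses = []
    for chunk_x, chunk_y in zip(
            np.array_split(np.arange(len(spectrum)), n_components),
            np.array_split(np.asarray(spectrum, dtype=float), n_components)):
        i = int(np.argmax(chunk_y))  # peak within this sub-range
        guesses.append((chunk_y[i], int(chunk_x[i])))  # (peak, position)
    return guesses

# Two synthetic 'lines' peaking at channels 2 and 7.
spec = [0, 1, 5, 1, 0, 0, 2, 9, 2, 0]
print(initial_guesses(spec, 2))
```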

New to CASA 4.7 is the fitmode=’auto’. As the name suggests, this option automatically detects line emission according to the specified expandable parameters. Generally the automatic detection works very well, except in cases with a great deal (>>50%) of overlap between two or more (Gaussian or Lorentzian) components.

From the user's point of view, one of the key differences between sdfit and tsdfit is that sdfit applies additional smoothing. sdfit is not part of the transition to Measurement Set (it operates on Scantable format as well as Measurement Set format), though there are development plans to support on-the-fly averaging in tsdfit in the future. Note that sdfit will be unsupported in the near future.

8.5.6  Dataset output:
sdplot, sdlist, sdstat

The functionality of these three tasks is now generally accessed through plotms, listobs, imstat and visstat2, which provide much of the same functionality in Measurement Set format.

sdlist, used to obtain basic observational information about the dataset, is replaced by listobs for Measurement Set format data, which provides the same functionality. sdstat and visstat2 operate in the frequency domain, whereas imstat operates in the image domain. Similarly, sdplot, which is used to plot the spectral information of the dataset, has a capability for polarisation averaging before plotting that is lacking in plotms (though note that polarisation averaging in ASAP tasks always performs time averaging at the same time).

These tasks will be maintained until the migration to Measurement Set format is complete.

8.5.7  Data input and output:

Data can be converted across formats using sdsave. Output formats include scantable, Measurement Set, SDFITS, and ASCII. This task has little, if any, application in the core reduction of single-dish data, but it is useful if the user wishes to export data into ASCII format for offline processing outside CASA.

8.6  Import of NRO data and scantable data

New to CASA 4.7 is importnro, a filler to convert NOSTAR format to Measurement Set format. This new task obviates the need for conversion into scantable format via sdsave, which will not be developed further after approximately CASA 5.0.

Even so, at this point, importing NRO data into scantable format is still available, and both NEWSTAR and NOSTAR formats are supported, although multibeam data is currently problematic and older data using AOS may show some inconsistencies relative to the original (pre-import) data. CASA currently supports dual-polarization data. Note also that the different beams are identified by ANTENNA_ID in the Measurement Set.

In scantable format, an individual IFNO is assigned to each array to identify data from that array; arrays are numbered by successive IFNOs. For example, if observations of three arrays, A01, A03, and A05, are stored in a dataset, their IFNOs will be 0, 1, and 2, respectively. The parameter freqref controls how the frequency reference frame is set. The default is ’rest’; the other option is ’vref’, in which case the frequency reference frame is taken from the VREF field in the input NRO data. This parameter is only available at the tool level, i.e. sd.scantable.

Note that at this point, processing of full polarization data needs to be done very carefully when working with scantable format, and with particular attention paid to the spectral window identification.

8.7  Using The ASAP Toolkit

Please note that ASAP operates on scantable format, which will not be supported after approximately the next CASA release (CASA 5.0).

When operating on scantable format, the tasks transparently invoke a call to sd.scantable to read the data. The scantable objects do not persist within CASA after completion of the tasks; they are destroyed to free up memory. In contrast, the tasks that operate on Measurement Set format work from a file on disk rather than in memory as the ASAP toolkit does.

Although the Measurement Set can store data from multiple antennas even when it consists only of single-dish spectra (auto-correlation data), the scantable cannot distinguish data from multiple antennas. This causes a problem when the user processes a Measurement Set using tasks that expect ASAP format. Therefore, when using tasks that can also operate on scantable format to process Measurement Set data, the id or name of the antenna that the user wants to process must be explicitly specified via the antenna parameter. By default (antenna=0), data associated with antenna id 0 is imported. The antenna parameter has no effect for other input data formats.

ASAP is included with the CASA installation/build. It is loaded upon start-up, and the ASAP functionality is under the Python ’sd’ tool.

While CASA tasks operating on Measurement Set format are able to trap leading and trailing whitespace in string parameters (such as infile), ASAP does not. Note also that ASAP is case-sensitive, with most parameters being upper-case (e.g. ’ASAP’).

8.7.1  Configurable SD Environment Variables

A number of environment variables are used by the CASA SD tools (see .asaprc). Within CASA, they are accessible through keys and values of the Python dictionary sd.rcParams, e.g.:
Set verbose mode (producing verbose feedback from CASA SD):

  sd.rcParams['verbose'] = True

Set whether scantable operations are done in memory or on disk (use ’disk’ when the dataset is large):

  sd.rcParams['scantable.storage'] = 'disk'

(Note: setting sd.rcParams['scantable.storage'] = 'disk' may cause data to be overwritten when using some tools and functions, even if sd.rcParams['insitu'] = False.) See § 8.7 for more details on ASAP-based CASA tasks which affect on-disk data. See § 8.1.1 for more details on the ASAP environment variables.

8.7.2  ASAP tools

The ASAP interface is essentially the same as that of the CASA toolkit, that is, there are groups of functionality (aka tools) which have the ability to operate on your data. The complete list of ’sd’ tools is:

sd.AsapLogger              sd.asaplinefind            sd.mask_or
sd.__builtins__            sd.asaplog                 sd.matplotlib
sd.__class__               sd.asaplog_post_dec        sd.merge
sd.__date__                sd.asapmath                sd.new_asaplot
sd.__delattr__             sd.asapplotter             sd.opacity
sd.__dict__                sd.average_time            sd.opacity_model
sd.__doc__                 sd.calfs                   sd.os
sd.__file__                sd.calibrate     
sd.__format__              sd.calnod                  sd.parameters
sd.__getattribute__        sd.calps                   sd.plotter
sd.__hash__                sd.commands                sd.plotter2
sd.__init__                sd.coordinate              sd.pylab
sd.__name__                sd.dosigref                sd.quotient
sd.__new__                 sd.dototalpower            sd.rc
sd.__package__             sd.edgemarker              sd.rcParams
sd.__path__                sd.env                     sd.rcParamsDefault
sd.__reduce__              sd.fitter                  sd.rcp
sd.__reduce_ex__           sd.flagplotter   
sd.__repr__                sd.get_revision            sd.sbseparator
sd.__revision__            sd.gui                     sd.scantable
sd.__setattr__             sd.inspect                 sd.selector
sd.__sizeof__              sd.interactivemask         sd.setup_env
sd.__str__                 sd.ipysupport              sd.simplelinefinder
sd.__subclasshook__        sd.is_asap_cli             sd.skydip
sd.__version__             sd.is_casapy               sd.splitant
sd._asap                   sd.is_ipython              sd.srctype
sd._is_sequence_or_number  sd.linecatalog             sd.sys
sd._n_bools                sd.linefinder              sd.toggle_verbose
sd._to_list                sd.list_files              sd.unique
sd.almacal                 sd.list_rcparameters       sd.utils
sd.apexcal                 sd.list_scans              sd.version
sd.asapfitter              sd.logging                 sd.welcome
sd.asapgrid                sd.mask_and                sd.xyplotter
sd.asapgrid2               sd.mask_not                

8.7.3  ASAP operation and function descriptions

Rasterutil

Rasterutil is a module which enables you to select individual raster rows or rasters from a raster-scanned dataset. Suppose you have a scantable ’foo.asap’ in which several rasters are scanned. To find out how many rasters or raster rows this dataset contains, first execute the following:

s=sd.scantable('foo.asap', False)
import rasterutil
r=rasterutil.Raster(s)
r.detect()

The numbers of raster rows and rasters are shown by typing ’r.nrow’ or ’r.nraster’, respectively. Once you run rasterutil.Raster.detect(), IDs are assigned in chronological order both for raster rows and for rasters: a raster row can be specified by an ID in the range 0 to r.nrow-1, while a raster can be specified by an ID from 0 to r.nraster-1.

Here are sample commands to obtain a scantable that contains only a specified raster row or raster, or to obtain other objects for selecting them:

srow0=r.asscantable(rowid=0)    #get scantable object containing the first raster row
sras1=r.asscantable(rasterid=1) #get scantable object containing the second raster
selrow2=r.asselector(rowid=2)   #get selector object for selecting the third raster row
taqlras3=r.astaql(rasterid=3)   #get TaQL query for selecting the 4th raster

Other useful commands include rasterutil.Raster.plot_rows(); data selected as a raster row or raster can be visualized using asapplotter.

Within ASAP, data is stored in a scantable, which holds all of the observational information and provides functionality to manipulate the data and information. The building block of a scantable is an integration which is a single row of a scantable. Each row contains just one spectrum of a beam, IF and polarization.

Once you have a scantable in ASAP, you can select a subset of the data based on scan numbers, or source names; note that each of these selections returns a new ’scantable’ with all of the underlying functionality:

  CASA <5>: scan27=scans.get_scan(27)                 # Get scan number 27
  CASA <6>: scans20to24=scans.get_scan(range(20,25))  # Get scans 20 - 24
  CASA <7>: scansOrion=scans.get_scan('Ori*')         # Get all Orion scans

To copy a scantable, do:

  CASA <15>: ss=scans.copy()

Data Selection

The selection syntax for single dish is based on the CASA Measurement Set data selection syntax.

Data can be selected based on IF, beam, polarization, and scan number, as well as on values such as Tsys. To make a selection, create a selector object and choose among the various selection functions, e.g.,

  sel = sd.selector()      # initialize a selector object
                           # sel.<TAB> will list all options
  sel.set_ifs(0)           # select only the first IF of the data
  scans.set_selection(sel) # apply the selection to the data
  print scans              # shows just the first IF

State Information

Some properties of a scantable apply to all of the data, such as spectral units, frequency frame, or Doppler type. This information can be set using the scantable.set_xxxx methods. These are currently:

CASA <1>: sd.scantable.set_<TAB>
sd.scantable.set_dirframe    sd.scantable.set_selection
sd.scantable.set_doppler     sd.scantable.set_sourcename
sd.scantable.set_feedtype    sd.scantable.set_sourcetype
sd.scantable.set_fluxunit    sd.scantable.set_spectrum
sd.scantable.set_freqframe   sd.scantable.set_tsys
sd.scantable.set_instrument  sd.scantable.set_unit

For example, sd.scantable.set_fluxunit sets the default units that describe the flux axis:

  scans.set_fluxunit('K')  # Set the flux unit for data to Kelvin

Choices are ’K’ or ’Jy’. Note: the scantable.set_fluxunit function only changes the name of the current flux unit. To actually convert the flux scale, use scantable.convert_flux as described in § instead (currently it is necessary to do some gymnastics for non-AT telescopes).

Use sd.scantable.set_unit to set the units to be used on the spectral axis:

  scans.set_unit('GHz')    # Use GHz as the spectral axis for plots

The choices for the units are ’km/s’, ’channel’, or ’*Hz’ (e.g. ’GHz’, ’MHz’, ’kHz’, ’Hz’). This does the proper conversion using the current frame and Doppler reference as can be seen when the spectrum is plotted.

Set the frame in which the frequency (spectral) axis is defined by sd.scantable.set_freqframe:

CASA <2>: help(sd.scantable.set_freqframe)
Help on method set_freqframe in module asap.scantable:

set_freqframe(self, frame=None) unbound asap.scantable.scantable method
    Set the frame type of the Spectral Axis.
        frame:   an optional frame type, default 'LSRK'. Valid frames are:
                 'REST', 'TOPO', 'LSRD', 'LSRK', 'BARY',
                 'GEO', 'GALACTO', 'LGROUP', 'CMB'

The most useful choices here are frame = ’LSRK’ and frame = ’TOPO’ (what ALMA actually observes in). Note that the ’REST’ option is not yet available. The Doppler frame is set with sd.scantable.set_doppler:

CASA <3>: help(sd.scantable.set_doppler)
Help on method set_doppler in module asap.scantable:

set_doppler(self, doppler='RADIO') unbound asap.scantable.scantable method
    Set the doppler for all following operations on this scantable.
        doppler:    One of 'RADIO', 'OPTICAL', 'Z', 'BETA', 'GAMMA'
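The RADIO and OPTICAL conventions relate velocity to frequency differently. These are the standard definitions (independent of ASAP), with f0 the rest frequency:

```python
C_KMS = 299792.458  # speed of light in km/s

def v_radio(f, f0):
    """RADIO convention: v = c * (1 - f/f0); linear in frequency."""
    return C_KMS * (1.0 - f / f0)

def v_optical(f, f0):
    """OPTICAL convention: v = c * (f0/f - 1), i.e. c * z."""
    return C_KMS * (f0 / f - 1.0)

# Both are zero at the rest frequency and agree to first order,
# but the OPTICAL value exceeds the RADIO value at larger redshifts.
```

This is why the Doppler setting matters when the spectral axis is displayed in km/s: the same frequency shift maps to different velocities under the two conventions.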

Finally, there are a number of functions to query the state of the scantable. These can be found in the usual way:

CASA <4>: sd.scantable.get_<TAB>
sd.scantable.get_abcissa       sd.scantable.get_parangle
sd.scantable.get_antennaname   sd.scantable.get_restfreqs
sd.scantable.get_azimuth       sd.scantable.get_rms
sd.scantable.get_doppler       sd.scantable.get_first_rowno_by_if
sd.scantable.get_column_names  sd.scantable.get_row
sd.scantable.get_coordinate    sd.scantable.get_row_selector
sd.scantable.get_direction     sd.scantable.get_scan
sd.scantable.get_directionval  sd.scantable.get_selection
sd.scantable.get_elevation     sd.scantable.get_sourcename
sd.scantable.get_fit           sd.scantable.get_spectrum
sd.scantable.get_fluxunit      sd.scantable.get_time
sd.scantable.get_inttime       sd.scantable.get_tsys
sd.scantable.get_mask          sd.scantable.get_tsysspectrum
sd.scantable.get_mask_indices  sd.scantable.get_unit
sd.scantable.get_masklist      sd.scantable.get_weather

These include functions to get the current values of the states mentioned above, as well as methods to query the number of scans, IFs, and polarizations in the scantable and their designations. See the inline help of the individual functions for more information.

Masks

Several functions (fitting, baseline subtraction, statistics, etc.) may be run on a range of channels (or velocity/frequency ranges). You can create masks of this type using the create_mask function:

  # spave = an averaged spectrum
  rmsmask=spave.create_mask([5000,7000])   # create a region over channels 5000-7000
  rms=spave.stats(stat='rms',mask=rmsmask) # get rms of line free region

  rmsmask=spave.create_mask([3000,4000],invert=True) # choose the region 
                                                     # *excluding* the specified channels

The mask is stored in a simple Python variable (a list) and so can be manipulated using Python facilities.

scantable Management

The scantables in memory can be listed via:

  CASA <33>: sd.list_scans()
  The user created scantables are:
  ['scans20to24', 's', 'scan27']

Since every scantable consumes memory, you can explicitly delete any scantable you no longer need via:

  del <scantable name>

scantable Mathematics

It is possible to do simple mathematics directly on scantables from the CASA command line using the +, -, *, / operators, as well as +=, -=, *=, /=:

  CASA <10>: scan2=scan1+2.0 # add 2.0 to data 
  CASA <11>: scan *= 1.05    # scale spectrum by 1.05 

Operands can be a numerical value or a one- or two-dimensional Python list. For a list operand, its shape must conform to the shape of the spectral data stored in the scantable. Mathematics between two scantables is also available; in that case, the two scantables must conform to each other.
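The conformance rule can be illustrated with a minimal sketch in plain Python (the function name is hypothetical; this is not the scantable implementation):

```python
def apply_operand(spec, operand):
    """Multiply a spectrum by an operand following the rules above:
    a scalar is applied to every channel, while a list operand must
    conform to the spectrum's shape."""
    if isinstance(operand, (int, float)):
        return [v * operand for v in spec]
    if len(operand) != len(spec):
        raise ValueError("list operand must conform to the spectrum shape")
    return [v * o for v, o in zip(spec, operand)]

print(apply_operand([1.0, 2.0, 3.0], 2.0))              # scalar: scales every channel
print(apply_operand([1.0, 2.0, 3.0], [0.0, 1.0, 2.0]))  # per-channel list operand
```

A list of the wrong length raises an error rather than being broadcast, which mirrors the "must conform" requirement stated above.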

NOTE: In scantable mathematics, the scantable must be on the left-hand side. For example:

  CASA<12>: scan2=scan1+2.0   # this works
  CASA<13>: scan2=2.0+scan1   # this causes an error

scantable Save and Export

ASAP can export scantables in a variety of formats, suitable for reading into other packages. The formats are ASAP (scantable), MS2 (Measurement Set), SDFITS, and ASCII.

Scantables are exported by the function save:

  scans.save('', format='MS2')

Tsys scaling

For some observatories, the calibration happens transparently as the input data contains the Tsys measurements taken during the observations. The nominal ’Tsys’ values may be in Kelvin or Jansky. The user may wish to apply a Tsys correction or apply gain-elevation and opacity corrections.

If the nominal Tsys measurement at the telescope is wrong due to incorrect calibration, the scale function allows it to be corrected.

  scans.scale(1.05,tsys=True) # by default only the spectra are scaled
                              # (and not the corresponding tsys) unless tsys=True

Flux and Temperature Unit Conversion

The function convert_flux is available for converting measurements in Kelvin to Jansky (and vice versa). It converts and scales the data to the selected units. The user may need to supply the aperture efficiency, the telescope diameter, or the Jy/K factor:

  scans.convert_flux(eta=0.48, d=35.) # Unknown telescope
  scans.convert_flux(jypk=15) # Unknown telescope (alternative)
  scans.convert_flux() # known telescope (mostly AT telescopes)
  scans.convert_flux(eta=0.48) # if telescope diameter known
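For reference, a Jy/K factor like the one convert_flux needs follows from the standard single-dish point-source relation S/T = 2 k_B / A_eff. A minimal sketch (the function name is illustrative, and this is not convert_flux's internals):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def jy_per_k(eta, d):
    """Point-source sensitivity in Jy/K for a dish of diameter d metres
    with aperture efficiency eta: 2*k_B / (eta * pi * (d/2)**2),
    expressed in Jy (1 Jy = 1e-26 W m^-2 Hz^-1)."""
    a_eff = eta * math.pi * (d / 2.0) ** 2  # effective collecting area, m^2
    return 2.0 * K_B / a_eff * 1e26

# The "unknown telescope" values above (eta=0.48, d=35 m) give roughly
# 6 Jy/K, so Kelvin spectra would be scaled by about that factor.
```

Larger or more efficient dishes give a smaller Jy/K factor, i.e. more Jansky sensitivity per Kelvin of antenna temperature.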
