The operations described above would not necessarily all be included within a coordinate class, CoordSys, but might be implemented as methods of an image class, Image, which uses CoordSys. For example, the operation of regridding an image to conform with a predefined coordinate system does not belong within CoordSys but would rely heavily on the methods provided by it. In this section we will attempt to further refine our understanding of what the CoordSys class consists of.
Coordinate systems which represent the celestial sphere are of special significance in astronomy and provide a good illustration of an important point concerning the coordinate mapping function. Such coordinate systems consist of a spherical coordinate system together with a (spherical) map projection. The two components are separate entities, and there are good reasons to treat them as such.
Although map projections can only be specified mathematically with reference to a spherical coordinate system, they are in fact geometrical entities which exist independently of any coordinate system. This is most clearly seen in the "projective" map projections, which may be defined by purely geometrical constructs. The particular spherical coordinate system chosen when representing them mathematically has no special significance, since the projection could be reformulated in terms of any other spherical coordinate system.
Practically speaking, changing an image's map projection, for example changing its obliquity, or changing its type from say orthographic (SIN) to gnomonic (TAN), always requires interpolation on the pixel values and usually changes the shape of objects in the map. On the other hand, transforming an image's coordinate system, for example from equatorial to ecliptic, does not require interpolation and does not distort objects but instead simply introduces a new coordinate grid.
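To make the distinction concrete, here is a minimal sketch (plain C++ with our own function names, not AIPS++ classes) of the radial part of the two zenithal projections just mentioned, using the standard forms R = cos θ for SIN and R = cot θ for TAN:

```cpp
#include <cmath>
#include <cstdio>

// A sketch, not AIPS++ code: the radial part R(theta) of the two
// zenithal projections, in natural units where the generating sphere
// has unit radius. theta is the native latitude; the plane coordinates
// are x = R*sin(phi), y = -R*cos(phi).
double radiusSIN(double theta) { return std::cos(theta); }       // orthographic
double radiusTAN(double theta) { return 1.0 / std::tan(theta); } // gnomonic

int main() {
    const double kPi = 3.14159265358979323846;
    // The same point on the sphere lands at different plane positions
    // under the two projections, which is why changing SIN to TAN
    // requires regridding, while rotating the spherical coordinate
    // system underneath does not.
    for (double deg = 85.0; deg >= 60.0; deg -= 5.0) {
        double theta = deg * kPi / 180.0;
        std::printf("theta = %2.0f deg: R_SIN = %.4f, R_TAN = %.4f\n",
                    deg, radiusSIN(theta), radiusTAN(theta));
    }
    return 0;
}
```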
In fact, the dichotomy between the "static" and "transformable" parts of the mapping function can be seen in other places. For example, the wavelength calibration of an optical spectrum would be the static part of the spectrum's mapping function, whereas expression of the coordinate as a wavelength, frequency, or velocity, or changing its physical units (Å or nm), would be the transformable part. The static part of the mapping function will henceforth be referred to as the coordinate structure function, and the transformable part as the coordinate transformation function.
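The spectral example can be sketched the same way: the transformable part merely re-expresses a coordinate value that the static part has already determined. A hedged illustration, with helper names invented for this sketch:

```cpp
#include <cstdio>

// Sketch of the "transformable" part of a spectral mapping function:
// re-expressing a wavelength as a frequency or in different units.
// These helpers are illustrative only, not AIPS++ API.
const double kC = 2.99792458e8;  // speed of light in m/s

double wavelengthToFrequency(double lambdaMetres) { return kC / lambdaMetres; }
double metresToAngstroms(double lambdaMetres)     { return lambdaMetres * 1e10; }

int main() {
    // The wavelength itself comes from the static structure function
    // (e.g. an arc-line calibration); here we only re-express it.
    double lambda = 656.281e-9;  // metres
    std::printf("%.2f Angstrom = %.4e Hz\n",
                metresToAngstroms(lambda), wavelengthToFrequency(lambda));
    return 0;
}
```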
At a fairly abstract level, AIPS++ coordinate systems must have the following features:
Degenerate mappings, such as might correspond to (RA, dec) → (l, m, n), and overspecified mappings, such as (l, m, n) → (RA, dec), must be provided for.
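The degenerate case can be sketched directly, since two pixel axes determine all three direction cosines once (l, m, n) is constrained to the unit sphere. Illustrative C++ only, not a proposed interface:

```cpp
#include <cmath>
#include <optional>

// Sketch of a degenerate mapping: two pixel axes yield three world
// values (l, m, n), with n fixed by the unit-sphere constraint.
// The names and the simple scaling are illustrative only.
struct LMN { double l, m, n; };

std::optional<LMN> pixelToLMN(double px, double py,
                              double refPx, double refPy, double incr) {
    double l = (px - refPx) * incr;
    double m = (py - refPy) * incr;
    double r2 = l * l + m * m;
    if (r2 > 1.0) return std::nullopt;  // off the celestial sphere
    return LMN{l, m, std::sqrt(1.0 - r2)};
}
```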
Although the mixed mapping function encompasses both the mapping and the inverse mapping functions, it may well be based upon them.
Coordinate iterators should be supplied for the Coordinate class.
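One possible shape for such an iterator, sketched for a single linear axis (a hypothetical interface, not a design commitment):

```cpp
#include <cstddef>

// Sketch of a coordinate iterator over one linear axis: it steps
// through pixel positions and maintains the world coordinate with a
// single addition per pixel, rather than re-evaluating the full
// mapping function each time. Hypothetical interface.
class LinearCoordIterator {
public:
    LinearCoordIterator(double refPix, double refVal, double incr,
                        std::size_t length)
        : world_(refVal - refPix * incr), incr_(incr),
          pos_(0), length_(length) {}

    bool   more()  const { return pos_ < length_; }
    double world() const { return world_; }
    void   next()        { ++pos_; world_ += incr_; }

private:
    double world_, incr_;
    std::size_t pos_, length_;
};
```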
One way to extract an oblique slice as a subimage would be to copy the parent image's CoordSys object to the subimage and apply a rotation, translation, and scale (to account for the interpolation interval) via the linear transformation matrix, so that the oblique slice is mapped onto the x-pixel axis of the parent image; the y-pixel coordinate of the subimage would then be flagged as having a constant value of zero (the restriction).
Changes in the relative order of the subimage axes, i.e. transpositions, could be implemented by permuting the corresponding rows and columns of the transformation matrix.
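For instance, with a 2 × 2 linear transformation, swapping the two pixel axes amounts to swapping the columns of the matrix. A sketch (AIPS++ would presumably use its own array classes):

```cpp
#include <array>
#include <utility>

// Sketch: transposing pixel axes by permuting matrix columns. Since
// world = M * pixel, permuting the components of the pixel vector is
// equivalent to permuting the columns of M; permuting rows would
// reorder the world coordinates instead.
using Mat2 = std::array<std::array<double, 2>, 2>;

Mat2 swapPixelAxes(Mat2 m) {
    std::swap(m[0][0], m[0][1]);  // swap columns 0 and 1 in row 0
    std::swap(m[1][0], m[1][1]);  // ... and in row 1
    return m;
}
```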
If the coordinate system is too complex for the FITS coordinate model, the AIPS++ FITS writer will have to resort to using random parameters.
However, certain mathematical transformations such as FFTs and DFTs are based implicitly on linear coordinate systems and are most efficiently expressed in terms of the coefficients of these linear systems (reference pixel, reference value, and increment for each axis). The alternative of computing the coordinates of each pixel within the inner processing loop of these algorithms is far too inefficient.
Therefore, the CoordSys class must be prepared to supply the coefficients of these linear coordinate systems if it is sensible to do so (this is closely related to the task of producing FITS headers).
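A hedged sketch of what "supplying the coefficients if sensible" might look like for one axis (the names and the probing strategy are ours, not the proposed interface):

```cpp
#include <cmath>
#include <cstddef>
#include <optional>

// Sketch (our names, not the AIPS++ interface): probe a one-axis mapping
// function and return linear coefficients only if the axis really is
// linear to within a tolerance. An FFT- or DFT-based algorithm can then
// work from (refPix, refVal, incr) instead of evaluating the mapping
// function inside its inner loop.
struct LinearAxis { double refPix, refVal, incr; };

template <typename MapFn>  // MapFn: world = f(pixel)
std::optional<LinearAxis> asLinear(MapFn f, double refPix,
                                   std::size_t length, double tol = 1e-10) {
    double refVal = f(refPix);
    double incr   = f(refPix + 1.0) - refVal;
    double last   = static_cast<double>(length) - 1.0;
    // Spot-check linearity at the far end of the axis.
    if (std::fabs(f(last) - (refVal + (last - refPix) * incr)) > tol)
        return std::nullopt;  // not sensibly linear; caller must iterate
    return LinearAxis{refPix, refVal, incr};
}
```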
The relationship between pixel coordinates and the actual storage location of the pixels was not discussed in the above analysis, which concerned itself only with the relationship between pixel and image coordinates. Put another way, no assumption has been made about the value of the pixel coordinate of the first pixel in the image (the top- or bottom-left-hand corner, depending on which is adopted for AIPS++).
It may be useful for the Image class to differentiate between pixel coordinates and storage coordinates which are related to the way pixels are stored and so are inherently integral. For example, the first pixel in an image would always have storage coordinates (1, 1,...). Pixel coordinates might be translated and possibly inverted with respect to storage coordinates.
One example of why it may be useful to distinguish between pixel and storage coordinates is the subimaging operation described above: a subimage could retain the pixel coordinates of its parent, while its storage coordinates would always begin at (1, 1,...).
Storage coordinates are outside the scope of the CoordSys class which deals only with the relationship between pixel and image coordinates, and it would be the responsibility of the Image class to implement them. This would entail the maintenance of the integral offsets between pixel coordinates and storage coordinates, plus a query function to report the pixel coordinates at each corner of the image. Applications programmers would need to bear in mind that the pixel coordinates of an image don't necessarily begin at (1, 1,...), and with each request for the pixel values from a region of the image they would be supplied with the corresponding pixel coordinates. To save confusion, storage coordinates should only appear in low-level Image methods (possibly as private data) and remain invisible to applications programmers.
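A minimal sketch of the bookkeeping this implies (hypothetical names; as argued above, it would remain private to low-level Image methods):

```cpp
#include <cstddef>
#include <vector>

// Sketch of the offset the Image class might maintain between storage
// coordinates, which always begin at (1, 1, ...), and pixel coordinates,
// which may be translated, e.g. for a subimage cut from a larger parent.
struct PixelStorageMap {
    std::vector<long> offset;  // pixel = storage + offset, per axis

    std::vector<long> toPixel(std::vector<long> storage) const {
        for (std::size_t i = 0; i < storage.size(); ++i)
            storage[i] += offset[i];
        return storage;
    }

    // Pixel coordinates of the first pixel: not necessarily (1, 1, ...).
    std::vector<long> firstPixel() const {
        return toPixel(std::vector<long>(offset.size(), 1));
    }
};
```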
A Rumbaugh object model diagram for the AIPS++ coordinate classes is presented in Figure 1. The mapping function, which forms the heart of the diagram, is divided into an ordered sequence of three components: a linear transformation, followed by the coordinate structure function, and then the coordinate transformation function. The subimaging association is modelled by the linear transformation class, and likewise the coordinate transformation association is modelled by the coordinate transformation function class. The relation "mapping function transforms pixel coordinate to image coordinate" is represented as a ternary association between these three classes. Also of note, the association between CoordSys and Image is one-to-many, allowing a single CoordSys object to be shared among several Image objects, for example a dirty map, dirty beam, and cleaned maps.
Figure 1: Rumbaugh object model diagram for the AIPS++ coordinate classes.
The dynamic model for the AIPS++ coordinate classes is trivial since they implement a non-interactive computation (the mapping function).
A Rumbaugh functional model diagram for the AIPS++ coordinate classes is presented in Figure 2. It provides a coarse-grained description of the mapping function, showing the action of each of its three components.
Figure 2: Rumbaugh functional model diagram for the AIPS++ coordinate classes.
Terms used in this document are listed here in glossary form.