Version 1.9 Build 1367


next up previous
Next: Efficiency vs Incapsulation Up: No Title Previous: The Coordinate System is Cumbersome

Subsections

Coordinate Wish List

Conceptually Complicated and Uncleanly Divided

Access to AIPS++ coordinate information might be simplified soon, but the application programmer must still know about and understand Image Coordinates, Image Pixel Coordinates, and Pixel Coordinates. While these three levels of abstraction facilitate some operations and allow the programmer to treat any corner of an image as 0, 0, they also lead to a great deal of confusion. The confusion arises primarily because the data are firmly tied to the Pixel coordinate system, yet operations exist at the level of each of the three coordinate systems. For example, in implementing a CLEAN algorithm, one gets the maximum value of an array with

     Pixel maxpix = myImage.Maximum();
To designate the center of the image to be the pixel where the maximum was reported, one has to do
     PixelCoord myPixCoord = maxpix.GetPixelCoord();
     myImage.SetCenPix(myPixCoord);

If, for some reason, one has to shift the center of the image, one needs to get the ImPixelCoord out in the code as well. The net effect is that Pixel, PixelCoord, and ImPixelCoord are all visible at the application level throughout the code. While there might be good reasons to encapsulate functionality into the three coordinate systems, the concepts are not encapsulated: the programmer must think about all three simultaneously and remember which functions belong to which coordinate system. A bare minimum of two coordinate systems (data storage coordinates and a mapping of the data storage coordinates onto the sky) is required. The programmer should have, as much as possible, an independent and complete set of permissible operations in each coordinate system's interface (or the Image's interface). As it stands now, it is impossible to think of the Image as a set of values on a regular grid defined by "one" coordinate system. While the Image and its coordinate system must be flexible enough to handle difficult cases such as non-linear or non-orthogonal coordinate systems, this generality should not overburden the simple coordinate geometries.

We also note that the Image class has 38 methods and relies on more supporting classes than we wish to count. These numbers will obviously grow.

Coordinate Wish List

Here are a number of things I find useful about SDE, along with some thoughts about the current coordinate systems.

OBSRA and OBSDEC are required. Perhaps even a vector of OBSRA and OBSDEC. We should keep our eyes open for a better way of doing this.

There seems to be no axis-type data member in CoordSys. One cannot assume a spectral-line cube will always be in X, Y, F order. For that matter, we will probably attach things to coordinate axes that none of us have thought of yet.

The coordinates of an image and the coordinates of the Fourier transform of that image are related to each other, and the coordinate system must know about this. The parameters nx and ny are not considered to be part of the coordinate system. However, when the transform image is considered, the CELLSIZE of the transform image is equal to 1.0 / (CELLSIZE of the image * NX). In this way, NX and NY get a little incestuous with the coordinate system, and perhaps should be considered as part of the coordinates. Needs some thought.

Also, a thought from the YEG domain: what astronomical coordinates are needed for the YEGs? OBSRA, OBSDEC, the reference RA, the reference DEC, and the coordinate projection type. We YEGGERS need to see what parts of the Image coordinate system we can steal for ourselves, and we need to define the interface between the YEG coordinate system and the UV grid, which should be considered just another image, except with different coordinate axis types (``UU--SIN'', ``VV--SIN''...).

Consider an MEM-based program in which you want to take the current iteration's model (128 x 128) and convolve it with the PSF. An efficient way of doing this is to take the PSF (256 x 256) and FFT it once to make the transfer function (256 x 256). Then, each time we want to convolve the (128 x 128) image, we just do a (128 x 128) --> (256 x 256) FFT, multiply by the transfer function, then back-FFT (256 x 256) --> (128 x 128). In doing these FFTs, it is useful to store the dimensions and coordinate system of the previous image in the current (transformed) image; it aids in going back.
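The shape of that computation (zero-pad the model up to the PSF size, multiply by the transfer function, transform back, crop) can be sketched in 1-D with toy sizes. This is an illustrative sketch only: it uses a naive O(N^2) DFT to stay self-contained, not a real FFT library, and all names are invented.

```cpp
#include <cmath>
#include <complex>
#include <vector>

using cd = std::complex<double>;

// Naive DFT: forward when sign = -1, inverse (scaled by 1/N) when sign = +1.
std::vector<cd> dft(const std::vector<cd>& a, int sign) {
    const double pi = std::acos(-1.0);
    const int n = static_cast<int>(a.size());
    std::vector<cd> out(n);
    for (int k = 0; k < n; ++k) {
        cd s = 0;
        for (int t = 0; t < n; ++t)
            s += a[t] * std::polar(1.0, sign * 2.0 * pi * k * t / n);
        out[k] = (sign > 0) ? s / double(n) : s;
    }
    return out;
}

// Convolve a small model with a larger PSF: zero-pad the model to the PSF
// size, multiply by the transfer function, back-transform, and crop -- the
// same bookkeeping as the (128 x 128) -> (256 x 256) case in the text.
std::vector<double> convolveViaFFT(const std::vector<double>& model,
                                   const std::vector<double>& psf) {
    const size_t big = psf.size();                 // e.g. 256 in the text
    std::vector<cd> padded(big, 0.0);
    for (size_t i = 0; i < model.size(); ++i)      // zero-pad model -> big
        padded[i] = model[i];
    std::vector<cd> psfC(psf.begin(), psf.end());
    std::vector<cd> transfer = dft(psfC, -1);      // transfer function (once)
    std::vector<cd> mhat = dft(padded, -1);        // small -> big transform
    for (size_t k = 0; k < big; ++k)
        mhat[k] *= transfer[k];                    // multiply in uv domain
    std::vector<cd> conv = dft(mhat, +1);          // back-transform
    std::vector<double> out(model.size());         // crop back to model size
    for (size_t i = 0; i < out.size(); ++i)
        out[i] = conv[i].real();
    return out;
}
```

In a real MEM loop the transfer function would be computed once and reused, and the cropped result would carry the stored dimensions and coordinate system of the model, as suggested above.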


Please send questions or comments about AIPS++ to aips2-request@nrao.edu.
Copyright © 1995-2000 Associated Universities Inc., Washington, D.C.

Return to AIPS++ Home Page
2006-03-28