Ticket #1760 (closed enhancement: fixed)

Opened 8 years ago

Last modified 8 years ago

Infrastructure to support simultaneous measurements on multiple images

Reported by: price Owned by: price
Priority: normal Milestone:
Component: meas_algorithms Keywords:
Cc: rhl, jbosch, bick, cloomis, price, dubcovsky, ktl Blocked By:
Blocking: Project: LSST
Version Number:
How to repeat:

not applicable


We need to be able to use the various algorithms to make simultaneous measurements on multiple images (e.g., measurements on detector images after detection on stack, or measurements on images in different filters to get colors).


thoughts.cc (18.1 KB) - added by price 8 years ago.
Thoughts on design for measuring multiple images.

Change History

comment:1 Changed 8 years ago by DefaultCC Plugin

  • Cc bick, cloomis, price added

comment:2 Changed 8 years ago by price

Starting from the bottom and working up, let's consider the API for the measurement.

We have to ask first of all whether it's necessary to have all exposures in memory at once, or whether the measurements can be made in a single iteration over the exposures as they are loaded in turn. While the latter promises potentially substantial memory savings, we reject it because it limits the measurements that might be made. For example, non-linear minimisation (which might be needed for fitting parallaxes and proper motions directly to the pixels) requires multiple passes through the exposures. We will just have to be careful with the memory footprint.

We can therefore start with a small modification of the single-image MeasureSources:

    Measurement::Ptr doMeasure(
        std::vector<CONST_PTR(ExposureT)> im,
        CONST_PTR(lsst::afw::detection::Peak) peak,
        CONST_PTR(lsst::afw::detection::Source) source
    );

The footprints on the different images may be different (since, in the general case, the images aren't aligned), and so we should provide them. We shouldn't force all of the measurement codes to do this conversion themselves, as it's a common operation, and it might be tricky.

We might also at this point ask why we need a footprint at all, and whether the usual Footprint class is suitable. In the single-image measurement, the footprint specifies pixels (and possibly neighbours as well) that meet some signal-to-noise threshold. In the multiple-image measurement, however, we may well be measuring a source that has no pixels with significant signal-to-noise in any single image. In that case, the footprint doesn't serve much purpose, and might be replaced by an Ellipse to define the aperture of interest (and an Ellipse is much more easily transformed). However, the footprint may have a use in the context of multiple objects in the field, to define the pixels that (substantially) "belong" to a particular object; there a simple Ellipse doesn't work, as the division between the objects is likely not neat, and a Footprint makes sense.

Dustin reports that the Deblender will produce some sort of Footprint-like object per astrophysical object, so let's stick with a Footprint per image for now (recognising that this may change when the Deblender design anneals).
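
To illustrate why an Ellipse is "much more easily transformed" than a pixel Footprint, here is a minimal sketch, assuming a toy small-angle tangent-plane WCS (the types SkyEllipse, PixelEllipse, and ToyWcs are illustrative stand-ins, not the afw API): a single aperture defined on the sky is carried into each image's pixel frame through that image's WCS.

```cpp
#include <cmath>
#include <vector>

// Illustrative types only; the real code would use afw geometry and Wcs.
struct SkyEllipse {
    double ra, dec;   // centre, degrees
    double a, b;      // semi-axes, arcsec
    double theta;     // position angle, radians
};

struct PixelEllipse {
    double x, y;      // centre, pixels
    double a, b;      // semi-axes, pixels
    double theta;
};

// Toy local WCS: reference pixel, reference sky position, uniform scale.
struct ToyWcs {
    double crpix_x, crpix_y;     // reference pixel
    double crval_ra, crval_dec;  // reference sky position, degrees
    double scale;                // arcsec per pixel
};

PixelEllipse toPixelFrame(SkyEllipse const& e, ToyWcs const& wcs) {
    double const kPi = 3.141592653589793;
    double const arcsecPerDeg = 3600.0;
    // Small-angle offset from the reference point, in arcsec.
    double dRa  = (e.ra  - wcs.crval_ra)  * arcsecPerDeg
                  * std::cos(e.dec * kPi / 180.0);
    double dDec = (e.dec - wcs.crval_dec) * arcsecPerDeg;
    PixelEllipse out;
    out.x = wcs.crpix_x + dRa  / wcs.scale;
    out.y = wcs.crpix_y + dDec / wcs.scale;
    out.a = e.a / wcs.scale;  // a pure scale change leaves the
    out.b = e.b / wcs.scale;  // position angle untouched
    out.theta = e.theta;
    return out;
}

// One aperture per image, as the multi-image doMeasure would need.
std::vector<PixelEllipse> aperturesForImages(
    SkyEllipse const& e, std::vector<ToyWcs> const& wcss) {
    std::vector<PixelEllipse> out;
    for (auto const& w : wcss) out.push_back(toPixelFrame(e, w));
    return out;
}
```

Transforming a Footprint, by contrast, means remapping an arbitrary set of pixel spans between frames, which is why the conversion shouldn't be left to each measurement algorithm.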

Now, it's not clear what purpose the peak serves. In the single-image analysis, this provides pixel and sub-pixel peak positions, but that is irrelevant for the multiple-image case. We could transform the peak position to the frame of each image, or simply provide world coordinates for the peak.

The source is a place to put results and to retrieve required results, so it stays. The PSFs and WCSes are contained within the Exposures, so no need to include those explicitly. We end up with:

    Measurement::Ptr doMeasure(
        std::vector<CONST_PTR(ExposureT)> images,                      // contain the PSFs and WCSes
        CONST_PTR(lsst::afw::coord::Coord) coord,                      // position of interest
        std::vector<CONST_PTR(lsst::afw::detection::Footprint)> feet,
        CONST_PTR(lsst::afw::detection::Source) source
    );

Another question is how to treat sources that span multiple detectors in a single exposure. If we care about them, we can simply add the individual detector images to the 'images' vector as separate observations --- there's no need to treat multiple detectors in a single exposure in some special way.

comment:3 Changed 8 years ago by jbosch

  • Cc dubcovsky added

comment:4 Changed 8 years ago by price

I need to reproduce pretty much all of the current (single-image-only) measurement framework in afw and meas_algorithms in order to do the multiple-image measurement. RHL agrees that the best approach is to modify the current measurement framework to operate on a std::vector<ExposureT> instead of a single ExposureT. That then requires adding another function (for measuring something from a vector of images) to the MeasureQuantity::declare() registration function, which already takes two (measure from a single image, and configuration). It would be simpler if registration required just a single object of some class, with the method names defined in a base class; I plan to make this modification as well.
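
The registration change can be sketched as follows. This is a guess at the shape of the proposal, not the actual meas_algorithms code: AlgorithmBase and AlgorithmRegistry are illustrative names, and the pixel-array signatures stand in for the real Exposure types. The point is that declare() takes one polymorphic object instead of a growing list of function pointers.

```cpp
#include <map>
#include <memory>
#include <string>

// One base class defines all the entry points an algorithm may implement,
// so registration needs only a single object rather than separate
// single-image, multi-image, and configuration functions.
struct AlgorithmBase {
    virtual ~AlgorithmBase() = default;
    virtual double measureSingle(double const* pixels, int n) const = 0;
    virtual double measureMulti(double const* const* images,
                                int nImages, int n) const = 0;
    virtual void configure(std::map<std::string, double> const&) {}
};

class AlgorithmRegistry {
public:
    // declare() now takes one object with the methods defined in the base.
    void declare(std::string const& name,
                 std::shared_ptr<AlgorithmBase> alg) {
        algorithms_[name] = std::move(alg);
    }
    AlgorithmBase const& get(std::string const& name) const {
        return *algorithms_.at(name);
    }
private:
    std::map<std::string, std::shared_ptr<AlgorithmBase>> algorithms_;
};

// Toy algorithm: summed "flux" over pixels, and over all images.
struct SumFlux : AlgorithmBase {
    double measureSingle(double const* pixels, int n) const override {
        double s = 0.0;
        for (int i = 0; i < n; ++i) s += pixels[i];
        return s;
    }
    double measureMulti(double const* const* images,
                        int nImages, int n) const override {
        double s = 0.0;
        for (int j = 0; j < nImages; ++j) s += measureSingle(images[j], n);
        return s;
    }
};
```

Adding a new measurement level later then means adding one virtual method with a sensible default, rather than threading another function pointer through every declare() call site.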

comment:5 Changed 8 years ago by price

  • Status changed from new to assigned

Working on this in parallel with Winter2012.

Changed 8 years ago by price

Thoughts on design for measuring multiple images.

comment:6 Changed 8 years ago by price

The attachment includes some thoughts on the design for measuring multiple images. The code has not been compiled, but is intended more for getting a feel for things.

The main thrust is breaking up the current Measurement class into Measurement and Algorithm classes. The Algorithm base class defines methods to be used for measurement at different levels:

  • measureOne (measure a single image): the level we use already.
  • measureGroup (measure a group of images): can be composed of multiple measureOne calls, but if there are properties that can be measured in common for images in the same filter (e.g., galaxy shape) then all images in the group need to be available at the same time.
  • measureGroups (measure multiple groups of images): can be composed of multiple measureGroup calls, but if there are properties that can be measured in common for all images (e.g., center) then all images need to be available at the same time.
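
The three levels can be sketched like this. The method names follow the attachment's terminology, but the signatures are illustrative (plain doubles stand in for Exposures and Measurements): the base class composes each higher level from the one below by default, so an algorithm overrides a level only when it genuinely needs all of those images at once.

```cpp
#include <vector>

using Image = std::vector<double>;   // stand-in for ExposureT
using Group = std::vector<Image>;    // images sharing a filter

struct Algorithm {
    virtual ~Algorithm() = default;

    // Level 1: measure a single image; what the framework does today.
    virtual double measureOne(Image const& im) const = 0;

    // Level 2: default composition is just iteration over measureOne.
    // Override when a per-filter property (e.g. galaxy shape) needs
    // every image in the group simultaneously.
    virtual double measureGroup(Group const& group) const {
        double result = 0.0;
        for (auto const& im : group) result += measureOne(im);
        return result;
    }

    // Level 3: default composition iterates measureGroup over the groups.
    // Override when a global property (e.g. the center) needs all images.
    virtual double measureGroups(std::vector<Group> const& groups) const {
        double result = 0.0;
        for (auto const& g : groups) result += measureGroup(g);
        return result;
    }
};

// A toy flux algorithm that is happy with the default composition.
struct ToyFlux : Algorithm {
    double measureOne(Image const& im) const override {
        double s = 0.0;
        for (double p : im) s += p;
        return s;
    }
};
```

A simple aperture flux only implements measureOne and gets the other two levels for free; a simultaneous-fit algorithm overrides measureGroup or measureGroups instead.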

There is an example using a randomly selected measurement and algorithm (AperturePhotometry, AperturePhotometer) and a couple of workflows for use cases at the bottom.

I'm going to start on implementation.

comment:7 Changed 8 years ago by price

Might be able to use boost::multi_index instead of my custom InsertionOrderedMap.
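
The InsertionOrderedMap itself isn't shown in this ticket, so the contract below is a guess: a map with lookup by key that iterates in insertion order. boost::multi_index gives this directly by combining a sequenced<> index with a hashed key index; the stdlib-only sketch here gets the same semantics from a vector (order) paired with an unordered_map (lookup).

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

template <typename K, typename V>
class InsertionOrderedMap {
public:
    void insert(K const& key, V const& value) {
        if (index_.count(key)) return;   // first insertion wins
        index_[key] = items_.size();
        items_.emplace_back(key, value);
    }
    V const& at(K const& key) const {
        return items_.at(index_.at(key)).second;
    }
    // Iteration runs in insertion order, unlike std::unordered_map.
    auto begin() const { return items_.begin(); }
    auto end() const { return items_.end(); }
    std::size_t size() const { return items_.size(); }
private:
    std::vector<std::pair<K, V>> items_;       // preserves order
    std::unordered_map<K, std::size_t> index_; // key -> position
};
```

The boost::multi_index version avoids keeping the two containers in sync by hand, at the cost of a heavier dependency and template interface.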

comment:8 Changed 8 years ago by jbosch

I'm suppressing ideas better held for the great cleanup for now.

My big addition would be to define the multi-exposure measurement coordinate system as an image-like system, rather than celestial coordinates. Celestial coordinates have poles, wraparound, and other issues that are best avoided. I'd recommend passing the measurement algorithms a WCS that defines an image-like coordinate system to be used as the reference coordinate system. This could be, for instance, a gnomonic (TAN) coordinate system centered roughly on the object, but a particular Measurement/Algorithm subclass pair wouldn't be allowed to assume that; they'd just get a WCS. The driver code would then transform the measurements to celestial coordinates. That might be difficult to do unless the measurements are typed as scalars (like flux), distances (like radii), or geometries (specifically ellipses for shape). That might make it seem like more trouble than it's worth for now, of course, but I think it's eventually the way we'd want to go. Without it I think we might spend a lot of time trying to make sure all measurements work on all the different "corners" of the sphere, and it might be harder to convert our existing single-image algorithms.
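
As a concrete illustration of the suggested reference frame (the function name and the choice of radians are mine, not from the ticket), here is the standard gnomonic (TAN) projection: celestial coordinates are mapped onto a plane tangent to the sphere near the object, so algorithms work in a flat, image-like system with no poles or wraparound, and the driver converts results back to the sky afterwards.

```cpp
#include <cmath>
#include <utility>

// Gnomonic projection of (ra, dec) about a tangent point (ra0, dec0).
// All angles in radians; the result is (x, y) on the tangent plane,
// with x increasing eastward and y increasing northward.
std::pair<double, double> gnomonic(double ra, double dec,
                                   double ra0, double dec0) {
    double const cosC = std::sin(dec0) * std::sin(dec) +
                        std::cos(dec0) * std::cos(dec) * std::cos(ra - ra0);
    double const x = std::cos(dec) * std::sin(ra - ra0) / cosC;
    double const y = (std::cos(dec0) * std::sin(dec) -
                      std::sin(dec0) * std::cos(dec) * std::cos(ra - ra0)) /
                     cosC;
    return {x, y};
}
```

Near the tangent point this behaves like ordinary image coordinates, which is exactly what lets existing single-image algorithms carry over with minimal change.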

comment:9 Changed 8 years ago by price

  • Cc ktl added

The review said:

  • Don't care about "measureGroups": for our current purposes it can be implemented by iterating over measureGroup
  • Have Algorithms code implement only measureGroup (explicitly iterate over multiple images)
  • Rename ExposurePatch; it should just be an Exposure and a Footprint
  • Checking the filter in ExposureGroup is unnecessarily limiting: just use std::vector<ExposurePatch>
  • Measurement driver to modify Footprint and pass on to Algorithm as const
  • Measurement driver to receive one const Source and update another Source; Algorithms to get two const Sources.

I've done all of this, except for renaming ExposurePatch (it's fatter than just an Exposure and a Footprint, and I'm at a loss for a better name). I'm in the process of cleaning up, and aiming to merge to trunk on Monday afternoon.

comment:10 Changed 8 years ago by price

Merged to trunk:

comment:11 Changed 8 years ago by price

  • Status changed from assigned to closed
  • Resolution set to fixed

All done. I didn't get to put in the algorithm dependencies, but I can do that later.

Note: See TracTickets for help on using tickets.