wiki:Summer2013/ConfigAndStackTestingPlans
Last modified on 08/23/2013 08:46:51 PM

Final Info

Notes:

Plans:

The main goal of the production run is to test the middleware; however, it also presents an opportunity to make some adjustments to the configuration files and code to enhance the resulting data products. These enhancements will include:

  • Deblending
  • Star-galaxy separation
  • Better uniformity of depth and coverage
  • Coadding (background-matched, non-PSF-matched) three or five bands: forced photometry for the union of detections

Deliverables

  • Config files AND defaults in obs_sdss/config pushed to master
  • List of data IDs for input into coadds, including reference runs
  • Code updates, merged and tagged before June.

Deblender

The deblender takes a few config parameters. The defaults are:

config.deblend.psf_chisq_2=1.5
config.deblend.psf_chisq_1=1.5
config.deblend.psf_chisq_2b=1.5
config.deblend.maxNumberOfPeaks=20

On the Apps call 5/3, we decided not to spend too much time tweaking these. Qualitatively, we've seen that the deblender gives more reasonable results when maxNumberOfPeaks is limited. On one hand, we want all the peaks to be measured; on the other, we want to prevent shredding of bright stars, which populates the catalogs with bogus objects and wastes CPU time "measuring" them.
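If we do adjust the limit, the change would go in an override file rather than in the task defaults. A minimal sketch, assuming the standard pex_config override style and the parameter names listed above (the value shown is illustrative, not a tested recommendation):

```python
# Hypothetical override file (e.g. placed under obs_sdss/config):
# keep the chi-squared defaults, only cap the number of deblended peaks.
config.deblend.maxNumberOfPeaks = 10  # illustrative value only
```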

Action

Visually inspect the results to check that they look OK. Completeness tests will indirectly test the deblender.

Star/Galaxy Separation

Star-galaxy separation requires accurate galaxy photometry. Forced galaxy photometry will not be ready before June; however, we can use the new coaddPsf to measure modelFlux on the coadd. Perry is showing that this works on simulated LSST images.

Action

For a test region, we'll compare the star/galaxy separation with a deeper catalog (note: starting with a small HST field and DEEP2, fields 3_1 and 3_2; see the first set of notes above for more detail).
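The comparison needs the two catalogs matched by position first. A minimal sketch of a greedy nearest-neighbour match (the function name, tolerance, and brute-force loop are assumptions for illustration; in practice the stack's own matcher would be used):

```python
import math

def match_by_position(cat_a, cat_b, tol_arcsec=1.0):
    """Greedy nearest-neighbour match between two catalogs.

    cat_a, cat_b: lists of (ra_deg, dec_deg) tuples.
    Returns (i, j) index pairs separated by less than the tolerance.
    Hypothetical helper, O(N*M); fine for a small test region.
    """
    tol_deg = tol_arcsec / 3600.0
    pairs = []
    used = set()
    for i, (ra_a, dec_a) in enumerate(cat_a):
        best, best_d = None, tol_deg
        for j, (ra_b, dec_b) in enumerate(cat_b):
            if j in used:
                continue
            # Small-angle approximation; adequate at arcsecond tolerances.
            d = math.hypot((ra_a - ra_b) * math.cos(math.radians(dec_a)),
                           dec_a - dec_b)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```

Matched pairs can then be binned by magnitude to compare our star/galaxy flags against the deeper catalog's classifications.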

Uniformity of Depth and Coverage

Gaps and Camcol 1

http://content.screencast.com/users/YusraAlSayyad/folders/Jing/media/2d0e7c4a-a4fd-402f-a026-c42887c1e5c5/00000006.png

This shows the number of images going into the r-band coadds in the early W13. There's a problem in selectImagesTask around RA=0; Russell is fixing this in #2761. It also shows that many images from camcol 1 are not being added to the coadd. This is because the backgrounds in camcol 1 are more variable (dare I say problematic) and require more parameters to fit the offset between images. Gaps are a result of reference images missing for certain patches. We could include logic that says: if the requested reference run isn't found, pick a new one.

config.matchBackgrounds.maxMatchResidualRatio

No equations on wiki. Screen shot of tex:
http://content.screencast.com/users/YusraAlSayyad/folders/Jing/media/f05e111d-6389-4d20-aef5-41df3a630068/00000007.png http://content.screencast.com/users/YusraAlSayyad/folders/Jing/media/1d16d3d9-c304-4de6-bf8f-1757fb04744f/00000008.png

Action: Test config.matchBackgrounds.maxMatchResidualRatio for (u, g, i, z), per camcol, for the problem areas seen above. At least 80% of the available images should make it into the coadd in camcols 2-6.
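Per-band testing would mean sweeping the threshold in an override file; a minimal sketch, assuming the config path above (the value is a placeholder to be replaced by whatever the per-band tests settle on):

```python
# Hypothetical override file for one band's coadd run:
# loosen the background-match residual cut so fewer camcol-1 images
# are rejected. The value below is a placeholder, not a recommendation.
config.matchBackgrounds.maxMatchResidualRatio = 1.1
```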

Coadding all five bands: forced photometry of union of detections

We want to create coadds in all five bands (u, g, r, i, z) and find detections on all five coadds. In order to get all detected objects while using the fewest CPU-hours, we want to seed forced photometry with the union of detections in all five bands. The book-keeping for this is not in place yet, and new code would need to be written. Does anyone familiar with source association and forced photometry have time?
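The book-keeping amounts to merging the five per-band detection lists into one deduplicated seed list. A minimal sketch under simple assumptions (brute-force positional dedup with a fixed tolerance; the real implementation would use the stack's source-association code):

```python
def union_of_detections(per_band_detections, tol_deg=0.5 / 3600.0):
    """Merge detections from several bands into one seed list.

    per_band_detections: dict mapping band -> list of (ra_deg, dec_deg).
    A position is kept only if no already-kept position lies within
    tol_deg in both coordinates. Hypothetical sketch of the bookkeeping,
    not stack code; O(N^2) and ignores spherical geometry.
    """
    merged = []
    for band in sorted(per_band_detections):
        for ra, dec in per_band_detections[band]:
            if not any(abs(ra - r) < tol_deg and abs(dec - d) < tol_deg
                       for r, d in merged):
                merged.append((ra, dec))
    return merged
```

Forced photometry would then be run once per band over this merged list instead of once per band per per-band detection list.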

Risks

psfMag Discrepancies

SDSS fields overlap by 128 pixels, so an object that falls in this overlap region gets measured twice, and we see a discrepancy between the two flux measurements. The pixels are the same, but the fluxMag0s are given per CCD and the background subtraction is done per CCD.

The problem is a combination of the discretized fluxMag0 (which can be corrected in the dataset afterwards) and the background subtraction. I'm currently looking into (1) the distribution of the magnitude differences and (2) how much of the discrepancy is due to background subtraction versus the discretized zeropoints. We can use the fact that magnitudes depend on fluxMag0 while the psfFluxes do not. 90% of the psfFlux differences are within the smaller psfFluxSigma, that is, (max_psfFlux - min_psfFlux) / min_psfFluxSigma < 1. This indicates that most of the discrepancies are due to the discretized zeropoints. Fortunately, this can be addressed in the database, in post-processing.
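The 90% statistic above reduces to a simple check over the duplicate measurements. A sketch of that check (the function name and pair layout are assumptions for illustration):

```python
def within_sigma_fraction(pairs):
    """Fraction of duplicate measurements whose psfFlux difference is
    smaller than the smaller of the two psfFluxSigmas, i.e.
    (max_psfFlux - min_psfFlux) / min_psfFluxSigma < 1.

    pairs: list of ((flux1, sigma1), (flux2, sigma2)) for objects
    measured in both overlapping fields. Illustrative helper only.
    """
    n_ok = 0
    for (f1, s1), (f2, s2) in pairs:
        if abs(f1 - f2) / min(s1, s2) < 1.0:
            n_ok += 1
    return n_ok / float(len(pairs))
```

A high fraction here, combined with larger *magnitude* differences, points the blame at the discretized zeropoints rather than at the fluxes themselves.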