Last modified on 06/26/2013 03:17:29 PM

Stage 2 Galaxy Modeling Tasks for S13

The first version of the S13 galaxy modeling code is up and running, though there may still be bugs and it's far from feature-complete. This includes single-frame measurement on custom S13-specific simulated EImages, using single-component (exponential or de Vaucouleur) galaxy models and a flat prior.

Current Status Overview and General Instructions

We use a naive grid sampler in e1,e2,radius (the NaiveGridSampler class in meas_multifit) and a placeholder prior (SingleComponentPrior). The latter is a flat prior on all parameters that rejects negative amplitudes and rejects models with more than one component (i.e. it only integrates over the part of parameter space that corresponds to pure disk or pure bulge models, with an input parameter that specifies the expected probability of having a pure disk vs. a pure bulge). This was the least restrictive prior I could come up with that produces analytic integrals (requiring only one component collapses the integrals to 1-d) and still guarantees that the posterior distribution isn't completely degenerate at zero radius.
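The logic of that placeholder prior can be sketched in a few lines. This is a hypothetical standalone illustration, not the actual SingleComponentPrior C++ class: flat over the ellipse parameters, zero for negative amplitudes, and zero unless exactly one of the (disk, bulge) amplitudes is nonzero, weighted by the input disk-vs-bulge probability.

```python
# Hypothetical sketch of the SingleComponentPrior logic (names and
# signature are illustrative, not the real meas_multifit API).
def single_component_prior(amplitudes, p_disk=0.5):
    """Unnormalized prior density for [disk_amplitude, bulge_amplitude]."""
    disk, bulge = amplitudes
    if disk < 0 or bulge < 0:
        return 0.0  # reject negative amplitudes
    if disk > 0 and bulge > 0:
        return 0.0  # reject mixed (two-component) models
    # Flat along the remaining 1-d amplitude axis, weighted by the
    # expected probability of a pure disk vs. a pure bulge.
    return p_disk if disk > 0 else (1.0 - p_disk)
```

Restricting support to the two amplitude axes is what collapses the amplitude integrals to 1-d and keeps them analytic.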


The S13 modeling code currently requires the latest master versions of all packages except "shapelet", "meas_multifit", and "obs_lsstSim", which require the "u/jbosch/s13" branch (as well as a rebuild of "meas_extensions_multiShapelet", as this has a binary dependency on "shapelet" even though its master version is sufficient). I have built a shared stack at NCSA with all necessary packages using the $LSST_DEVEL sandbox "/home/jbosch/installs":

export LSST_HOME=/lsst/DC3/stacks/default
export LSST_DEVEL=/home/jbosch/installs
source ${LSST_HOME}/

In addition, I am maintaining a set of config files, command lines, and analysis scripts. This collection has been installed and declared as the "s13" package, and it also serves as a metapackage for the rest of the code, so you can set up the latest version of everything with:

setup s13 s13-v1.1+3

In particular, $S13_DIR/bin/cmdlines contains a pair of bash functions I use to easily run commands with the config files and standardize somewhat where the outputs go. I strongly recommend reading that file, even if you don't choose to source it and use the functions.


The current simulation dataset is at /home/jbosch/s13/data/v1 (using symlinks to /lsst8/krughoff/forJim), which is referred to as $S13_DATA_DIR in my analysis scripts and command lines (I've declared it as the "s13_data" package, which is a dependency of the "s13" package described above). It contains two subdirectories, "bulge" and "disk", each containing a single CCD (raft=2,2 sensor=1,1) eimage. We use processEimage.py rather than the standard CCD-processing task to generate the calexp and src datasets, as this lets us avoid ISR and reuse the same simulation at different noise levels via the "noiseValue" config parameter. The command lines in the s13 package put the outputs in $S13_DATA_DIR/output/[disk|bulge]-[noiseValue]. I have already run everything at two different noise levels, so the following intermediate repositories already exist:

$S13_DATA_DIR/output/disk-0100         # processEimage outputs with low noise (disk-only simulations)
$S13_DATA_DIR/output/bulge-0100        # processEimage outputs with low noise (bulge-only simulations)
$S13_DATA_DIR/output/disk-1000         # processEimage outputs with realistic noise (disk-only simulations)
$S13_DATA_DIR/output/bulge-1000        # processEimage outputs with realistic noise (bulge-only simulations)

After running processEimage.py, the next step is running measureCcd.py, a task bin script defined in meas_multifit (there's a shell function in cmdlines that shows how I run it). This writes a new catalog to the new "modelfits" mapper dataset, containing the results of the fitting in the form of a meas.multifit.ModelFitCatalog. In addition to regular fields, each ModelFitRecord also contains a footprint that saves the pixel region used in the fit, and a lsst.meas.multifit.SampleSet object, which contains all the likelihood points evaluated in the course of the modeling. All of the other modeling outputs are derived from the SampleSet (see the meas_multifit doxygen for more info), though some (such as the "mean.*" and "median.*" quantities) are saved directly in the record for convenience. I have already run this on each of the above intermediate repositories, with outputs going here:

$S13_DATA_DIR/output/disk-0100         # measureCcd outputs with low noise (disk-only simulations, fit with exponential model)
$S13_DATA_DIR/output/bulge-0100        # measureCcd outputs with low noise (bulge-only simulations, fit with de Vaucouleur model)
$S13_DATA_DIR/output/disk-1000         # measureCcd outputs with realistic noise (disk-only simulations, fit with exponential model)
$S13_DATA_DIR/output/bulge-1000        # measureCcd outputs with realistic noise (bulge-only simulations, fit with de Vaucouleur model)

Note that each of the output files for the model fits is very large (>300MB) because they contain a large SampleSet for each galaxy, so they take a long time to load.

Using a script in the s13 package, I have converted the simulation reference catalogs produced by Simon (in $S13_DATA_DIR/refs) into afw::table FITS catalogs in the $S13_DATA_DIR/[bulge|disk] directories; these can be accessed using the "refcat" mapper dataset present in the s13 version of obs_lsstSim. The measurement task does the match and join with the reference catalog, and saves the WCS-transformed reference catalog ellipse in the "ref.ellipse" field to compare with the measured values.

New Tasks


add realistic priors (JFB)

The current prior is just a flat prior, which isn't enough to regularize the fits when fitting more than one component at a time, so we're limited to fitting pure exponential and de Vaucouleur models at present.

adaptive importance sampler (JFB)

The naive grid sampler isn't at all what we expect to use in production, and one of the real experiments in S13 is trying to see how well we can do with an adaptive importance sampling approach. Before I can start this, however, I'll need to have some idea of what the posterior distributions I'll be trying to approximate with analytic functions look like (the last analysis task on this page).
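The core loop of an adaptive importance sampler can be sketched independently of the pipeline. Everything below is illustrative: the "posterior" is a toy 2-d Gaussian standing in for the likelihood surface over (e1, e2), and the algorithm repeatedly refits a Gaussian proposal to the importance-weighted samples.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 2-d "posterior": an offset Gaussian standing in for the real
# likelihood surface. All names here are illustrative.
true_mean = np.array([0.3, -0.1])

def log_target(x):
    return -0.5 * np.sum((x - true_mean) ** 2 / 0.05 ** 2, axis=-1)

def log_gauss(x, mean, cov):
    diff = x - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (np.einsum("...i,ij,...j", diff, inv, diff)
                   + logdet + len(mean) * np.log(2 * np.pi))

# Adaptive importance sampling: draw from the Gaussian proposal,
# weight by target/proposal, then refit the proposal to the weighted
# samples and iterate.
mean, cov = np.zeros(2), np.eye(2)
for _ in range(10):
    x = rng.multivariate_normal(mean, cov, size=2000)
    logw = log_target(x) - log_gauss(x, mean, cov)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    mean = w @ x
    diff = x - mean
    cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(2)
```

This only works well when the posterior is reasonably approximated by the proposal family, which is exactly why the likelihood-surface inspection task below matters.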

code to extract flux and component-fraction estimators from samples (JFB)

Because the likelihood w.r.t. the flux parameters is handled very differently from the likelihood w.r.t. the ellipse parameters, the code to extract estimators of these values (expectation values, percentiles) is more complex, and has not been written. While S13 isn't focused on flux measurements, being able to compare the measured flux values to the reference catalog may still be useful in debugging.
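The basic estimator machinery is straightforward once you have per-sample fluxes and weights; the following is a hypothetical sketch (not the SampleSet API), with weights standing in for the marginalized likelihood times the prior at each sample.

```python
import numpy as np

# Hypothetical sketch: expectation value and weighted percentiles of a
# flux parameter from weighted samples. Not the real SampleSet API.
def weighted_mean(values, weights):
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), values))

def weighted_percentile(values, weights, q):
    """q in [0, 100]; steps through the weighted empirical CDF."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(w) / w.sum()
    idx = min(int(np.searchsorted(cdf, q / 100.0)), len(v) - 1)
    return float(v[idx])
```

A component-fraction estimator would work the same way, applied to e.g. bulge/(disk+bulge) computed per sample.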

multifit version of the modeling code (REO)

  • Creating a MultiEpochObjective subclass of the meas::multifit::Objective class. The SingleEpochObjective class can be used as a guide; the difference is that we'll want to have it accept inputs from many exposures, and create a model matrix by combining all the per-exposure model matrices along rows. Note that it will have to deal with transforming ellipses between coordinate systems; I think we'll want to have the Objective class take an ellipse in the coadd WCS coordinate system.
  • Create a new CmdLineTask that mirrors MeasureImageTask. It should expect to load a 'modelfits' catalog from a previous run of MeasureImageTask on the coadd, and iterate over it, doing the following:
    • Determine which exposures overlap the object (probably using the footprint attached to each ModelFitRecord), and load postage stamps and metadata from them
    • Create a MultiEpochObjective object that organizes all the postage stamps and metadata
    • Use its "sampler" subtask's "reset" method to create a new C++ sampler object, and run it on the MultiEpochObjective object, saving the results in a manner similar to MeasureImageTask: a new ModelFitRecord with a new SampleSet object as well as summary fields that may be convenient for analysis (the more similar the schemas are, the easier it will be to reuse analysis code, but clearly they can't be identical).
  • The details of where the new task's Python code ends and MultiEpochObjective's constructor begins can be considered an implementation detail; this task should not have to worry about being able to construct other Objective classes, and MultiEpochObjective does not have to worry about being constructed by other code.
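The row-stacking idea in the first bullet can be illustrated with plain linear algebra (this is not the meas_multifit Objective API, just the underlying structure): each exposure contributes a model matrix with one row per pixel, and stacking them along rows forces a single shared amplitude vector to fit all exposures at once.

```python
import numpy as np

# Illustrative sketch: combine per-exposure model matrices along rows
# and solve for one shared amplitude vector.
rng = np.random.default_rng(0)
true_amplitudes = np.array([2.0, 0.5])

matrices, data = [], []
for n_pix in (50, 80, 120):           # three exposures, different sizes
    m = rng.normal(size=(n_pix, 2))   # per-exposure model matrix
    matrices.append(m)
    data.append(m @ true_amplitudes)  # noiseless pixel data

big_matrix = np.vstack(matrices)      # combine along rows
big_data = np.concatenate(data)
amplitudes, *_ = np.linalg.lstsq(big_matrix, big_data, rcond=None)
```

In the real task each per-exposure matrix would be evaluated from the same ellipse transformed into that exposure's coordinate system, which is where the WCS handling in the first bullet comes in.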


profiling (PAP/JFB)

An initial C++ profile by Paul identified two easy-to-fix hotspots that Jim has already fixed (but not yet released). Even after this, the per-pixel model evaluation that Jim expected to dominate the compute budget is still likely to be subdominant; the current tall pole is computation of the shapelet-space convolution matrix, which does not scale with the number of pixels (but does scale with the number of exposures and the PSF shapelet order). It might be possible to speed this computation up by switching from double to single precision, and it's also possible that per-pixel calculations would be more important with a broader distribution of object sizes (all the objects in the current sims are very small, and while they may dominate the source counts, they may not dominate the computation time).

A significant amount of time is also going into I/O (because I'm saving a large number of samples) and into pure-Python parts of the code. The latter bears more investigation.

shapelet PSF approximation parameters (Perry)

When running the measurement task, the psf.innerOrder, psf.outerOrder, and psf.radiusRatio config parameters can be used to tune the shapelet approximation to the PSF used when convolving galaxy models. The innerOrder and outerOrder parameters should probably be in the range 0-4, as the computational cost should scale approximately as the square of these numbers. Higher orders give us more degrees of freedom, however, which may be necessary when fitting realistic PSFs. The radiusRatio controls the size of the inner component relative to the outer component. Eventually this should not be fixed by config, but for now it'd be fine to tune it to whatever works best with these particular PSFs.
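The quadratic cost scaling follows from the size of the basis: the number of 2-d shapelet coefficients at order n is the standard triangular count (n+1)(n+2)/2, which grows roughly as n² over the suggested 0-4 range.

```python
# Number of 2-d shapelet coefficients at order n: (n+1)(n+2)/2.
# This is why cost scales roughly as the square of the order.
def n_shapelet_coefficients(order):
    return (order + 1) * (order + 2) // 2

counts = [n_shapelet_coefficients(n) for n in range(5)]  # orders 0-4
```

So going from order 2 to order 4 more than doubles the number of degrees of freedom per component.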

There are two measures of goodness of fit saved in each ModelFitRecord:

  • multishapelet.psf.chisq: chisq/dof for the fit, but in arbitrary units because we don't have any estimate of the uncertainty in the PSF model (i.e. don't expect it to be close to 1, but do expect it to scale as chisq/dof would).
  • multishapelet.psf.integral: the analytic integral of the shapelet PSF model to infinity. Given that we're fitting to a sum-normalized PSF model image, this should be close to one.

You can also construct an lsst.meas.extensions.multiShapelet.FitPsfModel object directly from a ModelFitRecord and a control object equivalent to the one used when doing the fitting (call config.makeControl() to get a control object from a config object). You can then call asMultiShapelet() on that to get a lsst.shapelet.MultiShapeletFunction, which provides various ways of evaluating itself on an image.

I'm curious about:

  • which of the saved goodness-of-fit metrics is most useful;
  • how the goodness-of-fit measures behave as a function of innerOrder and outerOrder (and whether there is a clear "best choice" in terms of diminishing returns)
  • what value of radiusRatio seems to work best
  • whether the performance of the galaxy modeling code really does scale as I'd expect with innerOrder and outerOrder

For all but the last of these, it will be useful to run with the config option prepOnly set to True; this will set up all the records (including the PSF shapelet approximation and determining the pixels to fit), but will not do the much more expensive galaxy fitting. For the last item, it may be useful to reduce the number of grid points evaluated (sampler.nRadiusSteps and sampler.ellipticityStepSize) to speed things up, since the performance will almost certainly be close to linear in the number of grid points, as long as that number isn't too small.
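A back-of-the-envelope count shows how those two config parameters set the grid size; the parameter names come from the text above, but the counting itself is only illustrative of how a naive (e1, e2, radius) grid scales.

```python
import numpy as np

# Illustrative grid-point count for a naive (e1, e2, radius) grid:
# an ellipticity lattice clipped to the unit disk, times radius steps.
def n_grid_points(n_radius_steps, ellipticity_step, e_max=1.0):
    e = np.arange(-e_max, e_max + 1e-9, ellipticity_step)
    e1, e2 = np.meshgrid(e, e)
    n_ellip = int(np.count_nonzero(e1 ** 2 + e2 ** 2 <= e_max ** 2))
    return n_ellip * n_radius_steps
```

Halving nRadiusSteps halves the evaluation count, consistent with run time being roughly linear in the number of grid points.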

comparison of galaxy measurements with reference catalog (?)

At least some of this analysis of shapes was already on my plate for Summer -- Perry

The measurement task matches against the reference catalog in its prep stage, and adds the reference catalog results directly into the output catalog for convenience. We should compare these ellipses and (eventually) fluxes against the measured quantities. Note that because we're sampling from the likelihood, there are multiple ways to estimate the best-fit values and their uncertainty from the samples, not all of which have been implemented. So far we can compute posterior means and covariance matrices, and percentiles (including medians) of the ellipse parameters, and some of these are saved directly in the ModelFitRecord for convenience.

All these also depend quite strongly on the prior and (hopefully to a lesser extent) on the sampling method used. We shouldn't expect too much from these results until those are improved, but any analysis code developed now should be usable when we update these.

We'll also want to make these comparisons as a function of signal-to-noise ratio (note that we can modulate the noise simply by rerunning with a different noiseValue config parameter).

It's probably best to use the existing outputs to do this analysis. Note that you'll want to structure any analysis code such that you can work in one Python session for a while, because you won't want to repeatedly load the modelfits catalog.

inspection of likelihood surfaces (?)

We'd like to use an importance sampling method instead of a naive grid sampler soon, and to do that intelligently we need to have a good sense for how best to approximate the full posterior surface (not just the part near the best-fit value) using an analytic function, ideally a Gaussian.

  • We should test out different ellipse parametrizations (via SampleSet::setEllipseType) to see which of these makes the posterior simplest.
  • I'm very curious how the posterior surface changes as a function of signal-to-noise ratio; if we're lucky, it will mostly just get broader as signal-to-noise decreases, but I suspect the behavior will be more complex.
  • Because we are essentially using a flat prior now, the posterior is proportional to the likelihood. A more informative prior should help make the posterior simpler and easier to approximate with an analytic function, so we should get that in place before pressing too far with this analysis.
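For the first bullet, the standard relations between common ellipticity conventions show why the choice of parametrization changes the shape of the surface. With axis ratio q = b/a these are textbook identities, independent of the SampleSet API:

```python
import numpy as np

# With semi-axes a >= b:
#   conformal shear  eta = ln(a/b)
#   distortion       e   = (a^2 - b^2)/(a^2 + b^2) = tanh(eta)
#   reduced shear    g   = (a - b)/(a + b)         = tanh(eta/2)
def distortion_from_conformal(eta):
    return np.tanh(eta)

def reduced_shear_from_conformal(eta):
    return np.tanh(eta / 2.0)
```

Because these mappings are nonlinear, a surface that is close to Gaussian in one parametrization generally is not in another, which is exactly what SampleSet::setEllipseType lets us probe.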

There is code in lsst.meas.multifit.display to inspect individual fits, both as a 3-d scatter plot of log likelihoods and as data/model/residuals images. This still needs a bit of cleanup and documentation, but I'll do that soon and add a section on this page about it.

fit region analysis (?)

Which pixels to include in the fit is an important factor in the performance of the algorithm (though perhaps less important than I once thought), and I haven't done any experiments to test how big the fitting region should be. The current code simply grows the detection footprint by a configurable number of pixels, but we could also imagine setting the fit region based on the adaptive moments (i.e. source.getShape()) or some combination of the two.

We should first try different config values with the current code (fitRegion.nGrow is the relevant config parameter), and see how these affect the results of the fit. Once we've identified the point at which growing the footprint no longer changes the result, we should see whether this fitting-region size is more strongly connected to the moments than to the detection footprint, and consider updating how the fit region footprint is computed.

We should also track how different nGrow values affect performance; currently per-pixel calculations are a small part of the computing budget, but this could change if we significantly increase the number of pixels in the fits.
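The two candidate region definitions can be compared with a rough pixel-count estimate. This is purely illustrative (circular regions, hypothetical parameter names); the real code operates on footprints, but the counts show how nGrow and a moments-based radius each drive the per-pixel cost.

```python
import numpy as np

# Pixel count of a circular detection footprint grown by n_grow pixels.
def grown_region_npix(footprint_radius, n_grow):
    r = footprint_radius + n_grow
    return int(np.pi * r ** 2)

# Pixel count of an n-sigma region from adaptive moments, using the
# determinant radius (Ixx*Iyy - Ixy^2)**0.25.
def moments_region_npix(ixx, iyy, ixy, n_sigma=4.0):
    r = n_sigma * (ixx * iyy - ixy ** 2) ** 0.25
    return int(np.pi * r ** 2)
```

Because the pixel count grows quadratically with either radius, even modest increases in nGrow could make per-pixel calculations a much larger share of the budget.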


simulations for coaddition/multifit (KSK)

We'd like to make simulations that are appropriate for building a coadd and/or running multifit that are otherwise similar to the current single-CCD simulations.

  • Galaxy and star populations similar to those in the current set of simulations should be used. Please prioritize a full set of disk-only simulations before starting the bulge-only simulations, unless it's just as easy to do them in parallel.
  • We'd like to simulate a single raft for each exposure.
  • We'd like to simulate 32 exposures, with random rotational and half-chip translational dithers.
  • Each of these should be 1/16 the depth of the single image in the current set of simulations (i.e. we'd like the final coadd to be twice as deep as the current sims)
  • We'd also like a new single exposure using the same catalog as the coadd-input simulations that's as deep as the full coadd (32x the depth of each input exposure, and twice as deep as the current sims)
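The requested depths are mutually consistent, with depth read as effective exposure time (which adds linearly in a coadd):

```python
# Depth bookkeeping for the requested simulations (depth = effective
# exposure time, in units of the current single-image sims).
current_depth = 1.0
per_exposure_depth = current_depth / 16.0    # each input is 1/16 depth
coadd_depth = 32 * per_exposure_depth        # 32 inputs -> 2x current
deep_single_depth = 32 * per_exposure_depth  # matches the full coadd
```

So the coadd and the new deep single exposure both come out at twice the depth of the current sims, as stated above.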