
[LSST-data] Report from Camera-Team Visualization Meeting at Harvard, 1/27-1/28

On Monday and Tuesday of this week I traveled up to Harvard to represent DM at a meeting focused on finding solutions to the LSST camera team's near-term visualization needs. Bill Joye, the principal developer of DS9, was also present, so one major component of the meeting involved comparing DS9's current capabilities with LSST's visualization needs and getting feedback on how difficult our requirements would be to address with DS9.

While we did try to remain focused on near-term requirements, the desire not to spend effort on something that would be thrown away was on everyone's mind, and so we did spend some time discussing how things would need to change as we scale up, and how this might all fit in with the SUI and the visualization needs of the telescope operators (neither of which was represented).

Jim

EXECUTIVE SUMMARY

The camera team's requirements are essentially:

1) They need something very soon that will enable extremely low-latency visualization of images from CCS (I think it was CCS...?), ideally without even writing them to disk, and without needing many features beyond simple visualization.

2) They need something moderately soon (~8 month timescales) that will allow them to do visualization of images already written to disk with many more features, most along the lines of IRAF's "imexamine" tool: aperture fluxes, radial profiles, histograms, and statistics at a point. They're also interested in point-spread function information and other high-level quantities we have pipeline code to produce, but available in a more interactive fashion.

And the rough plan for how to address those requirements is:

1) The camera team will write their own code for this based on DS9+XPA, likely just scripts that call the various XPA command-line binaries to feed images into DS9 more quickly than the usual GUI would allow (see the first sketch after this list). Using DM's XPA bindings doesn't appear helpful at this stage since CCS is written in Java.

2) Camera team developers (primarily Heather Kelly) will work on adding imexamine-type features to DM's DS9/XPA interface, using a Python-initiated XPA poll of the currently-selected region to determine the region of interest (see the second sketch below). DM will just provide support at this stage, but will include these features in the W14 Data Analysis work package when we get to it.
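For concreteness, here is a minimal sketch of the kind of script item 1 describes: pushing an image into a running DS9 through the XPA command-line tools, without going through the GUI (and, when piping FITS bytes directly, without an intermediate disk file). The xpaset binary and the DS9 "fits", "file", "scale", and "zoom" access points are real; the Python wrapper, function names, and the test filename are illustrative assumptions only (and the camera team's version would presumably be shell or Java, since CCS is Java).

  import subprocess

  def send_fits_bytes(data, target="ds9"):
      # Pipe raw FITS bytes straight into DS9 over XPA; no intermediate disk file.
      proc = subprocess.Popen(["xpaset", target, "fits"], stdin=subprocess.PIPE)
      proc.communicate(data)
      if proc.returncode != 0:
          raise RuntimeError("xpaset failed with code %d" % proc.returncode)

  def display_file(path, target="ds9"):
      # Load a FITS file already on disk, then set a sensible scale and zoom.
      subprocess.check_call(["xpaset", "-p", target, "file", path])
      subprocess.check_call(["xpaset", "-p", target, "scale", "zscale"])
      subprocess.check_call(["xpaset", "-p", target, "zoom", "to", "fit"])

  if __name__ == "__main__":
      with open("test_frame.fits", "rb") as f:   # hypothetical test image
          send_fits_bytes(f.read())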
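And a similarly hedged sketch of the item 2 workflow: poll DS9 over XPA for the currently-selected region, then compute imexamine-style numbers (aperture flux, mean, stddev) on the corresponding pixels. This assumes "xpaget ds9 regions selected" returns the selection in image coordinates and that the first selected region is a circle; the exact access-point options and region format would need checking against the DS9 reference manual, and the image array would come from whatever frame is already loaded.

  import re
  import subprocess
  import numpy

  _CIRCLE_RE = re.compile(r"circle\(\s*([-+\d.eE]+)\s*,\s*([-+\d.eE]+)\s*,\s*([-+\d.eE]+)")

  def get_selected_circle(target="ds9"):
      # Ask DS9 for the currently-selected regions; assumed to include one circle.
      out = subprocess.check_output(["xpaget", target, "regions", "selected"])
      if isinstance(out, bytes):
          out = out.decode("ascii", "ignore")
      match = _CIRCLE_RE.search(out)
      if match is None:
          raise RuntimeError("no selected circle region found")
      x, y, radius = (float(v) for v in match.groups())
      return x, y, radius

  def point_statistics(image, x, y, radius):
      # imexamine-like statistics within the circular aperture; image is a 2-d
      # numpy array, and x, y are 1-indexed FITS/DS9 pixel coordinates.
      yy, xx = numpy.ogrid[:image.shape[0], :image.shape[1]]
      mask = (xx - (x - 1))**2 + (yy - (y - 1))**2 <= radius**2
      pixels = image[mask]
      return {"flux": float(pixels.sum()),
              "mean": float(pixels.mean()),
              "stddev": float(pixels.std())}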

CAMERA-TEAM WISH-LISTS FOR DM W14 WORK

Would like showCamera-type functionality in all reasonable CameraGeom coordinate systems. In particular, sometimes they'll want to display the pixel data in approximately the right place on the focal plane, so that it doesn't need to be interpolated (or even converted to float), but other times they'll want to resample such that everything really is in focal plane or pupil coordinates.

Want raft-level showCamera functionality ("showRaft").

Would like showCamera tightly coupled to ISR, with lots of configuration: it sounds like they'll want many options for correcting gains and offsets (i.e. not just from calibration frames, but from header values, measurements, etc.), and they'll want to experiment with those interactively.

Some of the sensors the camera team is testing won't have consistent structure (bias region, etc) from file to file, but those files will have the appropriate FITS keywords set describing those differences. In order for them to be able to make use of CameraGeom on those files, I think we'll need to be able to create (or perhaps modify) CameraGeom data structures on-the-fly from FITS files.
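As a very rough illustration of the on-the-fly idea (not DM's actual CameraGeom API), something like the following could pull the per-amp structure from the headers; the DATASEC/BIASSEC keyword names follow common raw-CCD conventions, and the plain-dict result is only a stand-in for whatever CameraGeom object we'd really build or modify.

  import re
  import pyfits   # or astropy.io.fits

  _SEC_RE = re.compile(r"\[(\d+):(\d+),(\d+):(\d+)\]")

  def parse_section(sec):
      # Turn a FITS section string like '[1:512,1:2002]' into 0-indexed
      # numpy-style (y, x) slices.
      x1, x2, y1, y2 = (int(v) for v in _SEC_RE.match(sec).groups())
      return slice(y1 - 1, y2), slice(x1 - 1, x2)

  def amp_geometry(path):
      # Read DATASEC/BIASSEC from each amp HDU; returns {hdu index: sections}.
      geom = {}
      hdus = pyfits.open(path)
      try:
          for i, hdu in enumerate(hdus):
              if "DATASEC" not in hdu.header:
                  continue
              geom[i] = {"data": parse_section(hdu.header["DATASEC"]),
                         "bias": parse_section(hdu.header["BIASSEC"])
                                 if "BIASSEC" in hdu.header else None}
      finally:
          hdus.close()
      return geom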

Would like various per-CCD or per-amp metrics (e.g. magzero, PSF width) displayed with the camera layout - like PipeQA does - but using the same display engine as showCamera, so you could overlay those metrics on data images with transparency and/or blink between them.

We should have a cross-subproject standard for creating a URL-like identifier that refers to a region on a particular image, and could be shared (e.g. via email) without any questions about how to interpret it. But the images may refer to a test sensor region, or a visit/raft/sensor/[snap] ID corresponding to a particular (also recorded) simulation, or (eventually) a visit/raft/sensor/[snap] ID corresponding to science or calibration data. And the region could be in any CameraGeom-supported system or celestial coordinates. I personally think we'd probably have to drop some of these options to make this practical, but it's still a good idea.
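To make the idea concrete, here is one possible (entirely hypothetical) shape for such an identifier, with a sketch of building and parsing it using only the standard library; the scheme name, path components, and query keys are all made up for illustration.

  # Hypothetical identifier, e.g.:
  #   lsst-region://visit-885449/R22/S11?x=1024.5&y=2048.0&r=25&system=ccd
  from urllib.parse import urlencode, urlsplit, parse_qs

  def make_region_uri(dataset_parts, system, **coords):
      # dataset_parts identifies the image (test sensor run, or
      # visit/raft/sensor); 'system' names the CameraGeom or sky system.
      query = dict(coords, system=system)
      return "lsst-region://%s?%s" % ("/".join(dataset_parts), urlencode(query))

  def parse_region_uri(uri):
      parts = urlsplit(uri)
      dataset = [parts.netloc] + [p for p in parts.path.split("/") if p]
      query = {k: v[0] for k, v in parse_qs(parts.query).items()}
      return dataset, query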

We talked briefly about whether we could extract some subset of DM functionality into a pure-Python library that relied only on e.g. numpy and pyfits, and hence would be much easier to install (and in one case, specifically could work on Windows). But it wasn't clear how strong the demand really was for this; I suspect a lot depends on how well Mario's plans for easier installs pan out. There seemed to be a lot of variance within the camera team about how big they considered the DM barrier-to-entry to be.

DS9 FEATURES/LIMITATIONS

While we're all planning to go with DS9 for visualization for now, we expect a number of issues will make it impossible to continue using it unless they are resolved. Here's a summary of those issues and Bill's responses to them:

  • We'll need multi-scale panning/zooming that doesn't require having all the data on local disk. Not clear if DS9 will ever be able to support this. There are many research-project-level web-based viewers that do (and a few commercial ones), but those we know of are far behind DS9 in other areas of usability.
  • We need better support for events and callbacks in DS9, to allow controlling programs to (remotely) detect changes to regions, cursor positions, etc. Without this we are limited to using Python to trigger inspection of existing DS9 regions, which makes for a much clunkier interface. Bill Joye stated that this was a high priority for DS9 development even before LSST's interest in it (JWST is also interested), but there may be substantial development needed to get this working.
  • We're concerned about the amount of port-forwarding needed to control DS9 remotely through a firewall via XPA, and the possibility for collisions between users raised by this approach. Overall, we're concerned about the lack of an existing protocol for remote operation that does everything we will need - it's clear that the newest VO protocol, SAMP, may be useful in a limited context but does not provide enough control for everything we want to do. DS9 also supports http messaging, which handles concurrency much better, but it's not clear whether it's as efficient at transferring image data. Any http-based solution (regardless of whether DS9 is the client) would require significant development on the server side to support both the lots-of-data-many-users case and the quick-and-dirty-few-users case.
  • Bill Joye regards XPA buffer problems reported by RHL as a bug that he'll try to fix if given a more complete bug report. He does not see XPA as fundamentally limited in a "lots-of-data" performance sense, though he admits that it is limited in a "lots-of-users" sense (in that case, he recommends the similar http-based messaging).