wiki:db/DC2/PartitioningTests

Version 6 (modified by smm, 12 years ago)


Description

These tests are aimed at determining how to partition the Object and DIASource tables to support efficient operation of the Association Pipeline (AP). The task of the AP is to take new DIASources produced by the Detection Pipeline (DP) and compare them with everything LSST knows about the sky at that point. This comparison is used to generate Alerts that LSST and other observatories can use for follow-up observations, and also to bring LSST's knowledge of the sky up to date.

The current AP design splits processing of a Field-of-View (FOV) into 3 phases. For context, here is a brief summary:

prepare phase
This phase of the AP is in charge of loading information about the sky that falls within (or is in close proximity to) a FOV into memory. We will know the location of a FOV roughly 30 seconds in advance of actual observation, and this phase of the AP will start when this information becomes available. The Object, DIASource, and Alert tables contain the information we will actually be comparing new DIASources against. Of these, Object is the largest, DIASource starts out small but becomes more and more significant towards the end of a release cycle, and Alert is relatively trivial in size.
compare-and-update phase
This phase takes new DIASources (produced by the DP) and performs a distance-based match against the contents of Object and DIASource. The results of the match are then used to retrieve historical Alerts for any matched Objects. The results of all these matches and joins are sent out to compute nodes for processing; these compute nodes decide which Objects must be changed (or possibly removed), which DIASources correspond to previously unknown Objects, and which of them are cause for sending out an Alert. At this point, the AP enters its final phase.
post-processing
The responsibility of this phase is to make sure that changes to Object (inserts, updates, possibly deletes), DIASource (inserts only), and Alert (inserts only) are present on disk. There is some (TODO: what is this number) amount of time during which we are guaranteed not to revisit the same FOV.
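The distance-based match at the heart of the compare-and-update phase can be illustrated with a minimal sketch. The function names and the match radius below are hypothetical, not taken from the AP code; the sketch only shows the basic predicate of matching a DIASource to an Object by angular separation:

```python
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Angular separation between two sky positions, in degrees,
    computed with the haversine formula (stable at small angles)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2.0) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2.0) ** 2)
    return math.degrees(2.0 * math.asin(math.sqrt(a)))

def matches(obj_pos, source_pos, radius_arcsec=1.0):
    """True if a DIASource position falls within the match radius of an
    Object position. Positions are (ra, dec) pairs in degrees."""
    sep = angular_separation_deg(obj_pos[0], obj_pos[1],
                                 source_pos[0], source_pos[1])
    return sep <= radius_arcsec / 3600.0
```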

Note that LSST has a requirement to send out alerts within 60 seconds of performing an observation. Of the 3 AP phases, only compare-and-update is in the critical timing path. This is illustrated by the following (greatly simplified) diagram:

Association Pipeline Phase Timeline

In this diagram, processing of a FOV starts right after observation, with the image processing pipeline (IPP) which is followed by the DP, and finally the AP. The yellow and red boxes together represent processing which must happen in the 60 second window.

The database tests are currently focused on how to partition Object and DIASource such that the prepare phase is as fast as possible, and on how to perform the distance-based crossmatch of the compare-and-update phase. Tests of database updates and inserts, and of how quickly such changes can be moved from in-memory tables to disk-based tables, will follow.

The tests are currently being performed using the USNO-B catalog as a stand-in for the Object table. USNO-B contains 1,045,175,763 objects, satisfying the DC2 requirement of simulating LSST operations at 10% scale.

Code

The code for the tests is available here. The following files are included:

lsstpart/Makefile Builds objgen, chunkgen, and bustcache
lsstpart/bustcache.cpp Small program that tries to flush OS file system cache
lsstpart/chunkgen.cpp Generates chunk and stripe descriptions
lsstpart/crossmatch.sql SQL implementation of distance-based crossmatch using the zone algorithm
lsstpart/distribution.sql SQL commands to calculate the spatial distribution of the Object table (used to pick test regions and to generate fake Objects)
lsstpart/genFatObjectFiles.py Generates table schemas for "fat" tables
lsstpart/objgen.cpp Program for generating random (optionally perturbed) subsets of tables, as well as random positions according to a given spatial distribution
lsstpart/prepare.bash Loads USNO-B data into Object table
lsstpart/prepare_chunks.bash Creates coarse chunk tables from Object table
lsstpart/prepare_diasource.bash Uses objgen to pick random subsets of Object in the test regions (with perturbed positions). These are used to populate a fake DIASource table
lsstpart/prepare_fine_chunks.bash Loads fine chunk tables from Object
lsstpart/prepare_stripes.bash Loads stripe tables (indexed and clustered on ra) for the test regions from Object
lsstpart/prepare_zones.bash Takes the stripe tables generated by prepare_stripes.bash and clusters them on (zoneId, ra)
lsstpart/prng.cpp Pseudo random number generator implementation
lsstpart/prng.h Pseudo random number generator header file
lsstpart/schema.sql Test database schema
lsstpart/stripe_vars.bash Variables used by stripe testing scripts
lsstpart/test_chunks.bash Test script for the coarse chunking approach
lsstpart/test_fine_chunks.bash Test script for the fine chunking approach
lsstpart/test_funs.bash Common test functions
lsstpart/test_regions.bash Ra/dec boundaries for the test regions
lsstpart/test_stripes.bash Test script for the stripe approach with ra indexing
lsstpart/test_zones.bash Test script for the stripe approach with (zoneId, ra) indexing
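The zone algorithm used by crossmatch.sql can be sketched outside of SQL as well. The sketch below is an illustration only, not the implementation in crossmatch.sql; the zone height and function names are assumptions. The key idea is that objects are bucketed into thin declination bands (zones), so a source only needs to be compared against objects in the few zones overlapping its match radius rather than against the whole table:

```python
import math
from collections import defaultdict

ZONE_HEIGHT_DEG = 1.0 / 60.0  # hypothetical zone height (one arcminute)

def zone_id(dec):
    """Zone index for a declination; zones are thin declination bands."""
    return int(math.floor((dec + 90.0) / ZONE_HEIGHT_DEG))

def separation_deg(ra1, dec1, ra2, dec2):
    """Angular separation in degrees (haversine formula)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2.0) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2.0) ** 2)
    return math.degrees(2.0 * math.asin(math.sqrt(a)))

def zone_crossmatch(objects, sources, radius_deg):
    """Match sources to objects within radius_deg using zones.

    objects, sources: iterables of (id, ra, dec) tuples, degrees.
    Only zones within radius_deg of each source's zone are scanned.
    A full implementation would also bound the ra range searched."""
    by_zone = defaultdict(list)
    for rec in objects:
        by_zone[zone_id(rec[2])].append(rec)
    delta = int(math.ceil(radius_deg / ZONE_HEIGHT_DEG))
    pairs = []
    for sid, sra, sdec in sources:
        z = zone_id(sdec)
        for dz in range(z - delta, z + delta + 1):
            for oid, ora, odec in by_zone.get(dz, ()):
                if separation_deg(sra, sdec, ora, odec) <= radius_deg:
                    pairs.append((sid, oid))
    return pairs
```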


Partitioning Approaches

Stripes

In this approach, each table partition contains the Objects whose positions fall within a given declination range.

Chunks

In this approach, a table partition corresponds to a region of the sphere bounded in both right ascension and declination, i.e. a declination stripe is further subdivided along ra into chunks.
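The actual chunk descriptions are produced by chunkgen.cpp; purely as an illustration (the stripe height and the scaling rule below are assumptions), a chunk can be identified by a (stripe, chunk) pair, with fewer chunks per stripe near the poles so that chunks cover roughly equal areas:

```python
import math

STRIPE_HEIGHT_DEG = 10.0  # hypothetical stripe height

def chunk_id(ra, dec):
    """(stripe, chunk) pair for a position, in degrees.
    The number of ra chunks in a stripe is scaled by the cosine of the
    stripe's central declination, so chunks shrink in count near the poles."""
    num_stripes = int(180.0 / STRIPE_HEIGHT_DEG)
    stripe = min(int((dec + 90.0) // STRIPE_HEIGHT_DEG), num_stripes - 1)
    center_dec = -90.0 + (stripe + 0.5) * STRIPE_HEIGHT_DEG
    num_chunks = max(1, int(round(
        (360.0 / STRIPE_HEIGHT_DEG) * math.cos(math.radians(center_dec)))))
    chunk = min(int(ra / (360.0 / num_chunks)), num_chunks - 1)
    return stripe, chunk
```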

Performance Results
