
Data Mapping in DC3b

This is a discussion of data mapping in DC3b. See ticket #795.

Currently, data is assigned based on MPI rank (in source:DMS/ctrl/dc3pipe/trunk/IPSD/01-sliceInfo_policy.paf), which gives us little control over data locality and hence over efficiency. We would like to be able to choose our data assignment strategy deliberately.

Instead, it would be nice to be able to map chunks of data to compute nodes in a way that is optimized for a particular pipeline. One way to do that is to map the hostname to a data ID which the slice worker can use to fetch its data from the file system. The mapping needs to be configurable.
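
Conceptually (the hostnames and property names below are placeholders, not a proposed schema), such a mapping amounts to a lookup from hostname to a small set of data ID properties:

    # Purely illustrative: each compute node is assigned a data ID, i.e. a small
    # set of properties the slice worker can use to locate its input data.  The
    # hostnames and property names (visitId, ccdId, ampId) are placeholders.
    dataIdByHost = {
        "node01.example.org": {"visitId": 1001, "ccdId": 3, "ampId": 0},
        "node02.example.org": {"visitId": 1001, "ccdId": 3, "ampId": 1},
    }

    def dataIdForHost(hostname):
        """Return the data ID assigned to the given compute node."""
        return dataIdByHost[hostname]

A static table like this is only for illustration; the point of making the mapping configurable is to compute the assignment from a strategy suited to the pipeline, as described below.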

Data Mapping Sequence

  1. Pipeline loads the data mapping configuration from its policy file during initialization (see the sketch following this list). The policy file provides:
    • The name of the Python class to instantiate that implements DataMapper
    • Configuration that is specific to the DataMapper implementation.
  2. Pipeline queries slice workers for host information.
  3. Slice workers respond, sending hostname, # CPUs, RAM, etc.
  4. Pipeline passes info for all nodes to DataMapper.
  5. DataMapper assigns a data ID to each node.
  6. Pipeline sends data ID to each slice worker.
  7. Each slice worker requests data based on data ID assigned by DataMapper.
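
A minimal sketch of step 1, assuming hypothetical policy entries named mapperClass and mapperPolicy (the actual policy schema is still to be decided, and the policy is read like a plain mapping here):

    import importlib

    def loadDataMapper(policy):
        """Instantiate the DataMapper implementation named in the pipeline policy.

        Assumes a fully qualified Python class name under "mapperClass" and
        implementation-specific configuration under "mapperPolicy"; both entry
        names are placeholders.
        """
        fullName = policy["mapperClass"]        # e.g. "mypkg.roundRobin.RoundRobinMapper"
        moduleName, className = fullName.rsplit(".", 1)
        mapperClass = getattr(importlib.import_module(moduleName), className)
        return mapperClass(policy.get("mapperPolicy"))

Steps 2-6 (the host-info gather, the assignment, and the data ID scatter) are sketched under the Development Plan below.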

To Do: compare to PipelineFramework#TheFullPipelineSequence

Development Plan

  1. Add to the Stage interface a method that receives a data ID (which will be a PropertySet for now).
  2. Implement a mechanism for the pipeline to query for host info (gather) and broadcast data IDs (scatter) via MPI (a gather/scatter sketch follows this list).
    • Slice tells pipeline information about its compute node (hostname, CPUs, RAM, etc.)
    • Pipeline uses mapper to implement assignment strategy
    • Pipeline sends data ID back to slice
    • Slice uses data ID to ask for CCD, Amp IDs, etc.
  3. Create a Mapper interface that the pipeline can use to map hostnames to data IDs (a sketch follows this list). The pipeline will instantiate it based on a class name given in its policy file; each Mapper implementation will have its own policy file schema. Create a simple Mapper implementation.
  4. Slices pass the data ID to all of their stages (see the clipboard sketch after this list).
    • Currently, the clipboard is emptied at start; should data ID be preset each time the clipboard is initialized by the slice?
  5. Alter implementation of lsst.pex.harness.IOStage.InputStage to:
    • Get the Data ID
    • Use it to determine CCD, Amplifier ID, etc.
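
For item 2, a rough sketch of the gather/scatter exchange. mpi4py is used purely for illustration (the harness has its own MPI layer), and assignDataIds is a stand-in for the DataMapper call:

    from mpi4py import MPI
    import multiprocessing
    import platform

    def assignDataIds(nodeInfoList):
        """Stand-in for a DataMapper: hand out a hypothetical ccdId per node, in rank order."""
        return [{"ccdId": i} for i, info in enumerate(nodeInfoList)]

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Gather: every slice reports its compute-node information to the pipeline (rank 0).
    hostInfo = {"rank": rank,
                "hostname": platform.node(),
                "cpus": multiprocessing.cpu_count()}
    allHostInfo = comm.gather(hostInfo, root=0)

    # Scatter: the pipeline maps nodes to data IDs and sends each slice its assignment.
    dataIds = assignDataIds(allHostInfo) if rank == 0 else None
    myDataId = comm.scatter(dataIds, root=0)
    # Each slice now uses myDataId to request its CCD, amplifier, etc.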
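
For item 3, one possible shape for the Mapper interface and a trivial implementation; the class and method names are placeholders, not a settled design:

    class DataMapper(object):
        """Maps per-node host information to one data ID per slice."""

        def __init__(self, policy=None):
            # Implementation-specific configuration, following the mapper's own
            # policy file schema.
            self.policy = policy

        def mapNodesToDataIds(self, nodeInfoList):
            """Return a list of data IDs, one per entry in nodeInfoList."""
            raise NotImplementedError

    class RoundRobinMapper(DataMapper):
        """Trivial example: assign CCD ids in the order the nodes reported in,
        ignoring hostname, CPU count, and RAM."""

        def mapNodesToDataIds(self, nodeInfoList):
            return [{"ccdId": i} for i, info in enumerate(nodeInfoList)]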
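
For items 1 and 4, a sketch of how the data ID might be handed to stages and preset on the clipboard. The well-known key, the setDataId() hook, and the plain dict standing in for the real Clipboard are all assumptions:

    DATA_ID_KEY = "dataId"            # hypothetical well-known clipboard key

    class Stage(object):
        """Stand-in for the harness Stage; only the proposed hook is sketched."""
        def setDataId(self, dataId):  # the method name is an assumption
            self.dataId = dataId

    class Slice(object):
        """Greatly simplified stand-in for the harness Slice."""
        def __init__(self, dataId, stages):
            self.dataId = dataId      # assigned via the DataMapper at startup
            self.stages = stages
            for stage in stages:
                stage.setDataId(dataId)

        def initializeClipboard(self):
            clipboard = {}            # the real Clipboard is emptied at the start of each visit
            clipboard[DATA_ID_KEY] = dict(self.dataId)   # preset the data ID each time
            return clipboard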

Design Notes

  • Data ID properties:
    • Must be slice-specific; that is, they cannot overlap with properties from events, which are the same across all slices
    • Should be directly substitutable into pathnames & database lookups, with existing mechanisms (so that it won't be necessary to alter the implementation of lsst.pex.harness.IOStage.InputStage)
    • Will likely be useful for tasks other than input. Should we place the data ID on the Clipboard with a well-known key?
  • To obtain a complete set of data addresses for lsst.daf.persistence.LogicalLocation.setLocationMap(), a slice will combine the following (it may be useful to create a LogicalLocation.addLocationMap() method for this; see the sketch after this list):
    • the location map passed from Orca (including the locations of the input, output, scratch, update, and work directories and the database URL)
    • its data ID assigned by the DataMapper
  • Currently, mapping is done in ctrl_dc3pipe/pipeline/IPSD/01-...
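
A sketch of how a slice might combine these two pieces. Plain dicts are used for clarity, the paths, template, and ID values are placeholders, and in the harness the combined map would be handed to lsst.daf.persistence.LogicalLocation.setLocationMap() (or the suggested addLocationMap()):

    orcaLocationMap = {                # passed from Orca at pipeline startup (paths are placeholders)
        "input":  "/data/dc3/input",
        "output": "/data/dc3/output",
    }
    dataId = {"visitId": 1001, "ccdId": 3, "ampId": 0}   # assigned by the DataMapper

    locationMap = dict(orcaLocationMap)
    locationMap.update(dataId)         # an addLocationMap() method could do this merge in place

    # With the combined map installed, an existing path template such as this
    # hypothetical one resolves without changing the input stage's code:
    template = "%(input)s/raw/v%(visitId)d/c%(ccdId)02d_a%(ampId)02d.fits"
    print(template % locationMap)      # /data/dc3/input/raw/v1001/c03_a00.fits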
