Last modified on 02/24/2009 11:49:43 AM

How to Run the Complete Set of DC2 Pipelines

The dc2pipe package is provided for running the DC2 pipelines with a single script.

Note: this document has been updated for running the DC2 pipelines under the DC3 software stack. This requires a precise set of package versions (so heed the Prerequisites section below).


You need a particular software stack to run the DC2 pipelines on the LSST cluster at NCSA: /lsst/DMSstack. Before you load this stack, make sure that your environment variables EUPS_PATH and LSST_PKGS are not set. LSST_DEVEL may be set, but only if you intentionally want to use private installations of certain packages; however, for default operation, LSST_DEVEL should not be set either. Set your environment to use this stack in the usual way:

    source   # or source loadLSST.csh
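The environment-variable cleanup described above can be done directly in your shell before sourcing the load script; a minimal sketch for sh/bash users (csh users would use unsetenv instead):

```shell
# Clear stale stack variables so the /lsst/DMSstack load script
# starts from a clean environment.
unset EUPS_PATH
unset LSST_PKGS
unset LSST_DEVEL   # leave set only if you intentionally use private package installs
```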

The scripts needed, along with the environment for all the dependent packages, are loaded when you set up the dc2pipe package:

    setup dc2pipe

This will set up the default versions that are known to work. If you need to run with different versions, you usually have to set them up explicitly after setting up dc2pipe. You will also need to use the -e option to the launch script; see below for details. You can review what has been set up with:

    eups list --setup

Before you can run for the first time, there are a few things you need to set up.


SSH must be set up to allow password-less logins. Effectively, you must have logged into every host from every other host in the cluster. This can be set up for you via the script:

You should test this by logging into another one of the cluster machines using ssh.
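The per-host check can be scripted; here is a sketch that probes each node non-interactively (BatchMode=yes makes ssh fail instead of prompting, so a FAILED line means keys are not set up for that host). PROBE is injected only so the loop can be tried off-cluster; the node names in the usage comment are hypothetical:

```shell
# check_nodes: probe password-less login to each node given as an argument.
check_nodes() {
    for node in "$@"; do
        if ${PROBE:-ssh -o BatchMode=yes} "$node" true 2>/dev/null; then
            echo "$node ok"
        else
            echo "$node FAILED"
        fi
    done
}
# On the cluster: check_nodes lsst8 lsst9 lsst10   (hypothetical names)
```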


A configuration file for MPI, .mpd.conf, must be installed into your home directory on every machine that you use in a pipeline. This file must contain a "secret word" for MPI to operate.

The NCSA LSST cluster now has shared home directories. Thus, to set this up, simply type:

    cp $DC2PIPE_DIR/etc/mpd.conf $HOME/.mpd.conf
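If you need to create the file by hand instead of copying it, the sketch below shows the shape of the file. MPD_SECRETWORD=changeme is a placeholder; use the secret word from your site's copy, and note that mpd refuses to start if the file is readable by anyone other than the owner:

```shell
# Write a private MPI daemon config (equivalent to copying
# $DC2PIPE_DIR/etc/mpd.conf); the secret word is a placeholder.
cat > "$HOME/.mpd.conf" <<'EOF'
MPD_SECRETWORD=changeme
EOF
chmod 600 "$HOME/.mpd.conf"   # mpd requires owner-only permissions
```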

DB authentication

An authentication string needs to be available on every node that will connect to the database. This can be accomplished either by placing it in the file /tmp/lsst.db.auth or by setting the environment variable LSST_DB_AUTH.

A copy of this file can be found in $LSST_HOME/lsst.db.auth.

Editor's note: I have not confirmed that the environment variable method is 100% working properly. However, the authentication file currently exists in /tmp on every cluster machine that can run the pipelines, so it should not be necessary to use the environment variable.
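For completeness, a sketch of the environment-variable route (keeping in mind the caveat in the editor's note above that this route is not fully verified). The value shown is a placeholder, not a real credential:

```shell
# Make the DB auth string available via the environment.
# The file route is the verified alternative:
#   cp $LSST_HOME/lsst.db.auth /tmp/lsst.db.auth   # on each node
export LSST_DB_AUTH="user:password"   # placeholder value
```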

Launching with launchDC2

To run the default configuration of the DC2 pipelines, you can simply run the launchDC2 script:

   launchDC2 $DC2PIPE_DIR/pipeline/dc2pipe.paf myrunid $DC2PIPE_DIR/exposureLists/

Often, you will want to change the configuration before you launch it. The typical thing to do is to copy the parts of the policy repository that you will use to a local directory, edit the files, and then launch.

The default policy repository is $DC2PIPE_DIR/pipeline. For each defined pipeline, there is a top-level policy file named after the pipeline and (optionally) a subdirectory having the same name (without an extension) containing the stage policy files for the pipeline. DC2 is made up of 3 pipelines: imageSubtractionDetection, association, and movingobjects. Thus, to configure all 3:

   cp -r $DC2PIPE_DIR/pipeline/{imageSubtractionDetection,association,movingobjects} .
   cp $DC2PIPE_DIR/pipeline/{imageSubtractionDetection,association,movingobjects}.paf .

The script is driven by its own policy file, so you will need that, too:

   cp $DC2PIPE_DIR/pipeline/dc2pipe.paf .

First edit dc2pipe.paf. It controls which pipelines will run and on which nodes. The most common things to change are:

  • pipelines.pipeline_name.nodes, listing the nodes to run on and the number of processes for each. Every pipeline needs to run with at least 2 processes.
  • pipelines.pipeline_name.launch, set to true or false, indicating whether that pipeline should be launched. Thus, to run just image subtraction and detection, set launch to false for association and movingobjects.
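As an illustration, the relevant dc2pipe.paf entries for running only image subtraction and detection might look like the hypothetical fragment below. The node names and the exact key layout are illustrative assumptions, so compare against the actual structure in your copy of dc2pipe.paf before editing:

```
pipelines: {
    imageSubtractionDetection: {
        launch: true
        nodes: "lsst8:2 lsst9:2"    # hypothetical node:process list
    }
    association: {
        launch: false
    }
    movingobjects: {
        launch: false
    }
}
```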

You will also need an exposure list. You can find examples in $DC2PIPE_DIR/exposureLists; copy one of the lists there to your current directory as well.

If simply running setup dc2pipe as you did above does not set up the proper versions of all the packages you need--that is, if you had to explicitly run setup for some packages afterward--then you will need one other file for local editing, setup.csh (or its bash equivalent):

   cp $DC2PIPE_DIR/etc/setup.csh .   # or the equivalent for bash users

This script is used to set up the proper DC2 environment on the master nodes from which each pipeline is launched. Add the necessary setup calls to get the correct package versions. Then, when launching below, you will need to pass this file via the -e option to launchDC2.

To launch the pipelines, then, type:

   launchDC2 -r . dc2pipe.paf myrunid

This script will complain if the run ID has already been used. If you specialized a setup.csh (or bash equivalent) file, then reference it with the -e option:

   launchDC2 -r . -e setup.csh dc2pipe.paf myrunid

All files related to the run will get saved under /share/DC2root/myrunid. In particular, /share/DC2root/myrunid/pipeline_name/work contains a copy of all of the policy files used to configure that pipeline, along with some logging output from the master node. For full logging, see

Controlling the Mapping of CCD to Slice

By default, the image subtraction and detection pipeline will process the first N CCDs, where N is the number of processes made available via the dc2pipe.paf policy file (under the imageSubtractionDetection section) minus 1. This can be changed by adjusting the CcdOffset policy parameter in each of the three policy files:

  • imageSubtractionDetection/input_policy.paf
  • imageSubtractionDetection/subtractOutput_policy.paf
  • imageSubtractionDetection/exposureOutput_policy.paf

Setting the CcdOffset parameter will cause the first N CCDs to be processed beginning with the CCD having the given number. A value of 1, then, produces the default behavior.
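One way to adjust the parameter in all three files is a sed substitution. The demo below runs on a throwaway file (/tmp/demo_policy.paf is just for illustration); on the cluster you would apply the same substitution to the three policy files listed above:

```shell
# Demo: change CcdOffset from the default (1) to start at CCD 5.
printf 'CcdOffset: 1\n' > /tmp/demo_policy.paf
sed -i 's/^CcdOffset:.*/CcdOffset: 5/' /tmp/demo_policy.paf
cat /tmp/demo_policy.paf   # -> CcdOffset: 5
```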

Stopping/Killing the Pipelines

Even if the pipeline fails with errors, some processes will likely remain running and will need to be stopped when processing is done. To stop the pipelines, run -p dc2pipe.paf -r myrunid

This will extract the head nodes from dc2pipe.paf and stop the processes for the given run ID on each one.

Editor's note: I have found that this script does not always work properly. Running ssh node killall python for each node should do the trick.
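The fallback cleanup from the editor's note can be sketched as a loop. RUN is injected only so the loop can be exercised off-cluster; on the LSST cluster leave it at the default (ssh), and note the node names in the usage comment are hypothetical:

```shell
# kill_leftovers: run "killall python" over ssh on each node given.
kill_leftovers() {
    for node in "$@"; do
        ${RUN:-ssh} "$node" killall python || echo "cleanup failed on $node"
    done
}
# On the cluster: kill_leftovers lsst8 lsst9 lsst10   (hypothetical names)
```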