Last modified on 11/26/2013 09:38:25 AM

Installing LSST Data Management Software Stack, Winter 2013 Release

Before you begin


Install these using your distribution's package manager:

RHEL 6 and derivatives (officially supported platform)
    gcc builds: gcc-c++ gcc-gfortran flex bison libXt-devel ncurses-devel readline-devel libuuid-devel zlib-devel bzip2-devel freetype-devel perl make openssl-devel
    clang builds: clang 3.0

RHEL 5 and derivatives (requires an extra step; see below)
    which perl make gcc-c++ flex bison libX11-devel readline-devel zlib-devel gcc44-c++ gcc44-gfortran e2fsprogs-devel bzip2-devel libXt-devel libstdc++44-devel

Ubuntu 10.04 (known to work)
    Minimum: curl flex bison graphviz make perl zlib1g-dev libbz2-dev libreadline-dev libncurses5-dev libxt-dev g++ gfortran uuid-dev libssl-dev
    Development: g++ gfortran git-core autoconf automake libtool m4 make flex bison libx11-dev libncurses5-dev libreadline5-dev patch libuuid1 uuid-dev latex2html libxaw7-dev zlib1g-dev libbz2-dev

Ubuntu 12.04 (requires an extra step; see below)
    curl libx11-dev libreadline6-dev zlib1g-dev dpkg-dev libbz2-dev gfortran libxaw7-dev libfreetype6-dev texlive-latex-base libatlas-base-dev libatlas-dev libssl-dev

Mac OS X 10.7 (Lion) and 10.8 (Mountain Lion) (requires clang and special instructions)
    XCode 4.3, Command-line tools (use the Download preferences pane in Xcode), gfortran for XCode 4.3

Mac OS X 10.x, x < 7: does not work (compiler too old)

Installing a New Stack

Binary installs


curl -O

followed by:

su -c "bash lsst-distrib install"

if on RHEL, or

sudo bash lsst-distrib install

if on Ubuntu or OS X. Either command will download (using rsync) a pre-compiled binary distribution for your platform.

We currently support:

  • RHEL 6 (or derivatives)
  • RHEL 5 (or derivatives)
  • Ubuntu 12.04 LTS (or derivatives)
  • Mac OS X 10.7 (Lion)
  • Mac OS X 10.8 (Mountain Lion)

The script will autodetect your platform and download the correct binaries. More platforms may be added in the future, depending on popular demand.

The binaries will be downloaded to /opt/lsst/$PLATFORM (e.g., /opt/lsst/rhel-6-x86_64). You can download to a different directory by setting the environment variable LSST_HOME=/directory/to/download. Note however that this feature is highly experimental.
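For example, a bash sketch of redirecting the install location (the ~/lsst path below is purely illustrative; LSST_HOME is the variable the installer honors, per the paragraph above):

```shell
# Redirect the binary install away from /opt/lsst.
# ~/lsst is an example path, not a requirement.
LSST_HOME="${LSST_HOME:-$HOME/lsst}"
export LSST_HOME
mkdir -p "$LSST_HOME"
echo "binaries will be installed under: $LSST_HOME"
```

When running the installer under sudo, keep in mind that sudo strips most environment variables; `sudo -E` (or setting the variable inside the root shell) keeps LSST_HOME visible to the install script.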

Building from source for the impatient

RHEL 6 (or derivatives)

# bash users:
cd root/directory/where/lsst/stack/will/be/installed    # e.g., ~/lsst


export NCORES=$( (sysctl -n hw.ncpu || (test -r /proc/cpuinfo && grep processor /proc/cpuinfo | wc -l) || echo 2) 2>/dev/null )

curl -O

eups distrib install --nolocks -t v6_2 lsst_distrib
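The RHEL 6 recipe above computes NCORES but never hands it to the build tools. Before running eups distrib install, you can wire it into make and scons via MAKEFLAGS and SCONSFLAGS, the convention used elsewhere on this page:

```shell
# Detect the core count (Mac: sysctl; Linux: /proc/cpuinfo; fallback: 2)
NCORES=$( (sysctl -n hw.ncpu || grep -c processor /proc/cpuinfo || echo 2) 2>/dev/null )
export NCORES
# Hand the count to make and scons so the build is actually parallel
export MAKEFLAGS="-j $NCORES"
export SCONSFLAGS="-j $NCORES"
echo "building with $NCORES cores"
```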

RHEL 5 (or derivatives)

# bash users:
cd root/directory/where/lsst/stack/will/be/installed    # e.g., ~/lsst


export NCORES=$( (sysctl -n hw.ncpu || (test -r /proc/cpuinfo && grep processor /proc/cpuinfo | wc -l) || echo 2) 2>/dev/null )

curl -o

eups distrib install rhel5_gcc44 4.4
setup rhel5_gcc44

eups distrib install --nolocks -t v6_2 lsst_distrib

Ubuntu 12.04 and similar

# bash users:
cd root/directory/where/lsst/stack/will/be/installed    # e.g., ~/lsst


export NCORES=$( (sysctl -n hw.ncpu || (test -r /proc/cpuinfo && grep processor /proc/cpuinfo | wc -l) || echo 2) 2>/dev/null )

# Additional environment variables needed after Ubuntu's toolchain changes
export LDFLAGS+=" -Wl,--no-as-needed"
export SCONSFLAGS+=" LINKFLAGS='-Wl,--no-as-needed' --setenv"

curl -O

eups distrib install --nolocks -t v6_2 lsst_distrib

Mac OS X 10.7 (Lion) and 10.8 (Mountain Lion) and Linux with clang

If you're running on OS X, this may be needed:

# make a symlink for gfortran:
sudo ln -s /usr/bin/gfortran-4.2 /usr/bin/gfortran

For both OS X and Linux, do:

# bash users:
cd root/directory/where/lsst/stack/will/be/installed    # e.g., ~/lsst


export LANG=C
export CC=clang
export CXX=clang++

export NCORES=$( (sysctl -n hw.ncpu || (test -r /proc/cpuinfo && grep processor /proc/cpuinfo | wc -l) || echo 2) 2>/dev/null )
export SCONSFLAGS="-j $NCORES cc=clang"

curl -O

On OS X, complete the stack build by:

eups distrib install --nolocks -t v6_2 lsst_distrib

On RHEL6, complete the stack build by:

# CHECK: What doesn't work in lsst_distrib?
eups distrib install --nolocks -t v6_2 testing_pipeQA

For Mac OS X 10.7 (Lion), using the system clang alongside macports

For people who are using macports for package management, you may (or may not) want to continue using your macports versions of numpy, scipy, matplotlib, mysql, etc instead of adding LSST versions, for various reasons. You do not need macports to install the LSST stack - you can just use the directions above if you have never used macports. However, if you do have macports installed, there are a few things to watch out for.

First - if you use macports for your mac package management, make sure you are NOT using the macports clang (you must use the Xcode-installed clang). Follow the steps above up to and including sourcing loadLSST.csh (or the bash equivalent).
Then - if you want to just use LSST versions of everything (reinstalling numpy, mysql, etc.), do the 'eups distrib install --nolocks -t v6_2 lsst_distrib' step. If you run into build errors, try temporarily removing anything macports-related from your path (including your library paths): libraries from macports can sometimes be picked up instead of system libraries and cause errors in the build.
On the other hand - if you want to keep your good old macports versions of numpy, matplotlib, scipy, sqlite3, mysql5 and mysqldb (and python), you can use those third-party packages by following these next steps after you source loadLSST.csh.

1 - edit the site/manifest.remap file (this file tells eups distrib whether or not to download a package from LSST).

cat >> site/manifest.remap <<EOF
        numpy   system
        scipy   system
        matplotlib system
        pyfits     system
        sqlite     system
        pysqlite   system
        mysqlclient system
        mysqlpython system
EOF

2 - create the necessary files (*.cfg files) for scons so that the LSST installation system (eups + scons) can find any libraries and headers it needs to compile other packages against these system packages. This is only necessary for python (but the script already took care of python for you, so you're good to go there), numpy, sqlite, and mysqlclient.

# for numpy
cd $EUPS_PATH/DarwinX86/external
mkdir numpy
mkdir numpy/ups
curl > numpy/ups/numpy.cfg
eups declare numpy system -r numpy -m none
# for sqlite and mysql
cd $EUPS_PATH/DarwinX86/external
mkdir -p mysqlclient/ups sqlite/ups
curl > mysqlclient/ups/mysqlclient.cfg

# CHECK: This may not be necessary
cat > sqlite/ups/sqlite.cfg
# -*- python -*-

import lsst.sconsUtils

dependencies = {}

config = lsst.sconsUtils.ExternalConfiguration(

Then link to the libraries and include files as eups expects.

cd $EUPS_PATH/DarwinX86/external
ln -s /opt/local/lib/mysql5/mysql mysqlclient/lib
mkdir mysqlclient/include
ln -s /opt/local/include/mysql5/mysql mysqlclient/include/mysql

ln -s /opt/local/lib/ sqlite/lib

3 - Next eups declare all of these system packages (even the ones you didn't have to make *cfg files for, because the manifest.remap doesn't tell eups how to 'setup' a package, only that it shouldn't download it).

cd $EUPS_PATH/DarwinX86/external
eups declare numpy system -r numpy -m none
eups declare matplotlib system -r none -m none
eups declare scipy system -r none -m none
eups declare mysqlclient system -r mysqlclient -m none
eups declare mysqlpython system -r none -m none
eups declare sqlite system -r sqlite -m none
eups declare pysqlite system -r none -m none
eups declare pyfits system -r none -m none

4 - And now you can finally install the rest of the stack

# Set some environment variables to use clang
setenv LANG C
setenv CC clang
setenv CXX clang++

# Find out the number of CPUs to speed up builds via SCONSFLAGS and MAKEFLAGS 
setenv NCORES `bash -c "(sysctl -n hw.ncpu || ( test -r /proc/cpuinfo && grep processor /proc/cpuinfo | wc -l ) || echo 2) 2> /dev/null"`
setenv  MAKEFLAGS "-j $NCORES"
setenv SCONSFLAGS "-j $NCORES cc=clang"

# Finish the stack installation
eups distrib install --nolocks -t v6_2 lsst_distrib

Step by step instructions (for Linux)

Bootstrapping the environment

If you have already installed the software stack, be sure to unset your environment variables for that stack. To do this, type (after the % prompt) one of the following:

% unset LSST_HOME EUPS_PATH      # for bash users
% unsetenv LSST_HOME EUPS_PATH   # for tcsh users

Create and change into the directory where LSST DM stack is to be installed (the "LSST home"):

% mkdir -p /the/LSST/installation/root && cd /the/LSST/installation/root

The home of the LSST installation is your choice. Just don't use the same location as a previous LSST installation unless you move it out of the way first.

Installing the stack involves downloading and building a number of (sizable) source packages. If you have a multi-core machine with sufficient memory (at least 1 GB per core), you can speed up the builds significantly by allowing the LSST installer to use all cores.

bash users:

export LANG=C
export CC=clang
export CXX=clang++

# Find out the number of CPUs to speed up builds via SCONSFLAGS and MAKEFLAGS (works on Linux and Mac)
export NCORES=$( (sysctl -n hw.ncpu || (test -r /proc/cpuinfo && grep processor /proc/cpuinfo | wc -l) || echo 2) 2> /dev/null )
export  MAKEFLAGS="-j $NCORES"

csh users:

setenv LANG C
setenv CC clang
setenv CXX clang++

# Find out the number of CPUs to speed up builds via SCONSFLAGS and MAKEFLAGS (works on Linux and Mac)
setenv NCORES `bash -c "(sysctl -n hw.ncpu || ( test -r /proc/cpuinfo && grep processor /proc/cpuinfo | wc -l ) || echo 2) 2> /dev/null"`
setenv  MAKEFLAGS "-j $NCORES"

Download and run the installation setup script:

% curl -O
% bash

This installs the basic packages required to install other packages. It also sets up the loadLSST.* scripts which you should source:

% source # for bash users
% source loadLSST.csh # for csh users

to get LSST tools (e.g., the eups command) added to your path.

Ubuntu 12.04 specific step

Starting with Natty Narwhal (Ubuntu 11.04), Ubuntu changed the default behavior of the linker. While this change is intended to make linking more robust, it currently breaks the LSST build. To undo its effects, add the following environment variables:

export LDFLAGS+=" -Wl,--no-as-needed"
export SCONSFLAGS+=" LINKFLAGS='-Wl,--no-as-needed' --setenv"

RHEL 5 specific step

If you're using RHEL 5 (or a derivative), your default compiler version is too old to compile LSST DM code. You will need to install the gcc44 RPM packages, and a special package to make them known to EUPS:

eups distrib install rhel5_gcc44 4.4
setup rhel5_gcc44

Warning: This package will set the environment variables LAPACK=None, ATLAS=None and BLAS=None. These are used by the numpy installer to determine whether to look for and build with external LAPACK, BLAS and/or ATLAS libraries. If you have compiled any of these (with the gcc44 compiler), set the corresponding variables to the directories where they reside.
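If you do have gcc44-built copies of these libraries, the variables can be pointed at them like so. The paths below are placeholders for illustration, not real locations from this page:

```shell
# Placeholders: substitute the directories where YOUR gcc44-built
# LAPACK/BLAS/ATLAS libraries actually live.
export LAPACK=/usr/local/lapack-gcc44
export BLAS=/usr/local/blas-gcc44
export ATLAS=/usr/local/atlas-gcc44
echo "LAPACK=$LAPACK BLAS=$BLAS ATLAS=$ATLAS"
```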

Installing the stack

To install the Winter2013 release of the DM stack, type:

% eups distrib install --nolocks -t v6_2 lsst_distrib

Running a Demo

We provide a simple demonstration of using the LSST DM stack to detect sources in a simulated LSST image (a single chip):

curl -O

tar xzf lsst_dm_stack_demo-
cd lsst_dm_stack_demo-

setup obs_sdss

(note: this is a ~210MB download). Look into the README file for more information.

On the NCSA lsst* machines this repository is not yet available as /lsst3/lsst_dm_stack_demo-Summer2012

Known Issues

Building the stack takes a long time

The current build system defaults to using a single core when building the stack. This behavior can be overridden using the MAKEFLAGS and SCONSFLAGS environment variables:


# Find out the number of CPUs to speed up builds via SCONSFLAGS and MAKEFLAGS (this works on Linux and Mac)
export NCORES=${NCORES:-$(sysctl -n hw.ncpu 2>/dev/null || (test -r /proc/cpuinfo && grep processor /proc/cpuinfo | wc -l) 2>/dev/null || echo 2)}

# Set up the basic environment to ~/lsst
curl -o

# Install Winter2013 stack
# Install astrometry_net and wcslib separately because of -jN bugs (e.g. #1970 and related)
MAKEFLAGS="-j $NCORES" SCONSFLAGS="-j $NCORES" eups distrib install --nolocks -t v6_2 lsst_distrib

EUPS Locking

Some users have encountered difficulties with the more aggressive locking in the new EUPS. If a stack will be used by multiple users, you should at least point the EUPS lock directory at a location writable by all users, such as "/tmp", via the site configuration under $LSST_HOME/site/.

If you still encounter problems, you can disable locking entirely by instead setting the lock directory to None.
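The page elides the exact file and option names, so the sketch below fills them in from EUPS documentation of this era: the site startup file is assumed to be $LSST_HOME/site/startup.py and the option hooks.config.site.lockDirectoryBase. Verify both against your EUPS version before relying on this.

```shell
# ASSUMPTION: the file name startup.py and the option name
# hooks.config.site.lockDirectoryBase come from contemporary EUPS
# documentation, not from this page.
LSST_HOME="${LSST_HOME:-$HOME/lsst}"
mkdir -p "$LSST_HOME/site"
cat >> "$LSST_HOME/site/startup.py" <<'EOF'
hooks.config.site.lockDirectoryBase = "/tmp"  # or None to disable locking
EOF
echo "wrote $LSST_HOME/site/startup.py"
```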

Numpy may fail to build

When you are installing "pipe_tasks" the installation will sometimes fail on numpy. The workaround is to install numpy, then resume your installation of lsst_distrib. For example:

  • eups distrib install -t v6_2 numpy
  • eups distrib install -t v6_2 lsst_distrib

The underlying problem is that LD_LIBRARY_PATH is set as part of installing lsst_distrib, and this can confuse the numpy installer.

How to run common tasks

Before you begin, be sure to setup the appropriate packages. As of 2012-09-?? we use tag "v6_2", soon to be "stable":

setup -t v6_2 pipe_tasks # for almost all tasks
setup -t v6_2 obs_sdss --keep # to process SDSS data
setup -t v6_2 obs_lsstSim --keep # to process LSST Sims

To assure stability in your stack, save exactly which versions you are using and use this for subsequent setups. That way if the stack is updated you can continue using the older versions. If you fail to do this you may find that your procedures stop working:

eups list -s >myversions # to save an exact set of versions to a file named "myversions"
# CHECK: does this actually work?
setup -m myversions # to setup an exact set of versions from this file

Running on Stripe 82 with extended source photometry turned on

Acquire and install the astrometry_net_data package for Stripe 82:

curl -O 
tar xzf sdss-2012-05-01-0.tgz
eups declare -r sdss-2012-05-01-0 astrometry_net_data sdss-2012-05-01-0

Make sure the meas_extensions_multiShapelet and astrometry_net_data packages are installed and setup:

eups distrib install meas_extensions_multiShapelet -t Winter2013
setup -t v6_2 meas_extensions_multiShapelet --keep
setup astrometry_net_data sdss-2012-05-01-0 --keep

For SDSS data, extended source photometry is already enabled by default on both single frames and coadds. For ImSim, follow the other instructions for running one of the process*.py scripts, but add the command-line option below:


Running a Stripe 82 SFM

SDSS fpC files must be preprocessed before they can be coadded. The input data is found in /lsst7/stripe82/dr7/runs. Here is an example command:

setup pipe_tasks
setup obs_sdss -k
setup astrometry_net_data sdss-2012-05-01-0

sdss /lsst7/stripe82/dr7/runs --id run=1033 camcol=2 field=111 filter=g --output /nfs/lsst7/stripe82/dr7-coadds/v1/run2

Creating a Stripe 82 co-add

Every time you want to create a coadd the first step is to create a sky map. This describes the geometry of the coadd as a set of large tracts (which are essentially large exposures) subdivided into patches (which are subregions of approximately the size of a science image).

Create the sky map using an existing data directory as input and a new directory for output. That new directory then becomes both the input and output for all subsequent commands.

Stripe 82 has one extra consideration: if you want a coadd consisting of a single camcol from a single stripe (N or S), then the declination range of the sky map should be just large enough to include that one camcol from that one stripe; otherwise your coadd will contain many EDGE pixels along the bottom and top.

# make a sky map for camcol 2, stripe N
sdss /lsst7/stripe82/dr7-coadds/v1/run2 --config coaddName=goodSeeing,-0.42454,2024 --output myCoaddDir

Use to determine which patches contain useful data. For example:

sdss myCoaddDir --config coaddName=goodSeeing --id filter=g tract=3 --config raDecRange=333.693,-0.729,334.432,-0.350 select.camcols=2,2 select.strip=N select.quality=2 select.maxFwhm=2.5

If you need to process specific images to fill a skypatch then add config option showImageIds=True to the command; this will list the ID of each image found.

Make coadd patches. The following example uses a single process to make all of the above coadd patches; you can manually parallelize the process by running multiple instances with different sets of patches.

sdss myCoaddDir --id filter=g tract=3 patch=113,0^114,0^115,0^116,0 --config coaddName=goodSeeing desiredFwhm=1.7 select.camcols=2,2 select.strip=N select.quality=2 select.maxFwhm=2.5

Running SFM on a Stripe 82 co-add

sdss myCoaddDir --id filter=u tract=3 patch=113,0^114,0^115,0^116,0 --output myCoaddSFMDir --config calibrate.initialPsf.fwhm=1.7 detection.thresholdType="pixel_stdev"

$DATAREL_DIR/bin/ingest/ --camera=sdss myDatabase
# (or use same one as created for SFM ingestion)

mkdir myCoadd-csv

$DATAREL_DIR/bin/ingest/ --camera=sdss --database=myDatabase --strict --coadd-names=goodSeeing --create-views myCoadd-csv myCoaddSFMDir

$DATAREL_DIR/bin/ingest/ --camera=sdss --database=myDatabase --coadd-names=goodSeeing --exposure-metadata=myCoadd-csv/GoodSeeingCoadd_Metadata.csv --ref-catalog=myCoaddDir/_parent/_parent/refObject.csv myCoadd-csv 

Running forced photometry

sqlite3 myCoadd/_parent/_parent/registry.sqlite3 \
    "SELECT run||' '||camcol||' '||field||' '||filter FROM raw;" |
while read run camcol field filter; do
    sdss myCoadd --output forcedPhot --id run=$run camcol=$camcol field=$field filter=$filter -c references.dbName=myDatabase
done

Alternatively, use the list of runs/fields that went into the coadd:

sort -u myCoadd/_parent/_parent/runrerunfield_uniq.lis |
while read run rerun field; do
    sdss myCoadd --output forcedPhot --id camcol=2 field=$field filter=r run=$run -c references.dbName=myDatabase
done

In either case:

mkdir forcedPhot-csv

$DATAREL_DIR/bin/ingest/ --camera=sdss --database=myDatabase --coadd-name=goodSeeing --create-views forcedPhot-csv forcedPhot
$DATAREL_DIR/bin/ingest/ --camera=sdss --database=myDatabase

Note that the database used for the reference sources (configuration option references.dbName) has no default, and so must be specified explicitly, as in the examples above.

Some example queries

Getting 10 example calibrated magnitudes out of a database with a Source and Science_Ccd_Exposure table:

SELECT scisql_dnToAbMag(s.psfFlux, sce.fluxMag0) as psfMag_r, 
       scisql_dnToAbMagSigma(s.psfFlux, s.psfFluxSigma, sce.fluxMag0, sce.fluxMag0Sigma) psfMagErr_r 
FROM Source s 
JOIN Science_Ccd_Exposure sce 
ON (s.scienceCcdExposureId = sce.scienceCcdExposureId) 
WHERE s.filterId = 2 
LIMIT 10;

If you are using a coadd database (a GoodSeeing coadd in this case), the names of the tables and columns change, but the syntax is the same.

SELECT scisql_dnToAbMag(s.psfFlux, sce.fluxMag0) as psfMag_r,
       scisql_dnToAbMagSigma(s.psfFlux, s.psfFluxSigma, sce.fluxMag0, sce.fluxMag0Sigma) psfMagErr_r   
FROM GoodSeeingSource s   
JOIN GoodSeeingCoadd sce   
ON (s.goodSeeingCoaddId = sce.goodSeeingCoaddId) 
WHERE s.filterId = 2 

Here is the example query sent to the list. It is much the same as the one above, but with an extra join to the reference catalog. Note there is no LIMIT clause, so it will return all matches for the r band:

SELECT ro.rMag, 
      scisql_dnToAbMag(s.psfFlux, sce.fluxMag0) psfMag,
      scisql_dnToAbMagSigma(s.psfFlux, s.psfFluxSigma, sce.fluxMag0, sce.fluxMag0Sigma) psfMagSigma
FROM Source s
JOIN RefSrcMatch rsm 
ON (s.sourceId = rsm.sourceId)
JOIN RefObject ro 
ON (rsm.refObjectId = ro.refObjectId)
JOIN Science_Ccd_Exposure sce 
ON (s.scienceCcdExposureId = sce.scienceCcdExposureId)
WHERE s.filterId = 2 AND rsm.refObjectId IS NOT NULL;

Developing for the LSST DM Stack


LSST DM software stacks consist of a number of packages, managed by the EUPS tool. EUPS is similar to the environment modules you may have encountered on Beowulf clusters: it lets the user load and mix and match the desired packages by manipulating environment variables such as PATH, LD_LIBRARY_PATH, and PYTHONPATH. EUPS also knows which packages depend on others; for example, loading (or 'setting up', in EUPS speak) the top-level package pipe_tasks will automatically load the packages on which pipe_tasks depends:

[mjuric@moya ~]$ setup pipe_tasks       # setup package pipe_tasks and its dependencies

[mjuric@moya ~]$ eups list -s           # see which packages were set up -- most of these have been pulled in as dependencies of pipe_tasks
afw           	current Winter2012 setup
astrometry_net        0.30       	current stable setup
base          	current beta Winter2012 setup
boost                 1.47.0+5   	current beta Winter2012 setup
cfitsio               3290+1     	current beta Winter2012 setup
coadd_chisquared  	current Winter2012 setup
coadd_utils   	current Winter2012 setup
... etc ...

EUPS also lets you override default packages with your own versions. Example:

[mjuric@moya ~]$ git clone
Initialized empty Git repository in /home/mjuric/afw/.git/
remote: Counting objects: 33716, done.
remote: Compressing objects: 100% (10542/10542), done.
remote: Total 33716 (delta 22288), reused 29944 (delta 19804)
Receiving objects: 100% (33716/33716), 30.93 MiB | 7.12 MiB/s, done.
Resolving deltas: 100% (22288/22288), done.
[mjuric@moya ~]$ cd afw/
[mjuric@moya afw]$ git checkout Winter2012/Release
Branch Winter2012/Release set up to track remote branch Winter2012/Release from origin.
Switched to a new branch 'Winter2012/Release'

[mjuric@moya afw]$ setup -j -r .

[mjuric@moya afw]$ scons -j 16 opt=3 -s
Setting up environment to build package 'afw'.
Warning: afwdata is not set up; not running the tests!

The above will make the cloned afw the 'setup'-ed one; that is, other packages looking for afw will find your locally built copy, instead of the system one.

Note: There's more in this older (and out-of-date) document.

Installing packages to a personal directory

In shared environments, where a system-wide, read-only stack exists, it is useful to be able to install one's own packages to a personal directory. This is where the mksandbox command helps:

[mjuric@moya ~]$ mksandbox mystack

This will create a subdirectory 'mystack', and create some EUPS-related files in it (for bookkeeping purposes). It needs to be done only once.

To let EUPS know about the directory, do:

[mjuric@moya ~]$ export LSST_DEVEL="$PWD/mystack"
[mjuric@moya ~]$ source $LSST_HOME/ 

Now you can 'eups distrib install' new packages into your "personal stack".

For more information, see here.

Enabling Builds with GPU Acceleration

GPU-aware packages (currently, only afw) look for the cuda_toolkit EUPS package to locate the NVIDIA CUDA compilers. This package must be installed explicitly. For example:

export CUDA=/usr/local/cuda
eups distrib install cuda_toolkit 4.1+1
setup cuda_toolkit 4.1+1

will install and set up the cuda_toolkit EUPS support package for the CUDA 4.1 toolkit residing in /usr/local/cuda. Note: the cuda_toolkit package will not install NVIDIA's compilers -- you must already have those installed. It only sets up the environment variables and symlinks that other packages need to find CUDA.

After setting up cuda_toolkit, any future afw builds will be GPU-enabled.

Developing on

The public stack on moya is located under a path that identifies it as the Winter2012 release.

Developing at NCSA

The public stack on the LSST cluster at NCSA is available at a shared location. This will only work on machines running Red Hat Enterprise Linux (RHEL) 6.

Building with clang

At NCSA, there's a build of clang in ~mjuric/clang/3.0. Add its bin subdirectory to PATH to enable it. Otherwise, build clang using these instructions.

Clang-based application development and testing uses the pipe_tasks framework. The Active Messaging System used by the DM gcc-based stack has not been ported to clang, so the DM event management and process orchestration (ctrl_*) packages used for multi-processor testing are unavailable.

Follow the instructions for installing on OS X 10.7 with the exception noted in the instructions for RHEL6 builds.


Documentation for sconsUtils can be found in several locations.

NOTE: sconsUtils generates a file for every package that carries its version and the versions of the dependencies it was built against. This file is imported when the package is loaded, causing an import error if it does not yet exist. This means that even pure-Python packages must now be built with scons before they can be used.


Recent versions of the "lsst" package set the LSST_GIT and LSST_DMS environment variables, so these can be used in largely the same way as the old LSST_SVN and LSST_DMS variables.

To clone the git repo for an LSST package, do:

git clone $LSST_DMS/my_package

Installing without the script and lsst package

These instructions are for those who would like to use an existing EUPS install, and avoid the "lsst" package and its associated scripts (most likely people at Princeton who are developing in both LSST and HSC environments).

  • Make sure you have the latest lssteups (see the next section).
  • Make sure you have eups >= 1.2.23.
  • Set your EUPS_PATH to whatever you like.
  • Set your EUPS_PKGROOT to include
  • Remove or otherwise disable old scons and sconsDistrib packages; the new scons is the equivalent of the old sconsDistrib, and that can lead to some confusion. Or don't, but come back to this step if you have problems down the road.
  • Be aware that things in your manifest.remap might break the install (or they might do what you want them to do, but it's another place to look if things go awry).
  • Install the new sconsUtils (which should install Python, Tcl/Tk, scons, and Doxygen as dependencies) with eups distrib install sconsUtils.
  • Realize that you won't get the LSST_GIT and LSST_DMS environment variables noted above unless you set them yourself.
  • Install away!
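The steps above can be sketched as a single environment block. Every path here is illustrative, and the package-server URL (elided on this page) is left as a placeholder comment rather than guessed:

```shell
# Manual setup without the "lsst" package; adjust paths to taste.
export EUPS_PATH="$HOME/lsst-stack"
mkdir -p "$EUPS_PATH"
# EUPS_PKGROOT must also list the LSST package server (URL elided above), e.g.:
# export EUPS_PKGROOT="<lsst-pkgroot-url>|$EUPS_PKGROOT"
echo "EUPS_PATH=$EUPS_PATH"
```

With this environment in place, the eups distrib install sconsUtils step above installs into your chosen EUPS_PATH.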