On this page...

  1. Chimera: Software
    1.1 Compilers
    1.2 MPI
    1.3 Process and Queue Management
    1.4 OS
    1.5 MPJ
    1.6 Matlab

1.  Chimera: Software

The development of the software environment is in progress.

1.1  Compilers

Three compiler suites are installed on Chimera: GCC, Intel, and PGI. The easiest way to set up your environment to use the suite of your choice is to use the [modules] package. If for some reason you need more detailed information, keep reading.
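As a sketch, selecting a suite with the modules package might look like the following; the module name "intel" is an assumption, so run module avail on Chimera to see what is actually installed:

```shell
# Sketch: selecting a compiler suite via the modules package.
# The module name "intel" below is a placeholder -- check `module avail`
# on Chimera for the real names.
if command -v module >/dev/null 2>&1; then
    module avail            # list available compiler/MPI modules
    module load intel       # put the chosen compiler suite in PATH
    have_modules=yes
else
    have_modules=no         # modules not configured in this shell
fi
echo "modules available: $have_modules"
```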


GCC should be in the default path.


The PGI compilers are installed in /usr/local/pgi/ and were built with ACML support. To use them, add /usr/local/pgi/linux86-64/10.9/bin to the PATH environment variable and point the LM_LICENSE_FILE environment variable at the PGI license file.
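In bash, that setup is a pair of exports; the license-file path below is a placeholder, since this page does not give the real location (ask the system administrators):

```shell
# PGI 10.9 environment setup (bash). The bin directory matches the text
# above; the LM_LICENSE_FILE value is a PLACEHOLDER -- substitute the
# real license file location on Chimera.
export PATH=/usr/local/pgi/linux86-64/10.9/bin:$PATH
export LM_LICENSE_FILE=/path/to/pgi/license.dat   # placeholder value
echo "first PATH entry: ${PATH%%:*}"
```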

PGI documentation:

Notes from the PGI web site:

  • PGI:
    • PGI "Server" Fortran/C/C++ academic pricing:
      • 2-user: license $1999, subscription $770 per year
      • 5-user: license $3749, subscription $1450 per year
      • 10-user: license $6999, subscription $2710 per year
    • "Cluster Development Kit": adds more tools (profilers/debuggers) for up to 256 processes.
    • Full support for OpenMP 3.0 (up to 256 cores).
    • Pre-validated, de facto standard support libraries, including NetCDF, F95 OpenGL, ATLAS, ScaLAPACK, FFTW, MPICH, MPICH2, and LAM MPI.
    • Full support for ANSI C99.
  • PathScale:
    • pricing available only by phone/email
    • OpenMP 2.5


The Intel compilers are installed in /usr/local/intel/. To set up the environment for them, csh/tcsh users should run 'source /usr/local/intel/composerxe-2011.1.107/bin/compilervars.csh intel64', while bash users should run 'source /usr/local/intel/composerxe-2011.1.107/bin/compilervars.sh intel64'. The icc command is used for C code, icpc for C++, and ifort for Fortran.
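As a bash sketch, guarded so it is a harmless no-op on machines without this particular Intel install:

```shell
# Intel Composer XE environment setup (bash sketch).
ivars=/usr/local/intel/composerxe-2011.1.107/bin/compilervars.sh
if [ -f "$ivars" ]; then
    . "$ivars" intel64          # sets PATH, LD_LIBRARY_PATH, MKL vars, etc.
    ifort --version | head -1   # quick sanity check that ifort is found
fi
```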

Intel documentation:

Notes from the web site:

  • Intel:
    • reportedly works very well on AMD hardware too, and is recommended by Penguin
    • Intel Compiler Suite Professional Edition for Linux:
      • $1654 for 2 concurrent users
      • $3623 for 5 concurrent users
      • "The Compiler Suite Professional Editions include the Intel C++ Compiler, Intel Fortran Compiler, Intel Integrated Performance Primitives, Intel Math Kernel Library, and Intel Threading Building Blocks."
      • OpenMP 3.0. "OpenMP raises the parallelism abstraction away from the API, simplifying threading and making code more portable. Previously limited to loop-based data-parallelism, the new 3.0 standard simplifies both data and task parallelism."
      • Multithreaded Application Support: OpenMP and auto-parallelization allow you to take full advantage of multicore technology, including the latest Intel® multicore processors.
      • Fortran Standards Support: The compiler offers additional features from Fortran 2003 including object-oriented features, type-bound procedures and operators, and interoperability features that make it easier to develop mixed-language applications.
      • Parallel Lint for OpenMP: Performs static analysis to check for OpenMP parallelization correctness. Helps diagnose deadlocks, data races, or potential data dependency—side effects from synchronization issues.
      • Parallel debugger for IA-32 and Intel® 64 architectures
      • Outstanding multithreaded application execution control without added complexity: serialization of parallel regions and detailed information on OpenMP constructs.
      • Intel Fortran Compiler for Linux fully supports the Fortran 95 language standard, as well as the previous standards: Fortran 90, Fortran 77, and Fortran IV. It also includes many features from the Fortran 2003 language standard, as well as numerous popular language extensions.
      • Intel C++ Compiler for Linux is substantially standards compliant, and includes compatibility with GCC and the GNU* tool chain. It also supports Intel® Itanium® 2 processors, including the dual-core Intel® Itanium 2 processor. Intel C++ Compiler for Linux also includes support for additional Linux distributions, including Debian* 4.0.5, 5.0, Ubuntu* 8.10, 9.10, Fedora* 10.

1.2  MPI

Again, the easiest way to select your MPI implementation is to use modules. If you need to do things by hand, the following information may be useful.

There are five MPI builds installed under /usr/mpi/: two for each of the three compilers, except that there is currently no OpenMPI build for the PGI compilers due to bugs in PGI. To use a specific build, prepend /usr/mpi/COMPILER/MPI/bin to the PATH environment variable, where COMPILER is gcc, intel, or pgi, and MPI is mvapich2-1.6 or openmpi-1.4.3.
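For example, to select the GCC + OpenMPI build in bash (swap the two path components for the other compiler/MPI combinations):

```shell
# Put the chosen MPI build's bin directory at the front of PATH so its
# mpicc/mpirun are found before any others.
export PATH=/usr/mpi/gcc/openmpi-1.4.3/bin:$PATH
echo "first PATH entry: ${PATH%%:*}"
```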

1.3  Process and Queue Management

Chimera uses SLURM. See the SLURM Queue Manager page for details.

Some other tools were considered, but are not currently installed.

1.4  OS

Booting with PXELinux and Frisbee

1.5  MPJ

MPJ is an MPI-like message-passing library for "high-performance" Java. It is quite experimental, and may or may not work as expected. A module file has been created to set the appropriate environment variables:

module load mpj

To compile (replace MyProgram.java with your own source file):

javac -cp .:$MPJ_HOME/lib/mpj.jar MyProgram.java

To use on Chimera, you'll need to use the salloc method of getting a SLURM allocation. E.g.:

salloc -n 8

The script will (should) start the MPJ daemons on the allocated nodes and then kick off the actual Java job. The daemons should be killed automatically by the SLURM epilog.
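Putting the steps together, a full MPJ session might look like the sketch below. These are cluster-only commands, so it is shown as a job sketch rather than something runnable elsewhere; the class name HelloMPJ and the launch script name are placeholders (check the mpj module's documentation for the actual launcher):

```shell
# Hypothetical MPJ session on Chimera -- HelloMPJ and the launch script
# name are placeholders, not real names from this page.
module load mpj                                   # sets MPJ_HOME, PATH, etc.
javac -cp .:$MPJ_HOME/lib/mpj.jar HelloMPJ.java   # compile against MPJ classes
salloc -n 8 <mpj-launch-script> HelloMPJ          # daemons start, job runs
```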

Please remember: MPJ on Chimera has been only very lightly tested!

1.6  Matlab

Matlab is available for limited use. To request a Matlab allocation, include -L "matlab*N" in your SLURM submission, where N is the number of Matlab-enabled nodes you wish to use. Current policy allows no more than 5 Matlab-enabled nodes at a time. For example, to use Matlab on five nodes:

salloc -N5 -L "matlab*5" /bin/bash

Using the Parallel-Processing Toolbox is slightly more involved. Allocate your SLURM job as described above, and have your job script source the appropriate startup script in /usr/local/MatlabR2008a-mdce/bin/ before running matlab. That script will start the parallel job manager and job workers on your allocated nodes.
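As a session sketch, a parallel Matlab job might look like the following. The exact name of the mdce startup script is not given on this page, so it is left as a placeholder; these are cluster-only commands:

```shell
# Cluster-only sketch: 5 Matlab-enabled nodes with the parallel toolbox.
salloc -N5 -L "matlab*5" /bin/bash                     # interactive allocation
. /usr/local/MatlabR2008a-mdce/bin/<startup-script>    # script name not given here
matlab -nodisplay                                      # run Matlab without a GUI
```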