SPEC MPI2007 Flag Description
SGI SGI ICE X (Intel Xeon E5-2690 v2, 3.0 GHz)


Compiler Invocation

C benchmarks

C++ benchmarks

126.lammps

Fortran benchmarks

Benchmarks using both Fortran and C


Base Portability Flags

121.pop2

127.wrf2

130.socorro


Peak Portability Flags

121.pop2

127.wrf2

130.socorro


Base Optimization Flags

C benchmarks

C++ benchmarks

126.lammps

Fortran benchmarks

Benchmarks using both Fortran and C


Peak Optimization Flags

C benchmarks

104.milc

122.tachyon

C++ benchmarks

126.lammps

Fortran benchmarks

107.leslie3d

113.GemsFDTD

129.tera_tf

137.lu

Benchmarks using both Fortran and C

115.fds4

121.pop2

127.wrf2

128.GAPgeofem

130.socorro

132.zeusmp2


Other Flags

C benchmarks

C++ benchmarks

126.lammps

Fortran benchmarks

Benchmarks using both Fortran and C


Implicitly Included Flags

This section contains descriptions of flags that were included implicitly by other flags, but which do not have a permanent home at SPEC.


System and Other Tuning Information

SGI MPT 2.0x options and environment variables

Job startup command and options

mpiexec_mpt [ global_opts ] local_opts cmd [ : local_opts cmd ] ...

The mpiexec_mpt command launches a Message Passing Toolkit (MPT) MPI program in a batch scheduler-managed cluster environment. mpiexec_mpt uses the list of cluster nodes it receives from the batch scheduler to generate and issue an appropriate mpirun command to launch the multi-node job.

-n <# of processes> or -np <# of processes>

Use this option to set the number of MPI processes to launch for the current arg-set.
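
As an illustration only (the rank count and executable name below are hypothetical placeholders, not values taken from this result), a 64-rank job might be launched as:

mpiexec_mpt -np 64 ./benchmark_binary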

mpiexec [ global_opts ] local_opts cmd [ : local_opts cmd ] ...

PBS Pro's mpiexec command provides the standard mpiexec interface on Altix systems running ProPack 4 or later. It provides functionality equivalent to mpiexec_mpt.
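
For comparison, a hedged sketch of the equivalent launch through PBS Pro's mpiexec (again with a placeholder rank count and executable name) would be:

mpiexec -n 64 ./benchmark_binary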

Environment variables

MPI_REQUEST_MAX

Determines the maximum number of nonblocking sends and receives that can simultaneously exist for any single MPI process. MPI generates an error message if this limit (or the default, if not set) is exceeded. Default: 16384

MPI_TYPE_MAX

Determines the maximum number of data types that can simultaneously exist for any single MPI process. MPI generates an error message if this limit (or the default, if not set) is exceeded. Default: 1024

MPI_BUFS_THRESHOLD

Determines whether MPT uses per-host or per-process message buffers for communicating with other hosts. Per-host buffers are generally faster, but for jobs running across many hosts they can consume a prodigious amount of memory. MPT will use per-host buffers for jobs using up to and including this many hosts and will use per-process buffers for larger host counts. Default: 64

MPI_DSM_DISTRIBUTE (toggle)

If set, NUMA job placement mode is activated. This mode ensures that each MPI process gets a unique CPU and physical memory on the node with which that CPU is associated. Currently, the CPUs are chosen by simply starting at relative CPU 0 and incrementing until all MPI processes have been forked. Default: Not enabled

MPI_IB_RAILS

If the MPI library uses the IB driver as the inter-host interconnect, it will by default use a single IB fabric. If this variable is set to 2, the library will try to make use of multiple available separate IB fabrics and split MPI traffic across them. Default: 1

MPI_IB_DEVS

Directs MPT to open specific IB ports on each rank. If MPI_IB_DEVS is empty or not defined, MPT will assign ranks to IB ports by the formula "local rank modulo number of ports": the first rank on each host will use the first port on that host, and so on. By default MPT will only use the first working port on the first HCA that has a working port.
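
As a hedged illustration of how these variables might be set in a POSIX shell job script before the launch command (the values and the executable name below are assumptions for demonstration, not the settings used for this result):

export MPI_REQUEST_MAX=65536    # raise the nonblocking request limit above the 16384 default
export MPI_TYPE_MAX=2048        # allow more simultaneous datatypes than the 1024 default
export MPI_BUFS_THRESHOLD=128   # keep per-host buffers for jobs spanning up to 128 hosts
export MPI_DSM_DISTRIBUTE=1     # setting this variable activates NUMA job placement mode (see description above)
export MPI_IB_RAILS=2           # split MPI traffic across two separate IB fabrics, if available
mpiexec_mpt -np 64 ./benchmark_binary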

Other Tuning Information

ulimit -s unlimited

Removes limits on the maximum size of the automatically extended stack region of the current process and each process it creates.


Flag description origin markings:

[user] Indicates that the flag description came from the user flags file.
[suite] Indicates that the flag description came from the suite-wide flags file.
[benchmark] Indicates that the flag description came from a per-benchmark flags file.

The flags file that was used to format this result can be browsed at
http://www.spec.org/mpi2007/flags/SGI_x86_64_Intel14_flags.html.

You can also download the XML flags source by saving the following link:
http://www.spec.org/mpi2007/flags/SGI_x86_64_Intel14_flags.xml.


For questions about the meanings of these flags, please contact the tester.
For other inquiries, please contact webmaster@spec.org
Copyright 2006-2010 Standard Performance Evaluation Corporation
Tested with SPEC MPI2007 v2.0.1.
Report generated on Tue Jul 22 13:48:18 2014 by SPEC MPI2007 flags formatter v1445.