Frequently Asked Questions (FAQs) About SPEC OMP2001 Software
- Q1: What is SPEC OMP2001?
- A1: SPEC OMP2001 is a software benchmark product produced
by the Standard Performance Evaluation Corp.'s High-Performance Group (SPEC/HPG).
SPEC is a non-profit organization that includes computer vendors, systems
integrators, universities, research organizations, publishers and consultants
from around the world. The benchmarks are designed to provide performance measurements
that can be used to compare the performance of different parallel computing systems
on compute-intensive parallel workloads.
SPEC OMP2001 contains two benchmark suites. The first suite, SPEC OMPM2001,
measures the performance of shared-memory systems with between four and 32 processors.
The second, SPEC OMPL2001, is designed to measure the performance of systems with as
many as 512 processors.
- Q2: What is a benchmark?
- A2: A benchmark is a standard of measurement or evaluation.
SPEC was formed to establish and maintain computer benchmarks for measuring
and comparing component- and system-level computer performance.
- Q3: What components does SPEC OMP2001 measure?
- A3: Since the benchmarks are designed to reflect applications
requiring compute-intensive parallel processing, they measure performance
of the computer's processors, memory architecture, operating system, and
compiler. It is important to remember the contribution of the latter three
components.
- Q4: What component performance is not measured by SPEC OMP2001?
- A4: The OMP2001 benchmarks do not stress I/O (disk drives),
networking or graphics. It might be possible to configure a system in such
a way that one or more of these components impact the performance of OMP2001,
but that is not the intent of the suites.
- Q5: What is included in the SPEC OMP2001 packages?
- A5: SPEC provides the following:
- SPEC OMP2001 tools for compiling, running and validating the benchmarks
for a variety of operating systems
- source code for the tools, so that they can be built for systems not
covered by the pre-compiled tools
- source code for the benchmarks
- tools for generating performance reports
- run and reporting rules defining how the benchmarks should be used
to produce standard results
- SPEC OMP2001 documentation
SPEC OMP2001 includes tools for most UNIX operating systems. Additional products
for Windows NT and other operating systems will be released later if SPEC
detects enough demand. SPEC OMPM2001 and SPEC OMPL2001 will be shipped on
separate CD-ROM disks.
- Q6: What does the SPEC OMP2001 user have to provide?
- A6: The user must have a computer system with a CD-ROM drive, running
a UNIX-based operating system with both C and FORTRAN90 compilers that support
OpenMP. Depending on the system under test, approximately 3GB of hard-drive space
is needed to install, build and run SPEC OMPM2001; twice as much is
needed for SPEC OMPL2001.
The system should have at least 2GB of RAM for SPEC OMPM2001 and 8GB for
SPEC OMPL2001 to ensure that the benchmarks remain in memory and paging does
not occur. Systems with large processor counts will require greater amounts
of memory.
- Q7: What are the basic steps in running the benchmarks?
- A7: Installation and use are covered in detail in the
SPEC OMP2001 User Documentation. The basic steps are as follows:
- Install SPEC OMPM2001 or SPEC OMPL2001 from media.
- Run the installation scripts specifying your operating system.
- Compile the tools if executables are not provided in SPEC OMP2001.
- Determine which metric you want to run.
- Create a configuration file for that metric. In this file, you specify
compiler flags and other system-dependent information.
- Run the SPEC tools to build (compile), run and validate the benchmarks.
- If the above steps are successful, generate a report based on the run
times and metric equations.
- Q8: What source code is provided? What exactly makes up these suites?
- A8: SPEC OMPM2001 and SPEC OMPL2001 are based on compute-intensive,
parallel-processing applications that are provided as source code containing
OpenMP directives. Medium and large versions of the same nine applications
(seven FORTRAN90 and two C) are used within the two benchmark suites, along
with an additional FORTRAN90 code and an additional C code in the medium suite:
- 310.wupwise_m and 311.wupwise_l: quantum chromodynamics
- 312.swim_m and 313.swim_l: shallow water modeling
- 314.mgrid_m and 315.mgrid_l: multi-grid solver in 3D potential field
- 316.applu_m and 317.applu_l: parabolic/elliptic partial differential equations
- 318.galgel_m: fluid dynamics analysis of oscillatory instability
- 330.art_m and 331.art_l: neural network simulation of adaptive resonance theory
- 320.equake_m and 321.equake_l: finite element simulation of earthquake modeling
- 332.ammp_m: computational chemistry
- 328.fma3d_m and 329.fma3d_l: finite-element crash simulation
- 324.apsi_m and 325.apsi_l: solves problems regarding temperature, wind, and distribution of pollutants
- 326.gafort_m and 327.gafort_l: genetic algorithm code
The numbers in the benchmarks' names serve as identifiers to distinguish
programs from one another and from similar codes in SPEC CPU2000. More detailed
descriptions of the benchmarks can be found in the individual benchmark directories
in the SPEC benchmark tree.
- Q9: What metrics can be measured?
- A9: The benchmark suites can be used to measure and calculate
the following metrics:
SPEC OMPM2001
- SPECompMpeak2001: the geometric mean of 11 normalized ratios (one
for each benchmark) when the benchmarks are compiled with "aggressive" optimization
and possible code modification.
- SPECompMbase2001: the geometric mean of 11 normalized ratios (one
for each benchmark) when the benchmarks are compiled with "conservative" optimization
(all benchmarks compiled with the same flags and no modifications to source code).
- SPECompM2001: the greater of the base and peak metrics.
SPEC OMPL2001
- SPECompLpeak2001: the geometric mean of nine normalized ratios (one
for each benchmark) when the benchmarks are compiled with "aggressive" optimization
and possible code modification.
- SPECompLbase2001: the geometric mean of nine normalized ratios (one
for each benchmark) when the benchmarks are compiled with "conservative" optimization
(all benchmarks compiled with the same flags and no modifications to source code).
- SPECompL2001: the greater of the base and peak metrics.
The ratio for each of the benchmarks is calculated using a SPEC-determined
reference time and the actual run time of the benchmark. A higher score means "better
performance" on the given workload.
- Q10: What is the difference between a "conservative" (base) metric
and an "aggressive" (non-base) metric?
- A10: In order to provide comparisons across different
computer hardware, SPEC provides benchmarks as source code. This means they
must be compiled before they can be run. There was agreement within SPEC
that the benchmarks should be compiled the way users compile programs. But
how do users compile programs? On one side, people might just compile with
the general high-performance options suggested by the compiler vendor. On
the other side, people might experiment with many different compilers and
compiler flags to achieve the best performance. So, while SPEC cannot match
exactly how everyone uses compilers, it can provide metrics that represent
the general characteristics of these two groups.
The base metrics (e.g., SPECompMbase2001) are required for all reported results
and have set guidelines for compilation (e.g., the same flags must be used
in the same order for all benchmarks of the same language, no assertion flags).
The assumed model is that the compiler flags used are those a compiler vendor
would suggest for a given program, knowing only the language it is written in.
The non-base metrics (e.g., SPECompMpeak2001) are optional and have less-strict
requirements (e.g., different compiler options and code modifications related
to parallel performance can be used on each benchmark).
A full description of the distinctions can be found in the SPEC OMP2001 run
and reporting rules.
- Q11: How should I use SPEC OMP2001?
- A11: Typically, the best measurement of a system is the
performance of your own application with your own workload. Unfortunately,
time, money and other constraints make it very difficult to get a wide base
of reliable, repeatable and comparable measurements on different systems.
Benchmarks act as a reference point for comparison. It's the same reason
that gas mileage ratings exist, although probably no driver gets exactly
the same mileage as listed in the ratings. If you understand what benchmarks
measure, they're useful.
It's important to know that SPEC OMPM2001 and SPEC OMPL2001 focus on parallel-processing
performance, not overall system performance. They concentrate only on some of the
factors that contribute to application performance. A graphics or network performance
bottleneck within an application, for example, will not be reflected in these
benchmarks.
Understanding your own needs helps determine the relevance of the benchmarks.
- Q12: Why was SPEC OMP2001 developed?
- A12: As more multi-processor systems became available,
the SPEC High Performance Group saw a need for a benchmark suite to measure
parallel performance, and more specifically, the performance of shared-memory
parallel systems. The OpenMP directives were used because they have become
the de facto standard for implementing this type of parallelism.
- Q13: What criteria were used to select the benchmarks?
- A13: In the process of selecting applications to use as
benchmarks, SPEC considered the following criteria:
- portability to all 64-bit SPEC hardware architectures (including Alpha,
Intel Architecture, MIPS, SPARC, etc.)
- portability to various operating systems
- benchmarks should produce scalable parallel performance over several
architectures.
- benchmarks should not include measurable I/O
- benchmarks should not include networking or graphics
- benchmarks should run in 2GB of RAM for SPEC OMPM2001 and 8GB of RAM
for SPEC OMPL2001 without swapping, even for a single-CPU run.
- no more than five percent of benchmarking time should be spent processing
code not provided by SPEC.
- Q14: Weren't most of the SPEC OMP2001 benchmarks in SPEC CPU2000?
How are they different?
- A14: Although some of the benchmarks from SPEC CPU2000
are included in OMP2001, they all have been given larger workloads. Also,
all of the codes have been modified for parallelism, including the insertion
of OpenMP directives. The revised benchmarks have been assigned different
identifying numbers to distinguish them from versions in previous suites
and to indicate that they are not comparable with their predecessors.
- Q15: Why does SPEC use a reference machine for determining performance
metrics? What machine is used for SPEC OMP2001 benchmark suites?
- A15: SPEC uses a reference machine to normalize the performance
metrics used in the OMP2001 suites. Each benchmark is run and measured on
this machine to establish a reference time for that benchmark. These times
are then used in the SPEC calculations.
SPEC OMP2001 uses an SGI 2100 with four 350MHz processors as the reference
machine. It takes approximately one-and-a-half days to do a SPEC-conforming
run of SPEC OMPM2001 using all four processors on this machine. The performance
relation between two systems measured with the OMP2001 benchmarks would remain
the same even if a different reference machine was used. This is a consequence
of the mathematics involved in calculating the individual and overall (geometric
mean) metrics. The SGI reference machine performs at about 1000 SPECompMbase2001.
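A small Python sketch (with made-up times) illustrates why this is so: changing
the reference machine rescales every score by the same factor, so the ratio
between two systems' scores stays the same.

    from math import prod

    def geomean_ratio_score(reference_times, run_times):
        # Geometric mean of per-benchmark ratios (reference time / run time).
        ratios = [ref / run for ref, run in zip(reference_times, run_times)]
        return prod(ratios) ** (1.0 / len(ratios))

    # Made-up run times for two systems and two candidate reference machines.
    system_a = [4000.0, 9000.0, 2500.0]
    system_b = [2000.0, 3000.0, 1000.0]
    ref_1 = [8000.0, 12000.0, 5000.0]
    ref_2 = [16000.0, 6000.0, 10000.0]

    for ref in (ref_1, ref_2):
        a = geomean_ratio_score(ref, system_a)
        b = geomean_ratio_score(ref, system_b)
        print(round(b / a, 6))  # prints the same value for either reference machine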
- Q16: How long does it take to run the SPEC OMP2001 benchmark suites?
- A16: It depends on the suite and on the machine that is running
the benchmarks, in particular on the number of processors in the target
machine.
- Q17: What if the tools cannot be run or built on a system? Can
they be run manually?
- A17: To generate SPEC-compliant results, the tools used
must be approved by SPEC. If several attempts at using the SPEC tools are
not successful for the operating system for which you purchased OMP2001,
you should contact SPEC for technical support. SPEC will work with you to
correct the problem and/or investigate SPEC-compliant alternatives.
- Q18: Where are SPEC OMP2001 results available?
- A18: Results for all measurements submitted to SPEC are
available at http://www.spec.org/hpg/omp/
- Q19: Can SPEC OMP2001 results be published outside of the SPEC
web site?
- A19: Yes, SPEC OMP2001 results can be freely published
if all the Run and Reporting Rules have been followed and the results are
reviewed by SPEC/HPG for a nominal fee. The SPEC OMP2001 license agreement
binds every purchaser of the suite to the run and reporting rules if results
are quoted in public. A full disclosure of the details of a performance measurement
must be provided to anyone who asks. See the SPEC OMP2001 Run and Reporting
Rules for details.
SPEC strongly encourages that results be submitted for publication on the SPEC web
site, since this ensures a peer review process and uniform presentation of all results.
The Run and Reporting Rules contain an exemption clause for research and
academic use of SPEC OMP2001. Results obtained in this context need not comply
with all the requirements for other measurements. It is required, however,
that research and academic results be clearly distinguished from results
submitted officially to SPEC.
- Q20: How do I contact SPEC?
- A20: Send e-mail to info@spec.org
Press contacts:
Bob Cramblitt or Erin Hatfield
Cramblitt & Company
919-481-4599; cramco@cramco.com