Last updated: 14 Mar 2001 17:00 wesley@sgi.com
(To check for possible updates to this document, please see http://www.spec.org/hpg/omp2001)
Prior to installation, look on your CD in docs/ or docs.nt\ -- for example, if your CD is mounted on /cdrom (Unix) or drive E: (NT), you would find most of the documents in:
/cdrom/docs/ (Unix) or E:\docs.nt\ (NT)
After installation, look in:
$SPEC/docs/ (Unix) or %SPEC%\docs.nt\ (NT)
If you are just getting started with SPEC OMP2001, here is a suggested reading order for the various documents supplied by SPEC:
The following documents are located in docs/ or docs.nt\ as described above:
Document | Description |
---|---|
config.html | To run SPEC OMP2001, you need a config file. This document tells you how to write one. |
errata.txt | Debugging and errata information. |
example-advanced.cfg | A complex sample config file with commentary. |
example-medium.cfg | A complete, but not very complex, sample config file with commentary. |
example-simple.cfg | A simple configuration file with commentary. A first-time user could take this as a template for a first run of SPEC OMP2001. |
install_guide_unix.html | How to install SPEC OMP2001 on UNIX systems. Includes an example installation and an example of running the first benchmark. |
install_guide_nt.txt | How to install SPEC OMP2001 on Windows/NT systems. Includes an example installation and an example of running the first benchmark. |
legal.txt | Copyright notice and other legal information. |
makevars.txt | Advanced users of the suite who want to understand exactly how the benchmarks are built can use this file to help decipher the process. |
readme1st.html | The document you are reading now. It contains a documentation overview and questions and answers regarding the purpose and intent of SPEC OMP2001. |
runrules.html | The SPEC OMP2001 Run and Reporting Rules. These must be followed when generating publicly disclosed results. |
runspec.html | Information on the "runspec" command, which is the primary user interface for running SPEC OMP2001 benchmarks. |
system_requirements.html | A list of the hardware and software needed in order to run the SPEC OMP2001 suite. |
techsupport.txt | Information on SPEC technical support. |
tools_build.txt | How to build (or re-build) the tools, such as runspec. |
utility.html | How to use various utilities, such as specinvoke and specdiff. |
In addition, each individual benchmark in the suite has its own documents, found in the benchmark "docs" subdirectory. For example, the description of the benchmark 330.art_m may be found in:
$SPEC/benchspec/OMPM2001/330.art_m/docs/330.art_m.txt (Unix) or %SPEC%\benchspec\OMPM2001\330.art_m\docs\330.art_m.txt (NT)
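For example, after installation on a Unix system you can view that file with any pager, using the path given above:
```
# Assumes the SPEC environment has been set up, so $SPEC points at the install tree.
more $SPEC/benchspec/OMPM2001/330.art_m/docs/330.art_m.txt
```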
Only on the CD, you will find:
original.src/README: Information about freely-available sources that have been incorporated in SPEC OMP2001.
Background
By providing this background, SPEC hopes to help users set their expectations and usage appropriately, so that they get the most efficient and beneficial use out of this benchmark product.
Overall, SPEC designed SPEC OMP2001 to provide a comparative measure of shared-memory multiprocessor (SMP) performance across the widest practical range of hardware. The result is a set of source code benchmarks developed from real user applications. These benchmarks depend on the processor, memory, compiler, and OpenMP implementation of the tested system.
SPEC is an acronym for the Standard Performance Evaluation Corporation. SPEC is a non-profit organization composed of computer vendors, systems integrators, universities, research organizations, publishers and consultants whose goal is to establish, maintain and endorse a standardized set of relevant benchmarks for computer systems. Although no one set of tests can fully characterize overall system performance, SPEC believes that the user community will benefit from an objective series of tests which can serve as a common reference point.
Webster's II Dictionary defines a benchmark as "A standard of measurement or evaluation."
A computer benchmark is typically a computer program that performs a strictly defined set of operations (a workload) and returns some form of result (a metric) describing how the tested computer performed. Computer benchmark metrics usually measure speed (how fast the workload was completed) or throughput (how many workloads were completed per unit of time). Running the same computer benchmark on multiple computers allows a comparison to be made.
Ideally, the best comparison test for systems would be your own application with your own workload. Unfortunately, it is often very difficult to get a wide base of reliable, repeatable and comparable measurements for comparisons of different systems on your own application with your own workload. This might be due to time, money, confidentiality, or other constraints.
At this point, you can consider using standardized benchmarks as a reference point. Ideally, a standardized benchmark is portable and may already have been run on the platforms that you are interested in. However, before you consider the results, you need to be sure that you understand the correlation between your application/computing needs and what the benchmark is measuring. Are the workloads similar, and do they have the same characteristics? Based on your answers to these questions, you can begin to see how the benchmark may approximate your reality.
Note: It is not intended that the SPEC benchmark suites be used as a replacement for the benchmarking of actual customer applications to determine vendor or product selection.
SPEC OMP2001 focuses on SMP performance, which means these benchmarks emphasize the performance of:
- the computer processor (CPU),
- the memory architecture, and
- the compilers, including their OpenMP support.
It is important to remember the contribution of the latter two components; performance is more than just the processor.
SPEC OMP2001 is made up of two subcomponents that differ in the size of the SMP systems they target:
- OMPM2001 (Medium), aimed at medium-scale SMP systems, and
- OMPL2001 (Large), aimed at large-scale SMP systems, with larger data sets.
Note that SPEC OMP2001 does not stress other computer components such as I/O (disk drives), networking, operating system or graphics. It might be possible to configure a system in such a way that one or more of these components impact the performance of OMPM2001 and OMPL2001, but that is not the intent of the suites.
As mentioned above, SPEC OMP2001 provides a comparative measure of medium and large SMP performance. If this matches with the type of workloads you are interested in, SPEC OMP2001 provides a good reference point.
Other advantages to using SPEC OMP2001:
As described above under "Why use a benchmark?", the ideal benchmark for vendor or product selection would be your own workload on your own application. Please bear in mind that no standardized benchmark can provide a perfect model of the realities of your particular system and user community.
SPEC provides the following on the SPEC OMP2001 media:
- source code for the benchmarks,
- the SPEC tools used to build, run, and validate the benchmarks (such as runspec, specinvoke, and specdiff), and
- documentation.
Briefly, you need a Unix or NT system with 2 GB of memory for OMPM2001 or 8 GB for OMPL2001, 4 GB of disk space, and a set of compilers. Please see the details in the file system_requirements.html.
Installation and use are covered in detail in the SPEC OMP2001 User Documentation. The basic steps are as follows (a short Unix sketch is given after this list):
1. Verify that your system meets the requirements listed in system_requirements.html.
2. Install the suite from the SPEC OMP2001 media, following install_guide_unix.html or install_guide_nt.txt.
3. Write a config file, or adapt one of the supplied examples such as example-simple.cfg, as described in config.html.
4. Build and run the benchmarks with the runspec command, as described in runspec.html.
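For orientation only, a first Unix run might look roughly like the sketch below. It assumes the CD is mounted on /cdrom, that the suite is installed into a hypothetical directory /usr/omp2001, and that a config file named mytest.cfg (a made-up name) has been placed in $SPEC/config; the install guides and runspec.html remain the authoritative references for the exact procedure and options.
```
# Rough sketch of a first Unix run; paths and the config name are examples only.
cd /cdrom
./install.sh                  # install the suite, e.g. into /usr/omp2001

cd /usr/omp2001
. ./shrc                      # set up the SPEC environment ($SPEC, PATH, ...)

# Build and run one OMPM2001 benchmark in base tuning with your config file:
runspec --config=mytest.cfg --tune=base 330.art_m
```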
OMPM2001 (Medium) and OMPL2001 (Large) are based on compute-intensive applications provided as source code.
OMPM2001 contains 11 applications (8 written in Fortran and 3 in C) that are used as benchmarks:
Name | Remarks |
---|---|
310.wupwise_m | Quantum chromodynamics |
312.swim_m | Shallow water modeling |
314.mgrid_m | Multi-grid solver in 3D potential field |
316.applu_m | Parabolic/elliptic partial differential equations |
318.galgel_m | Fluid dynamics: analysis of oscillatory instability |
320.equake_m | Finite element simulation; earthquake modeling |
324.apsi_m | Solves problems regarding temperature, wind velocity, and distribution of pollutants |
326.gafort_m | Genetic algorithm |
328.fma3d_m | Finite element crash simulation |
330.art_m | Neural network simulation; adaptive resonance theory |
332.ammp_m | Computational Chemistry |
OMPL2001 contains 9 of the same 11 applications with larger data sets that are used as benchmarks: 311.wupwise_l, 313.swim_l, 315.mgrid_l, 317.applu_l, 321.equake_l, 325.apsi_l, 327.gafort_l, 329.fma3d_l, 331.art_l.
More detailed descriptions of the benchmarks (with references to papers, web sites, etc.) can be found in the individual benchmark directories in the SPEC benchmark tree.
The numbers used as part of the benchmark names are identifiers that help distinguish programs from one another. For example, some programs were updated from SPEC CPU2000 and need to be distinguished from their previous versions.
Many of the SPEC benchmarks have been derived from publicly available application programs, and all have been developed to be portable to as many current and future hardware platforms as practical. Hardware dependencies have been minimized to avoid unfairly favoring one hardware platform over another. For this reason, the application programs in this distribution should not be used to assess the probable performance of commercially available, tuned versions of the same applications. The individual benchmarks in this suite may be similar to, but are NOT identical to, benchmarks or programs with the same names that are available from sources other than SPEC; therefore, it is not valid to compare SPEC OMP2001 benchmark results with anything other than other SPEC OMP2001 benchmark results. (Note: this also means that it is not valid to compare SPEC OMP2001 results with results from older SPEC CPU benchmarks; the benchmarks have been changed and should be considered different and not comparable.)
The OMPM2001 and OMPL2001 suites can be used to measure and calculate the following metrics:
- SPECompMpeak2001 and SPECompMbase2001 for OMPM2001, and
- SPECompLpeak2001 and SPECompLbase2001 for OMPL2001.
The ratio for each of the benchmarks is calculated using a SPEC-determined reference time and the run time of the benchmark.
A higher score means "better performance" on the given workload.
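As a rough illustration of how such ratios combine (using made-up times, not actual SPEC reference times), each per-benchmark ratio is the reference time divided by the measured run time, and an overall figure can be formed as the geometric mean of those ratios. The SPEC tools perform the official calculation automatically, including any scaling SPEC applies, so the sketch below is only for intuition:
```
# Illustration only: per-benchmark ratios and their geometric mean,
# computed from hypothetical reference and measured run times (in seconds).
ref_times="1600 2100"         # made-up reference times
run_times="400 700"           # made-up measured run times
echo "$ref_times" "$run_times" | awk '{
    n = NF / 2; prod = 1
    for (i = 1; i <= n; i++) {
        ratio = $i / $(i + n)         # ratio = reference time / run time
        printf "benchmark %d ratio = %.2f\n", i, ratio
        prod *= ratio
    }
    printf "geometric mean = %.2f\n", prod ^ (1 / n)
}'
```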
In order to provide comparisons across different computer hardware, SPEC provides the benchmarks as source code. Thus, in order to run the benchmarks, they must be compiled. There is agreement that the benchmarks should be compiled the way users compile programs. But how do users compile programs?
Some people might experiment with many different compilers and compiler flags to achieve the best performance. Other people might just compile with the basic options suggested by the compiler vendor. SPEC recognizes that it cannot exactly match how everyone uses compilers, but two reference points are possible:
- the base metrics (for example, SPECompMbase2001), which are produced under conservative compilation rules that apply a consistent set of compiler options across the benchmarks, and
- the peak metrics (for example, SPECompMpeak2001), which permit more aggressive, benchmark-specific optimization.
Note that the base rules are more restrictive than the peak rules. For example, a build that is legal under the base rules is also legal under the peak rules, but a build that is legal under the peak rules is NOT necessarily legal under the base rules.
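For example, assuming a config file named mytest.cfg (a made-up name) in $SPEC/config, the same benchmark can be run under either tuning level; see runspec.html for the authoritative option descriptions:
```
# Sketch only: running one benchmark under each tuning level.
runspec --config=mytest.cfg --tune=base 312.swim_m
runspec --config=mytest.cfg --tune=peak 312.swim_m
```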
A full description of the distinctions and required guidelines can be found in the SPEC OMP2001 Run and Reporting Rules available with SPEC OMP2001.
It depends on your needs. SPEC provides the benchmarks and results as tools for you to use. You need to determine how you use a computer or what your performance requirements are and then choose the appropriate SPEC benchmark or metrics.
SPEC can be contacted in several ways. For general information, including other means of contacting SPEC, please see SPEC's World Wide Web Site at http://www.spec.org/
General questions can be emailed to: info@spec.org
OMP2001 Technical Support Questions can be sent to: OMP2001support@spec.org
You should verify that your system meets the requirements described in system_requirements.html, and then install the suite following the instructions in install_guide_unix.html or install_guide_nt.txt.