Last updated: $Date: 2011-09-07 11:08:21 -0400 (Wed, 07 Sep 2011) $ by $Author: CloyceS $
(To check for possible updates to this document, please see http://www.spec.org/cpu2006/Docs/ )
ABSTRACT
This document describes the various fields in a SPEC CPU2006 result disclosure.
Selecting one of the following will take you to the detailed table of contents for that section:
1. Benchmarks
1.1 Benchmarks by suite
1.1.1 Benchmarks in the CFP2006 suite
1.1.2 Benchmarks in the CINT2006 suite
1.2 Benchmarks by language
1.2.1 C Benchmarks
1.2.2 C++ Benchmarks
1.2.3 Fortran Benchmarks
1.2.4 Benchmarks using both Fortran and C
2. Major sections
2.1 Top bar
2.1.1 CFP2006 Result
2.1.2 CINT2006 Result
2.1.3 SPECfp2006
2.1.4 SPECfp_base2006
2.1.5 SPECfp_rate2006
2.1.6 SPECfp_rate_base2006
2.1.7 SPECint2006
2.1.8 SPECint_base2006
2.1.9 SPECint_rate2006
2.1.10 SPECint_rate_base2006
2.1.11 CPU2006 license #
2.1.12 Hardware Availability
2.1.13 Software Availability
2.1.14 Test date
2.1.15 Test sponsor
2.1.16 Tested by
2.2 Result table
2.2.1 Benchmark
2.2.2 Copies
2.2.3 Seconds
2.2.4 Ratio
2.3 Notes/Tuning Information
2.3.1 Compiler Invocation Notes
2.3.2 Submit Notes
2.3.3 Portability Notes
2.3.4 Base Tuning Notes
2.3.5 Peak Tuning Notes
2.3.6 Operating System Notes
2.3.7 Platform Notes
2.3.8 Component Notes
2.3.9 General Notes
2.4 Compilation Flags Used
2.4.1 Compiler Invocation
2.4.2 Portability Flags
2.4.3 Optimization Flags
2.4.4 Other Flags
2.4.5 Unknown Flags
2.4.6 Forbidden Flags
2.5 Errors
3. Hardware description
3.1 CPU Name
3.2 CPU Characteristics
3.3 CPU MHz
3.4 FPU
3.5 CPU(s) enabled
3.6 CPU(s) orderable
3.7 Primary Cache
3.8 Secondary Cache
3.9 L3 Cache
3.10 Other Cache
3.11 Memory
3.12 Disk Subsystem
3.13 Other Hardware
4. Software description
4.1 Operating System
4.2 Auto Parallel
4.3 Compiler
4.4 File System
4.5 System State
4.6 Base Pointers
4.7 Peak Pointers
4.8 Other Software
5. Other information
5.1 Median results
5.2 Run order
The CFP2006 suite comprises 17 floating-point compute-intensive codes: 6 in Fortran, 3 in C, 4 in C++, and 4 that use both Fortran and C.
The CINT2006 suite comprises 12 integer compute-intensive codes: 9 in C and 3 in C++.
(Also, "C Benchmarks (except as noted below)")
Nine benchmarks in the CINT2006 suite are written in C:
Three benchmarks in the CFP2006 suite are written in C:
(Also, "C++ Benchmarks (except as noted below)")
Three benchmarks in the CINT2006 suite are written in C++:
Four benchmarks in the CFP2006 suite are written in C++:
(Also, "Fortran Benchmarks (except as noted below)")
There are no benchmarks in the CINT2006 suite written in Fortran.
Six benchmarks in the CFP2006 suite are written in Fortran:
(Also, "Benchmarks using both Fortran and C (except as noted below)")
There are no benchmarks in the CINT2006 suite written in a mixture of Fortran and C.
Four benchmarks in the CFP2006 suite are written using both Fortran and C:
More detailed information about metrics is in sections 4.3.1 and 4.3.2 of the CPU2006 Run and Reporting Rules.
This result is from the CFP2006 suite.
This result is from the CINT2006 suite.
The geometric mean of seventeen normalized ratios (one for each floating-point benchmark) when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the CPU2006 Run and Reporting Rules.
The geometric mean of seventeen normalized ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the CPU2006 Run and Reporting Rules.
The geometric mean of seventeen normalized throughput ratios when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the CPU2006 Run and Reporting Rules.
The geometric mean of seventeen normalized throughput ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the CPU2006 Run and Reporting Rules.
The geometric mean of twelve normalized ratios (one for each integer benchmark) when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the CPU2006 Run and Reporting Rules.
The geometric mean of twelve normalized ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the CPU2006 Run and Reporting Rules.
The geometric mean of twelve normalized throughput ratios when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the CPU2006 Run and Reporting Rules.
The geometric mean of twelve normalized throughput ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the CPU2006 Run and Reporting Rules.
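As an illustration of how these metrics are computed, the following sketch combines a set of per-benchmark SPEC ratios with a geometric mean. The benchmark names are the placeholders used later in this document and the ratio values are invented; only the geometric-mean formula itself reflects the Run and Reporting Rules.

    import math

    def geometric_mean(ratios):
        # The geometric mean is the n-th root of the product of the n ratios,
        # computed here via logarithms for numerical stability.
        ratios = list(ratios)
        return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

    # Hypothetical per-benchmark ratios (reference time / measured time).
    ratios = {"910.aaa": 21.3, "920.bbb": 17.8, "930.ccc": 25.1}

    # The overall metric is the geometric mean of all per-benchmark ratios.
    print(round(geometric_mean(ratios.values()), 2))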
The SPEC CPU license number of the organization or individual that ran the result.
(Also, "Hardware Avail")
The date when all the hardware necessary to run the result is generally available. For example, if the CPU is available in Aug-2005, but the memory is not available until Oct-2005, then the hardware availability date is Oct-2005 (unless some other component pushes it out farther).
(Also, "Software Avail")
The date when all the software necessary to run the result is generally available. For example, if the operating system is available in Aug-2005, but the compiler or other libraries are not available until Oct-2005, then the software availability date is Oct-2005 (unless some other component pushes it out farther).
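In other words, both the hardware and the software availability dates are simply the latest of the individual component availability dates. A minimal sketch of that rule, using made-up component dates:

    from datetime import date

    # Hypothetical general-availability dates for each component.
    component_avail = {
        "CPU":    date(2005, 8, 1),
        "Memory": date(2005, 10, 1),
        "Disk":   date(2005, 9, 1),
    }

    # The reported availability date is the latest of the component dates.
    hw_avail = max(component_avail.values())
    print(hw_avail.strftime("%b-%Y"))   # Oct-2005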
The date when the test is run. As of CPU2006 V1.1, this value is obtained from the system under test, unless the tester explicitly changes it.
The name of the organization or individual that sponsored the test. Generally, this is the name of the license holder.
The name of the organization or individual that ran the test. If there are installations in multiple geographic locations, sometimes that will also be listed in this field.
In addition to the graph, the results of the individual benchmark runs are also presented in table form.
The name of the benchmark.
For throughput (SPECrate) runs, this column indicates the number of benchmark copies that were run simultaneously.
For SPECspeed runs, this is the amount of time in seconds that the benchmark took to run. For throughput (SPECrate) runs, it is the amount of time between the start of the first copy and the end of the last copy.
This is the ratio of the benchmark's run time on the reference platform to its run time on the system under test; a higher ratio indicates better performance.
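The sketch below illustrates how the Seconds and Ratio columns relate for a single benchmark. All timing values are invented for the example; the reference time is hypothetical, not an actual SPEC reference time.

    # Hypothetical timings for one benchmark (all values invented).
    reference_seconds = 9650.0   # run time on the SPEC reference platform
    measured_seconds = 412.0     # run time on the system under test (SPECspeed)

    # Ratio column: reference run time divided by measured run time; higher is better.
    ratio = reference_seconds / measured_seconds
    print("Seconds: %.0f  Ratio: %.1f" % (measured_seconds, ratio))

    # For a throughput (SPECrate) run, Seconds spans from the start of the
    # first copy to the end of the last copy.
    copy_start_times = [0.0, 0.3, 0.5, 0.9]           # one entry per copy
    copy_end_times = [455.0, 457.2, 460.1, 462.8]
    rate_seconds = max(copy_end_times) - min(copy_start_times)
    print("SPECrate Seconds: %.1f" % rate_seconds)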
(Also, "Notes/Tuning Information (Continued)")
This section is where the tester provides notes about compiler flags used, system settings, and other items that do not have dedicated fields elsewhere in the result.
Run rules relating to these items can be found in section 4.2.4 of the CPU2006 Run and Reporting Rules.
(Also, "Compiler Invocation Notes (Continued)")
This section is where the tester provides notes about how the various compilers were invoked, whether any special paths had to be used, etc.
(Also, "Submit Notes (Continued)")
This section is where the tester provides notes about how the config file submit option was used to assign processes to processors.
(Also, "Portability Notes (Continued)")
This section is where the tester provides notes about portability options and flags used to build the various benchmarks.
(Also, "Base Tuning Notes (Continued)")
This section is where the tester provides notes about baseline optimization options and flags used to build the various benchmarks.
(Also, "Peak Tuning Notes (Continued)")
This section is where the tester provides notes about peak optimization options and flags used to build the various benchmarks.
(Also, "Operating System Notes (Continued)")
This section is where the tester provides notes about changes to the default operating system state and other OS-specific tuning information.
(Also, "Platform Notes (Continued)")
This section is where the tester provides notes about changes to the default hardware state and other non-OS-specific tuning information.
(Also, "Component Notes (Continued)")
This section is where the tester provides information about various components needed to build a particular system. This section is only used if the system under test is built from parts and not sold as a whole system.
(Also, "General Notes (Continued)")
This section is where the tester provides notes about things not covered in the other notes sections.
(Also, "Compilation Flags Used (Continued)")
This section is generated automatically by the benchmark tools. It details compilation flags used and provides links (in the HTML and PDF result formats) to descriptions of those flags.
(Also, "Base Compiler Invocation" and "Peak Compiler Invocation")
This section lists the ways that the various compilers are invoked.
(Also, "Base Portability Flags" and "Peak Portability Flags")
This section lists compilation flags that are used for portability.
(Also, "Base Optimization Flags" and "Peak Optimization Flags")
This section lists compilation flags that are used for optimization.
(Also, "Base Other Flags" and "Peak Other Flags")
This section lists compilation flags that are classified as neither portability nor optimization.
(Also, "Base Unknown Flags" and "Peak Unknown Flags")
This section of the reports lists compilation flags used that are not described in any flags description file. Results with unknown flags are marked "invalid" and may not be published. This marking may be removed by reformatting the result using a flags file that describes all of the unknown flags.
(Also, "Base Forbidden Flags" and "Peak Forbidden Flags")
This section of the reports lists compilation flags used that are designated as "forbidden". Results using forbidden flags are marked "invalid" and may not be published.
This section is automatically inserted by the benchmark tools when there are errors present that prevent the result from being a valid reportable result.
Run rules relating to these items can be found in section 4.2.2 of the CPU2006 Run and Reporting Rules.
The formal name of the processor, as determined by the manufacturer.
Technical characteristics to help identify the processor.
The clock frequency of the CPU, expressed in megahertz.
The type of floating-point unit used in the system.
The number of CPUs that were enabled and active during the benchmark run. More information about CPU counting is in the run rules.
The number of CPUs that can be ordered in a system of the type being tested.
Description (size and organization) of the CPU's primary cache. This cache is also referred to as "L1 cache".
Description (size and organization) of the CPU's secondary cache. This cache is also referred to as "L2 cache".
Description (size and organization) of the CPU's tertiary, or "Level 3" cache.
Description (size and organization) of any other levels of cache memory.
Description of the system main memory configuration. End-user options that affect performance, such as arrangement of memory modules, interleaving, latency, etc., are documented here.
A description of the disk subsystem (size, type, and RAID level if any) of the storage used to hold the benchmark tree during the run.
Any additional equipment added to improve performance.
Run rules relating to these items can be found in section 4.2.3 of the CPU2006 Run and Reporting Rules.
The operating system name and version. If there are patches applied that affect performance, they must be disclosed in the notes.
Were multiple threads/cores/chips employed by a parallelizing compiler? Note that a SPECspeed run that uses a parallelizing compiler causes a single instance of a benchmark to run using multiple CPUs; this is different from a SPECrate run, which typically distributes N instances over N CPUs.
The names and versions of all compilers, preprocessors, and performance libraries used to generate the result.
The type of the filesystem used to contain the run directories.
The state (sometimes called "run level") of the system while the benchmarks were being run. Generally, this is "single user", "multi-user", "default", etc.
Indicates whether all the benchmarks in base used 32-bit pointers, 64-bit pointers, or a mixture. For example, if the C and C++ benchmarks used 32-bit pointers, and the Fortran benchmarks used 64-bit pointers, then "32/64-bit" would be reported here.
Indicates whether all the benchmarks in peak used 32-bit pointers, 64-bit pointers, or a mixture.
Any performance-relevant non-compiler software used, including third-party libraries, accelerators, etc.
For a reportable CPU2006 run, three iterations of each benchmark are run, and the median of the three runs is selected to be part of the overall metric. In output formats that support it, the medians in the result table are underlined in bold.
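A minimal sketch of the median selection, using made-up ratios for the three iterations of one benchmark:

    # Hypothetical ratios from the three iterations of one benchmark.
    iteration_ratios = [24.1, 23.7, 24.4]

    # With exactly three runs, the median is the middle value after sorting;
    # that run is the one that contributes to the overall metric.
    median_ratio = sorted(iteration_ratios)[1]
    print(median_ratio)   # 24.1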
In CPU95 and CPU2000, all iterations for a given benchmark were run consecutively. CPU2006 has changed this; each iteration now consists of running each benchmark in order. For example, given benchmarks "910.aaa", "920.bbb", and "930.ccc", here is the order in which they would be run under each scheme:
CPU95 and CPU2000
Running 910.aaa ref base oct09a default
Running 910.aaa ref base oct09a default
Running 910.aaa ref base oct09a default
Running 920.bbb ref base oct09a default
Running 920.bbb ref base oct09a default
Running 920.bbb ref base oct09a default
Running 930.ccc ref base oct09a default
Running 930.ccc ref base oct09a default
Running 930.ccc ref base oct09a default
CPU2006
Running (#1) 910.aaa ref base oct09a default
Running (#1) 920.bbb ref base oct09a default
Running (#1) 930.ccc ref base oct09a default
Running (#2) 910.aaa ref base oct09a default
Running (#2) 920.bbb ref base oct09a default
Running (#2) 930.ccc ref base oct09a default
Running (#3) 910.aaa ref base oct09a default
Running (#3) 920.bbb ref base oct09a default
Running (#3) 930.ccc ref base oct09a default
When you read the results table from a run, the results are listed in the order in which they were run, in column-major order. In other words, if you are interested in the base scores as they were produced, start in the upper left-hand corner and read down the first column, then read the middle column, then the right column.
If the benchmarks were run with both base and peak tuning, all base runs were completed before starting peak.
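The difference in run order amounts to swapping the nesting of two loops. A sketch using the placeholder benchmark names from the example above:

    benchmarks = ["910.aaa", "920.bbb", "930.ccc"]
    iterations = 3

    # CPU95/CPU2000 ordering: all iterations of one benchmark before the next benchmark.
    old_order = [(b, i) for b in benchmarks for i in range(1, iterations + 1)]

    # CPU2006 ordering: one pass over every benchmark per iteration.
    new_order = [(b, i) for i in range(1, iterations + 1) for b in benchmarks]

    for bench, n in new_order:
        print("Running (#%d) %s ref base" % (n, bench))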
Copyright © 2006 Standard Performance Evaluation Corporation
All Rights Reserved