Last updated: $Date: 2012-10-12 09:29:24 -0400 (Fri, 12 Oct 2012) $ by $Author: BrianWhitney $
(To check for possible updates to this document, please see http://www.spec.org/omp2012/Docs/ )
ABSTRACT
This document describes the various fields in a SPEC OMP2012 result disclosure.
(To check for possible updates to this document, please see http://www.spec.org/omp2012/)
Selecting one of the following will take you to the detailed table of contents for that section:
1. Benchmarks
2. Major sections
3. Hardware description
4. Software description
5. Power and Temperature information
6. Other information
1. Benchmarks
1.1 Benchmarks by suite
1.1.1 Benchmarks in the OMPG2012 suite
1.2 Benchmarks by language
1.2.1 C Benchmarks
1.2.2 C++ Benchmarks
1.2.3 Fortran Benchmarks
2. Major sections
2.1 Top bar
2.1.1 OMPG2012 Result
2.1.2 SPECompG_base2012
2.1.3 SPECompG_peak2012
2.1.4 SPECompG_energy_base2012
2.1.5 SPECompG_energy_peak2012
2.1.6 OMP2012 license #
2.1.7 Hardware Availability
2.1.8 Software Availability
2.1.9 Test date
2.1.10 Test sponsor
2.1.11 Tested by
2.2 Result table
2.2.1 Benchmark
2.2.2 Threads
2.2.3 Seconds
2.2.4 Ratio
2.2.5 Energy kJoules
2.2.6 Maximum Power
2.2.7 Average Power
2.2.8 Energy Ratio
2.3 Notes/Tuning Information
2.3.1 Compiler Invocation Notes
2.3.2 Submit Notes
2.3.3 Portability Notes
2.3.4 Base Tuning Notes
2.3.5 Peak Tuning Notes
2.3.6 Operating System Notes
2.3.7 Platform Notes
2.3.8 Component Notes
2.3.9 General Notes
2.4 Compilation Flags Used
2.4.1 Compiler Invocation
2.4.2 Portability Flags
2.4.3 Optimization Flags
2.4.4 Other Flags
2.4.5 Unknown Flags
2.4.6 Forbidden Flags
2.5 Errors
3. Hardware description
3.1 CPU Name
3.2 CPU Characteristics
3.3 CPU MHz
3.4 Maximum CPU MHz
3.5 FPU
3.6 CPU(s) enabled
3.7 CPU(s) orderable
3.8 Primary Cache
3.9 Secondary Cache
3.10 L3 Cache
3.11 Other Cache
3.12 Memory
3.13 Disk Subsystem
3.14 Other Hardware
4. Software description
4.1 Operating System
4.2 Auto Parallel
4.3 Compiler
4.4 File System
4.5 System State
4.6 Base Pointers
4.7 Peak Pointers
4.8 Other Software
5. Power and Temperature information
5.1 Power Supply
5.2 Power Supply Details
5.3 Maximum Power (W)
5.4 Idle Power (W)
5.5 Minimum Temperature (C)
5.6 Current Ranges Used
5.7 Voltage Range Used
5.8 Power Analyzer
5.9 Hardware Vendor
5.10 Model
5.11 Serial Number
5.12 Input Connection
5.13 Metrology Institute
5.14 Calibration By
5.15 Calibration Label
5.16 Calibration Date
5.17 PTDaemon Version
5.18 Setup Description
5.19 Temperature Sensor
6. Other information
6.1 Median results
6.2 Run order
The OMPG2012 suite comprises 14 compute-intensive codes: 8 in Fortran, 5 in C, and 1 in C++.
(Also, "C Benchmarks (except as noted below)")
Five benchmarks in the OMPG2012 suite are written in C:
(Also, "C++ Benchmarks (except as noted below)")
One benchmark in the OMPG2012 suite is written in C++:
(Also, "Fortran Benchmarks (except as noted below)")
Eight benchmarks in the OMPG2012 suite are written in Fortran:
More detailed information about metrics is in sections 4.3.1 and 4.3.2 of the OMP2012 Run and Reporting Rules.
This result is from the SPEC OMPG2012 suite.
The geometric mean of fourteen normalized ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the OMP2012 Run and Reporting Rules.
The geometric mean of fourteen normalized ratios when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the OMP2012 Run and Reporting Rules.
The geometric mean of fourteen normalized energy ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the OMP2012 Run and Reporting Rules.
The geometric mean of fourteen normalized energy ratios when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the OMP2012 Run and Reporting Rules.
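For readers who want to see the arithmetic, the following C sketch (illustrative only, using made-up ratio values rather than data from any result) shows how a geometric mean of fourteen per-benchmark ratios is computed.

    #include <math.h>
    #include <stdio.h>

    /* Illustrative only: compute the geometric mean of fourteen
       per-benchmark ratios.  The ratio values below are placeholders,
       not measurements from any published result. */
    int main(void)
    {
        double ratio[14] = { 3.1, 2.8, 3.5, 2.9, 3.0, 3.3, 2.7,
                             3.2, 3.4, 2.6, 3.0, 3.1, 2.9, 3.3 };
        int n = 14;
        double sum_log = 0.0;
        for (int i = 0; i < n; i++)
            sum_log += log(ratio[i]);
        printf("geometric mean = %.2f\n", exp(sum_log / n));
        return 0;
    }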
The SPEC OMP2012 license number of the organization or individual that ran the result.
(Also, "Hardware Avail")
The date when all the hardware necessary to run the result is generally available. For example, if the CPU is available in Aug-2012, but the memory is not available until Oct-2012, then the hardware availability date is Oct-2012 (unless some other component pushes it out farther).
(Also, "Software Avail")
The date when all the software necessary to run the result is generally available. For example, if the operating system is available in Aug-2012, but the compiler or other libraries are not available until Oct-2012, then the software availability date is Oct-2012 (unless some other component pushes it out farther).
The date when the test is run. This value is obtained from the system under test, unless the tester explicitly changes it.
The name of the organization or individual that sponsored the test. Generally, this is the name of the license holder.
The name of the organization or individual that ran the test. If there are installations in multiple geographic locations, sometimes that will also be listed in this field.
In addition to the graph, the results of the individual benchmark runs are also presented in table form.
The name of the benchmark.
The number of OpenMP threads (OMP_NUM_THREADS) that were used for this run.
This is the amount of time in seconds that the benchmark took to run.
This is the ratio of benchmark run time to the run time on the reference platform.
This is the amount of energy consumed (in kilojoules) during the execution of the benchmark. This will only be present if the optional power metric is run.
This is the maximum power (in Watts) measured during the execution of the benchmark. This will only be present if the optional power metric is run.
This is the average power (in Watts) consumed during the execution of the benchmark. This will only be present if the optional power metric is run.
This is the ratio of the benchmark's energy consumption to the energy consumption on the reference platform. This will only be present if the optional power metric is run.
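By definition, the energy column is related to the run time and average power columns by simple arithmetic: energy in kilojoules equals average power in Watts multiplied by run time in seconds, divided by 1000. The C sketch below illustrates this relationship with hypothetical numbers; in an actual report these values are measured by the PTDaemon, not derived this way.

    #include <stdio.h>

    /* Illustrative only: the relationship between average power (W),
       run time (s), and energy (kJ).  The numbers are placeholders. */
    int main(void)
    {
        double seconds   = 912.0;   /* hypothetical run time in seconds */
        double avg_power = 350.0;   /* hypothetical average power in Watts */
        double energy_kj = avg_power * seconds / 1000.0;
        printf("Energy: %.1f kJ\n", energy_kj);
        return 0;
    }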
(Also, "Notes/Tuning Information (Continued)")
This section is where the tester provides notes about compiler flags used, system settings, and other items that do not have dedicated fields elsewhere in the result.
Run rules relating to these items can be found in section 4.2 of the OMP2012 Run and Reporting Rules.
(Also, "Compiler Invocation Notes (Continued)")
This section is where the tester provides notes about how the various compilers were invoked, whether any special paths had to be used, etc.
(Also, "Submit Notes (Continued)")
This section is where the tester provides notes about how the config file submit option was used to assign processes to processors.
(Also, "Portability Notes (Continued)")
This section is where the tester provides notes about portability options and flags used to build the various benchmarks.
(Also, "Base Tuning Notes (Continued)")
This section is where the tester provides notes about baseline optimization options and flags used to build the various benchmarks.
(Also, "Peak Tuning Notes (Continued)")
This section is where the tester provides notes about peak optimization options and flags used to build the various benchmarks.
(Also, "Operating System Notes (Continued)")
This section is where the tester provides notes about changes to the default operating system state and other OS-specific tuning information.
(Also, "Platform Notes (Continued)")
This section is where the tester provides notes about changes to the default hardware state and other non-OS-specific tuning information.
(Also, "Component Notes (Continued)")
This section is where the tester provides information about various components needed to build a particular system. This section is only used if the system under test is built from parts and not sold as a whole system.
(Also, "General Notes (Continued)")
This section is where the tester provides notes about things not covered in the other notes sections.
(Also, "Compilation Flags Used (Continued)")
This section is generated automatically by the benchmark tools. It details compilation flags used and provides links (in the HTML and PDF result formats) to descriptions of those flags.
(Also, "Base Compiler Invocation" and "Peak Compiler Invocation")
This section lists the ways that the various compilers are invoked.
(Also, "Base Portability Flags" and "Peak Portability Flags")
This section lists compilation flags that are used for portability.
(Also, "Base Optimization Flags" and "Peak Optimization Flags")
This section lists compilation flags that are used for optimization.
(Also, "Base Other Flags" and "Peak Other Flags")
This section lists compilation flags that are classified as neither portability nor optimization.
(Also, "Base Unknown Flags" and "Peak Unknown Flags")
This section of the reports lists compilation flags used that are not described in any flags description file. Results with unknown flags are marked "invalid" and may not be published. This marking may be removed by reformatting the result using a flags file that describes all of the unknown flags.
(Also, "Base Forbidden Flags" and "Peak Forbidden Flags")
This section of the reports lists compilation flags used that are designated as "forbidden". Results using forbidden flags are marked "invalid" and may not be published.
This section is automatically inserted by the benchmark tools when there are errors present that prevent the result from being a valid reportable result.
Run rules relating to these items can be found in section 4.2.2 of the OMP2012 Run and Reporting Rules.
The formal name of the processor, as determined by the manufacturer.
Technical characteristics to help identify the processor.
The clock frequency of the CPU, expressed in megahertz.
The maximum clock frequency of the CPU, expressed in megahertz. This is referred to by some vendors as the Turbo frequency.
The type of floating-point unit used in the system.
The number of CPUs that were enabled and active during the benchmark run. More information about CPU counting is in the run rules.
The number of CPUs that can be ordered in a system of the type being tested.
Description (size and organization) of the CPU's primary cache. This cache is also referred to as "L1 cache".
Description (size and organization) of the CPU's secondary cache. This cache is also referred to as "L2 cache".
Description (size and organization) of the CPU's tertiary, or "Level 3" cache.
Description (size and organization) of any other levels of cache memory.
Description of the system main memory configuration. End-user options that affect performance, such as arrangement of memory modules, interleaving, latency, etc., are documented here.
A description of the disk subsystem (size, type, and RAID level if any) of the storage used to hold the benchmark tree during the run.
Any additional equipment added to improve performance.
Run rules relating to these items can be found in section 4.2.3 of the OMP2012 Run and Reporting Rules.
The operating system name and version. If there are patches applied that affect performance, they must be disclosed in the notes.
Were multiple threads/cores/chips employed by a parallelizing compiler? Note that only OpenMP parallelism is allowed; see section 1.1.4 of the OMP2012 Run and Reporting Rules.
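For readers unfamiliar with OpenMP, the fragment below is a minimal, illustrative C example (not taken from any benchmark source) of a loop parallelized with an OpenMP directive. The number of threads available to such a region is controlled by OMP_NUM_THREADS, as reported in the Threads column of the result table.

    #include <omp.h>
    #include <stdio.h>

    /* Illustrative only: a simple loop parallelized with OpenMP.
       The thread count is taken from OMP_NUM_THREADS (the Threads field). */
    int main(void)
    {
        enum { N = 1000000 };
        static double a[N];

        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * i;

        printf("max OpenMP threads: %d\n", omp_get_max_threads());
        printf("a[%d] = %.1f\n", N - 1, a[N - 1]);
        return 0;
    }

Compiled with OpenMP support enabled (for example, a flag such as -fopenmp on GCC-style compilers) and run with OMP_NUM_THREADS=8, the loop above would execute with up to eight threads.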
The names and versions of all compilers, preprocessors, and performance libraries used to generate the result.
The type of the filesystem used to contain the run directories.
The state (sometimes called "run level") of the system while the benchmarks were being run. Generally, this is "single user", "multi-user", "default", etc.
Indicates whether all the benchmarks in base used 32-bit pointers, 64-bit pointers, or a mixture. For example, if the C and C++ benchmarks used 32-bit pointers, and the Fortran benchmarks used 64-bit pointers, then "32/64-bit" would be reported here.
Indicates whether all the benchmarks in peak used 32-bit pointers, 64-bit pointers, or a mixture.
Any performance-relevant non-compiler software used, including third-party libraries, accelerators, etc.
Run rules relating to these items can be found in section 4.2.6 of the OMP2012 Run and Reporting Rules.
This field is for the number and rating of the power supplies used in this system for this run.
This field is for more details about the power supply, like a part number or some other identifier.
This is the maximum power (in Watts) that was measured during the entire benchmark suite run.
This is a 60-second measurement of idle power (in Watts) on the machine, made after the benchmark has been run and the system has been given 10 seconds to rest.
This is the minimum temperature (in degrees C) registered during the entire benchmark suite run.
These are the current ranges that were used by the power analyzer, as reported by PTDaemon.
This is the voltage range that was used by the power analyzer as reported by PTDaemon.
This is the Power Analyzer name used to connect PTDaemon to the power analyzer. If more than one power analyzer was used, there will be multiple descriptions presented.
This is the name of the company that provides the power analyzer or temperature meter.
This is the model of the power analyzer or temperature meter.
This is the serial number of the power analyzer being used.
This is a description of the interface used to connect the power analyzer or temperature meter to the PTDaemon host system, e.g., RS-232 (serial port), USB, GPIB, etc.
Name of the national metrology institute, which specifies the calibration standards for power analyzers, appropriate for the Test Location reported in the FDR. Calibration should be done according to the standard of the country where the test was performed or where the power analyzer was manufactured.
Examples from accepted result reports list a country and its metrology institute.
This is the name of the organization that performed the power analyzer calibration.
This is a number or character string that uniquely identifies this meter calibration event. It may appear on the calibration certificate or on a sticker applied to the power analyzer. The format of this identifier is specified by the metrology institute.
The date (DD-MMM-YYYY) the calibration certificate was issued, from the calibration label or the calibration certificate.
This is the version of the PTDaemon. It is automatically supplied by the tools.
This is a brief description of how the power analyzer or temperature sensor was used. This could include which power supply was connected to this power analyzer, or how far away this temperature sensor was from the air intake of the system.
This is the name used to connect the PTDaemon to the temperature sensor. If more than one temperature sensor was used, there will be multiple descriptions presented.
For a reportable OMP2012 run, three iterations of each benchmark are run, and the median of the three runs is selected to be part of the overall metric. In output formats that support it, the medians in the result table are underlined in bold.
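As a small illustration (using hypothetical timings, not data from any result), the following C sketch shows how the median of three iterations of one benchmark would be selected.

    #include <stdio.h>

    /* Illustrative only: of the three timed iterations of a benchmark,
       the median value contributes to the overall metric. */
    static double median3(double a, double b, double c)
    {
        if ((a <= b && b <= c) || (c <= b && b <= a)) return b;
        if ((b <= a && a <= c) || (c <= a && a <= b)) return a;
        return c;
    }

    int main(void)
    {
        /* hypothetical run times, in seconds, for one benchmark */
        double t1 = 915.2, t2 = 909.8, t3 = 912.4;
        printf("median run time: %.1f s\n", median3(t1, t2, t3));
        return 0;
    }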
Each iteration now consists of running each benchmark in order. For example, given benchmarks "910.aaa", "920.bbb", and "930.ccc", here's what you might see as the benchmarks were run:
OMPG2012
Running (#1) 910.aaa ref base oct09a default
Running (#1) 920.bbb ref base oct09a default
Running (#1) 930.ccc ref base oct09a default
Running (#2) 910.aaa ref base oct09a default
Running (#2) 920.bbb ref base oct09a default
Running (#2) 930.ccc ref base oct09a default
Running (#3) 910.aaa ref base oct09a default
Running (#3) 920.bbb ref base oct09a default
Running (#3) 930.ccc ref base oct09a default
The results in the results table are listed in the order in which they were run, in column-major order. In other words, if you are interested in the base scores as they were produced, start at the top of the left-hand column and read down that column, then read the middle column, then the right column.
If the benchmarks were run with both base and peak tuning, all base runs were completed before starting peak.
Copyright 1999-2012 Standard Performance Evaluation Corporation All Rights Reserved