Last updated: $Date: 2017-02-07 11:36:33 -0500 (Tue, 07 Feb 2017) $ by $Author: BrianWhitney $
(To check for possible updates to this document, please see http://www.spec.org/accel/Docs/ )
ABSTRACT
This document describes the various fields in a SPEC ACCEL result disclosure.
Please note that any particular result will contain a subset of this information
depending upon which benchmarks were selected to run.
Selecting one of the following will take you to the detailed table of contents for that section:
1. Benchmarks
2. Major sections
3. Hardware description
4. Accelerator description
5. Software description
6. Power and Temperature information
7. Other information
1. Benchmarks
1.1 Benchmarks by suite
1.1.1 Benchmarks in the SPEC ACCEL OpenCL suite
1.1.2 Benchmarks in the SPEC ACCEL OpenACC suite
1.1.3 Benchmarks in the SPEC ACCEL OpenMP suite
1.2 Benchmarks by language
1.2.1 C Benchmarks
1.2.2 C++ Benchmarks
1.2.3 Fortran Benchmarks
1.2.4 Benchmarks using both Fortran and C
2. Major sections
2.1 Top bar
2.1.1 ACCEL_OCL Result
2.1.2 SPECaccel_ocl_base
2.1.3 SPECaccel_ocl_peak
2.1.4 SPECaccel_ocl_energy_base
2.1.5 SPECaccel_ocl_energy_peak
2.1.6 ACCEL_ACC Result
2.1.7 SPECaccel_acc_base
2.1.8 SPECaccel_acc_peak
2.1.9 SPECaccel_acc_energy_base
2.1.10 SPECaccel_acc_energy_peak
2.1.11 ACCEL_OMP Result
2.1.12 SPECaccel_omp_base
2.1.13 SPECaccel_omp_peak
2.1.14 SPECaccel_omp_energy_base
2.1.15 SPECaccel_omp_energy_peak
2.1.16 SPEC ACCEL license #
2.1.17 Hardware Availability
2.1.18 Software Availability
2.1.19 Test date
2.1.20 Test sponsor
2.1.21 Tested by
2.2 Result table
2.2.1 Benchmark
2.2.2 Seconds
2.2.3 Ratio
2.2.4 Energy kJoules
2.2.5 Maximum Power
2.2.6 Average Power
2.2.7 Energy Ratio
2.3 Notes/Tuning Information
2.3.1 Compiler Invocation Notes
2.3.2 Submit Notes
2.3.3 Portability Notes
2.3.4 Base Tuning Notes
2.3.5 Peak Tuning Notes
2.3.6 Operating System Notes
2.3.7 Platform Notes
2.3.8 Component Notes
2.3.9 General Notes
2.4 Compilation Flags Used
2.4.1 Compiler Invocation
2.4.2 Portability Flags
2.4.3 Optimization Flags
2.4.4 Other Flags
2.4.5 Unknown Flags
2.4.6 Forbidden Flags
2.5 Errors
3. Hardware description
3.1 CPU Name
3.2 CPU Characteristics
3.3 CPU MHz
3.4 Maximum CPU MHz
3.5 FPU
3.6 CPU(s) enabled
3.7 CPU(s) orderable
3.8 Primary Cache
3.9 Secondary Cache
3.10 L3 Cache
3.11 Other Cache
3.12 Memory
3.13 Disk Subsystem
3.14 Other Hardware
4. Accelerator description
4.1 Accelerator Model Name
4.2 Accelerator Vendor
4.3 Accelerator Name
4.4 Type of Accelerator
4.5 Accelerator Connection
4.6 Does Accelerator Use ECC
4.7 Accelerator Description
4.8 Accelerator Driver
5. Software description
5.1 Operating System
5.2 Compiler
5.3 File System
5.4 System State
5.5 Other Software
6. Power and Temperature information
6.1 Power Supply
6.2 Power Supply Details
6.3 Maximum Power (W)
6.4 Idle Power (W)
6.5 Minimum Temperature (C)
6.6 Power Analyzer
6.7 Hardware Vendor
6.8 Model
6.9 Serial Number
6.10 Input Connection
6.11 Metrology Institute
6.12 Calibration By
6.13 Calibration Label
6.14 Calibration Date
6.15 PTDaemon Version
6.16 Setup Description
6.17 Temperature Meter
7. Other information
7.1 Median results
7.2 Run order
The SPEC ACCEL OpenCL suite comprises nineteen benchmarks: ten in C and nine in C++.
The SPEC ACCEL OpenACC suite comprises fifteen compute-intensive codes: six in Fortran, seven in C, and two that use both Fortran and C.
The SPEC ACCEL OpenMP suite comprises fifteen compute-intensive codes: six in Fortran, seven in C, and two that use both Fortran and C.
Seventeen benchmarks in the SPEC ACCEL suite are written in C:
Nine benchmarks in the SPEC ACCEL suite are written in C++:
Six benchmarks in the SPEC ACCEL suite are written in Fortran:
Two benchmarks in the SPEC ACCEL suite are written in both Fortran and C:
More detailed information about metrics is in sections 4.3.1 and 4.3.2 of the SPEC ACCEL Run and Reporting Rules.
This result is from the SPEC ACCEL OpenCL suite.
The geometric mean of nineteen normalized ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the SPEC ACCEL Run and Reporting Rules.
The geometric mean of nineteen normalized ratios when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the SPEC ACCEL Run and Reporting Rules.
The geometric mean of nineteen normalized ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the SPEC ACCEL Run and Reporting Rules.
The geometric mean of nineteen normalized ratios when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the SPEC ACCEL Run and Reporting Rules.
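The way these overall metrics are built from per-benchmark ratios can be sketched as follows. This is an illustrative example only; the ratios shown are made up and are not official SPEC reference data.

```python
import math

def spec_metric(ratios):
    """Geometric mean of the per-benchmark normalized ratios.

    Each ratio is (reference platform run time) / (measured run time),
    so larger is better; the suite metric is their geometric mean.
    """
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical ratios for three benchmarks; the geometric mean of
# 2, 4, and 8 is 4, regardless of the order of the inputs.
print(spec_metric([2.0, 4.0, 8.0]))
```

Because the geometric mean multiplies the ratios rather than adding them, no single benchmark dominates the metric the way it could with an arithmetic mean.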
This result is from the SPEC ACCEL OpenACC suite.
The geometric mean of fifteen normalized ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the SPEC ACCEL Run and Reporting Rules.
The geometric mean of fifteen normalized ratios when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the SPEC ACCEL Run and Reporting Rules.
The geometric mean of fifteen normalized ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the SPEC ACCEL Run and Reporting Rules.
The geometric mean of fifteen normalized ratios when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the SPEC ACCEL Run and Reporting Rules.
This result is from the SPEC ACCEL OpenMP suite.
The geometric mean of fifteen normalized ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the SPEC ACCEL Run and Reporting Rules.
The geometric mean of fifteen normalized ratios when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.1 of the SPEC ACCEL Run and Reporting Rules.
The geometric mean of fifteen normalized ratios when compiled with conservative optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the SPEC ACCEL Run and Reporting Rules.
The geometric mean of fifteen normalized ratios when compiled with aggressive optimization for each benchmark.
More detailed information about this metric is in section 4.3.2 of the SPEC ACCEL Run and Reporting Rules.
The SPEC ACCEL license number of the organization or individual that ran the result.
(Also, "Hardware Avail")
The date when all the hardware necessary to run the result is generally available. For example, if the CPU is available in Aug-2013, but the memory is not available until Feb-2014, then the hardware availability date is Feb-2014 (unless some other component pushes it out farther).
(Also, "Software Avail")
The date when all the software necessary to run the result is generally available. For example, if the operating system is available in Aug-2014, but the compiler or other libraries are not available until Oct-2014, then the software availability date is Oct-2014 (unless some other component pushes it out farther).
The date when the test is run. This value is obtained from the system under test, unless the tester explicitly changes it.
The name of the organization or individual that sponsored the test. Generally, this is the name of the license holder.
The name of the organization or individual that ran the test. If there are installations in multiple geographic locations, sometimes that will also be listed in this field.
In addition to the graph, the results of the individual benchmark runs are also presented in table form.
The name of the benchmark.
This is the amount of time in seconds that the benchmark took to run.
This is the ratio of the benchmark run time on the reference platform divided by the run time on the system under test.
This is the amount of energy consumed (in kilojoules) during the execution of the benchmark. This will only be present if the optional power metric is run.
This is the maximum power consumed (in watts) during the execution of the benchmark. This will only be present if the optional power metric is run.
This is the average power consumed (in watts) during the execution of the benchmark. This will only be present if the optional power metric is run.
This is the ratio of the energy consumed by the benchmark on the reference platform divided by the energy consumed on the system under test. This will only be present if the optional power metric is run.
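The relationship between the Seconds, Ratio, and Energy kJoules columns can be sketched with a small worked example. All numbers here are hypothetical; the reference run times and reference energies used in real results are fixed by SPEC.

```python
# Hypothetical measurements for one benchmark:
ref_seconds = 500.0   # run time on the reference platform (fixed by SPEC)
sut_seconds = 125.0   # measured run time on the system under test
avg_power_w = 200.0   # measured average power draw in watts

# The "Ratio" column: reference time divided by measured time.
ratio = ref_seconds / sut_seconds

# The "Energy kJoules" column: energy = average power x time,
# converted from joules (watt-seconds) to kilojoules.
energy_kj = sut_seconds * avg_power_w / 1000.0

print(ratio)      # 4.0
print(energy_kj)  # 25.0
```

A higher Ratio means the system under test ran the benchmark faster than the reference platform; the energy figure depends on both run time and power draw, so a fast but power-hungry system is not automatically better on the energy metrics.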
(Also, "Notes/Tuning Information (Continued)")
This section is where the tester provides notes about compiler flags used, system settings, and other items that do not have dedicated fields elsewhere in the result.
Run rules relating to these items can be found in section 4.2 of the SPEC ACCEL Run and Reporting Rules.
(Also, "Compiler Invocation Notes (Continued)")
This section is where the tester provides notes about how the various compilers were invoked, whether any special paths had to be used, etc.
(Also, "Submit Notes (Continued)")
This section is where the tester provides notes about how the config file submit option was used to assign processes to processors.
(Also, "Portability Notes (Continued)")
This section is where the tester provides notes about portability options and flags used to build the various benchmarks.
(Also, "Base Tuning Notes (Continued)")
This section is where the tester provides notes about baseline optimization options and flags used to build the various benchmarks.
(Also, "Peak Tuning Notes (Continued)")
This section is where the tester provides notes about peak optimization options and flags used to build the various benchmarks.
(Also, "Operating System Notes (Continued)")
This section is where the tester provides notes about changes to the default operating system state and other OS-specific tuning information.
(Also, "Platform Notes (Continued)")
This section is where the tester provides notes about changes to the default hardware state and other non-OS-specific tuning information.
(Also, "Component Notes (Continued)")
This section is where the tester provides information about various components needed to build a particular system. This section is only used if the system under test is built from parts and not sold as a whole system.
(Also, "General Notes (Continued)")
This section is where the tester provides notes about things not covered in the other notes sections.
(Also, "Compilation Flags Used (Continued)")
This section is generated automatically by the benchmark tools. It details compilation flags used and provides links (in the HTML and PDF result formats) to descriptions of those flags.
(Also, "Base Compiler Invocation" and "Peak Compiler Invocation")
This section lists the ways that the various compilers are invoked.
(Also, "Base Portability Flags" and "Peak Portability Flags")
This section lists compilation flags that are used for portability.
(Also, "Base Optimization Flags" and "Peak Optimization Flags")
This section lists compilation flags that are used for optimization.
(Also, "Base Other Flags" and "Peak Other Flags")
This section lists compilation flags that are classified as neither portability nor optimization.
(Also, "Base Unknown Flags" and "Peak Unknown Flags")
This section of the reports lists compilation flags used that are not described in any flags description file. Results with unknown flags are marked "invalid" and may not be published. This marking may be removed by reformatting the result using a flags file that describes all of the unknown flags.
(Also, "Base Forbidden Flags" and "Peak Forbidden Flags")
This section of the reports lists compilation flags used that are designated as "forbidden". Results using forbidden flags are marked "invalid" and may not be published.
This section is automatically inserted by the benchmark tools when there are errors present that prevent the result from being a valid reportable result.
Run rules relating to these items can be found in section 4.2.2 of the SPEC ACCEL Run and Reporting Rules.
The manufacturer-determined formal name of the processor.
Technical characteristics to help identify the processor.
The clock frequency of the CPU, expressed in megahertz.
The maximum clock frequency of the CPU, expressed in megahertz. This is referred to by some vendors as the Turbo frequency.
The type of floating-point unit used in the system.
The number of CPUs that were enabled and active during the benchmark run. More information about CPU counting is in the run rules.
The number of CPUs that can be ordered in a system of the type being tested.
Description (size and organization) of the CPU's primary cache. This cache is also referred to as "L1 cache".
Description (size and organization) of the CPU's secondary cache. This cache is also referred to as "L2 cache".
Description (size and organization) of the CPU's tertiary, or "Level 3" cache.
Description (size and organization) of any other levels of cache memory.
Description of the system main memory configuration. End-user options that affect performance, such as arrangement of memory modules, interleaving, latency, etc, are documented here.
A description of the disk subsystem (size, type, and RAID level if any) of the storage used to hold the benchmark tree during the run.
Any additional equipment added to improve performance.
Run rules relating to these items can be found in section 4.2.3 of the SPEC ACCEL Run and Reporting Rules.
The model name of the accelerator.
The company/vendor of the accelerator.
The name of the accelerator.
Describes the type of accelerator. Possible values include, but are not limited to, GPU, APU, and CPU.
Tells how the accelerator is connected to the system. Possible descriptions include, but are not limited to, PCIe and integrated.
Shows whether the accelerator uses ECC for its memory.
Further description of the accelerator.
The name and version of the software driver used to control the accelerator.
Run rules relating to these items can be found in section 4.2.4 of the SPEC ACCEL Run and Reporting Rules.
The operating system name and version. If there are patches applied that affect performance, they must be disclosed in the notes.
The names and versions of all compilers, preprocessors, and performance libraries used to generate the result.
The type of the filesystem used to contain the run directories.
The state (sometimes called "run level") of the system while the benchmarks were being run. Generally, this is "single user", "multi-user", "default", etc.
Any performance-relevant non-compiler software used, including third-party libraries, accelerators, etc.
Run rules relating to these items can be found in section 4.2.7 of the SPEC ACCEL Run and Reporting Rules.
This field is for the number and rating of the power supplies used in this system for this run.
This field is for more details about the power supply, like a part number or some other identifier.
This is the maximum power (in Watts) that was measured during the entire benchmark suite run.
This is a 60-second measurement of idle power (in watts) on the machine, taken after the benchmark has finished and the system has been given 10 seconds to rest.
This is the minimum temperature (in degrees C) registered during the entire benchmark suite run.
This is the name used to connect the PTDaemon to the power analyzer. If more than one power analyzer was used, there will be multiple descriptions presented.
This is the name of the company that provides the power analyzer or temperature meter.
This is the model of the power analyzer or temperature meter.
This is the serial number of the power analyzer being used.
This is a description of the interface used to connect the power analyzer or temperature meter to the PTDaemon host system, e.g., RS-232 (serial port), USB, or GPIB.
This is the name of the accrediting organization of the institute that calibrated the meter (e.g., NIST, PTB, AIST, NML, CNAS). A list of national metrology institutes for many countries is maintained by NIST at http://gsi.nist.gov/global/index.cfm.
This is the name of the organization that performed the power analyzer calibration.
This is a number or character string that uniquely identifies this meter calibration event. It may appear on the calibration certificate or on a sticker applied to the power analyzer. The format of this identifier is specified by the metrology institute.
The date (DD-MMM-YYYY) the calibration certificate was issued, from the calibration label or the calibration certificate.
This is the version of the PTDaemon. It is automatically supplied by the tools.
This is a brief description of how the power analyzer or temperature meter was used. This could include which power supply was connected to this power analyzer, or how far away this temperature meter was from the air intake of the system.
This is the name used to connect the PTDaemon to the temperature meter. If more than one temperature meter was used, there will be multiple descriptions presented.
For a reportable SPEC ACCEL run, three iterations of each benchmark are run, and the median of the three runs is selected to be part of the overall metric. In output formats that support it, the medians in the result table are underlined in bold.
Each iteration consists of running each benchmark in order. For example, given benchmarks "910.aaa", "920.bbb", and "930.ccc", here's what you might see as the benchmarks were run:
ACCEL_OCL
Running (#1) 910.aaa ref base oct09a default
Running (#1) 920.bbb ref base oct09a default
Running (#1) 930.ccc ref base oct09a default
Running (#2) 910.aaa ref base oct09a default
Running (#2) 920.bbb ref base oct09a default
Running (#2) 930.ccc ref base oct09a default
Running (#3) 910.aaa ref base oct09a default
Running (#3) 920.bbb ref base oct09a default
Running (#3) 930.ccc ref base oct09a default
The results table lists the results in the order in which they were run, in column-major order. In other words, if you are interested in the base scores as they were produced, start at the top of the left-hand column and read down, then read the middle column, then the right column.
If the benchmarks were run with both base and peak tuning, all base runs were completed before starting peak.
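The median selection described above can be sketched as follows. The benchmark names match the example earlier in this section, but the timings are invented for illustration.

```python
# Three timed iterations per benchmark; the median run is the one
# that counts toward the overall metric (and is highlighted in the
# result table in formats that support it).
runs = {
    "910.aaa": [101.2, 99.8, 100.4],
    "920.bbb": [250.0, 251.5, 249.9],
}

# With exactly three runs, the median is the middle value after sorting.
medians = {name: sorted(times)[1] for name, times in runs.items()}
print(medians)
```

Because exactly three iterations are run, the median is simply the middle of the three sorted times; no averaging is involved.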
Copyright 2014-2017 Standard Performance Evaluation Corporation. All Rights Reserved.