Standard Performance Evaluation Corporation
SPECpower®


The increasing demand for energy-efficient IT equipment has created a need for benchmarks that measure both power and performance. In response, SPEC formed the SPECpower Committee, whose role is to add power/energy measurement to existing industry-standard benchmarks and tools and to create new ones.

The SPECpower benchmark is the first industry-standard benchmark to evaluate the power and performance characteristics of single-server and multi-node servers. It can be used to compare power and performance among different servers and serves as a toolset for improving server efficiency. The benchmark is intended for use by hardware vendors, the IT industry, computer manufacturers, and governments.

The first SPECpower® benchmark suite was created to measure the power and performance characteristics of server-class computer equipment. The first release is known as the SPECpower_ssj® 2008 benchmark suite.

The SPECpower_ssj 2008 benchmark is priced at $1,600 for new customers and $400 for qualified non-profit organizations and accredited academic institutions. To find out whether your organization has an existing license for a SPEC product, please contact SPEC at info@spec.org.

The drive to create the power and performance benchmark came from the recognition that the IT industry, computer manufacturers, and governments are increasingly concerned with the energy use of servers. This benchmark provides a means to measure power (at the AC input) in conjunction with a performance metric. This helps IT managers to consider power characteristics along with other selection criteria to increase the efficiency of data centers.

The workload exercises the CPUs, caches, and memory hierarchy, the scalability of shared-memory multiprocessors (SMPs), and the implementation of the JVM (Java Virtual Machine), JIT (just-in-time) compiler, garbage collection, threads, and some aspects of the operating system. The benchmark runs on a wide variety of operating systems and hardware architectures and should not require extensive client or storage infrastructure.

Since this is the first industry power-performance benchmark, a general methodology (http://www.spec.org/power/docs/SPEC-Power_and_Performance_Methodology.pdf) for power measurement has been documented to assist other benchmark developers interested in measuring power.


The PTDaemon interface update: Version 1.11.1 was released September 10th, 2024.

All SPECpower_ssj® 2008 benchmark submissions after March 20th, 2025, must be made with v1.11.1 of the PTDaemon interface or newer.

Run and Reporting Rules Change: For architectures with microcode that can be updated separately from the boot firmware, the ROM version details must be included in the System Under Test notes section. This will be required for all SPECpower_ssj2008 submissions starting with the October 26th review cycle.

For details please see section 3.3.4.5 of the Run and Reporting Rules.

• The Oracle HotSpot 8, 11, and 15 Java versions are considered valid for future SPECpower_ssj2008 submissions using new processor families, as long as they are still supported according to the Oracle roadmap's premier support date.

• Please be aware that non-LTS Java versions appear to be supported for only 6 months.


Any SPECpower_ssj2008 benchmark submissions using HotSpot 1.7 will not be accepted with new processor families after April 2018, per Section 2.10 of the Run and Reporting Rules.

When running the SPECpower_ssj2008 benchmark on Windows Server 2016 with huge pages enabled, a blue screen may occur if the heap size selected for the JVM is not a multiple of 1GB. This is resolved by Microsoft Update KB4025334 (last updated on 7/20/2017) for Windows Server 2016, which may be downloaded from: http://www.catalog.update.microsoft.com/Search.aspx?q=KB4025334
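Until the update is applied, the issue can also be avoided by sizing the JVM heap in whole gigabytes. A minimal sketch using standard HotSpot JVM options (the heap size and jar name below are illustrative placeholders, not taken from the benchmark documentation):

```shell
# Sketch: when large (huge) pages are enabled on Windows Server 2016,
# pick a heap size that is an exact multiple of 1 GB.
# -Xms/-Xmx set the initial/maximum heap; -XX:+UseLargePages enables large pages.
# The jar name and the 4 GB heap size are illustrative placeholders.
java -Xms4g -Xmx4g -XX:+UseLargePages -jar specpower_ssj.jar
```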


The SPEC Power committee is working to clarify the Run & Reporting Rules for the SPECpower_ssj2008 benchmark governing software General Availability requirements. We expect this to disallow any future submissions made with the IBM J9 JVM dating from 2012-03-22.


Run Rules Change: The upper boundary of Calibration interval 0 has been increased to 105% due to issues with some multi-node configurations. Please see section 2.5.2 of the Run and Reporting Rules for details. Results marked INVALID because the elapsed time of Calibration 0 is up to 105% can still be submitted to SPEC, and the resulting INVALID message will be corrected to a WARNING message.


Results

All SPECpower_ssj® 2008 benchmark results currently require the use of benchmark version 1.12.

Submitted Results
Text and HTML reports for the SPECpower_ssj® 2008 benchmark metrics; includes all of the results submitted to SPEC by SPEC member companies and other licensees of the benchmark.

Information

Benchmark Press Releases
Press release material, documents, and announcements.

Related Publications 

Measuring the Energy Efficiency of Transactional Loads on GPGPU
ICPE'19 - Mumbai, India
Download: https://dl.acm.org/authorize?N673010

Variations in CPU Power Consumption
ICPE'16 - Delft, Netherlands
Download: https://dl.acm.org/authorize?N00304

SPEC: Enabling Efficiency Measurement
ICPE'12 Proceedings of the 3rd ACM/SPEC International Conference on Performance Engineering
Download: https://dl.acm.org/authorize?N97346

 

Benchmark Documentation
Technical and support documents, run and reporting rules, etc.
All documentation available on the CD is also available here.

Quick Start Guide
User Guide
Power Measurement Setup Guide
Run and Reporting Rules  (PDF)
Result File Field descriptions 
Technical Design Documents:   
          Design Overview
          CCS Design
          PTDaemon Design
          SSJ Design
Power and Performance Methodology
Frequently Asked Questions - FAQ  (PDF)

Accepted Measurement Devices
The SPECpower_ssj® 2008 Submission Checklist

Support

Technical support requiring the involvement of benchmark specialists is provided by volunteers from our member institutions (see member list). Please do not send us any proprietary information in your queries.

Frequently Asked Questions - FAQ - Installation, build, and runtime issues raised by users of the benchmark.
Power Measurement Setup Guide
Errata list