To check for possible updates to this document, please see http://www.spec.org/power/docs/SPECpower_ssj2008-Result_File_Fields.html
ABSTRACT
This document describes the various fields in the different levels of result files making up the complete SPECpower_ssj2008 result disclosure.
Selecting one of the following will take you to the detailed table of contents for that section:
1. SPECpower_ssj2008 Benchmark
2. Main Report File
3. Top Bar
4. Benchmark Results Summary
5. Aggregate SUT Data
6. System Under Test
7. Shared Hardware
8. Set: 'N'
9. Boot Firmware Settings
10. Management Firmware Settings
11. System Under Test Notes
12. Controller System
13. Measurement Devices
14. Notes
15. Electrical and Environmental Data
16. Aggregate Performance Data
17. Power/Temperature Details Report
18. Power Details for Device 'N'
19. Aggregate Performance Report
20. Set 'N' Performance Report
21. Host 'N' Performance Report
22. JVM Instance 'N' Performance Report
1. SPECpower_ssj2008 Benchmark
1.1 The Workload
1.1.1 Server Side Java
1.1.2 JVM Director
1.2 The Control and Collect System
1.3 The Power and Temperature Daemon
1.4 Result Validation and Report Generation
1.5 References
2. Main Report File
3. Top bar
3.1 Headline
3.2 Test sponsor
3.3 SPEC license #
3.4 Hardware Availability
3.5 Tested by
3.6 Test Location
3.7 Software Availability
3.8 System Source
3.9 Test Date
3.10 Publication Date
3.11 Test Method
3.12 System Designation
3.13 Power Provisioning
3.14 INVALID
4. Benchmark Results Summary
4.1 Performance
4.1.1 Target Load
4.1.2 Actual Load
4.1.3 ssj_ops
4.1.4 Active Idle
4.2 Power
4.2.1 Average Active Power (W)
4.3 Performance to Power Ratio
4.4 ∑ssj_ops / ∑power =
4.5 Result Chart
5. Aggregate SUT Data
5.1 Set Id
5.2 # of Nodes
5.3 # of Chips
5.4 # of Cores
5.5 # of Threads
5.6 Total RAM (GB)
5.7 # of OS Images
5.8 # of JVM Instances
6. System Under Test
7. Shared Hardware
7.1 Shared Hardware
7.1.1 Cabinet/Housing/Enclosure
7.1.2 Form Factor
7.1.3 Power Supply Quantity and Rating (W)
7.1.4 Power Supply Details
7.1.5 Network Switch
7.1.6 Network Switch Details
7.1.7 KVM Switch
7.1.8 KVM Switch Details
7.1.9 Other Hardware
7.1.10 Comment
8. Set: 'N'
8.1 Set Identifier
8.2 Set Description
8.3 # of Identical Nodes
8.4 Comment
8.5 Hardware per Node
8.5.1 Hardware Vendor
8.5.2 Model
8.5.3 Form Factor
8.5.4 CPU Name
8.5.5 CPU Characteristics
8.5.6 CPU Frequency (MHz)
8.5.7 CPU(s) Enabled
8.5.8 Hardware Threads / Core
8.5.9 CPU(s) orderable
8.5.10 Primary Cache
8.5.11 Secondary Cache
8.5.12 Tertiary Cache
8.5.13 Other Cache
8.5.14 Memory Amount (GB)
8.5.15 # and size of DIMM(s)
8.5.16 Memory Details
8.5.17 Power Supply Quantity and Rating (W)
8.5.18 Power Supply Details
8.5.19 Disk Drive
8.5.20 Disk Controller
8.5.21 # and type of Network Interface Cards (NICs) Installed
8.5.22 NICs Enabled in Firmware / OS / Connected
8.5.23 Network Speed
8.5.24 Keyboard
8.5.25 Mouse
8.5.26 Monitor
8.5.27 Optical Drives
8.5.28 Other Hardware
8.6 Software per Node
8.6.1 Power Management
8.6.2 Operating System (OS)
8.6.3 OS Version
8.6.4 Filesystem
8.6.5 JVM Vendor
8.6.6 JVM Version
8.6.7 JVM Commandline Options
8.6.8 JVM Affinity
8.6.9 JVM Instances
8.6.10 JVM Initial Heap (MB)
8.6.11 JVM Maximum Heap (MB)
8.6.12 JVM Address Bits
8.6.13 Boot Firmware Version
8.6.14 Management Firmware Version
8.6.15 Workload Version
8.6.16 Director Location
8.6.17 Other Software
9. Boot Firmware Settings
10. Management Firmware Settings
11. System Under Test Notes
12. Controller System
12.1 Hardware
12.1.1 Hardware Vendor
12.1.2 Model
12.1.3 CPU Description
12.1.4 Memory amount (GB)
12.2 Software
12.2.1 Operating System (OS)
12.2.2 JVM Vendor
12.2.3 JVM Version
12.2.4 CCS Version
13. Measurement Devices
13.1 Power Analyzer
13.1.1 Hardware Vendor
13.1.2 Model
13.1.3 Serial Number
13.1.4 Connectivity
13.1.5 Input Connection
13.1.6 Current Range
13.1.7 Voltage Range
13.1.8 Metrology Institute
13.1.9 Accredited by
13.1.10 Calibration Label
13.1.11 Date of Calibration
13.1.12 PTDaemon Host System
13.1.13 PTDaemon Host OS
13.1.14 PTDaemon Version
13.1.15 Setup Description
13.2 Temperature Sensor
13.2.1 Hardware Vendor
13.2.2 Model
13.2.3 Driver Version
13.2.4 Connectivity
13.2.5 PTDaemon Host System
13.2.6 PTDaemon Host OS
13.2.7 Setup Description
14. Notes
15. Electrical and Environmental Data
15.1 Target Load
15.2 Average Voltage (V)
15.3 Average Current (A)
15.4 Average Power Factor
15.5 Average Active Power (W)
15.6 Line Standard
15.7 Average Power Factor
15.8 Minimum Ambient Temperature (°C)
15.9 Minimum Temperature (°C)
15.10 Elevation (m)
16. Aggregate Performance Data
16.1 Target Load
16.2 Actual Load
16.3 ssj_ops
16.3.1 Target
16.3.2 Actual
16.3.3 ssj_ops@calibrated=
16.4 ssj_ops Chart
17. Power/Temperature Details Report
17.1 Top bar
17.2 Benchmark Results Summary
17.3 Measurement Devices
17.4 Notes
18. Power Details for Device 'N'
18.1 Target Load
18.2 Average Voltage (V)
18.3 Voltage Range (V)
18.4 Average Current (A)
18.5 Current Range (A)
18.6 Average Power Factor
18.7 Average Active Power (W)
18.8 Power Measurement Uncertainty (%)
19. Aggregate Performance Report
19.1 Top bar
19.2 Benchmark Results Summary
19.3 Aggregate SUT Data
19.4 System Under Test
19.5 Shared Hardware
19.6 Set: 'N'
19.7 System Under Test Notes
19.8 Notes
19.9 Set Instance Summary
19.9.1 Set
19.9.2 ssj_ops@100%
19.9.3 ssj_ops Set Chart
19.10 Set N Scores:
20. Set 'N' Performance Report
20.1 Top bar
20.2 Benchmark Results Summary
20.3 Aggregate SUT Data
20.4 System Under Test
20.5 Shared Hardware
20.6 Set: 'N'
20.7 System Under Test Notes
20.8 Notes
20.9 Host Instance Summary
20.9.1 Host
20.9.2 ssj_ops@100%
20.9.3 ssj_ops Host Chart
20.10 Host 'N' Scores:
21. Host 'N' Performance Report
21.1 Top bar
21.2 Benchmark Results Summary
21.3 System Under Test
21.4 Set: 'N'
21.5 System Under Test Notes
21.6 Notes
21.7 JVM Instance Summary
21.7.1 JVM Instance
21.7.2 ssj_ops@100%
21.7.3 ssj_ops JVM Instance Chart
21.8 JVM 'N' Scores:
22. JVM Instance 'N' Performance Report
22.1 Top bar
22.2 Benchmark Results Summary
22.3 System Under Test
22.4 Set: 'N'
22.5 System Under Test Notes
22.6 Notes
22.7 Performance Details
22.7.1 Target Load
22.7.2 Actual Load
22.7.3 Transaction Type
22.7.4 Count
22.7.5 Total Heap (MB)
SPECpower_ssj2008 is the first generation SPEC benchmark for evaluating the power and performance of server class computers.
The benchmark suite consists of three separate software modules:
The workload is a Java program designed to exercise the CPU(s), caches, memory, the scalability of shared memory processors, JVM (Java Virtual Machine) implementations, JIT (Just In Time) compilers, garbage collection, threads, and certain aspects of the operating system of the SUT.
The workload architecture is a 3-tier system with emphasis on the middle tier. These tiers are comprised as follows:
The JVM Director is a separate and distinct mechanism from the actual workload itself (the three-tiered client-server environment), but runs concurrently with the JVM instance(s) of the workload. Like the workload, the JVM Director is also a Java application and, as such, runs as its own JVM instance.
The JVM Director can be run locally on the SUT, or it can be run remotely at the user's discretion (see Director Location). Whichever method is employed, the JVM Director and the workload JVM instance(s) will communicate via a TCP/IP socket connection.
The Control and Collect System (CCS) is a Java-based application that resides on the controller server. CCS is used to connect to three types of data sources via TCP/IP socket communication: the JVM Director, which provides the workload performance data, the PTDaemon instances reporting power readings, and the PTDaemon instances reporting temperature readings.
The Power and Temperature Daemon (PTDaemon) is a single executable program that communicates with a power analyzer or a temperature sensor via the server's native RS-232 port, USB port, or additionally installed interface cards, e.g. GPIB. It reports the power consumption or temperature readings to CCS via a TCP/IP socket connection. It supports a variety of RS-232, GPIB and USB interface command sets for a variety of power analyzers and temperature sensors. PTDaemon is the only one of the three SPECpower_ssj2008 software modules that is not Java based. Although it can quite easily be set up and run on a server other than the controller server, in the simplest SPECpower_ssj2008 test bed implementation the PTDaemon will typically reside on the controller server.
At the beginning of each run, the benchmark parameters are checked for conformance to the run rules. Warnings are displayed for non-compliant properties and printed in the final report; however, the benchmark will run to completion, producing a report that is not valid for publication.
At the end of a benchmark run the report generator module is called to generate the report files described here from the data given in the configuration files and collected during this benchmark run. Again, basic validity checks are performed to ensure that interval length, target load throughput, temperature, etc. are within the limits defined in the run rules. For more information see section "2.5.2 Validity Checks" in the Run and Reporting Rules document.
More detailed information can be found in the documents shown in the following table.
For the latest versions, please consult SPEC's website.
This section gives an overview of the information and result fields in the main report file. Additional information is shown in the Power/Temperature Details Report, which includes detailed power information for potentially multiple power analyzers, and in the Aggregate Performance Report, which is generated for multi-node tests only. The Set Performance Report represents the next level of detail and is created only if multiple heterogeneous sets of nodes are used, which is currently not allowed for valid results. This information is further extended in the Host Performance Report and the JVM Instance Performance Report, which are generated for each host and each JVM instance and include specific configuration and performance details.
The top bar shows the measured SPECpower_ssj2008 result and gives some general information regarding this test run.
The headline of the performance report includes one field displaying the hardware vendor (config.hw.vendor) and the name (config.hw.model) of the system under test. If this report is for a historical system, the declaration "(Historical)" must be added to the model name. In a second field the overall SPECpower_ssj2008 result achieved in this test (overall ssj_ops/watt) is printed, prefixed by an "Invalid" indicator if the current result does not pass the validity checks implemented in the SPECpower_ssj report generation software. More detailed information about the result metric is presented in section 3.1 of the SPECpower_ssj2008 Run and Reporting Rules.
The name of the organization or individual that sponsored the test. Generally, this is the name of the license holder (config.test.sponsor).
The SPEC license number of the organization or individual that ran the result (config.test.spec_license).
The date (month) when all the hardware and related firmware modules necessary to run the result are generally available (config.hw.available). The date must be specified in the format YYYY-MM.
For example, if the CPU is available in 2013-02, but the Firmware version used for the test is not available until 2013-04, then the hardware availability date is 2013-04 (unless some other component pushes it out farther).
The name of the organization or individual that ran the test and submitted the result (config.test.tested_by).
The name of the city, state, and country where the test took place. If there are installations in multiple geographic locations, those must also be listed in this field (config.test.location).
The date when all the software necessary to run the result is generally available (config.sw.available). For example, if the operating system is available in Aug-2007, but the JVM is not available until Oct-2007, then the software availability date is Oct-2007 (unless some other component pushes it out farther).
Possible values for this property are "Single Supplier" or "Parts Built" (config.hw.system_source).
The date when the test was run. This value is automatically supplied by the SPECpower_ssj software; the time reported by the system under test is recorded in the raw result file.
The date when this report will be published after finishing the review. This date is automatically filled in with the correct value by the submission tool provided by SPEC. By default this field is set to "Unpublished" by the software generating the report.
Possible values for this property (config.test.method) are:
Possible values for this property (config.hw.system_designation) are:
Possible values for this property (config.hw.power_provisioning) are:
Any inconsistency with the run and reporting rules that causes one of the validity checks implemented in the report generation software to fail will be reported here, and in this case all pages of the report file will be stamped with an "Invalid" watermark. The printed text will show more details about which of the run rules was not met and why (see section 2.5.2 of the SPECpower_ssj2008 Run and Reporting Rules).
This section describes the result details for all measurement intervals in a table and as a graph.
The first three columns of the results table show the measured throughput and the actual percentage of calibrated throughput compared to the target percentage.
The different target load levels derived from the calibrated throughput, starting with 100% of the calibrated throughput and decreasing to "Active Idle" = 0% or no throughput. The benchmark software schedules the required number of requests to actually achieve the intended throughput levels during each of the measurement intervals, each lasting 240 seconds.
The load levels actually achieved during the different phases of the benchmark as a percentage of the calibrated throughput. The percentages must match the target load of each phase with less than 2% deviation (positive or negative).
The number of operations finished during this measurement interval divided by the number of seconds defined for this interval, showing the throughput (workload operations per second) for this period.
The last measurement interval runs without any transactions scheduled by the workload software. Therefore no throughput is reported for this interval; only the power consumption is measured and displayed.
This column of the results summary table shows the power consumption for the different target loads.
Average active power measured by the power analyzer(s) and accumulated by the PTDaemon (Power and Temperature Daemon) for this measurement interval, displayed as watts (W).
The average throughput divided by the average power consumption for each of the measurement intervals.
The overall score of the SPECpower_ssj2008 benchmark calculated from the sum of the performance measured at each target load level (in ssj_ops) divided by the sum of the average power (in W) at each target load including active idle.
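Written out with the rows of the results summary table, this is:
overall ssj_ops/watt = ( ssj_ops@100% + ssj_ops@90% + ... + ssj_ops@10% ) / ( power@100% + power@90% + ... + power@10% + power@Active Idle )
Since no throughput is generated at Active Idle, that level contributes only to the denominator.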
The result chart graphically displays the results reported in the summary table in one diagram. The red bars show the performance to power ratio (throughput / W) of each target load given on the y-axis graphically (corresponding to the upper x-axis) and numerically as a label in the bar. Longer bars / higher numbers are better. By definition there is no throughput for the "Active Idle" level and so the ratio is always 0. The bold blue line with the markers corresponds to the lower x-axis and shows the average power consumption for each target load given on the y-axis. Lower numbers are better. The thin, vertical, straight line corresponds to the upper x-axis and shows the overall ssj_ops per watt result of the benchmark. A higher number is better.
In this section aggregated values for several system configuration parameters are reported. The section will be displayed only if more than one node is configured.
A user defined identifier (see (SETID) in runssj.bat/runssj.sh) used to identify the descriptive configuration properties that will be used for the system under test. For example, with a (SETID) of "sut", the descriptive configuration properties will be read from the file "SPECpower_ssj_config_sut.props" from the Director system.
The number of nodes per set and the total number of all nodes used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
The number of processor chips per set and the total number of all chips used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
The number of processor cores per set and the total number of all cores used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
The number of processor threads per set and the total number of all threads used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
The amount of memory (GB) per set and the total memory size for all systems used to run the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
The number of operating system images per set and the total number of all OS images used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
The number of Java Virtual Machine instances per set and the total number of all JVM instances used for running the test. The reported values are calculated by the benchmark software from the information given in the properties files and the benchmark startup script files.
The following section of the report file describes the hardware and the software of the system under test (SUT) used to run the reported SPECpower benchmark with the level of detail required to reproduce this result.
In this section hardware components common to all nodes will be described. The section will be displayed only if more than one node is configured.
A table including the description of the shared hardware components.
The model name identifying the enclosure housing the tested nodes (config.shared.enclosure).
The full SUT form factor (including all nodes and any shared hardware). (config.shared.form_factor).
For rack-mounted systems, specify the number of rack units. For other types of enclosures, specify "Tower" or "Other".
The number of power supplies that are installed in the tested configuration (config.shared.psu.installed) and the power rating for each power supply (config.shared.psu.rating). Both values are set to 0 if there are no shared power supplies.
The supplier name of the PSU and the order number to identify it (config.shared.psu.description). "N/A" if there are no shared power supplies. In case of a "Parts Built" system (see: System Source) the manufacturer name and the part number of the PSU must be specified here.
The number of network switches used to run the benchmark (config.shared.network.switch). "N/A" if there is no network switch.
The manufacturer of the network switch and the model number to identify it (config.shared.network.switch.description). "N/A" if there is no network switch.
The number of KVM switches used to run the benchmark (config.shared.kvm). "N/A" if there is no KVM switch.
The manufacturer of the KVM switch and the model number to identify it (config.shared.kvm.description). "N/A" if there is no KVM switch.
Any additional shared equipment added to improve performance and required to achieve the reported scores (config.shared.other).
Description of additional performance or power relevant components not covered in the fields above (config.shared.comment)
Detailed hardware and software description of the identically configured nodes which constitute this set.
A unique identifier for this set of nodes. This number or string is read by the benchmark program from the "-setid" commandline parameter used to start the SSJ code of the benchmark and reported here. (see (SETID) in runssj.bat/runssj.sh)
A textual description of this set of nodes, e.g. the model name of a blade server (config.set.description).
The number of identically configured nodes which constitute this set. This number is read by the benchmark program from the "-numHosts" commandline parameter used to start the director code of the benchmark and reported here. (see (NUM_HOSTS) in rundirector.bat/rundirector.sh)
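A hypothetical sketch of how the two parameters mentioned above appear on the generated command lines (all other arguments are elided and the values are placeholders only):
java ... -numHosts 4 ...   (director, started by rundirector.sh / rundirector.bat with NUM_HOSTS set to 4)
java ... -setid sut ...    (SSJ workload, started by runssj.sh / runssj.bat with SETID set to "sut")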
Additional comments related to this set of nodes (config.set.comment).
This section describes in detail the different hardware components of the system under test which are important to achieve the reported result.
Company which sells the hardware (config.hw.vendor)
The model name identifying the system under test (config.hw.model)
The form factor for this system (config.hw.form_factor).
In multi-node configurations, this is the form factor for a single node. For rack-mounted systems, specify the number of rack units. For blades, specify "Blade". For other types of systems, specify "Tower" or "Other".
A manufacturer-determined processor formal name. (config.hw.cpu)
Technical characteristics to help identify the processor, such as number of cores, frequency, cache size etc (config.hw.cpu.characteristics).
If the CPU is capable of automatically running the processor core(s) faster than the nominal frequency and this feature is enabled, this field should also list the feature and the maximum frequency it enables on that CPU (e.g.: "Intel Turbo Boost Technology up to 3.46GHz").
If this CPU clock feature is present but is disabled, no additional information is required here.
The nominal (marked) clock frequency of the CPU, expressed in megahertz (config.hw.cpu.mhz).
If the CPU is capable of automatically running the processor core(s) faster than the nominal frequency and this feature is enabled, then the CPU Characteristics field must list additional information, at least the maximum frequency and the use of this feature.
Furthermore if the enabled/disabled status of this feature is changed from the default setting this must be documented in the System Under Test Notes field.
The CPUs that were enabled and active during the benchmark run, displayed as the number of cores (config.hw.cpu.cores), the number of chips (config.hw.cpu.chips) and the number of cores per chip (config.hw.cpu.cores_per_chip).
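As a simple hypothetical illustration of how these three values relate: a node with 2 populated processor chips and 4 enabled cores per chip would be reported as 8 cores, 2 chips, 4 cores/chip.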
The number of hardware threads available per core (config.hw.cpu.threads_per_core).
The number of CPUs that can be ordered in a system of the type being tested (config.hw.cpu.orderable).
Description (size and organization) of the CPU's primary cache. This cache is also referred to as "L1 cache" (config.hw.cache.primary).
Description (size and organization) of the CPU's secondary cache. This cache is also referred to as "L2 cache" (config.hw.cache.secondary).
Description (size and organization) of the CPU's tertiary, or "L3" cache (config.hw.cache.tertiary).
Description (size and organization) of any other levels of cache memory (config.hw.cache.other).
Total size of memory in the SUT in GB (config.hw.memory.gb).
Number and size of memory modules used for testing (config.hw.memory.dimms).
Detailed description of the system main memory technology, sufficient for identifying the memory used in this test (config.hw.memory.description).
Since the introduction of DDR4 memory there are two slightly different formats.
The recommended formats are described here.
DDR4 Format:
N x gg ss pheRxff PC4v-wwwwaa-m; slots k, ... l populated
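A hypothetical entry following this pattern (the values are purely illustrative and do not come from a real result):
4 x 8 GB 2Rx8 PC4-2400T-R; slots 1 - 4 populated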
References:
The number of power supplies that are installed in this node (config.hw.psu.installed) and the power rating for each power supply (config.hw.psu.rating). Both entries should show "None" if the node is powered by a shared power supply.
The supplier of the PSU and the order number to identify it (config.hw.psu.description). "Shared" if this node is powered by a shared power supply and does not include its own. In case of a "Parts Built" system (see: System Source) the manufacturer and the part number of the PSU must be specified here.
A description of the disk drive(s) (count, model, size, type, rotational speed and RAID level if any) used to boot the operating system and to hold the benchmark software and data during the run (config.hw.disk).
The supplier name and order number of the controller used to drive the disk(s) (config.hw.disk.controller).
In case of a "Parts Built" system (see: System Source) the manufacturer name
and the part number of the disk controller must be specified here.
A description of the network controller(s) (number, supplier name, order number, ports and speed) installed on the SUT (config.hw.network.controller).
In case of a "Parts Built" system (see: System Source) the manufacturer name
and the part number of the NIC must be specified instead of supplier name and order number.
The number of NICs (ports) enabled in the Firmware, in the OS and actually connected during the test (config.hw.network.controller.enabled.firmware, config.hw.network.controller.enabled.os, config.hw.network.controller.connected).
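A hypothetical example of such an entry is "2 / 2 / 1", meaning two ports enabled in the firmware, two enabled in the OS, and one actually connected during the test (the exact rendering in the report may differ).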
The network speed actually used on the configured NICs during the test (config.hw.network.speed). A speed of at least 1 Gbit/s is required for a valid benchmark run.
The type of keyboard (USB, PS2, KVM or None) used (config.hw.keyboard).
The type of mouse (USB, PS2, KVM or None) used (config.hw.mouse).
Specifies if a monitor was used for the test and how it was connected (directly or via KVM) (config.hw.monitor).
Specifies whether any optical drives were configured in the SUT (config.hw.optical).
Any additional equipment added to improve performance and required to achieve the reported scores (config.hw.other).
For "Personal Systems" (see System Designation)
the vendor of the display device (monitor) and its model number must be added here, as this device has to be be included in the power measurement, (see section 2.11.2 paragraph 4 of the SPECpower_ssj2008 Run and Reporting Rules).
This section describes in detail the various software components installed on the system under test, which are important to achieve the reported result, and their configuration parameters.
This field shows whether power management features of the SUT were enabled or disabled (config.sw.power_management).
The operating system name (config.sw.os).
The operating system version. If there are patches applied that affect performance, they must be disclosed in the System Under Test Notes (config.sw.os.version).
The type of the filesystem used to contain the run directories (config.sw.filesystem).
The company that makes the JVM software. (config.sw.jvm.vendor)
Name and version of the JVM software product. (config.sw.jvm.version)
JVM command-line options used when invoking the benchmark. (config.sw.jvm.options)
Commands used to configure affinity for each JVM (config.sw.jvm.affinity)
Examples:
taskset -c [0,2;1,3]
start /affinity [0x3,0xC]
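Expanded into individual commands for two JVM instances, the examples above would look roughly like this (core numbers and the elided java arguments are placeholders only):
taskset -c 0,2 java ...   (first JVM instance, bound to cores 0 and 2)
taskset -c 1,3 java ...   (second JVM instance, bound to cores 1 and 3)
start /affinity 0x3 java ...   (first JVM instance, mask 0x3 = cores 0 and 1)
start /affinity 0xC java ...   (second JVM instance, mask 0xC = cores 2 and 3)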
The quantity of JVM instances running. This number is detected automatically by the benchmark program and reported here.
The number of megabytes initially used by the JVM heap. "Unlimited" or "dynamic" are allowable values for JVMs that adjust automatically (config.sw.jvm.heap.initial).
The maximum number of megabytes that can be used by the JVM heap. "Unlimited" or "dynamic" are allowable values for JVMs that adjust automatically (config.sw.jvm.heap.max).
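For typical HotSpot-based JVMs these two values correspond to the -Xms and -Xmx command-line options reported under JVM Commandline Options, for example (hypothetical sizes):
java -Xms1700m -Xmx1700m ...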
The basic pointer size (32 or 64 bit) used by the installed JVM (config.sw.jvm.bitness).
A version number or string identifying the boot firmware installed on the SUT. (config.sw.boot_firmware.version).
A version number or string identifying the management firmware running on the SUT or "None" if no management controller was installed. (config.sw.mgmt_firmware.version).
The name and revision number of the workload program used to produce this result. This information is provided automatically by the benchmark software.
Identifies the system which hosts the director controlling the different JVM instances (SUT, Controller or other). Locations other than SUT or Controller require additional description under Notes (config.director.location).
Any performance-relevant software used and required to reproduce the reported scores, including third-party libraries, accelerators, etc. (config.sw.other)
Free text description of what sort of tuning one has to do to the boot firmware (BIOS) to get these results, e.g. configuration settings changed from the default (config.sw.boot_firmware.settings).
Free text description of what sort of tuning one has to do to the management firmware to get these results, e.g. configuration settings changed from the default, or "None" if no management controller was installed (config.sw.mgmt_firmware.settings).
Free text description of what sort of tuning one has to do to either the OS or the JVM to get these results. Also, additional hardware information not covered in the other fields above can be given here.
The following list shows examples of information that must be reported in this section:
The next section of the report file describes the hardware and the software of the system running the controller program.
This part of the report contains a brief overview of the hardware used to run the SPECpower Control and Collection System (CCS).
Company which sells/manufactures the controller hardware (ccs.config.hw.vendor)
The model name identifying the system running the controller software (ccs.config.hw.model)
The name of the processor installed in the controller system (ccs.config.hw.cpu) and some technical characteristics to help identify the processor, such as number of cores, frequency, cache size etc (ccs.config.hw.cpu.characteristics)
Total size of memory in the controller system in GB (ccs.config.hw.memory.gb)
Main software components installed on the controller system.
The name and the version of the operating system installed on the controller system (ccs.config.sw.os)
The company which makes the JVM software (ccs.config.sw.jvm.vendor)
Name and version of the JVM software product. (ccs.config.sw.jvm.version)
The version of the controller program used to produce this result. This information is provided automatically by the benchmark software.
This section of the report shows the details of the different measurement devices used for this benchmark run.
Starting with version 1.10 of the benchmark there may be more than one measurement device used to measure power and temperature.
The following table includes information about the power analyzer used to measure the electrical data.
Company which manufactures and/or sells the power analyzer (ptd.pwrN.config.analyzer.vendor)
The model name of the power analyzer type used for this benchmark run (ptd.pwrN.config.analyzer.model)
The serial number uniquely identifying the power analyzer used for this benchmark run (ptd.pwrN.config.analyzer.serial)
Which interface was used to connect the power analyzer to the PTDaemon host system and to read the power data, e.g. RS-232 (serial port), USB, GPIB etc. (ptd.pwrN.config.analyzer.connectivity)
Input connection used to connect the load, if several options are available, or "Default" if not (ptd.pwrN.config.analyzer.input_connection).
Value of current range setting to which the power analyzer has been configured, or "Auto" if none (ptd.pwrN.config.analyzer.current_range).
Value of voltage range setting to which the power analyzer has been configured, or "Auto" if none (ptd.pwrN.config.analyzer.voltage_range).
Name of the national metrology institute, which specifies the calibration standards for power analyzers, appropriate for the Test Location reported in the FDR (ptd.pwrN.config.calibration.institute).
Calibration should be done according to the standard of the country where the test was performed or where the power analyzer was manufactured.
Examples from accepted result reports:
Country | Metrology Institute
--- | ---
USA | NIST (National Institute of Standards and Technology)
Germany | PTB (Physikalisch-Technische Bundesanstalt)
Japan | AIST (National Institute of Advanced Industrial Science and Technology)
Taiwan (ROC) | ITRI (Industrial Technology Research Institute)
China | NIM (National Institute of Metrology)
Name of the organization that performed the power analyzer calibration according to the standards defined by the national metrology institute. Could be the analyzer manufacturer, a third party company, or an organization within your own company (ptd.pwrN.config.calibration.accredited_by).
A number or character string which uniquely identifies this meter calibration event. May appear on the calibration certificate or on a sticker applied to the power analyzer. The format of this number is specified by the metrology institute (ptd.pwrN.config.calibration.label).
The date (DD-MMM-YYYY) the calibration certificate was issued, from the calibration label or the calibration certificate (ptd.pwrN.config.calibration.date).
The manufacturer and model number of the system connected to the power analyzer and running the power daemon. If PTDaemon is running on the controller system, a reference to this system can be reported instead, e.g. "Controller" (ptd.pwrN.config.ptd.system).
The name and the version of the operating system installed on the power daemon host system. If PTDaemon is running on the controller system, a reference to this system can be reported instead, e.g. "same as Controller" (ptd.pwrN.config.ptd.os).
The version of the power daemon program reading the analyzer data, including CRC information to verify that the released version was running unchanged. This information is provided automatically by the benchmark software.
Free format textual description of the device or devices measured by this power analyzer and the accompanying PTDaemon instance, e.g. "SUT Power Supplies 1 and 2". (ptd.pwrN.config.analyzer.setup_description).
The following table includes information about the temperature sensor used to measure the ambient temperature of the test environment.
Company which manufactures and/or sells the temperature sensor (ptd.tempN.config.sensor.vendor)
The manufacturer and model name of the temperature sensor used for this benchmark run (ptd.tempN.config.sensor.model)
The version number of the operating system driver used to control and read the temperature sensor (ptd.tempN.config.sensor.driver)
Which interface was used to read the temperature data from the sensor, e.g. RS-232 (serial port), USB etc. (ptd.tempN.config.sensor.connectivity)
The manufacturer and model number of the system connected to the temperature sensor and running the temperature daemon (ptd.tempN.config.ptd.system)
The name and the version of the operating system installed on the temperature daemon host system (ptd.tempN.config.ptd.os)
Free format textual description of the device or devices measured and the approximate location of this temperature sensor, e.g. "50 mm in front of SUT main airflow intake". (ptd.tempN.config.sensor.setup_description)
Additional important information required to reproduce the results which belongs to reporting sections other than the SUT description (i.e. is not related to the SUT) and requires a larger text area (config.notes).
The following section displays more details of the electrical and environmental data collected during the different target loads, including data not used to calculate the benchmark result. For further explanation of the measured values, see the "SPECpower Methodology" document (SPEC-Power_and_Performance_Methodology.pdf).
Load levels as described in paragraph Target Load
Average voltage for each of the target load levels, measured in volts (V).
Average current for each of the target load levels, measured in amperes (A).
Average power factor for each of the target load levels (PF).
Average active power for each target load level as described in paragraph Average Active Power (W)
Description of the line standards for the main AC power as provided by the local utility company and used to power the SUT. The standard voltage and frequency are printed in this field followed by the number of phases and wires used to connect the SUT to the AC power line (config.line.standard.voltage, config.line.standard.frequency, config.line.standard.phase, config.line.standard.wires).
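A hypothetical example of such an entry: "230 V / 50 Hz; 1 phase, 2 wires" for a typical European installation, or "120 V / 60 Hz; 1 phase, 2 wires" for a typical North American one (the exact rendering in the report may differ).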
Power factor average over all target load levels.
The minimum ambient temperature for each of the target load levels measured by the temperature sensor. All values are measured in ten second intervals, evaluated by the PTDaemon and reported to the collection system at the end of each target load level.
Minimum temperature which was measured by the temperature sensor during all target load levels.
Elevation of the location where the test was run. This information is provided by the tester (config.test.elevation).
This section describes the aggregated throughput for all JVM instances measured during all test phases including the calibration intervals in a table and as a graph.
Load levels as described in paragraph Target Load plus the calibration phases at the beginning. The number of calibration phases can be configured by the tester (config.input.calibration.interval_count), minimum = 3, maximum = 10.
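In the benchmark properties this setting is a single line using standard Java properties syntax, for example (illustrative value within the allowed range):
config.input.calibration.interval_count=3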
This column shows the actual target loads as described in paragraph Actual Load.
The throughput scores, both target and actual values, for all test phases are printed in two columns.
The target throughput for the measurement phases calculated from the calibrated maximum throughput ssj_ops@calibrated=.
The actual throughput measured during all test phases including calibration as described in paragraph ssj_ops.
The calibrated throughput is calculated from the average throughput of the last two calibration phases. It is required to run at least three calibration phases and at most ten (config.input.calibration.interval_count).
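A worked example with hypothetical numbers: if the last two calibration intervals reported 285,000 and 295,000 ssj_ops, the calibrated throughput is (285,000 + 295,000) / 2 = 290,000 ssj_ops, and the target for the 70% load level becomes 0.7 × 290,000 = 203,000 ssj_ops.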
The result chart graphically displays the throughput results reported in the aggregate performance data table in one diagram. The blue line with the square data points represents the target values and the red line with the round data points represents the actually measured throughput values for the different test phases as indicated on the x-axis. The throughput values are shown on the y-axis; higher values are better. The thin horizontal line at the top shows the maximum throughput calculated from the calibration runs.
This is the second part of the SPECpower_ssj2008 full disclosure report.
This section shows the measured SPECpower_ssj2008 result and gives some general information regarding this test run. For more details see section Top bar.
This section presents the aggregated active power consumption (see Average Active Power (W)) and the minimum temperature (see Minimum Ambient Temperature (°C)) for all test phases (see Target Load).
The chart to the right graphically displays the power and temperature values from the summary table. The red line with the square data points represents the power consumption in W and the blue line with the round data points represents the minimum temperature values in °C for the different test phases as indicated on the x-axis. The power values are shown on the left y-axis; lower numbers are better. The temperature values are shown on the right y-axis.
The thin red horizontal line indicates the average power consumption for the target loads not including the calibration phases.
A description of the power analyzers and temperature sensors used for this test. For more details see section Measurement Devices.
A description of all configuration settings that have been changed from the default values. For more details see section Notes.
This section includes additional power information for all test phases, reported separately for each power analyzer.
Load levels as described in paragraph Target Load
Average voltage in V for each test phase as reported by the PTDaemon instance connected to power analyzer 'N'.
The voltage range for each test phase as configured in the power analyzer. Typically range settings are read by PTDaemon directly from the power analyzer. If a power analyzer does not support range reading the values are taken from the (ptd.pwr1.config.analyzer.voltage_range) property in the "ccs.props" file.
Please note that automatic voltage range setting by the analyzer is not allowed for any of the currently accepted analyzers and will invalidate the result.
Average current in A for each test phase as reported by the PTDaemon instance connected to power analyzer 'N'
The current range for each test phase as configured in the power analyzer. Typically range settings are read by PTDaemon directly from the power analyzer. If a power analyzer does not support range reading the values are taken from the (ptd.pwr1.config.analyzer.current_range) property in the "ccs.props" file.
Please note that automatic current range setting by the analyzer is not allowed for any of the currently accepted analyzers and will invalidate the result.
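Both range settings described above appear as plain property lines in the "ccs.props" file, for example (hypothetical values; the correct ranges depend on the analyzer and on the measured load):
ptd.pwr1.config.analyzer.voltage_range=300
ptd.pwr1.config.analyzer.current_range=5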
Average power factor for each test phase as reported by the PTDaemon instance connected to power analyzer 'N'
Active power averages for each test phase as reported by the PTDaemon instance connected to power analyzer 'N'.
The average uncertainty of the reported power readings for each test phase as calculated by PTDaemon based on the range settings.
The value must be within the 1% limit defined in section "2.13.2 Power Analyzer Specifications" of the SPECpower_ssj2008 Run and Reporting Rules document. For some analyzers range reading may not be supported; the uncertainty calculation may still be possible based on manual or command-line range settings. More details are given in the measurement setup guide, see SPEC-Power_Measurement_Setup_Guide.pdf.
This is the third part of the SPECpower_ssj2008 full disclosure report. It repeats the configuration information and aggregate performance numbers from the first part and adds more detailed performance information if more than one JVM instance was started.
This report is not created for benchmark runs using only one JVM instance.
This section shows the measured SPECpower_ssj2008 throughput results and gives some general information regarding this test run. For more details see section Top bar of the main report file.
In contrast to the top bar in the main report file the headline does not show the overall metric but the aggregated performance at 100% target load for the whole SUT and the average throughput at 100% target load per Host and per JVM.
This section describes the aggregated throughput for all JVM instances measured during all test phases including the calibration intervals in a table and as a graph. For more details see section Aggregate Performance Data.
This section repeats the information from the corresponding section of the main report file.
For more details see section Aggregate SUT Data.
This section repeats the information from the corresponding section of the main report file.
For more details see section System Under Test.
This section repeats the information from the corresponding section of the main report file.
For more details see section Shared Hardware.
This section repeats the information from the corresponding section of the main report file.
For more details see section Set: 'N'.
This section repeats the information from the corresponding section of the main report file.
For more details see section System Under Test Notes.
This section repeats the information from the corresponding section of the main report file.
For more details see section Notes.
This section gives an overview of the accumulated throughput for the different sets.
This column of the table names the different sets, the aggregated throughput for all sets at 100% target load and the average throughput per Host and per JVM at 100% target load.
The aggregated throughput for all JVM instances of a set at the 100% target load level.
The result chart graphically displays the throughput results reported in the set instance summary table in one diagram. The colored lines represent the actually measured throughput values of each set for the different test phases as indicated on the x-axis. The throughput values are shown on the y-axis; higher values are better. The thin horizontal line shows the average throughput per set.
This section describes the aggregated throughput for all JVM instances belonging to this set measured during all test phases including the calibration intervals in a table and as a graph.
The layout and the information are similar to the "Aggregate Performance Data" section of the main report file.
For more details see section Aggregate Performance Data.
This report represents the next level of the SPECpower_ssj2008 full disclosure report.
It describes the configuration and performance details for a specific set of nodes.
There may exist several of these reports, one for each set.
This report is not created for benchmark runs using only one homogeneous set of nodes.
This section shows the measured SPECpower_ssj2008 throughput results for this set of nodes and gives some general information regarding this test run. For more details see section Top bar of the main report file.
In contrast to the top bar in the main report file the headline does not show the overall metric but the aggregated performance at 100% target load for the whole set and the average throughput at 100% target load per Host and per JVM.
This section describes the aggregated throughput for all JVM instances of this set measured during all test phases including the calibration intervals in a table and as a graph.
For more details see section Aggregate Performance Data.
This section repeats set specific information from the corresponding section of the main report file.
For more details see section System Under Test.
This section repeats the information from the corresponding section of the main report file.
For more details see section Shared Hardware.
This section repeats set specific information from the corresponding section of the main report file.
For more details see section Set: 'N'.
This section repeats set specific information from the corresponding section of the main report file.
For more details see section System Under Test Notes.
This section repeats set specific information from the corresponding section of the main report file.
For more details see section Notes.
This section gives an overview of the accumulated throughput for the different hosts belonging to this set.
For more details regarding the layout and the field names see section Set Instance Summary.
This column of the table names the different hosts, the aggregated throughput for all hosts at 100% target load and the average throughput per Host and per JVM at 100% target load.
The aggregated throughput for all JVM instances of a host at the 100% target load level.
The result chart graphically displays the throughput results reported in the host instance summary table in one diagram. The colored lines represent the actually measured throughput values of each host for the different test phases as indicated on the x-axis. The throughput values are shown on the y-axis; higher values are better. The thin horizontal line shows the average throughput per host.
This section describes the aggregated throughput for all JVM instances running on host 'N' measured during all test phases including the calibration intervals in a table and as a graph.
The layout and the information are similar to the "Aggregate Performance Data" section of the main report file.
For more details see section Aggregate Performance Data.
This report represents the next level of the SPECpower_ssj2008 full disclosure report.
It describes the configuration and performance details for a specific node or host.
There may exist several of these reports, one for each host.
This section shows the measured SPECpower_ssj2008 throughput results for this node and gives some general information regarding this test run. For more details see section Top bar of the main report file.
In contrast to the top bar in the main report file the headline does not show the overall metric but the aggregated performance at 100% target load for this node and the average throughput at 100% target load per JVM.
This section describes the aggregated throughput for all JVM instances of this host measured during all test phases including the calibration intervals in a table and as a graph.
For more details see section Aggregate Performance Data.
This section repeats host specific information from the corresponding section of the main report file.
For more details see section System Under Test.
This section repeats host specific information from the corresponding section of the main report file.
For more details see section Set: 'N'.
This section repeats host specific information from the corresponding section of the main report file.
For more details see section System Under Test Notes.
This section repeats host specific information from the corresponding section of the main report file.
For more details see section Notes.
This section gives an overview of the accumulated throughput for the different JVMs running on this host.
For more details regarding the layout and the field names see section Set Instance Summary.
This column of the table names the different JVM instances, the aggregated throughput for all JVM instances at 100% target load and the average throughput per JVM at 100% target load.
The throughput of a JVM instance at the 100% target load level.
The result chart graphically displays the throughput results reported in the JVM instance summary table in one diagram. The colored lines represent the actually measured throughput values of each JVM instance for the different test phases as indicated on the x-axis. The throughput values are shown on the y-axis; higher values are better. The thin horizontal line shows the average throughput per JVM.
This section describes the throughput of one JVM instance measured during all test phases including the calibration intervals in a table and as a graph.
The layout and the information are similar to the "Aggregate Performance Data" section of the main report file.
For more details see section Aggregate Performance Data.
This is the lowest level part of the SPECpower_ssj2008 full disclosure report. It repeats the configuration information and performance numbers from the previous level and adds more detailed throughput information for the different transaction types. There may exist several of these reports, one for each JVM instance.
This section shows the measured SPECpower_ssj2008 throughput results for this JVM instance and gives some general information regarding this test run. For more details see section Top bar of the main report file.
In contrast to the top bar in the main report file the headline does not show the overall metric but the performance at 100% target load for this JVM.
This section describes the aggregated throughput for this specific JVM instance measured during all test phases including the calibration intervals in a table and as a graph. For more details see section Aggregate Performance Data.
This section repeats host specific information from the corresponding section of the main report file.
For more details see section System Under Test.
This section repeats host specific information from the corresponding section of the main report file.
For more details see section Set: 'N'.
This section repeats host specific information from the corresponding section of the main report file.
For more details see section System Under Test Notes.
This section repeats host specific information from the corresponding section of the main report file.
For more details see section Notes.
This table gives more details about the transactions executed during the different test phases by this JVM instance.
For description see Target Load.
For description see Actual Load.
This column of the table names the different transaction types which are executed by the workload.
This column of the table displays the count of successfully finished transactions during the different test phases separately for each type.
The total amount of heap memory used by this JVM instance during the different test phases.
Product and service names mentioned herein may be the trademarks of their respective owners.
Copyright 2007-2012 Standard Performance Evaluation Corporation (SPEC)
All Rights Reserved