From Computer Architecture News, Vol. 35, No. 1, March 2007
ACM Special Interest Group on Computer Architecture

SPEC CPU2006 Analysis Papers: Guest Editor's Introduction

John L. Henning, Secretary, SPEC CPU Subcommittee
john dot henning at acm dot org

(This introduction is also available as pdf.)
During the development of the new benchmark suite CPU2006, SPEC analyzed benchmark candidates for many technical attributes, including time profiles, language standard compliance, I/O activity, and system resource usage. Many people contributed to the analysis, as shown in the credits at www.spec.org/cpu2006/docs/credits.html. This issue of Computer Architecture News presents a set of articles flowing from that analysis effort.
Three papers discuss the benchmarks that have been selected for the suite:

- "SPEC CPU Suite Growth: An Historical Perspective" by John L. Henning examines how the SPEC CPU suites have grown over the years, and discusses some of the reasons for that growth.
- Using a set of hardware performance counters, Aashish Phansalkar, Ajay Joshi, and Lizy K. John consider similarities among the benchmarks in their article "Subsetting the SPEC CPU2006 Benchmark Suite". The intent is to assist users of performance simulators who, because of time constraints, prefer to simulate a representative subset of the benchmarks. Note: although SPEC is aware that users of performance simulators may prefer to study subsets, SPEC does not endorse any subset as representative of the overall metrics.
- The CPU2006 suite includes far more C++ content than previous CPU suites. In his article "C++ Benchmarks in SPEC CPU2006", Michael Wong reviews the C++ language and standards issues that affected the benchmark selection and porting process.
Three papers present information about the new suite's memory behavior:
- "SPEC CPU2006 Memory Footprint" by John L. Henning graphs virtual and physical memory consumption over time, using the traditional Unix metrics of rss (resident set size) and vsz (virtual size).
- In "CPU2006 Working Set Size", Darryl Gove takes a closer look at memory use by using the Shade instruction analyzer to track load and store instructions. The article provides two metrics that track actively used memory, a much smaller amount than the traditional rss or vsz, and compares CPU2000 with CPU2006.
- Because SPEC CPU2006 includes codes with a larger memory footprint, there is substantial runtime variation depending on the selection of memory page sizes. Wendy Korn and Moon S. Chang show runtime effects from 4KB, 64KB, and 16MB pages in their article "SPEC CPU2006 Sensitivity to Memory Page Sizes".
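The rss and vsz metrics discussed in the memory-footprint paper are visible on most Unix systems through standard ps. As a minimal sketch (the polling loop, one-second interval, and function name are illustrative, not the paper's actual instrumentation), a benchmark's memory consumption could be sampled like this:

```shell
# Sample rss (resident set size) and vsz (virtual size), both in
# kilobytes, once per second until the target process exits.
# Illustrative only; not the method used in the paper.
sample_memory() {
  pid=$1
  while kill -0 "$pid" 2>/dev/null; do
    ps -o rss=,vsz= -p "$pid"   # "=" suppresses the column headers
    sleep 1
  done
}
```

Plotting the two columns against time yields footprint curves like those in the paper; rss ordinarily stays at or below vsz, since resident pages are a subset of the virtual address space.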
Four papers examine the technical behavior of the benchmarks in light of SPEC's goals for them:
- The starting point for analyzing program performance is a profile of where the time goes. Reinhold P. Weicker and John L. Henning provide that starting point in "Subroutine Profiling Results for the CPU2006 Benchmarks". They discuss SPEC's goals for usage of library routines, and how those goals were adjusted as the subcommittee learned more about the application areas represented in CPU2006.
- For the CPU suites, SPEC seeks the compute-intensive portion of applications, not the I/O-intensive portion. By the time SPEC completed its work, were the benchmarks compute-bound, with little I/O? Dong Ye, Joydeep Ray, and David Kaeli answer that question in their "Characterization of File I/O Activity for SPEC CPU2006".
- Hardware performance counters can provide insight into benchmark candidates. John L. Henning discusses SPEC's use of counters in the paper "Performance Counters and Development of SPEC CPU2006".
- SPEC allows compilation with Feedback Directed Optimization (FDO), also known as Profile-Based Optimization. (For CPU2006, FDO is permitted only in peak; CPU2000 permitted it in both base and peak.) Sometimes it is hard to find a useful FDO training data set. In their article "Evaluating the correspondence between training and reference workloads in SPEC CPU2006", Darryl Gove and Lawrence Spracklen examine whether training workloads visit the same instructions as reference workloads, and whether branches behave similarly.
Finally, SPEC's toolset is central to providing a controlled environment for running the benchmarks, with documented options and clear reporting. Cloyce D. Spradling provides an overview of the toolset, including how to use its hooks for analysis tools, in his article "SPEC CPU2006 Benchmark Tools".