Standard Performance Evaluation Corporation
2007 SPEC Benchmark Workshop

The HPC Challenge Benchmark: A Candidate for Replacing Linpack in the TOP500?

The HPC Challenge suite of benchmarks examines the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the TOP500 list. The HPC Challenge suite is designed to provide benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and to provide a framework for including additional benchmarks. The HPC Challenge benchmarks are scalable, with the size of the data sets being a function of the largest HPL matrix for a system. The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. The suite is composed of several well-known computational kernels (STREAM, High Performance Linpack, matrix multiply -- DGEMM, matrix transpose, FFT, RandomAccess, and bandwidth/latency tests) that attempt to span the space of high and low spatial and temporal locality.
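To make the locality distinction concrete, here is a minimal illustrative sketch (not the official HPCC source code): a STREAM-style triad walks memory contiguously with high spatial locality, while a RandomAccess-style update stream touches a large table essentially at random. The update rule and function names here are assumptions for illustration only.

```c
#include <stddef.h>
#include <stdint.h>

/* High spatial locality: STREAM-style triad, contiguous streaming access. */
void triad(double *a, const double *b, const double *c, double s, size_t n)
{
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + s * c[i];
}

/* Low spatial/temporal locality: RandomAccess-style updates to a large table.
 * The index sequence and update rule below are illustrative, not the HPCC
 * definition. */
void random_update(uint64_t *table, size_t table_size, size_t n_updates)
{
    uint64_t x = 1;
    for (size_t i = 0; i < n_updates; i++) {
        x = x * 6364136223846793005ULL + 1442695040888963407ULL; /* simple LCG */
        table[x % table_size] ^= x;
    }
}
```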
Benchmarking Sparse Matrix-Vector Multiply in Five Minutes

We present a benchmark for evaluating the performance of sparse matrix-dense vector multiply (abbreviated as SpMV) on scalar uniprocessor machines. Though SpMV is an important kernel in scientific computation, there are currently no adequate benchmarks for measuring its performance across many platforms. Our benchmark serves as a reliable predictor of expected SpMV performance across many platforms, and takes no more than five minutes to obtain its results.
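The operation being measured, y = A*x with A sparse, can be sketched as follows assuming the common compressed sparse row (CSR) storage format; the benchmark's actual matrix generation and timing harness are described in the paper, so this is only an illustration of the kernel itself.

```c
#include <stddef.h>

/* y = A*x for a sparse matrix A stored in CSR format:
 *   val[]     - nonzero values
 *   col_idx[] - column index of each nonzero
 *   row_ptr[] - start of each row in val/col_idx (length nrows + 1) */
void spmv_csr(size_t nrows, const size_t *row_ptr, const size_t *col_idx,
              const double *val, const double *x, double *y)
{
    for (size_t i = 0; i < nrows; i++) {
        double sum = 0.0;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}
```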
SPEC MPI2007 Benchmarks for HPC Systems [slides (PDF)]
Benchmarking for Power and Performance

There has been a tremendous increase in focus on the power consumption and cooling of computer systems, from both the design and management perspectives. Managing power has significant implications for system performance and has drawn the attention of the computer architecture and systems research communities. Researchers rely on benchmarks to develop models of system behavior and to evaluate new ideas experimentally. But benchmarking for combined power and performance analysis has unique features that distinguish it from traditional performance benchmarking.
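One concrete way such a combined analysis differs from a pure performance run is that the harness must relate sampled power draw over the measurement interval to the work completed. The sketch below, with a hypothetical ops_per_watt helper and made-up power samples, illustrates a simple throughput-per-watt metric; it is not the methodology of any particular SPEC power benchmark.

```c
#include <stddef.h>

/* Combine a throughput measurement with sampled power readings into a
 * simple efficiency metric. A real harness would poll a power analyzer
 * at a fixed rate during the measurement interval; the inputs here are
 * hypothetical. */
double ops_per_watt(double ops_completed, double elapsed_s,
                    const double *power_samples_w, size_t n_samples)
{
    double avg_power_w = 0.0;
    for (size_t i = 0; i < n_samples; i++)
        avg_power_w += power_samples_w[i];
    avg_power_w /= (double)n_samples;

    double throughput = ops_completed / elapsed_s; /* operations per second */
    return throughput / avg_power_w;               /* (ops/s) per watt, i.e., ops per joule */
}
```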
Is CPU2006 the last of SPEC's CPU benchmarks? [slides]
Benchmark Design for Robust Profile-Directed Optimization

Profile-guided code transformations specialize program code according to the profile provided by execution on training data. Consequently, the performance of the code generated using this feedback is sensitive to the selection of training data. Used in this fashion, the principle behind profile-guided optimization techniques is the same as off-line learning, commonly used in the field of machine learning. However, scant use of proper validation techniques for profile-guided optimizations has appeared in the literature. Given the broad use of SPEC benchmarks in the computer architecture and optimizing compiler communities, SPEC is in a position to influence the proper evaluation and validation of profile-guided optimizations. Thus, we propose an evaluation methodology appropriate for profile-guided optimization based on cross-validation. Cross-validation is a methodology from machine learning that takes input sensitivity into account and provides a measure of the generalizability of results.
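As a rough sketch of how k-fold cross-validation would apply to profile-guided optimization: the available training inputs are partitioned into k folds, and each fold in turn is held out for performance measurement while the remaining inputs generate the profile that drives the optimizer. The round-robin fold assignment below is the generic machine-learning procedure, not the specific protocol proposed in the paper.

```c
#include <stdio.h>

/* Assign n training inputs to k folds round-robin; for each fold f, inputs
 * with fold[i] == f are held out for evaluation and the rest are used to
 * produce the training profile. */
int main(void)
{
    enum { N_INPUTS = 10, K = 5 };
    int fold[N_INPUTS];

    for (int i = 0; i < N_INPUTS; i++)
        fold[i] = i % K;

    for (int f = 0; f < K; f++) {
        printf("fold %d: profile on {", f);
        for (int i = 0; i < N_INPUTS; i++)
            if (fold[i] != f) printf(" %d", i);
        printf(" }, evaluate on {");
        for (int i = 0; i < N_INPUTS; i++)
            if (fold[i] == f) printf(" %d", i);
        printf(" }\n");
    }
    return 0;
}
```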
Designing a Workload Scenario for Benchmarking Message-Oriented Middleware

Message-oriented middleware (MOM) is increasingly adopted as an enabling technology for modern information-driven applications such as event-driven supply chain management, transport information monitoring, stock trading, and online auctions, to name just a few. There is strong interest in both the commercial and research domains in a standardized benchmark suite for evaluating the performance and scalability of MOM. With all major vendors adopting JMS (Java Message Service) as a standard interface to MOM servers, there is at last a means for creating a standardized workload for evaluating products in this space. This paper describes a novel application in the supply chain management domain that has been specifically designed as a representative workload scenario for evaluating the performance and scalability of MOM products. This scenario is used as the basis of SPEC's new SPECjms benchmark, which will be the world's first industry-standard benchmark for MOM.
SPECjbb2005 -- A Year in the Life of a Benchmark

Performance benchmarks have a limited lifetime of currency and relevance. This paper discusses the process used in updating SPECjbb2000 to SPECjbb2005 and presents some initial reflections on the implications and effects of the update now that it is in active use.
Measuring the Performance of Multithreaded Processors

Multithreaded architectures are becoming more and more popular; in fact, many processor vendors have already shipped processors with multithreading features. Despite this push toward multithreaded processors, there is still no clear procedure that defines how to measure the behavior of a multithreaded processor.
Characterization of Performance of SPEC CPU Benchmarks on Intel's Core Microarchitecture based processor

The newly released CPU2006 benchmarks are long-running and have a large data access footprint. In this paper we study the behavior of the CPU2006 benchmarks on Intel's newly released Woodcrest processor, which is based on the Core microarchitecture. The CPU2000 benchmarks, the predecessors of the CPU2006 benchmarks, are also characterized to see whether both suites stress the system in the same way. Specifically, we compare the ability of SPEC CPU2000 and CPU2006 to stress areas traditionally shown to impact CPI, such as branch prediction and the first- and second-level caches, as well as new features unique to the Woodcrest processor.
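As background on the metric being compared, CPI (cycles per instruction) is often approximated with a simple additive model over stall-causing events; the sketch below uses illustrative, made-up penalty values and is not the characterization methodology of the paper.

```c
/* Simplified additive CPI model: a base CPI plus stall contributions from
 * branch mispredictions and cache misses. Event rates are given per
 * thousand instructions (MPKI); penalty values are illustrative, not
 * measured Woodcrest numbers. */
double estimated_cpi(double base_cpi,
                     double branch_mpki, double branch_penalty_cycles,
                     double l1_mpki, double l1_penalty_cycles,
                     double l2_mpki, double l2_penalty_cycles)
{
    return base_cpi
         + (branch_mpki / 1000.0) * branch_penalty_cycles
         + (l1_mpki / 1000.0) * l1_penalty_cycles
         + (l2_mpki / 1000.0) * l2_penalty_cycles;
}
```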