SPECweb2009 Frequently Asked Questions
Last updated 2009-09-25
(To check for possible updates to this document, please see http://www.spec.org/web2009/docs/SPECweb2009_FAQ.html)
SPECweb2009 is a software benchmark product developed by the Standard Performance Evaluation Corporation (SPEC), a non-profit group of computer vendors, system integrators, universities, research organizations, publishers, and consultants. It is designed to measure a system's ability to act as a web server servicing static and dynamic page requests. The benchmark also measures the power consumption of the system under test (SUT) while running the web server at various load levels.
SPECweb2009 is the successor to SPECweb2005. Rather than offering a single benchmark workload that attempts to approximate the breadth of web server workload characteristics found today, SPECweb2009 uses a four-workload design: banking, ecommerce, support, and power. The workloads in SPECweb2009 differ from those in SPECweb2005 by the addition of the Power workload and the inclusion of a power measurement methodology. The benchmark can measure both SSL and non-SSL request/response performance, and continues the tradition of giving web users the most objective and representative benchmark for measuring web server performance as well as power. SPECweb2009 disclosures are governed by an extensive set of run rules to ensure fairness of results.
The benchmark kit consists of code for the prime client, code for the clients that run the benchmark, the wafgen code that generates data sets on the server, the back-end simulator (BESIM) code, and the power and temperature daemon that collects data from the power analyzer and temperature sensor.
The benchmark clients run the application program that sends HTTP requests to the server and receives HTTP responses from the server. For portability, this application program and the prime client program have been written in Java. Note that as a logical component, one or more load-generating clients may exist on a single physical system.
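For illustration only (this is not the benchmark's actual code), the core of a load-generating client's request/response exchange might look like the following Java sketch; the host name and URL path are hypothetical:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Illustrative sketch of a single HTTP request/response exchange of the
    // kind a load-generating client performs; the URL is hypothetical.
    public class SimpleHttpClient {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://sut.example.com/ecommerce/index.jsp");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Connection", "keep-alive"); // persistent HTTP/1.1 connection

            int status = conn.getResponseCode();
            InputStream in = conn.getInputStream();
            byte[] buf = new byte[8192];
            long bytes = 0;
            for (int n; (n = in.read(buf)) != -1; ) {
                bytes += n; // a real client would also validate the response body
            }
            in.close();
            System.out.println("HTTP " + status + ", " + bytes + " bytes");
        }
    }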
The benchmark does not provide any of the web server software; that is left up to the tester. Any web server software that supports HTTP 1.1 and SSL (HTTPS) can be used, though variations in implementations may lead to differences in observed performance. Also, to comply with the Run Rules, any software used must either be commercially supported or adhere to the requirements presented in the Run Rules document for community-supported software.
To make a run of the benchmark, the tester must first set up one or more networks connecting a number of the driving "clients" to the server under test. The benchmark code is distributed to each of the drivers and the necessary fileset is created for the server. Then a test control file is configured for the specific test conditions and the benchmark is invoked with that control file.
SPECweb2009 is a generalized test, but it stresses the most basic functions of a web server in a standardized manner, so that cross-comparisons are meaningful across similar test configurations.
Each of the peak-performance workloads (SPECweb2009_Banking, SPECweb2009_Ecommerce, and SPECweb2009_Support) measures the maximum number of simultaneous user sessions that a web server can support while still meeting specific throughput and error rate requirements. The TCP connections for each user session are made and sustained at a specified maximum bit rate, with a maximum segment size, intended to model more realistically the conditions that will be seen on the Internet during the lifetime of this benchmark. Power (in watts) used while running each of these workloads at peak performance is measured as well.
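To picture the bit-rate cap, the sketch below (illustrative only; the 100,000 bits/s cap is an assumed example value, not the benchmark's actual setting) paces reads so that a connection never exceeds its bit-rate budget:

    // Illustrative only: pace reads so a connection stays at or below a
    // bit-rate cap. The 100,000 bits/s cap is an assumed example value.
    public class BitRateThrottle {
        static final long MAX_BITS_PER_SEC = 100000L;

        // Sleep long enough that the bytes just read average out to the cap.
        static void pace(int bytesJustRead) throws InterruptedException {
            long bits = bytesJustRead * 8L;
            Thread.sleep((bits * 1000) / MAX_BITS_PER_SEC);
        }

        public static void main(String[] args) throws InterruptedException {
            long start = System.nanoTime();
            for (int i = 0; i < 10; i++) {
                pace(8192); // pretend each iteration read an 8 KB chunk
            }
            double secs = (System.nanoTime() - start) / 1e9;
            System.out.printf("10 x 8 KB paced over %.2f s (~%.0f bits/s)%n",
                    secs, (10 * 8192 * 8) / secs);
        }
    }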
SPECweb2009_Power is a workload based entirely on SPECweb2009_Ecommerce. It is run at six load levels, starting with the highest level, which corresponds to the maximum number of connections used in SPECweb2009_Ecommerce, and ramping down to idle. The workload measures the performance-to-power ratio: the sum of simultaneous user sessions across the load levels divided by the sum of watts used.
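In other words, if the six load levels yield session counts s1..s6 and average power readings w1..w6, the metric is (s1+...+s6)/(w1+...+w6). A worked example with made-up numbers:

    // Worked example with made-up numbers: the SPECweb2009_Power metric is
    // the sum of simultaneous user sessions across the load levels divided
    // by the sum of average watts measured at those levels (down to idle).
    public class PowerMetricExample {
        public static void main(String[] args) {
            int[]    sessions = {3000, 2400, 1800, 1200, 600, 0};   // hypothetical
            double[] watts    = { 250,  220,  190,  160, 130, 100}; // hypothetical
            long sessionSum = 0;
            double wattSum = 0;
            for (int i = 0; i < sessions.length; i++) {
                sessionSum += sessions[i];
                wattSum += watts[i];
            }
            System.out.printf("Performance/power = %d / %.0f = %.2f sessions per watt%n",
                    sessionSum, wattSum, sessionSum / wattSum);
        }
    }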
SPECweb2009_peak is the geometric mean of the three separate submetrics: SPECweb2009_Banking, SPECweb2009_Ecommerce, and SPECweb2009_Support. The individual submetric scores indicate the total number of simultaneous user sessions the server can support. For more information, see section 3.1 of the Run and Reporting Rules.
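As a worked example with hypothetical submetric scores, the geometric mean of three values is the cube root of their product:

    // Worked example with hypothetical scores: SPECweb2009_peak is the
    // geometric mean (cube root of the product) of the three submetrics.
    public class PeakMetricExample {
        public static void main(String[] args) {
            double banking = 4000, ecommerce = 3000, support = 2000; // hypothetical
            double peak = Math.cbrt(banking * ecommerce * support);
            System.out.printf("SPECweb2009_peak = cbrt(%.0f * %.0f * %.0f) = %.1f%n",
                    banking, ecommerce, support, peak);
        }
    }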
SPECweb2009's improvements over SPECweb2005 are the addition of the SPECweb2009_Power workload and the incorporation of a power measurement methodology.
SPECweb2009 retains the three workloads from SPECweb2005: Banking, Ecommerce, and Support. These were based upon statistics analyzed from web server logs and observed client-side behavior. For more information on each of these workloads, please see the SPECweb2009 design documents. The SPECweb2009_Power workload is based on the SPECweb2009_Ecommerce workload.
Yes, but SPEC will accept and review SPECweb2005 results only until June 1, 2010. After that time, only SPECweb2009 results will be accepted. Any change to this date will be posted on SPEC's web site.
No. Even though the SPECweb2009 workloads are based on SPECweb2005, the inclusion of the power component changes the way the benchmark is run. Therefore, public comparisons of SPECweb2005 (or other SPECweb) results to SPECweb2009 results would be considered a violation of the run and reporting rules.
Initial SPECweb2009 results are available on SPEC's web site. Subsequent results will be posted on an ongoing basis following each two-week review cycle: results submitted by the two-week deadline are reviewed by web committee members for conformance to the run rules, and if accepted at the end of that period are then publicly released.
SPECweb2009 is a standardized test. The SPEC membership - leading vendors, systems integrators, universities, research organizations, publishers, and consultants - has agreed on a benchmark suite with one standardized implementation and four workloads.
Before any SPECweb2009 results are published, SPEC requires that the system under test and the methodology used adhere to agreed-upon standards (the run rules). All results available through SPEC include full disclosure information that reveals exactly what configurations have been used to obtain a particular result.
SPECweb2009 results published on SPEC's web site provide standardized, comparable results for those who cannot run their own web server performance tests.
SPECweb2009 is a standardized benchmark, which means that it is an abstraction of the real world. For example, SPECweb2009 does not attempt to model latency associated with obtaining data across a wide-area network (WAN) such as the Internet. This kind of behavior is difficult to simulate at this stage, since it requires elaborate hardware and software.
SPECweb2009 was not designed as a capacity planning tool for any specific application. SPECweb2009 provides information on how web servers handle the four workloads that the benchmark uses. The workloads exercise several key components of a good secure web server, including LAN performance, processing power, memory bandwidth, and power usage, to name a few.
SPECweb2009 can be purchased on CD-ROM from SPEC at $1,600 for new licensees and $400 for eligible non-profit organizations. To order, contact SPEC's administrative office.
The benchmark comes with the code necessary to run the driver system(s), the server-side file set generation tools, and dynamic content implementations. It is up to the tester to install and configure the web server and testbed.
The benchmark does not include web server software, but any web server that is HTTP/1.1 compliant and supports SSLv3 can be used. In addition to licensed server software, open source software can be used to run SPECweb2009. See the run rules for more details.
The SPECweb2009 CD-ROM contains the benchmark kit described above: the prime client and client code, the wafgen file-set generation tools, the back-end simulator code, and the power and temperature daemon.
First, of course, you'll need properly running web server software on your server. On at least one client system, you'll need a JVM at version 1.6.0 or above for running the prime client and client programs.
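A quick way to confirm that a client's JVM meets the minimum is to run "java -version", or programmatically, as in this small sketch:

    // Prints the JVM version; the lexicographic comparison is adequate for
    // the 1.x version strings of this era.
    public class JvmCheck {
        public static void main(String[] args) {
            String v = System.getProperty("java.version");
            boolean ok = v.compareTo("1.6.0") >= 0;
            System.out.println("JVM " + v + (ok ? " meets" : " does not meet")
                    + " the 1.6.0 minimum");
        }
    }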
Some issues have been raised about the benchmark since it was released; SPEC maintains a SPECweb2009 issues repository. If your issue is not among the known issues, bring it to the attention of SPEC.
Only SPECweb2009 licensees can submit results. SPEC member companies submit results free of charge. Non-members may submit results for an additional fee. All results are subject to a two-week review by web committee members. Non-member submissions are also subject to a preliminary review. If they pass preliminary review, they may be submitted for the standard member review, and barring any issues will be published by SPEC upon payment of a fee. First-time submitters should contact SPEC's administrative office.
SPECweb2009 submissions must include the raw output file; during the review process, other information may be requested by the subcommittee.
The current version of the run rules can be found on SPEC's web site at http://www.spec.org/osg/web2009.
The SPECweb2009 Design Document contains design information on the benchmark and workloads. The Run and Reporting Rules and the User Guide with instructions for installing and running the benchmark are also available. See: http://www.spec.org/osg/web2009 for the available information on SPECweb2009.
By default, session state is maintained in the memory space of the current worker process, so each successive request that changes session data simply updates the in-memory data of that worker process. If you have two worker processes, subsequent requests may be routed to either worker process for that web site or application, with the net result that you lose previous session state data held in the other worker process's memory. If you need to run more than one worker process, you will need to change the session state mode settings using the session state feature for the ecommerce application; see the built-in help for this feature, which describes your choices.
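The failure mode can be illustrated with plain Java (purely conceptual; the two maps stand in for the private memory of two worker processes):

    import java.util.HashMap;
    import java.util.Map;

    // Conceptual illustration: each "worker process" keeps session state in
    // its own memory, so a request routed to the other worker sees nothing.
    public class SessionStateLoss {
        public static void main(String[] args) {
            Map<String, String> workerA = new HashMap<String, String>();
            Map<String, String> workerB = new HashMap<String, String>();

            workerA.put("session-42", "cart=3 items"); // first request hits worker A
            // A later request for the same session lands on worker B:
            System.out.println("Worker B sees: " + workerB.get("session-42")); // null
            // The fix is a session state mode that stores state in a shared,
            // out-of-process location that all workers consult.
        }
    }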
This issue is related to the way the Windows OS authorizes a user. For Windows-based systems, the WEB_SERVER name must be the correct machine name for the system under test; otherwise, authorization will fail. This behavior is the same regardless of script type. The correct fix is to set the WEB_SERVER parameter in the test.config file equal to the COMPUTERNAME of the system under test.
Copyright © 2009 Standard Performance Evaluation Corporation. All rights reserved.