SPECweb99 Release 1.02 User's Guide
1 Introduction
2 Installing SPECweb99
2.1 Pre-Installation Checklist
2.2 Client Setup
2.2.1 Installing and running the UNIX client
2.2.2 Compiling UNIX client software (if necessary)
2.2.3 Windows NT Client Setup
2.2.4 Compiling NT client software (if necessary)
3.1.1 Changeable benchmark parameters
3.1.2 Configuration description parameters
3.1.3 Benchmark Constants
4 Understanding the benchmark screen display
4.1 Test setup summary
4.2 Operational validation
4.3 Test Warnings
4.4 Results from each iteration
4.5 Screen display of ASCII output file
5 SPECweb99 Result Pages and Raw File
SPECweb99 is a client/server benchmark for measuring the maximum number of simultaneous connections that a web server is able to support. The benchmark load is presented by client software on client machines networked to server machines running HTTP server software.
This document is a practical guide for setting up and running a SPECweb99 test. This user's guide covers some, but not all of the rules and restrictions pertaining to SPECweb99. Before running a test, you should read the complete "Run and Reporting Rules" contained in the kit. For an overview of the benchmark architecture, see the SPECweb99 Whitepaper also contained in the kit.
Here is a checklist of steps to complete before installing the SPECweb99 benchmark software.
Files | Size formula (approximate) |
---|---|
file_set | (25 + ( simultaneous_connections * .66)) * 4.88 Mbytes |
post log | simultaneous_connections * 0.06 Mbytes |
HTTP server log | consult HTTP server documentation |
The equations given are based on a speed of 400K bits/second and an average request size of 122K bits (based on the file sizes and the Zipf distribution used to select files). Using these values, a single connection can process a maximum of about 3.3 HTTP operations per second.
The post log formula assumes the default run time of 20 minutes warmup, 20 minutes run-time and 5 minutes rampdown. The post log gets zeroed out between iterations. When calculating space requirements for the HTTP server logs, remember that the server logs do not get truncated between the 3 iterations.
Note: Logs must be written to stable storage for a valid SPECweb99 benchmark run. Stable storage refers to non-volatile storage media. In the case of solid state disks, they should have battery back-up.
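As a rough worked example (the target of 400 simultaneous connections is only an illustration), the file set would need about (25 + 400 * 0.66) * 4.88 ≈ 1,410 Mbytes, and the post log about 400 * 0.06 = 24 Mbytes, in addition to whatever space your HTTP server's own logs require.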
This section describes the steps necessary to set up SPECweb99 clients. Detailed instructions are provided for the two major operating system types: UNIX and Windows NT. The setup instructions for most of the hardware platforms are very similar to the generic setup instructions for the two operating system versions. The SPECweb99 page of the SPEC website may contain additional instructions for particular platforms.
Note: You may use a mix of NT and UNIX based clients in your setup. Furthermore, a UNIX server can be tested with either NT or UNIX clients or a mix. An NT server can also be tested with a mix of either type of clients.
Designate one of your client machines as the prime client machine. The prime client will control the entire test and the other clients, so it needs to be able to establish network connections to all the other clients and to the server. The prime client can be used just as the test controller or can serve as both controller and client. The following instructions assume it does both functions.
On all clients, do the following:
java setup
After accepting the license agreement, setup will ask you to "Select your architecture" followed by a list containing those operating systems and architectures for which the CD contains pre-compiled versions of the SPECweb99 software. The list has 2 additional choices: none, for installing the client sources, and toolsource, for selecting the SPEC tools sources.
If your architecture is not on the list, choose none on regular clients, or toolsource on the prime client. Follow the instructions for compiling UNIX client software before continuing. Once you have installed the pre-built software or built it from sources, you will have the following compiled executables on your client:
Client executable
Server software that is built with the client executables
Prime-client tools
Note: For a result to be valid the connections between a SPECweb99 load generating machine and the System Under Test (SUT) must not use a TCP MSS greater than 1460 bytes. An MTU of 1500 bytes is the standard packet-size for IP over Ethernet.
Usage: ./client [options]
    -h #    this help screen
    -D #    debugging level
    -p #    control port to listen on
    -i      used to indicate client is being run out of inetd
    -t #    daemon idle timeout (set larger than anticipated run)
    -c      start a control session on stdin
    -d      start in background as a daemon
    -m #    size of shared memory segment (SYSV_IPC version only)
    -w dir  work directory
    -s #    Deck size
    -S #    Deck cache size
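For example, the client might be started in the background on each client machine with a command such as the following (the control port and work directory shown are placeholders; use values appropriate to your setup):

./client -d -p 2001 -w /tmp/specweb99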
The SPECweb99 kit includes sources for all the tools and software needed to run SPECweb99. It also includes prebuilt executables for many platforms. Occasionally, however, you may need to build or rebuild the client tools or SPECweb99 client executable. You will need your own C compiler to build the SPECweb99 tools and executables.
To build the tools, such as specperl, install the sources option from the SPECweb99 CD, then follow the directions in the README file in the <installation-directory>/tools/src directory.
To build the client executables, you must first run configure to build the makefile, then make to build the executables.
SPECweb99 has been written to run on multiple platforms. You need to use the configure script to create a makefile for your specific platform. For the most part, the configure script is smart enough to make the right decisions, but there are cases where it cannot, or where you may want to disable a feature used in the code and choose an alternative method instead.
The following features are available in SPECweb99 and can be used as arguments to the configure script to enable or disable the feature. Enabling or disabling these features affects the way the client program is built.
Type 'configure --help' to see the generic options supported by configure.
Note: It should not be assumed that all these options are available or not available on a particular platform. Please check your product documentation before using these features.
Compile the benchmark software by typing the following command.
make all
This will build the following executables:
./client
./Cadgen99/cadgen99
./Upfgen99/upfgen99
./Wafgen99/wafgen99
On all clients, do the following steps:
The SPECweb99 kit includes sources for all the tools and software needed to run SPECweb99. It also includes prebuilt executables for many platforms. Occasionally, however, you may need to build or rebuild the client tools or SPECweb99 client executable. You will need Microsoft Visual C++ to build the SPECweb99 executables and tools.
Note: in the following directions, %SPEC% refers to the installation directory.
To build the tools, such as specperl, get the tools sources from the SPECweb99 CD (see alternative installation methods, above) then follow the directions in the README file in the %SPEC%\tools\bin\src directory.
To rebuild the client software, you must install 'client' or 'prime client' from the CD. Both contain all the sources for building. The installation directory will hereafter be referred to as %SPEC%; this is one of the environment variables you get after running shrc.bat.
The kit contains 4 separate executables that can be rebuilt. The 4 workspace files are in:
%SPEC%\Win32\client\client.dsw
%SPEC%\Win32\Cadgen99\Cadgen99.dsw
%SPEC%\Win32\Upfgen99\Upfgen99.dsw
%SPEC%\Win32\Wafgen99\Wafgen99.dsw
Note: The last 3 files, Cadgen99, Upfgen99, and Wafgen99, are utilities that get put on the server machine. It is highly unlikely that you will ever need to rebuild these.
To rebuild, open the workspace file, then from the "Build" menu, select either Build or Rebuild All. Developer Studio will place the resulting executable in a subdirectory called "Release". Make sure you move the new executable to the correct place before proceeding. The kit is shipped with the executables in the following places:
%SPEC%\client.exe
%SPEC%\Cadgen99\cadgen99.exe
%SPEC%\Upfgen99\upfgen99.exe
%SPEC%\Wafgen99\wafgen99.exe
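For example, after rebuilding the client workspace you would copy the new binary from its Release subdirectory back to the expected location; assuming the default workspace layout, the command would look something like:

copy %SPEC%\Win32\client\Release\client.exe %SPEC%\client.exe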
On the server the setup involves installing the HTTP daemon, creating the workload file set that will be accessed by the benchmark, and setting up the binaries for dynamic content.
The following instructions tell you how to run using the provided sample Perl implementation for the dynamic content. You may use your own API implementation in place of the sample Perl implementation. Please refer to the vendor's instructions for installation of the API binaries. The API implementation needs to conform to the specification in the "SPECweb99 Run and Reporting Rules". The API implementation must be submitted with your results and reviewed by the SPECweb99 committee.
Wafgen99/wafgen99[.exe]
Cadgen99/cadgen99[.exe]
Upfgen99/upfgen99[.exe]
The wafgen99 utility builds the file_set. The other two utilities are called by the dynamic code during a Reset.
Usage: wafgen99 [options] [ [startload] targetload ]
    -C dir    Change to 'dir' before creating files
    -h        this usage message
    -n conns  Set targetload for # simultaneous_connections
    -v        Increase verbosity (currently 2 levels)
    -s        If directory exists skip it
    -t        Test Mode, no random characters in files
    -r        If directory exists remove it
    -V        validate files, -V again to validate all
    -f load   Start directory load (# simultaneous_connections)
    -F num    Start directory number
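For example, a file set for a target of 400 simultaneous connections could be generated under a hypothetical document-root directory with:

./Wafgen99/wafgen99 -v -n 400 -C /export/www

The target load and directory here are placeholders; choose values that match your planned load and your HTTP server's configuration.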
This will create a file_set directory tree. The SPECweb96 wafgen utility and the file set it creates are not usable for the SPECweb99 benchmark.
You will need Perl 5.004 or above in order to run this script. You may install your own Perl, or use specperl that comes with the SPECweb99 kit. See the client setup notes for your operating system for information on installing the spec tools.
UNIX only: Edit the top line of the file to point to perl or specperl on your system.
Connect to the server and try a static GET:

telnet <servername> 80
GET /file_set/dir00000/class0_0
Connect to the server and try a dynamic GET:

telnet <servername> 80
GET /cgi_bin/specweb99-cgi.pl?/file_set/dir00000/class0_0
Note: This is not the complete set of operations in the test. The manager will execute one of each kind of transaction prior to launching a test. The manager will halt if it finds any errors.
SPECweb99 is controlled by the manager script on the prime client. It reads a resource configuration (rc) input file. To run SPECweb99, customize the rc file to reflect your benchmark setup, then run the test from the prime client with the manager script.
Modify the rc file on the prime client. The rc file in the SPECweb99 kit is a template containing every variable needed to describe a test. You may name the rc file anything you wish (and create as many as you like). The rc file has 3 main sections: changeable benchmark parameters, configuration description parameters, and benchmark constants.
For example, if file_set is in a directory named "specweb99" then you would use "DYNAMIC_ROOT=http://server1/cgi-bin/specweb99-cgi.pl?/specweb99". You may also use separate URLs for each class of DYNAMIC operation supported: POST, standard dynamic GET, custom ad rotation (CAD) GET, and Commands (fetch, reset). You may set any or all of the 4 separate variables; the unset variables will default to DYNAMIC_ROOT.
The Configuration description section should contain a full description of the testbed. It should have enough information to repeat the test results, including all necessary hardware, software and tuning parameters. All parameters with non-default values must be reported in the Notes section of the description.
The configuration description has 8 categories.
Each category contains variables for describing it. For example, Hardware contains variables for CPUs, caches, controllers, etc. If a variable doesn't apply, leave it blank. If no variable exists for a part you need to describe, add some text to the notes section. The notes sections can contain multiple lines by adding a 3-digit suffix to the variable. For example, the notes_http variable can become notes_http001, notes_http002, etc.
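For example, a multi-line HTTP note might look like the following in the rc file (the note text itself is purely illustrative):

notes_http001=Server access logging enabled, written to a dedicated log disk
notes_http002=All other HTTP server settings left at their defaults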
The rc file included in the kit contains examples for filling out these fields.
Any changes to the benchmark constants will invalidate the run. However, changing these constants can be useful for tuning the system; an example is shown after the parameter list below. Note: the -C switch to manager will reset these constants for a test run, so you needn't re-edit the rc file to get a legal run.
Test timing parameters
Mix parameters - control the percentage of requests of each type. Set to values between 0 and 1.
Request type | Percentage of mix |
---|---|
Static GET | 70.00% |
Standard Dynamic GET | 12.45% |
Dynamic GET with Custom Ad Rotation | 12.60% |
Dynamic POST | 4.80% |
Dynamic GET calling CGI code | 0.15% |
Other parameters
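For instance, a short debugging run might override the timing constants in the rc file as follows (these values are only an illustration; any such change makes the run invalid unless manager is later invoked with -C, which restores the compliant values):

RUN_TIME=30
WARMUP_TIME=30
RAMPUP_TIME=30
RAMPDOWN_TIME=10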
Before running the benchmark, check that the client, server file set, and dynamic content setup steps described above have all been completed.
Before running the benchmark, you must set up the SPEC environment by running the shrc file on the prime client. You only need to do this once per login session. On NT, run shrc.bat; on UNIX, source the shrc file in your shell.
From the session you set up with shrc, start the benchmark by executing the manager script with specperl:
specperl manager [options]
manager tests all the operations on the server, tells all the clients what to run, tells them to run it, waits until the clients finish, then collects the results and creates the results summary in various output formats.
Usage: specperl manager [options] [option=val...] [file...]
    -h        this help message
    -v #      verbosity level
    -o types  Produce output in the specified types
    -I        Do not abort on certain classes of errors
    -R        files are reformatted and not used as config files
    -l        Turn off client logging in log-client.nnn file
    -C        set all parameters need for a compliant run
    -D        turn on details in the ASCII output
The default input rc file is rc. You may specify your own rc file or files on the command line. If multiple rc files are used, they are run one after another, producing separate output files for each.
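For example, a compliant run using a custom rc file (the file name is just an illustration) could be started with:

specperl manager -C rc.myserver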
The output types available are: asc, html, ps, pdf, screen and raw. Results submitted to SPEC appear on the SPEC website in the asc, html, ps and pdf formats. By default, the manager script produces all output formats and puts them in the results directory. In addition, the test details that appear on the screen are written to a res.<nnn> file in the results directory. The 'detailed ASCII' output generated by the -D switch contains these same details; the 'detailed ASCII' format is for review only, not publication.
The manager script outputs data to the screen as it runs. This output also appears in the results/res.nnn file. The manager script writes the same data summarized on a per-client basis into the results/log-client.nnn file. The following is an annotated res file.
The test preamble section gives information about the whole run, such as the log files opened and the list of participating clients.
The following example shows a problem with pdf and ps outputs.
Opened logfile results/res.248
WARNING: Unable to find PSPDF.pm -- no PDF or PS output will be made
Opened client logfile results/log-client.248
CLIENT LIST:
  Client 1: name=localhost, port=, speed=1, host=
  Client 2: name=csg161, port=, speed=1, host=
Next, manager validates all the types of operations; each must succeed before the run proceeds.
If any of these tests fail, manager will abort and let you correct the errors. For debugging purposes, you can set a failing operation type to 0% in the rc file to skip both the test and issuing of this operation. For example, if you haven't yet implemented the dynamic POST, set 'DYNAMIC_POST=0' in the rc file to run without it. In the following example, the CGI GET call fails.
Validating all paths for rc.short
Checking Reset command with 'http://bbb116/specweb99/isapi/newz-xmit.so?command/Reset&maxload=100&pttime=100&maxthread=100&exp=1,100&urlroot=http://bbb116/specweb99'
Checking static GET with 'http://bbb116/specweb99/file_set/dir00089/class3_6'
Checking dynamic GET with 'http://bbb116/specweb99/isapi/newz-xmit.so?/specweb99/file_set/dir00089/class1_6'
Checking dynamic CGI GET with 'http://bbb116/specweb99/wrongcgi/specweb98-newcgi?/specweb99/file_set/dir00089/class1_6'
**ERROR**: Can't fetch file http://bbb116/specweb99/wrongcgi/specweb98-newcgi?/specweb99/file_set/dir00089/class1_6 with a dynamic CGI GET request
  Is DYNAMIC_ROOT (or DYN_xxx_SCRIPT) configured correctly?
  Error: 404 Not found
Checking dynamic custom ad (CAD) GET with 'http://bbb116/specweb99/isapi/newz-xmit.so?/specweb99/file_set/dir00089/class1_6'
Checking dynamic POST with 'http://bbb116/specweb99/isapi/newz-xmit.so?/specweb99/file_set/dir00089/class1_6'
****** FATAL ERRORS FOUND *******
The manager script warns you if you have reset any of the benchmark constants. For example:
WARNING: RUN WILL BE INVALID
Invalid test: RUN_TIME is 30, must be 1200
Invalid test: RAMPUP_TIME is 30, must be 300
Invalid test: RAMPDOWN_TIME is 10, must be 300
Invalid test: WARMUP_TIME is 30, must be greater than 1200
The results from each iteration contain the following sections:
Iteration 1 of 3 with load of 50 connections          Thu May 27 11:24:05 1999
=============================================================================
Number of clients        = 2
Simultaneous Connections = 50
Warm-up time (seconds)   = 30
Run time (seconds)       = 30
The Error column in the OP COUNT portion shows operations where the client did not get the expected response; for example, the server returned an incomplete page. The UNIX client records all errors in the "work/log.cs<xx>.gen<yy>" files in the work subdirectory, where <xx> represents a unique ascending sequence number and <yy> represents the thread number. To find the correct logs, sort the files by date. The NT client records errors in the Application Log; these are viewable by using the Windows NT Event Viewer application and selecting the Application Log from the Log menu. An error rate greater than 1% in any class will invalidate an iteration.
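On a UNIX client, listing those log files by modification time is a quick way to find the ones from the most recent run, for example:

ls -lt work/log.cs*.gen*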
Note: The error count column only records errors during the actual measurement period of the iteration. Because the client logs all errors, the counts will generally not match.
-------------------------+-----------------+--------------------------+------
         M I X           | O P  C O U N T  | MEAN RESPONSE TIME Ms/Op | Total
 Class   Target  Actual  | Success  Error  |     Mean   StdDev 95% CI | Time
-------------------------+-----------------+--------------------------+------
 class0   0.350   0.350  |    1738      3  |    16.01     6.68   0.17 |  1.9%
 class1   0.500   0.502  |    2490      0  |   105.33    36.22   0.34 | 17.5%
 class2   0.140   0.139  |     688      0  |   996.55   384.48   2.10 | 45.8%
 class3   0.010   0.010  |      48     10  | 10843.02  3456.81  23.82 | 34.8%
-------------------------+-----------------+--------------------------+------
                         |    4964      0  |   301.41  1150.59   1.35 |
-------+--------------------------------+------------------+--------------------
       |Individual OP Bit Rate(bits/sec)|   Aggregate Bit  |    Weighted ABR
 Class |      Mean    StdDev    95% CI  |  Rate (bits/sec) |        (%)
-------+--------------------------------+------------------+--------------------
class0 | 393977.03  28417.23     11.35  |    386470.65     |    7186.22(1.80)
class1 | 399464.10   6925.93      4.68  |    399403.09     |   70015.32(17.52)
class2 | 399817.66   2650.34      5.51  |    399842.13     |  183226.37(45.86)
class3 | 399991.49     38.46      2.51  |    399991.52     |  139140.58(34.82)
-------+--------------------------------+------------------+--------------------
       |                                |                  |  399568.50(100.00)
             |            |  total  | ops/sec/|
             | SPECweb99  | ops/sec | loadgen | msec/op
-------------+------------+---------+---------+---------
RESULTS      |     37     |   165   |   3.31  |  301.4
             |           |           |           | conforming*
             | requested |   valid   |  invalid  |  (conf %)
-------------+-----------+-----------+-----------+----------------
SIMULTANEOUS |    50     |    37     |    13     |   37 ( 74.00%)
CONNECTIONS  |           |           |           |
* a conforming connection is one that operates faster than 320K bit/sec
             |  mean   |   min   |  max
-------------+---------+---------+--------
AGGREGATE    |         |         |
BITRATES     | 399568  | 397823  | 399986
(bits/sec)   |         |         |
-------------------------------------------------------------------------
Percentage of simultaneous connections conforming at various speed limits
--------------------------+----------------------------------------------
Successive Speed Limits : | 380000  360000  340000  320000  300000  280000
Cumulative Conformance %: |   0.00    5.00   10.00   90.00  100.00  100.00
ERRORS FOUND
Iteration 1: 13 invalid connections found
    12 from load generators with 0 requests in a class or classes
     1 from load generators with > 1.0% errors in a class or classes
Iteration 1: % conforming connections is 74.0%  must be >= 95.0%
=============================================================================
Lastly, you will see the ASCII output format on the screen and in the res file. This is one of the formats available on the SPECweb99 result pages of the SPEC website. See the section on Result Pages for a description of this page.
Note: Creation of the pdf output causes warnings to be displayed on some platforms. The following warnings are harmless:
Bad free() ignored at /usr/users/spec/spec/web99/bin/lib/site_perl/5.005/pdflib.pm line 282.
PDFlib warning: Symbol font Symbol doesn't use builtin encoding
The manager script can generate the output in four formats: ASCII, HTML, PDF and PostScript. They get written to the results directory with the names output.<nn>.[asc | html | pdf | ps]. In addition, it creates an output.<nn>.raw file. Only the .raw file is needed to submit a result to SPEC, so errors in the creation of the other output formats can generally be ignored.
The result pages contain the following elements.
Sample output pages are included with the kit (ASCII, ASCII-detailed, HTML, PDF, PS).
The raw file contains all the inputs from the rc file and results from the test. You can make any of the other formats from the raw file with the manager -R command. The raw file is submitted to SPEC.
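For example, to regenerate just the HTML report from a previously produced raw file (the file name, and passing a single type name to -o, are illustrative), a command along these lines could be used:

specperl manager -R -o html output.01.raw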
If problems are encountered, rechecking the client, server, and rc file setup steps described above may help identify the cause.
The first step in tuning any benchmark is to analyze the workload and look for possible bottlenecks in the configuration. There are a number of factors or bottlenecks that could affect the performance of the benchmark on your system.
These tuning tips are for configuring the system to generate the best possible performance numbers for SPECweb99. Please note that these should not necessarily be used to configure and size real world systems. Configuring and sizing real world HTTP server systems is an extremely difficult exercise. SPECweb99 quantifies certain important aspects of web server performance. However, other workload components may be involved in a given real world web server, e.g. directory name service, ftp, mail, news, etc. Due to the lack of accurate qualitative sizing guidelines, experience or even guesswork often becomes the determining factor in arriving at a configuration. Even experience of a similar environment can be misleading due to the huge variation in user and workload profiles across different end user sites.
Note: The SPECweb99 kit supplies a sample implementation that is operational and correct, but is not a good choice for high performance.
Many HTTP daemons have companion products to accelerate static pages. These products typically use caching to return pages from memory rather than from disk.
Once you have a successful run, you can submit the result to the SPEC web committee for review by mailing the output.<nn>.raw file and the source code for the dynamic content implementation to subweb99@spec.org. When mailing the raw file, include it in the body of the mail message rather than as an attachment, and mail only one result per email message.
Save the HTTP daemon log file from the benchmark run as you may be asked to provide part of the log for review by the committee.
Note: The raw file uses the configuration and parameter information in the rc file. Please edit your rc file with the correct information prior to running the benchmark for submission.
Every submission goes through a two-week review process. During the review, members of the committee may ask for further information/clarifications on the submission. Once the result has been reviewed and approved by the committee, it is displayed on the SPEC web site at www.spec.org.