SPECweb2005 Release 1.20 Banking Workload Design Document
Due to the high variability in security requirements and the differences in dynamic content across web server workloads, it has become impossible to characterize the web performance of a server with any single workload. SPECweb2005 therefore uses three different workloads to characterize the performance of a web server, all of them based on real applications. The first workload is based on online banking; all requests in this workload are SSL based. The second workload is based on an Ecommerce application and includes both SSL and non-SSL requests. The third workload is based on downloading support patches for computer applications; its intent is to test the ability to download large files, and no SSL is used. To better represent the user experience, all three workloads are based on requests for web pages, as opposed to requests for individual files. Each page request runs a dynamic script at the server, which returns a dynamically created file. This is usually followed by requests for embedded images and other static files associated with that file, so a page is fully received only when all of its associated files have been received. The benchmark also includes QOS requirements for each page: for all pages except the large downloads, the QOS is based on page return delay; for the large pages, the QOS is based on byte rate.
This document presents a detailed description of the Banking workload. The description is based on our study of web server logs from a major bank based in the Texas area. Due to our agreement with the data provider and their request for confidentiality, we can only discuss the data characteristics; we cannot provide any further information about the data provider or share the logs in their raw form. For the data that we could not obtain from the logs, we based the information on anecdotal data collected by surfing various banking web sites while running some dummy transactions. In Section 2 we present summary information on the data collected from the web logs. Section 3 presents how the workload model emulates the client arrival pattern and activity periods. In Section 4, we present the details of the dynamic scripts proposed in the model, including the parameters that need to be passed between the client and server, as well as the interaction of the server with the backend (the backend functionality itself is stubbed out). The script details are further elaborated in an accompanying document that presents the pseudocode for all the scripts in HTML format. In Section 5, we present a Markov chain based model that helps us emulate the dynamic scripts run by each client and the static file requests within each page.
In this section, we present a summary of the data logs from a bank in Texas, comparing the numbers with those from SPECweb99. Table 1 summarizes the request types and their fractions in SPECweb99 and in the banking logs we studied. Table 2 shows the split of HTTP response codes seen in the banking server logs. SPECweb99 does not represent any server response other than "200" (OK), but client caching, and the fact that returning a "304" response can place a very different sort of resource usage on the server, make it important to represent this in the workload. Table 3 shows the access protocol versions (HTTP 1.0 and HTTP 1.1) for both the banking data and SPECweb99. As pointed out in [Nahum], the relative usage of HTTP 1.1 is growing, and depending upon the client machines used by customers this fraction can be even higher. Based on this, it was decided that SPECweb2005 will base all requests on HTTP 1.1.
Request Type | Percentage in Banking Logs | Percentage in SPECweb99
---|---|---
GET | 95.06 | 95.06
POST | 4.93 | 4.93
HEAD | 0 | 0

Table 1: Request types and their frequencies
Response Code | Percentage in Banking Data Logs | Percentage in SPECweb99
---|---|---
200 OK | 68.1 | 100
206 Partial Content | 0.12 | 0
302 Found | 4.5 | 0
304 Not Modified | 27.05 | 0
403 Forbidden | 0.03 | 0

Table 2: HTTP Response Codes
Protocol Version | Percentage in Banking Data Logs | Percentage in SPECweb99
---|---|---
HTTP 1.0 | 21.78 | 30
HTTP 1.1 | 78.22 | 70

Table 3: Protocol Versions
Table 4 compares the file sizes seen in the banking server log with those represented in SPECweb99. The file size distribution listed here is based on the sizes obtained for static files only, since the server did not log the response sizes for dynamic requests. We see a significant shift in the size of files: SPECweb99 does not represent file sizes in the range of 0-100 bytes and over-represents very large files, which are not seen at all in the banking data. Figure 1 shows a distribution of file sizes for the files at the server. The data showed 169 unique files, all of them static image files. Note that there may be other files in the server directory that were never requested in the logs we obtained, but we believe such files must be few, given that our logs spanned a period of 2 weeks and included 13+ million requests. Our logs also showed that there were 2-3 requests for static files for each dynamic request on the system, and that about 30% of them resulted in a "304" response from the server.
Size Range | Class | Banking Data | SPECweb99
---|---|---|---
0-100 B | -1 | 59.20% | 0.00%
100 B-1 KB | 0 | 34.00% | 35.00%
1 KB-10 KB | 1 | 6.30% | 50.00%
10 KB-100 KB | 2 | 0.50% | 14.00%
100 KB-1 MB | 3 | 0.00% | 1.00%

Table 4: Workload file size distribution for requests
Since our server logs did not contain any information about the size of the file returned as a result of running a dynamic script, we did an anecdotal survey of the sizes of the results returned by various banking programs. While the data collected was based on numbers from different servers and represented user sampling as opposed to server based sampling, we found that almost all scripts returned pages in the range of 10-100 KB. In the absence of other detailed information from servers, the sizes of the pages in this workload are based on the actual sizes for each type of request that we sampled.
In our study of the banking data, we also contrasted the file popularity distribution with the one seen in SPECweb99. Our data suggests the use of a higher Zipf exponent for the Banking data, since the higher-ranked files are more popular in this case than in SPECweb99. Since the anecdotal data we collected included a sample of embedded images (along with their sizes), we will use the actual data from this sample to determine the sizes of the embedded files.
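As a rough illustration of Zipf-based file selection (this is not the benchmark's actual popularity model; the exponent and file count below are illustrative assumptions), a generator could look like the following:

```python
import random

def zipf_weights(n_files, exponent):
    # Weight of the file with popularity rank k is proportional to 1 / k^exponent.
    weights = [1.0 / (rank ** exponent) for rank in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def pick_file(weights):
    # Returns the popularity rank (1 = most popular file) of the chosen file.
    return random.choices(range(1, len(weights) + 1), weights=weights)[0]

# Example: 40 static files with an assumed exponent of 1.2 (illustrative only).
weights = zipf_weights(40, 1.2)
sample = [pick_file(weights) for _ in range(10)]
```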
In a companion paper [Weber and Hariharan] we compare and contrast various trace generation methodologies. In that document, we propose a method that naturally emulates the client arrival and activity pattern based on information from log files. The model used to generate the client traffic here is very similar to the one described in that paper.
Customers arrive at a given rate λ. They go through several activity periods, each characterized by a burst of requests from that client. We call these activity periods "ON" times. While analyzing the logs, we treat requests from a user as belonging to the same "ON" time when they are separated by less than a given threshold; the threshold used in our analysis is 5 sec (which seemed an appropriate choice based on both intuition and analysis). After each activity period the client may either leave the system or go through an "OFF" period (equivalent to a Think Time) and start an "ON" period again. The client arrival rate parameter may be increased to stress test the server and, when used with a threshold for response time and throughput, provides a possible direct or indirect metric for server capacity. In the data that we analyzed, we found that the OFF-time distribution is long-tailed, i.e. there is a finite probability that OFF-time values are very large. Given this, we may treat a client as having left the system if the generated OFF-time value exceeds the duration of the benchmark run. Thus there are two ways a client leaves the system: (a) after each "ON" time, there is a finite probability that the client leaves the system; (b) the client may request the "Log-off" page and leave the system. In the first case, since the client abandons the system, the server is left with a "hanging session" from that client, whereas in the second case the session comes to a clean close.
The above model forms the basis of traffic generation. However, instead of generating customer arrivals at a rate λ, we maintain the same number of connections throughout the benchmark run, after ramping up the user sessions over a ramp-up period. Each session, however, includes "ON" and "OFF" periods: "OFF" periods are akin to Think Times, and an "ON" period is when the customer makes a burst of requests.
In our study of the logs, we noted that most "ON" periods had exactly one dynamic request but several different static requests, indicating that the other requests within the same "ON" period were for the embedded static files of that dynamic request.
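For illustration, the 5-second threshold rule described above can be applied to one user's request timestamps roughly as follows (a sketch of our log analysis step, not benchmark code; the function and variable names are our own):

```python
ON_THRESHOLD = 5.0  # seconds; the gap threshold used in our log analysis

def split_on_periods(timestamps, threshold=ON_THRESHOLD):
    """Group one user's sorted request times (seconds) into ON periods:
    a gap of `threshold` or more seconds starts a new ON period."""
    if not timestamps:
        return []
    periods, current = [], [timestamps[0]]
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev < threshold:
            current.append(cur)      # same burst of activity
        else:
            periods.append(current)  # an OFF period begins; close this ON period
            current = [cur]
    periods.append(current)
    return periods

# Example: three requests in one burst, then a fourth after a 12-second gap.
print(split_on_periods([0.0, 1.2, 3.0, 15.0]))  # -> [[0.0, 1.2, 3.0], [15.0]]
```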
Each ON period essentially models a "click" at the browser. For the Banking application, this results in a page request. Each page request runs a dynamic script at the server, and a corresponding page is generated dynamically. Following this, the browser makes a series of requests for images and other static files embedded in that page. Thus the duration of an "ON" period is simply the time it takes to complete a page request, including the requests for the embedded static files.
Our analysis of the logs showed that the "OFF" time could be modeled using a Pareto distribution. In the workload model, however, we use a geometric distribution to model the Think Time, i.e. the OFF period, which is essentially the user think time between requests. The average Think Time used in the Banking workload is 10 sec.
Here is the algorithm used to generate Think Times. Let T = thinkTime, I = thinkInterval, M = maxThink. These are 10, 2, 150 for Banking and Ecommerce, and 5, 1, 75 for Support, respectively.
A random variable r is generated as follows:
r = -(T - I/2) * log(x), where x is a uniform random variable on (0, 1]
If r is greater than M, another x is drawn and r is recalculated. To calculate the effective range for x, let x0 be the value that makes r equal to M, i.e.,
-(T - I/2) * log(x0) = M
so x0 = exp(-M/(T - I/2)). Thus the effective range for x is (x0, 1].
The average of r when x is in the range (x0, 1] is therefore:
((T - I/2) * (1 - (M/(T - I/2) + 1) * exp(-M/(T - I/2)))) / (1 - exp(-M/(T - I/2)))
Substituting for T, I, and M, we get an average of 8.99 (approximately 9) for Banking and Ecommerce and about 4.5 for the Support workload. The code then converts r to a discrete value in intervals of I, rounding upwards. The average of the discretized Think Time values is about 9.9 seconds, which is what is used in both Banking and Ecommerce. (For the Support workload, the average think time is 5.1 seconds.)
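A minimal sketch of this Think Time generator (our own Python rendering of the algorithm above, not the benchmark's client code) is:

```python
import math
import random

def think_time(T=10.0, I=2.0, M=150.0):
    """Draw one Think Time using the algorithm above: an exponential draw with
    mean T - I/2, redrawn whenever it exceeds M, then rounded up to a multiple
    of I. Defaults are the Banking/Ecommerce values (10, 2, 150)."""
    while True:
        x = 1.0 - random.random()          # uniform on (0, 1]
        r = -(T - I / 2.0) * math.log(x)
        if r <= M:                         # reject samples above maxThink
            break
    return I * math.ceil(r / I)            # discretize upwards in steps of I

# The sample mean should come out near the 9.9 s quoted above for Banking.
samples = [think_time() for _ in range(100_000)]
print(sum(samples) / len(samples))
```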
Based on our analysis, we propose the following Markov chain based approach to model the access pattern for the various scripts. Our banking data, as well as our anecdotal experience with accessing bank accounts, shows that typically all banking transactions can be grouped into the following four categories:
Account information access or Account Maintenance related
Bill Pay related
Money Transfer
Loan related
While we tried to link the static files requested in the logs with the above scripts, this was not easy, especially when compounded with the problem of deciding whether a requested file was already in the cache, since many clients share the cache on the client machine. Given this issue, we decided to base the request pattern largely on our anecdotal experience with "XYZ Bank"'s site, looking at the files embedded within similar dynamic requests.
First, we note that every bank differs in what it presents to its customers through its online banking site. Hence the dynamic scripts, as well as the embedded files, that we found while surfing the online banking website were quite different from the ones we found in the web logs that we analyzed. We note the following differences before describing the dynamic scripts and the embedded files we saw within them.
There were no loan-related links on "XYZ Bank"'s website. Either "XYZ Bank" does not offer loans to its online customers, or loans are offered through another website and we could not locate the link to it from the usual online banking site.
"XYZ Bank" seems to have slightly larger sized embedded files, and hence fewer of them compared to what we saw in the server logs from the other bank. Based on my own surfing experience within the site, I could identify a total of 60 static files, compared to the 167 unique files that we saw in the server logs from the bank in Texas.
Typically each page had about 15-17 embedded static files, but most (13-14) of these embedded files were common to all the requests. The cache expiry for all these files was set so that the files would never expire from the cache. This means that the very first time a particular customer accesses the service from a given machine, there will be a request for all 17 files, but for subsequent requests within the session, and for subsequent sessions from the same machine, the client will make fewer requests. Given that the cache is not expected to expire at the client, it is not clear when these files will be re-requested or when these requests will result in a 304 response from the server. Thus we may have to make an assumption about the cache expiry of these files (at least some of them) to match the values that we see in the log files.
For most of the dynamic requests found on the website, we list the function of the script, the embedded files, and their respective sizes. In order to match the access frequencies of the various scripts to the pattern observed in our study of the server logs, we modified the script functionality by merging two or three different scripts into a single one. We believe this makes no difference to the nature of the workload model and simplifies the model a great deal.
We will emulate the "304"s seen in the static file responses by requesting files with a "If-Modified-Since" clause a certain fraction of the time. This fraction will be different for different static files, based on the relative popularity of the file.
The following table lists the various pages accessed in our anecdotal experience with XYZ Bank, the size of the page returned, a one-line description of the function accessed, and the embedded static files (their names and sizes are given in Table 9).
Dynamic Script Number | Dynamic Script | Size of Returned Page | Function | Embedded Static File Numbers (for file names and sizes, see Table 9)
---|---|---|---|---
0 | Login Welcome | 4 KB | Validate password, establish user session and return summary information on each account held by user. | 1, 2, 3, 4, 5, 6, 7, 8
1 | Account Summary | 17 KB | Present monthly transaction details for the checking account and savings account. | 1, 2, 3, 4, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
2 | Check Detail Output | 11 KB | Presents the image of the front side of a check. | 1, 2, 3, 4, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 19, 20, 32
3 | Bill Pay | 15 KB | Presents a list of payees already added, along with payee account numbers and the payment dates of the last payment made to each payee. | 1, 2, 3, 4, 7, 8, 9, 15, 17, 21, 22, 27, 28, 33
4 | Add Payee | 18 KB | User sends a request to add a new payee; server returns a form to add payee info. | 1, 2, 3, 4, 7, 8, 9, 14, 15, 17, 21, 27
5 | Post Payee | 34 KB | User sends a post request adding payee information; server passes the info to the backend and returns post status. | 1, 2, 3, 4, 7, 8, 9, 14, 15, 17, 21, 23, 27, 34
6 | Quick Pay | 22 KB | User sends a quick-pay request; server returns a page with a form, including all payees for the user account, that the user can fill out in order to make the payments. | 1, 2, 3, 4, 7, 8, 9, 14, 15, 17, 21, 22, 24, 25, 26, 27, 33
7 | Bill Pay Status | 24 KB | User sends a request for bill-pay status; server returns a page giving details on the status of the various payments scheduled. | 1, 2, 4, 7, 8, 9, 11, 15, 17, 21, 22, 26, 33
8 | Profile | 32 KB | User sends a request for the profile; server returns the user profile, including address, phone number, etc., along with a form to change the profile. | 1, 2, 3, 4, 7, 8, 9, 10, 14, 15, 16, 17, 28, 29
9 | Change Profile | 29 KB | Post request from user with new profile information values; server passes the profile information to the backend and returns status. | 1, 2, 3, 4, 7, 8, 9, 10, 14, 15, 16, 17
10 | Order Check | 21 KB | User sends a request to order checks; server returns a form to place the order, including images of several check models. | 1, 2, 3, 4, 7, 8, 9, 10, 14, 15, 16, 17, 30, 31, 35, 36, 37, 38, 39
11 | Place Check Order | 25 KB | Post request including the information for a new check order; server passes the information to the backend and returns status. | 1, 2, 3, 4, 7, 8, 9, 10, 14, 15, 16, 17
12 | Transfer | 13 KB | User sends a request to transfer money; server returns the form to post the transfer information, including the account details for the user, which the server obtains from the backend. | 1, 2, 3, 4, 7, 8, 9, 11, 14, 15, 16, 17
13 | Post Transfer | 16 KB | User submits money transfer details, including dollar value and to/from account numbers; server passes the information to the backend and returns status. | 1, 2, 3, 4, 7, 8, 9, 10, 11, 14, 15, 16, 17
14 | Logout | 46 KB | User sends a logout request; server ends the session and returns the page. | 1, 2, 3, 4, 5, 6, 7, 8
15 | Check Image Request (front) | NA | User sends a dynamic request specifying the check number and side of the check (this is a pseudo state, created to avoid embedding a dynamic request within another); server verifies the check number and session id with the backend and returns a page with the check front image. | None
16 | Check Image Request (back) | NA | User sends a dynamic request specifying the check number and side of the check (this is a pseudo state like the previous one, created to avoid embedding a dynamic request within another); server verifies the check number and session id with the backend and returns a page with the check back image. | None

Table 6: Dynamic scripts, what they do, and the static files within
State | Nonzero outgoing transition probabilities
---|---
0 | 0.32, 0.32, 0.08, 0.08
1 | 0.56, 0.08, 0.08, 0.08
2 | 1
3 | 0.08, 0.48, 0.16, 0.08
4 | 0.72, 0.08
5 | 0.72, 0.08
6 | 0.72, 0.08
7 | 0.72, 0.08
8 | 0.72, 0.08
9 | 0.72, 0.08
10 | 0.72, 0.08
11 | 0.72, 0.08
12 | 0.72, 0.08
13 | 0.72, 0.08
14 | 1
15 | 1
16 | 0.72, 0.08

Table 7: Markov chain transition table (transition probabilities between states 0-16; only the nonzero entries of each row are listed)
The Markov Chain in the above table shows the transition probabilities between various states. We note a few things about this table:
There is no transition into State 0. This is because each user starts a session in this state, always.
The probabilities in most rows (except those for states 2, 14, and 15) add up to 0.8. This is because there is a 20% probability that the user leaves the system from any state. States 15 and 16 are not real states; they are part of state 2 and were created to ease implementation.
State 14 is the final or absorbing state, since once the user logs out the session ends.
The benchmark load is generated by clients running on one or more machines that run the client code. Each client thread represents one virtual user at any given time. Client threads may be added or deleted during the benchmark run, based on the arrival pattern of virtual users. In the benchmark run, the client threads are ramped up over the ramp-up period.
While emulating a given user, each client thread starts by generating a Login request. A Login request involves an SSL handshake between the client and server, setting up a new session. Following the system response to the login, which is a personalized HTML page, the client generates requests for all the static/image files associated with this page. Each client can issue a maximum of 2 parallel requests for image files. All these requests for image files use the same SSL session and TCP keep-alive header as the first request. All clients use the HTTP 1.1 protocol for all requests. The think time begins after the receipt of the last image file for a given page. The Think Times in the workload model are based on a geometric distribution (see Section 3.2). From each state, there is a finite probability of the customer leaving the system; this probability is 20% in Banking. If the client does not leave the system, then the client picks a dynamic query based on a pre-computed Markov chain (as shown in Table 7), issues the corresponding dynamic request, and then requests the corresponding image files after receiving the dynamically created HTML file. The Markov chain presented is designed keeping in mind the popularity curve for the various requests. In Table 8, we list the relative probability of each of the dynamic scripts in this design.
Most of the static files requested following a dynamic script run are independent of any client identity. The check images requested, however, do depend on the client id and are really dynamic requests, where the server needs to communicate with the backend to identify the location of the image and then fetch it. Given that the current implementation of the client code was not set up to handle a dynamic request within another dynamic request, it was modified so that the check image requests (front and back of the check) follow the request for the check_image_html page with probability 1.0. These are listed as states 15 and 16 in the descriptions within the config files.
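The following sketch illustrates the state-selection loop described above. The transition rows shown are small placeholders (not the actual 17-state matrix of Table 7), and all names are our own:

```python
import random

LOGOUT = 14   # clean end of session
LEAVE = -1    # sentinel: user abandons the session (the 20% case)

# Placeholder chain with only three real states; the benchmark uses the full
# transition matrix of Table 7, loaded from its configuration files.
TRANSITIONS = {
    0: [(1, 0.4), (3, 0.4), (LEAVE, 0.2)],
    1: [(1, 0.3), (3, 0.3), (LOGOUT, 0.2), (LEAVE, 0.2)],
    3: [(1, 0.4), (3, 0.2), (LOGOUT, 0.2), (LEAVE, 0.2)],
}

def next_state(state):
    targets, probs = zip(*TRANSITIONS[state])
    return random.choices(targets, weights=probs)[0]

def run_session():
    """Walk one emulated session: start at Login Welcome (state 0) and stop
    when the user logs off cleanly or silently abandons the session."""
    state, path = 0, [0]
    while state not in (LOGOUT, LEAVE):
        state = next_state(state)
        path.append(state)
    return path   # ends in 14 (clean close) or -1 (hanging session)
```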
Dynamic Script Index (use these when using the Markov chain above) | Name of the Script | Relative Probability % (computed from the proposed Markov chain)
---|---|---
0 | Login Welcome | 21.53
1 | Account Summary | 15.11
2 | Check Details | 8.45
3 | Bill Pay | 13.89
4 | Add Payee | 1.12
5 | Post Payee | 0.80
6 | Quick Payment | 6.67
7 | Bill Pay Status | 2.23
8 | Change Profile | 1.22
9 | Post Changed Profile | 0.88
10 | Order Checks Request Form | 1.22
11 | Place Check Order | 0.88
12 | Transfer Money Form | 1.71
13 | Post Transfer Money | 1.22
14 | Logoff | 6.16

Table 8: Relative frequencies (normalized to 100%) of each of the dynamic scripts¹
The end-user connection speed emulated in the benchmark is 100,000 kb/s. This is a fraction of the maximum cable speed and was chosen taking into consideration the realistic bandwidth that a cable user would actually get. The end-user speed is emulated by forcing the client to sleep between requests for a period that accommodates the difference between the LAN speed and the emulated user speed.
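One way to implement this kind of client-side throttling (a sketch under the assumption that the emulated rate is expressed in bytes per second; the function and parameter names are our own) is:

```python
import time

def throttle(nbytes, elapsed_sec, emulated_bytes_per_sec):
    """Sleep long enough that a transfer of `nbytes` bytes, which actually took
    `elapsed_sec` seconds on the fast LAN, appears to have run at the emulated
    end-user speed."""
    target = nbytes / emulated_bytes_per_sec   # time the transfer should take
    if target > elapsed_sec:
        time.sleep(target - elapsed_sec)
```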
The metric used in this benchmark is based on end-user experience. Based on published literature in the area of Human Factors, Ecommerce and Banking customers perceive the service as responsive as long as the entire page is received within 2-3 seconds. The same literature also points out that a delay beyond 4-5 seconds is not tolerable for a normal user. Based on this, we defined two time limits, TIME_GOOD and TIME_TOLERABLE, such that at peak load we expect 95% of the responses to be returned within TIME_GOOD and 99% of the responses to be returned within TIME_TOLERABLE. For the Banking workload, TIME_GOOD is set at 2 seconds and TIME_TOLERABLE at 4 seconds. We note that this metric is not applicable when downloading large files, such as those in the Support workload; in that case, we meter the "goodness" of the page by the observed byte rate. For details, please see the Support Workload Design Document.
Hence the benchmark metric is essentially the maximum number of user connections that can be supported such that TIME_GOOD is met by at least 95% of the page requests and TIME_TOLERABLE is met by at least 99% of the page requests.
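As a simple sketch (not the reporter's actual implementation), the QOS check over a list of per-page response times amounts to:

```python
TIME_GOOD = 2.0        # seconds, Banking workload
TIME_TOLERABLE = 4.0   # seconds, Banking workload

def qos_passes(page_times):
    """page_times: per-page response times in seconds collected during a run."""
    n = len(page_times)
    good = sum(t <= TIME_GOOD for t in page_times)
    tolerable = sum(t <= TIME_TOLERABLE for t in page_times)
    return good >= 0.95 * n and tolerable >= 0.99 * n
```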
Most static files will be requested as GIF files embedded in the dynamically created HTML pages. Since the benchmark does not make the client parse the dynamically created page, the GIF files associated with a given dynamically created HTML file are pre-programmed to be requested by the client as soon as the response to the dynamic request is received. Files listed in this table, other than the check images, are not user specific. The number of check images stored in the file system needs to grow with the size of the run. For each user emulated in the database, we store 20 check images. For each active user (simultaneous connection), data for 100 users is included; in other words, even at peak load, we expect no more than 1% of the total number of customers to be active simultaneously. The image of a check's front side is about 5 KB-10 KB, and the back side varies between 11 KB-15 KB.
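As a rough sizing example based on the figures above (the load level and the assumption of an even front/back mix are ours, not part of the benchmark specification), the check-image fileset grows as follows:

```python
simultaneous_sessions = 1_000                   # example load level (assumed)
total_customers = 100 * simultaneous_sessions   # ~1% of customers active at peak
check_images = 20 * total_customers             # 20 stored checks per customer

# Assume an even mix of fronts (~5-10 KB) and backs (~11-15 KB).
avg_image_bytes = (7_500 + 13_000) / 2
print(check_images, "check images, about",
      round(check_images * avg_image_bytes / 1e9, 1), "GB")
```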
File Number | File Name | File Size (bytes) | "304" Fraction
---|---|---|---
1 | Image 1 | 806 | 0.5
2 | Image 2 | 246 | 0.4
3 | Rectangle_Icon | 43 | 0.4
4 | Logo_Icon | 1140 | 0.5
5 | ForNewUsers_Icon | 36789 | 0.33
6 | Continue_Icon | 764 | 0.33
7 | Secure_Icon | 1522 | 0.7
8 | House_Icon | 858 | 0.6
9 | Mail_Icon | 216 | 0.3
10 | Help_Icon | 205 | 0.2
11 | Signoff_Icon | 316 | 0.1
12 | Tab_Icon | 52 | 0
13 | Account_Tab | 1285 | 0
14 | BillPay_Tab | 694 | 0.1
15 | Transfer_Money_Tab | 676 | 0.5
16 | Customer_Service_Tab | 778 | 0.1
17 | Account_Open_Tab | 488 | 0.3
18 | DownloadButton_Icon | 764 | 0
19 | Viewback_Icon | 731 | 0
20 | Returnlost_Icon | 1059 | 0
21 | Payees_Icon | 224 | 0
22 | Ebills_Icon | 208 | 0.2
23 | Eimage_Icon | 952 | 0
24 | Add_Ebill_Icon | 646 | 0
25 | Quick_Pay_Icon | 353 | 0
26 | Display_Payment_Icon | 915 | 0
27 | Payment | 234 | 0
28 | Changenickname | 1043 | 0
29 | Don't Change Nickname | 1178 | 0
30 | OrderCheckcopy | 1024 | 0
31 | Don'tordercheckCopy | 1170 | 0
32 | Sort | 102 | 0.5
33 | Billpayopen | 603 | 0.1
34 | Addnewpayee | 943 | 0
35 | Check_01.jpg | 11838 | 0
36 | Check_02.jpg | 6481 | 0
37 | Check_03.jpg | 12836 | 0
38 | Check_04.jpg | 10868 | 0
39 | Check_05.jpg | 11405 | 0
40 | Check_06.jpg | 5792 | 0
41 | Check_07.jpg | 10905 | 0
42 | Check_08.jpg | 5458 | 0
43 | Check_09.jpg | 11731 | 0
44 | Check_10.jpg | 10757 | 0

Table 9: File sizes and "304" percentages for all static files
Based on our study of web server logs, the "304" or "File not modified" responses form a significant part of the web server load. This response code is only associated with requests for static files, which could potentially be cached at the client. As mentioned in Section 1, about 30% of the overall responses from the server fall into this category. Also, the "304" response percentage is higher for the more frequently requested files, i.e. the "304" response percentage increases with the popularity of a file; the highest values we saw were close to 70% for the most popular files and 0% for some of the less popular ones. Table 9 shows the fraction of "304" responses for each of the static files used in the benchmark. Since the request rate for the check images is very low, we assume that the response code for these check images will always be "200".
¹ Note that we have not included the relative % for the check image states, since these are pseudo states.
Copyright © 2005-2006 Standard Performance Evaluation Corporation. All rights reserved.