These instructions assume you have already set up the six VMs that
support the modified SPEC CPU2006, SPECmail (IMAP), SPECjAppServer2004,
and SPECweb2005 workloads that make up a SPEC virt_sc "tile". Refer to the SPEC virt_sc
User's Guide and the workload-specific documentation included with
the version of these workloads provided in this benchmark kit. The
following instructions are intended to assist the user in setting up the
SPEC virt_sc prime controller that executes these workloads as subcomponents
of its virtualization-based server consolidation benchmark.
These instructions also assume you have read the SPEC virt_sc Design Document
and are familiar with concepts and terminology introduced there that are
used in these instructions.
Although these instructions are specific to client-side setup, they also
reference server-side components. Therefore, the following figure
representing all components of the testbed environment may be a useful
visual reference as you work through these instructions. Note that this
figure represents a single-tile test configuration. For each additional
tile, an additional client box and VM box would be required (and possibly
additional external storage, if applicable).
The SPEC virt_sc installer copies the required benchmark files into the
directory of your choosing. The instructions in this guide assume
installation of the SPEC virt_sc benchmark in /opt. Based on this assumption,
the following is the expected harness-specific directory structure after
running the installer:
/opt
/SPECbatch
/SPECimap
/SPECjAppServer2004
/SPECptd
/SPECpoll
/SPECvirt
/SPECweb2005
The setup documentation for the batch, mail, appserver, and webserver
workloads is located in these workloads' respective directories as well as
in the SPEC virt_sc User's Guide.
SPECpoll is a process that serves as a simple polling driver/responder,
merely validating that the VM targets are running.
A SPEC virt_sc benchmark run has two measurement phases: a "loaded" phase and
an "unloaded" phase. During the loaded phase, SPECpoll is used to poll the
VMs while the other four workloads are creating request-generated load
against their corresponding VMs. During the no-load, or "active idle"
phase, SPECpoll is used to poll all VMs. (Please see the SPECpower
Methodology for more information about "active idle" power measurement.)
There are two different sets of setup instructions for SPECpoll: those
for the VMs, and those for the clients.
SPECpoll needs a polling listener on each VM to respond to these poll requests. The instructions for setting up the listener on each VM to use the RMI port specified by the POLLING_RMI_PORT property in the harness's Control.config file are:
From the SPECpoll directory, execute:
java -jar pollme.jar [-n <hostname/interface>] -p 8001
Note: By default, the pollme class binds to the network interface that
corresponds to the name by which the VM knows itself. If, for example, the
infraserver VM's name resolves to the external network interface but the
process must be bound to the internal network interface, you need to
specify a hostname that resolves to the internal network interface when
you invoke pollme to prevent it from binding to the external one.
So if, for example, the host name "infraserver-int" in the hosts file
resolves to the internal network interface, you can direct it to the
correct interface by invoking pollme on the infraserver VM with the
following parameters:
java -jar pollme.jar -n infraserver-int -p 8001
SPECpoll on the clients uses only specpoll.jar and specpollclient.jar; pollme.jar is not invoked or used. The prime client and client classes contained in specpoll.jar and specpollclient.jar, respectively, are invoked in the same way as workload prime and client classes.
Install the SPEC virt_sc harness code on all client systems that run any of
the benchmark workloads. This is necessary because the SPEC virt_sc
clientmgr class starts these benchmark workloads at the beginning of
the run and assumes that the path to each workload is local. For our
purposes, we assume that the prime clients and all benchmark workloads of
one tile run on the same client. This is a straightforward way to
distribute the client processes to your hardware but is not required.
Appropriate modification of the Control.config file allows several
different client systems to drive the benchmark workloads for the same
tile.
While we recommend running the prime controller on a separate physical
system from those hosting the workloads, it can also run on any of the
systems hosting clientmgr processes. If you use virtual clients, ensure
they have enough resources to drive the workload. See the
SPEC virt_sc
User's Guide for information on client resource requirements.
Because editing the Control.config file requires adding information about your benchmark workload configurations, make sure you have installed the benchmark and set up each of the workloads before proceeding (see the workload-specific setup instructions in the SPEC virt_sc User's Guide for more information). See Appendix A for a list of keys in this file and their descriptions. Note that all subsequent references in this document to words that are in all capital letters refer to properties in the Control.config file.
The Testbed.config file in the /opt/SPECvirt home directory is where the testbed-specific configuration information must be entered describing the hardware and software used, tunings applied, and any other details required to reproduce the test environment. Note that any Testbed.config files that are a part of any of the individual workloads are not used by this benchmark (although the type of information entered is very similar).
With workload VMs set up according to the instructions in the SPEC virt_sc User's Guide, the following are the key properties used in setting benchmark load levels. Note that this section deals with benchmark load level modifications. The load levels of the individual workloads are fixed and controlled by the WORKLOAD_LOAD_LEVEL[] values. While you can modify these values to run non-compliant tests, running at modified workload load levels may require changes to database, web server fileset, and mailstore size. These requirements for workload-specific load level modifications are beyond the scope of this guide and should be researched in the workload-specific benchmark documentation.
Benchmark load is increased in units of tiles. This is controlled by the property NUM_TILES in Control.config. Of course, adding four workloads' worth of load when you want to increase the load on a server may not provide the level of load granularity desired. This is where the indexed version of the LOAD_SCALE_FACTORS[] property can help. For a single tile (specified by the index value), you can set the load for that tile to between 10% and 90% of full load in 10% increments. (Doing this for more than one tile is possible, but non-compliant.)
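For example, the following Control.config sketch (the values shown are illustrative only) runs three tiles with the third tile at 30% of full load:
NUM_TILES = 3
LOAD_SCALE_FACTORS[2] = 0.3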
Of course there are times when running a compliant benchmark configuration is not an objective. For example, you may want to run a subset of the benchmark workloads to focus on issues specific to that one workload. Modifying NUM_WORKLOADS allows you to do that. The key point to keep in mind is that you may have to change the workload number indexes in the Control.config file. For example, if you wanted only to run the web workload, you would need to change the web workload index (and all related web-specific indexes) in Control.config from "1" to "0", and then set "NUM_WORKLOADS = 1".
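As an illustrative (and non-compliant) sketch, assuming the web workload's prime client and client originally used workload index 1 on ports 1096 and 1010, the reindexed entries might look like the following; other web-specific indexed keys (for example, PRIME_PATH and CLIENT_PATH) would need the same change:
NUM_WORKLOADS = 1
PRIME_HOST[0][0] = "client1:1096"
WORKLOAD_CLIENTS[0][0] = "client1:1010"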
If you want to run all workloads on all tiles at a higher or lower load
level than the default, changing the value of the property
LOAD_SCALE_FACTORS allows you to do this. (For higher load levels, you
need to ensure that the corresponding workload VM datasets are built to
support the higher load levels.) Note that the comma-delimited string of
numbers in the non-indexed LOAD_SCALE_FACTORS property determines the
number of measurement intervals to run in a single test. Also note that
the ",0" at the end of the LOAD_SCALE_FACTORS string only applies to power
measurement runs.
In the case of power-included runs, wherever a "0" value is included in
the LOAD_SCALE_FACTORS string, during that interval the prime controller
runs an active idle measurement. However, all "0" values are ignored for
non-power tests. The only reason to remove the active idle-specific load
reference in this string is if you wanted to skip an active idle
measurement in a benchmark run that includes power measurement.
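As a sketch of the format described above (the exact values in your Control.config may differ), a power-measurement run with a single full-load interval followed by an active idle interval could use:
LOAD_SCALE_FACTORS = "1, 0"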
Note: This is an advanced benchmark configuration technique that can safely be ignored when setting up your first, single-tile test. However, as these instructions hopefully make clear, this capability may become handy as you start adding tiles.
By default, tile index numbers must start at 0 and increase by one for
each added tile. Because the datasets for each tile are tile
number-specific, using this default methodology requires that you first
set up and run Tile 0, and then set up and add Tile 1, etc. Further, by
default each tile can only be run along with all of the lower numbered
tiles. That is, you cannot run Tile 1 by itself because the default
ordering scheme expects the first tile to be Tile 0.
This is where the TILE_ORDINAL property may come in handy. Using the
TILE_ORDINAL property supersedes the default ordering scheme. However, if
tile ordinals are used, then they must be specified for all tiles used in
a benchmark run. For example, if you use TILE_ORDINAL for a four-tile run,
the harness expects TILE_ORDINAL[0] through TILE_ORDINAL[3] to be defined
in Control.config. (It ignores any values for indexes greater than 3.)
The simplest and perhaps most common case for using TILE_ORDINAL is when
you have just set up your second tile and want to test only that tile in a
benchmark run. In that case, you set "TILE_ORDINAL[0] = 1" and then make
sure all other tile index references in Control.config for Tile 1 are
consistent with that tile (e.g. ensure the PRIME_HOST[1][w] values point
to the hostnames and ports for Tile 1, etc). When the prime controller
begins benchmark execution, it then sees that you want Tile 1 to be your
first tile and executes accordingly.
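As a minimal sketch (hostnames and ports here are placeholders), the relevant Control.config entries for running only Tile 1 as the first and only tile might look like:
NUM_TILES = 1
TILE_ORDINAL[0] = 1
PRIME_HOST[1][0] = "client2:1098"
PRIME_HOST[1][1] = "client2:1096"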
With TILE_ORDINALs, the only expectation is that the TILE_ORDINAL indexes
start at 0 and increase by one for each additional tile. The values used
for the tile numbers and their ordering are not bound by such constraints.
For example, assuming you had four tiles set up and wanted to run two of
them at a time, in addition to running:
TILE_ORDINAL[0] = 2
TILE_ORDINAL[1] = 3
You could run:
TILE_ORDINAL[0] = 1
TILE_ORDINAL[1] = 3
or even:
TILE_ORDINAL[0] = 3
TILE_ORDINAL[1] = 0
Thus the TILE_ORDINAL property allows running any tile in any order in a
benchmark run, provided the corresponding tile indexes for the other
properties in Control.config are consistent. For example, in a four-tile
run using the TILE_ORDINAL property, LOAD_SCALE_FACTORS[3] no longer also
refers to the fourth tile in a run. It now refers specifically to Tile 3.
So if Tile 3 were not included as one of the values in the TILE_ORDINAL
list, the harness would skip this tile-specific load scaling and would
instead run all tiles at the default LOAD_SCALE_FACTORS rate.
At the start of every benchmark run, the prime controller
performs a two-phase time synchronization check between the clients, prime
clients, and VMs in the first phase and between the prime clients and the
prime controller in the second phase. Therefore, we recommend
synchronizing system clocks between all of these components before the
start of each benchmark run.
If this synchronization is performed via NTP, then you must ensure that
time synchronization does not occur in the middle of a benchmark run, as
time shifts during a run can compromise response time measurements on the
clients as well as compromise the appserver workload's ability to
accurately perform post-run database checks.
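For example, on Linux-based clients and VMs you might perform a one-time synchronization before the run and then stop the NTP service so that no adjustment occurs mid-run (a sketch assuming ntpdate/ntpd are available and ntp.example.com is your time source):
ntpdate ntp.example.com
service ntpd stop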
For each workload instance in every tile, you must start a dedicated client manager process that starts the prime clients at the beginning of a run. Similarly, you need to start one client manager process for each physical or virtual client used by these prime clients. If multiple prime clients use the same physical or virtual client, however, only a single client manager process is required to drive the clients used by all prime clients.
For each PRIME_HOST in Control.config, start a client manager process on the specified host and port.
For example, for PRIME_HOST[0][0] = "myhostname:1098", from the SPECvirt
directory on myhostname, start a client manager process as follows:
java -jar clientmgr.jar -p 1098 -log
Repeat this for each PRIME_HOST entry in Control.config. For a compliant
benchmark, start four of these processes for each tile (one each for batch,
mail, appserver, and webserver).
On each physical or virtual client used by any of the prime clients,
start a client manager process on the specified port. To do so, for each
unique host in WORKLOAD_CLIENTS, start a client manager process on the
port specified by CLIENT_LISTENER_PORT.
For example, for WORKLOAD_CLIENTS[0] = "myhostname:1091" and CLIENT_LISTENER_PORT =
"1088", from the SPECvirt directory on myhostname, start a client manager
process as follows:
java -jar clientmgr.jar -p 1088 -log
Repeat this for each unique client host. Note that you do not use the port
specified in WORKLOAD_CLIENTS; that port is for the workload client to use
to listen for RMI commands from its prime client. The CLIENT_LISTENER_PORT
(1088) is used for communication between the SPEC virt_sc prime controller and
the client manager process.
Using the above instructions to set up the harness to run the
load-generating processes for a single tile on one client requires
starting one clientmgr process for each of the four prime clients
(batch, mail, appserver, and webserver) plus a single clientmgr process
that starts all of the client processes for these four workloads. The
following picture is a representation of these five processes on a single
client, listening for commands on their respective RMI ports. This is the
first stage of the startup process.
The following instructions apply only to users who intend to collect power and temperature measurements during a benchmark run and these instructions assume you have power and temperature meters already properly connected to the SUT and/or external storage. If you do not have existing power meters but wish to configure and test the use of power and/or temperature daemons during a benchmark run, you can configure the daemons to run in "dummy mode" (though obviously this results in a non-compliant benchmark test).
The following example also assumes the daemons are being started on a
"unix-like" prime controller and communication between the prime
controller and the daemons occurs via the controller's serial ports.
However, these daemons need not be local to the controller, and there are
Windows executable files available for daemons connected to Windows
systems.
Within the installation directory containing the SPECvirt and workload
directories (/opt by default for a Unix/Linux environment) is a "SPECptd"
directory that contains the ptd executable and script/batch files for
starting the power and temperature daemons. The format for starting the
(Linux) ptd is:
./ptd-linux-x86 [options] <device-type-#> <device-port>
From the /opt/SPECptd directory, running "./ptd-linux-x86" displays the
invocation options for this executable. For communicating with a supported
power meter, you can find the number that corresponds to your meter in
this output. ("0" starts the ptd in dummy mode.) Of the parameter options
listed in the output, the "-t" option (runs the ptd in temperature mode)
and the "-p port" option are the most commonly used. Since the ptd by
default tries to use port 8888, you must use the "-p port" option to
override this value if that port is already in use by another ptd or other
process.
As an example:
./ptd-linux-x86 -p 8890 8 /dev/ttyS0
starts a ptd daemon in power mode using port 8890 and communicates with a
Yokogawa WT210 power meter connected to /dev/ttyS0 (COM1) of a
(Linux-based) prime controller. Alternatively:
./ptd-linux-x86 -t -p 8890 1000 /dev/ttyS0
runs the ptd in "temperature mode" with the ptd returning "dummy"
temperature data.
Once the PTD executables are able to communicate with the power meters
correctly when started, the next step is to tell the prime controller
about these PTD settings in Control.config. The first parameter to change
is USE_PTDS; set it to "1". Once it is set, the controller uses all PTDs
defined via PTD_HOST[x] entries. PTD_HOST is the hostname of the system
running the PTD. For this example, since the three PTDs are running on the
prime controller, we can simply set:
PTD_HOST[0] = localhost
PTD_HOST[1] = localhost
PTD_HOST[2] = localhost
Next, tell the prime controller what port each of the PTDs is listening
on. This must match the ports specified when invoking the PTD:
PTD_PORT[0] = 8888
PTD_PORT[1] = 8889
PTD_PORT[2] = 8890
Lastly, tell the prime controller what the specified PTD is measuring:
server power (SUT), external storage power (EXT_STOR), or, in the case of
ambient temperature measurement, which component the temperature sensor is
near:
PTD_TARGET[0] = "SUT"
PTD_TARGET[1] = "EXT_STOR"
PTD_TARGET[2] = "SUT"
For the temperature daemon the PTD_TARGET is either SUT or EXT_STOR,
depending on where ambient temperature is being measured.
SAMPLE_RATE_OVERRIDE and OVERRIDE_RATE_MS should generally not be
modified. LOCAL_HOSTNAME and LOCAL_PORT specify the local network
interface and port to use to connect with the PTD_HOST. (In most cases,
specifying these two properties is unnecessary, and they can be left
commented out.) The following picture shows the addition of three
power/temperature daemons, listening for commands on their respective RMI
ports.
The last step in configuring the PTDs is to link a specific PTD with a
specific power meter description. This is done through the PWR.PTD_INDEX[]
and TMP.PTD_INDEX[] properties in Testbed.config. For each power or
temperature meter listed in Testbed.config there must be a PTD_INDEX value
that corresponds to one of the PTD_HOST indexes in Control.config.
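Continuing the three-PTD example above, a sketch of the corresponding Testbed.config entries might be the following, where each value is the matching PTD_HOST index from Control.config (the meter-description indexes shown here are only placeholders):
PWR.PTD_INDEX[0] = 0
PWR.PTD_INDEX[1] = 1
TMP.PTD_INDEX[0] = 2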
Now that all of the client manager listeners are up and any power and temperature daemons are ready to poll their respective meters, you are ready to start the prime controller to begin a benchmark test. To do so, open a console window on your prime controller system and from the SPECvirt directory, run:
java -jar specvirt.jar -l
If the report requires editing, modify the properties in the raw file
rather than in Control.config or Testbed.config. Within the raw file,
however, RESULT_TYPE is the only editable property from Control.config.
Other than this, only the properties contained in Testbed.config may be
edited in a raw file.
Once edited, to regenerate the formatted results using the edited raw
file, invoke the reporter by passing it the name of the raw file as a
parameter. For example:
java -jar reporter.jar -r <raw_file_name>
For a complete list of reporter invocation options, pass the reporter the
"-h" argument.
If you want to submit a benchmark result to SPEC for review and
publication and have not already edited and regenerated the raw file
manually, you need to run the raw file created by the harness
through the reporter with the same syntax as used for formatted HTML file
regeneration:
java -jar reporter.jar -r <raw_file_name> [-t [1-7]]
If you wish to change the type of formatted result files generated without
changing the RESULT_TYPE property in the raw file, override the value in
the raw file by passing the -t parameter with the corresponding result
type to the reporter. Otherwise, you can omit this parameter from the
invocation string.
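For example, to regenerate the formatted reports from a raw file while overriding its RESULT_TYPE (the file name and result type value below are placeholders):
java -jar reporter.jar -r specvirt_result.raw -t 3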
If you have a submission file and want to recreate the raw file from which
it was generated, you can invoke:
java -jar reporter.jar -s <sub_file_name>
This strips out the extra characters from the submission file so that you
can view or work with the original raw file. This is the recommended
method for editing a file post-submission because it ensures you are not
working with an outdated version of the corresponding raw file and
potentially introducing previously corrected errors into the "corrected"
submission file.
For any of the keys below that use indexes, "w" always represents the
workload index and "t" always represents the tile ID. The following
describes the configuration properties in this file.
CONFIGURABLE BENCHMARK PROPERTIES

KEY | DESCRIPTION
---|---
NUM_TILES | NUM_TILES is the primary property used to increase or decrease the load on the SUT.
SPECVIRT_HOST, SPECVIRT_RMI_PORT | These are the hostname and port on which the prime controller listens for RMI commands. Because the prime clients use this information to contact the prime controller, the hostname used must resolve to the same IP address on both the prime controller and each of the prime clients.
RMI_TIMEOUT | This is the number of seconds SPEC virt_sc waits for the prime clients to start their RMI servers before aborting the benchmark run. If your benchmark run is failing because the prime clients need more time for their initial setup, you can increase this value. However, it is unlikely that this value is too small, so if you get a timeout, first look at the log files or console output on the prime clients and see if something else caused the clients to fail to start correctly.
TILE_ORDINAL[x] | Use TILE_ORDINAL to control which sets of PRIME_HOST clients to use for the run. The value specified corresponds to the "tile" number index specified in the PRIME_HOST key (i.e., PRIME_HOST[tile][workload]). If commented out, then the benchmark starts with PRIME_HOST[0][workload] and increments the PRIME_HOST tile index until it reaches NUM_TILES. If used, you must specify the TILE_ORDINAL index and value for *all* tiles (starting with 0).
PRIME_HOST[t][w] | This specifies the hostname and port number for each prime client (or workload controller). The indexes used specify the tile and workload index, respectively, and therefore must be unique. If there are multiple prime clients on a single host, then each must listen on a different port number. There is one PRIME_HOST per workload and "NUM_WORKLOADS" PRIME_HOSTs per TILE. The format is PRIME_HOST[tile][workload] = "<host>:<port>". Values for keys with indexes greater than NUM_TILES - 1 and NUM_WORKLOADS - 1, respectively, are ignored.
SPECVIRT_INIT_SCRIPT, SPECVIRT_EXIT_SCRIPT | The values for SPECVIRT_INIT_SCRIPT and SPECVIRT_EXIT_SCRIPT are the full name and path of any single script you wish to run on the prime controller before or after a benchmark run, respectively. Specifying only the script without the full path is acceptable if the script exists in the current path of the prime controller.
PRIME_HOST_INIT_SCRIPT[w] (or PRIME_HOST_INIT_SCRIPT[t][w]), PRIME_HOST_EXIT_SCRIPT[w] (or PRIME_HOST_EXIT_SCRIPT[t][w]) | PRIME_HOST_INIT_SCRIPT and PRIME_HOST_EXIT_SCRIPT are used to run scripts on the prime client systems before or after a benchmark run, respectively. If you include a path with the script name, it must be the full path. Specifying a file name only assumes the file exists in the current working directory of the prime client (typically the location of clientmgr.jar). If you need to run tile-specific initialization or exit scripts, use the double-indexed form of this property.
PRIME_HOST_RMI_PORT[w] (or PRIME_HOST_RMI_PORT[t][w]) | The PRIME_HOST_RMI_PORT is the port on which each prime client is listening for commands from the prime controller. Note that if you have more than one prime client on the same system, you MUST use different port numbers for each. Also, if you run more than one of the same type of workload on the same client, then you must use the double-index ([t][w]) form of this key so that you can set unique port numbers for the identical workloads on different tiles.
PRIME_PATH[w] (or PRIME_PATH[t][w]) | PRIME_PATH is the full path to the prime client. SPEC virt_sc uses this path in order to start the workload's prime client. If you are running multiple prime clients of the same workload type (for different tiles), then you likely want to use the double-index ([t][w]) form of this key so that you can specify different workload paths for each of the workloads. If each client runs at most one tile, the single-index form is sufficient.
POLL_PRIME_PATH | POLL_PRIME_PATH is the path to specpoll.jar that the harness uses during the active idle polling interval.
CLIENT_PATH[w] (or CLIENT_PATH[t][w]) | CLIENT_PATH is the full path to the client for a given workload. SPEC virt_sc uses this path in order to start the workload's client. If you are running multiple clients of the same workload type (for different tiles), then you likely want to use the double-index ([t][w]) form of this key so that you can specify different paths for each of the workloads. If running only one tile per client (or less), the single-index form is sufficient.
POLL_CLIENT_PATH | POLL_CLIENT_PATH is the path to specpollclient.jar that the harness uses during the active idle polling interval. Note that this is used only for the active idle polling interval and not for the idle server polling.
FILE_SEPARATOR | Use FILE_SEPARATOR if you want to override the use of the prime client OS's file separator. (This may be required when using a product like Cygwin on Windows.)
PRIME_APP[w] (or PRIME_APP[t][w]) | PRIME_APP is the workload prime client process that the client manager process starts for each benchmark workload, with indexes corresponding to the different workloads being run. The double-index form of this key should only be required if there are tile-specific differences between the values used.
POLL_PRIME_APP | POLL_PRIME_APP is the invocation string for the idle polling application that the harness uses during the active idle polling interval. Note that this key is not used for idle server polling during a loaded run.
CLIENT_APP[w] (or CLIENT_APP[t][w]) | CLIENT_APP is the name of the client (workload driver) that the clientmgr process starts and that the workload prime client controls. Any arguments that you pass to the client application must follow the name. The double-index form of this key is only required if there are tile-specific differences between the values used.
POLL_CLIENT_APP | POLL_CLIENT_APP is the invocation string for the idle polling client application that is used during the idle polling interval. Note that this key is not used for idle server polling during a loaded run.
PRIME_START_DELAY | PRIME_START_DELAY is the number of seconds to wait after starting the clients before starting the prime clients. Increase this value if you find that prime clients fail to start because the clients have not finished preparing to listen for prime client commands before these commands are sent.
WORKLOAD_START_DELAY[w] (or WORKLOAD_START_DELAY[t][w]) | WORKLOAD_START_DELAY staggers the time at which clients begin to ramp up their client load by delaying client thread ramp-up by the specified number of seconds. The number of seconds specified is the total time from the beginning of the client ramp-up phase. Therefore, if you have delays of 1, 5, and 3, respectively, for three different clients, the order of the start of workload client ramp-up is first, third, and then second.
RAMP_SECONDS[w] (or RAMP_SECONDS[t][w]), WARMUP_SECONDS[w] (or WARMUP_SECONDS[t][w]) | RAMP_SECONDS and WARMUP_SECONDS supersede any values used in the workload-specific configuration files for ramp-up and warm-up time. (For example, RAMP_SECONDS overrides "triggerTime" in SPECjAppServer2004.) These values need not be identical between workloads or even between tiles, as the harness extends the runtime of any workloads, as needed, to ensure the required common polling interval. However, the minimum compliant RAMP_SECONDS value is 180 and the minimum WARMUP_SECONDS value is 300 for all tiles and all workloads.
POLL_INTERVAL_SEC | POLL_INTERVAL_SEC is the number of seconds that data is collected once polling starts. This represents the "common" benchmark runtime interval when all workloads are in their runtime measurement phase. The compliant value is 7200.
ECHO_POLL | ECHO_POLL controls whether client polling values are mirrored on the prime clients. If set to 0, this polling data is only displayed on the prime controller terminal.
DEBUG_LEVEL | DEBUG_LEVEL controls the amount of debug information displayed during a benchmark run by the prime controller.
WORKLOAD_CLIENTS[w] (or WORKLOAD_CLIENTS[t][w]) | The WORKLOAD_CLIENTS values are the client hostnames (or IP addresses) and ports used by the workload clients. The hostname or IP address is specified relative to the workload prime client, and not the prime controller. For example, specifying 127.0.0.1 (or "localhost") tells the workload prime client to run this client on its host OS's loopback interface, rather than locally on the prime controller. If, for example, you use the hostname "client1" for all of your clients, and the corresponding prime client resolves this name to a unique IP address on each prime client used, then these keys can be of the form WORKLOAD_CLIENTS[w]. Otherwise, like the PRIME_HOST keys, these need to be of the form WORKLOAD_CLIENTS[t][w].
CLIENT_LISTENER_PORT | CLIENT_LISTENER_PORT is the port used by the clientmgr listener on each physical or virtual client system (driver) to start the client processes for each workload on that physical or virtual client.
POLLING_RMI_PORT | POLLING_RMI_PORT is the port used to communicate with the pollme processes running on the benchmark VMs. Pass this value to the pollme listeners when starting them on all VMs.
PRIME_CONFIG_FILE[w] (or PRIME_CONFIG_FILE[t][w]) | PRIME_CONFIG_FILE is the list of any files to copy from the corresponding LOCAL_CONFIG_DIR directory on the prime controller to the PRIME_CONFIG_DIR directory on the corresponding PRIME_HOST. Leave these as empty strings if you do not want to overwrite the workload configuration files on each prime client.
LOCAL_CONFIG_DIR[w] (or LOCAL_CONFIG_DIR[t][w]), PRIME_CONFIG_DIR[w] (or PRIME_CONFIG_DIR[t][w]) | LOCAL_CONFIG_DIR is the source location on the prime controller for the configuration files to copy to the workload prime clients. PRIME_CONFIG_DIR is the target location on the workload prime client for the config files copied from the source location.
POLL_CONFIG_FILE, POLL_LOCAL_CFG_DIR, POLL_PRIME_CFG_DIR | These are the keys corresponding to PRIME_CONFIG_FILE, LOCAL_CONFIG_DIR, and PRIME_CONFIG_DIR, respectively, for the active idle polling interval.
USE_RESULT_SUBDIRS | Setting USE_RESULT_SUBDIRS to 1 puts each set of result files in a different results subdirectory with a unique timestamp-based name. Setting to 0 avoids creating a unique subdirectory, and any earlier results in the parent "results" directory are overwritten by newer test results. Setting USE_RESULT_SUBDIRS to 0 is only recommended for use with Faban. (Conversely, setting USE_RESULT_SUBDIRS to 1 is not recommended when using Faban.)
USE_PTDS | USE_PTDS controls whether the power/temp daemons (PTDs) are used during the benchmark. Set to 0 to run without taking power or temperature measurements.
PTD_HOST[x] | PTD_HOST is the hostname of the system running the PTD. For more than one PTD, copy, paste, and increment the index (x) for each PTD.
PTD_PORT[x] | PTD_PORT is the corresponding port the PTD is listening on.
PTD_TARGET[x] | PTD_TARGET is the type of component the power/temp meter is monitoring. ("SUT" identifies the meter as monitoring a main system/server; "EXT_STOR" identifies the meter as monitoring any external storage used.)
SAMPLE_RATE_OVERRIDE[x], OVERRIDE_RATE_MS[x] | Setting SAMPLE_RATE_OVERRIDE for any PTD allows you to override the default sample rate for the power or temperature meter. This is not recommended in most cases. However, if overridden, OVERRIDE_RATE_MS is the sample rate (in milliseconds) used instead of the meter's default.
LOCAL_HOSTNAME[x], LOCAL_PORT[x] | LOCAL_HOSTNAME and LOCAL_PORT are used to specify the local network interface and port to use to connect with the PTD_HOST. In most cases, you do not need to specify these values. Leave them commented out unless needed.
LOAD_SCALE_FACTORS[t] | This is the tile-specific form of the fixed property LOAD_SCALE_FACTORS. Tile "t" runs at the specified load scaling factor. Compliant values are between 0.1 and 0.9 in increments of 0.1. For compliant runs, this property allows the last tile to run at reduced load. A run is non-compliant if it 1) defines more than one tile to run at a reduced load, 2) runs at greater-than-full load (i.e., a LOAD_SCALE_FACTORS value > 1.0), or 3) runs any tile other than the last tile at a reduced load.
RESULT_TYPE | Use RESULT_TYPE to control the type of result submissions and/or formatted reports you would like to create. Possible values are 1 through 7; each value generates a different combination of report types.
IGNORE_CLOCK_SKEW, CLOCK_SKEW_ALLOWED | Setting IGNORE_CLOCK_SKEW to "1" causes the prime controller to skip the system clock synchronization check at the beginning of a benchmark run. Setting it to "0" (the default) means the prime controller and the prime clients perform this check to ensure all prime clients, clients, and VMs are in time sync with the prime controller. If set to "0", CLOCK_SKEW_ALLOWED is the number of seconds of clock skew the prime controller and prime clients allow at the beginning of a benchmark run without aborting.
FIXED BENCHMARK PROPERTIES (changing these values results in a non-compliant test)

KEY | DESCRIPTION
---|---
NUM_WORKLOADS, VMS_PER_TILE | NUM_WORKLOADS defines the number of workloads per tile used to drive the SUT. VMS_PER_TILE is the number of VMs that are used in each tile. For a compliant run, NUM_WORKLOADS must be 4 and VMS_PER_TILE must be 6.
WORKLOAD_LABEL[w] | These values serve as descriptive labels of each of the workloads used in the benchmark. Assuming NUM_WORKLOADS = 4, there should be four corresponding values, one for each of the workloads.
IDLE_RAMP_SEC, IDLE_WARMUP_SEC, IDLE_POLL_SEC | IDLE_RAMP_SEC, IDLE_WARMUP_SEC, and IDLE_POLL_SEC are the ramp, warmup, and polling/runtime values used for the active-idle measurement phase only.
POLL_MASTERS | POLL_MASTERS controls whether or not to request polling data from the prime clients. If set to 0, the harness does not conduct prime client polling during the polling interval.
INTERVAL_POLL_VALUES | Set this to 0 for cumulative polling data over the entire measurement interval. Set it to 1 if you want only the polling data that is added between polling intervals. Note: some workloads do not support polling-interval-based results reporting and ignore a non-zero value. Therefore, the only value that ensures consistency across workloads is 0.
POLL_DELAY_SEC | POLL_DELAY_SEC is the number of seconds after all prime clients have started running that the prime controller waits before starting to request polling data.
BEAT_INTERVAL | BEAT_INTERVAL is the number of seconds between prime client pollings. This controls the frequency at which the harness polls the prime clients for runtime data (if POLL_MASTERS is set to 1).
RESULT_FILE_NAMES[w], POLL_RES_FILE_NAMES | RESULT_FILE_NAMES are the names of the results files created by the workload that the prime controller collects from the prime clients after a run has completed. The indexes correspond with the workload indexes. POLL_RES_FILE_NAMES is the corresponding equivalent result file collected during an active-idle run.
USE_WEIGHTED_QOS | USE_WEIGHTED_QOS controls the manner of calculating QOS for the workloads. A value of 0 applies the same weight to all QOS-related fields used to calculate the aggregate QOS value. A value of 1 (or higher) results in a frequency-weighted QOS being used to calculate the aggregate QOS.
PTD_POLL | Set PTD_POLL to 1 in order to poll the PTDs during the POLL_INTERVAL; set to 0 to avoid PTD polling.
POWER_POLL_VAL | POWER_POLL_VAL selects which value to poll from any power meter used during the test (possible values: "Watts", "Volts", "Amps", "PF").
TEMP_POLL_VAL | TEMP_POLL_VAL controls which value to poll from any temperature meter used during the test (options: "Temperature", "Humidity").
LOAD_SCALE_FACTORS, QUIESCE_SECONDS | LOAD_SCALE_FACTORS is the list of multipliers applied to the individual workload load levels. For each value, in the order listed, the benchmark harness runs a full run at the calculated load rate, with a QUIESCE_SECONDS wait interval between each point. The number of values in this list controls the number of iterations the benchmark executes.
WORKLOAD_SCORE_TMAX_VALUE[w] | WORKLOAD_SCORE_TMAX_VALUE is the theoretical maximum throughput rate for each workload. Comment these values out if you do not want to normalize scores to the theoretical max. Setting the value to 0 has the effect of not using this workload's score in calculating the result.
WORKLOAD_LOAD_LEVEL[w] | WORKLOAD_LOAD_LEVEL supersedes any values used in the workload-specific configuration files to control client load. For the appserver workload, txRate is overwritten with this value. For web, SIMULTANEOUS_SESSIONS is overwritten. For mail, the number of users is set to this value.
WORKLOAD_NUM_SHARED[w], WORKLOAD_AGG_AUDIT[w] | WORKLOAD_NUM_SHARED specifies the number of tiles sharing the database. WORKLOAD_AGG_AUDIT enables an additional custom Audit of the workloads after the prime controller finishes collecting the individual workload client audits. The appserver workload requires a custom Audit class to aggregate counts from the appservers and the shared database and then to validate results.
Due to network latency, we recommend dedicating a client to the webserver
workload. This section explains how to create a webserver client, whether
you use a physical or virtual client. See Section 3.0 in the SPEC virt_sc
User's Guide for instructions on creating and setting up clients.
The instructions below describe driving each tile using two client VMs,
where the second VM is used solely to drive the webserver workload.
For each client, clone an additional client, for example client1 to
wclient1 and client2 to wclient2. For each wclientN, set up its network
and host file and verify that it can connect to the corresponding
infraserverN and webserverN.
On the prime controller, update /etc/hosts to add the new webserver-only
client names, for example:
192.168.122.10 client1
192.168.122.11 wclient1
192.168.122.20 client2
192.168.122.21 wclient2
On the prime controller, edit Control.config and update the PRIME_HOST and
WORKLOAD_CLIENTS entries for each tile's webserver workload
([<tile#>][1]):
PRIME_HOST[0][0] = "client1:1098"
PRIME_HOST[0][1] = "wclient1:1096"
PRIME_HOST[1][0] = "client2:1098"
PRIME_HOST[1][1] = "wclient2:1096"
WORKLOAD_CLIENTS[0][0] = "client1:1091"
WORKLOAD_CLIENTS[0][1] = "wclient1:1010"
WORKLOAD_CLIENTS[1][0] = "client2:1091"
WORKLOAD_CLIENTS[1][1] = "wclient2:1010"
If you use a client manager script such as the one below on each client,
you need to update it to use wclientN for the webserver workload,
as shown below.
For one client per tile:
# Clientmgr.sh [tile_index]
# Script called from runspecvirt.sh on the prime controller for each clientN
#
java -jar clientmgr.jar -p 1098 -log > Clientmgr$1_1098.out 2>&1 &
java -jar clientmgr.jar -p 1094 -log > Clientmgr$1_1094.out 2>&1 &
java -jar clientmgr.jar -p 1092 -log > Clientmgr$1_1092.out 2>&1 &
java -jar clientmgr.jar -p 1096 -log > Clientmgr$1_1096.out 2>&1 &
java -jar clientmgr.jar -p 1088 -log > Clientmgr$1_1088.out 2>&1 &
For two clients per tile:
# Clientmgr.sh [tile_index]
# Script called from runspecvirt.sh on the prime controller for each clientN
#
java -jar clientmgr.jar -p 1098 -log > Clientmgr$1_1098.out 2>&1 &
java -jar clientmgr.jar -p 1094 -log > Clientmgr$1_1094.out 2>&1 &
java -jar clientmgr.jar -p 1092 -log > Clientmgr$1_1092.out 2>&1 &
ssh wclient$1 ". /root/.bash_profile ; java -jar clientmgr.jar -p 1096 -log > Clientmgr$1_1096.out 2>&1 & "
ssh wclient$1 ". /root/.bash_profile ; java -jar clientmgr.jar -p 1088 -log > Clientmgr$1_1088w.out 2>&1 & "
java -jar clientmgr.jar -p 1088 -log > Clientmgr$1_1088.out 2>&1 &