Version 1.0
Last Modified: October 20, 2007
Q1: | What is SPECjms2007? |
A1: | SPECjms2007 is an industry-standard benchmark designed to measure the performance and scalability of JMS-based Message-Oriented Middleware (MOM) platforms. It exercises most of the features typically used in the major classes of JMS applications, including both point-to-point (P2P) and publish/subscribe (pub/sub) messaging, transactional and non-transactional messages, persistent and non-persistent messages, different message types, as well as one-to-one, one-to-many, and many-to-many interactions. |
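For readers new to JMS, the following minimal sketch (not part of the benchmark kit) illustrates the point-to-point and publish/subscribe styles the benchmark exercises, using the JMS 1.1 unified API. It requires a JMS provider on the classpath, and the JNDI names (`jms/ConnectionFactory`, `jms/OrderQueue`, `jms/PriceUpdates`) are hypothetical placeholders that depend on the provider's configuration.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class JmsSendSketch {
    public static void main(String[] args) throws Exception {
        // Look up provider objects via JNDI; names here are hypothetical.
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Point-to-point: exactly one consumer receives each queued message.
        Queue queue = (Queue) ctx.lookup("jms/OrderQueue");
        MessageProducer p2p = session.createProducer(queue);
        p2p.setDeliveryMode(DeliveryMode.PERSISTENT);      // persistent message
        p2p.send(session.createTextMessage("order #1"));

        // Publish/subscribe: every subscriber to the topic receives a copy.
        Topic topic = (Topic) ctx.lookup("jms/PriceUpdates");
        MessageProducer pub = session.createProducer(topic);
        pub.setDeliveryMode(DeliveryMode.NON_PERSISTENT);  // non-persistent message
        pub.send(session.createTextMessage("price update"));

        conn.close();
    }
}
```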
Q2: | How is SPECjms2007 different from SPECjAppServer2004? |
A2: | SPECjms2007 is a comprehensive JMS-based Message-Oriented Middleware (MOM) benchmark which exercises many MOM features not utilized in SPECjAppServer2004. |
Q3: | Does this benchmark replace SPECjAppServer2004? |
A3: | No. While SPECjms2007 exercises many more JMS features, it is not a Java Enterprise Edition (Java EE) benchmark, so it does not require a complete Java EE environment, nor does it measure the other Java EE components covered by SPECjAppServer2004, such as Message-Driven Beans (MDBs), Enterprise JavaBeans (EJBs), servlets, JavaServer Pages (JSPs), etc. |
Q4: | Does this benchmark make SPECjvm98 or SPECjbb2005 obsolete? |
A4: | No. While they all utilize Java, SPECjvm98 is a client JVM benchmark and SPECjbb2005 is a server JVM benchmark. JVM benchmarks cover a different class of workloads not requiring JMS-based Message-Oriented Middleware. |
Q5: | What are the performance metrics for SPECjms2007? |
A5: | SPECjms2007 has two metrics, SPECjms2007@Horizontal and SPECjms2007@Vertical. SPECjms2007@Horizontal is the measure of the SUT performance in the Horizontal Topology. SPECjms2007@Vertical is the measure of the SUT performance in the Vertical Topology. |
Q5.1: | What are the Horizontal and Vertical Topologies? |
A5.1: | Topology refers to the ways that the benchmark may be scaled. With the Horizontal Topology, the workload is scaled by increasing the number of destinations (queues and topics) while keeping the traffic per destination constant. With the Vertical Topology, the traffic (in terms of message count) pushed through a destination is increased while keeping the number of destinations fixed. |
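As an illustrative sketch (not part of the benchmark kit; the method names and numbers below are invented for this example), the two scaling approaches can be contrasted as simple arithmetic:

```java
// Hypothetical illustration of the two SPECjms2007 scaling topologies.
// All names and rates are invented for this sketch.
public class TopologyScaling {

    // Horizontal: the destination count grows with BASE,
    // while the traffic per destination stays constant.
    public static int horizontalTotalRate(int base, int ratePerDestination) {
        int destinations = base;                  // more queues/topics as BASE grows
        return destinations * ratePerDestination;
    }

    // Vertical: the destination count is fixed,
    // while the traffic per destination grows with BASE.
    public static int verticalTotalRate(int base, int fixedDestinations, int unitRate) {
        int ratePerDestination = base * unitRate; // more messages per destination
        return fixedDestinations * ratePerDestination;
    }

    public static void main(String[] args) {
        // Doubling BASE doubles total traffic in both topologies,
        // but by different means.
        System.out.println(horizontalTotalRate(10, 50)); // 10 destinations x 50 msg/s
        System.out.println(verticalTotalRate(10, 5, 10)); // 5 destinations x 100 msg/s
    }
}
```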
Q6: | Where can I find published results for SPECjms2007? |
A6: | SPECjms2007 results are available on SPEC's web site: http://www.spec.org/. |
Q7: | Who developed SPECjms2007? |
A7: | SPECjms2007 was developed by the SPECjms Working Group within SPEC's Java Subcommittee. Technische Universität Darmstadt, IBM, Sun, Oracle, BEA, Sybase and Apache participated in the design, implementation and testing phases of the project. See the Credits page for more details. |
Q8: | How do I obtain the SPECjms2007 benchmark? |
A8: | To place an order, use the on-line order form or contact SPEC at http://www.spec.org/spec/contact.html. |
Q9: | How much does the SPECjms2007 benchmark cost? |
A9: | Current pricing for all the SPEC benchmarks is available from the SPEC on-line order form. SPEC members receive the benchmark at no extra charge. |
Q10: | How can I publish SPECjms2007 results? |
A10: | You need a SPECjms2007 license in order to publish results. All results are subject to review by SPEC prior to publication. For more information about submitting results, please contact SPEC. |
Q11: | How much does it cost to publish results? |
A11: | Contact SPEC at http://www.spec.org/spec/contact.html to learn the current cost to publish SPECjms2007 results. SPEC members can submit results free of charge. |
Q12: | Where do I find answers to questions about running the benchmark? |
A12: | The procedures for installing and running the benchmark are contained in the SPECjms2007 User's Guide, which is included in the product kit and is also available from the SPEC web site. |
Q13: | Where can I go for more information? |
A13: | SPECjms2007 documentation consists mainly of four documents: User's Guide, Design Document, Run and Reporting Rules, and this FAQ. The documents can be found in the benchmark kit or on SPEC's Web site: http://www.spec.org/. |
Q14: | Although there is no price/performance metric, you provide a BOM for reproducing results. Can I create my own price/performance metric and report it alongside SPEC's published results? |
A14: | SPEC does not endorse any price/performance metric for the SPECjms2007 benchmark. Whether vendors or other parties can use the performance data to establish and publish their own price/performance information is beyond the scope and jurisdiction of SPEC. Note that the benchmark run rules do not prohibit the use of $/"SPECjms2007@Horizontal" or $/"SPECjms2007@Vertical" calculated from pricing obtained using the BOM. |
Q15: | Can I compare the SPECjms2007@Horizontal metric with the SPECjms2007@Vertical metric? |
A15: | No. The metrics are not comparable because the workloads are different. |
Q16: | Can I compare SPECjms2007 results with any other benchmark results? |
A16: | No. SPECjms2007 uses a unique workload mix, run and reporting rules, and metrics. |
Q17: | Can I compare SPECjms2007 results to results from other SPEC benchmarks? |
A17: | No. There is no logical way to translate results from one benchmark to another. |
Q18: | Do you permit benchmark results to be estimated or extrapolated from existing results? |
A18: | No. This is an implementation benchmark and all the published results have been achieved by the submitter and reviewed by the committee. Extrapolations of results cannot be accurately achieved due to the complexity of the benchmark. |
Q19: | What does SPECjms2007 test? |
A19: | SPECjms2007 is designed to test the performance and scalability of JMS-based Message-Oriented Middleware (MOM) and each of the components that make up the application environment, such as the hardware, JMS server, and JVM. See Section 1.1 of the SPECjms2007 Design Document for more information. |
Q20: | What are the significant influences on the performance of the SPECjms2007 benchmark? |
A20: | The most significant influences on the performance of the benchmark are the JMS server software, the JVM, and the underlying hardware. |
Q21: | What is the benchmark workload? |
A21: | The application scenario chosen for SPECjms2007 models the supply chain of a supermarket company. With the Horizontal Topology, the number of supermarkets is increased. With the Vertical Topology, the amount of products sold per supermarket is increased. For complete details see the SPECjms2007 Design Document. |
Q22: | Can I use SPECjms2007 to determine the size of the server I need? |
A22: | In general, SPECjms2007 should not be used to size a JMS server configuration, because it is based on a specific type of workload. However, in certain cases, the benchmark can be used for rough sizing if the target application workload can be approximated using a customized SPECjms2007 workload. For this purpose, the benchmark provides a freeform topology that can be used to build a custom workload tailored to the customer's requirements using SPECjms2007 components as building blocks. Note, however, that the benchmark makes numerous assumptions about the workload, which might or might not apply to other user applications. |
Q23: | What hardware is required to run the benchmark? |
A23: | In addition to the hardware for the JMS server, one or more client machines are required, as well as the network equipment to connect the clients to the server. The number and size of client machines required by the benchmark will depend on the scale of the topology that is run. |
Q24: | What is the minimum configuration necessary to test this benchmark? |
A24: | A SPEC member has run the benchmark on a G4 1.3GHz laptop system with 1GB of RAM. The benchmark completed successfully with a Vertical BASE of 2. This is not a valid configuration that you can use to report results, however, as it does not meet the requirements of the benchmark. |
Q25: | What software is required to run the benchmark? |
A25: | In addition to the operating system, SPECjms2007 requires JMS Server software and a Java Virtual Machine (JVM). |
Q26: | Do you provide source code for the benchmark? |
A26: | Yes, but you are required to run the compiled jar files provided with the benchmark if you are publishing results. As a general rule, modifying the source code is not allowed. The only component of the benchmark that may be modified is the product-specific Provider Module. For more information, refer to the SPECjms2007 Run and Reporting Rules. |
Q27: | Can I use a JMS Server compliant with the JMS Specification 1.02 to run this benchmark? |
A27: | No. SPECjms2007 requires a server compliant with the JMS Specification version 1.1 or later. |
Q28: | What certification is required of the JMS Server? |
A28: | Currently, there is no certification requirement for the JMS Server other than passing the benchmark and satisfying the SPECjms2007 Run and Reporting Rules. The JMS Server must be compliant with the JMS Specification version 1.1 or later. |
Q29: | Is the benchmark cluster-scalable? |
A29: | Yes. |
Q30: | How scalable is the benchmark? |
A30: | The SPECjms working group members have individually exercised the benchmark to significant levels of scalability including network scalability. |
Q31: | Can I report with vendor A hardware, vendor B JMS Server, and vendor C database software? |
A31: | The SPECjms2007 Run and Reporting Rules do not preclude third-party submission of benchmark results, but result submitters must abide by the licensing restrictions of all the products used in the benchmark; SPEC is not responsible for vendor (hardware or software) licensing issues. Many products include a restriction on publishing benchmark results without the express written permission of the vendor. |
Q32: | Can I report results for public domain software? |
A32: | Yes, as long as the product satisfies the SPECjms2007 Run and Reporting Rules. |
Q33: | Are the results independently audited? |
A33: | No, but they are subject to committee review prior to publication. |
Q34: | Can I announce my results before they are reviewed by the SPEC Java subcommittee? |
A34: | No. |
Q35: | Are results sensitive to the client-side components of the SUT? If they are, how can I report optimal performance with a) a smaller number of powerful client machines or b) a larger number of less powerful client machines? |
A35: | SPECjms2007 results are not that sensitive to the type of client driver machines, as long as they are powerful enough to drive the workload for the given workload scaling. Experience shows that if the client machines are overly stressed, one cannot reach the throughput required for passing the run. |
Q36: | This is an end-to-end solution benchmark. How can I determine where the bottlenecks are? Can you provide a profile or some guidance on tuning issues? |
A36: | Unfortunately, every combination of hardware, software, and specific configuration poses a different set of bottlenecks. It would be difficult or impossible to provide tuning guidance covering such a broad range of components and configurations; more specific guidelines become possible only once the set of products and configurations is narrowed down. Please contact the respective software and/or hardware vendors for tuning guidance for their products. See also Section 6.3 in the SPECjms2007 User's Guide. |
Q37: | Runs using small BASE numbers always seem to fail. Is there a minimum BASE required to get a passing run? |
A37: | When running the Vertical Topology with a low BASE, you may see failures to reach the required input rates, along with general instability. This is because a BASE of 10 or less can make the message rate per Driver thread too low for enough messages to be generated during the run, so the empirical message inter-arrival time distribution does not match the target distribution closely enough. Increasing the BASE raises the message rates of the Driver threads, which helps overcome failed input rates and instability. The minimum BASE for the Horizontal Topology is 5. |
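The statistical effect described above can be sketched as follows (an illustrative toy, not benchmark code; the class and method names, the target mean, and the sample counts are all invented): with few samples, the empirical mean of randomly drawn inter-arrival times can deviate noticeably from the target, while a large sample converges to it.

```java
import java.util.Random;

// Hypothetical sketch of why a low BASE destabilizes input rates: with few
// messages per Driver thread, the empirical inter-arrival time distribution
// is a noisy estimate of the target distribution.
public class InterArrivalSketch {

    // Draw n exponentially distributed inter-arrival times with the given
    // target mean and return their empirical mean.
    public static double empiricalMean(int n, double targetMean, long seed) {
        Random rnd = new Random(seed);
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            // Inverse-transform sampling of the exponential distribution;
            // 1 - nextDouble() is in (0, 1], so the log is always defined.
            sum += -targetMean * Math.log(1.0 - rnd.nextDouble());
        }
        return sum / n;
    }

    public static void main(String[] args) {
        double target = 1.0; // seconds between messages (illustrative)
        // Few messages per thread (low BASE): a noisy empirical mean.
        System.out.println(empiricalMean(20, target, 42L));
        // Many messages per thread (higher BASE): close to the target.
        System.out.println(empiricalMean(20_000, target, 42L));
    }
}
```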
Q38: | Does SPECjms2007 require an independent human auditor? |
A38: | No. The tester takes responsibility for his or her results; see the disclaimer at http://www.spec.org/spec/disclaimer.html. SPECjms2007 does include a software process called the "auditor", which runs a set of tests described in Section 3.6 of the SPECjms2007 Run and Reporting Rules. |