New features in JMeter 2.13

JMeter 2.13 has been out for some time now and brings many new features. The major enhancements are listed below.

New Element – New Async BackendListener with Graphite implementation

A new asynchronous Backend Listener has been added to allow sending result data to a backend. JMeter ships with a GraphiteBackendListenerClient that sends results to a Graphite server using the Pickle or Plaintext protocols.
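As a rough sketch, a Backend Listener configured with the GraphiteBackendListenerClient exposes parameters along these lines (host, port and prefix are illustrative; port 2003 is the usual Graphite plaintext port):

```properties
graphiteMetricsSender = org.apache.jmeter.visualizers.backend.graphite.TextGraphiteMetricsSender
graphiteHost          = graphite.example.com
graphitePort          = 2003
rootMetricsPrefix     = jmeter.
summaryOnly           = true
percentiles           = 90;95;99
```

Switching the sender class to the Pickle implementation is how the Pickle protocol is selected instead of Plaintext.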


New connect time metric

Starting with this version, a new metric called connectTime has been added. It represents the time taken to establish a connection. By default it is not saved to CSV or XML; to have it saved, enable the corresponding save-service property in jmeter.properties (or user.properties).
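As a sketch, enabling it in user.properties would look like this (jmeter.save.saveservice.connect_time is the standard save-service property for this metric):

```properties
# Save connect time to CSV/XML result files (JMeter 2.13+)
jmeter.save.saveservice.connect_time=true
```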


95th and 99th percentile addition to Aggregate Graph and Report

The Aggregate Graph and Aggregate Report listeners previously showed only the 90th percentile (historical behavior); the 95th and 99th percentiles have now been added and are customizable. To set the percentile values you want, override the aggregate report percentile properties in jmeter.properties (or user.properties).
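The percentile properties in JMeter 2.13 are believed to be the following; override them in user.properties to change which percentiles are shown (the values below are the defaults):

```properties
# Percentiles shown by Aggregate Graph / Aggregate Report
aggregate_rpt_pct1=90
aggregate_rpt_pct2=95
aggregate_rpt_pct3=99
```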


HTTP(S) Test Script Recorder improvement

The HTTP(S) Test Script Recorder is now able to detect authentication schemes and automatically adds a pre-configured HTTP Authorization Manager with the correct mechanism.

What should a good Load Test report contain?

Load testing can be a complex activity, from understanding requirements, planning, and defining the tests accurately to running them effectively. Reporting on the load test is often overlooked, yet the activity is not complete until the results are properly analysed, interpreted and reported. Performance reporting should follow the same basic rules as any reporting:

  • “Who-What-When-Where-Why-How”
  • A concise opening statement of theme
  • Some concrete arguments to back up the theme statement

A performance report should deliver a high-level overview of how a web site is performing under load, and also contain detailed visibility into the internal structure of the site and its infrastructure. Reports should illustrate the context and highlights of performance clearly, so that anyone can interpret them without having to be an expert in the data and metrics behind the indicators.

The report should also have some degree of customizability, allowing the information to be re-contrasted as needed – often the most compelling analysis involves only the interaction of two or three key elements of the data gathered, and the report should be re-formattable to showcase them. Also useful are the ability to generate reports quickly and to keep results from multiple runs of the test, so that it is easier to gauge improvement over time.

Thus a good test report should be able to show:

What – A good report identifies and reports on the Key Performance Indicators (KPIs) of the website you are testing.

When – A good report should be able to address discrete time durations, be it an instant test of a single component, or a duration test of multiple scenarios on a website running over a weekend or longer.

Where – A good test report should show where the test was carried out (the infrastructure and hosts of the website).

Why – The report should have some method to plainly identify why the KPI objectives are not being met, by indicating an issue in an individual object or subsystem, e.g. a graphical representation of the performance of each request made to the website. This helps in determining a course of action for a possible solution.

What’s new in JMeter 2.10


JMeter 2.10 has been released, with a few notable changes and improvements. Some of them are listed below:


New CSS/JQuery Tester in View Results Tree

A new CSS/JQuery Tester has been added to the View Results Tree listener. This makes it very easy to test CSS/JQuery expressions.

[Screenshot: CSS/JQuery Tester in JMeter 2.10]
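For example, to check an extraction before wiring it into a CSS/JQuery Extractor, you might test an expression like this in the tester (the selector and attribute are illustrative, not from any particular page):

```
CSS/JQuery expression:  div.product a.title
Attribute:              href
```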

Improvement in HTTP(S) recorder

The “HTTP Proxy Server” element has been renamed to “HTTP(S) Test Script Recorder”.

HTTP(S) recording has been improved, with many bug fixes around this feature.

[Screenshot: HTTP(S) Test Script Recorder in JMeter 2.10]

MongoDB support

You can now load test MongoDB using the new MongoDB Source Config element.

[Screenshot: MongoDB Source Config in JMeter 2.10]

Kerberos Authentication support

The HTTP Authorization Manager now supports Kerberos authentication in addition to BASIC_DIGEST.

[Screenshot: Kerberos authentication in JMeter 2.10]

New Functions

New functions (__urlencode and __urldecode) are now available to encode and decode URL-encoded characters.

[Screenshot: new functions in JMeter 2.10]
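These functions are believed to perform standard application/x-www-form-urlencoded encoding and decoding, so their behavior can be sketched with java.net.URLEncoder and URLDecoder (assuming UTF-8, the encoding JMeter uses):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;

public class UrlCodecDemo {
    public static void main(String[] args) throws Exception {
        // Roughly what ${__urlencode(hello world & more)} would produce
        String encoded = URLEncoder.encode("hello world & more", "UTF-8");
        System.out.println(encoded); // hello+world+%26+more

        // Roughly what ${__urldecode(hello+world+%26+more)} would give back
        String decoded = URLDecoder.decode(encoded, "UTF-8");
        System.out.println(decoded); // hello world & more
    }
}
```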

Improvement in Distributed testing

  • The number of threads on each node is now reported to the controller.
  • Performance improvements in BatchSampleSender.
  • Addition of two SampleSender modes (StrippedAsynch and StrippedDiskStore).

[Screenshot: thread summariser in distributed testing, JMeter 2.10]


  • The WebService (SOAP) Request element has been removed from the GUI by default, as it is deprecated. (Use HTTP Request with Body Data instead; see also the template “Building a SOAP Webservice Test Plan”.) If you need to show it, see the not_in_menu property in jmeter.properties.
  • In the HTTP(S) Test Script Recorder, if Grouping is set to “Put each group in a new Transaction Controller”, the recorder now creates Transaction Controller instances with “Include duration of timer and pre-post processors in generated sample” set to false. This default reflects response time more accurately.
  • The Transaction Controller now sets the Response Code of the generated parent sampler (if “Generate Parent Sampler” is checked) to the response code of the first failing child when one of the children fails; in previous versions the Response Code was empty.

For more details on the new features, click here.

Effective Performance Analysis

Successful Performance Analysis depends on the ability to identify bottlenecks in the application or system infrastructure. A bottleneck can be caused by any element on a page that is taking longer than other page elements to fully load, or it could be an overloaded segment of the network or a security process that is delaying the browser’s requests and responses.

Analysis and tuning is not a “one time” type of event, but rather a cyclical process of evaluation and elimination of performance bottlenecks, using iterative load testing of the application. Every cycle of a load test could uncover new issues in the system, as the elimination of one larger issue in a previous cycle might unmask other issues.

Listed below are a few effective performance analysis steps and methods.

Response Size and compression

This should be the first step in performance analysis. It can easily be done using browser plugins such as HttpWatch, Firebug NetExport or the IE Developer Toolbar. The response size should not be too large, and responses should be compressed.

[Screenshot: response size and compression]


External interactions/interfaces

A page might access external resources to get data, such as a web service call or a third-party JavaScript. These external interactions might take longer than expected. In one project in which I was doing performance testing, a third-party JavaScript was fetched every time the page was loaded; it was not cached due to a licensing issue. It turned out that the URL used to access the JavaScript had been blacklisted and returned 403 errors. This type of issue can increase page response time.

Time to first byte

A long first-byte time indicates that the host server is delayed in getting the beginning of the requested information back to the browser. This is most frequently caused by an overloaded host server, or a congested pathway between the server and the browser. Solutions for an element with a long first-byte time might include serving it from a different host, moving the serving host to another location in the infrastructure, or optimizing delivery of the object through a Content Delivery Network (CDN).

Response transfer time

A long transfer time might indicate that the element itself is oversized for the application. This could be rectified by having the application fetch the information in this element through a series of smaller requests, or by compressing it if that is not already done.

Gaps between requests

Sometimes there is a short delay between the conclusion of one request and the beginning of processing for the next. The most frequent reason is that the browser requires extra time to establish a connection to the target web server. This can be tuned in the application.

The above are a few areas that can be examined to identify performance bottlenecks and help in effective performance analysis.

Distributed testing in JMeter

You have reached the limit of one machine while doing load testing and now want to distribute the load across different machines. In JMeter this is commonly called distributed load testing or remote testing.

Distributed (remote) testing has three parts: the JMeter master, the JMeter slaves and the target.

The following figure explains the relationship between them:

[Figure: relationship between JMeter master, slaves and target]


Distributed setup prerequisites

1. All firewalls on the JMeter master and slave machines should be turned off.

2. All machines should be on the same subnet.

3. Use the same version of JMeter on all machines (master and slaves).


Setting up the environment

The setup consists of three parts:

Master – runs the JMeter GUI/command line and controls the test.

Slave – runs the JMeter server, which takes commands from the master and sends requests to the target system.

Target – the web server to be load/stress tested.

Slave configuration

Make sure jmeter-server.bat has the proper path to rmiregistry.

Master Configuration

  • Open jmeter.properties and edit the line “remote_hosts=”.
  • Add the IP addresses of the slave machines, separated by commas.
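A sketch of the resulting entry in the master's jmeter.properties (the IP addresses are illustrative):

```properties
# Comma-separated list of remote (slave) hosts
remote_hosts=192.168.0.101,192.168.0.102
```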


Starting the Distributed test

  • Start jmeter-server.bat on all slave machines.
  • Start jmeter.bat on the master machine and open the test plan to run. Use the Remote Start or Remote Start All option from the menu. Alternatively, you can run the test from the command prompt as shown below.

jmeter -n -t my_test.jmx -l log.jtl -r

-n runs JMeter in non-GUI mode

-t [name of the JMX file that contains the test plan]

-l [name of the JTL file to log sample results to]

-r runs all remote servers specified in jmeter.properties (or remote servers specified on the command line by overriding properties)

Page Load time using Firebug and Selenium

Firebug is a well-known tool for debugging and measuring page load time. It provides detailed timing information about the HTTP traffic initiated by a page. The Net panel, which collects all the data, can be used to export it into a HAR file.


  1. Firebug – a Firefox plugin
  2. NetExport – Firebug extension for exporting data collected by the Net panel.
  3. Selenium 2 – Selenium is a suite of tools specifically for automating web browsers.

The sample test is developed in Java. You can use any other language supported by Selenium.


  1. Download and install Firebug and NetExport. Also, make sure that both the files are kept in the same folder.
  2. Download Selenium selenium-server-standalone-X.XX.X.jar.

Below is the sample code.

import java.io.File;
import java.io.IOException;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;

public class FirebugPerf {
    public static void main(String[] args) {
        FirefoxProfile profile = new FirefoxProfile();
        try {
            // Add the Firebug and NetExport extensions to the profile
            File firebug = new File("firebug-1.11.2-fx.xpi");
            File netExport = new File("netExport-0.8.xpi");
            profile.addExtension(firebug);
            profile.addExtension(netExport);
        } catch (IOException err) {
            err.printStackTrace();
            return;
        }

        // Set default Firefox preferences
        profile.setPreference("app.update.enabled", false);
        String domain = "extensions.firebug.";

        // Set default Firebug preferences
        profile.setPreference(domain + "currentVersion", "2.0");
        profile.setPreference(domain + "allPagesActivation", "on");
        profile.setPreference(domain + "defaultPanelName", "net");
        profile.setPreference(domain + "net.enableSites", true);

        // Set default NetExport preferences
        profile.setPreference(domain + "netexport.alwaysEnableAutoExport", true);
        profile.setPreference(domain + "netexport.showPreview", false);
        profile.setPreference(domain + "netexport.defaultLogDir", "C:\\Firebug_Automation\\output\\");

        WebDriver driver = new FirefoxDriver(profile);

        try {
            // Wait till Firebug is loaded
            Thread.sleep(5000);

            // Load the test page (URL is illustrative)
            // You can add more scenarios and pages as required
            driver.get("http://www.example.com");

            // Wait till the HAR is exported by NetExport
            Thread.sleep(10000);
        } catch (InterruptedException err) {
            err.printStackTrace();
        } finally {
            driver.quit();
        }
    }
}

Compile and run it. You will get the .har file in the output folder, as defined by the preference:

profile.setPreference(domain + "netexport.defaultLogDir", "C:\\Firebug_Automation\\output\\");

Note: If you are using Selenium for functional automation, you can modify the above code and integrate it with your existing functional tests.

  1. This code depends on Firebug and hence will run only in Firefox.
  2. The output is a .har file; to view it you need a HAR file viewer.

For more info on this visit

Results Visualization using XSL stylesheets in JMeter

Using XSL stylesheets in JMeter with .jtl files

JMeter provides several .xsl files to visualize results in a human-readable format outside the JMeter tool.
These are located in the extras folder of the JMeter home directory (%APACHE_JMETER_HOME%\extras).

  • jmeter-results-detail-report.xsl
  • jmeter-results-detail-report_21.xsl
  • jmeter-results-report.xsl
  • jmeter-results-report_21.xsl

1. Open the JTL file in WordPad or any other text editor and insert the following line:
<?xml-stylesheet type="text/xsl" href="<Path of JMeter home>\extras\jmeter-results-report_21.xsl"?>
e.g.: <?xml-stylesheet type="text/xsl" href="D:\jakarta-jmeter-2.9\extras\jmeter-results-report_21.xsl"?>

This line should be inserted between the line <?xml version="1.0" encoding="UTF-8"?> and <testResults version="1.2"> as shown below:

[Screenshot: JTL file with the stylesheet declaration inserted]
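As a plain-text sketch, the top of the edited JTL file would look like this (the stylesheet path is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="D:\jakarta-jmeter-2.9\extras\jmeter-results-report_21.xsl"?>
<testResults version="1.2">
  ...
</testResults>
```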

2. Save the JTL file and open an Excel worksheet. Drag the JTL file into it. You are there.


[Screenshot: JTL results rendered via the XSL stylesheet]

You will see that the .jtl file is parsed and converted into an Excel worksheet.

But wait a minute: you should save your results in XML format, not CSV. To do so, either change the default settings in the JMeter properties files or use the Configure option in the listener, as shown below:
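Assuming the standard save-service property, switching the default output format to XML in user.properties looks like:

```properties
# Save listener results as XML rather than CSV
jmeter.save.saveservice.output_format=xml
```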


[Screenshot: listener result-saving configuration]


JMeter – Transaction Controller

Transaction Controller

The Transaction Controller generates an additional sample which measures the overall time taken to perform the nested test elements.

Note that this time by default includes all processing within the controller scope, not just the samples; this can be changed by unchecking "Include duration of timer and pre-post processors in generated sample".

[Screenshot: Transaction Controller]


The generated sample time includes all the times for the nested samplers, and any timers etc. Depending on the clock resolution, it may be slightly longer than the sum of the individual samplers plus timers. The clock might tick after the controller recorded the start time but before the first sample starts. Similarly at the end.

The transaction is only regarded as successful if all its sub-samples are successful.

[Screenshot: Transaction Controller results]

If "Generate Parent Sample" is selected, individual samples no longer appear as separate entities in listeners other than the View Results Tree. Also, the sub-samples do not appear in CSV log files, but they can be saved to XML files.

Benefits of having a Transaction Controller

  • It helps you to get the response time of a page, provided you have grouped all the elements of the page correctly.
  • Do not treat this as browser page-load time: JMeter does not emulate browser rendering behavior.

In JMeter 2.5+ there is an option to get closer to browser rendering time by emulating the browser behavior of downloading resources over parallel connections (in a browser, page resources are not loaded sequentially), using the concurrent pool option.

For more detail on this feature, click here to view my post on concurrent pool size and browser emulation.

Why do performance testing?

Objective/Need of Performance testing

At the highest level, performance testing is almost always conducted to address one or more risks related to expense, opportunity costs, continuity, and/or corporate reputation. Some more specific reasons for conducting performance testing include:

Assessing release readiness by:

    1. Enabling us to predict or estimate the performance characteristics of an application in production and evaluate whether or not to address performance concerns based on those predictions. These predictions are also valuable to the stakeholders who make decisions about whether an application is ready for release or capable of handling future growth, or whether it requires a performance improvement/hardware upgrade prior to release.
    2. Providing data indicating the likelihood of user dissatisfaction with the performance characteristics of the system.
    3. Providing data to aid in the prediction of revenue losses or damaged brand credibility due to scalability or stability issues, or due to users being dissatisfied with application response time.

Assessing infrastructure adequacy by:

    1. Evaluating the adequacy of current capacity.
    2. Determining the acceptability of stability.
    3. Determining the capacity of the application’s infrastructure, as well as determining the future resources required to deliver acceptable application performance.
    4. Comparing different system configurations to determine which works best for both the application and the business.
    5. Verifying that the application exhibits the desired performance characteristics, within budgeted resource utilization constraints.

Assessing adequacy of developed software performance by:

    1. Determining the application’s desired performance characteristics before and after changes to the software.
    2. Providing comparisons between the application’s current and desired performance characteristics.

Improving the efficiency of performance tuning by:

    1. Analyzing the behavior of the application at various load levels.
    2. Identifying bottlenecks in the application.
    3. Providing information related to the speed, scalability, and stability of a product prior to production release, thus enabling you to make informed decisions about whether and when to tune the system.
In a nutshell, it helps in:

  • Determining business transaction response times
  • Identifying system bottlenecks
  • Measuring server resource usage (e.g. CPU, memory, disk space) and network bandwidth (throughput) and latency (delay) – for example, limited network throughput introduces latency when transmitting larger amounts of data to a specific location
  • Determining the optimal system (hardware/software) configuration
  • Verifying current system capacity and scalability for future growth
  • Determining how many users the system can support
  • Determining whether the application will meet its SLA