In this article, we will discuss the core concepts of throughput in performance testing.
Before getting into the details, let’s look at some essentials of performance testing.
Performance testing is important because:

- You can start performance testing early in the development stages of your software application. That way, you can optimize your web server and avoid costly fixes at later stages.
- Discovering performance problems after the application has been deployed means many working hours spent rectifying the issues, which can be very expensive.
As soon as the basic web pages of the application work, the quality assurance team should run initial load tests. From that point onward, they should perform performance testing regularly for every build.
There are different tools and criteria for performance testing applications. Here, we will focus on one important measure: throughput.
Every software application or website has many users performing different requests. Testers need to ensure that the application can handle the required volume of requests before going live.
There are some performance testing basics that need to be measured during the process, and throughput is one of them. Let’s find out what throughput in performance testing is.
Before starting the test, we need to set a realistic throughput goal so that we get precise and reliable results. A realistic goal should reflect the characteristics of the production environment, such as the expected number of concurrent users and the transaction rate at peak hours.
Here, we will explain the concept of throughput with the help of a real-life example. Imagine there’s a fast food stall named “Yummy Burgers.” They serve burgers and fries for the customers.
Let’s say that “Yummy Burgers” has three workers at the stall, and each worker always takes 5 minutes to serve the food to one customer.
So, if they have three customers lined up to be served by three workers, it means “Yummy Burgers” can serve food to three customers in 5 minutes.
Hence, if we need to make a performance report of “Yummy Burgers”, it would show that its throughput is three customers per five minutes.
Yummy Burgers’ dilemma is that, no matter how many customers are waiting for food, the maximum number they can serve in a given time frame is always the same: three per five minutes. This is their maximum throughput.
As more customers line up for the food, they must wait for their turn, creating a queue.
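The burger analogy can be expressed as a small calculation. This is a minimal illustrative sketch; the function name and numbers are made up for this example:

```python
# Throughput in the "Yummy Burgers" analogy: customers served per unit
# of time, capped by the number of workers and their service time.

def max_throughput(workers: int, service_time_min: float) -> float:
    """Customers served per minute when every worker is busy."""
    return workers / service_time_min

# Three workers, each taking 5 minutes per customer:
per_minute = max_throughput(workers=3, service_time_min=5)
print(per_minute)      # 0.6 customers per minute
print(per_minute * 5)  # 3.0 customers per 5-minute window
```

Adding more waiting customers changes nothing in this formula: only more workers or faster service raises the maximum throughput.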
The same concept applies to the testing of a web application.
If a web application receives 100 requests per second, but it can handle only 70 requests per second, the remaining 30 requests must wait in a queue.
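The queueing effect described above can be sketched in a few lines. This is a simplified model of a server, not real request handling:

```python
# If requests arrive faster than the server can process them,
# the excess accumulates in a growing backlog (queue).

def backlog_after(seconds: int, arrival_rate: int, capacity: int) -> int:
    """Requests still waiting after `seconds` of sustained load."""
    queued = 0
    for _ in range(seconds):
        queued += arrival_rate           # new requests this second
        queued -= min(queued, capacity)  # server handles up to `capacity`
    return queued

# 100 requests/s arriving, server handles 70/s: 30 pile up every second.
print(backlog_after(1, 100, 70))   # 30
print(backlog_after(10, 100, 70))  # 300
```

Note that while arrival rate exceeds capacity, the queue grows without bound; if arrivals stay below capacity, the backlog stays at zero.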
In performance testing, throughput is commonly expressed in transactions per second (TPS).
Apache JMeter is a popular tool for testing the performance of software applications. It helps determine the maximum number of concurrent users an application can handle, and it provides graphical analysis of the test results.
JMeter provides several ways to record the value of throughput. Some JMeter listeners you can use for this purpose are:

- Aggregate Report
- Summary Report
- Aggregate Graph
- Graph Results
JMeter also provides a timer component, “Constant Throughput Timer”, which paces requests toward a target throughput (specified in samples per minute) when load testing the application.
Now, we will show the usage of throughput in a performance test using JMeter. Let’s say we are going to conduct a sample test with 100 concurrent threads and track the value of throughput.
Suppose we have the latest release of JMeter installed on our system, and we have already performed all other required configurations. Now, we have to build a test plan.
In this test, we will define five ThreadGroup elements, each with a different ramp-up time: 0, 15, 25, 35, and 45 seconds. The ramp-up time is the period over which JMeter starts all the threads in a group. We will configure 100 users in each ThreadGroup element.
If we want to configure a larger number of users, a longer ramp-up time is usually required.
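To build intuition for ramp-up, here is a rough sketch of how a ramp-up period spreads out thread start times. This is not JMeter’s actual scheduling code, but the idea is that with N threads and a ramp-up of R seconds, threads start roughly R/N seconds apart:

```python
# Approximate thread start times for a given ramp-up period.
# With ramp-up 0, all threads start at once (an instantaneous burst).

def thread_start_times(num_threads: int, ramp_up_s: float) -> list[float]:
    if ramp_up_s == 0:
        return [0.0] * num_threads       # all threads start at once
    interval = ramp_up_s / num_threads   # seconds between thread starts
    return [i * interval for i in range(num_threads)]

print(thread_start_times(100, 0)[:3])   # instant burst: [0.0, 0.0, 0.0]
print(thread_start_times(100, 15)[:3])  # gradual start: [0.0, 0.15, 0.3]
```

This makes it easy to see why a larger user count calls for a longer ramp-up: with the same ramp-up period, doubling the threads halves the gap between starts, making the load spikier.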
These thread groups will have an HTTP sampler that will generate requests on the homepage of a sample website (Suppose www.samplesite.com).
In Use Case 1, we have a ThreadGroup element configured with 100 threads and a ramp-up time of 0.
Its “Number of Threads” field is set to 100, which means all 100 users will send requests at once. Similarly, we can configure the remaining four thread groups and set their ramp-up times to 15, 25, 35, and 45 seconds. Also, name the samplers for each thread group.
As mentioned before, these HTTP samplers will point to the home page of the sample website.
It is necessary to run these thread groups in a proper sequence. For this purpose, select “Test Plan” from the control panel, and check the field “Run Thread Groups consecutively.”
“Aggregate Report” is a listener that is used to analyze and observe the test results. To use this listener, right-click on “Test Plan” and select:
Add → Listener → Aggregate Report
Then click on the start icon to run the test.
Now, let’s see how to understand the results of throughput from the Aggregate Report.
In the first thread group (ramp-up time 0), all threads start at once and put an instantaneous load on the server. This scenario yields fairly high throughput, but it is not practical, so it will not produce realistic output.
The second and third thread groups have ramp-up times in a realistic range, so they are more likely to show representative throughput and request load times.
Thread groups four and five have longer ramp-up times, which spread the load out and reduce the measured throughput.
Hence, the reliable output can be determined from the second and third thread group results.
The decision to deploy a new release or change relies on the application’s ability to handle a specific TPS. So, the performance test plan has certain throughput goals. But we need to make sure these goals are realistic and represent the true characteristics of the production environment.
A test plan is of little value if it passes only under unrealistic conditions. For example, the test plan described above showed higher throughput for the first thread group, but that did not reflect the actual scenario of the live environment.
With such methods, we cannot properly judge whether the application can handle the actual load. Therefore, setting up suitable tests is crucial.
In a nutshell, throughput is a crucial performance indicator for web applications. But relying on throughput metrics alone is not enough; throughput should be evaluated alongside latency and response times.
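One common way to relate throughput to response time is Little’s Law: concurrency = throughput × average response time. The sketch below is illustrative, with made-up numbers, and shows why the same throughput can correspond to very different user experiences:

```python
# Little's Law: the average number of in-flight requests equals
# throughput (requests/s) times average response time (s).

def concurrency(throughput_tps: float, avg_response_s: float) -> float:
    return throughput_tps * avg_response_s

# Same 70 TPS, very different latency:
print(concurrency(70, 0.5))  # 35 in-flight requests at 500 ms latency
print(concurrency(70, 2.0))  # 140 in-flight requests at 2 s latency
```

Both scenarios report identical throughput, but the second keeps four times as many requests waiting, which is exactly why latency must be checked alongside TPS.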
It is also important to set realistic throughput targets in order to achieve the performance testing goals.