
Throughput in Performance Testing: Why It's Important

by QAlified, August 2nd, 2023

Too Long; Didn't Read

Software testers use several techniques to check the quality of applications. One of them is performance testing, where the team tests the speed, response time, scalability, and reliability of the software. Throughput is a key metric that shows the number of requests a software application can handle in a specific time (a second, a minute, or an hour).

Software testers use several techniques to check the quality of applications. Performance testing is one of the most important: the team tests the speed, response time, scalability, and reliability of the software.

In this article, we will discuss the fundamentals of throughput in performance testing.

The Basics of Throughput in Performance Testing

Before starting with the details, let’s have a look at some essentials of performance testing.

Why Performance Testing?

Performance testing is important because:


  • Beyond the commonly reported logical or functional issues, applications also struggle with network problems that affect their reliability.


  • Customers quickly get frustrated when an application is slow or difficult to access.


  • Application speed and performance vary by region. So it is important to judge the performance of the application at different speeds and on different networks.


  • A system can work perfectly with a specific number of users but behave differently once the number of users exceeds that limit. So it is necessary to check performance under such conditions.

When Should You Start Performance Testing?

You should start performance testing as early as possible in the development of your software application. This way, you can optimize your web server and avoid business costs at later stages.


Discovering performance problems after the application is deployed means many working hours spent rectifying them, so it can be very expensive.


As soon as the basic web pages of the application work, the quality assurance team should conduct the initial load tests. From that point onwards, they should run performance tests regularly for every build.


There are different tools and criteria for performance testing applications. Here we will talk about one important metric: throughput.

What Is Throughput in Performance Testing?

Every software application or website has many users performing different requests. Testers need to ensure that the application can handle the required volume of requests before going live.


There are some performance testing basics that need to be measured during the process, and throughput is one of them. Let’s find out what throughput in performance testing is.

Throughput is a key metric that shows the number of requests a software application can handle in a specific time (second, minute, or hour).

Before starting the test, we need to set a realistic performance throughput goal, so that we can get more precise and reliable results.


These are some important factors to determine realistic throughput:


  • The estimated number and types of users that are going to use the application or website.


  • User behavior, i.e., what actions they are going to perform using the application.


  • The connection types that will affect the response of the system, and ultimately the user experience as well.


  • The effects of pauses and delays on the system.
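
The factors above can be combined into a rough back-of-the-envelope estimate before any tool is involved. The sketch below is our own illustration (the function name and parameters are assumptions, not part of any testing tool):

```python
def estimated_tps(concurrent_users, actions_per_user_per_hour):
    """Hypothetical helper: rough target throughput in transactions per second.

    Assumes each user performs a fixed number of actions per hour,
    spread evenly across that hour.
    """
    return concurrent_users * actions_per_user_per_hour / 3600.0

# 500 concurrent users, each performing 36 actions per hour:
print(estimated_tps(500, 36))  # 5.0 TPS
```

A figure like this gives the test plan a concrete throughput goal to validate against.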

Throughput in Real-Life Scenario

Here, we will explain the concept of throughput with the help of a real-life example. Imagine there’s a fast food stall named “Yummy Burgers.” They serve burgers and fries for the customers.


Let’s say that “Yummy Burgers” has three workers at the stall, and each worker always takes 5 minutes to serve one customer.


So, if they have three customers lined up to be served by three workers, it means “Yummy Burgers” can serve food to three customers in 5 minutes.


Hence, if we need to make a performance report of “Yummy Burgers”, it would show that its throughput is three customers per five minutes.


It is Yummy Burgers’ dilemma that, no matter how many customers are waiting for food, the maximum number they can handle in a given time frame is always the same, i.e., three per five minutes. This is the maximum throughput.


As more customers line up for the food, they must wait for their turn, creating a queue.

The same concept applies to the testing of a web application.


If a web application receives 100 requests per second, but it can handle only 70 requests per second, the remaining 30 requests must wait in a queue.
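
That queueing behavior is easy to see in a short simulation. This is an illustrative sketch using the numbers from the example above, not a model of any particular server:

```python
arrival_rate = 100   # requests arriving per second
capacity = 70        # requests the application can handle per second
backlog = 0          # requests waiting in the queue

for second in range(10):
    backlog += arrival_rate            # new requests arrive this second
    backlog -= min(backlog, capacity)  # at most 70 are served this second

print(backlog)  # 300 requests queued after 10 seconds
```

The queue grows by 30 requests every second the offered load exceeds the application's throughput, which is why sustained overload quickly degrades response times.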


In performance testing, we denote throughput as “Transactions per Second,” or TPS.

Throughput in Performance Testing with JMeter

Apache JMeter is a popular tool for testing the performance of a software application. It helps determine the maximum number of concurrent users the application can handle and also provides graphical analysis of the results.


JMeter provides lots of ways to record the value of throughput. Given here are some JMeter listeners that you can use for this purpose:


  • Summary Report
  • Aggregate Report
  • Aggregate Graph
  • Graph Results


JMeter also provides a timer component, ‘Constant Throughput Timer’, which lets you set a target throughput (specified in samples per minute) to pace the load on the application.
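
The timer works by delaying samplers so the overall rate approaches the target. A minimal sketch of the idea, as our own illustration rather than JMeter's actual implementation (which also coordinates across threads):

```python
def pacing_delay_seconds(target_samples_per_minute):
    """Delay between consecutive samples needed to hit a target rate.

    Illustrates the idea behind a constant-throughput pacing timer:
    spreading samples evenly yields target/60 samples per second.
    """
    return 60.0 / target_samples_per_minute

print(pacing_delay_seconds(120))  # 0.5 s between samples, i.e. ~2 TPS
```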


Now, we will show the usage of throughput in a performance test using JMeter. Let’s say we are going to conduct a sample test with 100 concurrent threads and track the value of throughput.


Suppose we have the latest release of JMeter installed on our system, and we have already performed all other required configurations. Now, we have to build a test plan.

1. Configuration of Test

In this test, we are going to define five ThreadGroup elements. Each of these elements will have a different ramp-up time: 0, 15, 25, 35, and 45 seconds. The ramp-up time is the period over which JMeter starts all the threads in a group. We will configure 100 users in each of these ThreadGroup elements.


If we want to configure a larger number of users, then more ramp-up time will be required.


These thread groups will have an HTTP sampler that will generate requests on the homepage of a sample website (Suppose www.samplesite.com).


In Use Case 1, we have a ThreadGroup element that is configured with 100 threads, and its ramp-up time is 0.


It will have the “Number of Threads” field set to 100. This means 100 users will send requests at once. Similarly, we can configure the remaining four thread groups and set their ramp-up times to 15, 25, 35, and 45 seconds. Also, name the samplers for each thread group.
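
JMeter spreads thread starts evenly across the ramp-up period, so each new thread starts ramp-up / number-of-threads seconds after the previous one. A small sketch of the resulting start offsets (the function itself is our own illustration):

```python
def thread_start_offsets(num_threads, ramp_up_seconds):
    """Approximate start offsets for threads in a ThreadGroup.

    Threads are spaced evenly over the ramp-up period: one new
    thread every ramp_up / num_threads seconds.
    """
    if ramp_up_seconds == 0:
        return [0.0] * num_threads   # all threads start at once
    interval = ramp_up_seconds / num_threads
    return [i * interval for i in range(num_threads)]

# 100 threads with a 15 s ramp-up: one new thread every 0.15 s
offsets = thread_start_offsets(100, 15)
print(offsets[:3])  # [0.0, 0.15, 0.3]
```

This makes it clear why a ramp-up of 0 slams the server with 100 simultaneous requests, while a 15-second ramp-up builds the load gradually.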


As mentioned before, these HTTP samplers will point to the home page of the sample website.


It is necessary to run these thread groups in a proper sequence. For this purpose, select “Test Plan” from the control panel, and check the field “Run Thread Groups consecutively.”

2. Analysis of Test Results

“Aggregate Report” is a listener that is used to analyze and observe the test results. To use this listener, right-click on “Test Plan” and select:


Add → Listener → Aggregate Report


Then click on the start icon to run the test.


Now, let’s see how to understand the results of throughput from the Aggregate Report.


The first thread group, with a ramp-up time of 0, puts an instant load on the server because all threads start at once. This scenario has a fairly high throughput, but it is not practical, so it will not show realistic output.


The second and third thread groups have ramp-up times in a realistic range, so they are more likely to show proper performance throughput and request load times.


Thread groups four and five have higher ramp-up times, which means their throughput will decrease.


Hence, reliable output can be determined from the results of the second and third thread groups.

Important Points to Remember While Testing Throughput

The decision to deploy a new release or change relies on the ability of the application to handle a specific TPS. So the performance test plan has certain throughput goals, but we need to make sure that these goals are realistic and represent the true characteristics of the production environment.


The test plan is in vain if it passes only under unrealistic conditions. For example, the test plan described above had higher throughput values for the first thread group, but these did not depict the actual scenario of the live environment.


Using such methods, we cannot get a proper idea of whether our application will handle the actual load. Therefore, setting up suitable tests is crucial.

Now, we will discuss some important points that we need to consider for testing performance throughput.

  • Appropriate Test Design: Test design determines whether the generated throughput is realistic. In a real-world scenario, each request can be different and may trigger complicated processing to produce the required results. So we need to shape the tests according to the expected live environment.


  • Representation of Real Users: Each application user may make requests that consume system resources. So, if real users are not represented in the test scenario, the results may show inaccurate resource usage at the backend, and the test will not emulate the right conditions.


  • Consider Pauses and Delays: In a live environment, users need to think, read and process information, enter information in fields, etc. But the servers still use resources during those pauses. So, try to incorporate these user behaviors in your scripts.


  • Connection Speed: Users of the application connect through different network speeds, from different regions, or via mobile networks. So, it is necessary to choose bandwidths that represent such user connections as well.

Conclusions

In a nutshell, throughput is a crucial performance indicator of web applications. But depending only on throughput metrics is not enough; it needs to be analyzed alongside latency and response times.


It is also really important to set realistic throughput goals in order to achieve your performance testing objectives.