What Tools You Must Use to Measure Your Product Performance
ISS Art provides comprehensive software solutions - ML, AI, NLP, Computer Vision https://issart.com/
Performance testing is a crucial part of quality control for many applications. If an application is supposed to support multiple concurrent connections and/or numerous calls to a server, it is important to be sure that it can handle the load. What good is an app that processes users' requests at a snail's pace?
In this article I want to look at different tools that can be very useful to conduct performance testing.
At which stages of development should we run performance tests?
Well, every time a piece of functionality is implemented on the server side, we can check its performance by sending requests via the corresponding API and measuring response time. Then, after each major change, we can rerun our tests as part of Continuous Integration.
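To make this concrete, here is a minimal Python sketch of such a check using only the standard library. The local test server and the `/api/health` path are stand-ins for your real endpoint; in practice you would point `measure_response_time` at your testing environment:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny local server standing in for the real API under test.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

def measure_response_time(url: str) -> float:
    """Send one GET request and return the response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/api/health"
    print(f"response time: {measure_response_time(url) * 1000:.1f} ms")
    server.shutdown()
```

Running this kind of check after every build is exactly what a Continuous Integration job can automate.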
But how exactly can we measure the performance of our app? To answer this question, let me briefly introduce you to a basic performance testing workflow and tools used.
Performance testing environment
First of all, the environment you plan to run tests on should be similar to the production environment.
Besides, performance tests are usually resource-intensive themselves, so, in order to test performance properly, you need quite a powerful load-generator machine, or you may even consider distributed load tests.
Tips to create the environment:
- In the age of cloud-based servers, it's natural to host your performance testing environment on the same cloud platform as production, using similarly configured instances.
- For load generators, you can choose from many existing solutions, such as Blazemeter, NeoLoad, Redline13 and many others, which differ in cost, generated load and cloud-platform support. Alternatively, you can use separate server(s) on your cloud platform, configured to run performance tests against your testing environment.
Before we begin designing our performance tests, we need to know how our application behaves in terms of which API requests are sent and which responses are received. To find out, we can perform the desired actions in the app and intercept the requests using one of the many web profiler/sniffer tools available. Each of them has its own pros and cons, features and limitations. Personally, the most convenient tool for me is the Blazemeter Chrome extension: it provides very basic functions compared to the alternatives, yet it is extremely simple and entirely sufficient for my needs.
It is also possible to export recorded steps from this extension to the .jmx format - useful if you chose Apache JMeter as your load tool (see below).
Blazemeter Chrome extension
It is also worth mentioning the importance of API documentation here. If developers properly document their API, it greatly improves the test design process, because the tester can open the API docs in Swagger, for example, and get complete information on requests, parameters and responses.
Performance tests design
Finally, we've reached the most interesting part of the article: test design.
There is, of course, a wide choice of available tools, such as Gatling and LoadRunner, that make requests to a server and gather responses. Among them, in my personal opinion, Apache JMeter stands out.
Apache JMeter is a free and open-source performance testing solution with almost unlimited functionality and wide community support. Using JMeter you can implement requests over almost any existing protocol and fine-tune your tests to closely imitate the behaviour of the real system.
Its almost unlimited extensibility and a vast number of built-in features, such as sample pre- and post-processing, data extractors, logical controllers and so on, make it a perfect tool for simulating behaviour as complex as you need.
For simplicity's sake, we can say that JMeter:
- sends designated requests using a number of simultaneous "load users",
- gathers response data and response time/latency.
By gradually increasing the number of "load users", we increase the overall load and measure the changes in response time/latency. That gives us a picture of the overall performance of the tested APIs.
You can also check the throughput of your system by using the Throughput Shaping Timer: gradually increase the requests-per-second rate and observe how the servers handle that load.
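The idea behind throughput shaping is simply pacing requests to hit a target rate. Here is a minimal sketch of that pacing loop (the stepped rates and the `ping_server` callback in the comment are illustrative, not part of any real plugin API):

```python
import time

def paced_calls(func, rps: float, duration_s: float) -> int:
    """Call `func` at roughly `rps` requests per second for `duration_s`
    seconds; return how many calls were made."""
    interval = 1.0 / rps
    sent = 0
    deadline = time.perf_counter() + duration_s
    next_fire = time.perf_counter()
    while time.perf_counter() < deadline:
        func()
        sent += 1
        next_fire += interval
        sleep_for = next_fire - time.perf_counter()
        if sleep_for > 0:  # only sleep if we are ahead of schedule
            time.sleep(sleep_for)
    return sent

# Step the target rate up and watch how the server copes:
# for rps in (10, 20, 50):
#     sent = paced_calls(lambda: ping_server(), rps, duration_s=30)
```

If the achieved rate starts falling behind the target, or latencies climb, you have found the throughput ceiling of the configuration.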
So, let's say we've found out that our application handles requests poorly. Is that enough?
No, to complete our testing we have to determine where exactly the bottlenecks appear in the configuration.
To do so, we can watch the resource utilisation metrics of the tested servers. For example, the application server might handle the load just fine, while the database server struggles to process all the requests.
So it's very important to have server resource utilisation metrics for the time when your performance tests run.
Most cloud platforms provide at least some monitoring features, which can always be expanded with one of many existing third-party tools. Apache JMeter itself can collect server metrics using a server agent together with the PerfMon (Servers Performance Monitoring) plugin. Usually it's enough to monitor the CPU and available memory on a server, adding further metrics as needed.
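For a rough feel of what such an agent samples, here is a tiny Unix-only Python sketch that reads load average and available memory from the operating system (it is an assumption-laden stand-in: real setups would use the PerfMon server agent or the cloud platform's own monitoring, and `/proc/meminfo` exists only on Linux):

```python
import os

def sample_metrics() -> dict:
    """Sample basic host metrics: 1-minute load average and, on Linux,
    available memory in kB."""
    load_1m, _load_5m, _load_15m = os.getloadavg()  # Unix-only
    metrics = {"load_1m": load_1m}
    # /proc/meminfo is Linux-specific; skip gracefully elsewhere.
    try:
        with open("/proc/meminfo") as f:
            info = dict(line.split(":", 1) for line in f)
        metrics["mem_available_kb"] = int(info["MemAvailable"].split()[0])
    except (OSError, KeyError):
        pass
    return metrics

# Poll while the load test runs and log alongside response times:
# while test_running:
#     print(time.time(), sample_metrics()); time.sleep(5)
```

Correlating these samples with the load test timeline is what reveals which server is the bottleneck.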
Now we know whether there are any performance issues in our configuration, and exactly where they are. With this information in hand, developers can fix the issues, or managers can decide to increase the resources of the troubled server.
As we have seen, performance testing is a very useful and approachable process. The vast number of available tools can be overwhelming, but as a quick rule of thumb, use the simplest one that meets your needs! And by adding Continuous Integration to the process, we can catch performance issues in a timely manner.