User support is always a story about people.
Your customers have questions or concerns, and the support team helps them resolve them; simple enough. But every team needs to grow, and while it’s usually easy to measure the productivity of sales or marketing teams (their key metrics are naturally expressed in numbers), it’s much harder to set goals for a support team (you’re not going to suggest they increase their empathy by 20%, are you?).
The quality of support must be expressed in numbers. Large companies use a standard set of metrics: response speed, the number of issues resolved, the number of issues missed, and the quality of agents’ work (often expressed via NPS). We will show you how it all works for us at Dashly and how you can apply the same tools to your business.
You can track the quality of the entire team over a specific period, the result of each agent and channel individually, or how your users rate the performance of your support team.
It makes sense to first look at the big picture and then to study individual metrics.
What this metric means: how quickly the team responds to user questions.
How to evaluate it: the less time it takes to respond, the better.
How it can be improved: find out what affects response speed (maybe agents can’t take on new issues because they are tied up with other ones, or the statistics are skewed by complex issues that can’t be resolved quickly) and fix it. Set up automatic responses for cases when an agent can’t reply in time.
Speed is key. Imagine that a user has 3 tabs open in their browser with your competitors’ websites; whoever responds first wins. You can start communicating with your users right from the search results. If an existing customer contacts you, the stakes are even higher: they paid and want their issue resolved quickly. If that doesn’t happen, they will leave, and remember that existing customers bring more profit than new ones.
It is best if the answer is nearly instant: within 10 seconds (here we are talking about the response to the first message from the user in a chat). It is also very important that the first answer is at least slightly personalized.
Bad: “Good afternoon! We are processing your request, please wait.”
Good: “Hello, John! I just need 30 seconds to look at the catalog and help you choose the chairs.”
It is also important to track the time between subsequent responses, but keep a balance: some genuinely complex issues require a long and detailed answer, yet it is crucial to show the user that you haven’t forgotten about them. Automatic responses help maintain their attention: if an agent doesn’t have time to answer, the user receives an automatic response and sees that their issue is under control.
Do not forget to set up automatic responses for non-working hours (they will not only say that no agents are available at the moment but also help collect contacts to respond to in the morning). You can then check the response speed during working and non-working hours separately.
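To make this measurable in practice, here is a minimal sketch of how average first response time could be computed from chat logs, split into working and non-working hours. The record structure, field names, and working hours below are assumptions for illustration; your chat platform’s export will have its own format.

```python
from datetime import datetime, time

# Hypothetical dialog records: when the user wrote and when an agent first replied.
dialogs = [
    {"user_msg_at": datetime(2024, 5, 6, 10, 0, 5),
     "agent_reply_at": datetime(2024, 5, 6, 10, 0, 12)},
    {"user_msg_at": datetime(2024, 5, 6, 22, 15, 0),
     "agent_reply_at": datetime(2024, 5, 7, 9, 2, 0)},
]

WORK_START, WORK_END = time(9, 0), time(18, 0)  # assumed working hours

def in_working_hours(ts: datetime) -> bool:
    return WORK_START <= ts.time() < WORK_END

def avg_first_response_sec(dialogs, working: bool) -> float:
    """Average first response time in seconds for the chosen window."""
    delays = [
        (d["agent_reply_at"] - d["user_msg_at"]).total_seconds()
        for d in dialogs
        if in_working_hours(d["user_msg_at"]) == working
    ]
    return sum(delays) / len(delays) if delays else 0.0

print(f"Working hours:     {avg_first_response_sec(dialogs, True):.0f} s")
print(f"Non-working hours: {avg_first_response_sec(dialogs, False):.0f} s")
```

If the non-working-hours number looks scary, that is exactly where the automatic responses described above earn their keep.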
What this metric means: how many new questions users ask, and whether that number stays stable from day to day or spikes on certain days.
How to evaluate it: there is no single right answer, but in general, the fewer questions, the better.
How it can be improved: optimize the support team’s working schedule around the times when most new questions arrive.
On the one hand, a large number of new questions means, firstly, that your chat is working and, secondly, that users are engaged in the dialog and ready to solve their issues instead of simply quitting and leaving. On the other hand, if a lot of things are unclear to your website visitors, this is a red flag.
Analyze not only the number of questions but also their nature: perhaps one particular feature causes difficulties. If the number of questions has risen sharply, that is also a reason to investigate.
You can also look at the daily distribution of new questions to identify the days with the highest and lowest load. For example, you may find that question volume rises on release days.
If you notice a consistent trend (for example, the number of questions spikes on the last Thursday of each month), identify the reason and help the support team prepare. During a sharp increase, they may simply be unable to cope with the load, which will hurt both your statistics and your reputation among users.
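One way to spot such spikes automatically is to compare each day’s count of new questions with the recent average. A minimal sketch, assuming you have already exported daily counts (the numbers below are made up):

```python
from statistics import mean, stdev

# Hypothetical daily counts of new questions over two weeks.
daily_questions = [42, 38, 45, 40, 39, 41, 44, 43, 37, 40, 46, 39, 95, 42]

avg, sd = mean(daily_questions), stdev(daily_questions)

# Flag days that sit more than two standard deviations above the average.
for day, count in enumerate(daily_questions, start=1):
    if count > avg + 2 * sd:
        print(f"Day {day}: {count} questions (average is {avg:.0f}); worth investigating")
```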
What this metric means: how many questions a user has on average.
How to evaluate it: the fewer, the better.
How it can be improved: watch out for bots, analyze frequently asked questions, and set up hints.
Each customer can ask an unlimited number of questions. It is best when one question covers exactly one problem: that makes it easier for agents to resolve and easier for you to evaluate their work. Sometimes a user lists 8 complaints in a single question; we still count it as one question (just a very big one).
Therefore, there will (almost) always be more questions raised than there are users who open them. But if you see a clear imbalance, for example, 50 dialogs from 2 users, pay attention to it. Check whether such talkative users are bots and whether their questions are reasonable. If someone turns to the chat out of boredom and takes up your support team’s time, think about how to protect the team from their messages.
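A quick way to surface such an imbalance is to count dialogs per user and flag the outliers. A minimal sketch with a hypothetical data shape and an arbitrary threshold:

```python
from collections import Counter

# Hypothetical (dialog_id, user_id) pairs for the period.
dialogs = [(1, "u1"), (2, "u1"), (3, "u2"), (4, "u1"), (5, "u1"),
           (6, "u1"), (7, "u3"), (8, "u1"), (9, "u1"), (10, "u1")]

per_user = Counter(user for _, user in dialogs)
print(f"{len(dialogs)} dialogs from {len(per_user)} users")

# Flag users who open far more dialogs than the average; they may be bots.
avg = len(dialogs) / len(per_user)
for user, count in per_user.most_common():
    if count > 2 * avg:
        print(f"{user}: {count} dialogs; check whether this is a bot")
```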
What this metric means: how many user questions the team manages to resolve for a given period.
How to evaluate it: the more, the better.
How it can be improved: if the team does not have time to handle all issues, it may be time to expand the staff. If certain questions cause difficulties (and these questions are often repeated), provide additional training on bottlenecks.
This metric will help answer the question of how many questions your support team has time to process. You can analyze the number of resolved issues by day or by week.
It is logical that the number of resolved issues should approach the number of newly opened ones. If the gap is too wide, it means your support team can’t handle the load.
This metric is frequently expressed as a resolution coefficient, that is, the ratio of successfully resolved questions to the total number of questions.
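The calculation itself is simple; a tiny sketch with made-up counts for a period:

```python
# Hypothetical counts for the period.
resolved_questions = 183
total_questions = 204

resolution_rate = resolved_questions / total_questions
print(f"Resolution rate: {resolution_rate:.0%}")  # -> Resolution rate: 90%
```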
What this metric means: how many users you have been able to help.
How to evaluate it: the more, the better.
How it can be improved: constantly analyze errors and do not close an issue until the user is fully satisfied.
This number shows how many people you made happy by solving their problems. It can be analyzed in relation to the total number of resolved dialogs. If the two numbers differ too much, something may be wrong: perhaps new questions keep coming up as users work with the product, or agents close dialogs before making sure the client understands everything and is completely satisfied.
On the other hand, customers who ask a lot of questions may be your most loyal users, the ones who decide to understand the product thoroughly. In any case, it’s worth looking into what is happening.
What this metric means: at what time your support team is most loaded.
How to evaluate it: don’t evaluate it, just keep it in mind.
How it can be improved: distribute the workload based on the “hottest” hours.
Track your support team’s workload by the hour. This is an important metric because it lets you optimize the work of the entire team.
When you know at what times agents have to process the most requests, you can adjust the working schedule and direct maximum resources to the hours when the load peaks.
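A minimal sketch of such an hourly breakdown, assuming you have the timestamps of incoming requests (the timestamps below are made up):

```python
from collections import Counter
from datetime import datetime

# Hypothetical timestamps of incoming user requests.
requests = [
    datetime(2024, 5, 6, 9, 15), datetime(2024, 5, 6, 10, 2),
    datetime(2024, 5, 6, 10, 40), datetime(2024, 5, 6, 14, 5),
    datetime(2024, 5, 6, 14, 31), datetime(2024, 5, 6, 14, 55),
]

by_hour = Counter(ts.hour for ts in requests)

# Print a simple text histogram of the load per hour.
for hour in sorted(by_hour):
    print(f"{hour:02d}:00  {'#' * by_hour[hour]}  ({by_hour[hour]})")
```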
Tracking the individual performance of each team member is just as important as the overall result. This way you can improve the support team’s work: reward the best employees and help those who are lagging behind.
We recommend that you pay attention to the following individual metrics (a sketch of how to pull them together follows the list):
The agent’s response speed — when you know the average response time of the whole team, evaluate the response time of each agent separately during working and non-working hours.
The average score of an agent — this metric demonstrates how users rate the support team’s work. If employees of different departments communicate with users, you can distribute them across channels and evaluate the work of each department. It’s convenient to look at the overall rating first and then at the ratings of each agent.
Questions the agent is involved in — the number of open dialogs in which this agent participates. Check whether they keep up with the dialogs assigned to them (compare it with the number of resolved issues).
Questions resolved — the figure helps evaluate the amount of work that the agent does.
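Here is a minimal sketch of how these per-agent numbers could be pulled together from a dialog log; all record and field names are assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical dialog records; the field names are made up for illustration.
dialogs = [
    {"agent": "Alice", "resolved": True,  "first_response_sec": 8},
    {"agent": "Alice", "resolved": True,  "first_response_sec": 25},
    {"agent": "Bob",   "resolved": False, "first_response_sec": 140},
    {"agent": "Bob",   "resolved": True,  "first_response_sec": 60},
]

stats = defaultdict(lambda: {"involved": 0, "resolved": 0, "delays": []})
for d in dialogs:
    s = stats[d["agent"]]
    s["involved"] += 1
    s["resolved"] += d["resolved"]  # True counts as 1
    s["delays"].append(d["first_response_sec"])

for agent, s in sorted(stats.items()):
    avg_delay = sum(s["delays"]) / len(s["delays"])
    print(f"{agent}: involved in {s['involved']}, resolved {s['resolved']}, "
          f"avg first response {avg_delay:.0f} s")
```

The same grouping idea works for channels: swap the "agent" key for a "channel" key and you get the per-channel statistics discussed below.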
If you use several channels to communicate with users (email, chat, messengers, channels of different departments), it makes sense to collect data for each channel. Statistics grouped by channel help analyze the agents’ efficiency in each of them. All the necessary metrics are available for this: new and resolved questions, users with new and resolved questions, response speed during working and non-working hours, and the length of the dialog.
After the agent has closed the dialog, the user can rate their work. In total, we offer 3 ratings: “Excellent!”, “Ok”, and “Bad”. It’s best to start the analysis with the unsatisfactory ratings. Questions with the “Ok” rating also deserve your attention, while the “Excellent!” rating is there purely for your joy and happiness. A user can comment on their rating: you can view comments on the “Bad” and “Ok” ratings in the same section.
By the way, in the same section, you can open any resolved question, regardless of its rating. This helps restore justice: you can see whether the agent really did a bad job or the user was just feeling cranky.
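For completeness, a short sketch of how the three ratings could be summarized, starting the review with the unsatisfactory ones; the record structure is a hypothetical example:

```python
from collections import Counter

# Hypothetical rated dialogs as (rating, optional comment) pairs.
rated = [
    ("Excellent!", None), ("Excellent!", None), ("Ok", "Took a while"),
    ("Bad", "The chat was closed before my question was answered"),
    ("Excellent!", None),
]

print(Counter(rating for rating, _ in rated))

# Start the review with the unsatisfactory ratings and their comments.
for rating, comment in rated:
    if rating in ("Bad", "Ok"):
        print(f"{rating}: {comment}")
```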
Metrics help you objectively assess the situation and respond in time if something goes wrong. Do not forget to monitor the quality of your user support (especially since you can conveniently and quickly do all the analytics in one place), and your users will thank you with loyalty and a high LTV.