I’ve been working with massive data sets for several years at companies like Facebook, analyzing and addressing operational challenges from inventory to customer lifetime value. But I hadn’t yet worked on something this ambitious.
How do you take every conversation you have with a customer and use that data to improve your overall customer experience? Especially when you’re dealing with millions of conversations each year – across phone calls, chat and SMS – some running to thousands of words.
Over the course of the last year, we have gone all in to create an automated and scalable solution for improving the experience of customers who call our 1,000+ person call center. We believe that happy customers are more likely to buy something and recommend us to a friend, and we believe that happy, engaged call center agents equipped with the tools and know-how to make an effective sale feed this virtuous circle.
Easy in concept, not so easy to implement. One reason is the sheer volume of data: tens of millions of words, emotions (yes, emotions), silences, outcomes, purchases and more to evaluate. With so much data on hand, my team decided it was the perfect time to roll out an artificial intelligence model.
After much research, we decided that a neural network would be the best type of artificial intelligence to build. For those curious, a neural network is a type of AI that uses several small decisions to make one big decision.
As an example, consider when you are driving and approach a stoplight that has turned from green to yellow. Before you decide to stop, slow down, or speed up, your brain considers several factors. How fast are you driving? How far away is the intersection? Is there a policeman in the vicinity? Are there kids in the car? Once you consider the circumstances, your brain then determines whether you should keep driving, slow down, or stop.
In the same way, a neural network aims to make one final decision by considering several circumstantial factors. In our case, a neural network would generate several scores on agent performance and customer engagement. As a side benefit, we’d be able to determine which factors contributed most to those scores.
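To make that concrete, here is a minimal sketch, not our actual model, that uses a handful of made-up per-call features (silence ratio, sentiment and so on) to train a small neural network and then checks which inputs mattered most with a simple permutation test:

```python
# A minimal sketch (not Centerfield's actual model): a small feed-forward
# network that turns a few hypothetical call features into one score, plus a
# permutation test to see which inputs contribute most.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical per-call features: duration (minutes), share of silence,
# customer sentiment, and count of scripted phrases the agent used.
feature_names = ["duration_min", "silence_ratio", "sentiment", "script_phrases"]
X = rng.random((1000, 4))
y = (X[:, 2] + 0.5 * X[:, 3] - X[:, 1] > 0.7).astype(int)  # toy "sale" label

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(X, y)

# The predicted probability of a sale doubles as a simple 0-1 score per call.
scores = model.predict_proba(X)[:, 1]
print("example call scores:", scores[:5].round(2))

# Which factors contributed most? Shuffle each feature and measure the drop
# in accuracy.
importance = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, importance.importances_mean),
                        key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")
```

Permutation importance is just one straightforward way to surface the contributing factors; the point is that the same network that produces the score can also tell you something about why.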
So here is what I learned building my first neural network. Luckily, I have a great team with experience building these models, and they helped me avoid the pitfalls of what can be an intense, time-consuming undertaking.
The first major learning is that a simple, clear business goal created focus for the project. With thousands of agents interacting with customers, we needed a way to monitor and score the effectiveness of our agents. Doing that manually is far too difficult at scale, for reasons you can probably imagine (the pain point). And yet we needed a way to grade every single agent daily, simultaneously and accurately (the business goal). Given the validated pain point and clear business goal, a neural network made perfect sense: it would address the pain of manually listening to every call by producing automated, daily agent scores.
The second major learning is that you need to invest in acquiring tons of data. Lucky for us, the best part about analyzing conversations is the sheer amount of available data, especially when speech analytics converts audio into text files, emotions, sentiment and more. From a data perspective, we had access to millions of quantitative and qualitative data points from thousands of customer interactions. We had access to the words and phrases that agents and customers alike were using, and to the level of customer agitation. We could also analyze call duration and the amount of silence on a given call. All these data points, taken together, are what made building a neural network possible.
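For a sense of how those raw data points become model inputs, here is a rough sketch of turning one call record into numeric features; the field names and phrases are hypothetical stand-ins, not the actual speech-analytics schema we used:

```python
# A rough sketch of converting one call record into model features.
# Field names and key phrases are illustrative, not a real schema.
from dataclasses import dataclass


@dataclass
class CallRecord:
    transcript: str        # text produced by speech-to-text
    agitation: float       # 0-1 agitation estimate from speech analytics
    duration_sec: float    # total call length
    silence_sec: float     # total silence on the call
    made_sale: bool        # outcome


def featurize(call: CallRecord) -> dict:
    text = call.transcript.lower()
    words = text.split()
    # A few hand-picked phrases as stand-ins for the vocabulary features.
    key_phrases = ["cancel", "upgrade", "thank you", "speak to a manager"]
    return {
        "word_count": len(words),
        "agitation": call.agitation,
        "duration_min": call.duration_sec / 60,
        "silence_ratio": call.silence_sec / max(call.duration_sec, 1),
        **{f"has_{p.replace(' ', '_')}": int(p in text) for p in key_phrases},
    }


example = CallRecord(
    transcript="Thanks for calling, how can I help you today ...",
    agitation=0.2, duration_sec=420, silence_sec=35, made_sale=True,
)
print(featurize(example))
```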
After writing the code for the neural network, we started feeding customer interaction data into it. The model considered whether each call resulted in a sale or non-sale, which provider was involved, and even the words and phrases customers and agents used throughout the call. It processed millions of data points in a matter of weeks, and that wealth of data provided ample context for it to produce informed, accurate agent scores.
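The last step is rolling per-call outputs up into the daily, per-agent grades the business asked for. A simplified illustration, with hypothetical column names, might look like this:

```python
# A simplified illustration of aggregating per-call model outputs into one
# daily score per agent. Column names are hypothetical.
import pandas as pd

calls = pd.DataFrame({
    "agent_id":   ["a1", "a1", "a2", "a2", "a2"],
    "date":       pd.to_datetime(["2019-06-01"] * 5),
    "call_score": [0.82, 0.64, 0.91, 0.40, 0.77],  # model output per call
})

# One row per agent per day: average call score plus call volume.
daily = (calls.groupby(["agent_id", "date"])
              .agg(avg_score=("call_score", "mean"),
                   calls_handled=("call_score", "size"))
              .reset_index())
print(daily)
```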
The final learning is to invest in educating business stakeholders. The neural network generated automated agent scores that were accurate and concise, which is great, but we had to couple those scores with robust conceptual and detailed materials on how the model worked and why it worked in order to build confidence with stakeholders.
As a result of our efforts, we are investing additional time and resources in building more neural networks. With more data and time spent improving the neural network, we can achieve even higher accuracy. The results of the initial model were very promising, and we’re now aiming to complete three more neural network models by the end of the year.
Daniel Chae is part of the data science team at Centerfield, a Los Angeles- based technology-driven marketing and customer acquisition company driving millions of sales for residential services, business services and telecommunications brands.