Chatbots for surveys, surveybots, conversational surveys, feedback chatbots, conversational survey tools, AI survey tools, chatbot questionnaires. The list of names goes on and on. But how do they compare, and which one is the best performer?
Surveys have been around forever, and most people (respondents above all) absolutely despise them. They are impersonal, tedious, and unengaging to fill out. Still, they remain an incredibly important tool, used by millions daily to collect feedback and opinions.
As chatbots make their entry into a myriad of adjacent areas, the movement to use them instead of surveys to collect insights has started to gain real momentum.
In the past year, several combatants have popped up, all claiming they have what it takes to take the throne from SurveyMonkey, Typeform, Qualtrics, and Google Forms.
Their goal?
- More conversation, less survey.
Chatbots have seen increased popularity since recent advancements in artificial intelligence and machine learning. Chatbots and artificial intelligence do not, however, always go hand in hand. The level of intelligence in today’s chatbots might not always be… very high (cough).
The question on everyone’s lips is: Are chatbots any better than surveys when it comes to collecting feedback? Presenting a traditional survey in a chat interface isn’t very high up on the innovation spectrum. The purpose of using chatbots rather than forms is to achieve a humanlike conversation with thousands of people simultaneously and to extract more qualitative data at the same scale as surveys.
That’s why we are focusing on high-level concepts in this comparison instead of granular details such as export formats, integrations, and so on.
From what we’ve found on insightplatforms.com, there are basically six vendors competing in the conversational feedback domain. All have publicly available trials of the chat interface.
These products were evaluated based on six different criteria:
Chatbot intelligence
Conversational quality
Generalizability
Results representation
User-friendliness (sender)
Survey nonresemblance
This new generation of tools approaches the feedback-giving experience differently. For example, some vendors focus on a few well-defined areas in which they serve customers, while others take on a broader approach.
When comparing these platforms, it’s eminently obvious how these differences in tactics affect the product. The trade-off between conversational sophistication and breadth of use case is inevitable given the current limitations of AI & NLP: platforms with broad use cases have little to no conversational intelligence, while those with a narrow use case show a much higher degree of intelligence.
So let’s have a look at the combatants (in alphabetic order):
Acebot
Self-describing property: Conversational survey tool.
Acebot features an easy-to-understand interface with intuitive steps for getting started with conversational surveys. With Acebot, all surveys belong to a certain bot that you can name yourself.
The intelligent part of Acebot is its sentiment analysis (called Emotion Tracking), which can give different responses depending on whether the input is negative or positive.
Acebot supports open-ended questions but doesn’t have any underlying intelligence (other than Emotion Tracking) that interprets incoming responses. This leads to some pretty mundane conversations and unfortunately makes the overall experience mediocre.
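As a side note on the mechanics: below is a minimal sketch of how sentiment-based reply branching like Emotion Tracking might work, using TextBlob’s off-the-shelf polarity score. Acebot’s actual implementation is not public, so the thresholds and canned replies here are purely illustrative.

```python
# Illustrative sketch of sentiment-based reply branching.
# Hypothetical: Acebot's real Emotion Tracking logic is not public.
from textblob import TextBlob

def choose_reply(answer: str) -> str:
    # TextBlob polarity ranges from -1.0 (negative) to +1.0 (positive).
    polarity = TextBlob(answer).sentiment.polarity
    if polarity < -0.2:
        return "Sorry to hear that. What could we do better?"
    if polarity > 0.2:
        return "Great to hear! What did you like the most?"
    return "Thanks! Could you tell me a bit more?"

print(choose_reply("The checkout process was terrible."))  # negative branch
print(choose_reply("I loved the new design!"))             # positive branch
```

Even a simple rule like this is enough to make a bot feel less scripted, which is probably why Emotion Tracking stands out as the one intelligent feature here.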
The way results are presented leaves much to be desired. No graphs are generated from numerical data, and text inputs are simply stacked into a table.
These are the results generated from a test survey (pay extra attention to Q1, “What is your name?”).
What is really good about Acebot is its flexibility. There’s a large number of templates ready to use, and editing questions is very easy.
Verdict
Chatbot intelligence — 2/5
Limited but more than nothing.
Conversational quality — 2/5
Supporting open-ended questions is good, but real-time understanding and interpretation are non-existent.
Results representation — 2/5
Presents the data but leaves much to be desired.
Generalizability — 5/5
One of the best in the test. Can be used for pretty much anything.
User-friendliness — 4/5
Good UX except for the results. Setting up and phrasing questions is very smooth.
Survey nonresemblance — 1/5
Just like a survey, but in a chat interface.
Total score: 16/30
Surveybot
Self-describing property: Chatbot Survey Platform.
Surveybot takes a different approach and focuses solely on Facebook and Facebook Workplace as the medium for its surveys. Surveybot features a small set of prebuilt templates, but setting up a custom survey is easy in the editor.
There’s no sign of any underlying intelligence to support conversation, but as a survey in chat format, it’s doing its job very well.
In the results section, the user can find percentage graphs for most quantitative data. Unfortunately, there are no text analytics, which means analyzing open responses manually.
Verdict
Chatbot intelligence — 1/5
404 — No intelligence found.
Conversational quality — 2/5
Good-looking but unengaging.
Results representation — 2/5
Looks like any general survey tool with nothing out of the ordinary.
Generalizability — 3/5
Could potentially be used in a variety of areas but is limited to functioning within Facebook.
User-friendliness — 3/5
Well-designed and easy to understand.
Survey nonresemblance — 1/5
A normal survey in a chatbot interface.
Total score: 12/30
SurveySparrow
Self-describing property: Online survey platform.
SurveySparrow is somewhat reminiscent of Acebot but with less emphasis on chatbot functionality, leaning heavily towards a traditional survey in a chatbot’s clothing. The user has access to a wide array of question types, ready-made templates, and survey logic/piping.
Open-ended questions work just like in a normal survey and don’t seem to undergo any real-time analysis to help support the conversation. As soon as a question is answered, the next one pops up, conveying little conversational engagement. Dynamic replies to positive or negative content are generally absent.
In the results, the user is presented with a per-respondent overview. Text data aggregation is very limited and aids only marginally in identifying points of improvement.
On the plus side, as long as you keep to traditional survey questions and are happy with the results, SurveySparrow is a much more engaging option than other survey tools.
Verdict
Chatbot intelligence — 1/5
Non-existent.
Conversational quality — 2/5
A well-designed interface that doesn’t support meaningful conversations.
Results representation — 3/5
Good-looking presentation of numerical data. Text is presented, not analyzed.
Generalizability — 5/5
Extremely versatile with loads of prebuilt survey templates.
User-friendliness — 5/5
Best in the test. Very well designed and user-friendly.
Survey nonresemblance — 1/5
A survey in a chatbot interface.
Total score: 17/30
Rival
Self-describing property: Conversational survey.
The Rival platform makes a great first impression and showcases how big brands use the product. The design of the homepage is well thought through and easy to navigate.
Unfortunately, when it comes to trying the product, there’s no option to open a free account in the same fashion as with the rest of the products in this test. There is, however, a prebuilt demo chat to try.
The chat experience features a minimum of open conversational questions and relies heavily on preset buttons.
The chat interface is nicely designed with colorful images adding to the experience.
Rival seems to target custom-built conversational surveys and features use cases ranging from retail to professional football, which demonstrates a wide area of application.
As there’s no way to publicly see any results representation, giving Rival a rating would not be fair. Rival is therefore disqualified from rating in this test.
Wizu
Self-describing property: Conversational Feedback Platform.
Wizu has taken a distinct step towards added conversational intelligence to dig up richer insights.
According to Wizu, follow-up questions are triggered by certain keywords, sentence length, and sentiment analysis, which sounds very impressive. And it works surprisingly well.
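Wizu hasn’t published its exact rules, but a trigger of that kind can be approximated with a few simple heuristics. In the sketch below, the keyword list, length threshold, and sentiment cutoff are all assumptions chosen for illustration:

```python
# Hypothetical approximation of a follow-up trigger combining keyword hits,
# answer length, and sentiment. Wizu's actual rules are not public.
from textblob import TextBlob

FOLLOW_UP_KEYWORDS = {"support", "price", "bug", "slow"}  # assumed examples

def needs_follow_up(answer: str) -> bool:
    words = answer.lower().split()
    has_keyword = any(w.strip(".,!?") in FOLLOW_UP_KEYWORDS for w in words)
    too_short = len(words) < 5  # a terse answer invites a probing question
    negative = TextBlob(answer).sentiment.polarity < -0.1
    return has_keyword or too_short or negative

print(needs_follow_up("Fine."))                                     # True: too short
print(needs_follow_up("The app is slow and support never helps."))  # True: keywords
```

A production system would then presumably pick the probing question from a template bank keyed to whichever rule fired, which is consistent with how Wizu’s follow-ups feel in practice.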
The conversations are qualitative, and Wizu seems to understand most things we throw at him (it?). A nice touch is that Wizu in some sense anticipates what you are going to write and prefills the reply field with a few opening words, which can prevent typing fatigue. Maybe not the most bias-free solution, but still.
In terms of generalizability, the use cases are very broad. There are options for pretty much everything; you can even edit the exact phrasings that Wizu utters.
The interface is, however, poorly designed and difficult to understand. A more user-friendly experience is needed to make Wizu practical to use.
The same goes for the results section. It looks advanced, but it’s extremely difficult to understand what’s going on. A demo data report could help make things clearer.
Verdict
Chatbot intelligence — 4/5
Wizu makes an intelligent impression and manages to insert some meaningful follow-up questions here and there.
Conversational quality — 5/5
Very good conversational flow.
Results representation — 3/5
Advanced but very cluttered and difficult to understand.
Generalizability — 4/5
Very versatile with many prebuilt survey templates. A bit difficult to set up new scripts.
User-friendliness — 3/5
Room for design improvements, but well-engineered.
Survey nonresemblance — 5/5
Looks nowhere near a survey.
Total score: 24/30
Chatbots have come a long way in recent years, but the road to passing the Turing test is a looooong and arduous one.
In contrast to the commonly found FAQ bots, the novel field of conversational feedback bots has no prebuilt frameworks available from giants such as Google. The companies in this study have ventured down different paths trying to tackle the challenges of automated intelligent conversation, which is an impressive feat in itself.
The downside of having to build advanced machine learning algorithms from scratch, without the help of 300 Google engineers, is that the end results simply aren’t as good.
All of the tested chatbots are still at an embryonic stage. Some of the more advanced platforms in this study show real promise to soon deliver jaw-dropping results once effective training frameworks are in place.
The end result of this test is a draw between the Wizu and Hubert.ai platforms, both of which put the most emphasis on conversational intelligence.
The feeling of giving feedback via chat versus filling out a 1–5 survey only has to be experienced once to understand that this is the future of respondent data collection.