Testing a voice application with real users before development

by Pavel, September 25th, 2018

The difference between voice and graphical interfaces is that a user isn’t restricted by “screens” and “buttons”: they can say anything they want. That makes it a high priority to know how users actually use your application.

In this article I’ll share my own experience of user-testing a voice application before the first line of code has been written. I’ll also try to dispel the myth that user testing is difficult.

When is the right time to begin testing?

The short answer: as soon as possible. The sooner we find mistakes, the easier and cheaper they are to fix.

The ideal moment to begin testing is right after you’ve completed the basic part of your design. By that point you’ve done enough to check your ideas on real users, but you haven’t yet spent much time or many resources.

In general, testing is not a one-off process; it should be done iteratively. Our team tries to start testing applications as soon as possible, then makes improvements, adds new pieces, tests again, and so on.

If you aren’t experiencing failure, then you are making a far worse mistake. You are being driven by the desire to avoid it.

Wizard of Oz technique

Let’s suppose we have already done the basic part of the design and now want to watch how people will use our application.

“Wizard of Oz (WOz) testing occurs when the thing being tested does not yet actually exist, and a human is “behind the curtain” to give the illusion of a fully working system.” — Cathy Pearl, “Designing Voice User Interfaces”, 2016.

We will pretend the application is completely ready when in fact we have only a medium-fidelity prototype. The user will speak to it believing it works, while we simulate the application’s behavior.

Overall, the process looks like this: we launch the prototype, the user speaks to it and hears its answers, and we choose what the prototype should say next.
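
To make the mechanics concrete, here is a minimal sketch of what the wizard’s side of such a session could look like. It assumes the prompts are kept in a plain dictionary and uses the open-source pyttsx3 library for speech output; the step names and phrases are invented for illustration.

```python
# A rough sketch of the "wizard" side of a Wizard of Oz session.
# Assumptions (not from the article): prompts live in a plain dict and the
# open-source pyttsx3 library is used for speech output; all names and
# phrases below are made up for illustration.
import pyttsx3

PROMPTS = {
    "greeting": [
        "Hi! I can help you book a table. Which day works for you?",
        "Hello! What date would you like to book a table for?",
    ],
    "confirm": ["Got it. Should I confirm the booking?"],
    "fallback": ["Sorry, I didn't catch that. Could you say it again?"],
}

engine = pyttsx3.init()

def play(step: str, variant: int = 0) -> None:
    """Voice one prompt variant so the participant hears 'the app' answer."""
    engine.say(PROMPTS[step][variant])
    engine.runAndWait()

if __name__ == "__main__":
    # The wizard listens to the participant and picks which prompt to play next.
    while True:
        choice = input("step[:variant] or 'quit': ").strip()
        if choice == "quit":
            break
        step, _, variant = choice.partition(":")
        if step not in PROMPTS:
            print("Unknown step:", step)
            continue
        play(step, int(variant) if variant else 0)
```

In a real session, of course, the participant should only hear the voice output and never see this screen.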

The prototype

To carry out the testing we need a medium-fidelity prototype. Here is what it must be able to do:

  • Be voiced, so users can hear it
  • Let us choose which answer variant to play

Let’s look at the tools that can help us create such a prototype.

Tools such as Storyline or Sayspring won’t be considered in this article, because they build a complete application, leaving you no way to control the dialog between the user and the application.

TTS tools

The first and easiest way is to use any TTS system, or, if you are developing an IVR, to play the audio files yourself. In other words, after each user answer, we voice the text or play the corresponding audio file.

Any tool that can voice text will do: Google Translate (which can read text aloud), Google Text-to-Speech, or something else.

Google Cloud Text-To-Speech

Google suits me best because it has a wide choice of voices and lets you tune the speed and pitch of the speech.
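
For illustration, here is a minimal sketch of generating a single prompt with the Google Cloud Text-to-Speech Python client; the voice settings, phrase, and file name are arbitrary, and you would need your own Google Cloud credentials.

```python
# Minimal sketch: synthesize one prompt with Google Cloud Text-to-Speech.
# Requires the google-cloud-texttospeech package and Google Cloud credentials.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Hi! Which day would you like to book?"),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        ssml_gender=texttospeech.SsmlVoiceGender.FEMALE,
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3,
        speaking_rate=1.0,   # tune speaking speed
        pitch=0.0,           # tune pitch
    ),
)

# Save the prompt so it can be played back during the session.
with open("greeting.mp3", "wb") as out:
    out.write(response.audio_content)
```

Generating prompts as audio files this way also covers the IVR case, where you play the files yourself.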

Pros:

  • Simplicity. The tools are easy to use; there is nothing difficult about them.
  • Flexibility. You can add new prompts on the go whenever an unanticipated situation comes up.
  • Pricing. Most tools are either completely free or let you voice a certain number of characters at no cost.

Cons:

  • Bulkiness. Every answer lives in its own open tab, and it’s easy to mix them up.
  • Huge time cost. The prototype is really just a set of tabs; it doesn’t exist as an artifact. Every time you want to run a user test, you have to open every tab again and paste in the text, and the more complex the application, the longer this takes.
  • Detachment from the rest of the project deliverables. You surely have a sample script, a prompt list, and a flowchart, and most likely the prototype isn’t connected to them in any way. For example, if you change some phrases in the prompt list, you have to manually sync the change in every tab, which makes your work more complicated.

As a result, this is a pretty simple and perfectly workable way to carry out user testing, but it is rather inconvenient and time-consuming.

I’ve found two tools that improve this process by “gathering” all the tabs together: Say Wizard and VoiceX.

Tortu

tortu.io: Workspace

Tortu is not just a prototyping tool; it is more of a VUI design tool. It lets you map out your conversation as a flowchart and keeps all your prompts right on the flowchart elements.

tortu.io: Prototype

Working with Tortu is simple:

  1. We create a flowchart of the conversation. If it grows too big, we can divide it into logically connected parts.
  2. We write the necessary prompts. For every step of the dialog, we can add an unlimited number of prompt variants.
  3. We launch the prototype and test the application either with a user or by ourselves.

When we test the app with a user, we are actually choosing the answer variants in the prototype, and the prototype reads the lines aloud on behalf of the application.

We can also add new branches on the go, which is very convenient when we run into unanticipated situations.

There is also a wide choice of voices and the ability to tune the speed and pitch of the speech.

Pros:

  • Simplicity. You literally build the conversation as dialog steps on behalf of the user and the application, then simply follow it step by step.
  • Speed and flexibility. A prototype is created in one click, and the flowchart and prompts live in one place, so we don’t spend time syncing them.
  • Pricing. The tool is absolutely free.

Cons:

  • No slot support. When a user supplies specific data (a date or a name, for example), we can’t account for it in the prototype.
  • No way to attach your own audio files. Being able to attach your own audio files to steps is highly important when testing an IVR.

Testing tips

So now you have a clear view of how to test your conversations, you’ve found a user, and you’ve created the prototype. Before you begin, here are some tips:

  • Record your testing sessions. Record not only the audio but also video: the nonverbal signals users give while speaking to the app can tell you a lot.
  • Invite one more colleague. You are going to be very busy controlling the prototype and talking with the user, and an extra pair of eyes helps you notice things you would otherwise miss.
  • Analyze the results with the whole team, or at least share them with everyone. It’s very important that the whole team is aware of the improvements to be made, and remember: there are always improvements after a test.
  • Study more sources on user testing. More knowledge helps you get more out of each session. I recommend starting with Chapter 6 of Cathy Pearl’s book “Designing Voice User Interfaces”.

Conclusion

Testing with users always pushes you beyond your comfort zone, so it can be hard not to shy away from it. In this article I’ve tried to show that it is not as difficult as many people think.

Obviously I haven’t reviewed every tool and method. If you have something to share, I’ll be glad to discuss it with you.