Hallway Testing [A Deep Dive Analysis]

Written by marypeters | Published 2020/03/01
Tech Story Tags: hallway-testing | what-is-hallway-testing | what-are-hallway-tests | examples-of-hallway-tests | how-to-set-up-hallway-tests | ab-testing | hackernoon-top-story | marketing-top-story

TLDR: A “Hallway Usability Test” is a quick usability test of the interface being developed. Its purpose is to make sure that users perceive it as intended by the developers. This includes the user interface (UI) of applications and programs, as well as content such as landing pages, emails, and articles with all of their components (images, texts, CTAs, etc.). Dashly.io provides a step-by-step guide that will help you set up and conduct a test correctly.

So you’ve got a fantastic idea for improving your product’s interface. Problem is, it’s gonna cost time, money and energy to implement. You’re pretty sure it’s gonna be good, but how can you tell for sure? Simple. Use a “Hallway Usability Test”, which will help you find out early on whether you’re on the right track.

Based on our own (Dashly.io) experience, we have prepared for you a step-by-step guide that will help you set up and conduct a Hallway Test correctly. You’ll learn how to pick the right respondents, what to ask them to make sure you get relevant responses, and how to record your data to get the most out of it. We’ll also tell you about some common mistakes that could really break your test, and how to avoid them.

What are Hallway Tests?

A Hallway Test is a quick usability test of the interface being developed. Its purpose is to make sure that users perceive it as intended by the developers, and to identify any obstacles that arise during its use.
Let’s say you’re about to release an update, and you want to determine whether you’ve picked the right approach.
Here’s how you do it.
First, your team needs to set up a prototype and share it with a group of users. Next, you will need a procedure for observing how “real people’s” perceptions of the layout compare with the expectations of the team.
These users may be any of your colleagues who are not involved in the development of the update, or even just random passers-by. A typical interview will take from 5 to 15 minutes. This is the Hallway Test.
Hallway Testing got its name because of the way it is conducted. You literally step out into a hallway and conduct the test with three to five random people. This gives you actionable feedback in a minimal timeframe.
Additionally, Hallway Testing saves you money. It shortens and optimizes the standard development cycle by allowing you to get feedback and fix key problems before the release.
By the way, you don’t even need to literally “go into the corridor” or even conduct a test in person. For instance, many companies conduct such tests remotely, because their users may be spread all over the globe — even in the other hemisphere.

When are Hallway Tests suitable, and when are they not?

Hallway Tests aren’t a silver bullet solution for everything. For instance, when it comes to testing graphic design options, A/B tests are more suitable.
Instead, Hallway Tests excel at uncovering design errors that interfere with the user experience. This includes the user interface (UI) of applications and programs, as well as content such as landing pages, emails, and articles with all of their components (images, texts, CTA, etc.).


Hallway Test Examples

In this section, we’re going to share with you some Hallway Tests that we ran at Dashly. Find out what worked, what didn’t, and why.
Our first example is a quiz on the Dashly website. The quiz gets people to answer a number of questions in order to understand the service better. Then, they’re asked to submit their email address, to which we can send them further training materials.
The planned scenario is:
Users complete the quiz → Submit their email
But things didn’t go quite as planned.
Marketer Ann participated in the Hallway Test as a client. After completing the quiz, she said that she likely would not submit her email. “I still haven’t understood anything yet, and they already want something from me”, she said.
The scenario breaks down: the prospect refuses to submit their email address.
This test showed us that the quiz needed to be changed. It wasn’t enough to just show the value of the service. We needed to explain the mechanics too, and the wording needed to be more specific.
So we ran a second iteration of the test accordingly.
This time, after the respondent was done with the quiz, he said that he expected to be shown how the service works, but instead was asked to submit his email address to receive further training materials. Even if not intended, this kind of mechanism may feel like a “bait and switch” tactic. This can be very off-putting for respondents, which makes it much less likely that they’ll submit their email address.
Again, temporary defeat. But we learned a valuable lesson — the focus of the quiz needed to change. We decided to highlight the tasks that our clients face, and then link them to specific features within the service, providing a solution.
Here’s a slightly different example.
Paula, our email-marketing specialist at Dashly, conducted a Hallway Test consisting of an email with an invitation to help configure lead generation through pop-ups.
One of the participants in Paula’s Hallway test pointed out that there was something wrong with one of the animations. So Paula decided to remove it.

How to set up a Hallway Test

Where to begin
Suppose you’ve already prepared an interface or landing page to test. The Hallway Test begins by defining a scenario — a series of steps that you envisage the user following during the interaction, with the ultimate goal of achieving a result. Essentially, this is a very simplified version of the customer journey map.
The scenario can be represented as follows:
A scenario is a map that helps analyze how users behave at each stage, and identifies barriers that prevent them from getting the desired outcome. The task of the researcher is to find such barriers and eliminate them.
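As a minimal sketch, a scenario can be modeled in code as an ordered list of steps, each with room to record the barriers observed at that stage. The step names and structure below are illustrative, not taken from the article:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str                                   # what the user is expected to do
    barriers: list = field(default_factory=list)       # obstacles observed at this stage

# A simplified customer-journey scenario, using the quiz example
scenario = [
    Step("Open the quiz"),
    Step("Answer the questions"),
    Step("Submit an email address"),
]

# During a test, the researcher logs each barrier against the step where it appeared
scenario[2].barriers.append("Unclear why an email is required")

for step in scenario:
    print(step.description, "-", step.barriers or "no barriers")
```

Once the test is over, the steps whose barrier lists are non-empty are exactly the stages the researcher needs to fix.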
The next step is to create a legend, which is then shared with the user. Below, you can find a template to help you create yours:
Here’s an example:
“Imagine that you are the owner of a small online store. You want to accept payments from your users directly on the site. To do this, you need to install an online cash register, but you don’t know how the installation process works. You visit the support site and you see this page.”

How to search for respondents

For Hallway Tests, you don’t always need to look for targeted respondents. In many cases, tests can be performed within the team itself. When it comes to basic things, the people in your team will easily find problems that real users are likely to encounter.
But there are some instances where using team members may prove counterproductive. One such case is when you need to evaluate the interface from the point of view of a new user who has never encountered your product.
Now here’s a practical example.
Let’s say your team is working on updating the onboarding process. All of your colleagues have already gone through it, and you want to make sure that the wording is understandable to people who are not immersed in the context of your work. Additionally, you need respondents within specific professional roles and a certain level of experience.
Here are your options:
  • ask colleagues, friends or acquaintances;
  • ask passers-by to participate in the test;
  • find people among your existing users who fit the selected criteria;
  • ask in special chats and groups on social networks;
  • use paid services to search for respondents (Respondent, User Interviews, UserTesting, UsabilityHub).
If you need something to coordinate time slots, communication channels and test tasks with respondents, Calendly is a good option:

How many respondents will be needed

For Hallway Tests, the concept of saturation applies, so the sample size must be sufficient to detect the main problems.
As soon as a pattern begins to emerge, it’s time to stop. To get a complete picture of the existing problems, 5 to 15 interviews will usually suffice.
In practice, this figure is reached over several iterations, each featuring 3 to 5 respondents. If in your first iteration three respondents already interact with your interface successfully, without encountering any obstacles, you can stop. Time permitting, you can conduct one more interview as a “control”. But if three out of five respondents have experienced difficulties, that’s a clear sign you need another iteration.
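The stopping rule above can be sketched as a simple majority check per iteration. This is a rough heuristic of our own making based on the text (the function name and the 50% threshold are assumptions, not a formula from the article):

```python
def needs_another_iteration(respondents_with_obstacles: int, total_respondents: int) -> bool:
    """Heuristic: if a majority of an iteration's 3-5 respondents hit
    obstacles, run another iteration; if the early respondents all
    succeed, it's safe to stop."""
    if total_respondents == 0:
        raise ValueError("run at least one interview first")
    return respondents_with_obstacles / total_respondents >= 0.5

# Three out of five respondents had difficulties -> iterate again
print(needs_another_iteration(3, 5))  # True
# All three early respondents succeeded -> safe to stop
print(needs_another_iteration(0, 3))  # False
```

In practice, of course, you also weigh how severe the obstacles were, not just how many respondents hit them.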

How to organize the process

Rule 1: Prepare everything in advance.
The user shouldn’t have to look for items or download emails, articles or landing pages while participating in the test.
Rule 2: Record a video.
Why do you need a recording?
  • The user’s non-verbal behavior (facial expressions, gestures, emotions) can provide some very important information — sometimes more than what the user says.
  • If you’re just taking notes manually, you might miss some important details.
  • During the test itself, it is difficult to independently monitor non-verbal signals. Your focus should be on how the user interacts with the interface, and to find ways to expand the conversation further.
  • Reviewing the recording will give you a chance to see things you might have missed during the test.
  • The recording can be shown to colleagues as evidence to back up your conclusions, or so that they can help you with the interpretation.
When filming, it is important to observe two points: the respondent’s face, and the interface itself.
Here is an example where the context is not visible:
And this is an example where the context is visible but the face is not:
This is the recommended way to do it. Note that both the face and the context are visible:
It is also worth paying attention to how the user controls the screen.
There are two options:
  • You show a screen demonstration and control the mouse. In this case, respondents will have to read out loud and ask you to scroll further. This is a good option when you need to learn about the perception of each block, or when you technically cannot give users access to the screen.
  • Respondents control the mouse themselves and scroll through the screen. This option is more advantageous. You can see how the person moves the mouse, which in turn indicates where the user is looking, and the context becomes clearer. Additionally, you won’t have to ask the user whether to scroll further, which can sometimes be a bit confusing.
Here’s a hack from our product team: it’s a good idea to run Hallway Tests together with a colleague, even if the test is being done remotely and recorded in Zoom. This makes it possible for one person to focus fully on the test itself while the second takes notes.
If the test is conducted in person, this setup is even more convenient: one of you can fully concentrate on the conversation while the other records the results. At Dashly, Hallway Tests are often conducted by a product manager and a product designer together.

How to talk and what questions to ask

At the very beginning, it is imperative to set the context so that the person understands the end goal. Without context, your respondents will come up with their own tasks and begin to speculate about what is wrong with the interface and how to improve it.
To set the context, share the legend with the respondent.
The next step is to ask three questions. These can be asked iteratively: for example, show one part of the landing page and ask the questions, then show the next part and ask the same questions again.
Three main questions for Hallway Tests: “What do you see?”, “What do you understand?”, “What do you want to do?”
“What do you see?”
This is an introductory question. Its purpose is to understand how respondents navigate the context in which they have been placed, and what attracts their attention first.
“What do you understand?”
This question seeks to understand how respondents interpret the information, whether they have understood the value proposition, and whether they have additional questions.
“What do you want to do?”
Other options are “What do you think will happen when you click this button?” or “Where would you click to do this?” These questions are aimed at clarifying the goals of each action and the respondent’s expectations of the interface.
The answers to these questions will help you see what respondents understood from your landing page, what they saw, and what they didn’t see. That last point is very important, especially if most of your respondents aren’t noticing the “Buy” button.
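The block-by-block questioning described above can be sketched as a small note-taking loop. The block names and the notes structure here are illustrative assumptions, not a real tool:

```python
# The three core Hallway Test questions, asked for each block of the page
QUESTIONS = ["What do you see?", "What do you understand?", "What do you want to do?"]

def prepare_block_notes(blocks):
    """Build an empty notes structure: one slot per (block, question) pair,
    to be filled in with the respondent's answers during the session."""
    notes = {}
    for block in blocks:
        notes[block] = {question: None for question in QUESTIONS}
    return notes

notes = prepare_block_notes(["Hero section", "Pricing", "Buy button"])
# During the interview, answers are recorded block by block
notes["Buy button"]["What do you see?"] = "Didn't notice the button at first"
print(list(notes["Pricing"]))  # the three questions, in order
```

Afterward, scanning for blocks where “What do you see?” came back blank or negative points you straight at the elements respondents never noticed.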
Now, let’s look at some questions that may arise during a Hallway Test, but that should be asked with great care:
“What would you change on this site / landing page?”
The risk here is that the respondent will get carried away and start fantasizing about features that need to be added. You always need to find out what problem a person wants to solve with a feature, then come up with a solution for it (and understand whether this problem needs to be solved at all).
“If we remove / add this feature, will you like this page?”
People do not predict the future well, and in response to such a question they will most likely not be completely truthful, because the question contains your expectations. Instead of asking it, study the path that led the respondent to the negative perception. For example, ask “Why is it important for you to have this feature?”
Sometimes a Hallway Test turns into an expert interview. A clear statement of the problem, plus the scenario, legend and timing, will help you avoid this. If it happens anyway, use the same set of key questions to get the respondent back on track.

Mistakes to avoid when organizing the test

  1. Not preparing the scenario. Without a scenario, it is not clear what task users should solve or how they should do it. In addition, without a scenario it will not be possible to compose a coherent legend.
  2. Not explaining the legend to the respondent before the test. If this is not done, then the respondent will not understand what needs to be done. As a result, they are likely to start speculating about what is expected of them, which in turn will make their responses unreliable.
  3. Not recording everything the respondent says and does. Audio recordings and paper notes alone will not work, because they miss a huge layer of information – non-verbal behavior. Video captures all of it.
  4. Talking too much. Talk only as much as is necessary to explain the product and what is expected of the respondent. If any questions arise, answer them to the point without getting chatty. If you derail the test with excessive conversation, the results you get will be unreliable.
  5. Interrupting the respondent or getting upset when they speak. Your role should be that of a facilitator: give the respondent a clear direction and goal, and clarify any doubts they have. Their comments can be confusing – perhaps the user wanted to take another action but the instructions led them astray. The key here is active listening.
  6. Failing to keep the respondent actively giving feedback. During Hallway Tests, your respondent should be reading out loud and voicing their actions so that you can follow their thought process. If a person does not say anything, prompt them with the three questions: “What do you see? What do you understand? What do you want to do?” Avoid situations where the respondent silently leafs through the content without offering any feedback.
  7. Failing to pay attention to emotions. Paying attention to the emotions of your respondents will help you effectively manage the attention of other users. If a landing page is delightful, users are more likely to scroll down and see the subsequent blocks of content. If the emotion drops off somewhere, your customers will leave.
  8. Defending the product instead of listening to the respondent’s criticism. Remember, respondents are here to help you, not hinder you. If you follow these guidelines carefully, you are more likely to receive valuable and insightful feedback. Not all of it might be pleasant, but it would be unwise to dismiss it.

How to Analyze the Results

How to organize the collection of results
Once you’re done with the testing, you need to sort the results in a way that makes them easy to understand and interpret.
In our team, we use a selection of tools to synchronize all the research across the team, share useful materials, control the process and share the results: a dedicated channel in Slack, a board in Favro, a folder on Google Drive, and special meetings to discuss results and future actions.
This is what the board looks like in Favro:
Tip: In addition to the video, you should also save the materials themselves in the form in which you showed them to the respondent (prototype or screen), together with the scenario, legend and brief conclusions.
This is what the interview card looks like in Favro:
To store and analyze insights, create a document (Google Doc or Notion) or a spreadsheet.
For example, Konstantin from the Dashly product team takes notes on everything the respondent did during the testing, and records every problem that arose while the respondent worked through the task, along with the insights gained.
Valerie, from our marketing team, came up with a great solution for capturing all the obstacles that users encountered on the Dashly knowledge base landing page, which we recently launched. She used Miro for data capture and visualization.
The results of the Hallway Tests of the Dashly knowledge base landing page:
  • Left – the landing page itself, and notes with new versions of the text and visual materials
  • Right – all significant comments from respondents that needed to be recorded
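A minimal record per interview, along the lines of the interview cards described above, might look like the sketch below. The field names are illustrative assumptions, not Favro’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class InterviewCard:
    respondent: str                                 # who was interviewed (and their role)
    material: str                                   # the prototype/screen exactly as shown
    scenario: str                                   # the planned user path
    legend: str                                     # the context given to the respondent
    obstacles: list = field(default_factory=list)   # barriers observed during the session
    insights: list = field(default_factory=list)    # conclusions drawn afterward

card = InterviewCard(
    respondent="Ann (marketer)",
    material="Quiz, iteration 1",
    scenario="Complete the quiz -> submit email",
    legend="You run a small online store and want to understand the service.",
)
card.obstacles.append("Did not understand why her email was needed")
card.insights.append("Explain the mechanics, not just the value")

print(card.respondent, len(card.obstacles), len(card.insights))
```

Keeping all cards in one spreadsheet or doc makes it easy to spot obstacles that repeat across respondents.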
The next important step when analyzing the results of a Hallway Test is to discuss them with the team, so that everyone stays in sync.

How to Draw Conclusions

Remember the scenario that we drafted at the planning stage? That will come in handy as we analyze the results.
We’ve run the test and we have a recording of the process. Now it’s time to look through the recording and map the obstacles the respondent encountered onto each stage of the scenario.
Here’s how it might look, based on the example of Eugene’s test:
Now, the team can eliminate the obstacles identified in the test, create the next iteration, and then conduct another series of Hallway Tests to evaluate how effective these changes have been in smoothing out the respondent’s journey through the process.

Mistakes to avoid when analyzing results

  1. Giving too much weight to the respondent’s advice. Just because a respondent suggests adding, say, a handoff to Facebook chat, it doesn’t mean you should do it immediately, or at all. Your primary job is to identify and deal with the problem the person wants to solve.
  2. Only paying attention to what the respondent is saying. When going through the recordings, it is important to pay attention to the respondent’s emotions in order to identify problem areas. If a respondent looks flustered or frowns, they have likely stumbled upon an obstacle that will cause your users to leave the page.

Useful services and applications

On a final note, here’s a selection of tools we’ve found useful when conducting our Hallway Tests.
Calendly – for coordinating time slots and communication channels with respondents.
Zoom – for video calls with the ability to share the screen and record the call.
Getcover – for demonstrating prototypes.
Figma – for creating a scenario or customer journey map.
Miro – for creating a scenario and a frame for visualizing the results (and sharing them with the team).
Favro – for scheduling interviews, storing results and team synchronization.
Now you have everything you need to become the king of the corridors. All that’s left is to start running your tests.
P.S.: Believe it or not, we hallway-tested this article too!

Published by HackerNoon on 2020/03/01