What I Learned When I Changed the UX Research System at my Company

by Maria Plieva, June 29th, 2023

Too Long; Didn't Read

Yandex's main ad service for targeting ads has settings to adjust cost-per-click. A client can set the maximum cost per click in the interface, but managers noticed that clients don't use this parameter. I set out to understand why.

Lately I've been working at Yandex; my division does research for advertising products such as targeted ads. As a quantitative researcher, I was puzzled that there were no unified quantitative UX metrics. At the same time, designers and managers often came to me with requests for one-off studies.


The trouble with these studies was that each covered a very narrow question, while no one was responsible for UX as a whole.


Let's see how this played out in a real case study.


Our main ad service for targeted ads has settings to adjust cost-per-click. A client can set the maximum cost per click in the interface, but managers noticed that clients weren't using this parameter. My first approach was to measure some classic UX metrics, as I thought this would give a general understanding of the situation.


So I measured the classic UX metrics:

  • Task Completion Rate - the percentage of users who successfully complete the task of setting the maximum cost-per-click value.
  • Error Rate - the frequency and severity of user errors during interactions with the interface.
  • Time on Task - the time users take to complete the task of setting the maximum cost-per-click value.
  • Click-through Rate (CTR) - the rate at which users click on specific elements, such as buttons or links.


The results, however, didn't clarify what the problem was: the quantitative metrics were within reasonable limits, and at a qualitative level I didn't know what to do next.


What really helped me was using the System Usability Scale survey. The SUS is a widely used questionnaire-based model that assesses the overall perceived usability of a system. It consists of a set of statements related to usability, and users rate their agreement on a Likert scale. The SUS provides a standardized measure of usability and helps in comparing different interfaces or iterations.


Let me say a little about the methodology of conducting such a survey. The questionnaire consists of 10 statements, and the respondent scores each from 1 to 5, where 1 means "strongly disagree" and 5 means "strongly agree":


  1. I think that I would like to use this system frequently.

  2. I found the system unnecessarily complex.

  3. I thought the system was easy to use.

  4. I think that I would need the support of a technical person to be able to use this system.

  5. I found the various functions in this system were well integrated.

  6. I thought there was too much inconsistency in this system.

  7. I would imagine that most people would learn to use this system very quickly.

  8. I found the system very cumbersome to use.

  9. I felt very confident using the system.

  10. I needed to learn a lot of things before I could get going with this system.
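Under the standard SUS scoring rules (odd-numbered items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the sum is multiplied by 2.5 to land on a 0-100 scale), a single respondent's score takes only a few lines of Python. The example responses below are made up for illustration:

```python
def sus_score(responses):
    """Compute a 0-100 SUS score from ten Likert ratings (each 1-5)."""
    if len(responses) != 10:
        raise ValueError("SUS expects exactly 10 item ratings")
    total = 0
    for item, rating in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        contribution = (rating - 1) if item % 2 == 1 else (5 - rating)
        total += contribution
    return total * 2.5

# Hypothetical respondent: agrees with every positive item,
# rejects every negative one -- the best possible score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Note that because of the alternating wording, a respondent who circles "3" for everything lands exactly at 50, not at the top or bottom of the scale.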


Of course, there are other important steps to keep an eye on besides the questionnaire itself:


  • Selection of Participants: recruit a diverse group of users who represent the target audience of the application. In my study, the participants ranged from novice users to experienced professionals in the ads industry.

  • SUS Questionnaire: participants receive the SUS questionnaire, consisting of the ten statements above, and rate their agreement with each on a Likert scale from 1 (strongly disagree) to 5 (strongly agree).

  • Data Collection: the completed questionnaires are collected and anonymized. Each participant's responses are scored, and individual SUS scores are calculated using the standardized SUS scoring formula.

  • Analysis and Interpretation: the collected data is analyzed to determine the average SUS score for the interface. Additionally, the individual responses to each statement are examined to identify specific areas of strength and weakness.
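As a sketch of that analysis step, per-item averages across participants can flag weak spots: for positively worded odd items a low mean is a bad sign, while for negatively worded even items a high mean is. The ratings below are purely illustrative, not data from the actual study:

```python
from statistics import mean

# Hypothetical per-item ratings: rows = participants, columns = the 10 SUS items.
ratings = [
    [2, 4, 2, 4, 3, 4, 2, 4, 2, 4],
    [3, 3, 3, 3, 2, 4, 3, 3, 3, 3],
]

# Per-item averages highlight which statements drag usability down.
item_means = [mean(col) for col in zip(*ratings)]
for item, m in enumerate(item_means, start=1):
    # Flag low means on positive (odd) items and high means on negative (even) items.
    weak = (item % 2 == 1 and m < 3) or (item % 2 == 0 and m > 3)
    print(f"item {item:2d}: mean {m:.1f}" + ("  <-- weak spot" if weak else ""))
```

In practice this kind of per-item breakdown is what turns a single SUS number into actionable findings, since it points to the specific statements where respondents disagree with the positive wording or agree with the negative one.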


Turning to the results of my study: the average SUS score for the service under review was 55, which is below average on the SUS scale (a score around 68 is commonly cited as the benchmark mean). This indicated room for improvement in usability and in users' understanding of the feature. Analysis of the individual responses to the SUS statements revealed that participants found the interface complex and avoided using it, which the standard quantitative metrics alone had not surfaced.


Based on the findings from the SUS assessment, I devised an action plan to address the identified usability issues. First, the interface needed simplification: the company decided to streamline it by decluttering unnecessary elements, simplifying terminology, and reducing visual complexity to make it more intuitive. Second, onboarding and user guidance: I advised implementing an interactive onboarding process to walk new users through the software's features and functionality. This included interactive tutorials, tooltips, and contextual help to assist users in understanding and effectively using the product.


By leveraging the System Usability Scale, I gained valuable insights into the strengths and weaknesses of our software interface. The SUS assessment helped identify specific usability issues, leading to targeted improvements in interface design and user experience. Through a combination of interface simplification, improved user guidance, and enhanced support, our team aimed to boost user satisfaction, increase adoption rates, and ultimately meet our business objectives.