Conducting a usability test

Last updated on 22 November 2021

General Structure – Moderated Usability Study

In a moderated usability study, there is a moderator (you) and a participant. The moderator guides the process and collects feedback. The test could take place remotely or in person.

Step 1: Goal-setting

Before starting the test, set the ground rules and be clear about the goal of the project.

Tip: Consider asking yourself or the stakeholders the following questions:

  • What is the business/usability goal?
  • Why are we testing this?
  • Who is the audience of the test?
  • What are the key parts to the test?
  • Where is the product in its development cycle?
  • What is the cost involved with the study?

Once you have that sorted out, you will be able to assess the following:

  • How much time will the session take? (Ideally an hour or less.)
  • How much will the participant be compensated?
  • How many participants do you want to test?

Tip: The current best practice is to include 5 to 8 participants in a test. Research shows that testing 5 participants reveals about 80% of the problems, while testing 9 reveals about 95%.
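
These numbers are in line with the commonly cited Nielsen–Landauer model, in which each participant has roughly a 31% chance of uncovering any given problem. Here is a minimal sketch of that arithmetic in Python (the 31% rate is the average Nielsen and Landauer reported, not a property of your particular study):

    # Expected share of problems found by n participants, per the
    # Nielsen-Landauer model: 1 - (1 - L)^n, where L is the chance
    # that a single participant uncovers any given problem (~31%).
    def problems_found(n, rate=0.31):
        return 1 - (1 - rate) ** n

    for n in (5, 9):
        print(f"{n} participants -> {problems_found(n):.0%} of problems")
    # 5 participants -> 84% of problems
    # 9 participants -> 96% of problems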

Step 2: Recruiting

Depending on the scope and intention of your project, you need to select the right participants. You may want to profile them based on criteria such as their role (for example, Site Builder), their prior experience with the feature, and whether they have already seen the design being tested.

For example, imagine you're testing a new design of an existing module, like Feeds. You would want to test Site Builders who are first-time users of the new design but have used the old design before. So you are looking for participants who are Site Builders, have used Feeds before, and haven't seen the new design of Feeds.

Where could you find participants?

  • Personal social connections, like friends, family and colleagues (if they fit the profile)
  • Social networking platforms like Twitter, Facebook, LinkedIn, and so on (Twitter works pretty well!)
  • Local CMS or Drupal cons, camps, and meetups
  • Web designer and front-end developer meetups (if they fit the profile)
  • Additionally, pass the recruitment profile to the UX team—they could help spread the word!

Step 3: Preparing

Next, we have to consider the setup and structure of the study, like what we're going to ask the participants.

First, determine which device you'll use for the test and, if applicable, which recording tool you'll use as well.

A typical study consists of three parts:

Part One: Pre-session questions

These are the questions you ask the participants before having them start on the test tasks. Here are some typical focus areas you may consider, depending on the goal of your test:

  • What do they do with Drupal?
  • [For a new feature or a new user] What are their expectations for a feature like this?
  • [For a new feature or a new user] How do they expect something like this to work?
  • [For an existing feature or an existing user] How was their experience? What were their pain and pleasure points?
  • [For a new or existing feature] What are the most important things that they would like to do with this feature?

Part Two: Tasks

This is the meat of the study. The tasks should be written in an unbiased and clear way.

In general, focus on tasks that are:

  • Typical or frequently performed tasks (like creating content)
  • Critical to operation (like deleting content)

Keep the following guidelines in mind while writing your tasks:

  • Don’t use terminology, jargon, or words that are unique to or appear in the interface of the product being tested.
    • For example, if you're testing the “Modules” page, you don't want to use the word “Modules” in the task.
  • Ensure the order of the tasks makes sense.
  • Watch out for dependencies between tasks.

Tip: At the end of each task, it can also be helpful to ask the participants, “How do you feel about doing this task?” to gather an overall sense of their experience.

Part Three: Post-session questions

This is the time to ask participants about their overall experience with the feature. Asking participants to rate their experience is of great value; common rating dimensions include effectiveness, efficiency, satisfaction, ease of use, and value. (One way to summarize such ratings is sketched after the list below.)

Typical post-session questions can look like the following:

  • How was your overall experience?
  • What are the things that you liked the most and the least?
  • If you had to rate this feature’s ease of use on a scale of 1 to 5, where 1 is completely unusable and 5 is completely usable, how would you rate it?
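
If you collect numeric ratings like the last question above, a quick per-question summary helps when you analyze the sessions later. A minimal sketch, using purely illustrative ratings:

    # Summarize 1-to-5 ease-of-use ratings across participants.
    # The ratings below are illustrative, not real data.
    from statistics import mean

    ease_of_use = [4, 3, 5, 2, 4]  # one rating per participant

    print(f"n={len(ease_of_use)}, mean={mean(ease_of_use):.1f}, "
          f"range={min(ease_of_use)}-{max(ease_of_use)}")
    # n=5, mean=3.6, range=2-5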

Step 4: Conducting

Consistency is important while testing. For each participant, present the same tasks, in the same order, using the same words.

As you greet and brief the participants, manage their expectations for the test. Inform the participants how long it is going to take. Tell them you'd appreciate it if they would think aloud and be candid in their comments (good and bad). Inform them that you are a neutral observer and that, most importantly, you're trying to evaluate the feature, product, or software, not them.

A typical participant briefing looks like this:

Welcome! My name is [blank] and I’m [blank]. Thank you for taking the time to participate in this usability test. It shouldn't take longer than [blank] minutes. The basic structure of the session will be a few pre-test questions, a series of tasks, then a few post-test questions.

As you complete each task, please think aloud and describe your steps, or what you are looking for. Your comments are very important to us; we ask that you give open and candid opinions, both good and bad.

Feel free to ask for clarification if needed, but I will be neutral throughout the test. Keep in mind we are testing the software/feature/product, not you, so there are no wrong answers in this test.

Try to complete the tasks as if you were doing this for real. Spend as little or as much time as you normally would doing these tasks. It is OK if you cannot complete each task, and we may not get to every task.

Do I have your permission to record this session?

If so, I'll just record our audio and the screen. Later, I may post the highlights from this test online. Your name or any other personally identifiable information will not be associated with the data.

Any questions before we begin?

Moderating a usability test

They say moderating a usability test is a skill! With the right mindset and a few pointers, you will be able to gather the data you need. Here are some tips to consider:

  • Respect participants’ rights.
  • Ensure participants’ physical and emotional comfort.
  • Make a connection with the participant.
  • Minimize interruptions.
  • Be unbiased.
  • Watch out for signs of extreme frustration.
  • Don’t give away information inadvertently.
  • Give assists if needed (and note when you do).
  • Listen to the participant. Let them talk. You should talk only when needed.

See the 'Resources' section for examples of pre- and post-test questions that aren't really effective.

Taking Notes

You can take notes during the test, or review the recordings and take notes later. Choose whichever way is most comfortable for you.

You should note everything the participant does: where they go and what they say. Record quotes and timestamps for relevant moments. Also watch for verbal cues and facial cues (if in person).

Remember: Refrain from judging what is an issue and what is not. Doing that while taking notes adds to the note taker’s bias. See yourself as a scribe, recording without processing the information. This helps you collect more data that is closer to what actually happened.

Step 5: Analyzing

Now that you have done all the work, it is time to make sense of the data. Everyone analyzes data differently, so choose a style that suits you. One possible approach involves the following:

  • Browse through your notes.
  • Categorize the notes (into a spreadsheet, maybe) into positives, issues, and observations by participant.
  • Review all the participant notes for each positive, issue, or observation and note how many times it was encountered. In other words, look for patterns (see the sketch after this list).
  • Then, associate every issue with its severity (below), its impact, and its frequency.
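
As a minimal sketch of the counting step, suppose (hypothetically) that each categorized note is a row of participant, type, text, and severity; adapt the fields to your own spreadsheet:

    # Tally how many participants hit each issue -- these counts are
    # the "patterns" you will report. The rows are illustrative.
    from collections import Counter

    notes = [
        ("P1", "issue", "Couldn't find the Save button", "MAJOR"),
        ("P2", "issue", "Couldn't find the Save button", "MAJOR"),
        ("P2", "positive", "Liked the preview pane", None),
        ("P3", "issue", "Unclear error message on import", "NORMAL"),
    ]

    frequency = Counter(text for _, kind, text, _ in notes if kind == "issue")

    for text, count in frequency.most_common():
        print(f"{count} participant(s): {text}")
    # 2 participant(s): Couldn't find the Save button
    # 1 participant(s): Unclear error message on import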

Severity scale

  • CRITICAL: Usability catastrophe; imperative to fix before release
  • MAJOR: Major usability problem; important to fix, high priority
  • NORMAL: Minor usability problem; low priority
  • MINOR: Cosmetic problem only; fix only if extra time is available

Step 6: Reporting

Finally, write up your results to communicate them to the community! Think of yourself as a storyteller, trying to piece everything together while keeping the reader engaged. See Communicating the results.

If you write a report, be sure to address the following information:

  • Goals
  • Methodology
  • Participant demographics
  • Executive summary (a few paragraphs which briefly explain the high-level findings)
  • Detailed findings
  • Next steps (if necessary)
  • Screenshots, quotes and videos whenever possible

Resources

Ineffective Pre- and Post-Test Questions

Question: “Rate the degree to which the software you just used has no performance problem.”

Explanation: By saying “no performance problem,” you are biasing the question and nudging the participant to agree that there is little or no problem. A better way to ask would be: “How would you rate the performance of this software on a scale of 1 to 5, where 1 is very bad performance and 5 is very good performance?”

Question: “Are you interested in obtaining another degree in the next 10 years?”

Explanation: This is almost a rhetorical question. It feels like you want the participant to say yes. You could eliminate bias by asking, “What are your thoughts on obtaining another degree in the next 10 years?”

Question: "About how many times in the last year did you use online help?"

Explanation: The first assumption this question makes is that the participant has used online help. It also assumes that the participant remembers how many times they have used it; participants are likely to pick a number that is far from accurate, and the goal of asking a question is to get the most accurate information possible. Consider rephrasing the question as: “Have you or have you not used online help in the last year?” Then, if the participant says yes, ask them to estimate how many times they've used it from a range of numeric options (like 1-3 times, 4-10 times, 11-20 times, and so on).
