Usability testing: the complete guide

How to plan, conduct and report usability studies.

This ‘complete’ guide to usability testing follows an overview in my UX research methods playbook articles. Whole books have been written about this topic — some of which are listed in my sources.

Introduction

If you’re responsible in some way for a digital product or system, you should be doing usability testing — whatever your sector, industry or role.

It doesn’t matter whether we’re talking about a website, mobile app, AI assistant, AR/VR or other wearable technology; you need to test with users.

This article explains:

- what usability testing is
- why you should do it (the business case)
- which method you could adopt
- how to carry out testing and report your findings
- where you can find out more

What is usability testing?

Usability testing is a user experience (UX) research methodology carried out to uncover problems and opportunities in a design.

Basically, usability testing helps you find issues to fix and things to improve in a user interface — whether it’s a prototype or live product.

The terms ‘usability testing’ and ‘user testing’ are often used interchangeably. While they technically mean the same thing, I prefer usability testing because it places the emphasis on testing the usability of the product or system rather than the user.

But what is usability? Here’s a comprehensive definition:

“Usability is the extent to which a system, product or service can be used by specified users to achieve their goals with effectiveness, efficiency and satisfaction in a specific context of use.” — International Organization for Standardization (ISO)

Ok, now we know the what, it’s time to look at the why…

Why carry out usability testing?

On a purely academic level, decades of research have shown it to be a proven methodology:

Usability testing has been found to be an appropriate method for studying a website’s learnability, efficiency and memorability, which helps designers to create more usable products.

Ok, that’s great for UX theory nerds.

But what about the business case? How do you convince stakeholders to invest the time, money and effort — all precious resources — in testing how usable a product or system is?

Let’s first look at what bad usability could be costing you, and then at how good usability could benefit your organisation.

Potential negative impacts

Here are some potential impacts of poor usability:

- Loss of business — if your website is hard to use, some visitors will abandon it for a competitor. Not everyone will persevere through a bad UX to get a deal.
- Risk of exclusion — in some sectors there are no competitors, e.g. some health or government services. And if your users find it difficult to use, they might simply give up in frustration.
- Poor customer satisfaction — if users are struggling with usability issues, negative feedback and complaints will increase (which requires resources to manage).
- Legal issues and financial penalties — if usability issues could be considered accessibility issues, you might be in breach of regulations. That could result in legal consequences and even fines.
- Brand reputational damage — if you develop a reputation as a company or organisation that doesn’t care about usability, accessibility or inclusion, it could permanently tarnish your brand.

Positive impact of good usability

Here are some benefits of providing a usable experience:

- Increased sales / transactions — more usable online services mean fewer people dropping out at each step. Better usability also means more users starting a transaction.
- Reduced customer service costs — the fewer complaints and negative comments you receive due to poor usability, the fewer resources you have to commit to managing them.
- Improved customer loyalty — better usability and UX leads to satisfied users, and that leads to customers advocating for your brand — including recommending your product or service to others.
- More inclusive and accessible brand — having a culture and mindset geared to meeting diverse user needs is good for your reputation, your staff morale / retention and public perception of your organisation or business.

I’ll also include a great quote here from one of the original advocates of web usability and pioneers of usability testing:

“All web users are unique, all web use is idiosyncratic and to design a great site it’s essential to test.” — Steve Krug

Which testing method should you use?

Ok, so you’ve persuaded your stakeholders to commit to usability testing. That’s great. But don’t jump straight into recruiting testers.

To get the most out of usability testing, you first need to work out which method is most appropriate for your specific project.

There are broadly three types of usability testing:

- In-person (moderated)
- Remote (moderated)
- Remote (unmoderated)

Let’s go through each of these, establishing how they work and what the advantages and drawbacks are.

1. In-person (moderated)

During an in-person moderated usability test, participants are asked to attempt specific tasks using a system or product interface. Traditionally, all this happens in a controlled setting like a UX research lab.

UX lab setup (Lazar, Feng and Hochheiser, 2017)

In these tests, the researcher — called a moderator or facilitator — guides the participant through tasks, observing their behaviour and listening for feedback.

Participants are encouraged to think out loud to help the moderator understand their expectations and frustrations with the design.

However, UX labs are not necessary for in-person, moderated usability testing. A similar setup can be achieved by the researcher visiting the participant at their home, workplace or a calm, controlled neutral space.

Setup for moderated (in-person) testing in the user’s home

In-person (moderated) testing commonly focuses on qualitative data about how people use the product or service, such as insights, findings and anecdotes. This creates a specific flow of information during the session.

Moderated testing flow of information (Moran, 2019)

However, objective metrics such as task completion are also often recorded as part of qualitative-focused testing.

Benefits of in-person (moderated) testing include:

- Fewer distractions for participants
- Moderators can observe non-verbal communication like body language
- Researchers can build up rapport with participants, gaining deeper insights with prompts and follow-up questions

Benefits of testing with users in their own environment include:

- Alignment with ethnographic ideals of testing in users’ normal context, and potentially more natural behaviour
- It’s more inclusive, as it can be harder for disabled people to travel to a physical location

Drawbacks of in-person (moderated) testing include:

- It costs more time and resources than other forms of usability testing
- Potential for moderators to bias the participant
- Possibility of participants behaving unnaturally due to awareness of being observed (the ‘observer effect’)

Usability testing using a screenreader in the participant’s home

2. Remote (moderated)

Remote moderated usability testing works similarly to in-person studies, except the moderator and participant are in different physical locations, connected via screen-sharing software like Zoom or Microsoft Teams.

As with in-person testing, this methodology is known as synchronous testing because the moderator and participant interact in real time.

Multiple studies conclude that remote usability testing is effective and leads to usability findings similar to in-person testing.

Benefits of remote (moderated) testing include:

- Its effectiveness is comparable with in-person testing
- It costs less time and money than in-person testing
- You have access to a wider pool of participants without the limits of geographic location
- The project team can watch the testing in real-time and discuss it immediately after the session

Disadvantages of remote (moderated) testing include:

- It can lead to longer task completion times and higher mental workload for complex tasks
- Limited observation of non-verbal and interpersonal cues due to the video conferencing technology

3. Remote (unmoderated)

This approach takes remote testing one step further by dropping the moderator completely.

Instead of a real-time interaction, the researcher uses online remote-testing software to create tasks and questions for the participant to complete in their own time. The researcher can then watch the recording back later.

Unmoderated testing information flow (Moran, 2019)

Remote (unmoderated) testing is also known as asynchronous testing, as the researcher and participant aren’t interacting at the same time or in the same place. It works better for summative testing and quantitative metrics, such as time on task and clickstream data.
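
As an illustration of how those quantitative metrics get analysed, here’s a minimal sketch in Python, assuming hypothetical session records exported from whatever unmoderated testing tool you use (the field names are made up for the example):

```python
from statistics import median

# Hypothetical session records exported from an unmoderated testing tool
# (the field names here are illustrative, not from any specific product).
sessions = [
    {"participant": "P1", "task": "find_delivery_cost", "completed": True,  "seconds": 48},
    {"participant": "P2", "task": "find_delivery_cost", "completed": True,  "seconds": 95},
    {"participant": "P3", "task": "find_delivery_cost", "completed": False, "seconds": 180},
    {"participant": "P4", "task": "find_delivery_cost", "completed": True,  "seconds": 62},
]

completed = [s for s in sessions if s["completed"]]

# Task completion rate: successful attempts divided by all attempts.
completion_rate = len(completed) / len(sessions)

# Time on task is usually summarised with the median, which is less
# sensitive to the odd participant who wandered off mid-task.
median_time = median(s["seconds"] for s in completed)

print(f"Completion rate: {completion_rate:.0%}")  # Completion rate: 75%
print(f"Median time on task: {median_time}s")     # Median time on task: 62s
```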

Benefits of remote (unmoderated) testing include:

- Quick results and low-cost scaling
- Easy collection and analysis of quantitative data
- No recruitment or moderation skills needed
- Less risk of moderator bias or observer effect

Drawbacks of remote (unmoderated) testing include:

- Less control over test conditions and risk of distractions
- Unrepresentative users, and varying levels of motivation and commitment
- No opportunity for moderators to clarify, prompt, guide or respond to the participant

Summary of testing methods

Here’s an overview of the benefits and drawbacks for each testing method:

Benefits and drawbacks of different testing methods

Deciding on a method

You can see that each method has its upsides and downsides.

As a rule of thumb, opt for:

- moderated testing (either in-person or remote) if you’re primarily interested in qualitative insights, and have the resources to recruit, carry out and write up tests
- unmoderated testing if you’re interested in quantitative data, and have little time to spend on organising testing sessions

How to carry out usability testing

Usability testing on a participant’s tablet

The rest of this article focuses on moderated usability testing, as the process for facilitating unmoderated testing will depend on your testing tools and their features / limitations.

Moderated (in-person or remote) testing, though, is mostly within your control. You can plan and manage testing sessions as needed.

But before you do any testing, you need participants…

Recruiting participants

The ideal scenario for any UX researcher is that your organisation already has a pool of users to recruit from — or an established recruitment process.

If you’re starting from scratch, there are a few channels you can use to ask for volunteers to take part in usability testing (or offer incentives if you have the resources):

- Website banners
- Social media posts
- Newsletters
- Customer feedback surveys
- Online communities (e.g. Facebook, Reddit, Discord)
- Live events
- Posters, leaflets, etc.

How many participants do you need? That’s become a controversial question in recent years.

For a long time, the UX mantra was ‘you only need to test with five users’. This was based on research by UX guru Jakob Nielsen in the 90s that claimed you could find 85% of usability issues with just five users. The idea was that testing beyond five users led to mostly finding the same usability issues and diminishing returns for your time, cost and effort.
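
That claim rests on a simple problem-discovery model from Nielsen and Landauer’s work: if each participant uncovers, on average, a fixed proportion L of the issues, then after n participants you’ve found 1 − (1 − L)^n of them. Here’s a quick sketch of the arithmetic, assuming the commonly cited average of around 31% per participant:

```python
# Problem-discovery model behind the 'five users' rule of thumb:
# proportion of issues found after n participants = 1 - (1 - L)**n,
# where L is the average share of issues a single participant uncovers
# (roughly 0.31 in the studies the claim was based on).
L = 0.31

for n in range(1, 9):
    found = 1 - (1 - L) ** n
    print(f"{n} participant(s): ~{found:.0%} of issues found")

# With L = 0.31, five participants reach roughly 84%, and each extra
# participant adds progressively less ('diminishing returns'). A smaller
# L (complex products, diverse user groups) flattens the curve, so five
# participants find far fewer of the issues.
```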

In recent years the ‘five users’ mantra has been challenged. People have suggested all kinds of solutions with all kinds of numbers, including testing five users per segment.

In the end, my advice for most people is to follow this pragmatic and practical guidance:

Instead of saying, “how many users must you have?,” maybe the correct question is “how many users can we afford?,” “how many users can we get?” or “how many users do we have time for?” — Lazar, Feng and Hochheiser (2017)

Once you’ve recruited some users and organised the testing schedule, you then need to do some preparation for the actual sessions.

This includes writing your:

- test script
- consent form
- warm-up questions
- tasks (and prompts)
- cool-off questions

Test script

Your test script should state the purpose of the study, provide any forms required (like a consent form), describe the study setting, explain the test process, describe thinking out loud, and ask the participant to share any questions or concerns.

From this point on, we’ll use an example scenario: let’s imagine I’m carrying out moderated usability testing for a new design prototype. This desktop prototype — which I’m also calling ‘Project Caffeine’ — is for a hypothetical coffee subscription website called Bean Bros.

Homepage hero for the Bean Bros. prototype
Full prototype for the Bean Bros. prototype

I want to know if there are usability issues with the prototype I’m designing, and how it could be improved.

My script as the test moderator would look something like this:

- I’m Andy from [insert organisation]. I’m designing a website for a new coffee subscription service called Bean Bros.
- Our aim is to make the experience of buying coffee online simpler, easier and more fun, and to ensure it’s an inclusive experience.
- This session will take between 45 minutes and 1 hour, during which I’ll ask you to attempt tasks using the website.
- During the session I’ll ask questions and take notes. Please try to ‘think out loud’ to describe your expectations and frustrations.
- Please feel free to ask any questions, and remember you can end the session at any time.
- Before we start, I’ll ask you to sign a consent form to confirm you’ve agreed to take part in the research.

A moderator (right) going through the test session format

Consent form

Below is a boilerplate example of passages to include on a standard consent form. Obviously you’ll need to customise this template depending on how you facilitate your session and what data you capture.

- I agree to participate in the Bean Bros. prototype testing research being conducted by [research organisation].
- I understand that participation in this research is voluntary, and that I can withdraw my consent and ask for any of my data to be deleted at any time.
- I also understand that any data captured during this research will be stored securely by [research organisation], used only for the purposes of the research, and not shared beyond the project team.
- I agree to immediately raise any questions, concerns or areas of discomfort during the session with the interviewer.

Warm-up questions

These are all about building a rapport with the participant. If they feel comfortable and trust you, it’ll make the session a better experience for them and you’ll get better insights.

Example questions for the Bean Bros. prototype testing could include:

- How did you get here today?
- What led you to sign up to take part in this research?
- What’s your coffee routine at home?
- What’s your experience of ordering coffee online?

Tasks

This is the main course of your usability testing: what are you actually going to ask the participant to try and do?

It’s important that your tasks are clear and specific, so you know whether the participant has successfully completed each one. For example, instead of saying ‘Find out how the coffee delivery works’ say ‘Find out what the delivery cost is for the subscription’.

Example tasks for the Bean Bros. prototype could include:

- Find out how many coffee options are available on the Bean Bros. website.
- Find out if the ‘Aztec Gold’ Mexican coffee is in stock.
- Find out what a regular coffee subscription would cost.
- Begin an order by selecting your first two types of coffee to be delivered.
- Go through the checkout process — what is the delivery method and how much will it cost?
- Complete an order and give your feedback on the ordering process.

Will users find the tasks easy using the prototype?

The power of moderated usability testing is that you can prompt, guide and dive deeper into the participant’s experience.

You should avoid the temptation to help them or give them the answer. Instead, use your prompts to understand the user’s behaviour, expectations and frustrations.

Stick to open questions where possible.

Example prompts could include:

- Why did you decide to find information this way?
- What are you expecting to see on this page?
- How well do you think the content explains things?
- Are you able to describe what you’re struggling with?
- How would you proceed from here?

Cool-off questions

Once the tasks are done it’s time to wrap up: thank the participant for their time, and let them know their insights have been helpful.

Cool-off questions can yield some final insights, but are also just a natural, human way to bring things to a close.

Example questions could include:

- How did you find the tasks?
- What are your overall feelings about the [prototype/product]?
- Is there anything that could have made the session today work better for you?
- What are your plans for the rest of the day?

How to report usability findings

Usability studies are usually reported in a slide deck

Every usability study should culminate with a report.

Usability testing reports are useful for project teams as a record of what they learned, as well as a helpful resource for stakeholders on what requirements and features might need to change.

A good usability testing report should include the following:

- Executive summary — a hyper-condensed version of the report for people who won’t read the whole thing.
- Introduction — explaining the context of why the testing was carried out (research questions you wanted to answer).
- Methodology — outlining how the testing was carried out, from recruitment to testing, including participant demographics.
- Findings — your results, describing for each task the task completion rate, level of difficulty, issues (including annotated screenshots), a participant quote summing up the task, and recommendations (potentially including wireframes or user flows to visualise any proposed changes).
- Conclusion — discussion summarising the key findings and recommendations, and what future testing could cover.

The findings are a particularly important section. For example, the findings for one of the Bean Bros. prototype testing tasks might look like this:

Task 1:

Find out how many coffee options are available on the Bean Bros. website.

Results:

83% task completion (difficulty scale: 1 = easy, 2 = medium, 3 = hard).
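
For clarity, the completion rate is simply the proportion of participants who completed the task. As a purely hypothetical worked example (not the actual participant count for this study):

$$\text{completion rate} = \frac{\text{successful completions}}{\text{attempts}} = \frac{5}{6} \approx 83\%$$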

Issues:

1. The ‘Coffee’ page requires too much scrolling before users get to the actual coffee options.

Issue 1: Too much scrolling required

2. Users were confused that they couldn’t see at a glance how many coffee options were available (or how many were left to discover as they explored more).

3. Users were also frustrated that there was no way to filter the available coffees, or sort them in any particular order (e.g. by price, rating, etc.).

Issues 2 and 3: Lack of information and customisation

4. Some users were overwhelmed by the amount of content below the coffee card listings, while others missed or ignored it.

Issue 4: Too much content under the coffee listings

Issue Level: Important

Users found this task difficult to complete — and it uncovered further usability issues with the user interface.

User quote:

“I can see this is where I choose the coffee, but it’s not easy to get a sense of how many there are or how to narrow it down to what I want.”

Recommendations:

- Remove content at the top of the Coffee page to improve discoverability of coffee options and reduce interaction cost (scrolling).
- Add pagination to provide feedback about how many coffees are displayed (out of the total number). This also follows Jakob’s Law by making the page more familiar based on users’ prior experiences.
- Add filtering and sorting affordances for users to customise the coffee listings, which aligns with the usability heuristic ‘flexibility and efficiency of use’.
- Move the content below the coffee listings to a new page focusing on why users should sign up for a subscription. This will reduce cognitive load and improve the information architecture (IA) of the website.

Final thoughts

Ok, we’ve reached the end. I won’t add more wireframes or mock-ups showing recommended solutions as this guide is long enough as it is!

Hopefully the guidance, examples and template content presented here give a good overview of how to approach usability testing — especially moderated sessions.

This article is based on practical experience, as well as usability testing books, articles and other resources. You should be able to put this guide into practice, but check out the sources below for more information.

Sources

Barnum, C. M. (2020) Usability testing essentials: ready, set… test! 2nd edn. Cambridge, MA: Elsevier.

Hertzum, M. (2022) Usability testing: a practitioner’s guide to evaluating the user experience. 1st edn. Switzerland: Springer Nature.

Katunzi, S. (2022) Moderated versus unmoderated usability testing. Available at: https://www.uxmatters.com/mt/archives/2022/03/moderated-versus-unmoderated-usability-testing.php

Lazar, J., Feng, J. H., and Hochheiser, H. (2017) Research methods in human-computer interaction. 2nd edn. O’Reilly Online Learning.

Moran, K. (2019) Usability Testing 101. Available at: https://www.nngroup.com/articles/usability-testing-101/

Pernice, K., and Moran, K. (2020) Remote moderated usability tests: why to do them. Available at: https://www.nngroup.com/articles/moderated-remote-usability-test-why/

Vinney, C. (2024) The ultimate guide to usability testing for UX in 2024. Available at: https://www.uxdesigninstitute.com/blog/guide-to-usability-testing-for-ux
