Usability Testing Guide

Compiled and authored by Mifos volunteer, Denila Philip

The goal of usability testing is to validate the user experience of your app or website. Done correctly, usability testing is a powerful technique in the user-centered design process for understanding how users interact with your product, and what frustrates or delights them about its interface and workflows.

The following is a quick guide to setting up and conducting a usability test.

What and when to test:

The basis of user-centric iterative design is to start testing as early in the design process as possible – initial tests can be conducted with simple clickable prototypes or low-fidelity wireframes built with tools like InVision or Balsamiq.

Based on the feedback you get, make updates to the design, and repeat the process until you are ready to test with high fidelity designs.

Conducting usability testing at multiple points in your design process will help validate design assumptions, and lets you pinpoint the steps needed to achieve the project goals with as little disruption as possible.

Select users for testing:

Ideally, it is best to conduct usability testing with the actual customers / end users of the product. For example, if you’re building a self-service banking application, determine the target demographic that you are designing the application for, and identify whether you can recruit users from that target base.

In many cases, you can use basic demographic criteria (such as age, gender, and income level) to recruit your study participants. If you have a broad target market or if you’re simply looking to identify basic usability issues in your product, then it’s fine to use broad demographics.
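If you track candidate participants in a simple list or spreadsheet export, a basic demographic screen can be scripted. This is a minimal sketch; the field names (`age`, `income_level`) are illustrative assumptions, not part of any particular recruiting tool:

```python
# Hypothetical participant screener: filters a candidate list by
# basic demographic criteria (age range, income bracket).
# Field names are illustrative, not from any specific tool.

def screen_candidates(candidates, min_age, max_age, income_levels):
    """Return candidates matching the target demographic."""
    return [
        c for c in candidates
        if min_age <= c["age"] <= max_age
        and c["income_level"] in income_levels
    ]

candidates = [
    {"name": "A", "age": 29, "income_level": "low"},
    {"name": "B", "age": 45, "income_level": "middle"},
    {"name": "C", "age": 33, "income_level": "middle"},
]

selected = screen_candidates(candidates, 25, 40, {"low", "middle"})
print([c["name"] for c in selected])  # → ['A', 'C']
```

The same filter can be widened or narrowed as your target market definition changes between rounds of testing.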

When recruiting users for testing, it is important to ensure you don’t introduce bias – for example, testing the application with your team or close friends may be a great way to get some quick peer review feedback, but keep in mind that there is a high chance of bias.

Some methods teams/companies use for recruiting users:

  • Actual users within your target customer base
    • Cons: may have some bias if they have already seen an existing version of your product/app
  • Recruiting users via Craigslist or similar boards
    • Pros: wider reach
    • Cons: higher compensation/incentives required; greater variability in quality
  • Approaching strangers
    • Pros: less effort, cheap
    • Cons: least likely to be representative, so feedback may not be useful
  • User testing companies such as Validately
    • Pros: wide reach, filtering capabilities, less work
    • Cons: can be expensive, may be hard to find specialized groups

Determine your mode of testing:

Depending on your location, or the location of your users, there are different methods and tools you can use to conduct usability testing:

Moderated vs Unmoderated testing:

  • Screen-share tools for moderated testing if users are remote – Google Hangouts, Skype
  • To record your test sessions and review them later, you’ll need a paid service like GoToMeeting or Zoom
  • For unmoderated testing, online services like Validately allow you to create a task-based instruction list for your test and then send it out for testers to perform

Create a test plan

Your test plan is the series of tasks and questions your participants will follow and respond to during the study. Here is a framework that you can follow for developing a usability test plan:

  • Define goals for the usability test

Pinpoint the objective of conducting your usability test. Initially you may simply want to understand how users perform core tasks and navigations within the app / website. As you conduct more tests and further iterate on your designs, you may want to get more granular.

For example: “Validate how users navigate to the loan application creation workflow within a self-service banking application.”

  • Develop the user test script

The user test script should include the following:

An overview of the app/website, and an overview of the process. Here’s a sample introduction script.

“Thank you for participating in our customer feedback session. We are asking users of XYZ application to test out our prototype so we can see whether it works as intended and meets your needs, and whether you can perform this workflow efficiently. The purpose of XYZ application, as you may know, is <insert application overview here>

Do note that what you will be giving us feedback on today is not a finished product. This is a clickable prototype that consists of several mockups of an application.

The first thing I want to make clear right away is that we’re testing the site, not you. There are no right or wrong answers. As you use the prototype, I’m going to ask you as much as possible to try to think out loud: to say what you’re looking at, what you’re trying to do, and what you’re thinking. This will be a tremendous help to us. We are doing this to improve the application, so we need to hear your honest reactions.

If you have any questions as we go along, feel free to ask them. I may not be able to answer them right away, since we’re interested in how our customers interact with the prototype without assistance. However, if you still have any questions when we’re done, I’ll try to answer them then.”

Initial questions:

Some examples:

  • Where are you geographically located?
  • What apps/tools do you use typically for managing your savings accounts?
  • How frequently do you use this app?
  • What information do you typically like to see when you first open the app?


List of task-based / interactive prototype questions:

 A task should be an action or activity that you want a user to accomplish at that time.

 Example of a task: Create and submit an application for a loan

 Use a question when you want to elicit some form of feedback from a user in their own words.

 Example of a question: You need to apply for a home building loan with a principal amount of $100,000 for a loan term of 40 weeks. How would you go about submitting the application?

Follow up question:

  • Where would you look for the status of your loan application?
  • Was anything difficult or frustrating about this process?

 General questions for each task or at the end of the test:

  • Which action/workflow was most challenging for you?
  • What features did you find most useful?
  • What additional feedback do you have on this feature?
  • Would you recommend this site/app to a friend? Why, why not?
  • How could the navigation be improved?
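To keep sessions consistent across participants, the test plan above can be captured as structured data and printed as a session script. This is a sketch of one possible schema, populated with examples from this guide; it is not a prescribed or standard format:

```python
# A minimal, illustrative structure for a usability test plan:
# each task pairs an instruction with its follow-up questions.
# The schema is a sketch, not a standard format.

test_plan = {
    "goal": "Validate how users navigate to the loan application workflow",
    "initial_questions": [
        "Where are you geographically located?",
        "How frequently do you use this app?",
    ],
    "tasks": [
        {
            "instruction": "Create and submit an application for a loan",
            "follow_ups": [
                "Where would you look for the status of your loan application?",
                "Was anything difficult or frustrating about this process?",
            ],
        },
    ],
    "wrap_up_questions": [
        "Which action/workflow was most challenging for you?",
        "Would you recommend this site/app to a friend? Why, why not?",
    ],
}

# Print a session script in order: goal, warm-up, tasks, wrap-up.
print("Goal:", test_plan["goal"])
for q in test_plan["initial_questions"]:
    print("Ask:", q)
for i, task in enumerate(test_plan["tasks"], 1):
    print(f"Task {i}:", task["instruction"])
    for q in task["follow_ups"]:
        print("  Follow-up:", q)
for q in test_plan["wrap_up_questions"]:
    print("Ask:", q)
```

Keeping the plan in one place like this also makes it easy to diff what changed between test rounds as the design iterates.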

Conduct the test: What to watch out for

  • For task-based questions, it is a good practice to ask the user to think aloud while performing the task.
  • Listen for strong emotional elements in responses - “I hate that I have to click through 3 pages to get to this”
  • After observing a user completing a task, it’s helpful to ask for a verbal reaction:
    • What are your initial thoughts on this screen?
    • Is this what you expected to see?
    • If this wasn’t a test, would you have completed this task?
    • Would you have done this differently? How?

  • Always ask non-leading questions.
    • An example of a leading question is “We developed this new feature that shows you a graph of your savings on a monthly basis. Do you like it? Does it look nice?”
    • Instead, this question can be framed as “What are your thoughts on this graph? How would you use it? What information does it provide you?”
  • Note down any points where the user gets stuck while performing a task or completing an action
  • Note down the time taken to complete a task, the number of clicks, and the percentage of users who complete the task
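Those quantitative notes are easy to aggregate once each session is logged. The sketch below assumes one record per participant per task; the log format is an assumption for illustration, not a required schema:

```python
# Aggregate per-task usability metrics from session logs.
# The log format (one dict per participant per task) is an
# assumption for illustration, not a required schema.

def summarize(sessions):
    """Return completion rate, mean time, and mean clicks (completed runs only)."""
    completed = [s for s in sessions if s["completed"]]
    return {
        "completion_rate": len(completed) / len(sessions),
        "mean_seconds": sum(s["seconds"] for s in completed) / len(completed),
        "mean_clicks": sum(s["clicks"] for s in completed) / len(completed),
    }

sessions = [
    {"participant": "P1", "completed": True,  "seconds": 95,  "clicks": 12},
    {"participant": "P2", "completed": True,  "seconds": 65,  "clicks": 8},
    {"participant": "P3", "completed": False, "seconds": 180, "clicks": 25},
]

stats = summarize(sessions)
print(stats)  # completion_rate ≈ 0.67; time/click means over completed runs only
```

Averaging only over completed runs is one design choice; you may prefer to report abandoned attempts separately, since a long abandoned attempt often signals the most serious usability problem.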

Compile your results:

Here are a few ideas for analyzing and communicating your research findings:

  • Record screenshots, voice clips, or video clips of user actions
  • Back up your claims with user quotes from the studies. Use word clouds to display the most common words used throughout your study.
  • Categorize your findings into broader buckets, especially if there are recurring themes in the feedback you’ve received; e.g. “search,” “navigation,” “UI language,” etc.
  • Convert each feedback point into an actionable task for you or your team to follow up on – an output of the findings analysis can be a list of user stories to track toward completion.
  • Prioritize the stories based on scope, resources, technical feasibility, business value, and value to users.
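The word-cloud idea above is driven by a simple frequency count over transcribed responses. Here is a minimal standard-library sketch; the stop-word list is a small illustrative sample, not exhaustive, and the quotes are examples:

```python
# Count the most common words across user quotes, as input for a
# word cloud or theme analysis. The stop-word list is a small
# illustrative sample, not exhaustive.
import re
from collections import Counter

STOP_WORDS = {"i", "the", "to", "a", "it", "is", "that", "this",
              "of", "was", "on", "but", "and", "for"}

def word_frequencies(quotes, top_n=5):
    """Return the top_n non-stop-words across all quotes with counts."""
    words = []
    for quote in quotes:
        words += [
            w for w in re.findall(r"[a-z']+", quote.lower())
            if w not in STOP_WORDS
        ]
    return Counter(words).most_common(top_n)

quotes = [
    "I hate that I have to click through 3 pages to get to this",
    "The navigation felt confusing on the loan page",
    "Navigation was fine but the loan form was long",
]
print(word_frequencies(quotes, top_n=3))
```

The resulting (word, count) pairs can be fed directly into most word-cloud libraries, or simply scanned by eye to pick the recurring-theme buckets mentioned above.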
