Beforehand, you will have written your test scenarios (upcoming article: How to write an effective scenario for your user tests) and recruited your users.
These qualitative tests let you observe your target users using the product in real conditions, so you can spot blocking points and improve your product.
Users test your application individually, one at a time.
Here are the steps for an effective test:
Quickly explain to the user how the test will go:
The first part of the test always starts with a few introductory questions.
Define a maximum of 5 questions to better understand your user's profile, product usage, habits, and needs. This mini User Research will serve you throughout the design of your user experience.
A few examples:
There are two schools of thought: letting users browse the application freely, or giving them a path to follow. Personally, to avoid bias, I prefer to define a path to follow and list tasks. A task is a user need placed in context:
Be careful not to define tasks like “Press the blue button”; instead, phrase them as goal-oriented prompts, such as:
For each task, ask the user questions that get them to comment out loud:
If you need more information about a key feature of your product without biasing your user, turn their own words back into a question: if they say “I don't find that clear”, ask “Why don't you find that clear?”.
At the end of the test, don't let the user leave right away. Summarize the highlights together with a few questions like:
You can also ask questions to measure product-market fit, check whether the user is in your target audience, or find out whether the product is really missing something:
You can also ask rating questions, where the user gives a score, if you want to measure how your product evolves from one round of tests to the next.
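For example, if you end each round of tests with a hypothetical rating question such as “On a scale of 1 to 10, how easy was it to complete this task?”, comparing the average score between rounds gives you a simple trend. A quick sketch, with made-up numbers:

```python
# Hypothetical ratings ("On a scale of 1 to 10, how easy was it to ...?")
# collected at the end of each round of user tests.
ratings_by_round = {
    "Round 1 (paper prototype)": [4, 5, 6, 5, 4],
    "Round 2 (clickable mockup)": [6, 7, 6, 8, 7],
}

# Compare the average score between rounds to see whether the product is improving.
for test_round, scores in ratings_by_round.items():
    average = sum(scores) / len(scores)
    print(f"{test_round}: average score {average:.1f} ({len(scores)} users)")
```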
Throughout the test phase, it is imperative to take notes, even if you record the sessions. Ideally, be two people: one to take notes and one to guide the user through the test.
Note the current task and user feedback, both positive and negative.
If your test is being broadcast to another room, ask colleagues there to take notes. An effective technique is to draw a grid on the wall (if you have whiteboards) with one column per user and one row per section of the prototype. Your colleagues write each user comment on a post-it (negative comments in red, positive ones in green) and stick the post-its on the grid at the end of the session. Group duplicate post-its together: this lets you spot recurring user comments very quickly.
If you are alone, take notes in your notebook and then group them in the same way in an Excel spreadsheet.
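If you would rather keep this grid in digital form, here is a minimal Python sketch of the same idea, with hypothetical comments: one note per user and prototype section, a positive/negative flag in place of the green or red marker, and a grouping step that surfaces comments made by several users.

```python
from collections import defaultdict

# One note per (user, prototype section): the comment and whether it was
# positive ("green" post-it) or negative ("red" post-it). Hypothetical data.
notes = [
    {"user": "U1", "section": "Onboarding", "comment": "Sign-up form too long", "positive": False},
    {"user": "U2", "section": "Onboarding", "comment": "Sign-up form too long", "positive": False},
    {"user": "U1", "section": "Dashboard", "comment": "Liked the summary view", "positive": True},
    {"user": "U3", "section": "Onboarding", "comment": "Clear welcome screen", "positive": True},
]

# Group duplicate comments per section, like grouping identical post-its.
grouped = defaultdict(set)
for note in notes:
    key = (note["section"], note["comment"], note["positive"])
    grouped[key].add(note["user"])

for (section, comment, positive), users in sorted(grouped.items()):
    marker = "+" if positive else "-"
    print(f"[{marker}] {section}: {comment} ({len(users)} user(s): {', '.join(sorted(users))})")
```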
No matter what results you get, there's no wrong conclusion.
While it's still fresh, listen to the recordings again and write your test report; this kind of document will do a better job of convincing your customer. Summarize the main comments by task or section of the prototype, specify how many users made each comment, and include the video recordings, the script, and all your per-user notes in the appendices.
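If your notes already live in a spreadsheet, a short script along these lines can produce the "how many users made this comment" counts for the report. The file name and column names (notes.csv with task, comment, sentiment, user) are assumptions; adapt them to your own export.

```python
import csv
from collections import defaultdict

# Hypothetical input file: notes.csv with columns task, comment, sentiment, user.
with open("notes.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Count how many distinct users made each (task, comment) remark.
users_per_comment = defaultdict(set)
for row in rows:
    users_per_comment[(row["task"], row["comment"], row["sentiment"])].add(row["user"])

# Write the summary, most frequent comments first, ready to paste into the report.
with open("report_summary.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["task", "comment", "sentiment", "users"])
    for (task, comment, sentiment), users in sorted(
        users_per_comment.items(), key=lambda kv: len(kv[1]), reverse=True
    ):
        writer.writerow([task, comment, sentiment, len(users)])
```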
Now it's your turn!
To complement this article, you can read: How to write an effective scenario for your user tests (article coming soon) and How to organize your user tests.