In unmoderated usability tests, researchers are not present to explain the tasks and questions or to probe participants further, so effective tasks and questions must be written in advance.
Based on our experience creating, launching, and analyzing hundreds of unmoderated usability tests, we have prepared some recommendations to optimize your unmoderated tests.
This ensures that participants understand the questions without ambiguity and can provide accurate feedback. Participants should have a clear understanding of what they need to do and the context behind the tasks. Otherwise, different participants will interpret the task instructions differently, and you will not get the data you need.
“You are considering joining a gym and getting a membership. Enter the gym’s website and go to the membership program section. Browse the membership information and explain if you understand the benefits you will receive upon application.” [too long-winded]
“You are thinking of signing up at a gym. Go through the membership program on the gym’s website. Tell us the benefits of membership as you see them.” [tells participants what they need to know and what to do in a straightforward manner]
It’s important that your questions are neutral and don’t contain any leading or biased language, as this could influence participants’ responses. Leading questions put ideas in users’ minds and steer them toward the answer you want. In such cases, the responses would appear consistent, but they would not reflect the participants’ own genuine thoughts or ideas. So “ask” rather than “tell”.
“Look through the image before answering the following question: After clicking “$5 Discount”, do you think it will show any description?” [leading question]
“Look through the image before answering the following question: What are your thoughts about being able to click ‘$5 Discount’? Explain what you expect to see after clicking ‘$5 Discount’.” [asks rather than leads by telling]
Letting participants know when they should stop and proceed to the next task prevents them from feeling stuck or unsure of what to do next, helping them finish the test efficiently. It is also useful to indicate how long you want them to spend on a task, so that the majority of the test duration is not spent on the first few tasks.
Including reminders to give spoken feedback also helps. Speaking is an easier way for participants to elaborate on their thoughts, and it reduces the burden of typing answers to written, open-ended questions.
“Imagine you open an e-commerce homepage. Look through the detailed information on the page and give your feedback.” [participants would wonder whether the task is complete]
“Add 3 items from ‘inFashion’ to the cart and finish your purchase.
Your task ends after you check out the items. Speak and share your thoughts while you do this task.” [clear to participants when the task is complete]
Design your test to take about 15–20 minutes, with no more than 15 tasks (including follow-up questions) per test.
Research has shown that, despite variations in people’s attention spans, the average adult can only focus on a task for about 15 to 20 minutes [#1].
Plan your test properly to get high-quality feedback and avoid poor results caused by participant fatigue.
i) Ask open-ended questions to encourage participants to elaborate on their responses
Some people are good at expressing themselves by writing down their thoughts. An unmoderated test allows participants to respond in their own time and express their thoughts and feelings in their own words, in a real-life context. Take advantage of this by using Open Text survey tasks in the UXArmy platform. Open-ended questions also work well as warm-up tasks.
“What are your thoughts about being able to book flight/train tickets on a social media platform instead of an online travel agency?”
“What are some reasons you prefer to shop for groceries online?”
ii) Use closed-ended questions only for data that needs to be categorized
You may want to use closed-ended questions in some cases for unmoderated tests. Closed-ended questions are suitable for information that can be easily categorized (e.g. demographic data, lists of brands, quantitative data on users’ preferences). They help eliminate misunderstandings and make the types of answers easy to categorize.
UXArmy supports single-select, multiple-select, and 5-point and 7-point Likert-scale closed-ended questions.
You can also include demographic questions as screener questions, and only if they are relevant to the study. Too many questions of this type can be an obstacle to achieving your test goal: because they are easy and quick to answer, they may lower some participants’ interest in the other, more interactive tasks.
“Select the activities that you usually prefer to do while traveling.”
iii) Ask “why?” as a follow-up to closed-ended questions
If you think closed-ended questions are really needed for your test, always follow up with a “why?” question. It will give you more than a rating or a yes/no answer. Participants can write and elaborate on their choices in the closed-ended questions, or, since UXArmy provides voice recording, you can encourage them to explain “why” by speaking out loud.
“Please rate how satisfied or dissatisfied you are with the interface design.” [reason for the rating is not asked]
“Please rate how satisfied or dissatisfied you are with the interface design. Verbally explain why you chose that rating.” [reason for the rating is included]
iv) Don’t ask about more than one factor in a question
Asking about multiple factors in one question is likely to be misinterpreted by your participants. When analyzing results, the researchers on your team also cannot be sure which factor a response refers to. Given multiple factors, participants may not consider each one equally, resulting in a response that is biased towards one specific factor. It is recommended to split up the factors and dedicate one question to each.
“Please rate the satisfaction and the clarity of the interface design.” [satisfaction and clarity are different factors]
“Please rate how satisfied you are with the interface design. Speak and explain why you chose that rating.”
“Please rate the clarity of the interface design. Speak and explain why you chose that rating.”
[satisfaction and clarity separated out]
v) Always run a pilot study or dry-run test
Test the questions with a small group of users (e.g. internal users, friends, or colleagues) before launching to the actual participants. This will help you identify any confusing or unclear questions and allow you to refine them for maximum effectiveness.