Tagging: Research

  1. A Comprehensive Guide to Accessible User Research: Part 1 – Project Planning

    @BrianGrellmann’s first blog post in a series about taking an inclusive approach to accessibility in design research.

    Things to consider:

    • Goals
      • Continuous research with fewer participants or dedicated research with more participants (the former is better)
    • Fatigue
      • Sessions may run longer (~90 mins)
      • Review the session design to avoid fatigue
    • Budget
      • Potentially a harder recruit, allow for more recruitment time and incentive cost
    • Location
      • Some labs may have specific accessibility software, but the lab itself must also be accessible
      • Participants’ homes/workplaces offer more contextual insight into other access needs
      • Testing at the participant’s location incurs more travel cost
      • Remote tests must use accessible software and be able to capture audio (screen reader)
  2. Zoompathy. Coming to a research project near you?

    @bowmast on the importance of context in qualitative research, the tradeoffs we incur with remote research, and how to challenge them.

    Tradeoffs and risk.

    …short-term reliance on research methods without explaining the tradeoffs may risk training our clients to accept what appears to be a more convenient option. As design-at-pace sometimes seems more user-scented than centered…

  3. Undemocratising User Research

    @SaswatiSM’s argument against democratised research.

    Love this definition of a researcher ❤️

    A researcher is a dynamic thinker who has to adapt their methods and questions based on who is in front of them, how much they have already learnt and what new areas could be probed on.

    Saswati makes some great observations about analysis paralysis.

    More data is not equal to better data.

    … the ambitious were burnt out and the less ambitious, lowered the quality of their output or just gave up. None of these are outcomes you would want in your organisation on a regular basis.

    Actual outcomes.

    So what we really got was not a new pool of researchers but an access to some skill sets which we would normally not have got if we went via the typical prioritization route for their time and skills.

    Goals for strategic researchers.

    As a UX Research Lead, I would like 80% of my team’s time to go towards discovering the right opportunities. This will require UX Researchers to enhance certain skills — do more literature reviews, get better at identifying and tracking trends over time, improve one’s story-telling and quantitative market sizing skills to be able to own the full narrative around an opportunity.

  4. Democratization is our Job

    @beh_zod describes the importance of democratising research.

    What is democratization?

    …the reality is that democratization is about being a good navigator and a good passenger.

    How does it work?

    You want to be able to partner with other disciplines, set expectations, and educate, enable, and empower them to be rigorous in their curiosity.

    What is the value?

    …any work we can do to help others with systems, processes, tools, and programs will make their work more impactful and give them a deeper understanding of why our expertise is valuable.

  5. More, Broader, Faster: A Brief Intro to Effective Remote Research

    @productherapist’s tips for better remote research.

    Advantages of remote research:

    • Easier to reach a wider geographic audience
    • Lowers cost of attending physical spaces for research sessions
    • Easier for the participants to attend sessions

    Useful tips:

    • Have a backup: telepresence tool, email, phone number
    • Slow down to ensure comprehension
    • Send setup instructions before the sessions
    • Know when to quit and reschedule
    • Prepare and test
    • Get there early to get a head start
  6. Don't jump: My tips for interpreting user research

    @keithemmerson gives some great advice about observing research sessions.

    Keith makes an interesting point about the attitudes and motivations of research session observers.

    “… what’s more dangerous: someone who doesn’t attend any sessions, or someone that attends 1 or 2 sessions but misinterprets what they see and hear?”

    What I came to realise is that the question is not whether people should attend research. It’s how they listen and observe during research, and whether they take part in group analysis after research.

    In the full article Keith highlights behaviours to avoid during research sessions. I found these particularly interesting:

    1. Go to sessions with an open mind. It’s all too easy to attend a single lab session with the intention of confirming an existing hunch or assumption.
    2. If you have it, compare your test participant’s behaviour with data from your live or beta service.
  7. The 4 questions to ask in a cognitive walkthrough

    Dr. David Travis outlines the 4 questions to ask during a cognitive walkthrough and gives some useful, relatable real-world examples.

    The cognitive walkthrough is a formalised way of imagining people’s thoughts and actions when they use an interface for the first time.

    4 questions during a cognitive walkthrough

    1. Will the customer realistically be trying to do this action?
    2. Is the control for the action visible?
    3. Is there a strong link between the control and the action?
    4. Is feedback appropriate?
  8. How to Conduct a Cognitive Walkthrough

    IXD Foundation overview of the cognitive walkthrough method.

    If given a choice – most users prefer to do things to learn a product rather than to read a manual or follow a set of instructions.

    Four questions during a cognitive walkthrough:

    From Blackmon, Polson, et al.’s 2002 paper “Cognitive walkthrough for the Web”:

    1. Will the user try to achieve the right outcome?
    2. Will the user notice that the correct action is available to them?
    3. Will the user associate the correct action with the outcome they expect to achieve?
    4. If the correct action is performed, will the user see that progress is being made towards their intended outcome?

    How cognitive walkthroughs differ from heuristic evaluation.

    • Cognitive walkthroughs - goal and task focused
    • Heuristic evaluation - focus on entire product
  9. Cognitive Walkthroughs

    Brad Dalrymple gives an overview of the cognitive walkthrough method and shares a useful test spreadsheet template.

    Steps

    1. Identify the user goal you want to examine
    2. Identify the tasks you must complete to accomplish that goal
    3. Document the experience while completing the tasks

    Cognitive walkthrough questions:

    • Will users understand how to start the task?
    • Are the controls conspicuous?
    • Will users know the control is the correct one?
    • Was there feedback to indicate you completed (or did not complete) the task?
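
    Brad’s template is a spreadsheet, but the same record-keeping can be sketched in code. A minimal, hypothetical sketch (the field names and structure are my own, not from Brad’s template):

    ```python
    # Record one step of a cognitive walkthrough and flag the questions
    # that failed. Illustrative only -- structure and names are hypothetical.
    from dataclasses import dataclass, field

    QUESTIONS = [
        "Will users understand how to start the task?",
        "Are the controls conspicuous?",
        "Will users know the control is the correct one?",
        "Was there feedback to indicate you completed the task?",
    ]

    @dataclass
    class WalkthroughStep:
        action: str                                   # what the user must do
        answers: dict = field(default_factory=dict)   # question -> (passed, note)

        def failures(self):
            """Return the questions answered 'no' for this step."""
            return [q for q, (passed, _) in self.answers.items() if not passed]

    # Hypothetical example: one step of a "buy a ticket" goal
    step = WalkthroughStep(action="Tap 'Buy' on the event page")
    step.answers[QUESTIONS[1]] = (True, "Prominent button")
    step.answers[QUESTIONS[3]] = (False, "No confirmation after tapping")
    print(step.failures())
    ```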
  10. It’s Never a Good Time to Do Research

    @mulegirl’s take on the excuses we make not to run a research study and accepting the discomfort of asking questions.

    Why design research is postponed.

    The truth is that people tend to procrastinate and avoid activities that make them anxious in favor of those that deliver immediate satisfaction, and then justify their behavior with excuses after the fact.

    Terrific analogy for the role research plays in our own lives and how ridiculous it would be to use the same cadence or excuses for not doing research.

    Imagine you were going to buy a new car and I said that you couldn’t talk to anyone who had recently bought a car or read any reviews, or consider the real-world driving conditions. All you could do is run a 10-question survey of whoever volunteered to answer with no incentive and no follow-up.

  11. Five dysfunctions of ‘democratised’ research. Part 3 – Research as a weapon

    The third part in @leisa’s blog post series about scaling research. This post covers when team relationships go sour and research is ‘weaponised’.

    Why articulating the design is important.

    Another reason to see research being used as weaponry is to compensate for a lack of confidence or ability in discussing the design decisions that have been made.

    In a team where the designer is able to articulate the rationale and objectives for their design decisions, and there is trust and respect amongst team members, the need to ‘test and prove’ every decision is reduced.

    Research as a weapon.

    Feeling the need to ‘prove’ every design decision quickly leads to a validation mindset – thinking, ‘I must demonstrate that what I am proposing is the right thing, the best thing. I must win arguments in my team with ‘data”.

    If we focus entirely on validation and ‘proof’, we risk moving away from a learning, discovery mindset.

    Stoking the fire of conflict.

    Validation research can provide short term results to help move teams forward, but it can reinforce a combative relationship between designers and product managers.

  12. Elevate your research objectives - Write better research objectives to get better insights

    @productherapist explains how to refine your research objectives to gain better results from your research.

    What are research objectives?

    Objectives boil down to the main reasons you are doing the research; they are the specific ideas you want to learn more about during the research and the questions you want answered during the research. Essentially, the objectives drive the entire project, since they are the questions we want answered.

  13. Five dysfunctions of ‘democratised’ research. Part 2 – Researching in our silos leads to false positives

    @leisa’s second post about the common dysfunctions of ‘democratised’ research, this one focusing on researching in silos, the query effect, and false positives.

    How do we fall victim to the query effect?

    By focussing our research around the specific thing our team is responsible for, we increase our vulnerability to the query effect. That little feature is everything to our product team and we want to understand everything our users might think or feel about it, but are we perhaps less inclined to question our team’s own existence in our research?

    What is a false positive?

    Research that is focussed too tightly on a product or a feature increases the risk of a false positive result. A false positive is a research result which wrongly indicates that a particular condition or attribute is present.

    Why are false positives a problem?

    False positives are problematic for at least two reasons. Firstly they can lead teams to believe that there is a greater success rate or demand for the product or feature they are researching than is actually the case when experienced in a more realistic context. And secondly, they can lead to a lack of trust in research – teams are frustrated because they have done all this research and it didn’t help them to succeed. This is not a good outcome for anyone.

    How do we avoid false positives and gain more relevant insight?

    The role of the trained and experienced researcher is to not only have expertise in methodology but also to help guide teams to set focus at the right level, to avoid misleading ourselves with data. To ensure we not only gather data, but we are confident we are gathering data on the things that really matter. Even if that requires us to do research on things our team doesn’t own and cannot fix or to collaborate with others in our organisation. In many cases, the additional scope and effort can be essential to achieving a valid outcome from research that teams can trust to use to move forward.

  14. The Horizon of Inquiry – To find time for research, don’t fight the flow—fit into it

    @mulegirl shares a model of fitting research into the flow of continuous work.

    The key is not to optimize for your comfort, but to rethink how research integrates with the rest of your product work or other business decisions.

    Continuous learning is no different from continuous shipping. Big releases take longer and big questions do too. We’re just used to thinking of the endpoint of research as a report rather than a decision—an artifact rather than an action.

    • Generative: What problem might we solve?
    • Descriptive: What is happening currently/happened historically?
    • Evaluative: How well is our solution working?
    • Causal: Why is [x] happening/Why did [x] happen?

    The lower the overhead of identifying research questions, planning the study, and recruiting participants (if necessary) the more realistic it will be to accommodate interviews, competitive research, or usability testing within a development cycle. Develop good habits and document the steps.

    You can try timeboxing small research projects. Say for example “What can we learn about [x] by the end of the day?” We do this all the time in our daily lives when planning vacations or making major purchases. It’s the exact same process with a bit more rigor and collaboration.

    Every organization has cycles, whether it’s the school year, the fundraising calendar, quarterly reporting, or a continuous series of iterative product development sprints. You can’t fight time, so work with it. When you think a bit ahead and map your questions onto your calendar, you’ll soon hit your stride of continuous learning.

  15. Five dysfunctions of ‘democratised’ research. Part 1 – Speed trumps validity

    @leisa calls out the tradeoffs of prioritising speed over validity, the first of five research dysfunctions.

    It’s not a coincidence that people usually start with ‘build’ and rushing to MVP when talking about the ‘learn, build, measure’ cycle.

    What do we mean by validity? In the simplest terms, it is the measure of how well our research understands what we intend for it to understand.

    […] if the work that results from your research findings is going to take more than one person more than a week to implement, it might be worth increasing the robustness of your research methodology to increase confidence that this effort is well spent.

  16. What Really Matters - Focusing on Top Tasks

    My notes on researching Gerry McGovern’s Top Tasks article for an upcoming project

    The problem

    • The ease of publishing content leads to bloated websites and admin systems that eventually require redesigning
    • These redesigns become glossy facade fixes atop the unchanged mess of information and content

    Introducing Top Tasks Management

    Top tasks are:

    • A small set of the most important tasks for your customers
    • Numbering between 2 and 10 tasks

    The objective is to get these core tasks working as well as possible; otherwise, you run a high risk of losing your customer. Do this by reducing complexity and identifying what really matters to the customer.

    It also involves de-emphasising the smaller, less important tasks that, over time, contribute to a much bigger drain on resources and value to the customer.

    Less important tasks typically generate more content by the organisation.

    Identifying Top Tasks

    Get the organisation involved in gathering tasks

    Objective - build empathy with the customer, understand how they think.

    Change the mindset - ask what the customer wants rather than what the organisation wants.

    Data sources for gathering tasks:

    • Organisational philosophy - strategy, vision and objectives
    • Customer feedback - survey, help inquiries, support team insights
    • Stakeholder insight - considerations for top tasks
    • Competitor or peer websites - review similar tasks across domain
    • Traditional and social media - open discussions on various channels
    • Site behaviour analysis - top visited and interacted pages and assets
    • Search analysis - most popular site and public search engine search terms

    Two reasons why most popular pages and search keywords aren’t enough:

    1. They reflect what content you have, not necessarily what your customers want. These pages might also be a mix of top and tiny tasks.
    2. Search doesn’t give you the bigger picture. Bookmarked top tasks and well-constructed navigations mean tiny tasks are more likely to be searched for.

    The gathered lists usually contain duplicates, overlapping areas, and internal jargon.

    Generate a shortlist with stakeholders

    Objective - cut the list down to a maximum shortlist of 100 tasks.

    Duration - 4-6 weeks to do the research and generate the shortlist.

    Tips on shortlist generation:

    1. Use clear language - avoid jargon and other technological or marketing-centric terminology.
    2. Omit specific references to products or features and avoid using group names - use general terms that can cover all instances of product related tasks.
    3. Merge overlapping tasks - consider combining similar tasks into a single more generic task.
    4. Avoid high-level concepts and goals - try to keep tasks at a similar level, differentiated from the overall customer goal. Goal = the change; Task = the thing the customer needs to do to help achieve that goal.
    5. Exclude audience and demographic - tasks should be universal.
    6. Use nouns for tasks - avoid verbs if possible; scannability is improved by omitting them.
    7. Avoid repetition - aim for no more than 4 tasks that have the same first word.
    8. Keep it brief - max of 7 words or 55 characters per task.

    Subtasks should include 2-3 examples, added in parentheses, e.g. Task (subtask, subtask, subtask)
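
    Some of these rules are mechanical enough to check automatically. A minimal sketch (my own, not from the article) that lints a shortlist against the length rule (tip 8) and the repeated-first-word rule (tip 7):

    ```python
    # Flag shortlist tasks that break two of the mechanical rules above.
    # Illustrative only -- the example tasks are hypothetical.
    from collections import Counter

    def lint_shortlist(tasks):
        problems = []
        for task in tasks:
            # Tip 8: max 7 words or 55 characters per task
            if len(task.split()) > 7 or len(task) > 55:
                problems.append(f"Too long: {task!r}")
        # Tip 7: no more than 4 tasks sharing the same first word
        first_words = Counter(task.split()[0].lower() for task in tasks)
        for word, count in first_words.items():
            if count > 4:
                problems.append(f"{count} tasks start with {word!r}")
        return problems

    print(lint_shortlist([
        "Opening hours (branch, phone, online)",
        "Order a replacement card",
    ]))  # -> [] when every rule passes
    ```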

    The objective here is to involve as many teams and gain consensus from as many key stakeholders as possible. There may be a need to bend the rules to prove that a top task isn’t needed or to observe customers’ reactions.

    Get customers to vote and rank

    The shortlist is then sent to a representative sample of customers to complete.

    They must:

    1. Choose 5 tasks that matter most to them
    2. Rank the chosen 5 tasks - 5 = most important, 1 = least important

    The survey is designed this way for two reasons:

    1. It forces a gut reaction - what customers do vs what they say
    2. It exposes the top tasks and the tiny tasks as a hierarchy of importance

    Order tasks by highest/lowest vote

    Results of the survey will expose the top, medium, small and tiny tasks.
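
    The ordering step is simple arithmetic. A minimal sketch of the tally, assuming each ballot lists a participant’s 5 chosen tasks with the most important first (the ballots here are hypothetical):

    ```python
    # Aggregate ranked votes: first choice scores 5 points, fifth scores 1.
    from collections import Counter

    def tally(ballots):
        scores = Counter()
        for ballot in ballots:              # ballot: 5 tasks, most important first
            for rank, task in enumerate(ballot):
                scores[task] += 5 - rank    # 5, 4, 3, 2, 1
        return scores.most_common()         # tasks ordered highest to lowest vote

    ballots = [
        ["check balance", "pay a bill", "transfer money", "find a branch", "report fraud"],
        ["pay a bill", "check balance", "report fraud", "transfer money", "open an account"],
    ]
    for task, score in tally(ballots):
        print(task, score)  # top tasks surface at the top; tiny tasks trail off
    ```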

    See article for example results

    Benefits

    Top Tasks Management is an evidence-based, collaborative approach which can be applied periodically to check customers’ top tasks.

    The organisation also finds value in the cross-team collaboration and shared understanding.

  17. 5 Steps to Create Good User Interview Questions By @Metacole — A Comprehensive Guide

    • User interviews enable you to:
      • speak directly to your users
      • have specific questions answered
      • uncover previously unknown details and directions
    • Badly scripted questions can:
      • result in biased questions and therefore biased answers
      • lead to a flawed foundation to product and business decisions

    1. Start with a problem statement

    • What are the questions you want answered?
    • Create a list of all the questions you need answered to gain a better understanding

    2. Reframe the problem statements

    • Rephrase the questions from different perspectives:
      • logical/rationale-driven
      • emotion/desire-driven
      • product/consumer-focused
    • Benefits:
      • uncovers additional opportunities to learn about your users, specifically those you hadn’t previously considered
      • creates the foundation of your interview questions

    3. Develop your questions

    • Avoid leading questions
      • Leading questions will influence the answers you receive from your interviewees
      • They imply that something is true where it might not be
    • Avoid speculative questions
      • if asking about the past, be as specific as possible
      • speculative questions invite interviewees to fill in the gaps or completely invent a scenario
      • aim for genuine and insightful data
    • Ask open-ended questions
      • open-ended questions invite interviewees to add details around the central theme
      • answers to open-ended questions unpack invaluable information that would otherwise be undiscovered by a more specific question
    • Ask multiple questions to inquire about one thing
      • offers an opportunity to verify that you’ve understood the interviewee and check for contradictions
      • avoid asking these questions back to back; instead pick the next natural moment in the flow of conversation
      • data triangulation can also help
    • Avoid asking if an interviewee would purchase or use the product
      • this is an uncomfortable position to put your interviewee in, and they will probably say “yes” even if they don’t mean it
      • instead, ask about their intent to purchase

    4. Be prepared to paraphrase your questions

    • it’s possible an interviewee won’t understand your question
    • being prepared to rephrase a question will keep the interview flowing

    5. Add structure to your question list

    1. Introduction
    • put the interviewee at ease by explaining the purpose of the interview and where the data is going
    • avoid explaining too much to maintain natural responses to questions
    • thank the interviewee for attending and introduce yourself
    • keep the introduction brief
    • ask permission: audio and video recording, photos, etc.
    2. Warm up
    • ask 3-5 generic questions
      • occupation/what’s an average day like?
      • hobbies
      • internet usage
    3. Main
    • ask as much as possible
    • start with specific past events, then move to speculative questions
    • ask questions that suit the conversation, introduce the theme then dig deeper
    4. Wrap up
    • make it clear that the interview is over
    • ask if they have any questions
    • thank them for their time and contribution
  18. Interviewing Users

    Jakob Nielsen’s advice on interviews.

    • What users say and what they do are different
    • User interviews are appropriate when used in cases where they generate valid data

    What interviews can’t provide

    • Where user interviews fail:

      • when a user is asked to recall the past
      • when a user is asked to speculate about the future
    • Our memories are fallible - we construct stories to rationalize what we remember, or think we remember, to make it sound more plausible or logical

    • The present is the only valid data a user can offer, everything else is recollection or speculation

    • Users are pragmatic and concrete - users (non-designers) can’t naturally visualize something that doesn’t yet exist, and similarly, designers don’t see the world from a user’s perspective. This explains the failure of specification documents and waterfall product development: they speculate that the product will succeed.

    • In contrast, an Agile team focused on learning will validate design decisions at each iteration.

    • Decisions on colours, HTML form element types, number of items, and tone of voice are not something to ask users about. Instead, these decisions should be determined by observing users use the product.

    • Avoid asking users:

      • Would you use (unbuilt feature)? - again, this is speculation
      • How useful is (existing feature)? - these questions may lead to confused responses and unreliable data. Caveat - if you do ask “how useful is (existing feature)”, also ask the same for a non-existing feature
    • To gain this feedback more accurately:

      • pay attention to user comments while using these features
      • ask questions immediately after use

    What interviews can provide

    • Overall feelings of using the site after use
    • Acquiring general attitudes or “how they think of a problem” - use this feedback to design solutions
    • Use the critical incident method to ask users to recall stand-out examples:
      • when they faced particular difficulty
      • when there was little friction
    • Avoid idealised examples by:
      • avoiding asking for their “usual” workflow - this invites users to omit details and drift away from what they actually do

    The Query Effect

    • People make up opinions when asked for one
    • Asking leading questions can act as a catalyst
    • Be cautious not to use these opinions to make design or business decisions
    • To gain this feedback more accurately:
      • resist asking about particular attributes that might result in forced comments
      • take note of unprompted comments during usability testing

    Combining methods

    • User testing will always give you the most valuable data
    • Triangulate the findings to gain a better understanding