
The art of asking: When to use open vs. closed questions in UX research

Emergo by UL human factors specialists describe what constitutes an open versus closed question and share tips for when and how to use open and closed questions in user research.


February 26, 2026

By Carolyn David and Tess Forton   

Asking the right question is essential to generating meaningful, reliable user insights. In user research, knowing when to use open versus closed questions is critical for aligning research methods and the data you generate with overall research goals. Open questions help uncover context, behavior and motivation, while closed questions offer structure, clarity and quantifiable data. Misusing either can lead to missed insights or misleading conclusions. In this article, we will break down the differences between open and closed questions, explain when using each type is most effective, and show how combining them can strengthen your study design and outcomes. 

Defining open and closed questions  

User research questions generally fall into two categories: open and closed. Each serves a distinct purpose and is typically chosen based on the type of data you aim to generate and the stage of the research process. 

Closed questions are designed to elicit specific, limited responses. These might take the form of yes/no answers, multiple-choice selections, or scaled ratings. Closed questions are particularly useful for collecting quantifiable data, identifying patterns across participants and supporting statistical analysis. For example:  

  • “Did you notice this touch key button on the screen?” (Yes/No)  
  • “How many times a week do you use the app?” (Multiple choice)  
  • “On a scale of 1 to 7, with 7 being extremely effective, how would you rate the effectiveness of your current approach to administering medication?” (1-7 Likert scale) 

Open questions, on the other hand, allow participants to respond in their own words. These are exploratory and better suited for gaining a deeper understanding of user behaviors, goals, and needs. Examples include:  

  • “What are your impressions of the product?”  
  •  “What would you change about this feature to better suit your needs?”  

Potential use cases for closed questions 

Closed questions are most effective when the research goal is to generate data that is standardized, measurable and verifiable. They can also provide a strong foundation for follow-up open-ended questions that explore the underlying ‘why’ in greater depth. These questions provide a consistent structure that makes data easy to compare, quantify and analyze at scale. Common use cases for closed questions include:  

  • Usability testing quantitative metrics: Question-based quantitative data is particularly valuable when you need comparable results. For example, when evaluating two design options (A/B testing), tracking usability improvements across iterations or benchmarking your product against a competitor. Tools like the System Usability Scale (SUS) use fixed-response items to capture users’ impressions of usability in a consistent, quantifiable way. While core performance metrics (such as task success or time on task) are typically collected through observation rather than self-report, closed rating-scale questions complement these measures by providing structured insights.  
  • Surveys and questionnaires: These tools can efficiently gather structured feedback from a broad data set with minimal effort from the research team. However, because questions and responses are fixed, opportunities for follow-up or deeper probing are limited. Surveys and questionnaires typically rely more on closed questions because they produce standardized data that can be easily quantified and compared. For example, rating-scale questions like “How likely are you to recommend this product to a colleague?” (i.e., Net Promoter Score) enable reliable comparison across time and/or user segments and benchmarking against industry standards.  
  • Participant screening and background information: Closed questions can support quick, standardized participant screening by establishing clear, easily measurable qualifying criteria. For example, instead of asking an open-ended question like “How often do you use an endoscope?” that can produce inconsistent responses (e.g., per day, week, month), a closed question such as “How many times per week do you use an endoscope?” provides comparable data across participants.  
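The SUS and Net Promoter Score instruments mentioned above both have standard, published scoring formulas, which is part of what makes closed-question data easy to compare at scale. As a minimal illustrative sketch (the function names are ours, not part of either instrument), scoring in Python looks like this:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 ratings.

    Standard SUS scoring: odd-numbered items contribute (rating - 1),
    even-numbered items contribute (5 - rating); the sum is scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item ratings")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5


def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    NPS = % promoters (ratings of 9-10) minus % detractors (0-6),
    yielding a value from -100 to +100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)
```

The fixed formulas are the point: because every participant answers the same scaled items, scores can be benchmarked across iterations, products or industry norms without interpretation by the researcher.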

It is worth noting that open-ended screening questions are sometimes preferred, especially when researchers need participants to demonstrate authentic knowledge or experience rather than selecting answers from provided options. Closed questions, however, remain particularly useful when the goal is to efficiently confirm straightforward criteria, such as “Do you have experience using an injection device at home?”  

Closed questions are especially helpful when aiming to gather feedback from a larger sample size since they produce data that is easy to quantify and compare. However, they can offer limited insight into user intent or behavior beyond the surface level. Another drawback is that closed questions rarely generate unexpected insights; you might learn what users prefer or prioritize but not why. For example, a participant might rank a list of features by priority, but without follow-up, their underlying reasoning remains unclear.  

Potential use cases for open questions  

Open questions are most effective when the goal is to explore user behavior and thought processes, uncover motivations or identify unanticipated needs. These questions are particularly valuable in early-stage or generative research, when assumptions are still being formed. Example scenarios that are well-suited for open questions include:  

  • Gaining insight into context of use: Asking broad, exploratory questions such as “How would you describe your typical day when using this app?” helps reveal the environment, workflows and conditions in which users would be interacting with the app.  
  • Understanding user motivations, values and decision-making: Questions like “What mattered most to you when choosing this feature over others?” help reveal personal, emotional or practical values that inform decision-making. These insights help uncover the more subjective or emotional drivers for user preferences, which are often difficult to surface through quantitative methods alone.  
  • Identifying root causes of usability issues: When debriefing with a participant after a use scenario, asking “What were you expecting to happen when you took that action?” can uncover mismatches between system behavior and the user’s mental model, helping researchers pinpoint true underlying causes.  
  • Surfacing pain points, workarounds and unanticipated needs: Questions such as “Tell me about a time when this device did not work as expected” invite users to share real-world examples that highlight unmet needs that might not appear in structured testing.  

Open questions allow for unfiltered feedback, making them valuable for gathering rich contextual data that closed questions by nature cannot capture.

Combining open and closed questions  

While open and closed questions serve different purposes, they are often most effective when used together within a research session or study. Open-ended questions support a holistic understanding of users’ experiences and how a product or service resonates with them, even when no closed questions are included. However, incorporating some closed questions can add value by creating consistency across participant responses, helping researchers identify trends, quantify patterns or make comparisons between groups or design options. When combined, the two approaches provide both rich explanatory context and structured, comparable data.  

Example pairings include:  

  • During user interviews: “On a scale of 1 to 7, with 7 being extremely easy, how would you rate the expected ease of using the product?” (1-7 Likert scale), followed by “Why?”  
  • During usability test sessions: After timing task completion, asking “How confident did you feel while completing the task?” (closed) and “What influenced that confidence?” (open)  
  • During post-training debriefs: For example, “What did you find most helpful about the training? What did you find most challenging or confusing?” (open) followed by “Did you feel that training met your expectations?” (closed)  

This mixed approach helps bridge the gap between quantitative clarity and qualitative insight. Closed questions allow for fast comparison across intended users, while open questions capture the reasons behind ratings or rankings, helping to build a more nuanced understanding of what to improve or preserve.  

Combining both types also supports effective storytelling for stakeholders. Quantitative data can highlight the scope of an issue, while qualitative quotes bring that issue to life. 

Comparison of open versus closed questions in user research  

Open questions 
  • Response type: Descriptive, detailed, narrative 
  • Best for: Exploring context of use; uncovering motivations, values and decision criteria; surfacing pain points; discovering unanticipated needs; generating opportunities 
  • Limitations: Off-topic answers, ambiguity, low scalability for large sample sizes 

Closed questions 
  • Response type: Brief, specific, structured 
  • Best for: Statistical significance and comparisons; evaluating design alternatives; measuring usability improvements through iterations; benchmarking against competitors 
  • Limitations: Lack of depth, missing nuance or full rationale for responses 

 

Contact our team to learn more about user research methods. Or, sign up for a complimentary account with OPUS, our team’s software platform that provides HFE training, tools, and templates.   

Carolyn David is a Human Factors Associate and Tess Forton is a Managing Human Factors Specialist at Emergo by UL.
