Conducting Usability Tests with Multiple Analysts

Key considerations for data collection when conducting usability tests with multiple analysts.


September 20, 2023

By Michael Romero 

A fundamental aspect of conducting human factors usability tests is recording notes and documenting test data, such as participant actions, subjective feedback and findings, which serve as inputs for the usability test report. The test personnel documenting this data, whom we refer to as “analysts,” must do so in an effective and consistent manner to facilitate the development of insightful and thorough, yet concise, usability test reports. Test staff already have plenty to consider to perform this role well when only a single analyst is involved, and even more deliberation is required when there are multiple analysts. This article covers the additional considerations that apply to usability tests with multiple analysts, such as ensuring consistency across analysts and structuring data collection documents.

There are several reasons why a usability test might require or benefit from multiple analysts, such as conducting test sessions in parallel (i.e., multiple test personnel conduct unique test sessions concurrently) or conducting usability tests that involve participant teams. In this blog, we focus specifically on considerations for data collection when conducting usability tests that involve multiple analysts.

Promote consistency across usability test analysts 

If you have concluded that your usability test requires multiple analysts to ensure it is conducted properly, you must next determine how to organize and format each analyst’s notes to ensure consistency. 

One aspect to consider is alignment on terminology when referring to specific product components. For example, a given product component might have an official name within the associated instructions or labeling but also a colloquial name that participants or staff more commonly use. When analysts agree on a single, consistent term for each product component in their running commentary of participant actions, all analysts and other staff can interpret any given analyst’s notes accurately and efficiently when extracting relevant insights.
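For illustration, here is a minimal sketch, assuming a simple Python-based notes pipeline, of how an agreed-upon glossary could normalize terminology in analysts’ notes after the fact. The component names and glossary entries are hypothetical, not drawn from any real device:

```python
import re

# Hypothetical glossary mapping colloquial names to the official labeling
# terms; the component names are illustrative only.
GLOSSARY = {
    "clicker": "dose knob",
    "cap": "needle shield",
}

def normalize_note(note: str) -> str:
    """Replace colloquial component names with the agreed-upon official terms."""
    for colloquial, official in GLOSSARY.items():
        # Word boundaries prevent accidental replacements inside longer words.
        note = re.sub(rf"\b{re.escape(colloquial)}\b", official, note,
                      flags=re.IGNORECASE)
    return note

print(normalize_note("Participant twisted the Clicker before removing the cap."))
# Participant twisted the dose knob before removing the needle shield.
```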

Another detail on which analysts should align is how they will denote and format findings in their data collection document, such as observed use errors that participants commit. Notably, there are many valid options for formatting findings in a data collection document to ensure they are easily traceable and visually recognizable. One option we recommend is placing an acronym that represents the finding type, such as “UE” for “use error,” in brackets that precede the finding title (e.g., “[UE] Did not attach needle”). This makes findings easy for anyone to identify within the data collection document. If analysts input notes into a digital data collection document, we also recommend leveraging “conditional formatting” to automatically color-code findings based on their type (e.g., cell color, text color). Whatever notation method you decide on, applying it consistently will enable all test staff to accurately identify every finding contained in any analyst’s notes.
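To show how consistent tagging pays off downstream, here is a minimal Python sketch that pulls every tagged finding out of plain-text notes. Only the “UE” tag comes from the example above; “CC” (close call) and “DF” (difficulty) are illustrative assumptions:

```python
import re
from collections import defaultdict

# Hypothetical tag set: "UE" (use error) follows the example above; "CC"
# (close call) and "DF" (difficulty) are illustrative additions.
FINDING_TAGS = {"UE": "use error", "CC": "close call", "DF": "difficulty"}

TAG_PATTERN = re.compile(r"^\[(" + "|".join(FINDING_TAGS) + r")\]\s*(.+)$")

def extract_findings(notes: str) -> dict:
    """Group tagged finding lines (e.g., '[UE] Did not attach needle') by tag."""
    findings = defaultdict(list)
    for line in notes.splitlines():
        match = TAG_PATTERN.match(line.strip())
        if match:
            tag, title = match.groups()
            findings[tag].append(title)
    return dict(findings)

notes = """Participant read the instructions aloud.
[UE] Did not attach needle
[DF] Hesitated before priming"""
print(extract_findings(notes))
# {'UE': ['Did not attach needle'], 'DF': ['Hesitated before priming']}
```

A spreadsheet’s conditional formatting can key off the same bracketed prefix, so one notation convention serves both the live document and any downstream tooling.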

How to structure data collection in team-based usability tests 

Conducting a usability test that involves multiple analysts and participant “teams” (such as a surgeon, scrub nurse and circulating nurse working together to operate a single surgical system) introduces unique considerations regarding how to structure the data collection document(s) effectively.

The primary consideration here is where each analyst will focus their attention. If one or more analysts are assigned to focus on a particular participant or component for the full duration of the session, it might be beneficial to tailor each analyst’s data collection document to that specific participant’s or component’s tasks.

Although tailoring the data collection document so that it contains only the tasks relevant to a given analyst’s focus yields a more concise document for that individual, it limits the context regarding a task’s place in the larger workflow. Taking this approach also means that test personnel must create multiple versions of the data collection document (i.e., one for each analyst), which increases the work required for test preparation and for future data extraction when compiling the test report. Our advice is to consider individualized data collection documents when your test team has only one or two additional analysts, or when the task workflow is generally serial rather than parallel (i.e., one participant finishes their tasks before the next participant begins theirs).

By contrast, creating a data collection document that contains all tasks participants will perform throughout the session yields a less concise document that will contain some parts irrelevant to each analyst. However, this more complete document provides more context regarding a task’s place in the broader workflow. Notably, this approach also simplifies document creation because you need only a single version for all analysts. Additionally, a single document means that all analysts input their notes into one consolidated data collection document, which can facilitate data extraction. Consider adopting the single-document approach when staffing three or more analysts, at which point creating individualized documents becomes too cumbersome.

The middle ground between these two options is to create one data collection document that contains all the tasks participants will perform throughout the session, but in which each analyst “greys out” the tasks irrelevant to their focus in their personal version of the document. This strikes a balance between providing sufficient context about the overall task workflow and visually cueing which tasks are relevant to each analyst, as the sketch below illustrates. Additionally, this method enables relatively streamlined document creation and post-test data consolidation. We recommend this hybrid approach for most usability tests involving participant teams and multiple analysts because it better accommodates parallel task workflows and enables straightforward individualized document creation.
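As a rough illustration of this hybrid approach, the following Python sketch renders one master task list as a per-analyst view in which other roles’ tasks are visually de-emphasized. The roles, task names and “~~” marker are all hypothetical stand-ins:

```python
# Master task list for the full session; roles and task names are
# hypothetical stand-ins for an actual surgical workflow.
MASTER_TASKS = [
    ("Unpack system components", "circulating nurse"),
    ("Prepare instrument tray", "scrub nurse"),
    ("Dock robotic arm", "surgeon"),
    ("Confirm sterile field", "scrub nurse"),
]

def render_analyst_view(focus_role: str) -> str:
    """Render the full task list, de-emphasizing tasks outside one analyst's focus.

    Every analyst sees the whole workflow for context; tasks assigned to other
    roles get a "~~" prefix as a stand-in for the grey formatting you would
    apply in the actual document.
    """
    lines = []
    for task, role in MASTER_TASKS:
        prefix = "  " if role == focus_role else "~~"
        lines.append(f"{prefix} {task} ({role})")
    return "\n".join(lines)

print(render_analyst_view("scrub nurse"))
```

Because every per-analyst view derives from the same master list, document creation and post-test consolidation stay centralized while each analyst still gets an individualized view.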

The importance of optimizing usability tests with multiple analysts and participant teams 

Determining how to ensure consistency across multiple analysts’ notes and how to structure their data collection documents effectively is essential to conducting a productive usability test involving multiple analysts and/or participant teams. It is always crucial for multiple analysts to align on terminology and finding notation. When participant teams are involved, test personnel must also determine whether the added complexity of tailored data collection documents is justified by the more nuanced data analysts can presumably capture with them.

Michael Romero is Senior Human Factors Specialist at Emergo by UL's Human Factors Research & Design division. 
