EdTechLab

Why Research Tools Need Product-Quality Design

In research settings, usability is not decoration. When people cannot understand or trust the workflow, adoption quality falls, intervention fidelity drifts, and the resulting data becomes weaker than it first appears.

Why this matters

Research teams sometimes accept a tool because it is technically possible, not because it is genuinely usable. That distinction matters more than it seems. Product-quality design affects whether the workflow is completed properly, whether users need hidden support, and whether the data actually represents the intended activity.

Author: Saad Saihi
Section: Lab Notes
Focus: Usability, accessibility, and adoption quality

In brief

  • Technical correctness is not the same thing as usable design.
  • Adoption quality changes evidence quality because it changes what people actually do.
  • Accessibility is part of product quality, not a parallel concern.
  • Research teams should measure usability explicitly before treating a tool as ready.

There is a persistent gap in education technology between software that works and software that people can actually use well. In research contexts, that gap is costly. When a workflow is confusing, hidden support labour increases, user behaviour becomes inconsistent, and the eventual data reflects the friction of the tool as much as the intervention being studied.

That is why research tools need product-quality design. The argument is not that every research system needs consumer-app polish. The argument is narrower: the workflow has to be understandable, recoverable, accessible, and trustworthy enough that people can use it consistently under real conditions.

"It works" is not enough

Technology-acceptance research has been pointing in this direction for decades. Davis's Technology Acceptance Model centres perceived usefulness and perceived ease of use as key drivers of uptake.[1] UTAUT later extended that thinking through performance expectancy, effort expectancy, social influence, and facilitating conditions.[2]

Those theories are not product-design style guides, but they make an important point for research tooling: adoption is shaped by how the system feels to use, not only by what it claims to do. A tool that is too effortful, opaque, or support-heavy can undermine its own research value before analysis even begins.

Design quality changes data quality

When a research tool is difficult to understand, users hesitate, skip steps, rely on improvised routes, or abandon the workflow entirely. Those behaviours are not just usability problems. They alter the data that the system later surfaces. If a task is hidden, confusing, or easy to misinterpret, the platform is no longer observing the intended interaction cleanly.
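
One lightweight way to make that friction visible is to instrument the critical path and look at where users fall away between steps. The sketch below is illustrative only: the event shape, the step names, and the stepCompletion helper are assumptions for this note, not part of any particular platform.

```typescript
// Hypothetical event log shape for a research workflow.
interface WorkflowEvent {
  userId: string;
  step: string;      // e.g. "consent" -> "task" -> "submit"
  timestamp: number; // ms since epoch
}

// Count how many distinct users reached each step of the critical path.
// A sharp drop between adjacent steps is a usability signal worth triaging
// before the data is treated as a clean record of the intervention.
function stepCompletion(
  events: WorkflowEvent[],
  criticalPath: string[],
): Map<string, number> {
  const reached = new Map<string, Set<string>>();
  for (const step of criticalPath) reached.set(step, new Set());
  for (const e of events) reached.get(e.step)?.add(e.userId);

  const counts = new Map<string, number>();
  for (const step of criticalPath) counts.set(step, reached.get(step)!.size);
  return counts;
}
```

The same log also supports fidelity checks: if many users reach the final step without ever touching an intermediate one, the workflow is being completed in a way the study did not intend.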

Recent learning-analytics literature reinforces the same idea at the system level. Work on adoption barriers shows that implementation quality and socio-technical context are inseparable from the value institutions get out of analytics systems.[3] In other words, the path from interface to evidence is shorter than many teams assume.

Accessibility is part of product quality

Accessibility should be treated as a first-order product requirement. WCAG 2.2 offers a practical baseline for public digital systems because accessible structure, focus management, media alternatives, and input support all affect whether a workflow can be completed at all, let alone reliably.[4]

In research settings, weak accessibility does more than exclude some users. It also distorts the intervention environment. If one group can navigate the flow directly and another group must rely on workaround support, the tool is already changing the conditions under which the study operates.
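
A full audit still requires testing with assistive technologies, but some structural failures can be caught cheaply and early. As a minimal illustration (browser environment assumed; this is a smoke test, not a WCAG 2.2 conformance check), the sketch below flags form controls that lack any accessible name:

```typescript
// Flag visible form controls with no accessible name: no associated <label>,
// no wrapping <label>, and no aria-label / aria-labelledby attribute.
function unlabelledControls(root: Document = document): HTMLElement[] {
  const controls = root.querySelectorAll<HTMLElement>(
    'input:not([type="hidden"]), select, textarea',
  );
  return Array.from(controls).filter((el) => {
    const labelledByFor =
      el.id !== "" && root.querySelector(`label[for="${el.id}"]`) !== null;
    const hasName =
      labelledByFor ||
      el.closest("label") !== null || // wrapped in a <label>
      el.hasAttribute("aria-label") ||
      el.hasAttribute("aria-labelledby");
    return !hasName;
  });
}
```

Checks like this belong in the default build pipeline, so accessibility regressions surface as defects rather than being discovered during rollout.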

This matters even more when the pedagogy is active

High-impact active-learning literature gives this issue additional weight. Prince's review, Freeman and colleagues' meta-analysis, and Theobald and colleagues' equity-focused follow-up all point to the value of structured active learning in higher-education settings.[5][6][7]

If the intended pedagogy depends on participation, sequence, or timely interaction, the design quality of the tool becomes even more consequential. A platform that makes active work awkward can blunt the very pattern of engagement the intervention depends on.

Measure usability explicitly

Product-quality design should be inspected, not assumed. The System Usability Scale and related work remain useful here because they give teams a fast way to benchmark whether a workflow is acceptable before declaring it ready for broader use.[8]
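
The SUS scoring arithmetic itself is standard and easy to automate: each of the ten items is rated 1 to 5, odd-numbered (positively worded) items contribute the rating minus one, even-numbered (negatively worded) items contribute five minus the rating, and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch:

```typescript
// System Usability Scale scoring, using the standard formula.
// `responses` holds the ten item ratings in questionnaire order,
// each from 1 (strongly disagree) to 5 (strongly agree).
function susScore(responses: number[]): number {
  if (responses.length !== 10 || responses.some((r) => r < 1 || r > 5)) {
    throw new Error("SUS expects ten responses on a 1-5 scale");
  }
  const sum = responses.reduce((acc, r, i) => {
    // Odd-numbered items sit at even indices (item 1 is index 0).
    const contribution = i % 2 === 0 ? r - 1 : 5 - r;
    return acc + contribution;
  }, 0);
  return sum * 2.5; // scale the 0-40 raw sum to 0-100
}

// Example: susScore([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]) === 85
```

Commonly cited benchmarks place the average score in the high 60s, so results well below that range deserve scrutiny before broader rollout; acceptability ranges of this kind are discussed in Bangor and colleagues' evaluation.[8]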

That does not mean usability can be reduced to one score. Research teams usually need a small set of complementary checks; one concrete way to record them is sketched after the list.

  • Can first-time users complete the critical path without intervention?
  • Do people understand system state, sequence, and next steps?
  • Can they recover from errors without specialist help?
  • Are accessibility features part of the default workflow rather than an exception path?
  • Does the interface make the evidence logic of the tool easier or harder to inspect?
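
One way to keep those checks from staying rhetorical is to record each one as an explicit pass/fail result with evidence attached. The shape below is a hypothetical encoding, not a standard instrument:

```typescript
// Hypothetical readiness gate: each check from the list above becomes an
// explicit, recorded result instead of an informal impression.
interface UsabilityCheck {
  question: string;
  passed: boolean;
  evidence: string; // e.g. "8/10 first-time users completed the path unaided"
}

function readyForBroaderUse(checks: UsabilityCheck[]): boolean {
  // A tool is treated as ready only when every check passes; failures
  // should be triaged like defects, not waived.
  return checks.length > 0 && checks.every((c) => c.passed);
}
```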

What this changes in practice

For research leads, this means usability should sit beside governance and analytics in tool evaluation, not after them. For institutional teams, it means refusing the old separation between "the academic idea" and "the product experience." If the experience fails, the academic idea is not being delivered cleanly enough to evaluate with much confidence.

Product-quality design is therefore not a luxury layer. In research systems, it is part of methodological seriousness.

Closing

The quickest way to weaken a research tool is to assume that functionality alone is enough. It is not enough for the system to exist. People have to be able to understand it, trust it, complete it, and recover within it.

That is the real reason product-quality design belongs inside research infrastructure work: because better design does not just improve impressions. It improves the conditions under which evidence is created.

References

  1. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly. 1989;13(3):319-340.
  2. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: Toward a unified view. MIS Quarterly. 2003;27(3):425-478.
  3. Alzahrani AS, Tsai YS, Iqbal S, et al. Untangling connections between challenges in the adoption of learning analytics in higher education. Education and Information Technologies. 2023.
  4. World Wide Web Consortium. Web Content Accessibility Guidelines (WCAG) 2.2. W3C Recommendation. 2023.
  5. Prince M. Does active learning work? A review of the research. Journal of Engineering Education. 2004;93(3):223-231.
  6. Freeman S, Eddy SL, McDonough M, et al. Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences. 2014;111(23):8410-8415.
  7. Theobald EJ, Hill MJ, Tran E, et al. Active learning narrows achievement gaps for underrepresented students in undergraduate science, technology, engineering, and math. Proceedings of the National Academy of Sciences. 2020;117(12):6476-6483.
  8. Bangor A, Kortum PT, Miller JT. An empirical evaluation of the System Usability Scale. International Journal of Human-Computer Interaction. 2008;24(6):574-594.

