Reclaiming the Lost Science of UX Research (2024)

Contents:

  • The Decline of UX Research
  • Research in the Product Development Cycle
  • Fitting this into an Agile World
  • Resurrecting Lost Research Practices
  • Conclusion

The Decline of UX Research

I’m going to come out and just say it: the golden era of usability and user experience research is dead. It took place during the ’90s and early 2000s, but ever since the mid-2000s, UX and usability research have been in decline. That doesn’t mean we can’t have a resurgence, though. We can reclaim the lost science of UX research.

Notice my choice of words here: “reclaiming the lost science of UX research”. I use the word “science” instead of “art” for good reason. Why? Because it was scientific process that made UX research great, and it is the stripping of that scientific process out of UX research that has led to the decline I mention. Specifically, I’m talking about the use of valid, standard research practices to enable valid, reliable, and trustworthy insights. For instance, we used to use appropriate sample sizes; we rarely do anymore. We used to do thorough analyses of entire digital products and systems, and benchmark them; we rarely do anymore. We used to have entire research teams devoted to a single product; we often don’t anymore. We used to pair research moderators with separate note-takers; we rarely do anymore. We used to engage in multiple types of research for different purposes; now we usually focus primarily on usability testing.

So what changed? In short: the Internet era, with its ability to push out software updates instantly, changed everything, along with the associated dawn of Agile development.

Let’s face it: quality research, meaning trustworthy, valid research, takes time, and as much as we try to make it efficient, it will never be instantaneous. At a certain point there are diminishing returns to shortening the research process further, and tradeoffs that are simply unacceptable because they jeopardize the validity of the research so much that it would probably be better to do no research at all. After all, a false sense of trust in incorrect data can lead to seriously negative consequences.

Case in point: various automobile manufacturers have recently announced plans to cut back on electric vehicle development because consumer interest in EVs has not been as strong as their market research suggested it would be. Bad research that produced faulty inferences led to a very costly mistake, to the tune of billions of dollars.

Unfortunately, because quality research takes time, in a world obsessed with speed to delivery, research is often seen as a hindrance rather than a necessary component of quality products and experiences. This is what happens when you make speed the primary goal instead of quality user experiences. It has led researchers to cut corners and make tradeoffs they probably shouldn’t, and to such a decline in research validity that we are putting our companies at risk of basing decisions on bad data. It’s time we undid this and reclaimed UX research once more.

Research in the Product Development Cycle

It used to be that software development happened in very clear, linear development stages over longer periods of time. These occurred in four basic phases, each with associated research (see The UX Research Cheat Sheet by the Nielsen Norman Group):

  1. Discovery / Generative Research — First, we had to figure out what we wanted to build: what problem were we trying to solve with new or updated software? Who were we solving it for? What were their needs? What pain points did they have with current systems intended for the same or similar purposes? Early discovery or generative research (ethnographic research, case studies, interviews, surveys, etc.) was fundamental in guiding the strategic direction of product development.
  2. Explorative / Iterative Design / Shape Research — Next came the design phase, in which ideas were produced: options, mock-up screens, workflows, prototypes, etc. These were then evaluated via usability testing to see whether they solved the problems we hoped to solve without introducing other usability or experience issues, iteratively, to refine the designs. This iterative design research often took the form of early usability testing with prototypes. This was Nielsen’s “test with 5, 3 times”. It focused heavily on qualitative feedback and insights, not quantitative measures.
  3. Test / Evaluative Research — Once design decisions were settled based on research insights, they were coded. Once the code was produced, we often wanted some form of validation research; after all, the code itself could introduce usability issues that were not discoverable through prototype testing. For instance, if the system was really slow to respond, it would create a sub-optimal experience, and we would not have discovered this with paper prototypes. This validation testing, after coding but before “shipping” or launching, was often large-scale, summative, evaluative, benchmark-type usability testing, with defined key tasks, scenarios, and metrics. It was often heavily quantitative in nature.
  4. Listen / Monitoring Research — Lastly, once a product shipped or launched, we would conduct research with real-world users, such as surveys and analytics, to monitor the ongoing experience of the product out in the wild. This in turn would often feed a new phase of discovery research, and thus the product development cycle would repeat.

Thus research was used to inform decisions along every step of the development cycle — reducing the chances of bad decisions and better enabling good ones.

Fitting this into an Agile World

Agile development cut this process down into much shorter periods of time, focusing on smaller components of software. However, it didn’t change the need for these different types of research. Some claim it did, namely that the risk is lower when releasing small, incremental, iterative updates, especially if we can retract them when something goes wrong. However, small things can have large impacts on experience, and even if a change can be quickly undone, it can leave a lasting impression in the minds of your users and degrade brand value and image. It’s not ideal. The notion of “fail fast” might be okay for some startups, but not necessarily for well-established companies that have a lot to lose.

It would seem we have often pushed ourselves to an extreme in the need for speed. Extremes of any kind are typically bad, in my opinion. A balanced approach is usually best: weighing the pros and cons, the risks and benefits, in a true cost-benefit analysis. Sure, we can’t be doing academic-level research; it would take far too long in the context of product development in industry. But taking research to the other extreme, where it is so stripped down that its validity is highly questionable, is not good either.

The problem with Agile development is that all four of these phases can, and often do, happen concurrently. While one new feature is being coded, the next one is being designed and the previous one is being released, all while trying to plan ahead strategically for what lies beyond the next feature or update.

This essentially means that if we want the decisions in all of these phases to be informed by research, the different types of research need to happen simultaneously, which is impossible for a single researcher. The only true way to achieve this is to have an entire team of 2–4 or more researchers working on a single product: one doing discovery research, another doing design iteration research, another doing evaluative research, and yet another doing ongoing monitoring research. Good luck selling this idea to your company. :) Nevertheless, we should try.

One alternative is to focus only on the features we hypothesize will have the largest impact on user experience. There is some risk here, though, as our hypotheses could be incorrect: what we thought would be a seemingly insignificant feature may turn out to have a major impact.

Another alternative is to favor earlier discovery and design research, where research can have the largest impact by guiding direction, and to accept the risks of forgoing validation and ongoing monitoring.

Resurrecting Lost Research Practices

We ought to bring back the forms of research, and steps in the research process, that have often been cut out, because of the tremendous value they provide. Some of these include:

  • Peer Reviews — Reviewing the work of other researchers, and getting your own work reviewed, lets us provide feedback, catch hidden biases, and improve the overall quality, consistency, and validity of our research. We really ought to have research plans reviewed before executing research. Likewise, we should have someone observe our moderation for continued feedback on how we might be biasing our research. Lastly, we should have our analyses and findings reports peer reviewed for clarity and accuracy. No one knows everything, not even experienced researchers. In academia, if you want your article published, it still has to go through a peer review process, even if you have been doing research in your field for 30 or more years.
  • Discovery / Generative Research — If you don’t work on the right problems and develop a strategy for doing so, you risk scope creep, building features no one wants, wasted effort, and so on. Unfortunately, discovery research takes time; ethnographic studies, for instance, are labor intensive. But if we want to do things “right” or “well”, starting with a solid strategy based on research data is important.
  • Heuristic Evaluations — One way to actually gain efficiency is to conduct heuristic evaluations of proposed product designs before even testing them with participants. Heuristic evaluations are much quicker, and although they are not infallible, they provide guidance and direction, and help focus subsequent research on the right things so it can be as efficient and effective as possible.
  • Pilot Tests — More often than not, we are so rushed to conduct usability studies that we just run them without a pilot test first. Running the study with 1 or 2 participants a day or two beforehand, to test the study design and make changes accordingly, elevates the quality and validity of our research: it lets us catch issues with clarity, comprehension, task order, technical setup, or a myriad of other problems that might disrupt a study or diminish its validity.
  • Test with 15 people — So many people test with 5, and that’s it. Nielsen originally proposed we test with 5 a minimum of 3 times, in an iterative fashion, to catch the big issues, then the medium ones, then the small ones. A single usability study of a new design with 5 people is simply not sufficient and introduces serious risks to data validity and to the generalizability of research findings (the first sketch after this list shows the math).
  • Evaluative, Summative, and Benchmark Studies — We should bring back larger-scale, summative, evaluative, and benchmark studies with quantitative metrics, to benchmark the quality of a product’s user experience over time and verify that changes actually improved the product before release. This is especially true for larger features, changes, and updates, but even a series of smaller updates should occasionally be verified this way, since many small changes have a cumulative effect. Traditionally these studies used 30 or more participants; perhaps we could compromise and use 15 (the second sketch after this list quantifies that tradeoff). It is certainly better than not doing these studies at all.
  • Ongoing monitoring metrics — We should be using analytics, metrics, and ongoing satisfaction surveys to continually monitor the health of the user experience of our digital products.
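
To make the “test with 15” recommendation concrete, consider the problem-discovery model behind Nielsen’s original advice, where the probability of seeing a given usability problem at least once across n participants is 1 - (1 - p)^n, with p being the per-participant detection rate. Below is a minimal sketch in Python; the value p = 0.31 is the average detection rate cited in the classic Nielsen/Landauer work, and the p = 0.10 case is an illustrative assumption for lower-frequency problems.

```python
# Problem-discovery model: P(problem seen at least once) = 1 - (1 - p)^n,
# where p is the probability that any single participant hits the problem.

def discovery_rate(n: int, p: float = 0.31) -> float:
    """Probability that a problem with per-participant detection rate p
    is observed at least once across n participants."""
    return 1 - (1 - p) ** n

# p = 0.31 is the classic average detection rate (Nielsen/Landauer);
# real detection rates vary by product and by problem severity.
for n in (5, 10, 15):
    print(f"n={n:2d}: {discovery_rate(n):.0%} of average problems seen at least once")

# Lower-frequency problems (illustrative p = 0.10) are where 5 users fall short.
for n in (5, 15):
    print(f"n={n:2d}, p=0.10: {discovery_rate(n, p=0.10):.0%}")
```

With p = 0.31, five participants surface about 84% of such problems, which is why Nielsen paired 5 users with repeated iterations. For a problem only 1 in 10 users encounters, five participants catch it only about 41% of the time, while fifteen raise that to roughly 79%.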
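
Similarly, the compromise between 15 and 30 participants for benchmark studies can be quantified with a confidence interval around a task success rate. Here is a hedged sketch using the adjusted Wald interval often recommended for small-sample usability metrics (e.g., by Sauro and Lewis); the 80% observed success rate is made up purely for illustration.

```python
import math

def adjusted_wald(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% adjusted Wald (Agresti-Coull) confidence interval for a
    task success rate observed with n participants."""
    n_adj = n + z ** 2                        # inflate n by z^2
    p_adj = (successes + z ** 2 / 2) / n_adj  # shift the point estimate
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical benchmark: 80% observed task success at each sample size.
for successes, n in ((12, 15), (24, 30)):
    lo, hi = adjusted_wald(successes, n)
    print(f"n={n}: observed {successes/n:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
```

In this made-up scenario, halving the sample from 30 to 15 widens the 95% interval from roughly ±14 to ±20 percentage points: noticeably less precision, but still far more informative than skipping the benchmark entirely.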

Conclusion

So there you have it: solid research practices, based on decades of scientific evolution, that we should bring back. If we do not advocate for these and instead allow product teams and companies to continually cut corners on research, then, at least in my opinion, we are sacrificing our integrity as the research experts we are and failing to do our due diligence. It is ethically questionable.
