Candidate Exercises, Or: How I Learned to Stop Testing Research and Start Doing It
Let's talk about something all UX Research leaders wrestle with eventually: how to evaluate a UX Researcher in an interview.
This started, as many things do these days, with a text from a friend. He asked if I'd ever encountered UX Research exercises as part of a hiring process, and what I thought about them. I had a reaction. A strong one.
Here's what he wrote:
Have you ever given or taken a UX Research interview exercise? If so, what did it look like? And what's your take on whether they're valuable?
And here's how I responded:
Yeah, I used to include a UX Research exercise in interviews. But I hated it. It came from trying to mimic the designer interview format, which included a live activity.
I gave candidates a broad problem space and asked them to talk through how they'd approach it from a UXR perspective. It was completely artificial. You're asking someone to simulate weeks of thoughtful work in a single hour, with no real context or domain knowledge.
Occasionally, a candidate would respond by asking lots of questions and challenging the setup, which showed good instincts. But even that felt unfair. The exercise assumed it was okay to jump in blind. It wasn't. The whole thing was flawed.
That text exchange stuck with me. Not just because I had such a visceral reaction, but because I've been part of the problem.
I'm paraphrasing both messages a bit here: my friend was more thoughtful, and I was less coherent. But the sentiment (and the strong reaction) came through loud and clear.
Why We Want Exercises
I understand the impulse.
When you're hiring a researcher, you want to go beyond the résumé. You want to see how they think. You want to simulate the job itself. You want evidence of their skills, their approach, their instincts. And frankly, it can be tempting to fall back on what other disciplines do: designers do design challenges, so why shouldn't researchers do research challenges?
That's exactly how I justified it in the past. We had a solid design exercise for product designers, and we mirrored the format for UXR. But it never felt quite right, and the more I reflect, the more convinced I am that it's not just a little flawed: it's fundamentally broken.
Why It Fails
The problem is this: research doesn't exist in a vacuum, and interview exercises always do.
You can't compress good research into an hour-long performance. You can't demonstrate planning, facilitation, synthesis, stakeholder alignment, ethical tradeoffs, business sensitivity, and domain understanding in a single fake exercise. At best, you get a shallow glimpse of one dimension. At worst, you misjudge someone entirely.
You might accidentally reward people who are great at fast talk or academic frameworks but weak on real-world complexity. You might miss someone whose strength is navigating ambiguity with grace, because the exercise gives them no real ambiguity to navigate.
There's also the fairness problem. An exercise implies it's okay to just dive in and start "doing" the work. But a good researcher knows you never start without context. So do they call out the flawed premise? Do they perform anyway, just to check the box? Do they risk looking uncooperative by saying "this is a bad task"? It's a trap. And we put it there.
A Partial Defense (But Not Really)
I'll admit, there is a version of the exercise that tells you something useful: if a candidate gracefully challenges the premise, asks a ton of questions, and deconstructs the artificiality of the task, that can be a green flag.
But that's not a good enough reason to keep doing it. Because that moment of insight only happens if the candidate assumes they're allowed to push back. And many won't. Especially candidates from marginalized groups, those new to the field, or those who've been burned by bad hiring practices before. It's not their job to solve your broken process.
So What's Better?
Here's where I've landed: interviewing researchers should be treated like doing research.
Don't expect them to do fake work in fake conditions. Instead, do the work yourself. Ask real questions about their past experience. Dig into how they handled messy problems. Explore how they think through ambiguity, conflict, or tradeoffs. Let them show you who they are, not through performance, but through reflection.
This also gives space for different types of researchers. Not every excellent UXR is a smooth talker. Some shine in synthesis. Others in data. Others in quiet systems thinking. If you focus only on who can impress you in a single moment, you'll miss the ones who could make your org better over years.
What I Do Now
I no longer use exercises when hiring for my team. I don't need candidates to prove they can do research. I need to do research about them.
I ask about the contexts they've worked in. The projects they're proud of. The moments they struggled. I dig into those stories to understand their mindset, approach, ethics, flexibility, and alignment with our needs.
It's harder. It takes more from me. But it's more honest. And it gives them a fairer chance to show me what really matters.
In the End
I get why we love exercises. But I think they're better left for disciplines where individual outputs can be more cleanly isolated and judged. Research is too entangled. Too collaborative. Too contextual.
So no, I'm not running exercises anymore.
I'm running interviews.
And I'm okay with that.