Five research questions that actually move product decisions
March 12, 2026
You ran five interviews this week. The transcripts are thorough. The highlights are tagged. And yet when the PM asks "so what should we build next?", the data shrugs back. It's a familiar kind of failure — research that ends with information but no decision.
Most of the time, the culprit isn't analysis. It's the questions. Certain questions reliably produce decisions; others reliably produce opinions. Here are the five that, in our experience, reliably produce the first.
1. "What were you trying to do?"
This is the question that beats "what did you think of this?" every single time. Thoughts are cheap and retroactive — people will happily generate a new one for you on the spot. Intent is specific, and it's anchored in a moment that actually happened.
"What were you trying to do when you opened the dashboard yesterday?" pulls a real goal out of a real Tuesday. You learn what success would have looked like to them. That frames everything they say next — including the parts where your product got in the way.
2. "What did you expect to happen?"
The gap between expectation and reality is where product decisions live. If someone expected the export button to send them a CSV by email and it actually downloaded a ZIP, you've just found a mental-model mismatch worth fixing.
Ask this immediately after they describe an action. Don't wait. The expectation has to be fresh or it rationalizes itself into whatever did happen.
3. "What have you tried before?"
Before you ask someone to speculate about a future feature, find out what the past looks like. The spreadsheet they maintain on the side. The Slack channel they use as an ad-hoc ticketing system. The tool they tried for three weeks and abandoned.
Past workarounds are the most honest form of user research you can get. Nobody builds a scrappy workflow for a problem they don't actually have. If three of your five interviews surface the same improvised solution, that's a roadmap item dressed in overalls.
4. "Walk me through the last time you…"
Generalized opinions are compressed, edited, and often wrong. Specific episodes are the uncompressed version of the same story — with times, places, other people, and the small frictions that got smoothed out of the summary.
"How do you usually handle onboarding a new teammate?" gets you a policy. "Walk me through the last time you onboarded someone" gets you what actually happened last Thursday at 2pm when Jess started and nobody had written the Figma handoff guide yet.
5. "What made you give up?"
This one is for the bravest moment of the interview. If the person mentioned abandoning a flow, a feature, or a whole tool — stop, back up, and find the exact moment where it stopped being worth it.
"It just got annoying" is not an answer. "I had to click through three screens to change one setting and the change didn't save when the page reloaded" is an answer. You're hunting for the concrete friction, not the emotion that trailed it.
Why these five
You'll notice none of them ask for opinions, predictions, or preferences. That's the pattern. Opinions are unreliable because they're generated on demand. Predictions are unreliable because people can't actually forecast their own behavior. Preferences are unreliable because they depend on the frame you hand someone.
What people did, what they expected, what they tried, what they can walk through, what they gave up on — those are facts about the past. They don't evaporate when the interview ends. They survive the trip back to your team, and they survive the meeting where someone asks what the research actually said.
If you write an interview guide using only these five question types, you'll have a shorter script, better data, and — eventually — a roadmap that feels earned instead of invented.