EXAMPLE WORKFLOW

How a PM replaces a cancelled research agency contract

March 10, 2026

A PM's $42K research agency contract got cut. 3 weeks later she had run 3 studies — 61 participants, 3 decision-ready reports — for under $500.

This is an example workflow — not a real customer story. Honne is new, and real customer outcomes are just starting to land. This illustrates how the product could fit into a real team's quarter.

The setup

She is a PM on the growth team at a two-hundred-person fintech. Her quarter was built around a forty-two-thousand-dollar research contract with an outside agency — the plan was a nav audit, a feature-grouping study, and a round of power-user interviews to pressure-test the team's Q2 roadmap. The agency was booked, scoped, and supposed to start on Monday.

On Friday afternoon, finance sends an email. Discretionary spend is frozen through end of quarter. The agency contract is one of seventeen things on the cancellation list. She has three weeks until the Q2 planning meeting. The research she was going to bring — she still has to bring it. Nobody has told her the scope has changed — only the budget has.

She has a Honne subscription her team picked up six weeks earlier, mostly for ad-hoc tests. She has never run a full research arc on it.

What they did

Week 1: tree test. She starts with the nav audit because it's the most structured of the three. She exports the proposed new navigation — six top-level categories, twenty-two sub-items — into Honne's tree test tool. She writes eight tasks phrased as real user goals: "You want to export last month's transactions for your accountant. Where would you go?" She sends the study to the company's existing user research list — three hundred customers who opted in. Thirty-five complete it in forty-eight hours. Honne's analysis highlights two dead-end paths: the export task fails for sixty-two percent of participants, and account-level settings gets confused with team-level settings.

Week 2: card sort. She pivots to the feature-grouping question. An open card sort, twenty-eight feature cards, twenty-two participants from the same list. She watches the auto-clustering in Honne's synthesis view as results come in. One cluster catches her by surprise: participants group "reports," "analytics," "insights," and "alerts" together — but the team has been building them as separate product areas, each with its own dashboard. The mental model in the card sort is one product, not several.

Week 3: power-user interviews. Four AI-moderated interviews, fifteen minutes each. She recruits from a list of users who have filed at least three support tickets in the last ninety days — on the theory that people who complain are people who care. Honne's moderator runs each interview; she reviews the transcripts the next morning. The synthesis view clusters insights across all four interviews and surfaces a workflow pain she hadn't heard before: power users are maintaining a parallel spreadsheet because the product's own export format loses context. They don't complain about it in support tickets because they've stopped expecting it to change.

What they learned

Three strong findings, each one actionable.

"I clicked on 'Settings' looking for what turned out to be in 'Preferences.' I gave up after two minutes."

The tree test surfaced this pattern repeatedly. Participants expect one of: Settings, Preferences, Account — not all three. The proposed new nav had all three, distributed across different top-level sections.

Implication: consolidate to one name for user-level configuration. The proposed nav fails a basic task, and it's fixable before the redesign ships.

"I thought 'Reports' and 'Analytics' would be the same menu. They're in different places."

The card sort clustered reports, analytics, insights, and alerts into one group for eighteen of twenty-two participants. The product team has been planning them as separate areas with separate leads.

Implication: the team's mental model is wrong. Either unify the product areas under one IA umbrella, or expect every user to be confused every time. The card sort data is strong enough to use as a reorg argument.

"I don't look at the dashboard. I look at the email summary. I'd never land on the dashboard deliberately."

Three of the four power users said a version of this in their interviews. The dashboard — the feature the team ships most releases against — is not the feature power users actually use.

Implication: the roadmap is allocating disproportionate engineering time to a surface that the highest-value users ignore. Either invest in making the dashboard indispensable, or redirect effort to the email summary, which is where these users live.

What they shipped

By the Monday of the planning meeting she has three decision-ready reports, each about four pages long, each ending with a specific recommendation. The tree test report recommends the nav consolidation. The card sort report recommends a single product area for reports-and-analytics. The interview report recommends redirecting six weeks of Q2 engineering from dashboard work to email-summary work.

Two of the three are adopted at the planning meeting. (The third — the reorg — takes two more weeks of follow-up conversations to land.)

Total cost to the business: her Honne Pro subscription, which was already paid, plus four hundred dollars in gift cards for participants. She gets asked to present at the next all-hands on how async research changed what the planning meeting felt like — "less like we were arguing about opinions, more like we were choosing between three clear bets." The agency contract does not get renewed the following quarter. The team keeps the Honne subscription and adds two more seats.