How a product designer validates a pricing page in a day
One designer, one Monday, 47 participants. A five-second test, a preference test, and three follow-up AI-moderated interviews.
This is an example workflow — not a real customer story. Honne is new; real customer outcomes are just starting to land. This illustrates how the product fits into real teams' afternoons.
The setup
She is a senior product designer at a forty-person SaaS company. The product is a B2B analytics tool; the homepage gets a few thousand visitors a week; pricing is where they convert or don't. On Monday morning her PM drops a Slack message asking if she can take a look at the new pricing page before Thursday's ship — "just make sure it makes sense, you know the drill." She has exactly one day. The mock exists. The engineers are already building it. Thursday's ship is on the roadmap and, culturally, ships that slip get noticed.
She has been pushing for more research at the company for a year and keeps getting told there isn't time. This is a chance to prove that "a day" is enough time, if the tools are right.
What they did
At 9 a.m. she opens Honne and spins up a five-second test on the new pricing mock. Three questions, in order: "What do you think this page is for?" — "Roughly what would it cost to use this?" — "What would you click on first?" The five-second test shows the screenshot for five seconds, hides it, then asks the questions. It's a first-impression check, and it's brutal in exactly the way she needs.
By 11 a.m. the test is live. She adds a preference test as the second stage: the new pricing mock and the current live page, side by side, shown in random order. One question: "Which of these would you rather buy from, and why?"
At 2 p.m. she adds a third stage: a ten-minute AI-moderated interview, triggered only for participants who complete the first two. Honne's moderator does the follow-up — it reads the participant's typed answers from stages one and two, asks specific follow-ups, and digs in when answers are vague. Three interview slots. She doesn't want fifty interviews; she wants three good ones.
At 4 p.m. she drops the study link in the company's #beta-testers Slack channel. Beta testers are existing customers who opted in to feedback loops — they match the real buyer persona closely enough. Forty-seven participants complete the five-second test and preference test inside two hours. Three finish the interview by dinner.
What they learned
The signal is sharp and, in places, uncomfortable.
"I thought this was a blog post, honestly. There wasn't enough pricing front and center."
Twelve of forty-seven participants used "article," "post," or "content" to describe the page. The new mock leads with a feature grid and pushes the price cards below the fold on a laptop. The first-impression data says: at five seconds, most people don't see pricing on the pricing page.
Implication: pricing cards need to be above the fold on the most common viewport. This is not a subjective judgement; it's what forty-seven first impressions said.
"The 'Pro' box looked like an ad."
The "Pro" tier — the one the company actually wants people to buy — is highlighted with a purple gradient and a "Most popular" ribbon. Nineteen participants in the preference test flagged it as feeling "promotional," "pushy," or "like an ad." Several explicitly said they'd scroll past it.
Implication: the visual emphasis is working against the business goal. A quieter highlight is likely to convert better than a loud one.
"I would have picked the old page — at least I knew where to click."
In the preference test, twenty-eight of forty-seven preferred the current live page over the new mock. This is the finding that changes the ship decision. The new design is prettier; it's also less clear. Participants weren't saying "the old page looks better" — they were saying "I know what to do on the old page."
Implication: clarity of action beats aesthetic polish on a pricing page. The new design optimised for the wrong variable.
What they shipped
Tuesday morning, she walks into stand-up and shares the three insights in ninety seconds. The PM, who came in expecting a rubber-stamp, looks at the preference split and agrees to delay the ship by a week. They don't kill the new design; they restructure it. Pricing cards move above the fold. The "Pro" highlight loses the gradient and gains a quieter inline label. The feature grid moves below.
She re-runs the same study on the restructured mock on Friday. Forty-one participants. The preference split flips: thirty-one prefer the new design over the old. This time, first-impression answers mention "pricing" or a specific tier in the first five seconds.
The restructured page ships the following Friday — eight days after the original deadline. In the next A/B test, the new pricing page's conversion rate is measurably higher than the old live page's. The designer doesn't frame it as a win for her; she frames it as a win for the one-day-of-research workflow. Her next pitch to leadership is shorter: this is what a day of research looks like, and this is the page we didn't have to rebuild in three months.