Designing card sorts for an IA overhaul

Intermediate · April 2, 2026

When you're rebuilding a navigation structure and 'we'll just ask users what they want' isn't enough, this is the method. A practical walk-through.

Card sorts are everyone's favorite research method they never actually run. They sound simple. Print some cards, ask people to group them, look at the piles. The problem is that the simple version produces low-quality data, and the high-quality version takes enough design work to scare teams off.

This guide is the high-quality version, made practical.

Open vs. closed sorts

An open sort asks participants to group cards however they like and name their own groups. It surfaces mental models — the categories people actually have in their heads.

A closed sort gives participants fixed groups and asks them to sort cards into those. It validates a specific structure — useful when you've already proposed an IA and want to test whether it holds.

Run open first when you're starting from scratch. Run closed when you're stress-testing a candidate structure. Running closed too early anchors you to the structure you already had.

How many cards

For an open sort, 30–60 cards is the sweet spot. Under 30 and you won't see meaningful clusters; over 60 and participants start skimming.

Closed sorts tolerate more — up to 100 — because the cognitive load of categorizing is lower than inventing categories. Past 100, split the study in half and run two sessions per participant. The fatigue curve is sharp.

The trap: letting participants rename cards

The first time someone says "can I call this something else?" the instinct is to be accommodating. Don't be. If participants edit the card text, you've lost your ability to compare across sessions. Card co-occurrence analysis requires that every participant worked with the same deck.

Instead, capture the rename as a note next to the session: "P3 felt 'Integrations' should be 'Connectors.'" That's useful qualitative data. Just don't let it pollute your quantitative data.

If several participants want to rename the same card the same way, the label is probably wrong. Fix it between sessions and note the cutoff.

Synthesizing: frequency + clusters

The naive analysis looks at each participant's groupings in isolation. The better analysis looks at which cards co-occur across participants.

For each pair of cards, count how often they ended up in the same group. High co-occurrence means "people think of these as related," regardless of what they called the group. The clusters that emerge from co-occurrence data are your actual mental-model groupings. The group names participants invented are secondary — they're prompts for naming, not the answers.
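The pairwise count described above is simple to compute yourself. Here's a minimal stdlib-only sketch; the card labels and session data are made up for illustration, not from a real study:

```python
from itertools import combinations
from collections import Counter

def co_occurrence(sessions):
    """Count, for every pair of cards, how many participants
    placed both cards in the same group.

    sessions: one entry per participant; each entry is a list of
    groups, each group a list of card labels.
    """
    counts = Counter()
    for groups in sessions:
        for group in groups:
            # Sort so (A, B) and (B, A) land on the same key.
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

# Three participants sorting a tiny four-card deck (example data).
sessions = [
    [["Billing", "Invoices"], ["Integrations", "API keys"]],
    [["Billing", "Invoices", "API keys"], ["Integrations"]],
    [["Billing", "Invoices"], ["API keys", "Integrations"]],
]
counts = co_occurrence(sessions)
counts[("Billing", "Invoices")]  # 3: all three grouped them together
```

Note the counting ignores what participants named their groups entirely; only membership matters, which is exactly the point.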

Most card-sort tools produce a similarity matrix automatically. If you're running it on paper, spreadsheets work fine for up to twenty participants.
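If you do want clusters without a dedicated tool, a crude stand-in for the hierarchical clustering most tools run is to merge any two cards that co-occurred in a majority of sessions. A stdlib sketch with a hypothetical counts dict; the 0.5 threshold is a judgment call, not a standard:

```python
def clusters_from_counts(counts, cards, n_participants, threshold=0.5):
    """Group cards whose co-occurrence rate exceeds `threshold`,
    using a small union-find to merge connected cards."""
    parent = {c: c for c in cards}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path compression
            c = parent[c]
        return c

    for (a, b), n in counts.items():
        if n / n_participants > threshold:
            parent[find(a)] = find(b)

    groups = {}
    for c in cards:
        groups.setdefault(find(c), []).append(c)
    return list(groups.values())

# Example counts for four cards across three participants.
cards = ["Billing", "Invoices", "API keys", "Integrations"]
counts = {
    ("Billing", "Invoices"): 3,
    ("API keys", "Integrations"): 2,
    ("API keys", "Billing"): 1,
    ("API keys", "Invoices"): 1,
}
clusters = clusters_from_counts(counts, cards, n_participants=3)
# [["Billing", "Invoices"], ["API keys", "Integrations"]]
```

This collapses the dendrogram a real tool would give you into a single cut, so treat its output as a starting point, not the answer.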

What to do with the results

Propose an IA based on the clusters. Write group labels that match the language participants used (this is where the session notes pay off). Then run a tree test — the closed sort's cousin — to verify participants can actually find things in your proposed structure.

Card sort tells you how people group. Tree test tells you whether they can navigate. You need both before shipping an IA change.