April 18, 2026
Static surveys on behavioral tests — ship without AI
Every behavioral test now supports a plain survey fallback. AI follow-ups are optional per section.
You can now run prototype tests, first-click tests, preference tests, and card sorts with a simple static survey instead of the AI moderator. The AI toggle lives on every behavioral section — flip it off, and the research questions you configured render as a straightforward form at the end of the task. Flip it on, and the AI chat runs as usual, reading the participant's answers and asking the follow-ups a human researcher would ask.
Why ship this? Teams have been asking for three reasons.
First, compliance. Some research programs cannot send participant utterances to an LLM, full stop. Static surveys keep the behavioral data (clicks, hovers, task outcomes) and collect structured self-report without a model in the loop.
Second, predictability. When your stakeholders need to see the exact script every participant saw — no AI improvisation, no rephrasing, no probing branches — a locked survey gives you that.
Third, speed. For short validation runs where you already know the five questions you want answered, skipping the AI gets you cleaner quantitative data, faster.
The toggle is per section, not per study. Mix AI-moderated interviews with static-survey behavioral tests in the same study when the research calls for it.
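To make the per-section model concrete, here is a minimal sketch of how a study might mix moderation modes. All names here (`BehavioralSection`, `ai_followups`, the mode strings) are hypothetical illustrations, not the product's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class BehavioralSection:
    kind: str                        # e.g. "prototype_test", "card_sort"
    questions: list = field(default_factory=list)  # researcher-configured questions
    ai_followups: bool = False       # the per-section toggle

def end_of_task_step(section: BehavioralSection) -> str:
    # Toggle off: the configured questions render as a plain static form.
    # Toggle on: the AI moderator reads answers and asks follow-ups.
    return "ai_chat" if section.ai_followups else "static_form"

# One study can mix both modes section by section.
study = [
    BehavioralSection("prototype_test",
                      ["Was the checkout flow clear?"],
                      ai_followups=True),
    BehavioralSection("card_sort",
                      ["Which category label fit worst?"],
                      ai_followups=False),
]
```

Under this sketch, the first section ends in an AI chat while the second ends in a locked survey, matching the compliance and predictability cases above.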