Add qualitative survey data to every Optimizely experiment — know why a variation won, not just that it did
A/B tests tell you which variation won. They rarely tell you why. “Variation B converted 12% higher” could be due to clearer copy, faster layout, better trust signals — or a coincidence inside the margin of error. Adding Responsly to Optimizely captures the ‘why’ alongside the ‘how much’, so experiment readouts turn into decisions the team actually agrees with.
Where Optimizely + Responsly pays off
Post-experiment survey on variation exposure
A brief survey asks users who saw each variation what they noticed, what was clearer, and what they’d change. When Variation B wins on conversion AND users explain why it worked, the finding becomes actionable beyond the current test.
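As a minimal sketch of that trigger, assuming Optimizely Web's client-side `addListener` API (the `campaignDecided` lifecycle event fires when a visitor is bucketed); `showResponslySurvey`, the experiment ID, and the survey URL are hypothetical placeholders for your own Responsly embed:

```typescript
// Hypothetical helper: open a Responsly survey with hidden fields
// passed as URL parameters. Swap in your real share link or embed code.
function showResponslySurvey(surveyUrl: string, hiddenFields: Record<string, string>): void {
  const url = new URL(surveyUrl);
  for (const [key, value] of Object.entries(hiddenFields)) {
    url.searchParams.set(key, value); // hidden fields travel as URL parameters
  }
  window.open(url.toString(), "_blank");
}

// Optimizely Web queues API calls on window.optimizely before the snippet loads.
const optimizely = ((window as any).optimizely = (window as any).optimizely || []);
optimizely.push({
  type: "addListener",
  filter: { type: "lifecycle", name: "campaignDecided" }, // fires on bucketing
  handler: (event: any) => {
    const decision = event?.data?.decision;
    if (decision?.experimentId === "EXPERIMENT_ID") { // placeholder ID
      showResponslySurvey("https://example.com/s/SURVEY_ID", { // placeholder URL
        experiment_id: String(decision.experimentId),
        variation_id: String(decision.variationId),
      });
    }
  },
});
```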
Targeted qualitative follow-up
An experiment shows a surprising result, perhaps a dip in a metric nobody expected to move. A variation-targeted survey can probe exactly what happened, saving weeks of speculation.
Multi-variant content testing
Testing headline variants, image variants, or layout variants gains a qualitative layer: which variation did users find most trustworthy? Most professional? Most exciting? These subjective dimensions rarely show up in conversion data but often drive long-term engagement.
Personalization campaign validation
Personalized experiences can be tested not just on click rate but on whether the personalization felt relevant to the viewer. A short post-experience survey answers that directly.
Feature rollout feedback
For Feature Experimentation (phased rollouts), a survey targeted at the exposed cohort captures the feature’s real-world reception beyond conversion metrics.
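A sketch of targeting that exposed cohort, assuming the Optimizely Feature Experimentation JavaScript SDK; the flag key and the `queueSurvey` helper are hypothetical:

```typescript
import { createInstance } from "@optimizely/optimizely-sdk";

// Hypothetical helper: enqueue a Responsly survey tagged with rollout context.
declare function queueSurvey(surveyId: string, hiddenFields: Record<string, string>): void;

const optimizely = createInstance({ sdkKey: "YOUR_SDK_KEY" }); // placeholder key

optimizely?.onReady().then(() => {
  const user = optimizely.createUserContext("user-123"); // your stable user ID
  const decision = user?.decide("new_checkout_flow");    // placeholder flag key
  if (decision?.enabled) {
    // This user is in the exposed cohort: ask about the feature itself,
    // tagging the response so reports can segment by variation.
    queueSurvey("rollout_feedback", {
      flag_key: decision.flagKey,
      variation_key: decision.variationKey ?? "default",
    });
  }
});
```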
Connecting Responsly to Optimizely
- Identify the variation data source: Optimizely's client-side state API for Web experiments, or a variation key passed from the server as a URL parameter (see the sketch after this list).
- Configure Responsly hidden fields for variation name and experiment ID.
- Build the survey with questions that map to the experiment hypothesis.
- Set display rules — all users, or only specific variations.
- Tag responses automatically so Responsly reports segment by variation.
- Correlate with Optimizely results in a combined readout (usually a shared Google Sheet or Airtable base).
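A minimal sketch of the first two steps plus tagging, assuming Optimizely Web's client-side state API (`getVariationMap`); the experiment ID, survey URL, and hidden-field names are placeholders that should match the hidden fields configured in your Responsly survey:

```typescript
const EXPERIMENT_ID = "1234567890"; // placeholder: your Optimizely experiment ID

// Optimizely Web exposes the visitor's decisions via its state API.
const state = (window as any).optimizely?.get?.("state");
const variation = state?.getVariationMap?.()[EXPERIMENT_ID]; // { id, name } if bucketed

if (variation) {
  // Pass variation context as hidden fields via URL parameters, so every
  // response arrives pre-tagged and Responsly reports segment by variation.
  const surveyUrl = new URL("https://example.com/s/SURVEY_ID"); // placeholder link
  surveyUrl.searchParams.set("experiment_id", EXPERIMENT_ID);
  surveyUrl.searchParams.set("variation_name", variation.name);

  const frame = document.querySelector<HTMLIFrameElement>("#responsly-survey");
  if (frame) frame.src = surveyUrl.toString(); // embed the pre-tagged survey
}
```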
Practices that sharpen experiment conclusions
Keep post-experiment surveys to one or two questions. An intercept shown during an experiment can itself affect conversion, so it has to be short.
Ask about the experience, not the hypothesis. Users can describe what they saw; they can’t tell you why the hypothesis worked or didn’t.
Correlate qualitative with quantitative. A survey finding that contradicts a conversion result is a red flag worth investigating before calling the test a win; a minimal combined readout is sketched after these practices.
Save qualitative findings with the experiment record. A dedicated doc or research repository keeps the ‘why’ alongside the ‘what’, reusable for future experiments.
Pair with session replay. FullStory or similar tools answer ‘what did they do?’; surveys answer ‘what did they think?’. Use both on important experiments. See the FullStory integration.
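For the correlation practice above, a sketch of a combined readout: joining exported Responsly responses against per-variation conversion rates from Optimizely. The field names assume the hidden fields configured earlier and a hypothetical 1-5 clarity rating question:

```typescript
interface SurveyResponse {
  variation_name: string; // hidden field set at survey time
  clarity_score: number;  // assumed 1-5 rating question
}

interface VariationResult {
  variation_name: string;
  conversion_rate: number; // exported from the Optimizely results page
}

// Merge both sources into one row per variation for the experiment readout.
function combineReadout(responses: SurveyResponse[], results: VariationResult[]) {
  return results.map((r) => {
    const cohort = responses.filter((s) => s.variation_name === r.variation_name);
    const avgClarity =
      cohort.reduce((sum, s) => sum + s.clarity_score, 0) / Math.max(cohort.length, 1);
    return {
      variation: r.variation_name,
      conversionRate: r.conversion_rate,
      avgClarity: Number(avgClarity.toFixed(2)),
      sampleSize: cohort.length,
    };
  });
}
```

A variation that wins on conversion but scores low on clarity is exactly the red-flag case worth investigating before declaring a winner.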
Know why the experiment won, not just that it did
Connect Responsly to Optimizely and every A/B test gains a qualitative layer. Product and growth teams stop debating what Variation B's lift means; they see the user reasoning alongside the numbers. Experiment conclusions firm up, decisions go faster, and the next test builds on real understanding instead of lucky outcomes. For question design in experiment surveys, see our survey question types guide. For analyzing responses once the experiment has run, see our survey data analysis guide.