See exactly what users did before and after giving feedback
Responsly connects survey responses to Smartlook session recordings by pushing each submission as a custom event on the visitor’s session timeline. Your team filters recordings by feedback score or answer, watches the experience that led to the response, and acts on behavioral evidence — not assumptions.
For product, UX, and CX teams, this integration answers the question that survey dashboards can’t: “What actually happened?” A low score stops being a mystery when you can watch the session that produced it.
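Under the hood, each submission becomes a single custom event on the visitor’s Smartlook timeline. A minimal sketch of that push, assuming the Smartlook web SDK is already loaded on the page (the `onSurveySubmit` hook and the property names are illustrative, not Responsly’s documented API):

```typescript
// The Smartlook web SDK exposes a global `smartlook` function; `track`
// pushes a custom event whose properties are flat string/number/boolean values.
declare function smartlook(
  method: 'track',
  event: string,
  props: Record<string, string | number | boolean>
): void;

interface SurveySubmission {
  surveyName: string;
  campaignId: string;
  responseId: string;
}

// Hypothetical submission hook: the integration wires this up for you, so the
// event lands on the visitor's session timeline without custom code.
function onSurveySubmit(submission: SurveySubmission): void {
  smartlook('track', 'responsly_survey_submitted', {
    survey_name: submission.surveyName,
    campaign_id: submission.campaignId,
    response_id: submission.responseId,
    submitted_at: new Date().toISOString(),
  });
}
```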
Closing the gap between sentiment and behavior
Survey responses are opinions. Session recordings are evidence. Separately, each tells an incomplete story: surveys say “I’m frustrated” without showing why; recordings show behavior without explaining how the user felt about it.
Together, they form a complete feedback loop:
- a detractor’s NPS score becomes a specific product issue when you watch their session and see a payment form error,
- a feature request becomes urgent when the recording shows the user spent four minutes trying a workaround that doesn’t exist,
- a positive CSAT score confirms which design patterns are working when you watch promoters navigate effortlessly.
Teams that connect feedback to behavior fix the right problems first. For a deeper framework on turning feedback into action, see the guide on customer experience strategy.
Watching what happened before a detractor NPS score
A fintech product team runs a post-login NPS survey. Detractor submissions appear as custom events in Smartlook. The team filters for scores 0–6 and watches the five most recent detractor sessions.
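The 0–6 filter works because the score travels with the event as a numeric property. A sketch of what that submission handler might look like (handler, event, and property names are assumptions):

```typescript
declare function smartlook(
  method: 'track',
  event: string,
  props: Record<string, string | number | boolean>
): void;

type NpsCategory = 'detractor' | 'passive' | 'promoter';

// Standard NPS bands: 0-6 detractor, 7-8 passive, 9-10 promoter.
function npsCategory(score: number): NpsCategory {
  if (score <= 6) return 'detractor';
  if (score <= 8) return 'passive';
  return 'promoter';
}

// Hypothetical handler: the numeric `score` property is what makes a
// "score between 0 and 6" filter possible in Smartlook.
function onNpsSubmit(score: number, responseId: string): void {
  smartlook('track', 'nps_submitted', {
    score,
    category: npsCategory(score),
    response_id: responseId,
  });
}
```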
They discover three patterns:
- two detractors encountered a timeout error on the transaction history page — the page loaded but the data request failed silently, showing an empty table,
- one detractor spent 90 seconds searching for the account settings page, clicking three wrong navigation items before finding it,
- two detractors completed their task successfully but experienced noticeable lag on every page transition (3–4 second load times).
Each pattern maps to a different team: infrastructure fixes the API timeout, design relocates account settings in the navigation, and performance engineers investigate the rendering lag. Without session context, the NPS survey would have produced an action item like “improve product experience” — too vague to act on. Read about structuring NPS programs in our survey alternatives comparison.
Identifying UI friction from low-CSAT session replays
An e-commerce platform triggers a CSAT survey after order completion. Low scores (1–2 stars) are investigated by watching the corresponding sessions.
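Pushing the star rating as a numeric property is what makes those sessions findable afterward. A possible shape for the handler, with illustrative names:

```typescript
declare function smartlook(
  method: 'track',
  event: string,
  props: Record<string, string | number | boolean>
): void;

// Hypothetical post-purchase hook; event and property names are illustrative.
function onCsatSubmit(stars: number, orderId: string): void {
  smartlook('track', 'csat_submitted', {
    stars,                  // 1-5 rating, filterable as a number
    low_score: stars <= 2,  // boolean shortcut for a saved "investigate" filter
    order_id: orderId,
  });
}
```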
Session replay reveals:
- 38% of low-CSAT sessions include at least one rage click — rapid repeated clicks on an element that doesn’t respond,
- the most common rage-click target is the “Apply Coupon” button, which has a 1.5-second delay before confirmation,
- 22% of low-CSAT sessions show users scrolling past the shipping options section and returning to it — the default selection isn’t visible without scrolling.
The team fixes the coupon button responsiveness and repositions shipping options above the fold. Follow-up CSAT shows a 0.9-point improvement within one release cycle. Checkout completion rate rises 7%.
Matching exit survey reasons to page interactions
A B2B SaaS company triggers an exit survey when visitors navigate away from the pricing page: “What’s preventing you from signing up today?”
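One common way to detect that moment is an exit-intent listener: the cursor leaving through the top of the viewport usually means the visitor is heading for the tab bar or address bar. A sketch under that assumption, with a hypothetical `showExitSurvey` hook into the widget (the real trigger is configured in Responsly rather than hand-coded):

```typescript
// `showExitSurvey` is a hypothetical hook into the survey widget.
declare function showExitSurvey(question: string): void;

let surveyShown = false;

document.addEventListener('mouseout', (e: MouseEvent) => {
  // Cursor left through the top edge of the viewport: a common exit-intent signal.
  const leavingViewport = e.relatedTarget === null && e.clientY <= 0;
  // '/pricing' stands in for the actual pricing page path.
  if (leavingViewport && !surveyShown && location.pathname === '/pricing') {
    surveyShown = true;
    showExitSurvey("What's preventing you from signing up today?");
  }
});
```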
Responses sort into categories — “pricing unclear,” “need to talk to sales,” “comparing options,” “missing feature.” Smartlook recordings add context to each category:
- “pricing unclear” respondents hover over the pricing table for an average of 45 seconds but never click “See full comparison” — the link is too small and positioned below the fold,
- “missing feature” respondents navigate to the features page and use Ctrl+F to search for a specific term — revealing exactly which feature they need,
- “comparing options” respondents switch between the pricing page and a competitor’s tab (visible in the URL bar during screen transitions).
The team enlarges the pricing comparison link, adds the three most-searched features to the pricing page, and creates a competitive comparison section. Pricing page conversion rate improves from 3.1% to 4.6% over two months. Learn more about survey techniques in our guide to WordPress survey makers.
Bug reports enriched with session recordings
A product team adds a feedback widget to their web app: “Did you encounter any issues?” with a yes/no toggle and an optional description field. “Yes” responses fire as high-priority events in Smartlook.
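A sketch of what that widget handler might look like, assuming the Smartlook SDK is loaded (event and property names are illustrative):

```typescript
declare function smartlook(
  method: 'track',
  event: string,
  props: Record<string, string | number | boolean>
): void;

// Hypothetical widget handler: only "yes" answers become high-priority events.
function onIssueToggle(encounteredIssue: boolean, description?: string): void {
  if (!encounteredIssue) return;
  smartlook('track', 'issue_reported', {
    priority: 'high',
    description: description ?? '',  // optional free-text detail
    page: location.pathname,         // where the report was filed
    reported_at: new Date().toISOString(),
  });
}
```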
Over 30 days:
- 127 bug reports are submitted through the widget,
- 89 have a matching Smartlook session recording,
- the engineering team resolves 34 bugs in the first sprint — each ticket includes the user’s description, the exact session recording, and a timestamp of the error moment.
Average time to reproduce a reported bug drops from 45 minutes to under 5 minutes. The backlog of unactionable bug reports (“it just didn’t work”) shrinks by 60% because engineers can see exactly what happened. Use skip logic to branch from the yes/no toggle into a detailed description field only when needed.
What data is sent to Smartlook
Each survey submission pushes a custom event containing the following fields, sketched as a type after the list:
- survey name and campaign identifier,
- question text and full answer content for each completed question,
- response type (NPS score, CSAT rating, text, multiple-choice selection),
- response ID and submission timestamp,
- completion status (full or partial submission).
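Expressed as a TypeScript type, the payload might look like this (field names mirror the list above and are illustrative, not Responsly’s documented schema):

```typescript
// Illustrative payload shape; field names are assumptions mirroring the list
// above, not Responsly's documented schema.
interface SurveyEventPayload {
  surveyName: string;
  campaignId: string;
  answers: {
    question: string;                                    // question text
    answer: string | number | string[];                  // full answer content
    type: 'nps' | 'csat' | 'text' | 'multiple_choice';   // response type
  }[];
  responseId: string;
  submittedAt: string;                 // ISO 8601 submission timestamp
  completion: 'full' | 'partial';      // completion status
}
```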
Events are searchable in Smartlook’s filter system, visible on session timelines, and usable in funnels and retention analysis.
Start watching the sessions behind your feedback
Connect Responsly to Smartlook, deploy your first on-site survey, and stop guessing why users gave the scores they did. Watch the experience, identify the problem, and fix it with evidence — not speculation.