## Key metrics
The dashboard tracks six core metrics that reflect agent quality, efficiency, and cost:

| Metric | What it measures |
|---|---|
| CSAT | Customer satisfaction score collected at the end of a conversation |
| Containment rate | Percentage of conversations resolved without human escalation |
| Conversion | Rate at which conversations achieve a defined goal (e.g., booking, purchase) |
| Average handling time | Mean duration of a conversation from start to finish |
| Median agent response latency | Typical time between a user turn and the agent’s spoken response |
| Cost per agent resolution | Average credit cost for each successfully resolved conversation |
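The table's rate and cost metrics are straightforward to reproduce from raw conversation records. The sketch below shows one plausible way to compute containment rate, cost per agent resolution, and average handling time; the record fields (`resolved`, `escalated`, `credits`, `duration_s`) are illustrative assumptions, not the dashboard's actual schema.

```python
# Hypothetical conversation records; field names are illustrative,
# not the dashboard's actual export schema.
conversations = [
    {"resolved": True, "escalated": False, "credits": 12, "duration_s": 180},
    {"resolved": True, "escalated": False, "credits": 9, "duration_s": 150},
    {"resolved": False, "escalated": True, "credits": 20, "duration_s": 420},
]

# Containment rate: share of conversations that never escalated to a human.
containment_rate = sum(not c["escalated"] for c in conversations) / len(conversations)

# Cost per agent resolution: average credits spent on resolved conversations only.
resolved = [c for c in conversations if c["resolved"]]
cost_per_resolution = sum(c["credits"] for c in resolved) / len(resolved)

# Average handling time: mean duration across all conversations.
avg_handling_time = sum(c["duration_s"] for c in conversations) / len(conversations)

print(f"Containment rate: {containment_rate:.0%}")            # 67%
print(f"Cost per resolution: {cost_per_resolution:.1f} credits")  # 10.5
print(f"Average handling time: {avg_handling_time:.0f}s")     # 250s
```

Note that cost per resolution divides by resolved conversations only, so an escalated call still adds to total spend without counting toward the denominator.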
## Call history
The Call History tab lists every conversation your agent has handled. For each conversation you can:

- Read the full transcript
- Play back the audio recording
- View evaluation results from your success criteria
- Inspect any structured data collected during the call
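A single call-history entry bundles all four of these views. As a rough mental model, each record might look like the following; the field names here are assumptions for illustration, not the product's actual data shape.

```python
# Hypothetical call-history record; field names are illustrative only.
call = {
    "transcript": [
        {"role": "user", "text": "I'd like to book a table."},
        {"role": "agent", "text": "Sure, for how many people?"},
    ],
    "evaluation": {"goal_achieved": True},      # results from your success criteria
    "collected_data": {"party_size": 4},        # structured data gathered in-call
}

# Walk the transcript turn by turn.
for turn in call["transcript"]:
    print(f'{turn["role"]}: {turn["text"]}')

# Inspect evaluation results and any collected structured data.
if call["evaluation"]["goal_achieved"]:
    print("Collected:", call["collected_data"])
```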
## Interpreting latency
Median agent response latency measures the 50th-percentile response time across all turns in a conversation. It reflects your choice of LLM, voice model, and any tool calls the agent makes. If you observe high latency:

- Switch to a faster LLM in Configure > Models
- Reduce the number of tool calls per turn
- Avoid synchronous calls to slow external APIs in your tools
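To see why the metric uses the median rather than the mean, consider per-turn latencies for one conversation. A minimal sketch (the latency values are made up for illustration):

```python
from statistics import mean, median

# Per-turn response latencies in seconds: time from the end of a user
# turn to the start of the agent's reply. One turn (3.2s) includes a
# slow synchronous tool call; the values are illustrative.
turn_latencies = [0.8, 1.1, 0.9, 3.2, 1.0]

p50 = median(turn_latencies)   # 50th percentile: 1.0s
avg = mean(turn_latencies)     # mean: 1.4s, inflated by the one slow turn

print(f"Median: {p50:.1f}s, mean: {avg:.1f}s")
```

The median reports the typical turn (1.0s) even when one tool-heavy turn is slow, which is why a rising median signals a systematic problem (model choice, tools on every turn) rather than an occasional outlier.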
## Next steps

- **Conversation analysis**: Set up evaluation criteria and extract structured data from transcripts.
- **Experiments**: Run A/B tests to measure the impact of configuration changes on these metrics.

