The XUNA AI analytics dashboard gives you a real-time view of how your agents are performing. Open the dashboard at xuna.ai/app/agents, select an agent, and navigate to the Analytics tab to see aggregated metrics and the Call History tab to review individual conversations.

Key metrics

The dashboard tracks six core metrics that reflect agent quality, efficiency, and cost:
Metric | What it measures
CSAT | Customer satisfaction score collected at the end of a conversation
Containment rate | Percentage of conversations resolved without human escalation
Conversion | Rate at which conversations achieve a defined goal (e.g., booking, purchase)
Average handling time | Mean duration of a conversation from start to finish
Median agent response latency | Typical time between a user turn and the agent’s spoken response
Cost per agent resolution | Average credit cost for each successfully resolved conversation
Use containment rate and CSAT together to distinguish between agents that deflect conversations and agents that actually resolve them. A high containment rate with low CSAT suggests the agent is blocking escalation rather than answering questions effectively.
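If you export conversation records for offline analysis, the two metrics can be computed together. A minimal sketch, assuming a hypothetical record shape with an `escalated` flag and a `csat` score (the real dashboard computes these server-side):

```python
from statistics import mean

# Hypothetical conversation records; field names are illustrative,
# not the dashboard's actual export schema.
conversations = [
    {"escalated": False, "csat": 5},
    {"escalated": False, "csat": 2},
    {"escalated": True,  "csat": 4},
    {"escalated": False, "csat": 1},
]

# Containment: share of conversations that never reached a human.
containment_rate = sum(not c["escalated"] for c in conversations) / len(conversations)

# CSAT: mean satisfaction score across conversations.
avg_csat = mean(c["csat"] for c in conversations)

print(f"Containment: {containment_rate:.0%}, CSAT: {avg_csat:.2f}")
```

Here containment is high (75%) while CSAT is middling, the pattern that warrants checking whether the agent is deflecting rather than resolving.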

Call history

The Call History tab lists every conversation your agent has handled. For each conversation you can:
  • Read the full transcript
  • Play back the audio recording
  • View evaluation results from your success criteria
  • Inspect any structured data collected during the call
Click any conversation row to open the detail view. You can filter by date range, evaluation outcome, or collected data field values to find specific calls quickly.
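The same filters are easy to reproduce over an exported call list. A minimal sketch, assuming hypothetical record fields (`date`, `outcome`) rather than the real export schema:

```python
from datetime import date

# Hypothetical call-history records, e.g. from a CSV export.
calls = [
    {"date": date(2024, 5, 1), "outcome": "success", "topic": "billing"},
    {"date": date(2024, 5, 3), "outcome": "failure", "topic": "refund"},
    {"date": date(2024, 6, 2), "outcome": "success", "topic": "booking"},
]

def filter_calls(calls, start, end, outcome=None):
    """Mimic the dashboard's date-range and evaluation-outcome filters."""
    return [
        c for c in calls
        if start <= c["date"] <= end
        and (outcome is None or c["outcome"] == outcome)
    ]

# All failed evaluations in May 2024.
may_failures = filter_calls(calls, date(2024, 5, 1), date(2024, 5, 31),
                            outcome="failure")
```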
Use the Smart search feature in the Call History tab to find conversations by keyword or semantic meaning — for example, searching “billing problem” will surface conversations where users discussed billing even if they used different phrasing. See Conversation analysis for more on search and data extraction.
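To see why semantic matching surfaces more than exact keywords, consider this toy illustration. The real Smart search uses learned representations; the hand-built synonym map below is purely illustrative:

```python
# Toy contrast between keyword and "semantic" matching.
# SYNONYMS is a hypothetical stand-in for learned embeddings.
SYNONYMS = {"billing": {"billing", "invoice", "charge", "payment"}}

transcripts = [
    "i was charged twice on my invoice",
    "i want to book a table for two",
]

def semantic_match(query, text):
    """Match if the text contains the query or any related term."""
    terms = SYNONYMS.get(query, {query})
    return any(t in text.lower() for t in terms)

# Keyword search for "billing" misses both transcripts; semantic
# matching catches the first via "charge"/"invoice".
hits = [t for t in transcripts if semantic_match("billing", t)]
```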

Interpreting latency

Median agent response latency measures the 50th-percentile response time across all turns in a conversation. It reflects your choice of LLM, voice model, and any tool calls the agent makes. If you observe high latency:
  • Switch to a faster LLM in Configure > Models
  • Reduce the number of tool calls per turn
  • Avoid synchronous calls to slow external APIs in your tools
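The median is used rather than the mean because a single slow turn (for example, one long tool call) would otherwise dominate the figure. A quick sketch with illustrative per-turn latencies:

```python
from statistics import median

# Per-turn response latencies in milliseconds for one conversation;
# values are illustrative. The 3200 ms turn represents a slow tool call.
turn_latencies_ms = [420, 380, 910, 450, 3200, 400]

# The median (50th percentile) is robust to the outlier...
median_latency = median(turn_latencies_ms)   # 435 ms

# ...while the mean is dragged up by it.
mean_latency = sum(turn_latencies_ms) / len(turn_latencies_ms)   # 960 ms
```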

Next steps

Conversation analysis

Set up evaluation criteria and extract structured data from transcripts.

Experiments

Run A/B tests to measure the impact of configuration changes on these metrics.