Conversation analysis lets you define what a successful conversation looks like and automatically score every call against that definition. You can also configure data collection to pull structured fields — like user questions or intent — directly from transcripts. Both features are available under the Analysis tab in your agent’s settings.

Set up success evaluation

Success evaluation scores each conversation as success, failure, or unknown based on criteria you define. The agent’s LLM applies the criteria to the transcript after the call ends, so evaluation adds no latency to the live call.
1. Open the Analysis tab

Go to xuna.ai/app/agents, select your agent, and click Analysis.
2. Add an evaluation criterion

Under Success evaluation, click Add criterion. Give it a short machine-readable name — for example, solved_user_inquiry.
3. Write the evaluation prompt

Describe what the evaluator should look for. Be specific about what constitutes success and what constitutes failure.
Example criterion
Name: solved_user_inquiry
Prompt: The assistant answered all of the user's queries or redirected the user
        to a relevant support channel.
Success criteria:
- All user queries were answered satisfactorily.
- The user was redirected to a relevant support channel if needed.
4. Save and test

Click Save. Run a test conversation (or use an existing one from Call History) and open the conversation detail to see the evaluation result and rationale.

Outcome values

Each evaluation criterion produces one of three outcomes:
Outcome | Meaning
success | All success criteria were met
failure | One or more criteria were not met
unknown | The evaluator could not determine an outcome — for example, the conversation was too short or the user disconnected unexpectedly
The rationale field explains which criteria passed or failed. Use it to debug prompt phrasing when you see unexpected unknown results.
You can define multiple evaluation criteria per agent. Each criterion is scored independently, so you can track success across different dimensions — for example, both solved_user_inquiry and maintained_brand_tone.
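Downstream, per-criterion outcomes are easy to aggregate. The sketch below assumes a hypothetical export shape — a list of conversation records, each with an `evaluations` map keyed by criterion name — which is illustrative only and may not match the product's actual schema:

```python
from collections import Counter

# Hypothetical export shape; the real xuna.ai result format may differ.
conversations = [
    {"evaluations": {
        "solved_user_inquiry": {"outcome": "success",
                                "rationale": "All queries answered."},
        "maintained_brand_tone": {"outcome": "failure",
                                  "rationale": "Informal closing."}}},
    {"evaluations": {
        "solved_user_inquiry": {"outcome": "unknown",
                                "rationale": "Call too short to judge."}}},
]

def outcome_counts(conversations, criterion):
    """Tally success/failure/unknown for one criterion across conversations."""
    return Counter(
        conv["evaluations"][criterion]["outcome"]
        for conv in conversations
        if criterion in conv["evaluations"]
    )

counts = outcome_counts(conversations, "solved_user_inquiry")
```

Because each criterion is scored independently, you can run the same tally per criterion and compare dimensions side by side.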

Set up data collection

Data collection extracts specific information from the transcript and stores it as structured fields on each conversation record. Use this to track trends, feed downstream systems, or filter conversations in Call History.
1. Add a data field

Under Data collection, click Add field.
2. Configure the field

Set the data type, identifier, and a description that tells the extractor what to look for.
Example field
Data type: string
Identifier: user_question
Description: Extract the user's questions and inquiries from the conversation.
3. Choose a data type

Select the type that matches the value you want to capture:
Type | Use for
string | Free-text values like questions, complaints, or topics
number | Numeric values like order amounts or queue positions
boolean | Yes/no flags like whether the user agreed to terms
4. Save and verify

Save the field and run a conversation. Open the conversation detail view and check the Collected data section to confirm the field is populated correctly.
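When feeding collected fields into a downstream system, it helps to check that each field arrived with the type you configured. This is a minimal sketch under assumed names (`FIELD_TYPES`, `validate_collected_data`) and an assumed record shape, not the product's actual export format:

```python
# Hypothetical field configuration mirroring the data types above
# (string -> str, number -> float, boolean -> bool). Illustrative only.
FIELD_TYPES = {
    "user_question": str,
    "order_amount": float,
    "agreed_to_terms": bool,
}

# Hypothetical conversation record with collected data.
record = {
    "user_question": "How do I reset my password?",
    "order_amount": 42.5,
    "agreed_to_terms": True,
}

def validate_collected_data(record, field_types):
    """Return a map of fields whose values are missing or mistyped."""
    problems = {}
    for name, expected in field_types.items():
        value = record.get(name)
        if value is None:
            problems[name] = "missing"
        elif not isinstance(value, expected):
            problems[name] = (
                f"expected {expected.__name__}, got {type(value).__name__}"
            )
    return problems
```

A field can come back empty when the transcript simply never mentioned the information, so treat "missing" as a signal to review the field's description, not necessarily a bug.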

Search conversation history

The Call History tab supports two search modes:
  • Keyword search — matches exact words or phrases in transcripts.
  • Semantic search — finds conversations by meaning, not exact wording. Searching “billing problem” surfaces conversations where users said “I was charged twice” or “my invoice is wrong”.
Use semantic search when you want to understand how users describe a topic, not just how often they use a specific term. This is especially useful for identifying gaps in your knowledge base or discovering emerging support themes.
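The difference between the two modes can be illustrated with toy code. The sketch below contrasts exact keyword matching with a crude semantic match built on hand-picked 2-D word vectors; real semantic search uses learned embeddings with hundreds of dimensions, and nothing here reflects how the product implements it:

```python
import math

# Toy word embeddings, hand-picked purely for illustration: billing-related
# words cluster together, weather-related words cluster elsewhere.
EMBEDDINGS = {
    "billing": [0.9, 0.1], "problem": [0.8, 0.3], "charged": [0.85, 0.2],
    "twice": [0.7, 0.4], "invoice": [0.9, 0.15], "wrong": [0.75, 0.35],
    "weather": [0.1, 0.9], "sunny": [0.05, 0.95],
}

def keyword_match(query, transcript):
    """Keyword mode: every query word must appear verbatim in the transcript."""
    words = transcript.lower().split()
    return all(w in words for w in query.lower().split())

def embed(text):
    """Average the vectors of known words (a crude sentence embedding)."""
    vecs = [EMBEDDINGS[w] for w in text.lower().split() if w in EMBEDDINGS]
    if not vecs:
        return [0.0, 0.0]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

query = embed("billing problem")
hit = cosine(query, embed("I was charged twice"))    # semantically close
miss = cosine(query, embed("it is sunny weather"))   # semantically far
```

Keyword mode would miss "I was charged twice" entirely, while the embedding comparison ranks it well above the unrelated sentence — the same intuition behind semantic search surfacing differently worded reports of the same issue.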

Next steps

Analytics dashboard

View aggregate metrics and filter conversations by evaluation outcome.

Testing

Run automated tests to validate your evaluation criteria before deploying.