XUNA AI provides native SDKs for iOS, Android, and React Native. Each SDK handles microphone input, audio playback, and the conversation protocol so you can focus on building your app’s UI.

iOS — Swift

XunaAISDK for native iOS apps built with Swift or SwiftUI.

Android — Kotlin

XunaAIClient for native Android apps built with Kotlin.

React Native

Cross-platform SDK for iOS and Android from a single JavaScript codebase.

iOS — Swift SDK

Installation

Add the XunaAISDK package to your Xcode project using Swift Package Manager.
1. Open your project in Xcode: go to File → Add Package Dependencies.

2. Enter the package URL:

   https://github.com/xuna-ai/xuna-ai-swift

3. Select the version: choose the latest release and click Add Package.

4. Add microphone permission: in your Info.plist, add the NSMicrophoneUsageDescription key with a user-facing description.

<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to talk with your AI assistant.</string>

Basic usage

import Combine
import XunaAISDK

class AgentViewModel: ObservableObject {
    private var conversation: XunaAISDK.Conversation?

    func startConversation() async throws {
        let config = XunaAISDK.SessionConfig(agentId: "YOUR_AGENT_ID")
        conversation = try await XunaAISDK.Conversation.startSession(
            config: config,
            callbacks: XunaAISDK.Callbacks(
                onConnect: { conversationId in
                    print("Connected: \(conversationId)")
                },
                onDisconnect: {
                    print("Disconnected")
                },
                onMessage: { message, role in
                    print("\(role): \(message)")
                },
                onError: { error, _ in
                    print("Error: \(error)")
                },
                onStatusChange: { status in
                    print("Status: \(status)")
                },
                onModeChange: { mode in
                    print("Mode: \(mode)")
                }
            )
        )
    }

    func endConversation() async {
        await conversation?.endSession()
    }
}

SwiftUI example

import SwiftUI
import XunaAISDK

struct ContentView: View {
    @StateObject private var viewModel = AgentViewModel()
    @State private var isConnected = false

    var body: some View {
        Button(isConnected ? "End conversation" : "Start conversation") {
            Task {
                if isConnected {
                    await viewModel.endConversation()
                    isConnected = false
                } else {
                    do {
                        try await viewModel.startConversation()
                        isConnected = true
                    } catch {
                        print("Failed to start conversation: \(error)")
                    }
                }
            }
        }
        .buttonStyle(.borderedProminent)
    }
}

Android — Kotlin SDK

Installation

Add the XunaAIClient library to your Gradle build.
1. Add the dependency: in your module-level build.gradle.kts:

dependencies {
    implementation("io.xuna_ai:xuna-ai-android:LATEST_VERSION")
}

2. Add internet and microphone permissions: in your AndroidManifest.xml:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />

3. Request runtime permission: on Android 6.0 and above, request RECORD_AUDIO at runtime before starting a session, using the standard ActivityCompat.requestPermissions API or Jetpack's permission APIs.
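Step 3 can be sketched as follows with the standard AndroidX APIs. This is a minimal example, not the SDK's own code; the request code value is arbitrary and the Activity name is a placeholder.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

class MainActivity : AppCompatActivity() {

    private val micRequestCode = 1001 // arbitrary request code

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        ensureMicPermission()
    }

    // Request RECORD_AUDIO only if it has not already been granted.
    private fun ensureMicPermission() {
        val granted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.RECORD_AUDIO
        ) == PackageManager.PERMISSION_GRANTED
        if (!granted) {
            ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.RECORD_AUDIO), micRequestCode
            )
        }
    }

    override fun onRequestPermissionsResult(
        requestCode: Int, permissions: Array<out String>, grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == micRequestCode &&
            grantResults.firstOrNull() == PackageManager.PERMISSION_GRANTED
        ) {
            // Permission granted — safe to start a session now.
        }
    }
}
```

Only start a session after the permission result arrives; starting earlier will leave the SDK without microphone access.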

Basic usage

import android.util.Log
import androidx.lifecycle.ViewModel
import io.xuna_ai.sdk.SessionConfig
import io.xuna_ai.sdk.XunaAIClient

class AgentViewModel : ViewModel() {
    private val client = XunaAIClient()

    fun startConversation() {
        val config = SessionConfig(agentId = "YOUR_AGENT_ID")
        client.startSession(
            config = config,
            onConnect = { conversationId ->
                Log.d("Agent", "Connected: $conversationId")
            },
            onDisconnect = {
                Log.d("Agent", "Disconnected")
            },
            onMessage = { message, role ->
                Log.d("Agent", "$role: $message")
            },
            onError = { error ->
                Log.e("Agent", "Error: $error")
            }
        )
    }

    fun endConversation() {
        client.endSession()
    }
}

Jetpack Compose example

import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.lifecycle.viewmodel.compose.viewModel

@Composable
fun AgentScreen(viewModel: AgentViewModel = viewModel()) {
    var isConnected by remember { mutableStateOf(false) }

    Button(onClick = {
        if (isConnected) {
            viewModel.endConversation()
        } else {
            viewModel.startConversation()
        }
        isConnected = !isConnected
    }) {
        Text(if (isConnected) "End conversation" else "Start conversation")
    }
}

React Native SDK

Use the React Native SDK to build a single codebase that runs on both iOS and Android.

Installation

npm install @xuna-ai/react-native

For iOS, install native dependencies:

npx pod-install

Add the microphone permission to both platform manifests:
  • iOS (Info.plist): NSMicrophoneUsageDescription
  • Android (AndroidManifest.xml): android.permission.RECORD_AUDIO

Basic usage

import React, { useState } from 'react';
import { Button, View } from 'react-native';
import { useConversation } from '@xuna-ai/react-native';

export default function App() {
  const [status, setStatus] = useState<'idle' | 'connected'>('idle');

  const conversation = useConversation({
    onConnect: () => setStatus('connected'),
    onDisconnect: () => setStatus('idle'),
    onError: (error) => console.error(error),
  });

  async function start() {
    await conversation.startSession({ agentId: 'YOUR_AGENT_ID' });
  }

  return (
    <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
      <Button
        title={status === 'connected' ? 'End conversation' : 'Start conversation'}
        onPress={status === 'connected' ? conversation.endSession : start}
      />
    </View>
  );
}

Authenticating private agents

All three SDKs support signed URLs and conversation tokens for private agents. Generate the credential on your server (see Authentication) and pass it to startSession:
// iOS
let config = XunaAISDK.SessionConfig(signedUrl: signedUrl)
// Android
val config = SessionConfig(signedUrl = signedUrl)
// React Native
await conversation.startSession({ signedUrl });
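A common pattern is to fetch the credential from your own backend just before starting the session. The sketch below (React Native / TypeScript) assumes a hypothetical /api/signed-url endpoint returning a JSON body of the shape { signedUrl: string }; adjust both to match your server.

```typescript
// Hypothetical backend endpoint — the path and response shape are
// assumptions; match them to your own server's API.
const SIGNED_URL_ENDPOINT = '/api/signed-url';

// Pull the signed URL out of the server's JSON response,
// failing loudly if the field is missing or empty.
function parseSignedUrl(body: unknown): string {
  const url = (body as { signedUrl?: unknown })?.signedUrl;
  if (typeof url !== 'string' || url.length === 0) {
    throw new Error('Server response did not contain a signedUrl');
  }
  return url;
}

// Fetch a fresh credential just before starting the session;
// signed URLs are typically short-lived, so avoid caching them.
async function getSignedUrl(): Promise<string> {
  const res = await fetch(SIGNED_URL_ENDPOINT);
  if (!res.ok) {
    throw new Error(`Signed URL request failed: ${res.status}`);
  }
  return parseSignedUrl(await res.json());
}
```

The client then passes the result to startSession, e.g. `await conversation.startSession({ signedUrl: await getSignedUrl() })` in React Native, or the equivalent SessionConfig on iOS and Android.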

Next steps

  • Configure your agent’s voice, language, and persona in Configure.
  • For web deployments, use the React SDK or the widget.