# 🤖 Avatar Companion
Bring your AI into the physical world. Interact with your local Companion Agent as an embodied avatar across Unity and WebXR environments.
## Overview
Avatar Companion gives your Companion AI a visual, spatial presence. Instead of interacting with your agent through a text chat interface, you engage with a 3D avatar that lives in a shared virtual space: in your laptop browser (WebXR), in a dedicated app, or on a mixed-reality headset.

The avatar renders locally, queries your Hub's inference stack for responses, and uses the Digital Memory Server for context. All rendering and inference happen on your hardware.
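Because responses come from the Hub's local inference stack (Ollama, per the architecture below), you can verify that stack independently of the avatar. The sketch below builds a request against Ollama's `/api/generate` endpoint, assuming Ollama's default port 11434 and an already-pulled model (the model name is an example):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3", "Say hello in one sentence.")
# Sending it requires a running Ollama instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

If this direct query works but the avatar stays silent, the problem is in the Avatar Companion service rather than the inference stack.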
## Key Features
- **WebXR support**: runs in any WebXR-capable browser with no app install
- **Unity client**: higher-fidelity avatar with advanced animation and lip sync (Quest, Pico, Vision Pro)
- **Voice interaction**: speak to your avatar; it responds with synthesized speech (local Piper TTS)
- **Memory-aware**: the avatar accesses Digital Memory context for personalized, contextual conversations
- **Persistent presence**: the avatar can persist in a dedicated virtual room you return to
- **Customizable appearance**: choose from built-in avatar styles or import custom `.vrm` models
- **Headset platforms**: Quest 3, Pico XR, Vision Pro, Zappar Zapbox
## Supported Platforms
| Platform | Client | Status |
|---|---|---|
| Desktop browser (Chrome, Edge) | WebXR | ✅ |
| Meta Quest 3 | Unity + WebXR | ✅ |
| Pico XR | Unity + WebXR | ✅ |
| Apple Vision Pro | Unity + WebXR | ✅ |
| Zappar Zapbox | WebXR | ✅ |
| iOS/Android (AR) | WebXR | ✅ |
## Architecture
```
WebXR / Unity Client
├── 3D Avatar Renderer
├── WebRTC Audio
└── XR Input Handler
          │ WebSocket (local LAN)
          ▼
Avatar Companion Hub Service
├── Session manager
├── Speech-to-text (Whisper)
├── Agent router ───▶ Ollama / Companion Agent
├── Text-to-speech (Piper)
└── Memory bridge ───▶ Digital Memory Server
```

## Setup

### Install from Hub

Search for **Avatar Companion** in the Hub app store and install.
### Open the WebXR client

Navigate to http://avatar.ci.localhost in a WebXR-capable browser and click **Enter** to launch the experience. No headset is required for desktop use.
### Configure your avatar

In Settings → Avatar, select your preferred avatar appearance. For custom `.vrm` avatars, upload via the avatar manager.
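`.vrm` avatars (and `.glb` environments) use the binary glTF (GLB) container, whose files begin with the ASCII magic `glTF`. If an upload is rejected, a quick offline sanity check (the helper below is a sketch, not part of the Hub tooling) can confirm the file is at least a valid GLB container:

```python
import struct

def looks_like_glb(path: str) -> bool:
    """Check the 12-byte GLB header: magic 'glTF', version, total length."""
    with open(path, "rb") as f:
        header = f.read(12)
    if len(header) < 12:
        return False
    magic, version, _length = struct.unpack("<4sII", header)
    return magic == b"glTF" and version in (1, 2)
```

A file that fails this check was likely exported as JSON glTF (`.gltf`) or corrupted in transfer; re-export it as a binary `.vrm`/`.glb`.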
### Configure voice

In Settings → Voice:

- Select a Whisper model for speech recognition
- Select a Piper voice for TTS
- Test the audio pipeline with Settings → Test Voice
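Piper can also be exercised outside the browser. Assuming the standalone `piper` CLI is installed on the Hub machine, it reads text on stdin and writes a WAV file; the sketch below only builds the command line (the model filename is an example, substitute the voice you selected):

```python
import subprocess

def piper_command(model_path: str, out_wav: str) -> list[str]:
    """Build a Piper CLI invocation that synthesizes stdin text to a WAV file."""
    return ["piper", "--model", model_path, "--output_file", out_wav]

cmd = piper_command("en_US-lessac-medium.onnx", "test.wav")
# Running it requires piper on PATH:
# subprocess.run(cmd, input=b"Hello from your avatar.", check=True)
```

If this produces audible speech but the avatar is silent in the browser, the issue is in the browser audio path (autoplay policy, output device) rather than TTS.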
### (Optional) Install the Unity client for headsets

Download the Unity client app from:

- Quest 3 / Pico: SideQuest or the platform app store
- Vision Pro: TestFlight link (see Hub dashboard → Avatar Companion → Install)
## Usage
### Starting a Session
Open http://avatar.ci.localhost and click Enter Environment. You are placed in a default virtual room with your avatar present and ready to converse.
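Per the Architecture section, the client talks to the Hub service over a local WebSocket. The endpoint path and message schema below are illustrative assumptions (the real protocol is internal to the WebXR and Unity clients), sketched here only to show the shape of a session turn:

```python
import json

def make_turn(session_id: str, text: str) -> str:
    """Encode one user turn as a JSON message (hypothetical schema)."""
    return json.dumps({"session": session_id, "type": "user_text", "text": text})

msg = make_turn("demo", "Hello, avatar!")
# A real client would send this over the service's WebSocket, e.g. with the
# third-party 'websockets' package (endpoint path is an assumption):
#
#   import asyncio, websockets
#   async def chat():
#       async with websockets.connect("ws://avatar.ci.localhost/ws") as ws:
#           await ws.send(msg)
#           print(await ws.recv())
#   asyncio.run(chat())
```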
### Voice Conversation
Click the microphone button (or say the wake phrase) to start talking. The avatar lip-syncs to its spoken response in real time.
### Entering XR Mode
In a WebXR browser or on a headset, click Enter XR to switch to immersive mode. On a headset, use hand tracking or controllers to interact.
### Switching Environments
Navigate to Environments to choose a different virtual room, or import a custom environment as a .glb file.
## Troubleshooting
**Avatar not loading**
Check that the Avatar Companion service is running in Hub. Reload the page and wait for the initial asset load (the first launch downloads avatar assets).
**Voice input not recognized**
Ensure your browser has microphone permission for avatar.ci.localhost, and check that Whisper is running in Hub.
**Avatar lips not syncing**
TTS and lip sync require a compatible Piper voice with phoneme output. Verify that the selected voice supports phoneme data in Settings → Voice.
**Headset connection refused**
Ensure the headset and your Hub are on the same Wi-Fi network. Open http://avatar.ci.localhost in the headset's browser to verify connectivity before launching the Unity app.
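When the headset browser also fails to load the page, a raw TCP check from another machine on the LAN narrows down whether it is a network problem or a service problem. The host and port below are placeholders; substitute your Hub's address and the port the Avatar Companion service listens on:

```python
import socket

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical host/port):
# print(reachable("hub.local", 80))
```

If this returns False from the same network, the problem is routing, firewalling, or client isolation on the Wi-Fi access point, not the headset app.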