# What Is Qui Anima
Qui Anima is the heart of every AI character in QUI. It's where you design, build, and manage the digital personalities and autonomous agents that power the entire ecosystem. Every conversation, every workflow, every tool execution — it all flows through a character built in Anima.
The name comes from the Latin word for "soul." That's deliberate. Anima doesn't just configure a chatbot — it gives your AI a complete identity: how it thinks, what it knows, what it can do, and how it grows over time.
## Why Anima Matters
Most AI platforms give you a text box for a system prompt and call it done. Anima takes a fundamentally different approach.
**Visual, not textual.** You build characters on a canvas — dragging capability nodes, connecting tools, and configuring personality through a visual interface. You can see at a glance what your character can do.

**Modular capabilities.** Every feature is an opt-in node. Terminal access, web browsing, messaging, reasoning strategies, consciousness modeling — nothing is on by default. You add exactly what you need, nothing more. A customer support agent doesn't need terminal access. A code reviewer doesn't need voice synthesis. This keeps characters focused and predictable.

**Consistent everywhere.** Your character behaves the same whether you're chatting in Strings, running a ThinkThing workflow, or receiving an M2M message from another agent. Same personality. Same memory. Same capabilities. Anima is the single source of truth.

**Fail-open by design.** Every extended feature — memory, consciousness, thinking strategies — is resilient. If a downstream service is temporarily unavailable, your character still responds. Conversations never break because an optional enrichment is down.
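The fail-open pattern can be sketched as a small wrapper: attempt the optional enrichment, and fall back to an empty result if the service is down. This is an illustrative sketch, not QUI's actual implementation — the function and service names here are invented.

```python
def fail_open(enrichment, fallback=None):
    """Run an optional enrichment step; never let its failure break the reply.

    `enrichment` is any zero-argument callable (e.g. a memory lookup).
    If it raises, return `fallback` so the conversation continues.
    """
    try:
        return enrichment()
    except Exception:
        # Enrichment service is down or misbehaving: degrade gracefully.
        return fallback

def flaky_memory_lookup():
    # Hypothetical downstream call that happens to be unavailable right now.
    raise ConnectionError("memory service unavailable")

# The character still responds, just without the optional context.
context = fail_open(flaky_memory_lookup, fallback=[])
print(context)  # → []
```

The key design choice is that the fallback is a usable default (an empty context), so callers never need to special-case the failure.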
## What You Can Build
Characters in Anima range from simple to sophisticated:
| What | How | Example |
|---|---|---|
| Conversational AI | Identity + personality + memory | A creative writing partner that remembers your style preferences |
| Digital worker | Focused prompts + specific tools | A code reviewer with terminal and GitHub access |
| Research agent | Web browsing + knowledge base + memory | An analyst that crawls sources, stores findings, and synthesizes reports |
| Autonomous agent | Agentic mode + multi-step workflows | A support agent that diagnoses issues, runs commands, and reports back |
| Self-improving AI | Self-modifying prompts + consciousness | A character that learns from interactions and rewrites its own instructions |
| Multi-agent coordinator | M2M messaging + Spark sub-agents | A project manager that delegates tasks to specialist characters |
| Perception-aware agent | Environmental triggers + Thalamus | A monitor that reacts to file changes, system metrics, or API status |
All of these are built with the same tool: the Visual Builder.
## The Visual Builder
The primary interface for creating characters is the Visual Builder — a canvas-based editor where you design your character visually.
[Screenshot: Visual Builder showing a character with capability nodes orbiting the central Anima node]
It has two sides:
**Canvas (left)** — defines WHO the character IS. A central node represents your character, surrounded by capability nodes you add from the palette. Each node unlocks a feature: memory, terminal access, MCP tools, consciousness modeling, voice, web browsing, and more. If a node isn't on the canvas, the character cannot use that feature.

**Sequencer (right)** — defines WHAT the character DOES automatically. Set up scheduled tasks, webhook responses, and multi-step workflows using the capabilities you've configured on the canvas. The Sequencer can only use tools that are present on the canvas — keeping automation tightly coupled to the character's designed capabilities.

**Key principle:** The canvas defines the character's identity and toolkit. The Sequencer defines its automation behaviour. Together they create a complete cognitive agent.
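The canvas/Sequencer relationship can be sketched as a data model: the canvas is a set of enabled nodes, and the Sequencer refuses any task whose tool isn't in that set. This is a hypothetical model for illustration — `Character`, `enabled_nodes`, and `add_task` are invented names, not QUI's real schema.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """Illustrative model of the canvas/Sequencer split (not the real QUI schema)."""
    name: str
    enabled_nodes: set[str] = field(default_factory=set)       # canvas: WHO it is
    scheduled_tasks: list[dict] = field(default_factory=list)  # sequencer: WHAT it does

    def add_task(self, tool: str, schedule: str) -> None:
        # The Sequencer may only use tools already present on the canvas.
        if tool not in self.enabled_nodes:
            raise ValueError(f"'{tool}' is not on the canvas; add the node first")
        self.scheduled_tasks.append({"tool": tool, "schedule": schedule})

reviewer = Character("code-reviewer", enabled_nodes={"terminal", "mcp_tools"})
reviewer.add_task("terminal", "daily@09:00")   # allowed: node is on the canvas
# reviewer.add_task("voice", "daily@09:00")    # would raise: voice node not added
```

Coupling automation to the canvas this way means a character's scheduled behaviour can never exceed its designed capabilities.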
## Capability Nodes at a Glance
When you add a node to the canvas, you're granting your character access to that capability. Here's what's available:
### Core Intelligence
| Node | What It Does |
|---|---|
| Memory | Semantic memory that persists across conversations — your character remembers context, facts, and patterns |
| Autothink | 14 thinking strategies for deeper reasoning (first principles, red team, systems thinking, and more) |
| Qonscious | Consciousness modeling — coherence, arousal, emotional valence that influence how the character responds |
| FractalMind | Recursive multi-directional thinking for complex analysis |
| Knowledge Base | Upload documents for vector-powered search — your character can reference your files in conversation |
### Tools & Actions
| Node | What It Does |
|---|---|
| Terminal | Execute shell commands with configurable safety presets (read-only through to system admin) |
| MCP Tools | Access 165+ integrations — GitHub, Slack, Telegram, Docker, databases, cloud services, and more |
| Qrawl | AI-native web browsing — search, skim, read, and extract content from the web |
| Voice | Text-to-speech (Kokoro) and speech-to-text (Whisper) for audio interaction |
### Communication & Storage
| Node | What It Does |
|---|---|
| M2M | Send and receive messages to/from other characters — even across federated QUI instances |
| Spark | Spawn background sub-agents that work independently on fire-and-forget tasks |
| Clipboard | Session-scoped temporary storage — quick notes and context that expire |
| Variable Store | Persistent key-value storage across sessions — long-lived character state |
### Automation & Perception
| Node | What It Does |
|---|---|
| Agentic Mode | Multi-step autonomous workflows — the character decides what tools to use, executes them, evaluates results, and iterates |
| Self-Modify | Characters can write, edit, and remove their own system prompt entries as they learn |
| Thalamus | Event routing and scheduled triggers — connect environmental perception to character actions |
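The Agentic Mode loop from the table above — the character decides, executes, evaluates, and iterates — can be sketched as a bounded loop. All names here are illustrative, and the step cap stands in for the safety constraints that prevent runaway execution.

```python
def agentic_loop(goal, decide, execute, evaluate, max_steps=5):
    """Illustrative decide→execute→evaluate loop (not the real QUI engine).

    `decide(goal, history)` picks the next action, or None when done.
    `execute(action)` runs it; `evaluate(result)` returns True when the
    goal is met. `max_steps` caps iterations to prevent runaway execution.
    """
    history = []
    for _ in range(max_steps):
        action = decide(goal, history)
        if action is None:
            break
        result = execute(action)
        history.append((action, result))
        if evaluate(result):
            break
    return history

# Toy run: reach a count of 3 by repeatedly calling an "increment" tool.
state = {"n": 0}
history = agentic_loop(
    goal=3,
    decide=lambda goal, h: "increment" if state["n"] < goal else None,
    execute=lambda a: state.__setitem__("n", state["n"] + 1) or state["n"],
    evaluate=lambda r: r >= 3,
)
print(state["n"])  # → 3
```

The evaluate step is what distinguishes an agentic loop from a fixed script: the character checks its own results and stops once the goal is met.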
## Characters, Digital Workers, and Entities
All three are built in Anima. The difference is purpose, not technology:
| Type | Purpose | Example |
|---|---|---|
| Character | Conversational AI with personality | A creative writing partner, a language tutor, a brainstorming companion |
| Digital Worker | Task-oriented agent with focused tools | A code reviewer, a data analyst, a customer support agent |
| Entity | System-level identity | The User Entity (represents you in the system) |
"Character" is the default term throughout these docs. When you see "character," it means any AI personality built in Anima — whether it's a chatty companion or a focused digital worker.
## How Characters Connect to Everything
Anima is the central hub. When you chat with a character in Strings, send a message through M2M, or run a ThinkThing workflow, the request always passes through Anima to load the character's full context:
```
Your message → Anima loads character (identity + personality + memory + tools)
             → LLM generates response with full character context
             → Response returned to you
```
This means your character behaves consistently everywhere — same personality, same memory, same capabilities — regardless of which app or service initiates the conversation.
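The "single source of truth" idea can be sketched as one loader function that every entry point calls: whichever app originates the request, the same character context comes back. `load_character`, `handle_message`, and the character record below are invented for this sketch, not QUI's actual API.

```python
# Illustrative only: why every app sees the same character.
CHARACTERS = {
    "muse": {
        "identity": "creative writing partner",
        "memory": ["prefers short sentences"],
        "tools": {"knowledge_base"},
    }
}

def load_character(character_id: str) -> dict:
    """Single source of truth: chat, workflows, and M2M would all call this."""
    return CHARACTERS[character_id]

def handle_message(character_id: str, message: str, source: str) -> str:
    ctx = load_character(character_id)  # same context regardless of `source`
    return f"[{ctx['identity']}] reply to {message!r} via {source}"

# Same personality whether the request came from chat or a workflow:
print(handle_message("muse", "hello", source="Strings"))
print(handle_message("muse", "hello", source="ThinkThing"))
```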
## The LLM Call Chain
When a character generates a response, the call flows through a billing-safe chain:
```
Anima (your character's context) → QUI Core (local gateway) → Billing check → LLM Provider
```
You choose from four cloud providers (Anthropic, OpenAI, Google, X) or run local models — the system handles authentication, cost tracking, and token limits transparently. Billing is automatic. You never need to manage API keys or worry about runaway costs.
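The billing-safe ordering of the chain can be sketched as a gateway that checks the budget before forwarding anything to a provider, so no cost is incurred on a refused call. This is a hypothetical sketch: `billing_check`, `gateway_call`, and the word-count token estimate are invented for illustration.

```python
class BillingError(RuntimeError):
    pass

def billing_check(user: dict, estimated_tokens: int) -> None:
    """Refuse the call before any provider cost is incurred."""
    if user["token_budget"] < estimated_tokens:
        raise BillingError("token budget exhausted")

def gateway_call(user: dict, prompt: str, provider) -> str:
    """Illustrative gateway (names invented for this sketch): billing sits
    between the character's context and the LLM provider."""
    estimated = len(prompt.split())   # crude stand-in for a token estimate
    billing_check(user, estimated)    # fail fast: no runaway costs
    user["token_budget"] -= estimated # track spend automatically
    return provider(prompt)

user = {"token_budget": 100}
reply = gateway_call(user, "hello there", provider=lambda p: f"echo: {p}")
print(reply)                 # → echo: hello there
print(user["token_budget"])  # → 98
```

Placing the check inside the gateway, rather than in each app, is what makes the chain billing-safe: every route to a provider passes through the same gate.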
## Explore Qui Anima
### Character Builder
Walkthrough of the Visual Builder interface — the canvas, node palette, hierarchy lines, and how to create your first character from scratch.
### Personality & Identity
Configuring your character's name, description, voice, and system prompts. The difference between admin-authored prompts (immutable) and self-authored prompts. How to write effective system prompts that shape behaviour without over-constraining.
### Tools & Nodes
Deep dive into every capability node — what each one does, how to configure it, and when to use it. The enabled_nodes system that keeps characters focused and secure.
### Knowledge Base
Upload documents for vector-powered semantic search. Supported file types, how knowledge affects responses, reindexing, and best practices for building effective knowledge bases.
### Agentic Mode
Multi-step autonomous workflows where the character decides what tools to use, executes them, evaluates results, and iterates. Spark sub-agents for fire-and-forget background tasks. Safety constraints that prevent runaway execution.
### Self-Modifying Characters
Characters that learn and adapt by rewriting their own instructions. Three tiers of self-modification: chat-based (opportunistic), workflow-based (ThinkThing), and consciousness-based (Qonscious). Rate limiting and safety controls.
### LLM Providers
Choosing between Anthropic, OpenAI, Google, X, and local models via Qllama. How to select a provider per character, understand token limits, and manage costs.