Building Characters in Qui Anima
Create your first AI character in Qui Anima's Visual Builder — identity, personality, memory, and capabilities on a visual canvas.
Objective
By the end of this lesson, you'll understand the Visual Builder interface and create your first AI character with identity, personality, memory, and capabilities.
Opening the Visual Builder
Qui Anima is QUI's character management system. Open it from the QUI Core dashboard (Services tab) or from the system tray.
The Visual Builder has three main areas:
Canvas (center) — your character's configuration space. A central node represents the character, with capability nodes orbiting around it. Each node you add grants the character a new capability.
Node Palette (left) — all available nodes organized into two tabs:
- Basics — Attributes (Definition, Token Limits) and Abilities (M2M, Self-Modify, Spark, Agentic)
- Cognitive Design — Memory, Working Memory, Cortex, Tools (Terminal, MCP), Thalamus, and advanced features
Character Bar (bottom) — your saved characters as clickable tiles. Click a tile to load that character, or click [+] to create a new one.
[Screenshot: Visual Builder interface with canvas, palette, and character bar]
The Core Node — Identity
Every character starts with a core node at the center. Click it to configure:
| Setting | What It Does |
|---|---|
| Name | Display name used in conversations and across the ecosystem |
| Description | Brief summary shown in character lists |
| LLM Provider | Anthropic, OpenAI, Google, X, or Local |
| Model | Specific model (e.g., Claude Sonnet, GPT-4o, Gemini Pro) |
Why provider/model choice matters:
- Different models have different strengths — some excel at analysis, others at creative writing
- Cost varies significantly between models and providers
- You can change the model anytime without losing the character's configuration
Tip: Start with a mid-tier model (Claude Sonnet or GPT-4o). You can always upgrade to a more capable model later if needed.
The Definition Node — Personality
Click Definition in the palette (under Basics → Attributes) to add it to the canvas. This is where you shape how the character behaves.
System Prompts
System prompts are the instructions that define your character's behaviour. A good system prompt covers:
- Role — what the character does ("You are a technical writer specializing in API documentation")
- Behaviour — how it responds ("Be concise. Use code examples. Lead with the answer.")
- Boundaries — what it should not do ("Never make up API endpoints. If unsure, say so.")
- Knowledge — domain context ("The project uses Python with FastAPI")
You can add multiple prompt entries, each with a label for organization:
- "Role & Identity"
- "Coding Standards"
- "Communication Style"
Human vs Self-Authored Prompts
Prompts you write are human-authored — the character can never modify them. This is important: even if you later enable self-modification, your core instructions are immutable.
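One way to picture this guarantee is a record that tags each prompt entry with its author and refuses self-edits to human-authored entries. This is a hypothetical sketch — `PromptEntry` and `apply_self_edit` are invented names for illustration, not Anima's internals:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: fields cannot be reassigned after creation
class PromptEntry:
    label: str
    text: str
    author: str  # "human" or "self"

def apply_self_edit(entries, label, new_text):
    """Replace a self-authored entry; human-authored entries pass through untouched."""
    updated = []
    for e in entries:
        if e.label == label and e.author == "self":
            updated.append(PromptEntry(label, new_text, "self"))
        else:
            updated.append(e)  # human-authored: immutable, kept as-is
    return updated

entries = [
    PromptEntry("Role & Identity", "You are a technical writer.", "human"),
    PromptEntry("Scratch Notes", "Prefer bullet lists.", "self"),
]
# A self-modification attempt against a human-authored entry has no effect.
entries = apply_self_edit(entries, "Role & Identity", "You are a pirate.")
assert entries[0].text == "You are a technical writer."
```

The design point is that immutability is enforced structurally, not by trusting the model to leave your instructions alone.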
Personality Traits
Below the system prompts, set personality traits — descriptive keywords that influence tone:
- "analytical and precise"
- "warm and encouraging"
- "direct, no small talk"
Token Limits — Controlling Costs
Click Token Limits in the palette to add it. Configure:
| Setting | What It Controls | Guidance |
|---|---|---|
| Max Input Tokens | How much context is sent to the LLM | Higher = more memory and knowledge included, more expensive |
| Max Output Tokens | Maximum response length | Set 500 for quick answers, 2000-4000 for detailed analysis |
The billing system also caps output based on your balance — you can't accidentally spend more than you have.
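To see how a balance cap can interact with Max Output Tokens, here is a minimal sketch. The function name, the flat per-1k-token output price, and the idea that only output tokens count against the balance are assumptions for illustration — QUI's actual billing logic may differ:

```python
def cap_output_tokens(max_output, balance_usd, price_per_1k_output):
    """Return the output-token cap actually applied to one request.

    The effective cap is the smaller of the configured limit and what
    the remaining balance can pay for at the given per-1k price.
    """
    affordable = int(balance_usd / price_per_1k_output * 1000)
    return min(max_output, affordable)

# With $0.02 left at $0.015 per 1k output tokens, a 4000-token limit
# is silently reduced to what the balance covers.
print(cap_output_tokens(4000, 0.02, 0.015))  # → 1333
```

This is why the limit is a ceiling, not a guarantee: the cheaper of "what you configured" and "what you can afford" always wins.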
Adding Capabilities — The Node System
This is the key principle of Qui Anima: no node on the canvas = the character cannot use that feature.
Let's add the most commonly needed capability: Memory.
- Click the Cognitive Design tab in the palette
- Click Memory
- The Memory node appears on the canvas, connected to the core node
Your character now has persistent semantic memory. Every conversation is indexed and recalled based on meaning — not just keyword matching.
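Meaning-based recall can be pictured as nearest-neighbour search over embedding vectors. The toy sketch below uses hand-made three-dimensional vectors so it runs without any model; real systems generate embeddings with hundreds of dimensions, and none of this is Anima's actual memory store:

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical direction, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" — in reality these come from an embedding model.
memories = {
    "User prefers Python over JavaScript": [0.9, 0.1, 0.0],
    "User's cat is named Miso":            [0.0, 0.2, 0.9],
    "User is building a FastAPI service":  [0.8, 0.3, 0.1],
}

def recall(query_vec, k=2):
    """Return the k memories whose embeddings lie closest to the query."""
    ranked = sorted(memories.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# A query "about programming" retrieves both coding-related memories and
# skips the cat fact, even with zero keyword overlap.
print(recall([0.85, 0.2, 0.05]))
```

That is the practical difference from keyword search: recall is driven by how close two meanings are in vector space, not by shared words.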
What's Available
The palette contains many more nodes you'll explore throughout this course:
| Category | Nodes | When You'll Use Them |
|---|---|---|
| Core Intelligence | Memory, Autothink, Qonscious, FractalMind, Knowledge Base | Week 1-2 |
| Tools & Actions | Terminal, MCP (165+ integrations), Qrawl (web), Voice | Week 3 |
| Communication | M2M, Spark, Clipboard, Variable Store | Week 3 |
| Automation | Agentic Mode, Self-Modify, Thalamus | Week 4 |
For now, just add Memory. We'll add more in later lessons.
Characters, Digital Workers, and Entities
The Visual Builder creates three types of AI — the difference is purpose, not technology:
| Type | Purpose | Example |
|---|---|---|
| Character | Conversational AI with personality | Writing partner, language tutor |
| Digital Worker | Task-focused agent | Code reviewer, data analyst |
| Entity | System-level identity | Your own identity in QUI (the User Entity) |
"Character" is the default term. When you see it, it means any AI personality built in Anima.
Key Takeaways
- Characters are built visually on a canvas by adding capability nodes from the palette
- Every capability is opt-in — the character only has what you explicitly enable
- System prompts define personality and behaviour; Token Limits control costs
- Memory is the most important first node — it gives your character persistent context
- The same character behaves consistently everywhere — Strings, ThinkThing, M2M
- Start simple: core node + Definition + Memory. Add capabilities as you need them.
Next: Open Strings and start a conversation with the character you just built.