Knowledge Base
Each character can have its own knowledge base — a collection of documents that the character uses to ground its responses in specific information. When you upload documents, they are indexed for semantic search and automatically included as context when relevant to a conversation.
How It Works
- You upload documents to a character's knowledge base
- Documents are split into chunks and indexed using vector embeddings (semantic search)
- When the character receives a message, the system searches the knowledge base for relevant chunks
- Matching content is included in the character's context as a "RELEVANT KNOWLEDGE" section
- The character uses this knowledge to inform its response
This is often called RAG (Retrieval-Augmented Generation) — the character retrieves relevant knowledge before generating a response, rather than relying solely on what the LLM was trained on.
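The retrieval loop above can be sketched in a few lines of Python. This is a toy illustration, not the platform's implementation: the real system uses dense vector embeddings, while this sketch substitutes a bag-of-words vector and cosine similarity, and the `chunk`, `embed`, and `search` helpers are hypothetical names.

```python
import math
import re
from collections import Counter

def chunk(text, size=200):
    """Split a document into fixed-size character chunks (a stand-in
    for the platform's real chunking strategy)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    """Bag-of-words term counts; the real system uses dense embeddings."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(index, query, top_k=3):
    """Rank indexed chunks by similarity to the query."""
    q = embed(query)
    return sorted(index, key=lambda c: cosine(embed(c), q), reverse=True)[:top_k]

# Index a document, then retrieve the chunks relevant to a message.
doc = ("Connection pooling reuses database connections instead of "
       "opening a new one per query.")
index = chunk(doc)
hits = search(index, "how do we manage DB connections?")
```

Even this crude similarity measure matches "DB connections" to the pooling text via the shared token; a real embedding model goes further and matches on meaning rather than shared words.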
Adding Documents
The knowledge base is managed through the Visual Builder. The Knowledge node must be enabled on the character's canvas.
- Add the Knowledge node from the palette (if it isn't already on the canvas)
- Click the Knowledge node on the canvas to open its configuration
- Upload files using the file upload interface
[Screenshot: Knowledge node configuration panel showing uploaded files and search]
Supported File Types
The knowledge base accepts text-based documents. Content is extracted and indexed for semantic search.
Supported formats: .txt, .md, .json, .csv, .py, .js, .ts, .html, .xml, .yaml, .yml, and .pdf. Text-based files are read directly; PDFs have their text extracted before indexing.
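The two-path handling described above (read directly vs. extract first) could be dispatched on file extension like this. The extension set mirrors the list in this doc, but the function and its return values are illustrative, not the platform's API:

```python
from pathlib import Path

# Extensions listed above as read-directly formats (illustrative constant).
TEXT_EXTENSIONS = {".txt", ".md", ".json", ".csv", ".py", ".js", ".ts",
                   ".html", ".xml", ".yaml", ".yml"}

def extraction_mode(filename):
    """Return how a file's text would be obtained before indexing."""
    ext = Path(filename).suffix.lower()
    if ext in TEXT_EXTENSIONS:
        return "read"        # plain text: read the file contents directly
    if ext == ".pdf":
        return "extract"     # PDF: run text extraction before indexing
    return "unsupported"
```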
How Search Works
When a character receives a message, the knowledge base is searched using the user's most recent message as the query. The search uses vector similarity — it finds content that is semantically similar to the question, not just keyword matches.
For example, if your knowledge base contains documentation about "database connection pooling" and the user asks "how do we manage DB connections?", the system will find the relevant content even though the exact words differ.
Results are capped at 4000 characters to keep context manageable. The most relevant chunks are included.
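One plausible way to enforce a character cap while keeping the most relevant material is to take chunks in relevance order until the next chunk would exceed the limit. This helper is a hypothetical sketch of that policy, not the platform's actual truncation logic:

```python
def cap_results(chunks, limit=4000):
    """Join chunks (already sorted most-relevant-first) until adding
    the next one would exceed the character limit."""
    out, used = [], 0
    for c in chunks:
        if used + len(c) > limit:
            break               # stop before overflowing the budget
        out.append(c)
        used += len(c)
    return "\n\n".join(out)

# The second 3000-char chunk would blow the 4000-char budget, so only
# the first (most relevant) chunk is kept.
capped = cap_results(["a" * 3000, "b" * 3000])
```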
When Knowledge Is Used
Knowledge search runs automatically on every message when the Knowledge node is enabled. There is no manual trigger needed.
The search results appear in the character's assembled context between the system prompts and memory sections. The LLM sees:
[Character identity and personality]
[System prompts]
=== RELEVANT KNOWLEDGE ===
[matched document chunks]
[Memory context]
[Tool descriptions]
If no relevant content is found, the knowledge section is simply omitted.
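Assembling the context in the order shown above, with the knowledge section dropped when the search comes back empty, might look like the following. The function name and string layout are assumptions for illustration; only the section ordering and the `=== RELEVANT KNOWLEDGE ===` marker come from this doc:

```python
def assemble_context(identity, system_prompts, knowledge, memory, tools):
    """Build the prompt in the documented order; the knowledge section
    is omitted entirely when nothing relevant was found."""
    parts = [identity, system_prompts]
    if knowledge:
        parts.append("=== RELEVANT KNOWLEDGE ===\n" + knowledge)
    parts += [memory, tools]
    return "\n\n".join(p for p in parts if p)

with_kb = assemble_context("identity", "prompts", "matched chunks",
                           "memory", "tools")
without_kb = assemble_context("identity", "prompts", "",
                              "memory", "tools")
```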
Reindexing
If you update or replace documents, the knowledge base reindexes automatically. Previous embeddings for deleted documents are cleaned up.
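Conceptually, a reindex pass rebuilds entries for current documents and drops index entries whose source document no longer exists. This sketch assumes a simple dict-shaped store (document id to chunk list); the platform's actual storage and embedding cleanup are internal:

```python
def reindex(store, documents):
    """Sync the index with the current document set: remove entries for
    deleted documents, re-chunk new or updated ones.
    `store`: dict of doc_id -> list of chunks (hypothetical shape).
    `documents`: dict of doc_id -> full text."""
    for doc_id in list(store):
        if doc_id not in documents:
            del store[doc_id]   # clean up embeddings for deleted docs
    for doc_id, text in documents.items():
        store[doc_id] = [text[i:i + 200] for i in range(0, len(text), 200)]
    return store

store = {"old_doc": ["stale chunk"], "kept_doc": ["stale chunk"]}
store = reindex(store, {"kept_doc": "updated text"})
```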
Tips
- Keep documents focused. Shorter, topic-specific documents work better than massive catch-all files. The search can match more precisely when content is organized by topic.
- Use for stable reference material. Knowledge bases are ideal for documentation, procedures, product specs, and domain knowledge. For dynamic/conversational context, use Memory instead.
- One knowledge base per character. Each character has its own knowledge base. If multiple characters need the same knowledge, upload the documents to each.
- Knowledge supplements, not replaces. The LLM still has its training knowledge. Your documents add domain-specific context on top.