# FractalMind
FractalMind is a recursive multi-perspective thinking engine. It decomposes complex objectives into a tree of sub-agents, each exploring a different dimension, then synthesizes results — producing deeper analysis than a single LLM call can achieve.
Patent Pending — FractalMind is a proprietary cognitive architecture developed by Qui Intelligent Systems LLC.
## What FractalMind Does
You provide an objective — a question or task that benefits from multi-perspective analysis. FractalMind creates a branching tree of agents, each exploring a different angle of the problem. Sub-agents can spawn further sub-agents, creating a fractal structure. Results are synthesized and returned as a comprehensive analysis.
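The branching structure described above can be sketched in miniature. This is a conceptual illustration only, under the assumption that each agent explores one angle and a parent merges its children's findings; none of the names below come from the actual FractalMind API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentNode:
    """One agent in the fractal tree (hypothetical, for illustration)."""
    angle: str                                   # the perspective this agent explores
    children: list = field(default_factory=list)  # sub-agents spawned for sub-angles

    def explore(self) -> str:
        # Stand-in for this agent's own LLM call.
        return f"findings on '{self.angle}'"

    def synthesize(self) -> str:
        # Depth-first: gather own findings plus each child's synthesis.
        own = self.explore()
        if not self.children:
            return own
        merged = [c.synthesize() for c in self.children]
        # A real engine would merge these with another LLM call;
        # here we simply concatenate to show the data flow.
        return own + " | " + " | ".join(merged)

root = AgentNode("market entry strategy", [
    AgentNode("competitive landscape", [AgentNode("pricing pressure")]),
    AgentNode("regulatory risk"),
])
print(root.synthesize())
```

Each `explore` call stands in for one LLM invocation, which is why the tree's shape directly determines the session's cost.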
Each agent in the tree uses your character's configuration, maintaining the character's identity and personality throughout.
## Using FractalMind
### From Strings
If your character has the FractalMind node enabled, it can launch a session using the trigger syntax. The session runs in the background, and results appear either in the character's next response or in the FractalMind tab.
### From ThinkThing
Use the FractalMind node in a ThinkThing graph. Connect it to an Anima node and configure the objective in the node settings.
### From the Dashboard
The FractalMind tab in the QUI Core dashboard lets you launch sessions directly, monitor progress, and visualize the branching thought tree.
## Temporal Modes
Control how FractalMind balances speed and depth:
| Mode | Best For |
|---|---|
| Fast | Quick insights when time is limited |
| Capped | Balanced analysis within a budget |
| Thorough | Maximum depth and quality (default) |
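One way to picture the trade-off is as a mapping from mode to resource limits. The field names and numbers below are assumptions for illustration, not the actual FractalMind configuration schema; only the three mode names and the "Thorough" default come from the table above.

```python
# Hypothetical limits per temporal mode (illustrative values, not real ones).
TEMPORAL_MODES = {
    "fast":     {"max_depth": 1, "max_agents": 4,  "max_llm_calls": 5},
    "capped":   {"max_depth": 2, "max_agents": 12, "max_llm_calls": 20},
    "thorough": {"max_depth": 4, "max_agents": 40, "max_llm_calls": None},  # None = uncapped
}

def limits_for(mode: str = "thorough") -> dict:
    # "Thorough" is the default mode, per the table above.
    return TEMPORAL_MODES[mode]

print(limits_for("fast"))
```

The point of the sketch: Fast trades depth for latency by capping the tree early, while Thorough leaves the tree free to grow.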
## When to Use FractalMind
**Good for:**
- Complex strategic questions that benefit from multiple perspectives
- Research topics requiring breadth and depth simultaneously
- Decision analysis where you want to explore alternatives and consequences
- Creative problem-solving where unexpected connections matter
**Not needed for:**
- Simple factual questions
- Tasks with clear, single-path solutions
- Time-sensitive queries (even Fast mode takes multiple LLM calls)
**Cost awareness:** Each agent in the tree makes its own LLM call, so a deep tree with many agents means many calls. Use the configuration panel to set budgets and limits appropriate for your use case.
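A quick back-of-the-envelope estimate shows how fast call counts grow. This assumes, purely as a simplification, a complete tree in which every agent spawns the same number of sub-agents; real FractalMind trees are irregular.

```python
def llm_calls(branching: int, depth: int) -> int:
    """Total agents (one LLM call each) in a complete tree with the given
    branching factor and depth, counting the root as depth 0."""
    if branching == 1:
        return depth + 1  # a straight chain of agents
    # Geometric series: 1 + b + b^2 + ... + b^depth
    return (branching ** (depth + 1) - 1) // (branching - 1)

# Three sub-agents per agent, three levels deep: 1 + 3 + 9 + 27 calls.
print(llm_calls(3, 3))  # prints 40
```

Even modest branching factors produce dozens of calls within a few levels, which is why setting a budget before launching a deep session matters.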