Parallel Branches
ThinkThing supports parallel execution — splitting content into multiple branches that process simultaneously, then merging the results back together.
The Split-Merge Pattern
The most common parallel pattern uses Hub nodes to fan out and fan in:
Start → Hub (fan-out) → Branch A (Prompt)    → Hub (fan-in) → End
                      → Branch B (Summarize) ↗
                      → Branch C (Classify)  ↗
- Fan-out Hub — distributes the same input to multiple branches
- Parallel branches — each branch processes the content independently
- Fan-in Hub — collects results from all branches before continuing
All branches execute simultaneously. The fan-in Hub waits until all incoming branches have completed before releasing the combined output.
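The split-merge pattern can be modeled outside ThinkThing with Python's asyncio. This is a minimal sketch, not the tool's actual implementation: the three branch functions are hypothetical stand-ins for Prompt, Summarize, and Classify nodes, and `asyncio.gather` plays the role of both Hubs.

```python
import asyncio

# Hypothetical branch processors standing in for Prompt, Summarize,
# and Classify nodes; each receives the same input independently.
async def prompt(text: str) -> str:
    return f"prompt({text})"

async def summarize(text: str) -> str:
    return f"summary({text})"

async def classify(text: str) -> str:
    return f"label({text})"

async def split_merge(text: str) -> list[str]:
    # Fan-out: every branch gets the same input.
    branches = [prompt(text), summarize(text), classify(text)]
    # Fan-in: gather() waits until ALL branches complete,
    # then releases the combined output in branch order.
    return await asyncio.gather(*branches)

results = asyncio.run(split_merge("hello"))
```

Because `gather` preserves the order of its arguments, the merged output lines up with the branch order regardless of which branch finishes first.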
When to Use Parallel Branches
Multi-perspective analysis — send the same content through different analysis nodes (Evaluate, Critic, Inspector) and compare results.
Parallel tool execution — run multiple terminal commands or MCP calls simultaneously instead of sequentially.
Multi-agent collaboration — use Broadcast/Collect/Consensus nodes to have multiple characters work on the same problem and converge on an answer.
Efficiency — long-running operations (API calls, LLM reasoning, terminal commands) can run in parallel to reduce total execution time.
Hub Node Modes
The Hub node operates in two modes depending on its connections:
Fan-Out (One Input, Multiple Outputs)
The Hub receives content from one source and distributes it to all connected output edges. Each downstream branch receives the same input.
Fan-In (Multiple Inputs, One Output)
The Hub receives content from multiple branches and waits for all of them to complete. Once all inputs have arrived, it combines them and passes the combined output to the next node.
Combine vs Hub
Both merge content, but they work differently:
| Node | Behavior |
|---|---|
| Hub (fan-in) | Waits for ALL incoming branches, then releases combined output |
| Combine | Concatenates available inputs immediately using a configurable delimiter |
Use Hub when you need to wait for all parallel branches. Use Combine when you want to merge whatever is available.
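The behavioral difference can be sketched with asyncio as well. The semantics of Combine here are an assumption (join whatever has finished within a short window, using a configurable delimiter); the branch names and the `window` parameter are illustrative, not part of ThinkThing's API.

```python
import asyncio

async def fast_branch():
    return "fast"

async def slow_branch():
    await asyncio.sleep(0.05)
    return "slow"

async def hub_fan_in():
    # Hub (fan-in): waits for ALL incoming branches before releasing output.
    return list(await asyncio.gather(fast_branch(), slow_branch()))

async def combine_available(delimiter=" | ", window=0.01):
    # Combine (assumed semantics): joins whatever inputs have arrived,
    # using a configurable delimiter, without waiting for stragglers.
    tasks = [asyncio.create_task(fast_branch()),
             asyncio.create_task(slow_branch())]
    done, pending = await asyncio.wait(tasks, timeout=window)
    for task in pending:
        task.cancel()
    await asyncio.gather(*pending, return_exceptions=True)
    return delimiter.join(task.result() for task in done)

hub_result = asyncio.run(hub_fan_in())   # both branches present
combined = asyncio.run(combine_available())  # only the fast branch
```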
Designing Parallel Workflows
Keep Branches Independent
Each parallel branch should process content independently. Avoid creating dependencies between parallel branches — if Branch B needs the output of Branch A, they should be sequential, not parallel.
Consider Token Costs
Each branch with a cognition node makes its own LLM call. Three parallel branches = three LLM calls. Design your parallelism with cost in mind.
Error Handling in Parallel
If one branch fails while others succeed:
- The fan-in Hub waits for all branches, including failed ones
- Failed branches produce an error output
- The workflow continues with whatever results are available
- You can add Gate or Control nodes after the merge to handle partial failures
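The partial-failure behavior above maps onto `asyncio.gather(..., return_exceptions=True)`: the merge still waits for every branch, failed branches surface as error values rather than crashing the run, and a post-merge step can filter them. This is an illustrative sketch, and the filtering step only approximates what a Gate or Control node would do.

```python
import asyncio

async def ok_branch():
    return "ok"

async def failing_branch():
    raise RuntimeError("branch failed")

async def fan_in_with_errors():
    # The fan-in waits for all branches; failed ones yield an
    # error output instead of aborting the whole workflow.
    results = await asyncio.gather(
        ok_branch(), failing_branch(), return_exceptions=True
    )
    # A Gate/Control-style step after the merge separates
    # usable results from partial failures.
    successes = [r for r in results if not isinstance(r, Exception)]
    errors = [r for r in results if isinstance(r, Exception)]
    return successes, errors

successes, errors = asyncio.run(fan_in_with_errors())
```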
Nesting
Parallel branches can be nested — a branch can itself split into sub-branches. Keep nesting shallow (2 levels maximum recommended) to maintain readability and debuggability.
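One level of nesting looks like a split inside a branch. In the asyncio analogy, that is simply a `gather` inside a `gather`; the branch names below are illustrative.

```python
import asyncio

async def leaf(name: str) -> str:
    return name

async def sub_split(prefix: str):
    # A branch that itself fans out into two sub-branches (one nested level).
    return await asyncio.gather(leaf(prefix + ".1"), leaf(prefix + ".2"))

async def top():
    # Top-level split: branch A is flat, branch B nests a second split.
    return await asyncio.gather(leaf("A"), sub_split("B"))

result = asyncio.run(top())  # ["A", ["B.1", "B.2"]]
```

The nested result structure mirrors the nested topology, which is also why deep nesting becomes hard to read and debug.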
Multi-Agent Pattern
For multi-character collaboration, use the dedicated Multi-Agent nodes:
Start → Broadcast → [Character A responds] → Collect → Consensus → End
                  → [Character B responds] ↗
                  → [Character C responds] ↗
- Broadcast — sends the same prompt to multiple characters simultaneously
- Each character generates its response independently
- Collect — gathers all responses
- Consensus — analyzes the collected responses to find agreement, disagreement, or synthesize a final answer
This pattern is powerful for tasks where multiple perspectives improve quality — code review, research, decision making, or brainstorming.
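The Broadcast/Collect/Consensus flow can be sketched in the same asyncio style. Everything here is a stand-in: the character function, the canned answers, and the majority-vote consensus are assumptions for illustration; a real Consensus node would analyze the responses with an LLM rather than count votes.

```python
import asyncio
from collections import Counter

# Hypothetical characters: each answers the same prompt independently.
async def character(name: str, prompt: str) -> str:
    canned = {"alice": "yes", "bob": "yes", "carol": "no"}
    return canned[name]

async def broadcast_collect_consensus(prompt: str, names: list[str]):
    # Broadcast: the same prompt goes to every character simultaneously.
    # Collect: gather all responses once every character has answered.
    responses = await asyncio.gather(*(character(n, prompt) for n in names))
    # Consensus (simplified here as a majority vote over the responses).
    winner, _ = Counter(responses).most_common(1)[0]
    return responses, winner

responses, winner = asyncio.run(
    broadcast_collect_consensus("Ship it?", ["alice", "bob", "carol"])
)
```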