Node Reference

ThinkThing has over 143 node types organized into categories. This reference covers each category and its key nodes. Nodes marked with (requires Anima) need an Anima character node connected to function.


Anima

| Node | Purpose |
| --- | --- |
| Anima | AI character connection — links the graph to a specific character for all LLM-powered nodes. Optionally enables agentic mode for autonomous multi-step tool use within individual node executions. |

Every graph that uses cognition, terminal, or MCP nodes needs at least one Anima node.


Control Flow

These nodes manage the structure and flow of your graph. No LLM calls — pure logic.

| Node | Purpose |
| --- | --- |
| Start | Entry point. Every graph begins here. |
| Timer Start | Delayed start with debounce/rate limiting. |
| Schedule Start | Cron-based scheduler — run graphs on a schedule. |
| Webhook | HTTP trigger — start a graph via an external POST request. |
| End | Termination point. Content reaching End is the graph's output. |
| Hub | Fan-in (merge multiple paths) or fan-out (distribute to multiple paths). |
| Control | Human checkpoint — pauses execution for approve, revise, or reject. |
| Fallback | Entry point for revised content when a Control node rejects. |
| Combine | Concatenate multiple inputs into a single output. |
| Split | Break input into parts using a delimiter. |
| Buffer | Collect N items before releasing them all at once. |
| Delay | Hold input for a specified number of seconds. |
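
The data-shaping nodes have simple, deterministic semantics. A minimal Python sketch of how Combine, Split, and Buffer behave — illustrative only, not ThinkThing's actual implementation:

```python
from typing import Iterable, Iterator

def combine(inputs: Iterable[str], sep: str = "\n") -> str:
    # Combine: concatenate multiple inputs into a single output.
    return sep.join(inputs)

def split(content: str, delimiter: str) -> list[str]:
    # Split: break input into parts using a delimiter.
    return content.split(delimiter)

def buffer(items: Iterable[str], n: int) -> Iterator[list[str]]:
    # Buffer: collect N items, then release them all at once.
    batch: list[str] = []
    for item in items:
        batch.append(item)
        if len(batch) == n:
            yield batch
            batch = []
```

For example, `split("a,b,c", ",")` yields three parts, and `buffer(..., n=2)` holds items back until two have arrived.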

Cognition (requires Anima)

Standard LLM-powered processing nodes. Each sends content to the connected character's LLM and returns the result.

| Node | Purpose |
| --- | --- |
| Gate | Binary yes/no decision — routes content to one of two paths. |
| Choice | Multi-path routing — LLM selects from multiple options. |
| Loop | Loop end — LLM decides whether to continue iterating or exit. |
| LoopStart | Loop entry point (pairs with Loop node). Does not require Anima. |
| Prompt | Custom LLM prompt — the most flexible node. You define the task. |
| Summarize | Combine and condense multiple inputs. |
| Compare | Find similarities and differences between inputs. |
| Extract | Pull specific information from content (names, dates, facts). |
| Classify | Categorize content into predefined labels. |
| Rewrite | Rephrase content in a different style or format. |
| Joke | Generate humor based on input content. |
| Translate | Language translation. |

Advanced Cognition (requires Anima)

Higher-level reasoning nodes that apply structured thinking to content.

| Node | Purpose |
| --- | --- |
| Autothink | Apply one of 14 thinking strategies to analyze content. |
| Decision | Weighted decision making — evaluate options against criteria. |
| Evaluate | Score content against specified criteria (grading, quality assessment). |
| Reason | Self-prompting logical reasoning — the LLM builds its own chain of thought. |
| Inspector | Multi-perspective analysis — examines content from different viewpoints. |
| Brainstorm | Generate multiple ideas iteratively, building on previous suggestions. |
| Plan | Create detailed step-by-step plans from a goal. |
| Critic | Critical analysis — identify weaknesses, suggest improvements. |

Super Cognition (requires Anima)

Specialized cognitive nodes that connect to advanced services.

| Node | Purpose |
| --- | --- |
| FractalMind | Recursive multi-directional thinking via the FractalMind service. |
| Qleph Dictionary | Fetch the Qleph relational micro-language dictionary for LLM context. |
| Qleph Engine | Validate, evaluate, and invert Qleph expressions. |

Terminal (24 types, requires Anima)

Execute shell commands on the host machine. The LLM decides what commands to run based on your task description, executes them through an agentic loop, and analyzes the results.

Key nodes:

| Node | Purpose |
| --- | --- |
| User Terminal | Direct command execution — no LLM, you type the command. |
| Agent Terminal | LLM-driven terminal — describe a task, the character executes commands. |
| Terminal Approval | Human-in-the-loop gate for command review before execution. |

21 specialized terminal nodes for specific domains: Git, Python, Node.js, Rust, Go, Make, Java, System Info, File Operations, Text Processing, Processes, Network, SSH, Docker, Kubernetes, Podman, PostgreSQL, MySQL, Redis, SQLite, MongoDB.

Each specialized node scopes the LLM's system prompt to that domain (e.g., "This is a git terminal" or "This is a PostgreSQL terminal") for more focused command generation.
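
The agentic loop described above can be sketched roughly as follows. This is an assumption-laden illustration, not ThinkThing's implementation: `llm` and the transcript shape are hypothetical stand-ins, and the domain-scoped system prompt is what distinguishes the specialized nodes:

```python
import subprocess

def agent_terminal(task: str, llm, domain: str = "shell", max_steps: int = 5) -> str:
    # Specialized terminal nodes scope the system prompt to one
    # domain, e.g. "This is a git terminal".
    system = f"This is a {domain} terminal. Propose one command per step."
    transcript: list[tuple[str, str]] = []
    for _ in range(max_steps):
        # The LLM proposes the next command (or DONE) given results so far.
        command = llm(system, task, transcript)
        if command == "DONE":
            break
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        transcript.append((command, result.stdout + result.stderr))
    # Final pass: the LLM analyzes the accumulated command output.
    return llm(system, f"Summarize the outcome of: {task}", transcript)
```

A Terminal Approval node would sit between the proposal and the `subprocess.run` call, pausing for human review.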


Memory

| Node | Purpose |
| --- | --- |
| Memory | Store content to long-term semantic memory. |
| Variables | Working memory — set and get variables within the current execution. |
| Knowledge | Document repository — content is injected into the Anima character's context. |
| Recall | Retrieve memories by semantic search. |
| Learn | The LLM modifies its own persistent instructions (self-modification via an agentic loop). |
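
The distinction between Variables (per-execution working memory) and Memory/Recall (persistent store queried by similarity) can be sketched like this. Word-overlap scoring stands in for real embedding-based semantic search; none of these classes are ThinkThing APIs:

```python
class WorkingMemory:
    # Variables node: set/get scoped to the current execution.
    def __init__(self) -> None:
        self._vars: dict[str, str] = {}

    def set(self, key: str, value: str) -> None:
        self._vars[key] = value

    def get(self, key: str, default: str = "") -> str:
        return self._vars.get(key, default)

class SemanticStore:
    # Memory/Recall nodes: persistent store queried by similarity.
    # Word overlap is a toy stand-in for embedding search.
    def __init__(self) -> None:
        self._memories: list[str] = []

    def store(self, content: str) -> None:
        self._memories.append(content)

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = set(query.lower().split())
        scored = sorted(self._memories,
                        key=lambda m: len(q & set(m.lower().split())),
                        reverse=True)
        return scored[:k]
```

Working memory is discarded when the execution ends; the semantic store persists across runs.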

Perception

Nodes that monitor external conditions and trigger workflows.

| Node | Purpose |
| --- | --- |
| File Watcher | Watch file/directory changes. |
| URL Monitor | Poll a URL for content changes. |
| API Poller | Poll an API endpoint for status changes. |
| System Monitor | CPU/RAM/disk threshold alerts. |
| Observer | General observation node. |
| QRawl Search/Skim/Read/Focus | Web search and content extraction via Qrawl. |

Output

| Node | Purpose |
| --- | --- |
| Display | Show content in the execution monitor output. |
| File Writer | Write content to a file on disk. |

Integration

| Node | Purpose |
| --- | --- |
| Agent (M2M) | Send messages to other characters via M2M messaging. |
| API | Make HTTP requests to external APIs. |
| Speak | Text-to-speech output via Voice Service. |
| Listen | Speech-to-text input via Voice Service. |

MCP Service Nodes (37 types, requires Anima)

Dedicated nodes for specific MCP tools, organized by category:

| Category | Services |
| --- | --- |
| Dev | GitHub, Git, Playwright, Filesystem, Fetch, Sentry, Datadog |
| Cloud | Docker, Kubernetes, AWS, GCP, Azure |
| Data | Redis, MongoDB, Elasticsearch, SQLite, MySQL, PostgreSQL, Google Drive |
| Comms | Telegram, WhatsApp, Email, Slack, Discord, Calendar |
| Productivity | Trello, Notion, Jira, Linear |
| Security | 1Password, Vault |
| Cognition | Sequential Thinking, Memory Knowledge Graph |
| Search | DuckDuckGo, Brave |
| Utility | Time |

Each MCP node is pre-configured with the service's available tools. Add the node, configure credentials, and connect to an Anima character.


Multi-Agent

| Node | Purpose |
| --- | --- |
| Broadcast | Send content to multiple characters simultaneously. |
| Collect | Gather responses from multiple characters. |
| Consensus | Analyze collected responses and find agreement or disagreement. |
| Delegate | Assign a specific task to a specific character. |
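
A Broadcast → Collect → Consensus chain can be sketched as below. Characters are modeled as plain callables and majority voting stands in for the Consensus node's LLM-based agreement analysis — all names here are illustrative, not ThinkThing APIs:

```python
from collections import Counter
from typing import Callable

def broadcast(characters: dict[str, Callable[[str], str]], prompt: str) -> dict[str, str]:
    # Broadcast + Collect: send one prompt to every character,
    # gather their responses keyed by character name.
    return {name: ask(prompt) for name, ask in characters.items()}

def consensus(responses: dict[str, str]) -> tuple[str, bool]:
    # Consensus: report the majority answer and whether it was unanimous.
    counts = Counter(responses.values())
    answer, _ = counts.most_common(1)[0]
    return answer, len(counts) == 1
```

In the real nodes the Consensus step is LLM-powered, so "agreement" can be semantic rather than an exact string match as in this sketch.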

Qonscious (14 nodes)

Consciousness state manipulation within workflows.

- LLM-powered (analyze content): Q Gate, Q Watcher, Q Insight, Q Observer
- Data-driven (no LLM): Q Profiler, Q Compress, Q Coherence
- State mutation tools: Q Inject, Q Decay, Q Emote, Q Learn, Q Shift, Q Reset, Q Watcher Reset

These nodes interact with the Qonscious service to read, modify, and respond to a character's consciousness state during workflow execution.

Updated on Mar 21, 2026