
Running Graphs

ThinkThing graphs execute in real time with step-by-step monitoring. You can watch content flow through each node, pause at checkpoints for human review, and control execution at any point.


Starting Execution

There are several ways to start a graph:

| Method | How |
| --- | --- |
| Manual | Click the Run button in the ThinkThing interface |
| Webhook | Send an HTTP POST to the graph's webhook endpoint |
| Schedule | Use a Schedule Start node with a cron expression |
| Timer | Use a Timer Start node for delayed/debounced execution |
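The Schedule Start node takes a standard five-field cron expression (minute, hour, day of month, month, weekday). As a refresher on cron semantics only, here is a minimal matcher sketch; this is illustrative code, not part of ThinkThing, and it supports just `*`, step values, and comma lists.

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    """Return True if a single cron field matches the given value."""
    if field == "*":
        return True
    if field.startswith("*/"):            # step values, e.g. */15
        return value % int(field[2:]) == 0
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr: str, dt: datetime) -> bool:
    """Check a 5-field cron expression against a datetime.
    Sunday is 0 for the weekday field, matching common cron convention."""
    minute, hour, day, month, weekday = expr.split()
    return (_field_matches(minute, dt.minute)
            and _field_matches(hour, dt.hour)
            and _field_matches(day, dt.day)
            and _field_matches(month, dt.month)
            and _field_matches(weekday, (dt.weekday() + 1) % 7))

# "Every day at 09:30"
print(cron_matches("30 9 * * *", datetime(2026, 3, 21, 9, 30)))  # True
```

A schedule like `*/15 * * * *` would fire every 15 minutes; `0 9 * * 1` would fire Mondays at 09:00.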

Execution States

As a graph runs, it moves through states:

| State | What's Happening |
| --- | --- |
| Pending | Created but not yet started |
| Running | Actively executing nodes |
| Paused | Manually paused by you |
| Waiting for Input | At a Control node, waiting for your decision |
| Completed | Finished successfully: content reached the End node |
| Failed | An error occurred during execution |
| Cancelled | You manually stopped the execution |
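The states above form a small state machine. This sketch models the transitions implied by the table; the state names mirror the docs, but the exact transition map is an assumption, not the ThinkThing implementation.

```python
# Which target states each state may move to. Completed, Failed, and
# Cancelled are terminal: no further transitions are possible.
VALID_TRANSITIONS = {
    "pending":           {"running", "cancelled"},
    "running":           {"paused", "waiting_for_input", "completed",
                          "failed", "cancelled"},
    "paused":            {"running", "cancelled"},
    "waiting_for_input": {"running", "cancelled"},
    "completed":         set(),
    "failed":            set(),
    "cancelled":         set(),
}

def transition(current: str, target: str) -> str:
    """Move to the target state, or raise if the transition is invalid."""
    if target not in VALID_TRANSITIONS[current]:
        raise ValueError(f"cannot go from {current} to {target}")
    return target

# A typical run that pauses at a Control node, then finishes
state = "pending"
for nxt in ("running", "waiting_for_input", "running", "completed"):
    state = transition(state, nxt)
print(state)  # completed
```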

The Execution Monitor

When a graph runs, the execution monitor panel opens on the right side of the canvas. It shows real-time progress:

[Screenshot: Execution monitor showing nodes completing with content previews]

For each node, you can see:

  • Status indicator — pending, running, completed, or failed
  • Input content — what the node received
  • Output content — what the node produced
  • Duration — how long execution took
  • LLM metadata — model used, tokens consumed, thinking steps (for cognition nodes)

The monitor updates in real time via WebSocket, so you see results as they happen, not after the graph finishes.


Human-in-the-Loop Checkpoints

The Control node pauses execution and presents you with three options:

| Action | Effect |
| --- | --- |
| Approve | Content passes through unchanged; execution continues to the next node |
| Revise | You edit the content before it continues; modified content flows to the next node |
| Reject | Content is sent to the Fallback node (if connected) for an alternative path |

This is essential for workflows where you want AI to draft but a human to verify — content moderation, code review, approval chains, or any process where quality gates matter.


Controlling Execution

During a running execution, you can:

  • Pause — temporarily stop execution. Resume at any time from where it left off.
  • Resume — continue a paused execution.
  • Cancel — stop execution entirely. Cannot be resumed.

Error Handling

When a node fails:

  1. The execution monitor shows the failed node in red with the error message
  2. Execution stops at the failed node
  3. You can investigate the error, fix the issue, and re-run the graph

Common failure causes:

  • Anima character not connected to a cognition node
  • LLM provider temporarily unavailable
  • Terminal command execution error
  • MCP tool credentials expired
  • Timeout exceeded (default 300 seconds per graph)
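Some of these causes (a temporarily unavailable LLM provider, expired credentials after refresh) are transient, so re-running the graph often succeeds. If you trigger graphs programmatically, a simple retry with exponential backoff is a reasonable pattern; this sketch is generic Python, with the error class and delays as assumptions.

```python
import time

class TransientError(Exception):
    """Stand-in for a transient failure, e.g. provider unavailable."""

def run_with_retry(run_graph, attempts: int = 3, base_delay: float = 1.0):
    """Re-run a graph on transient errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            return run_graph()
        except TransientError:
            if attempt == attempts - 1:
                raise                      # out of attempts, surface the error
            time.sleep(base_delay * 2 ** attempt)

# Simulated graph run that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("provider unavailable")
    return "completed"

print(run_with_retry(flaky, base_delay=0.01))  # completed
```

Persistent causes (a disconnected Anima character, a misconfigured node) need a fix in the graph itself; retrying will not help.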

Agentic Execution

When the Anima node has agentic mode enabled, individual cognition and terminal nodes can internally cycle multiple times:

  1. The node makes its LLM call
  2. If the response contains tool triggers ([TRIGGER:terminal:command], [TRIGGER:mcp:...], etc.), the tools are executed and the results are fed back to the LLM
  3. The LLM responds again; this repeats until no triggers remain, the step limit is reached, or the token budget is exhausted
  4. The final output is passed to the next node in the graph

From the graph's perspective, it's one node execution. Inside that node, the agentic loop may have performed multiple LLM calls and tool executions autonomously.

Agentic settings (configured on the Anima node):

| Setting | Default | Description |
| --- | --- | --- |
| Max Steps | 3 | Maximum iterations per node |
| Token Budget | 50,000 | Total tokens across all steps within a node |
| Timeout | 120s | Maximum time for the entire loop within a node |
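The agentic loop can be sketched like this. The `[TRIGGER:kind:arg]` syntax comes from the docs above; the function names and the fake LLM/tool callables are illustrative assumptions, not ThinkThing internals.

```python
import re

TRIGGER_RE = re.compile(r"\[TRIGGER:(\w+):([^\]]+)\]")

def agentic_step(call_llm, run_tool, prompt,
                 max_steps: int = 3, token_budget: int = 50_000) -> str:
    """Run one node's agentic loop: call the LLM, execute any tool
    triggers in its response, feed results back, and repeat until no
    triggers remain or a limit is hit."""
    tokens_used = 0
    response = ""
    for _ in range(max_steps):
        response, tokens = call_llm(prompt)
        tokens_used += tokens
        triggers = TRIGGER_RE.findall(response)
        if not triggers or tokens_used >= token_budget:
            break                          # no more tool calls, or budget spent
        results = [run_tool(kind, arg) for kind, arg in triggers]
        prompt = response + "\nTool results: " + "; ".join(results)
    return response                        # final output for the next node

# Fake LLM: first reply requests a tool, second reply is final
replies = iter([("Checking files [TRIGGER:terminal:ls]", 20), ("Done.", 10)])
out = agentic_step(lambda p: next(replies), lambda kind, arg: f"{kind} ok", "start")
print(out)  # Done.
```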

Webhooks

Graphs with a Webhook node can be triggered externally via HTTP POST:

POST /api/execution/webhook/{graph_id}

The webhook payload becomes the input content for the first node after the webhook trigger. This enables integration with external systems — CI/CD pipelines, monitoring alerts, form submissions, or any system that can send HTTP requests.
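A trigger call can be built with nothing but the standard library. The endpoint path comes from the docs above; the host, graph ID, and payload shape are illustrative assumptions, so check your graph's webhook configuration for the exact values.

```python
import json
import urllib.request

graph_id = "my-graph-id"                   # placeholder, use your graph's ID
payload = json.dumps({"content": "New form submission received"}).encode()

req = urllib.request.Request(
    f"https://your-thinkthing-host/api/execution/webhook/{graph_id}",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to send the request against a real instance:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
```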

Updated on Mar 21, 2026