What Is ThinkThing


ThinkThing is a visual workflow builder for cognitive architectures. You design AI processing pipelines by dragging nodes onto a canvas, connecting them with edges, and watching them execute in real time. Each node performs a specific function — from simple text transformation to multi-step LLM reasoning, terminal command execution, web browsing, multi-agent coordination, and consciousness state manipulation.

Think of it as a visual programming environment where the building blocks are AI operations instead of code statements. No coding required — but the depth is there if you want it.


Why ThinkThing Matters

Most AI tools give you a prompt box and a response. ThinkThing lets you orchestrate how your AI thinks — not just what it says.

Visual, not scripted. You see the entire reasoning pipeline laid out on a canvas. Nodes light up as they execute. Content flows visibly from one step to the next. You can watch an AI character analyze, decide, branch, loop, consult other characters, and produce final output — all in real time.

143+ node types. This isn't a toy with 5 blocks. ThinkThing has nodes for cognition (prompts, classification, extraction, comparison), advanced reasoning (autothink, brainstorming, planning, critique), control flow (gates, choices, loops, delays, buffers), terminal execution (24 specialized terminal types), MCP tool integration (37 service nodes), multi-agent coordination (broadcast, collect, consensus, delegate), consciousness modeling (14 Qonscious nodes), perception (file watchers, URL monitors, system metrics, webcam, microphone), and more.

Human-in-the-loop. Control nodes let you pause execution for human approval. Review what the AI is about to do, approve it, revise the input, or reject the action entirely. Critical for workflows where AI makes consequential decisions.
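The approve / revise / reject checkpoint can be pictured as a function that blocks until a decision arrives. This is an illustrative Python sketch, not ThinkThing's actual API; `ControlDecision` and the callback shape are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ControlDecision:
    action: str                          # "approve", "revise", or "reject" (assumed names)
    revised_input: Optional[str] = None  # only used for "revise"

def control_checkpoint(pending: str, decide: Callable[[str], ControlDecision]) -> str:
    """Pause the workflow until a human decision arrives.

    In the real UI the decision comes from the execution monitor's
    approve/revise/reject buttons; here it is a plain callback.
    """
    decision = decide(pending)
    if decision.action == "approve":
        return pending                               # pass content through unchanged
    if decision.action == "revise":
        return decision.revised_input or pending     # substitute the human's edit
    raise RuntimeError("execution rejected at control checkpoint")

# Example decisions: one pass-through, one human rewrite.
approved = control_checkpoint("deploy to staging",
                              lambda p: ControlDecision("approve"))
revised = control_checkpoint("deploy to prod",
                             lambda p: ControlDecision("revise", "deploy to staging"))
```

The key design point is that downstream nodes only ever see the post-decision content, so a rejection stops the branch before anything consequential runs.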

Agentic mode per node. When an Anima character has agentic mode enabled, any cognition node connected to it can internally loop: make an LLM call, detect tool triggers, execute them, feed results back, and iterate — all within a single node execution. The graph sees one step, but inside that step the AI may have autonomously researched, executed commands, and refined its output multiple times.
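The internal loop can be sketched roughly like this. Hedged Python sketch: the `[TOOL:name:arg]` trigger syntax, the `llm` callable, and the `tools` mapping are assumptions for illustration, not ThinkThing's real tool-trigger format.

```python
import re

TOOL_RE = re.compile(r"\[TOOL:(\w+):([^\]]*)\]")  # assumed trigger syntax

def run_agentic_node(prompt, llm, tools, max_iters=5):
    """One node execution that may loop internally.

    llm(text) -> str is the model call; tools maps trigger names to
    callables. The graph sees a single step; this loop hides inside it.
    """
    context = prompt
    reply = ""
    for _ in range(max_iters):
        reply = llm(context)
        triggers = TOOL_RE.findall(reply)
        if not triggers:
            return reply                              # no tool calls: node is done
        for name, arg in triggers:
            result = tools[name](arg)                 # execute the triggered tool
            context += f"\n[{name} result] {result}"  # feed the result back
    return reply                                      # iteration cap reached

# Fake model: asks for one search, then answers from the fed-back result.
def fake_llm(ctx):
    return "[TOOL:search:thinkthing]" if "result]" not in ctx else "done"

output = run_agentic_node("research this", fake_llm,
                          {"search": lambda q: f"hits for {q}"})
```

The `max_iters` cap is the important safeguard: without it, a model that keeps emitting triggers would loop inside the node indefinitely.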


What You Can Build

Workflow                        | How It Works                                                                   | Nodes Used
--------------------------------|--------------------------------------------------------------------------------|-------------------------------------------------------
Prompt chains                   | Sequence multiple LLM calls where each step refines the output                 | Start → Prompt → Rewrite → Summarize → End
Decision trees                  | Route content through different paths based on AI analysis                     | Gate (yes/no), Choice (multi-path), Evaluate (pass/fail)
Research pipelines              | Search the web, extract findings, classify, summarize                          | Qrawl Search → Extract → Classify → Summarize → Display
Code review agents              | Analyse code, run tests, critique, suggest fixes                               | Agent Terminal → Inspector → Critic → Plan → Display
Multi-agent debate              | Broadcast a question to multiple characters, collect responses, reach consensus | Broadcast → Collect → Consensus → End
Scheduled reports               | Run on cron, gather data, generate summaries, write to file                    | Schedule Start → API Poller → Summarize → File Writer
Human-gated deployment          | AI prepares changes, human approves before execution                           | Prompt → Control (approve/reject) → Agent Terminal → End
Self-improving workflows        | Character analyses its performance and rewrites its own instructions           | Evaluate → Learn → Q Insight → End
Consciousness-aware processing  | Route content based on emotional coherence state                               | Q Gate → (calm path / chaotic path) → Q Shift → End

Interface Layout

[Screenshot: ThinkThing canvas showing a graph with connected nodes and the execution monitor]

The ThinkThing interface has four areas:

Canvas (centre) — the main workspace where you place and connect nodes. Drag from the palette, position freely, connect output handles to input handles with edges. Content flows along edges during execution.

Node palette (left) — all available node types, organized into 14 colour-coded categories. Drag a node from the palette onto the canvas to add it.

Execution monitor (right) — real-time step-by-step progress when running a graph. Each node shows its input, output, timing, and status. You can see exactly what the AI is processing at every step.

Graph gallery (top) — your saved graphs with thumbnails. Clone, export, import, and manage your workflow library.


Node Categories at a Glance

Control Flow

The structural backbone of your workflows. Start nodes (manual, timer, cron, webhook) define entry points. End nodes terminate. In between: Hub nodes merge parallel paths, Split/Combine handle content manipulation, Buffer collects items, Delay introduces pauses, and Control nodes pause for human decisions.

Cognition

Single-shot LLM operations. Prompt for custom instructions. Gate for binary yes/no decisions. Choice for multi-path routing. Loop for iteration (count-based, while-condition, or foreach over a list). Plus Summarize, Compare, Extract, Classify, Rewrite, Translate, and Joke — each purpose-built for a specific text operation.
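The three Loop modes can be sketched as one dispatcher. This is an illustrative Python sketch; the parameter names are assumptions, not the node's real configuration keys.

```python
def run_loop(mode, body, *, count=0, condition=None, items=(), max_iters=100):
    """Run `body` under one of the three loop modes described above."""
    outputs = []
    if mode == "count":
        for i in range(count):                       # fixed number of passes
            outputs.append(body(i))
    elif mode == "foreach":
        for item in items:                           # one pass per list item
            outputs.append(body(item))
    elif mode == "while":
        i = 0
        while condition(i) and i < max_iters:        # cap guards runaway loops
            outputs.append(body(i))
            i += 1
    else:
        raise ValueError(f"unknown loop mode: {mode}")
    return outputs

counted = run_loop("count", str, count=3)            # three indexed passes
each = run_loop("foreach", str.upper, items=["a", "b"])
bounded = run_loop("while", str, condition=lambda i: i < 2)
```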

Advanced Cognition

Multi-step reasoning that goes deeper. Autothink applies 14 thinking strategies. Decision weighs options against criteria. Evaluate scores against benchmarks with pass/fail outputs. Reason chains logical arguments. Inspector analyses from multiple perspectives. Brainstorm generates ideas iteratively. Plan creates step-by-step action plans. Critic provides structured critique.

Super Cognition

The most powerful reasoning nodes. FractalMind launches recursive multi-directional thinking sessions — 3 spatial directions, 3 temporal modes, producing a tree of interconnected insights. Qleph nodes connect to the relational micro-language engine for symbolic computation.

Terminal

24 specialized terminal node types. User Terminal executes commands directly (no LLM). Agent Terminal lets the AI decide what commands to run via an agentic loop. Plus 21 domain-specific terminals: Git, Python, Node.js, Rust, Go, Docker, Kubernetes, PostgreSQL, MySQL, Redis, MongoDB, SSH, network diagnostics, system info, file operations, and more. All with Anima's 5-layer security model.

MCP Service Nodes

37 nodes connecting to external services via MCP Gateway. Dev: GitHub (26 tools), Git, Playwright, Sentry, Datadog. Cloud: Docker, Kubernetes, AWS, GCP, Azure. Data: Redis, MongoDB, PostgreSQL, MySQL, SQLite, Elasticsearch, Google Drive. Comms: Telegram (21 tools), WhatsApp, Email, Slack, Discord, Calendar. Productivity: Trello, Notion, Jira, Linear. Security: 1Password, Vault. Search: DuckDuckGo, Brave.

Multi-Agent

Coordinate multiple AI characters. Broadcast sends a message to multiple characters simultaneously. Collect gathers their responses. Consensus drives agreement through multiple rounds. Delegate assigns tasks to specific characters based on expertise.

Qonscious

14 nodes for consciousness-aware processing. Q Gate routes content based on detected patterns in emotional/cognitive state. Q Watcher monitors for specific patterns over time. Q Insight analyses accumulated state for insights. Q Inject, Q Decay, Q Emote, Q Shift, and Q Reset manipulate consciousness dimensions directly. Enables workflows that respond to the AI's internal state — not just the content of messages.

Perception

Environmental awareness nodes. File Watcher monitors filesystem changes. URL Monitor polls web pages for updates. API Poller checks endpoint status. System Monitor tracks CPU, RAM, and disk. Process Check verifies running processes. Net Monitor watches network I/O. Mic Monitor detects sound levels. Webcam Monitor detects motion. Qrawl nodes (Search, Skim, Read, Focus) provide AI-native web browsing within workflows.

Memory

Persistent storage nodes. Memory writes to long-term semantic memory. Recall retrieves from memory by similarity search. Variables provide graph-scoped working memory. Clipboard gives LLMs temporary named slots. Knowledge injects document content into the Anima preprompt. Learn lets the AI modify its own persistent system prompts.


Core Concepts

Graphs

A graph is a complete workflow — a collection of nodes connected by edges. Every graph has at least a Start node and an End node. You can have multiple Start nodes (different triggers), parallel branches, loops, and multiple End nodes.

Edges and Handles

Edges connect node outputs to node inputs. Some nodes have multiple output handles — Gate has "yes" and "no", Choice has "option_1" through "option_n", Evaluate has "pass" and "fail". This is how workflows branch based on AI decisions.
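A branching edge set can be pictured as (source, output handle, target) triples; the node IDs below are made up for illustration, not ThinkThing's internal format.

```python
# Each edge names the handle it leaves from, so one node can fan out
# to different targets depending on its decision.
edges = [
    ("gate_1", "yes", "summarize_1"),   # Gate's "yes" handle
    ("gate_1", "no",  "end_1"),         # Gate's "no" handle
]

def next_node(node_id, handle, edges):
    """Follow the edge leaving `node_id` via `handle`, if one exists."""
    for src, h, dst in edges:
        if src == node_id and h == handle:
            return dst
    return None

target = next_node("gate_1", "yes", edges)   # routes to "summarize_1"
```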

Anima Connection

Nodes that require LLM calls need an Anima node on the canvas. The Anima node links to a specific AI character, providing the LLM model, personality, and tool access. Without it, cognition and terminal nodes cannot execute. One Anima node can power multiple cognition nodes, and you can add multiple Anima nodes to run different characters in the same workflow.

Control Codes

LLM responses can contain control codes that steer execution: [CHOICE:option] selects a path, [GOTO:label] jumps to a node, [SET:var=val] sets a variable, [DONE] stops, [PAUSE] waits for input, [LOOP:break] exits a loop. These codes are database-driven — you can create custom codes per character.
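Detection can be sketched as a simple pattern scan over the reply. This sketch assumes a `[CODE]` / `[CODE:payload]` grammar; the real detector is database-driven, so the set of recognized codes varies per character.

```python
import re

CODE_RE = re.compile(r"\[([A-Z]+)(?::([^\]]+))?\]")  # [CODE] or [CODE:payload]

def extract_control_codes(reply: str):
    """Return (code, payload) pairs found in an LLM reply; payload may be None."""
    return [(m.group(1), m.group(2)) for m in CODE_RE.finditer(reply)]

found = extract_control_codes("Option two fits best. [CHOICE:option_2] [DONE]")
# found == [("CHOICE", "option_2"), ("DONE", None)]
```

An executor would then act on each pair in order: route on CHOICE, stop on DONE, and so on.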

Real-Time Execution

When a graph runs, you watch it live. The execution monitor shows each node activating in sequence, with full content display at every step. Nodes highlight on the canvas as they execute. You can pause, resume, cancel, or provide input at Control checkpoints. Errors are caught and displayed with full context.


Explore ThinkThing

Getting Started

Create your first graph — add nodes, connect them, run the workflow, and watch execution in real time. The simplest Start → Prompt → End pattern that gets you building immediately.

Node Reference

All 143+ node types organized by category with descriptions, handle types, and configuration options. The complete catalogue of what you can put on the canvas.

Running Graphs

Execution states, the real-time monitor, human-in-the-loop checkpoints (approve/revise/reject), error handling, and how to debug workflows that don't behave as expected.

Control Codes

The LLM directives that steer workflow execution. Built-in codes, custom codes, how they're detected, and when to use them. Database-driven so you can create character-specific codes.

Parallel Branches

Split/merge patterns for workflows that need to process multiple paths simultaneously. Hub merge strategies (wait for all, first, or any), fan-out distribution, and how to design efficient parallel architectures.
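The merge strategies above can be sketched with asyncio. Illustrative only: the strategy names come from the list above, but the exact semantics (result ordering, cancellation of losing branches) are assumptions.

```python
import asyncio

async def hub_merge(branches, strategy="all"):
    """Merge parallel branches: 'all' waits for every result (input order);
    'first' returns as soon as one branch completes and cancels the rest."""
    tasks = [asyncio.ensure_future(b) for b in branches]
    if strategy == "all":
        return await asyncio.gather(*tasks)
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()                                   # drop the slower branches
    return [t.result() for t in done]

async def _branch(value, delay):
    await asyncio.sleep(delay)
    return value

all_results = asyncio.run(
    hub_merge([_branch("a", 0.0), _branch("b", 0.0)], "all"))
first_result = asyncio.run(
    hub_merge([_branch("fast", 0.0), _branch("slow", 0.2)], "first"))
```

A "wait for any" variant would sit between the two: return each result as it lands rather than blocking on the full set.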

Updated on Mar 21, 2026