Your First ThinkThing Workflow

By Qui Academy · 3 min read

What You'll Build

A research and review pipeline — a graph that takes a topic, researches it with an AI character, evaluates the quality of the research, and either accepts the result or sends it back for revision. This covers the core ThinkThing concepts: nodes, edges, Anima connections, control flow, and execution monitoring.


Prerequisites

  • At least one AI character created in Qui Anima (see Creating Specialist Characters)
  • ThinkThing service running (check Services tab in QUI Core dashboard)

Step 1: Create a New Graph

  1. Open ThinkThing from the QUI Core dashboard (Services → ThinkThing) or the system tray
  2. Click New Graph in the gallery
  3. Name it Research Pipeline
  4. A blank canvas opens with a Start node already placed

[Screenshot: Empty canvas with Start node]


Step 2: Add the Core Nodes

Open the node palette on the left and add these nodes:

  1. Prompt (under Cognition) — this will do the research
  2. Gate (under Cognition) — this will evaluate quality
  3. Summarize (under Cognition) — this will produce the final output
  4. End (under Control Flow) — terminates the graph
  5. Anima (under Anima) — connects your AI character to the cognition nodes

You should now have 6 nodes on the canvas.


Step 3: Connect the Nodes

Drag edges between nodes by clicking an output handle and dropping on an input handle:

  1. Start → Prompt (the topic flows into the research step)
  2. Prompt → Gate (research output flows to quality check)
  3. Gate (approve output) → Summarize (good research gets summarized)
  4. Gate (reject output) → Prompt (bad research loops back for another attempt)
  5. Summarize → End

Then connect the Anima node to each cognition node's tool handle:

  • Anima → Prompt tool handle
  • Anima → Gate tool handle
  • Anima → Summarize tool handle

This tells ThinkThing which AI character to use for LLM calls.
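Conceptually, the wiring above is just a small directed graph. As an illustrative sketch (plain Python, not ThinkThing's actual graph format), the edges could be written down like this:

```python
# Illustrative only: the tutorial's pipeline as an adjacency map.
# Node names and the "approve"/"reject" labels mirror the steps above;
# ThinkThing's real on-disk representation may look nothing like this.
edges = {
    "Start": ["Prompt"],
    "Prompt": ["Gate"],
    # The Gate has two labeled outputs that route to different nodes.
    "Gate": {"approve": "Summarize", "reject": "Prompt"},
    "Summarize": ["End"],
}

# The Anima node attaches separately, to each cognition node's tool handle.
anima_targets = ["Prompt", "Gate", "Summarize"]
```

Note the cycle: Gate's reject output points back at Prompt, which is what makes the retry loop in Step 5 possible.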

[Screenshot: Connected graph showing the research pipeline flow]


Step 4: Configure Each Node

Start Node

Click the Start node and set the input:

Research the topic of semantic memory in AI systems. Cover what it is, 
how it differs from episodic memory, and practical implementations.

Anima Node

Click the Anima node and select your AI character from the dropdown. This character's personality, model, and tools will be used for all connected cognition nodes.

Prompt Node

Click the Prompt node and enter:

Research the following topic thoroughly. Provide a detailed analysis 
with specific examples and technical depth.

Topic: {{input}}

The {{input}} variable pulls content from the previous node (Start, or Gate on a retry loop).
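The {{input}} placeholder behaves like simple template interpolation: before the prompt is sent to the LLM, each {{name}} is replaced with the matching value from the upstream node. A minimal sketch of that substitution (illustrative Python; the render function is hypothetical, not part of ThinkThing):

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values from `variables`.
    Unknown placeholders are left untouched. Conceptual sketch only."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

prompt = "Research the following topic thoroughly.\n\nTopic: {{input}}"
# Prints the prompt with the Start node's topic substituted in.
print(render(prompt, {"input": "semantic memory in AI systems"}))
```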

Gate Node

Click the Gate node and configure:

  • Prompt: Evaluate whether this research is thorough and accurate. Reply with APPROVE if it meets quality standards, or REJECT with specific feedback on what needs improvement.
  • Approve output: routes to Summarize
  • Reject output: routes back to Prompt

The Gate node produces a [CHOICE:approve] or [CHOICE:reject] control code that ThinkThing uses to route the content.
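One way to picture that routing step: scan the Gate's output for the control code and pick the next node accordingly. This is a conceptual sketch of the idea, not ThinkThing source code:

```python
import re

def route(gate_output: str) -> str:
    """Find a [CHOICE:approve] or [CHOICE:reject] control code in the
    Gate's output and return the next node. Illustrative sketch only."""
    match = re.search(r"\[CHOICE:(approve|reject)\]", gate_output)
    if match is None:
        raise ValueError("Gate output contained no control code")
    return "Summarize" if match.group(1) == "approve" else "Prompt"
```

For example, route("Thorough and accurate. [CHOICE:approve]") returns "Summarize", while a rejection routes back to "Prompt".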

Summarize Node

Click the Summarize node and enter:

Create a concise executive summary of this research. 
Include key findings, practical implications, and recommendations.

Step 5: Run the Graph

  1. Click the Run button
  2. The Execution Monitor opens on the right
  3. Watch each node light up as it executes:
    • Start sends the topic
    • Prompt generates research (you'll see the full output)
    • Gate evaluates quality
    • If approved → Summarize produces the final output
    • If rejected → Prompt tries again with the Gate's feedback
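The approve/reject loop above can be sketched as a few lines of pseudocode-style Python. Everything here is hypothetical: call_llm stands in for whatever the Anima-backed nodes actually do, and the retry cap is an assumption for the sketch:

```python
# Conceptual sketch of Step 5's retry loop; not ThinkThing's engine.
def run_pipeline(topic: str, call_llm, max_retries: int = 3) -> str:
    feedback = ""
    for _ in range(max_retries):
        research = call_llm(f"Research: {topic}\n{feedback}")   # Prompt node
        verdict = call_llm(f"Evaluate: {research}")             # Gate node
        if "[CHOICE:approve]" in verdict:
            return call_llm(f"Summarize: {research}")           # Summarize node
        feedback = verdict  # rejected: loop back with the Gate's feedback
    raise RuntimeError("Gate rejected every attempt")
```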

[Screenshot: Execution monitor showing completed nodes with content preview]


Step 6: Review the Results

Click any completed node in the monitor to see:

  • Input — what the node received
  • Output — what it produced
  • Duration — how long it took
  • Tokens — LLM tokens consumed

The End node shows the final summarized output — your complete research pipeline result.


What You Learned

  Concept             What It Does
  Nodes               Processing steps — each does one thing
  Edges               Data flow between nodes
  Anima node          Connects an AI character to power LLM-based nodes
  Prompt node         Sends content to the LLM with instructions
  Gate node           Quality control — approve or reject with routing
  Control codes       [CHOICE:approve] / [CHOICE:reject] steer execution
  Loop                Rejected content goes back to Prompt for retry
  Execution Monitor   Real-time view of what each node is doing

Next Steps

  • Add a Human checkpoint before the Gate to review research yourself before the AI evaluates it — see Running Graphs
  • Try parallel branches — split research into multiple directions and merge results — see Parallel Branches
  • Explore all 143+ node types in the Node Reference
  • Use Control Codes to add variables, loops, and conditional routing

About the author

Qui Academy
Updated on Mar 22, 2026