GUIness is a self-contained, single-file HTML application for designing, testing, and executing AI skill pipelines. It runs entirely in the browser — no server, no install, no dependencies. This quickstart walks you through opening the file, building your first pipeline from primitive nodes, and sending it to an LLM.

Documentation Index
Fetch the complete documentation index at: https://mintlify.com/discoposse/GUIness/llms.txt
Use this file to discover all available pages before exploring further.
Requirements
Before you begin, make sure you have:
- A modern browser (Chrome, Edge, Firefox, or Safari)
- An API key for at least one LLM provider (Anthropic, OpenAI, or Gemini), or a local Ollama or OpenClaw instance
Build your first pipeline
Open the file
Double-click pipeline-builder.html in your file manager to open it in your default browser.

For full File System Access API support — including autosave backups to a local folder — serve the file from localhost instead of opening it directly via the file:// protocol.

Add nodes from the Node Library
The Node Library on the left lists all available skill types. Click the + button next to any primitive to place it on the canvas.

There are 6 primitive node types to choose from: Text, Inputs, Compute, Code, Router, and Context.
Connect nodes
Drag from an output port (solid filled circle on the right side of a node) to an input port (open circle on the left side of another node) to create a connection.

Connections flow left-to-right by convention. The system prevents cycles — you cannot create an infinite loop except through Router nodes with explicit loop control.
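This quickstart doesn't document how GUIness implements its cycle check, but the standard approach for a directed node graph is a reachability search before each new connection is accepted. A hypothetical sketch (the node IDs and edge shape are assumptions, not GUIness's actual data model):

```python
def would_create_cycle(edges, new_edge):
    """Return True if adding new_edge (src, dst) to the directed edge
    list would create a cycle, i.e. if dst can already reach src."""
    src, dst = new_edge
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, []).append(b)
    # Depth-first search from dst: reaching src means the edge closes a loop.
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adjacency.get(node, []))
    return False

edges = [("text1", "compute1"), ("compute1", "router1")]
print(would_create_cycle(edges, ("router1", "text1")))    # closes a loop: True
print(would_create_cycle(edges, ("router1", "context1"))) # safe to add: False
```

A Router node with explicit loop control would simply be exempted from this check.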
Configure nodes in the Inspector panel
Click any node on the canvas to select it. The Inspector panel on the right shows that node’s configurable fields:
- Text — paste or type your static content
- Inputs — set the source type (text, file, URL, API, clipboard)
- Compute — write instructions and choose an executor (LLM, Function, HTTP, or Code)
- Code — select a language and write the script
- Router — define a condition and evaluation mode (LLM or JS expression)
- Context — add variables and set a merge mode
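Conceptually, a configured pipeline is just each node's type plus its Inspector fields, joined by edges. The field names below are illustrative assumptions, not GUIness's actual schema:

```python
# A hypothetical three-node pipeline: an input feeding an LLM compute
# step, whose output is gated by a Router condition.
pipeline = {
    "nodes": [
        {"id": "in1", "type": "Inputs", "source": "text"},
        {"id": "c1", "type": "Compute", "executor": "LLM",
         "instructions": "Summarize the input in one paragraph."},
        {"id": "r1", "type": "Router", "mode": "JS expression",
         "condition": "output.length > 0"},
    ],
    "edges": [("in1", "c1"), ("c1", "r1")],
}

# Each node type carries a different set of Inspector fields.
for node in pipeline["nodes"]:
    fields = {k: v for k, v in node.items() if k not in ("id", "type")}
    print(node["type"], fields)
```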
Run the pipeline
Click Export → LLM in the top-right toolbar to open the Run drawer.
- Go to the Run tab in the drawer.
- Click Vault in the toolbar and add your API key for the provider you want to use.
- Select a provider (Anthropic, OpenAI, Gemini, or Local).
- Choose a model from the dropdown.
- Click ▶ Send to LLM.
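Behind Send to LLM, the app issues a provider-specific HTTP request using the key stored in the Vault. As a hypothetical sketch of the Anthropic case (built here but not sent; the model name and prompt are placeholders, though the endpoint and header names follow Anthropic's published Messages API):

```python
import json

# Request shape for Anthropic's Messages API; the exported pipeline text
# would go in the user message. Nothing is sent in this sketch.
request = {
    "url": "https://api.anthropic.com/v1/messages",
    "headers": {
        "x-api-key": "<key stored in the Vault>",
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    "body": {
        "model": "<chosen model>",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "<exported pipeline>"}],
    },
}
print(json.dumps(request["body"], indent=2))
```

The other providers differ only in URL, auth header, and body shape.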
Only nodes connected to at least one edge are included in the export. Disconnected nodes on the canvas are ignored.
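That connected-only rule amounts to a simple filter over the edge list. A sketch, assuming the node and edge shapes from earlier in this page:

```python
def connected_nodes(nodes, edges):
    """Keep only nodes that appear as an endpoint of at least one edge;
    disconnected nodes on the canvas are dropped from the export."""
    used = {endpoint for edge in edges for endpoint in edge}
    return [n for n in nodes if n["id"] in used]

nodes = [{"id": "text1"}, {"id": "compute1"}, {"id": "orphan"}]
edges = [("text1", "compute1")]
print([n["id"] for n in connected_nodes(nodes, edges)])  # "orphan" is excluded
```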
Use a local LLM with Ollama
For free, private execution without a cloud API key, install Ollama and run a model locally. Then select the Local provider, set the URL to http://localhost:11434, and click Detect to automatically find your available models.
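Port 11434 is Ollama's default HTTP API. As a sketch of the kind of request the Local provider would issue, using Ollama's documented /api/generate endpoint (the payload is built here but not sent, and "llama3" is a placeholder for whatever model you have pulled):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

# Build (but don't send) a generation request. Replace "llama3" with any
# model you have pulled locally, e.g. via `ollama pull <name>`.
payload = {
    "model": "llama3",
    "prompt": "Summarize this pipeline output in one sentence.",
    "stream": False,
}
body = json.dumps(payload).encode("utf-8")
print(f"POST {OLLAMA_URL} ({len(body)} bytes)")
```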
Use OpenClaw
If you use OpenClaw, enable the chat completions endpoint in ~/.openclaw/openclaw.json, then:
- Set the URL in the Local provider field to http://localhost:18789.
- Open the Vault and add your gateway token as an OpenClaw Token.