
GUIness is a self-contained, single-file HTML application for designing, testing, and executing AI skill pipelines. It runs entirely in the browser — no server, no install, no dependencies. This quickstart walks you through opening the file, building your first pipeline from primitive nodes, and sending it to an LLM.

Requirements

Before you begin, make sure you have:
  • A modern browser (Chrome, Edge, Firefox, or Safari)
  • An API key for at least one LLM provider (Anthropic, OpenAI, or Gemini), or a local Ollama or OpenClaw instance

Build your first pipeline

1. Open the file

Double-click pipeline-builder.html in your file manager to open it in your default browser.
For full File System Access API support — including autosave backups to a local folder — serve the file from localhost instead of opening it directly via the file:// protocol.
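Any static file server works for this. As a minimal sketch using Python's built-in server (assuming Python 3 is installed; the port number is arbitrary):

# Run from the folder containing pipeline-builder.html
python3 -m http.server 8000

Then browse to http://localhost:8000/pipeline-builder.html.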
2. Add nodes from the Node Library

The Node Library on the left lists all available skill types. Click the + button next to any primitive to place it on the canvas. There are 6 primitive node types to choose from: Text, Inputs, Compute, Code, Router, and Context.
Start with a Text node for static content (like a prompt template) and a Compute node to run LLM inference on it.
3. Connect nodes

Drag from an output port (solid filled circle on the right side of a node) to an input port (open circle on the left side of another node) to create a connection. Connections flow left-to-right by convention. The system prevents cycles: you cannot create an infinite loop except through Router nodes with explicit loop control.
4. Configure nodes in the Inspector panel

Click any node on the canvas to select it. The Inspector panel on the right shows that node’s configurable fields:
  • Text — paste or type your static content
  • Inputs — set the source type (text, file, URL, API, clipboard)
  • Compute — write instructions and choose an executor (LLM, Function, HTTP, or Code)
  • Code — select a language and write the script
  • Router — define a condition and evaluation mode (LLM or JS expression)
  • Context — add variables and set a merge mode
You can also edit the node’s name, description, tags, color, and shape from the top of the Inspector panel.
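For example, a Router condition in JS-expression mode can be a one-line boolean such as output.includes("ERROR"); the output identifier here is an assumption about the names GUIness exposes to expressions, so check the Router node's field hints for the exact scope.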
5. Run the pipeline

Click Export → LLM in the top-right toolbar to open the Run drawer.
  1. Go to the Run tab in the drawer.
  2. Click Vault in the toolbar and add your API key for the provider you want to use.
  3. Select a provider (Anthropic, OpenAI, Gemini, or Local).
  4. Choose a model from the dropdown.
  5. Click ▶ Send to LLM.
The response streams into the response area in real time. You can toggle between Raw and Preview modes, copy or save the output, or send a follow-up message.
Only nodes connected to at least one edge are included in the export. Disconnected nodes on the canvas are ignored.

Use a local LLM with Ollama

For free, private execution without a cloud API key, install Ollama and run a model locally:
ollama run llama3
Then restart Ollama with CORS enabled so the browser can reach it:
OLLAMA_ORIGINS=* ollama serve
In the Run drawer, select ⊙ Local, leave the URL set to http://localhost:11434, and click Detect to automatically find your available models.
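To verify that the browser will be able to reach Ollama, you can list the models the server reports (assuming curl is available):

# Should return a JSON object listing your locally installed models
curl http://localhost:11434/api/tags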

Use OpenClaw

If you use OpenClaw, enable the chat completions endpoint in ~/.openclaw/openclaw.json, then:
  1. Set the URL in the Local provider field to http://localhost:18789.
  2. Open the Vault and add your gateway token as an OpenClaw Token.
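The exact schema of openclaw.json depends on your OpenClaw version. As a purely illustrative sketch (the key names below are assumptions, not the documented format), the relevant stanza might look like:

// hypothetical keys; verify against the OpenClaw docs
{
  "chatCompletions": { "enabled": true, "port": 18789 }
}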
The Vault encrypts all credentials with a master password using AES-GCM. Credentials are only decrypted in memory during your session.
