The Run section lives inside the Export drawer, directly below the output format tabs. After selecting a format, choose a provider and model, then click ▶ Send to LLM to execute your pipeline. Responses stream in real time and remain editable, shareable, and continuable with follow-up messages.
API keys must be added to the Vault before running. Open the Vault from the top toolbar and add credentials for your chosen provider.
## Providers
| Provider | Models | Credentials / Endpoint |
|---|---|---|
| Anthropic | Claude models | API key from Vault |
| OpenAI | GPT models | API key from Vault |
| Gemini | Gemini models (fetched live from API) | Gemini API key from Vault |
| Local | Ollama or OpenClaw | http://localhost:11434 (Ollama) or http://localhost:18789 (OpenClaw) — click Detect to discover available models |
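For local providers, the Detect button discovers which models a running server exposes. GUIness's internal detection code isn't shown here, but Ollama's documented REST API answers `GET /api/tags` with a JSON list of installed models, so a minimal sketch of what a Detect click would parse looks like this (the `parse_ollama_tags` helper name is hypothetical):

```python
import json
from urllib.request import urlopen


def parse_ollama_tags(raw: str) -> list[str]:
    """Extract model names from the JSON body that Ollama's
    GET /api/tags endpoint returns, e.g.
    {"models": [{"name": "llama3:latest"}, ...]}."""
    payload = json.loads(raw)
    return [m["name"] for m in payload.get("models", [])]


def detect_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Fetch the tag list from a local Ollama server (requires the
    server to be running on the default port)."""
    with urlopen(f"{base_url}/api/tags") as resp:
        return parse_ollama_tags(resp.read().decode())


# Example body as a local Ollama server would return it:
sample = '{"models": [{"name": "llama3:latest"}, {"name": "mistral:latest"}]}'
models = parse_ollama_tags(sample)
```

If no server is listening on the port, the request fails and no models are listed, which is why Detect only works while Ollama (or OpenClaw) is running.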
## Execution modes
- Single
- Chain
Single mode compiles your entire connected pipeline into one monolithic prompt and sends it as a single LLM call. This is the fastest execution path and works well for pipelines where all context can be expressed in one prompt.
- All connected nodes are serialized in topological order into a single payload
- The prompt is sent to the selected provider in one request
- The response streams into the response area in real time as tokens arrive
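The serialization step above can be sketched with Python's standard-library topological sorter. The node and edge shapes are assumptions for illustration, not GUIness's actual data model: each node holds a prompt fragment, and an edge `(a, b)` means `a` feeds into `b`, so `a`'s fragment must come first in the compiled prompt.

```python
from graphlib import TopologicalSorter


def compile_pipeline(nodes: dict[str, str], edges: list[tuple[str, str]]) -> str:
    """Serialize connected nodes in topological order into one
    monolithic prompt for a single LLM call."""
    ts = TopologicalSorter()
    for a, b in edges:
        ts.add(b, a)          # b depends on a, so a is emitted first
    for node_id in nodes:
        ts.add(node_id)       # include nodes with no incoming edges
    ordered = ts.static_order()
    return "\n\n".join(nodes[n] for n in ordered if n in nodes)


nodes = {"ctx": "You are a helpful assistant.", "task": "Summarize the input."}
edges = [("ctx", "task")]
prompt = compile_pipeline(nodes, edges)
```

Because everything is flattened into one request, this path makes exactly one round trip to the provider, which is why Single is the fastest mode.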
## Response area

After execution, the response appears in the panel below the Send button. Several controls are available:

### Raw / Preview
Toggle between markdown rendering and a live HTML preview. HTML blocks in the response are rendered inside sandboxed iframes to prevent script execution outside the preview area.
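GUIness's preview implementation isn't shown, but the standard browser mechanism it describes is an `<iframe>` with an empty `sandbox` attribute, which blocks script execution, form submission, and top-level navigation. A minimal sketch of wrapping an untrusted HTML block this way (the helper name is hypothetical):

```python
import html


def wrap_in_sandbox(block: str) -> str:
    """Wrap an HTML block in a sandboxed iframe via srcdoc.
    Escaping the block keeps it inert inside the attribute; the bare
    sandbox attribute disables scripts even after the iframe renders it."""
    return '<iframe sandbox srcdoc="{}"></iframe>'.format(
        html.escape(block, quote=True)
    )


out = wrap_in_sandbox("<script>alert(1)</script>")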
### Edit
Click Edit to make the response text editable inline. All formatting is preserved. Use this to refine the output before copying or saving.
### Copy / Save
Copy sends the response to your clipboard as plain text. Save downloads the response as a file. The file extension is auto-detected from the content: .md for Markdown, .html for HTML, .py for Python, and so on.

### Follow-up
Type a follow-up message in the input below the response and press Send. The full conversation history is included in the next request, so the model retains context from all previous exchanges in the session.
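"Full conversation history is included" maps onto the chat-style messages array that providers such as Anthropic and OpenAI accept. How GUIness stores its session is an assumption; the sketch below just shows the shape of the next request's payload (the `build_request` helper is hypothetical):

```python
def build_request(history: list[dict], follow_up: str) -> list[dict]:
    """Assemble the messages array for the next request: the entire
    session history plus the new user follow-up, so the model retains
    context from every previous exchange."""
    return history + [{"role": "user", "content": follow_up}]


history = [
    {"role": "user", "content": "Compiled pipeline prompt..."},
    {"role": "assistant", "content": "First response..."},
]
messages = build_request(history, "Shorten the summary.")
```

Note that the whole history is resent on every follow-up, so long sessions consume more input tokens with each exchange.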