

The Run section lives inside the Export drawer, directly below the output format tabs. After selecting a format, choose a provider and model, then click ▶ Send to LLM to execute your pipeline. Responses stream in real time and remain editable, shareable, and continuable with follow-up messages.
API keys must be added to the Vault before running. Open the Vault from the top toolbar and add credentials for your chosen provider.

Providers

| Provider | Models | Endpoint |
| --- | --- | --- |
| Anthropic | Claude models | API key from Vault |
| OpenAI | GPT models | API key from Vault |
| Gemini | Gemini models (fetched live from API) | Gemini API key from Vault |
| Local | Ollama or OpenClaw | http://localhost:11434 (Ollama) or http://localhost:18789 (OpenClaw); click Detect to discover available models |
For local providers, no API key is required. Select ⊙ Local, confirm the URL matches your running instance, and click Detect to populate the model list.
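Detect presumably works by querying the local runtime's model-list endpoint; for Ollama that is GET /api/tags, which returns a JSON object with a `models` array of `{"name": ...}` entries. The sketch below shows how such a response could be parsed; the helper names are illustrative, not GUIness's actual code.

```python
import json
import urllib.request


def parse_model_names(payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response body."""
    return [m["name"] for m in payload.get("models", [])]


def list_local_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Ask a running Ollama instance which models are installed."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return parse_model_names(json.load(resp))
```

If the URL does not match a running instance, the request simply fails, which is why the UI asks you to confirm the URL before clicking Detect.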

Execution modes

Single mode compiles your entire connected pipeline into one monolithic prompt and sends it as a single LLM call. This is the fastest execution path and works well for pipelines where all context can be expressed in one prompt.
  • All connected nodes are serialized in topological order into a single payload
  • The prompt is sent to the selected provider in one request
  • The response streams into the response area in real time as tokens arrive
Use Single mode for straightforward pipelines, quick iteration, and any flat, context-rich pipeline where inter-step dependencies do not require separate LLM calls. Use Chain mode for sequential workflows (for example, research → summarize → format → review) where each step depends on the output of the previous one: Chain mode runs each step as its own LLM call, passing the previous step's response forward, whereas Single mode is faster because everything goes out in one request.
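The Single-mode serialization described above (connected nodes ordered topologically, then joined into one prompt) can be sketched with the standard library's `graphlib`. This is a minimal illustration under assumed node and edge shapes, not GUIness's actual compiler:

```python
from graphlib import TopologicalSorter


def compile_single_prompt(nodes: dict[str, str], edges: list[tuple[str, str]]) -> str:
    """Serialize connected nodes in topological order into one prompt.

    nodes: node id -> that node's prompt text
    edges: (upstream, downstream) pairs
    """
    ts = TopologicalSorter({n: set() for n in nodes})
    for upstream, downstream in edges:
        ts.add(downstream, upstream)  # downstream depends on upstream
    # static_order() yields nodes so every upstream precedes its downstream
    return "\n\n".join(nodes[n] for n in ts.static_order())
```

A cycle in the graph raises `graphlib.CycleError`, which is the natural failure mode for a pipeline that cannot be flattened into one prompt.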

Response area

After execution, the response appears in the panel below the Send button. Several controls are available:
1. Raw / Preview

Toggle between markdown rendering and a live HTML preview. HTML blocks in the response are rendered inside sandboxed iframes to prevent script execution outside the preview area.
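Sandboxing of this kind typically means each HTML block is embedded in an iframe whose `sandbox` attribute is set with no `allow-scripts` token, so embedded scripts cannot run. A minimal sketch of such a wrapper (an assumption about the mechanism, using `srcdoc` to inline the fragment):

```python
import html


def sandbox_iframe(html_block: str) -> str:
    """Wrap an HTML fragment in a sandboxed iframe via srcdoc.

    An empty sandbox attribute disables scripts, form submission,
    and same-origin access inside the frame.
    """
    return f'<iframe sandbox srcdoc="{html.escape(html_block, quote=True)}"></iframe>'
```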
2. Edit

Click Edit to make the response text editable inline. All formatting is preserved. Use this to refine the output before copying or saving.
3. Copy / Save

Copy sends the response to your clipboard as plain text. Save downloads the response as a file. The file extension is auto-detected from the content — .md for Markdown, .html for HTML, .py for Python, and so on.
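Extension auto-detection of this kind can be approximated with simple content heuristics. The sketch below is an illustrative guesser, not the exact rules GUIness applies:

```python
def guess_extension(content: str) -> str:
    """Guess a file extension from response content via simple heuristics."""
    stripped = content.lstrip()
    if stripped.startswith(("<!DOCTYPE html", "<html")):
        return ".html"
    # Python-looking lines take priority over Markdown markers
    if any(line.startswith(("def ", "import ", "from "))
           for line in content.splitlines()):
        return ".py"
    if stripped.startswith("#") or "```" in content:
        return ".md"
    return ".txt"
```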
4. Follow-up

Type a follow-up message in the input below the response and press Send. The full conversation history is included in the next request, so the model retains context from all previous exchanges in the session.
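Follow-ups work by replaying the full message history on every request. A minimal sketch of how such a session might accumulate turns (the `Session` class is hypothetical; role names follow the common chat-API convention):

```python
class Session:
    """Accumulates chat history so each follow-up carries full context."""

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def send(self, user_text: str, call_llm) -> str:
        """Append the user turn, call the model with the whole history,
        and record the assistant reply."""
        self.messages.append({"role": "user", "content": user_text})
        reply = call_llm(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

    def clear(self, system_prompt: str):
        """Drop all history and start a fresh session (the Clear button)."""
        self.messages = [{"role": "system", "content": system_prompt}]
```

Because the whole `messages` list is resent each time, long sessions grow the request size with every exchange, which is one reason to Clear between unrelated tasks.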
5. Clear

Click Clear to reset the response area and wipe the conversation history. The next run starts a fresh session.
