Loom — The Weaver Owl

What is Loom?

Loom is an Elixir-native AI coding assistant that reads your codebase, proposes edits, runs commands, and commits changes — through both an interactive CLI and a Phoenix LiveView web UI with real-time streaming chat, file browsing, diff viewing, and an interactive decision graph. Unlike chat-based coding tools, Loom maintains a persistent decision graph across sessions so it remembers why decisions were made, not just what was done.

Key Features

Phoenix LiveView Web UI

Real-time streaming chat, file tree browser, unified diff viewer, interactive SVG decision graph, model selector, and session switcher — all without writing JavaScript

Interactive CLI

REPL-style interface with streaming output, colored diffs, and Owl-powered markdown rendering. Run one-shot commands or persistent sessions

Decision Graph

Persistent reasoning memory with 7 node types (goal, decision, option, action, outcome, observation, revisit) and typed relationships. Remembers context across sessions
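
To make the idea concrete, here is a hypothetical sketch of what a decision-graph node with typed relationships could look like. The `DecisionNode` module, its fields, and the `link/3` helper are illustrative assumptions, not Loom's actual schema:

```elixir
# Hypothetical sketch of a decision-graph node. Loom's real struct,
# node storage, and relationship names may differ.
defmodule DecisionNode do
  @node_types [:goal, :decision, :option, :action, :outcome, :observation, :revisit]

  defstruct [:id, :type, :body, edges: []]

  # Create a node of one of the seven supported types.
  def new(type, body) when type in @node_types do
    %DecisionNode{id: System.unique_integer([:positive]), type: type, body: body}
  end

  # Add a typed relationship to another node, e.g. :chosen_from or :led_to.
  def link(%DecisionNode{} = node, relation, target_id) do
    %{node | edges: [{relation, target_id} | node.edges]}
  end
end
```

Because each edge carries a relation label, a later session can walk the graph and recover not just the actions taken but the reasoning path that led to them.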

11 Built-in Tools

File read/write/edit, glob search, regex search, directory listing, shell execution, git operations, decision logging/querying, and sub-agent search

Multi-Provider LLM Support

Support for Anthropic, OpenAI, Google, Groq, xAI, and more via req_llm. 16+ providers, 665+ models. Real-time cost tracking

Repo Intelligence

ETS-backed file index, regex symbol extraction, relevance-ranked context packing. Token-aware context window with automatic budget allocation
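
As a rough illustration of the idea (not Loom's actual implementation), an ETS-backed index with relevance ranking and a budget-limited packer can be sketched in a few lines. The module name, the occurrence-count scoring, and the byte-size "budget" here are simplifying assumptions:

```elixir
# Minimal sketch: an ETS table as a file index, plus relevance-ranked
# context packing under a size budget. Loom's real index and scoring
# are more sophisticated (token-aware, symbol-based).
defmodule RepoIndex do
  def new, do: :ets.new(:repo_index, [:set, :public])

  def put(table, path, contents), do: :ets.insert(table, {path, contents})

  # Rank files by how often the query term occurs, then greedily pack
  # the best-scoring files until the budget (in bytes, here) is spent.
  def pack(table, term, budget) do
    :ets.tab2list(table)
    |> Enum.map(fn {path, text} -> {path, text, length(String.split(text, term)) - 1} end)
    |> Enum.filter(fn {_path, _text, score} -> score > 0 end)
    |> Enum.sort_by(fn {_path, _text, score} -> -score end)
    |> Enum.reduce({[], budget}, fn {path, text, _score}, {acc, left} ->
      cost = byte_size(text)
      if cost <= left, do: {[{path, text} | acc], left - cost}, else: {acc, left}
    end)
    |> elem(0)
    |> Enum.reverse()
  end
end
```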

Why Elixir?

Most AI coding tools are built in Python or TypeScript. Loom is built in Elixir because the BEAM virtual machine is quietly the best runtime for AI agent workloads:
Concurrency by Default

An AI agent that reads files, searches code, runs shell commands, and calls LLMs is inherently concurrent. On the BEAM, each tool execution is a lightweight process. Parallel tool calls aren’t a threading nightmare — they’re just Task.async_stream. No thread pools, no callback hell, no GIL.
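
The idea above can be sketched in a few lines. The tool names and the `run` function are illustrative stand-ins, not Loom's actual tool API; only the `Task.async_stream` shape is the point:

```elixir
# Each pending tool call runs in its own lightweight BEAM process,
# bounded by max_concurrency and a per-call timeout.
tool_calls = [
  {:read_file, "mix.exs"},
  {:grep, "defmodule"},
  {:shell, "mix test --stale"}
]

# Stand-in for dispatching a tool call to its implementation.
run = fn {tool, arg} -> {tool, {:ok, "result for #{arg}"}} end

results =
  tool_calls
  |> Task.async_stream(run, max_concurrency: 4, timeout: 30_000)
  |> Enum.map(fn {:ok, result} -> result end)
```

A hung tool simply hits its timeout and is reported as such; the other calls finish unaffected.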
Fault Tolerance

When a shell command hangs or an LLM provider times out, OTP supervisors handle it. A crashed tool doesn’t take down the session. A crashed session doesn’t take down the application. This isn’t defensive coding — it’s how the BEAM works.
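
A minimal sketch of that supervision structure, with illustrative module names (Loom's real supervision tree will differ) and an `Agent` standing in for a session process:

```elixir
# Each session is an isolated child of a DynamicSupervisor: if one
# session process crashes, its siblings are untouched.
defmodule Sessions.Supervisor do
  use DynamicSupervisor

  def start_link(opts) do
    DynamicSupervisor.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(_opts), do: DynamicSupervisor.init(strategy: :one_for_one)

  # Stand-in session: an Agent holding per-session state.
  def start_session(id) do
    DynamicSupervisor.start_child(__MODULE__, {Agent, fn -> %{id: id, messages: []} end})
  end
end
```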
A Real-Time Web UI, Without JavaScript

Few AI coding assistants offer a real-time web UI with streaming chat, file browsing, diff viewing, and decision graph visualization — and fewer still without writing a single line of JavaScript. Phoenix LiveView makes this possible. The same session GenServer that powers the CLI powers the web UI. Two interfaces, one source of truth.
Hot Code Upgrades

Update Loom’s tools, add new providers, tweak the system prompt — all without restarting sessions or losing conversation state. In production. While agents are running.
Pattern Matching

Elixir’s pattern matching makes handling the zoo of LLM response formats (tool calls, streaming chunks, error variants, provider-specific quirks) clean and exhaustive rather than a tangle of if/else.
# This is real code from Loom's agent loop
case ReqLLM.Response.classify(response) do
  %{type: :tool_calls} -> execute_tools_and_continue(response, state)
  %{type: :final_answer} -> persist_and_return(response, state)
  %{type: :error} -> handle_error(response, state)
end

Built on Jido

Loom is built on the Jido agent ecosystem, a thoughtfully designed Elixir-native framework:
  • jido_action — Every Loom tool is a Jido.Action with declarative schemas, automatic validation, and composability
  • jido_ai — Provides the ReAct reasoning strategy that drives the agent loop
  • jido_shell — Sandboxed shell execution with resource limits
  • req_llm — 16+ LLM providers, 665+ models, streaming, tool calling, cost tracking
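
To illustrate the tool-as-action pattern that jido_action provides, here is a dependency-free sketch: a tool declares a parameter schema that is validated before the work runs. The module, the hand-rolled `validate/1`, and the `call/2` entry point are stand-ins; the real `Jido.Action` handles validation, composition, and much more for you:

```elixir
# Sketch of a declarative tool: schema-validated params, then run.
# Not the actual Jido.Action API — an imitation of its shape.
defmodule ReadFileTool do
  @schema [path: [type: :string, required: true]]

  def call(params, context \\ %{}) do
    with :ok <- validate(params), do: run(params, context)
  end

  # Reject calls that omit required parameters before any work happens.
  defp validate(params) do
    missing =
      for {key, opts} <- @schema, opts[:required], not Map.has_key?(params, key), do: key

    if missing == [], do: :ok, else: {:error, {:missing_params, missing}}
  end

  defp run(%{path: path}, _context) do
    case File.read(path) do
      {:ok, contents} -> {:ok, %{path: path, contents: contents}}
      {:error, reason} -> {:error, reason}
    end
  end
end
```

Declaring the schema up front means every tool invocation fails fast with a structured error instead of crashing mid-execution on bad input.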

Next Steps

Installation

Install Loom and configure your first LLM provider

Quickstart

Get started with your first coding session

Decision Graph

Learn how Loom remembers context across sessions

Architecture

Understand how Loom is built on the BEAM
