Browser automation is reliable, but it’s not always the fastest approach. When a site’s internal API is accessible from within the browser context, calling those endpoints directly is faster and less fragile than driving the UI — no selectors to maintain, no page rendering to wait for, and no DOM changes to break your script. Libretto captures all network traffic during a browser session, so your agent can inspect exactly which API calls the site makes and convert a UI-based workflow to direct requests.

Example prompt

We have a browser script at ./integration.ts that automates going to Hacker News and getting the first 10 posts. Convert it to direct network scripts instead. Use the Libretto skill.
The agent runs the existing script, examines the captured network log, identifies the API endpoints that return the data, and rewrites the integration to call them directly from within the browser context.

The process

1. Run the existing script and capture traffic

npx libretto run ./integration.ts main --headed
Running the script with Libretto automatically captures all network requests and responses to .libretto/sessions/<session>/network.jsonl.
2. Examine the network log

npx libretto network
The agent reviews the captured requests to find API calls that return the data you need:
# Filter for JSON API responses
jq 'select(.contentType | test("application/json"))' \
  .libretto/sessions/<session>/network.jsonl

# Find POST requests
jq 'select(.method == "POST")' \
  .libretto/sessions/<session>/network.jsonl
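The same filtering can be done programmatically if the agent needs to post-process the log. This is a sketch, not part of the Libretto API: it assumes each line of network.jsonl is a JSON object carrying at least the `method`, `url`, and `contentType` fields that the jq filters above reference; other fields may vary.

```typescript
// Sketch: parse network.jsonl and apply the same filters as the jq commands.
// The entry shape is an assumption based on the fields used above.
type NetworkEntry = {
  method: string;
  url: string;
  contentType?: string;
  [key: string]: unknown;
};

function parseNetworkLog(jsonl: string): NetworkEntry[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as NetworkEntry);
}

// Equivalent of: jq 'select(.contentType | test("application/json"))'
function jsonResponses(entries: NetworkEntry[]): NetworkEntry[] {
  return entries.filter((e) => e.contentType?.includes("application/json"));
}

// Equivalent of: jq 'select(.method == "POST")'
function postRequests(entries: NetworkEntry[]): NetworkEntry[] {
  return entries.filter((e) => e.method === "POST");
}
```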
3. Run a site security review

Before committing to a network-first approach, the agent checks whether direct API calls are safe on this site:
  • Is there enterprise bot protection (Akamai, PerimeterX, Cloudflare)?
  • Is window.fetch monkey-patched to inspect call stacks?
  • Does the site do API-level monitoring?
This determines which network strategy to use.
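One of the checks above, the monkey-patched fetch test, can be illustrated concretely. Native browser functions stringify with a `[native code]` body, so inspecting `window.fetch.toString()` is a common heuristic. The helper below is a hypothetical sketch (not part of the Libretto skill); in a real workflow the check would run inside `page.evaluate`.

```typescript
// Sketch: does a function's source look like a native implementation?
// Native browser built-ins stringify as e.g. "function fetch() { [native code] }".
// A wrapper installed by site code will instead show its own JS source.
function looksNative(fnSource: string): boolean {
  return /\{\s*\[native code\]\s*\}/.test(fnSource);
}

// Inside the browser context this would be roughly:
//   const patched = await page.evaluate(
//     () => !/\[native code\]/.test(window.fetch.toString()),
//   );
```

Note this heuristic is not airtight: a wrapper can override `toString` to mimic native output, which is why the review also looks for bot-protection vendors and challenge flows.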
4. Choose the right network approach

The default choice is in-browser fetch: call endpoints from within the browser’s JavaScript context. The requests share the browser’s cookies, TLS fingerprint, and origin, so they look identical to requests the site’s own code would make.
// In your workflow file
const data = await page.evaluate(async () => {
  const res = await fetch("/api/items?page=1");
  return res.json();
});
Use this when the site has no bot protection and fetch is not monkey-patched.
5. Rewrite the workflow

The agent replaces the DOM-extraction logic with typed API client methods:
import { workflow } from "libretto";

type Post = {
  id: number;
  title: string;
  url: string;
  score: number;
  by: string;
};

type Output = { posts: Post[] };

class HackerNewsClient {
  constructor(private page: import("playwright").Page) {}

  private async apiFetch(path: string): Promise<string> {
    return await this.page.evaluate(
      async ({ path }) => {
        const res = await fetch(`https://hacker-news.firebaseio.com${path}`);
        if (!res.ok) throw new Error(`${res.status} for ${path}`);
        return res.text();
      },
      { path },
    );
  }

  async getTopStoryIds(): Promise<number[]> {
    const raw = await this.apiFetch("/v0/topstories.json");
    return JSON.parse(raw);
  }

  async getItem(id: number): Promise<Post> {
    const raw = await this.apiFetch(`/v0/item/${id}.json`);
    return JSON.parse(raw);
  }
}

export const topPosts = workflow<{}, Output>(async (ctx): Promise<Output> => {
  const { page } = ctx;
  await page.goto("https://news.ycombinator.com");

  const client = new HackerNewsClient(page);
  const ids = (await client.getTopStoryIds()).slice(0, 10);
  const posts = await Promise.all(ids.map((id) => client.getItem(id)));

  return { posts };
});
6. Validate headless

npx libretto run ./integration.ts topPosts --headless
Confirm the output matches what the browser-based version returned.
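A small comparison helper can make this confirmation mechanical. The sketch below is hypothetical: it reuses the `Post` type from step 5 and assumes story IDs and ordering should be stable between the DOM-based and network-based runs, while titles and scores may legitimately drift between runs.

```typescript
// Sketch: structural comparison between old (DOM) and new (network) output.
// Comparing only IDs and order is an assumption — adjust the keys to
// whatever your workflow guarantees is stable.
type Post = { id: number; title: string; url: string; score: number; by: string };

function sameStories(a: Post[], b: Post[]): boolean {
  if (a.length !== b.length) return false;
  // Scores and titles can change between runs; IDs and order should not.
  return a.every((post, i) => post.id === b[i].id);
}
```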

When to use direct network calls

The network approach is the default preference for new integrations. Use it when:
  • The site exposes a usable JSON API
  • You need to paginate deeply (fetching page 50 without clicking “next” 49 times)
  • You want data the UI doesn’t display (hidden fields, metadata, IDs)
  • Speed and reliability matter more than DOM fidelity
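The deep-pagination case in the list above is where the difference is starkest: a direct call can jump straight to page 50. The sketch below is hypothetical — `fetchPage` stands in for a `page.evaluate(fetch(...))` call against whatever paginated endpoint the network log revealed.

```typescript
// Sketch: fetch an arbitrary page range directly instead of clicking "next".
// `PageFetcher` is a hypothetical abstraction over page.evaluate(fetch(...)).
type Item = { id: number };
type PageFetcher = (page: number) => Promise<Item[]>;

async function fetchPages(
  fetchPage: PageFetcher,
  from: number,
  to: number,
): Promise<Item[]> {
  const results: Item[] = [];
  for (let p = from; p <= to; p++) {
    // Each iteration is one request — no UI clicks, no rendering waits.
    results.push(...(await fetchPage(p)));
  }
  return results;
}
```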

When not to use the network approach

Check the site’s security posture before committing to a network-first strategy. Do not use direct fetch() calls if:
  • The site has enterprise bot protection (Akamai, PerimeterX, Shape Security) and you’ve confirmed fetch is monkey-patched
  • window.fetch is wrapped with call-stack inspection — your calls will be flagged because they don’t originate from the site’s own bundled code
  • The site does API-level monitoring that correlates request timing to UI interactions
In these cases, use passive interception (page.on('response', ...)) instead. It captures the same data at zero additional detection risk because the requests come from the site’s own code.
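Passive interception can be sketched as follows. The predicate deciding which responses to keep is a pure function here for clarity; the `page.on('response', ...)` wiring is shown in comments because it needs a live Playwright page. The `/api/` URL pattern is an assumption borrowed from the earlier `/api/items` example — match it to the endpoints your network log actually shows.

```typescript
// Sketch: passive interception records API responses the site's own code
// already makes, without issuing any requests of our own.
function isJsonApiResponse(url: string, contentType: string | null): boolean {
  return url.includes("/api/") && (contentType ?? "").includes("application/json");
}

// With a live Playwright page, the wiring would look roughly like:
//   const captured: unknown[] = [];
//   page.on("response", async (res) => {
//     if (isJsonApiResponse(res.url(), res.headers()["content-type"] ?? null)) {
//       captured.push(await res.json());
//     }
//   });
//   // ...then drive the UI as before; `captured` fills up as the site loads data.
```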

Security analysis

The Libretto skill includes a site-security review reference that guides your agent through checking for bot protection services, fetch interception, and challenge flows. The agent uses this to produce a Site Assessment Summary before choosing an integration strategy:
## Site Assessment: https://example.com

### Bot Detection Profile
- Enterprise bot protection: None detected
- Fetch/XHR interception: Native (not patched)
- Challenge pages: None
- Overall security posture: Low

### Safe Approaches
- page.evaluate(fetch(...)): Safe
- page.on('response', ...): Viable
- DOM extraction: Always available as fallback
