Stagehand provides flexible logging capabilities to help you debug and monitor your browser automation workflows.

Verbosity Levels

Stagehand supports three verbosity levels:
verbose: 0 | 1 | 2 (default: 1)
  • 0 - Error/Warning only (critical issues)
  • 1 - Info (standard information messages)
  • 2 - Debug (detailed debugging information)

Basic Configuration

Set the verbosity level when initializing Stagehand:
import { Stagehand } from '@browserbasehq/stagehand';

const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 2, // Enable debug logging
});

await stagehand.init();

Verbosity Level Details

Level 0: Error/Warning

Only log critical errors and important warnings:
const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 0,
});
Output includes:
  • Fatal errors
  • Connection failures
  • Timeout errors
  • Critical warnings
Example output:
[2025-02-24T10:30:15.123Z] ERROR: Failed to connect to browser
    error: Connection timeout after 15000ms

Level 1: Info (Default)

Standard information messages for normal operations:
const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 1, // Default level
});
Output includes:
  • All Level 0 messages
  • Initialization status
  • Action execution summaries
  • Page navigation events
  • Session lifecycle events
Example output:
[2025-02-24T10:30:15.123Z] INFO: Stagehand initialized
    env: LOCAL
    model: gpt-4o
[2025-02-24T10:30:16.456Z] INFO: Navigated to https://example.com
[2025-02-24T10:30:18.789Z] INFO: Action executed successfully
    action: click
    selector: button[type="submit"]

Level 2: Debug

Detailed debugging information for troubleshooting:
const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 2,
});
Output includes:
  • All Level 0 and 1 messages
  • LLM prompts and responses
  • DOM snapshots
  • CDP protocol messages
  • Token usage statistics
  • Performance metrics
  • Internal state changes
Example output:
[2025-02-24T10:30:15.123Z] DEBUG: LLM request sent
    model: gpt-4o
    prompt_tokens: 1250
[2025-02-24T10:30:16.456Z] DEBUG: LLM response received
    completion_tokens: 85
    inference_time_ms: 1333
[2025-02-24T10:30:17.789Z] DEBUG: DOM snapshot captured
    elements: 342
    size_bytes: 45678

Custom Logger

Implement a custom logger to integrate with your logging infrastructure:
import { Stagehand, LogLine } from '@browserbasehq/stagehand';

const customLogger = (logLine: LogLine) => {
  // Send logs to your logging service
  console.log(JSON.stringify({
    timestamp: logLine.timestamp,
    level: logLine.level,
    message: logLine.message,
    category: logLine.category,
    ...logLine.auxiliary,
  }));
};

const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 2,
  logger: customLogger,
});

LogLine Interface

The LogLine object passed to custom loggers:
interface LogLine {
  id?: string;                    // Unique log ID
  category?: string;              // Log category (e.g., "navigation", "action")
  message: string;                // Human-readable message
  level?: 0 | 1 | 2;             // Verbosity level
  timestamp?: string;             // ISO 8601 timestamp
  auxiliary?: {                   // Additional structured data
    [key: string]: {
      value: string;
      type: "object" | "string" | "html" | "integer" | "float" | "boolean";
    };
  };
}
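Because every auxiliary value arrives as a string tagged with its type, consumers usually want to unwrap it back into plain JavaScript values. The helper below is a sketch (it is not part of Stagehand's API) that parses each entry according to its declared type:

```typescript
// Sketch: unwrap the auxiliary map into plain values, parsing by declared type.
// This helper is an assumption, not part of Stagehand's exported API.
type AuxiliaryValue = {
  value: string;
  type: "object" | "string" | "html" | "integer" | "float" | "boolean";
};

function unwrapAuxiliary(
  auxiliary: Record<string, AuxiliaryValue> | undefined
): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  if (!auxiliary) return result;
  for (const [key, { value, type }] of Object.entries(auxiliary)) {
    switch (type) {
      case "object":
        result[key] = JSON.parse(value); // objects are stored as JSON strings
        break;
      case "integer":
        result[key] = parseInt(value, 10);
        break;
      case "float":
        result[key] = parseFloat(value);
        break;
      case "boolean":
        result[key] = value === "true";
        break;
      default: // "string" and "html" stay as-is
        result[key] = value;
    }
  }
  return result;
}
```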

Custom Logger Examples

JSON Logger

Log as structured JSON:
const jsonLogger = (logLine: LogLine) => {
  const logEntry: Record<string, unknown> = {
    timestamp: logLine.timestamp || new Date().toISOString(),
    level: logLine.level === 0 ? 'error' : logLine.level === 2 ? 'debug' : 'info',
    message: logLine.message,
    category: logLine.category,
  };
  
  if (logLine.auxiliary) {
    for (const [key, { value, type }] of Object.entries(logLine.auxiliary)) {
      logEntry[key] = type === 'object' ? JSON.parse(value) : value;
    }
  }
  
  console.log(JSON.stringify(logEntry));
};

File Logger

Write logs to a file:
import fs from 'fs';

const fileLogger = (logLine: LogLine) => {
  const timestamp = logLine.timestamp || new Date().toISOString();
  const level = logLine.level === 0 ? 'ERROR' : logLine.level === 2 ? 'DEBUG' : 'INFO';
  const message = `[${timestamp}] ${level}: ${logLine.message}\n`;
  
  fs.appendFileSync('stagehand.log', message);
};

const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 2,
  logger: fileLogger,
});
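Note that fs.appendFileSync performs blocking I/O on every log line, which can slow down a busy automation. A stream-based variant (a sketch, using Node's fs.createWriteStream) buffers writes instead:

```typescript
import fs from 'fs';

// Sketch: buffered file logging via a write stream instead of appendFileSync,
// so each log line doesn't block the event loop with synchronous I/O.
const stream = fs.createWriteStream('stagehand.log', { flags: 'a' });

const levelName = (level?: 0 | 1 | 2): string =>
  level === 0 ? 'ERROR' : level === 2 ? 'DEBUG' : 'INFO';

// Structural type matching the LogLine fields this logger needs
const streamLogger = (logLine: { level?: 0 | 1 | 2; message: string; timestamp?: string }) => {
  const timestamp = logLine.timestamp || new Date().toISOString();
  stream.write(`[${timestamp}] ${levelName(logLine.level)}: ${logLine.message}\n`);
};

// Flush any buffered output on shutdown
process.on('beforeExit', () => stream.end());
```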

Cloud Logger (Datadog, Splunk, etc.)

Integrate with cloud logging services:
import { datadogLogs } from '@datadog/browser-logs';

const datadogLogger = (logLine: LogLine) => {
  const level = logLine.level === 0 ? 'error' : logLine.level === 2 ? 'debug' : 'info';
  
  datadogLogs.logger.log(
    logLine.message,
    {
      category: logLine.category,
      ...logLine.auxiliary,
    },
    level
  );
};

const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 2,
  logger: datadogLogger,
});

Disable Pino Backend

Disable the default Pino logging backend (useful for tests or minimal environments):
const stagehand = new Stagehand({
  env: "LOCAL",
  disablePino: true,
  logger: customLogger, // Provide your own logger
});
When disablePino: true, only your custom logger receives log messages. If no custom logger is provided, logs fall back to console.* methods.

Log to File

Enable automatic logging of LLM inference to a file:
const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 2,
  logInferenceToFile: true, // Logs LLM prompts and responses to a file
});
Logs are written to ./stagehand-inference.log by default.

Production Logging Best Practices

1. Use appropriate verbosity

Set verbose: 0 or verbose: 1 in production to avoid excessive logging overhead.
const stagehand = new Stagehand({
  env: "BROWSERBASE",
  verbose: process.env.NODE_ENV === 'production' ? 0 : 2,
});
2. Implement structured logging

Use a custom logger that outputs structured JSON for easier parsing and analysis.
logger: (logLine) => {
  console.log(JSON.stringify({
    service: 'stagehand',
    ...logLine,
  }));
}
3. Integrate with monitoring

Send logs to your monitoring service (Datadog, Splunk, CloudWatch, etc.) for centralized observability.
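One common pattern is to batch log lines in memory and ship them periodically rather than making one network call per line. The sketch below assumes a hypothetical HTTP ingestion endpoint (LOGS_ENDPOINT and the URL are illustrative, not a real service):

```typescript
// Sketch: batch log lines and ship them to a hypothetical HTTP ingestion
// endpoint. LOGS_ENDPOINT and the fallback URL are assumptions for illustration.
const batch: object[] = [];
const BATCH_SIZE = 20;

function enqueue(entry: object): boolean {
  // Returns true when the batch has reached the flush threshold
  batch.push(entry);
  return batch.length >= BATCH_SIZE;
}

async function flush(): Promise<void> {
  if (batch.length === 0) return;
  const payload = batch.splice(0, batch.length); // drain the batch
  await fetch(process.env.LOGS_ENDPOINT ?? 'https://logs.example.com/ingest', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
}

const shippingLogger = (logLine: { message: string; level?: number }) => {
  if (enqueue({ service: 'stagehand', ...logLine })) {
    void flush(); // fire-and-forget; handle errors and retries in production
  }
};
```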
4. Filter sensitive data

Ensure your custom logger filters out sensitive information like API keys, passwords, or PII.
const sanitizeLogger = (logLine: LogLine) => {
  // Copy auxiliary as well, so deleting fields doesn't mutate the original log line
  const sanitized = { ...logLine, auxiliary: { ...logLine.auxiliary } };
  // Remove sensitive fields
  if (sanitized.auxiliary?.apiKey) {
    delete sanitized.auxiliary.apiKey;
  }
  myLogger.log(sanitized); // forward to your own logging sink
};
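Deleting fields one by one is fragile; a pattern-based redactor catches variants you didn't anticipate. The sketch below redacts any key whose name looks sensitive (the regex is an assumption — extend it to match your own secrets and PII):

```typescript
// Sketch: redact any key whose name matches a sensitive-looking pattern,
// instead of listing fields one by one. The pattern is an assumption.
const SENSITIVE = /api[_-]?key|password|secret|token|authorization/i;

function redactKeys(obj: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    out[key] = SENSITIVE.test(key) ? '[REDACTED]' : value;
  }
  return out;
}
```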

Debugging Tips

Set verbose: 2 temporarily to debug issues:
const stagehand = new Stagehand({
  env: "LOCAL",
  verbose: 2, // Enable for debugging
});
Look for:
  • LLM prompt/response details
  • Token usage and costs
  • DOM snapshot information
  • Action execution details
Enable file logging to review LLM interactions:
const stagehand = new Stagehand({
  env: "LOCAL",
  logInferenceToFile: true,
});
Review ./stagehand-inference.log to see:
  • Full LLM prompts sent
  • Complete LLM responses
  • Model selection decisions
Implement category-based filtering in your custom logger:
const categoryLogger = (logLine: LogLine) => {
  // Only log navigation and action events (category may be undefined)
  if (logLine.category && ['navigation', 'action'].includes(logLine.category)) {
    console.log(logLine.message);
  }
};
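You can filter by level the same way. The sketch below wraps a verbosity threshold around a logger (the boolean return value is for illustration only; Stagehand loggers return void), letting a verbose: 2 session still emit a quiet log stream:

```typescript
// Sketch: forward only messages at or below a chosen verbosity threshold.
// The boolean return is illustrative; a real Stagehand logger returns void.
const makeLevelLogger = (maxLevel: 0 | 1 | 2) =>
  (logLine: { level?: 0 | 1 | 2; message: string }): boolean => {
    const level = logLine.level ?? 1; // treat a missing level as info
    if (level > maxLevel) return false; // dropped
    console.log(logLine.message);
    return true; // forwarded
  };

const quietLogger = makeLevelLogger(0); // errors/warnings only
```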
Performance Tip: Level 2 (debug) logging can impact performance due to additional data serialization and I/O. Use it for development and troubleshooting, not production.
