A Lighthouse plugin is a Node module that implements a set of checks run by Lighthouse and displayed as a new category in the report. Plugins have a stable, semver-versioned API and are easily distributed on npm.

Plugin vs. custom config

Before writing a plugin, decide whether you need the full flexibility of a custom config.
| Capability | Plugin | Custom config |
| --- | --- | --- |
| Include your own custom audits | ✓ | ✓ |
| Add a custom category | ✓ | ✓ |
| Easily shareable and extensible on npm | ✓ | |
| Semver-stable API | ✓ | |
| Gather custom data from the page (artifacts) | | ✓ |
| Modify core categories | | ✓ |
| Modify config.settings properties | | ✓ |
If you need to collect new data from the page (custom gatherers) or modify core Lighthouse categories, use a custom config instead.
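For comparison, a minimal custom config might look like the following. This is a sketch; the audit path, gatherer path, and ids are hypothetical examples, and the artifacts declaration shape assumes a recent Lighthouse version.

```javascript
// custom-config.js — a sketch of a custom config (not a plugin).
// Paths and ids below are hypothetical.
export default {
  extends: 'lighthouse:default',
  // Like plugins, custom configs can load their own audits...
  audits: ['./audits/my-custom-audit.js'],
  // ...but unlike plugins, they can also register custom gatherers
  // and modify core categories.
  artifacts: [{id: 'MyGatherer', gatherer: './gatherers/my-gatherer.js'}],
  categories: {
    performance: {
      auditRefs: [{id: 'my-custom-audit', weight: 1}],
    },
  },
};
```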

Creating a plugin

1. Create package.json

A Lighthouse plugin is a Node module whose name starts with lighthouse-plugin-. Use peerDependencies to declare the Lighthouse version requirement — do not add Lighthouse as a direct dependency.
package.json
{
  "name": "lighthouse-plugin-example",
  "type": "module",
  "main": "plugin.js",
  "peerDependencies": {
    "lighthouse": "^13.0.3"
  },
  "devDependencies": {
    "lighthouse": "^13.0.3"
  }
}
2. Create plugin.js

This is the configuration entry point for your plugin. It declares which audit files to load and defines the new category that will appear in the report.
plugin.js
export default {
  // Paths to your custom audit files.
  audits: [{path: 'lighthouse-plugin-example/audits/has-cat-images.js'}],

  // The new category added to the report.
  category: {
    title: 'Cats',
    description:
      'When integrated into your website effectively, cats deliver delight and bemusement.',
    auditRefs: [{id: 'has-cat-images-id', weight: 1}],
  },
};
3. Write your audit files

Each audit is a class with a static meta getter and a static audit() method. Place audit files in a subdirectory (e.g. audits/) and reference them by path in plugin.js.
audits/has-cat-images.js
import {Audit} from 'lighthouse';

class CatAudit extends Audit {
  static get meta() {
    return {
      id: 'has-cat-images-id',
      title: 'Page has at least one cat image',
      failureTitle: 'Page does not have at least one cat image',
      description:
        'Pages should have lots of cat images to keep users happy. ' +
        'Consider adding a picture of a cat to your page to improve engagement.',
      requiredArtifacts: ['ImageElements'],
    };
  }

  static audit(artifacts) {
    const images = artifacts.ImageElements;
    const catImages = images.filter(img => img.src.toLowerCase().includes('cat'));

    return {
      score: catImages.length > 0 ? 1 : 0,
      numericValue: catImages.length,
    };
  }
}

export default CatAudit;
4. Test locally

During development, set NODE_PATH to the parent of your plugin directory so that Lighthouse can resolve it as a module.
# Run from inside your plugin directory.
# NODE_PATH=.. lets Lighthouse resolve the plugin as if it were installed in node_modules.
NODE_PATH=.. npx lighthouse https://example.com \
  --plugins=lighthouse-plugin-example \
  --only-categories=lighthouse-plugin-example \
  --view
5. Publish to npm

Once your plugin is ready, publish it like any other npm package. Because the plugin name starts with lighthouse-plugin-, users can discover and install it directly.
npm publish
Users install your plugin and then pass --plugins=lighthouse-plugin-example to their Lighthouse CLI invocations or programmatic API calls.
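Programmatically, the equivalent of the --plugins flag is the plugins array in a config object passed to Lighthouse's Node API. A sketch, assuming the plugin has been installed from npm:

```javascript
// config.js — passed as the third argument to lighthouse(url, flags, config).
// A sketch; assumes lighthouse-plugin-example is installed alongside lighthouse.
export default {
  extends: 'lighthouse:default',
  plugins: ['lighthouse-plugin-example'],
};
```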

Plugin config API

The plugin.js export is an object with the following top-level properties.

audits

Declares the new audits the plugin adds. Type: Array<{path: string}>. Each path should be a module-style path that a consumer could pass to require, in the form lighthouse-plugin-<name>/path/to/audit.js.

category

Defines the display strings and scoring for the plugin’s report section.
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| title | string | Yes | Display name in the report. Keep it short (under 20 characters). |
| description | string | No | Explains the category’s purpose. Link to docs or your repo. |
| manualDescription | string | No | Description for manual audits only. Use only if you’ve added manual audits. |
| auditRefs | Array<{id, weight, group?}> | Yes | Audits to include. weight controls their contribution to the category score. |
| supportedModes | string[] | No | Lighthouse modes this category supports. All modes are supported if omitted. |
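For example, a category restricted to navigation mode could be declared like this. A sketch: cat-alt-text-id and the weights are hypothetical.

```javascript
// A sketch of a fuller category definition; the second audit id and the
// weights are hypothetical examples.
export default {
  audits: [/* ... */],
  category: {
    title: 'Cats',
    description: 'Audits for cat imagery on the page.',
    supportedModes: ['navigation'], // omit to support all modes
    auditRefs: [
      {id: 'has-cat-images-id', weight: 1},
      {id: 'cat-alt-text-id', weight: 2}, // higher weight counts more toward the score
    ],
  },
};
```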

groups

Optional. Groups allow you to visually cluster audits within a category in the HTML report.
groups: {
  'images': {
    title: 'Image audits',
    description: 'Audits related to images on the page.',
  },
  'scripts': {
    title: 'Script audits',
  },
},
Reference a group from auditRefs by setting the group property to the group key:
auditRefs: [
  {id: 'my-image-audit', weight: 1, group: 'images'},
  {id: 'my-script-audit', weight: 1, group: 'scripts'},
],

Plugin audit API

meta

A static getter returning the audit’s metadata.
| Property | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | Kebab-case identifier, typically matching the filename. |
| title | string | Yes | Short, user-visible title when the audit passes. |
| failureTitle | string | No | Short, user-visible title when the audit fails. |
| description | string | Yes | Why the audit matters. Markdown links supported. |
| requiredArtifacts | Array<string> | Yes | Artifacts that must be present. See available artifacts below. |
| scoreDisplayMode | "numeric" \| "binary" \| "manual" \| "informative" | No | How the score is displayed in the report. |

audit(artifacts, context)

The function that computes results. Returns an object with at least a score property.
| Return property | Type | Description |
| --- | --- | --- |
| score | number \| null | 0 to 1. Use null with notApplicable: true when the audit doesn’t apply. |
| numericValue | number | Optional raw numeric value exposed in the JSON result. |
| notApplicable | boolean | Marks the audit as not applicable. Score should be null. |
| details | object | Structured table or list details for the report. |

Scores of 0.9 and above are treated as passing and collapsed into the “Passed audits” section by default; a fractional score just above that threshold still reads as passed while recording that the result was not perfect.
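As an illustration, the cat audit from earlier could report notApplicable when a page has no images at all. A sketch; artifacts here is a plain object standing in for the real Lighthouse artifacts:

```javascript
// Sketch: returning notApplicable when the page has nothing to check.
// `artifacts` is a plain object standing in for real Lighthouse artifacts.
function audit(artifacts) {
  const images = artifacts.ImageElements;
  if (images.length === 0) {
    // The audit doesn't apply: there are no images on the page at all.
    return {score: null, notApplicable: true};
  }
  const catImages = images.filter(img => img.src.toLowerCase().includes('cat'));
  return {score: catImages.length > 0 ? 1 : 0, numericValue: catImages.length};
}
```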

Available artifacts

| Artifact | Description |
| --- | --- |
| fetchTime | ISO timestamp of when the page was fetched |
| URL | The requested and final URLs of the page |
| GatherContext | The gather mode (navigation, timespan, snapshot) |
| settings | The Lighthouse settings used for this run |
| Timing | Internal performance timings for the Lighthouse run itself |

| Artifact | Description |
| --- | --- |
| BenchmarkIndex | Rough estimate of host machine CPU speed |
| HostFormFactor | 'mobile' or 'desktop' |
| HostUserAgent | The user agent string of the host browser |
| HostProduct | The Lighthouse integration (CLI, DevTools, etc.) |

| Artifact | Description |
| --- | --- |
| ConsoleMessages | Console API calls and runtime exceptions |
| MainDocumentContent | The HTML of the main document |
| ImageElements | All <img> elements and their attributes |
| LinkElements | All <link> elements and their attributes |
| MetaElements | All <meta> elements and their attributes |
| Scripts | All scripts loaded by the page |
| ViewportDimensions | The inner width/height of the viewport |

| Artifact | Description |
| --- | --- |
| DevtoolsLog | All DevTools Protocol events recorded during page load. Use with NetworkRecords to get structured request objects. |
| Trace | Raw Chrome performance trace. Use with trace processor utilities for timing data. |
Lighthouse has additional internal artifacts not on this list. Those are considered experimental and may change without notice. Only use listed artifacts if you need a stable plugin.
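For instance, a hypothetical audit that checks for a viewport meta tag could consume the MetaElements artifact like this. A sketch only; the real built-in viewport audit is more thorough, and artifacts here is a plain stand-in object:

```javascript
// Sketch: reading the MetaElements artifact inside an audit() implementation.
// `artifacts` is a plain object standing in for the real artifacts.
function hasViewportMeta(artifacts) {
  return artifacts.MetaElements.some(
    // Normalize the name before comparing; attribute casing varies in the wild.
    meta => meta.name && meta.name.toLowerCase() === 'viewport'
  );
}
```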

Using network requests

Network request data is derived from DevtoolsLog at audit time using the NetworkRecords computed artifact. Pass context to allow Lighthouse to cache the result across audits.
audits/header-police.js
import {Audit, NetworkRecords} from 'lighthouse';

class HeaderPoliceAudit extends Audit {
  static get meta() {
    return {
      id: 'header-police-audit-id',
      title: 'All headers stripped of debug data',
      failureTitle: 'Headers contained debug data',
      description: 'Pages should mask debug data in production.',
      requiredArtifacts: ['DevtoolsLog'],
    };
  }

  static async audit(artifacts, context) {
    const devtoolsLog = artifacts.DevtoolsLog;
    // Pass context so Lighthouse can cache and share the parsed result.
    const requests = await NetworkRecords.request(devtoolsLog, context);

    const badRequests = requests.filter(request =>
      request.responseHeaders.some(
        header => header.name.toLowerCase() === 'x-debug-data'
      )
    );

    return {
      score: badRequests.length === 0 ? 1 : 0,
    };
  }
}

export default HeaderPoliceAudit;

Naming best practices

Category titles

Keep category titles under 20 characters — ideally a single word or acronym. Avoid prefixes like “Lighthouse” or “Plugin”.

Audit titles

Write titles in the present tense, describing what the page is or is not doing.

Do:
  • “Uses HTTPS”
  • “Does not use HTTPS”
  • “Tap targets are sized appropriately”

Don’t:
  • “Good job on alt attributes”
  • “Fix your headers”

Audit descriptions

Provide brief context for why the audit matters and link to relevant guides. Markdown links are supported.

Do:
“All sites should be protected with HTTPS, even ones that don’t handle sensitive data. HTTPS prevents intruders from tampering with communications. Learn more.”

Don’t:
“Images need alt attributes.”

Common mistakes

Forgetting to filter or normalize artifact data is a frequent source of bugs.
  • Forgetting to filter: Most audits have a specific use case, but edge cases come up frequently — blob:, data:, and file: URLs for network requests; non-JavaScript script types; 1×1 tracking pixel images.
  • Forgetting to normalize: Artifact values represent what was observed on the page. Header names and values, script type values, and src values may have leading/trailing whitespace, be mixed-case, or be relative URLs.
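A sketch of both fixes applied to network request data; the helper name is hypothetical, and the request shape mirrors NetworkRecords output:

```javascript
// Sketch: filter out non-HTTP requests and normalize header names before
// inspecting them. Objects mirror the shape of NetworkRecords entries.
function debugHeaderRequests(requests) {
  return requests
    // Filter: ignore blob:, data:, and file: URLs.
    .filter(request => request.url.startsWith('http'))
    // Normalize: header names may be mixed-case with stray whitespace.
    .filter(request =>
      request.responseHeaders.some(
        header => header.name.trim().toLowerCase() === 'x-debug-data'
      )
    );
}
```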

Examples
