Custom audits let you run your own checks alongside the built-in Lighthouse suite. Unlike plugins, a custom audit wired into a config can also include custom gatherers that collect new data from the page.

Custom audit vs. plugin

|                        | Custom audit + config                | Plugin           |
| ---------------------- | ------------------------------------ | ---------------- |
| Distribute on npm      | Manual — ship the config file        | ✅ Built-in       |
| Stable public API      | ❌ May change between minor versions | ✅ Semver-stable  |
| Custom gatherers       | ✅                                    | ❌                |
| Modify core categories | ✅                                    | ❌                |
| Change config.settings | ✅                                    | ❌                |
Use a custom audit + config when you need to collect new page data or change core Lighthouse behaviour. Use a plugin when you want a shareable package that only scores against existing artifacts.
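For contrast, a plugin is an npm package (its name must start with lighthouse-plugin-) whose entry module exports its audits and exactly one new category. A minimal sketch, using a hypothetical package and audit name:

```javascript
// Sketch of a plugin entry module. The package name
// 'lighthouse-plugin-example' and the audit it references are hypothetical.
export default {
  // Audits are referenced by a path resolvable from the plugin package.
  audits: [
    {path: 'lighthouse-plugin-example/audits/example-audit.js'},
  ],
  // A plugin contributes one new category to the report; it cannot
  // modify the core categories.
  category: {
    title: 'Example checks',
    description: 'Checks contributed by lighthouse-plugin-example.',
    auditRefs: [
      {id: 'example-audit', weight: 1},
    ],
  },
};
```

Once the package is installed, it runs via `lighthouse --plugins=lighthouse-plugin-example https://example.com`.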

New audit principles

A good Lighthouse audit should satisfy all of the following criteria before it is written.
  1. Applicable at scale — affects a significant portion of web developers based on severity and reach.
  2. Improves mobile web — contributes meaningfully to better end-user experience.
  3. Low overhead — does not significantly impact Lighthouse runtime performance or bundle size.
  4. Non-duplicate — measures something not already covered by existing audits.
  5. Measurable — has a numeric score (performance) or clear pass/fail state.
  6. Actionable — when failing, gives specific advice. If failure can be tied to a resource (DOM element, script, line of code), uses the appropriate detail type.
  7. No third-party APIs — completes the audit check without external API calls.

Actionability requirements

  1. Specific advice must be given when an audit fails. If an audit can fail in multiple ways, each path needs its own guidance.
  2. If the failure applies to a specific resource, use the appropriate detail type (see table below).
  3. If multiple failures can occur on a single page, return a table — do not collapse everything into a binary score.

Detail types

| Detail type         | Resource                | Notes                                                    |
| ------------------- | ----------------------- | -------------------------------------------------------- |
| node                | DOM element             | Set path to a devtoolsNodePath                           |
| source-location     | Code / network resource | Points to a specific line and column                     |
| code                | Freeform                | Rendered as monospace                                    |
| url                 | Network resource        | Rendered as a pretty link                                |
| thumbnail           | Image resource          | Same as url, but shows a thumbnail                       |
| link                |                         | Arbitrary label / URL combination                        |
| bytes               |                         | Value in bytes, formatted as KiB in the report           |
| text / ms / numeric |                         | Plain values; ms and numeric accept a granularity field  |
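As an illustration, a source-location detail item looks roughly like this (the URL and position are made up; the field names follow Lighthouse's source-location detail type):

```javascript
// Hypothetical source-location detail item. The URL, line, and column
// are invented for illustration.
const item = {
  type: 'source-location',
  url: 'https://example.com/main.js', // hypothetical script URL
  urlProvider: 'network', // 'network' when the URL is a real network resource
  line: 41,   // 0-indexed line
  column: 7,  // 0-indexed column
};

console.log(item.type); // 'source-location'
```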

Granularity

The bytes, ms, and numeric types accept an optional granularity field. The value must be a power of ten (e.g. 0.001, 0.1, 1, 10), and the formatted value is rounded to the nearest multiple of it. The default is 0.1, except for ms, which defaults to 10.
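The rounding behaviour can be sketched as follows (illustrative only; the actual formatting lives in Lighthouse's report renderer, not in your audit code):

```javascript
// Illustrative sketch of granularity rounding: snap a value to the
// nearest multiple of the granularity.
function roundToGranularity(value, granularity) {
  return Math.round(value / granularity) * granularity;
}

console.log(roundToGranularity(1234.56, 10)); // ms default -> 1230
console.log(roundToGranularity(1234.56, 1));  // -> 1235
```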

Writing a custom audit

Step 1: Write the audit class

An audit extends Audit from lighthouse and implements meta and audit(). This example detects memory usage above a threshold using a custom MemoryProfile artifact.
memory-audit.js
import {Audit} from 'lighthouse';

const MAX_MEMORY_USAGE = 1_000_000; // 1 MB

class MemoryUsage extends Audit {
  static get meta() {
    return {
      id: 'memory-audit',
      title: 'Did not find any large memory usage',
      failureTitle: 'Found large memory usage',
      description: 'Detects if any memory sample was larger than 1 MB.',

      // Declare which artifacts this audit needs.
      requiredArtifacts: ['MemoryProfile'],
    };
  }

  static audit(artifacts) {
    let largestMemoryUsage = 0;
    for (const sample of artifacts.MemoryProfile.samples) {
      if (sample.total > largestMemoryUsage) {
        largestMemoryUsage = sample.total;
      }
    }

    return {
      numericValue: largestMemoryUsage,
      score: largestMemoryUsage > MAX_MEMORY_USAGE ? 0 : 1,
    };
  }
}

export default MemoryUsage;

Step 2: Write a gatherer (if needed)

If your audit needs data not available in the built-in artifacts, write a custom gatherer. Gatherers run during page load and produce a named artifact.
memory-gatherer.js
import {Gatherer} from 'lighthouse';

class MemoryGatherer extends Gatherer {
  meta = {
    supportedModes: ['navigation'],
  };

  async getArtifact(context) {
    const session = context.driver.defaultSession;
    // Collect JS heap statistics via the Runtime domain. performance.memory's
    // fields are getters on its prototype, so JSON.stringify(performance.memory)
    // serializes to '{}'; serialize the field the audit reads ('total') explicitly.
    const {result} = await session.sendCommand('Runtime.evaluate', {
      expression: 'JSON.stringify({total: performance.memory.usedJSHeapSize})',
      returnByValue: true,
    });
      returnByValue: true,
    });

    return {
      samples: [JSON.parse(result.value)],
    };
  }
}

export default MemoryGatherer;

Step 3: Wire everything into a config

Use extends: 'lighthouse:default' to run your audits alongside the full default suite.
custom-config.js
export default {
  // 1. Run your custom tests along with all default Lighthouse tests.
  extends: 'lighthouse:default',

  // 2. Register new artifacts with custom gatherers.
  artifacts: [
    {id: 'MemoryProfile', gatherer: 'memory-gatherer'},
  ],

  // 3. Add your audit to the list that will run.
  audits: [
    'memory-audit',
  ],

  // 4. Create a new section in the report for your results.
  categories: {
    mysite: {
      title: 'My site audits',
      description: 'Audits for our site',
      auditRefs: [
        {id: 'memory-audit', weight: 1},
      ],
    },
  },
};
When extending the default config, arrays (like audits and artifacts) are concatenated with the defaults. Primitive values like settings.throttlingMethod override the default.
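For example, settings can be overridden in the same config (a sketch; throttlingMethod and onlyCategories are real settings keys, and the values chosen here are illustrative):

```javascript
// Sketch: primitive settings values replace the defaults when extending
// 'lighthouse:default'. The values below are illustrative.
export default {
  extends: 'lighthouse:default',
  settings: {
    throttlingMethod: 'devtools',              // overrides the default method
    onlyCategories: ['performance', 'mysite'], // run only these categories
  },
};
```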

Step 4: Run Lighthouse with your config

Pass the config file to the CLI with --config-path.
lighthouse --config-path=custom-config.js https://example.com
Or use it programmatically:
import lighthouse from 'lighthouse';
import customConfig from './custom-config.js';

const result = await lighthouse('https://example.com', {}, customConfig);
console.log(result.lhr.categories.mysite.score);

Audit meta properties

| Property          | Type          | Required | Description                                           |
| ----------------- | ------------- | -------- | ----------------------------------------------------- |
| id                | string        | Yes      | Kebab-case identifier. Filename should match.         |
| title             | string        | Yes      | User-visible title when the audit passes.             |
| failureTitle      | string        | No       | User-visible title when the audit fails.              |
| description       | string        | Yes      | Why the audit matters. Markdown links supported.      |
| requiredArtifacts | Array<string> | Yes      | Artifacts that must be present for the audit to run.  |
| scoreDisplayMode  | string        | No       | "numeric", "binary", "manual", or "informative".      |

Audit ID naming policy

Audit IDs should be based on the noun of what the audit surfaces, not a verb describing the check.
| Bad                        | Good                    | Reason                         |
| -------------------------- | ----------------------- | ------------------------------ |
| no-vulnerable-dependencies | vulnerable-dependencies | No no- prefix                  |
| redirects-http             | http-redirect           | No verbs                       |
| uses-long-cache-ttl        | cache-headers           | No verbs                       |
| is-crawlable               | crawlability            | No verbs                       |
| images                     | oversized-images        | Too broad                      |
| used-css                   | unused-css              | Prefer failing-state adjective |

Audit title conventions

  • Opportunity audits: Use an imperative title describing the fix. Example: “Compress large images”.
  • Standard audits: Use a descriptive title and failureTitle that state what the page is or is not doing. Example: title: "Page works offline", failureTitle: "Page does not work offline".

Returning structured details

When your audit fails for a specific resource, return a details object so the report can render a table or list.
static audit(artifacts) {
  const badImages = artifacts.ImageElements
    .filter(img => !img.alt)
    .map(img => ({
      // Each item needs a property matching the heading key ('node').
      node: {
        type: /** @type {const} */ ('node'),
        lhId: img.lhId,
        path: img.devtoolsNodePath,
        snippet: img.snippet,
        nodeLabel: img.nodeLabel,
      },
    }));

  return {
    score: badImages.length === 0 ? 1 : 0,
    details: {
      type: 'table',
      headings: [
        {key: 'node', valueType: 'node', label: 'Element'},
      ],
      items: badImages,
    },
  };
}
If multiple failures can occur on a single page, always return a table with one row per failure. A binary score alone does not give users enough information to act.

Scoring guidance

  • Return a number between 0 and 1. Scores above 0.9 are collapsed into “Passed audits” by default.
  • Weight audits within a category by their relative importance using the weight field in auditRefs.
  • When an audit’s advice does not apply to the current page, return {score: null, notApplicable: true} rather than inflating the score with a pass.
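As a sketch of the numeric case, one simple approach is to clamp and interpolate the raw value between a "good" and a "poor" threshold. (Lighthouse's own performance audits use a log-normal curve instead, via the Audit.computeLogNormalScore helper; the thresholds below are made up for illustration.)

```javascript
// Hedged sketch: map a raw numeric value to a 0-1 score by linear
// interpolation between two hypothetical thresholds.
const GOOD_THRESHOLD = 1_000_000; // scores 1 at or below this value
const POOR_THRESHOLD = 5_000_000; // scores 0 at or above this value

function scoreNumericValue(value) {
  if (value <= GOOD_THRESHOLD) return 1;
  if (value >= POOR_THRESHOLD) return 0;
  return (POOR_THRESHOLD - value) / (POOR_THRESHOLD - GOOD_THRESHOLD);
}

console.log(scoreNumericValue(3_000_000)); // halfway between thresholds -> 0.5
```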
