notebooklm-py ships with a skill system that gives AI agents like Claude Code and Codex structured instructions for using the library. A skill is a SKILL.md file placed in a directory that the agent tool reads at startup. Once installed, the agent understands available commands, expected output formats, and recommended patterns—without you having to explain them each time.
Codex-compatible environments read a different file (AGENTS.md) that lives at the repository root and is automatically discovered when an agent clones or opens the project.
Installing the skill for Claude Code
You have two ways to install the skill. Both write the same SKILL.md content; choose the one that fits your workflow.
Install with the notebooklm CLI
The skill install command copies the bundled skill file into the local agent skill directories managed by the CLI. notebooklm skill install targets both ~/.claude/skills/notebooklm/SKILL.md and ~/.agents/skills/notebooklm/SKILL.md by default (scope: user, target: all).
Check installation status
Verify that the skill was installed correctly and review which targets are active. This shows the installed path, the version of the skill file, and whether the file matches the version bundled with the currently installed package.
Codex does not use the skill subcommand. In a repository checkout it reads the root AGENTS.md file directly, which contains the same guidance in Codex-compatible format. You do not need to run notebooklm skill install for Codex—just ensure the Python package is installed and authentication is set up.
What the skill provides
The skill file (SKILL.md) contains:
- A list of all available notebooklm CLI commands with arguments and options
- Recommended autonomy rules (which commands to run automatically vs. confirm first)
- Common workflow patterns (research to podcast, document analysis, bulk import)
- JSON output schemas for parsing command output programmatically
- Error handling guidance and exit code meanings
- Known limitations and rate-limit workarounds
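Because the skill documents JSON output schemas, an agent can parse command results programmatically instead of scraping text. The sketch below shows the general pattern; the field names (task_id, status) and payload are illustrative assumptions, not the library's documented schema—consult the skill file for the real shapes.

```python
import json

# Hypothetical JSON payload, standing in for the output of an async
# generate command run with JSON output enabled.
raw = '{"task_id": "3f9c1a7e-0b42-4d55-9a8e-1c2d3e4f5a6b", "status": "pending"}'

result = json.loads(raw)
if result.get("status") == "pending":
    # Hand the task ID to a polling subagent rather than blocking here.
    task_id = result["task_id"]
```

The same pattern applies to any command whose schema the skill file describes: parse once, then branch on structured fields.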
Uninstalling
To remove the skill, run the corresponding uninstall command for one or all targets.
Best practices for agents using the CLI
The following recommendations come directly from the CLI documentation and skill file, and apply to any agent driving the CLI programmatically.
Notebook context:
- Use notebooklm use <id> only in single-agent sessions. In parallel workflows, pass -n <id> directly to each command so agents do not overwrite each other's context file.
- Prefer full UUIDs over partial IDs in automation to avoid ambiguity.
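The per-command -n pattern can be sketched as a small argv builder that concurrent agents share. The UUID below is a made-up placeholder, and the flag position is an assumption—check the CLI's own help for the exact syntax.

```python
# Sketch: pin the notebook ID with -n on every command so concurrent
# agents never rely on the shared context file set by `use`.
def with_notebook(argv, notebook_id):
    """Return a notebooklm argv list with -n <id> appended."""
    return ["notebooklm", *argv, "-n", notebook_id]

nb = "1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed"  # placeholder UUID
cmd = with_notebook(["source", "add", "https://example.com/page"], nb)
# cmd is now ready to pass to subprocess.run(cmd)
```

Building argv lists centrally also makes it easy to enforce the full-UUID rule in one place.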
- All generate commands except mind-map are asynchronous by default. They return a task_id immediately.
- Do not use --wait in a main conversation loop—generation can take 5 to 45 minutes. Instead, use artifact wait <id> in a subagent.
- mind-map is synchronous and completes instantly; no polling is needed.
- source add auto-detects the content type from its argument (URL, YouTube URL, or file path).
- For deep research, use source add-research --mode deep --no-wait and then research wait --import-all in a background task.
- Partial ID prefixes (six or more characters) work with use, source delete, artifact wait, and other ID-based commands.
- For automated workflows, use full UUIDs to avoid prefix collisions.
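The subagent polling pattern the bullets describe can be sketched as a generic wait loop. The check_status callable here is a stand-in for however a subagent runs artifact wait <id> or queries task state; it is not part of the notebooklm-py API, and the intervals are shortened for illustration.

```python
import time

def wait_for_task(check_status, task_id, poll_seconds=30.0, timeout_seconds=2700.0):
    """Poll check_status(task_id) until it reports 'done' or the timeout passes.

    Defaults assume generation can take up to ~45 minutes, per the docs.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if check_status(task_id) == "done":
            return True
        time.sleep(poll_seconds)
    return False

# Stubbed status source for demonstration: reports done on the second check.
calls = {"n": 0}
def fake_status(task_id):
    calls["n"] += 1
    return "done" if calls["n"] >= 2 else "pending"

finished = wait_for_task(fake_status, "task-123", poll_seconds=0.01, timeout_seconds=1.0)
```

Running this loop in a subagent keeps the main conversation loop free while long generations complete.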
Parallel agent isolation
When multiple agents run concurrently on the same machine, they share the ~/.notebooklm/context.json file. Use one of the following strategies to prevent context collisions:
- A separate home directory per agent (set via NOTEBOOKLM_HOME) provides complete isolation, including separate storage files and contexts.
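A minimal sketch of the NOTEBOOKLM_HOME strategy, assuming the variable is read from the process environment: build a distinct environment per agent and pass it to each CLI invocation. The directory layout is illustrative.

```python
import os
import tempfile

def agent_env(agent_name, base_dir):
    """Return an environment dict with a per-agent NOTEBOOKLM_HOME."""
    env = os.environ.copy()
    env["NOTEBOOKLM_HOME"] = os.path.join(base_dir, agent_name)
    os.makedirs(env["NOTEBOOKLM_HOME"], exist_ok=True)
    # Pass as env= to subprocess.run(["notebooklm", ...], env=env)
    return env

base = tempfile.mkdtemp()
env_a = agent_env("agent-a", base)
env_b = agent_env("agent-b", base)
```

Because each agent gets its own home, context files and storage never collide, and no coordination between agents is needed.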