

client.research gives you programmatic access to NotebookLM’s research agents. You start a research session with a natural-language query, poll until results are ready, and then selectively import the discovered web pages or Drive documents as notebook sources. The web agent supports both a fast mode and a deep-research mode for more thorough exploration.

Methods

start(notebook_id, query, source, mode)

Starts a new research session and returns immediately with a task ID for polling.
async def start(
    notebook_id: str,
    query: str,
    source: str = "web",
    mode: str = "fast",
) -> dict
notebook_id (str, required): The notebook ID to associate research with.
query (str, required): Natural language research question or topic.
source (str, default "web"): Data source to search. Valid values: "web" or "drive".
mode (str, default "fast"): Research depth. "fast" for a quick search; "deep" for a thorough multi-step investigation. Deep mode is only available when source="web".

Returns: dict with keys task_id (str), report_id (str), notebook_id (str), query (str), mode (str).
Raises: ValueError when an invalid source/mode combination is used (e.g. source="drive" with mode="deep").
result = await client.research.start(nb_id, "AI safety regulations 2025")
task_id = result["task_id"]
Google Drive research is always performed in fast mode. Passing mode="deep" with source="drive" raises a ValueError.
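The source/mode constraint can be checked client-side before calling start(). A minimal sketch of that validation, based only on the combinations documented above (the helper and the VALID_MODES table are illustrative, not part of the library):

```python
# Allowed mode values per source, per the reference table below.
VALID_MODES = {"web": {"fast", "deep"}, "drive": {"fast"}}

def validate_research_args(source: str, mode: str) -> None:
    """Raise ValueError for combinations that start() rejects."""
    allowed = VALID_MODES.get(source)
    if allowed is None:
        raise ValueError(f"Unknown source: {source!r}")
    if mode not in allowed:
        raise ValueError(f"mode={mode!r} is not valid for source={source!r}")

validate_research_args("web", "deep")    # OK
validate_research_args("drive", "fast")  # OK
```

Running this check before the API call surfaces bad combinations early, without waiting on a network round trip.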

poll(notebook_id)

Checks the status of the most recent research session for a notebook.
async def poll(notebook_id: str) -> dict
notebook_id (str, required): The notebook ID.

Returns: dict with keys:
  • task_id (str) — the research task ID
  • status (str) — "completed", "in_progress", or "no_research"
  • query (str) — the original query
  • sources (list) — discovered sources, each a dict with url and title
  • summary (str) — brief AI-generated summary of findings
status = await client.research.poll(nb_id)
print(status["status"])   # "in_progress" | "completed" | "no_research"
print(status["summary"])
for src in status["sources"]:
    print(f"  {src['title']}: {src['url']}")

import_sources(notebook_id, task_id, sources)

Imports a selection of sources discovered during a research session into the notebook.
async def import_sources(
    notebook_id: str,
    task_id: str,
    sources: list[dict],
) -> list[dict]
notebook_id (str, required): The notebook ID.
task_id (str, required): The task_id from start() or poll().
sources (list[dict], required): List of source dicts to import. Each dict must have url (str) and title (str) keys; these come from the sources list returned by poll().

Returns: list[dict] of imported source records, each containing id and title.
imported = await client.research.import_sources(
    nb_id,
    task_id,
    status["sources"][:5],  # Import first 5 discovered sources
)
print(f"Imported {len(imported)} sources")
for src in imported:
    print(f"  {src['id']}: {src['title']}")
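If you want to trim the discovered list before importing, a small helper can drop repeated URLs while preserving order. This is an illustrative sketch, not a library function; it only assumes the documented url/title shape of each source dict:

```python
def dedupe_sources(sources: list[dict]) -> list[dict]:
    """Keep the first occurrence of each URL, preserving order."""
    seen: set[str] = set()
    unique: list[dict] = []
    for src in sources:
        if src["url"] not in seen:
            seen.add(src["url"])
            unique.append(src)
    return unique

# e.g. imported = await client.research.import_sources(
#     nb_id, task_id, dedupe_sources(status["sources"])[:5]
# )
```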

Polling loop example

The most common pattern is to start research, poll in a loop, and import once complete:
import asyncio
from notebooklm import NotebookLMClient

async def main():
    async with await NotebookLMClient.from_storage() as client:
        nb_id = "your_notebook_id"

        # Start research
        result = await client.research.start(nb_id, "AI safety regulations")
        task_id = result["task_id"]

        # Poll until done
        while True:
            status = await client.research.poll(nb_id)
            if status["status"] == "completed":
                break
            if status["status"] == "no_research":
                print("No research session found")
                return
            print(f"Still researching... ({len(status.get('sources', []))} found so far)")
            await asyncio.sleep(10)

        print(f"Research complete: {status['summary']}")
        print(f"Found {len(status['sources'])} sources")

        # Import the top 5 results
        imported = await client.research.import_sources(
            nb_id, task_id, status["sources"][:5]
        )
        print(f"Imported {len(imported)} sources into notebook")

asyncio.run(main())
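The loop above polls indefinitely. If you want a bound on total wait time, a generic helper can wrap any async status check with a deadline. This sketch is not part of the library; it takes the poll call as a parameter so it stays independent of the client:

```python
import asyncio
import time
from typing import Awaitable, Callable

async def wait_for_completion(
    poll_fn: Callable[[], Awaitable[dict]],
    interval: float = 10.0,
    timeout: float = 600.0,
) -> dict:
    """Poll until the status is terminal or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        status = await poll_fn()
        if status["status"] in ("completed", "no_research"):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError("research did not finish within the timeout")
        await asyncio.sleep(interval)

# e.g. status = await wait_for_completion(lambda: client.research.poll(nb_id))
```

Raising TimeoutError rather than returning a partial status keeps the caller's happy path free of sentinel checks.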

Non-blocking pattern

If you want to start research and do other work while it runs, kick off the session, continue with other calls, and poll for completion later:
import asyncio
from notebooklm import NotebookLMClient

async def main():
    async with await NotebookLMClient.from_storage() as client:
        nb_id = "your_notebook_id"

        # Start research (returns immediately)
        result = await client.research.start(nb_id, "climate policy 2025")
        task_id = result["task_id"]

        # Do other work while research runs
        notebooks = await client.notebooks.list()
        print(f"You have {len(notebooks)} notebooks")

        # Check status later
        status = await client.research.poll(nb_id)
        while status["status"] == "in_progress":
            await asyncio.sleep(15)
            status = await client.research.poll(nb_id)

        # Import findings
        if status["status"] == "completed":
            await client.research.import_sources(nb_id, task_id, status["sources"])

asyncio.run(main())

Source and mode reference

| source  | Valid mode values | Notes |
| ------- | ----------------- | ----- |
| "web"   | "fast", "deep"    | Default. Deep mode performs multi-step searching. |
| "drive" | "fast"            | Drive research always uses fast mode. |

poll() return structure

| Field   | Type       | Description |
| ------- | ---------- | ----------- |
| task_id | str        | Task identifier from start() |
| status  | str        | "completed", "in_progress", or "no_research" |
| query   | str        | The original research query |
| sources | list[dict] | Each dict has url and title |
| summary | str        | AI-generated summary of the research findings |
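For editors and static type checkers, the shape above can be captured as a TypedDict. This is an assumption for illustration, not a type shipped by the library:

```python
from typing import TypedDict

class ResearchSource(TypedDict):
    url: str
    title: str

class ResearchStatus(TypedDict):
    task_id: str
    status: str  # "completed" | "in_progress" | "no_research"
    query: str
    sources: list[ResearchSource]
    summary: str

status: ResearchStatus = {
    "task_id": "t123",
    "status": "completed",
    "query": "AI safety regulations",
    "sources": [{"url": "https://example.com", "title": "Example"}],
    "summary": "Brief findings...",
}
```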
