End-to-end workflow patterns for research automation, content generation pipelines, CI/CD integration, and bulk source management using the CLI and Python API.
notebooklm-py is designed to be composed into repeatable pipelines. Each workflow below shows a complete end-to-end flow—from creating a notebook to downloading the final output—using the CLI, Python API, or both. The patterns scale from quick one-off tasks to fully automated CI/CD jobs.
This workflow discovers sources on a topic using the deep research agent and then generates an audio overview. It is the simplest approach when you are running it interactively and can wait five to ten minutes.
1
Create and activate a notebook
notebooklm create "Climate Change Research"
notebooklm use <notebook_id>
2
Run deep research and import the results
The --import-all flag imports everything the research agent finds. Omit --no-wait here so the command blocks until research completes (up to five minutes).
notebooklm source add-research "climate change policy 2024" --mode deep --import-all
3
Generate and download the podcast
notebooklm generate audio "Focus on policy solutions and future outlook" \
  --format debate --wait
notebooklm download audio ./climate-podcast.mp3
When an AI agent is driving the workflow, avoid blocking the main conversation with a long --wait. Instead, start research in a non-blocking way and handle completion in a subagent.
This pattern is preferred for LLM agents because deep research can take 15 to 30 minutes and would otherwise block the entire conversation.
1
Create the notebook and add a seed source
notebooklm create "Climate Change Research"
notebooklm use <notebook_id>
notebooklm source add "https://en.wikipedia.org/wiki/Climate_change"
2
Start deep research without waiting
notebooklm source add-research "climate change policy 2024" --mode deep --no-wait
# Returns immediately with a status message
3
Wait and import in a subagent
Spawn a background agent with the following command. It blocks until research completes, then imports all discovered sources automatically.
notebooklm research wait --import-all --timeout 1800 -n <notebook_id>
4
Generate the podcast once sources are ready
notebooklm generate audio --format debate --json -n <notebook_id>
# Parse the task_id from JSON output, then wait in another subagent:
notebooklm artifact wait <task_id> -n <notebook_id> --timeout 1200
notebooklm download audio ./climate-podcast.mp3 -n <notebook_id>
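The "parse the task_id" step above can be sketched in Python. The sample payload below is illustrative: only the task_id field is assumed from the step above; the status field is a guess at the payload shape, not documented behavior.

```python
import json

# Illustrative --json output from `notebooklm generate audio --json`.
# Only "task_id" is relied on here; "status" is a hypothetical field.
sample_output = '{"task_id": "abc123", "status": "in_progress"}'

task_id = json.loads(sample_output)["task_id"]
print(task_id)  # pass this to: notebooklm artifact wait <task_id>
```

In an agent harness, substitute the captured stdout of the generate command for sample_output before handing task_id to the waiting subagent.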
This complete Python example shows the full research-to-podcast pipeline: create a notebook, add sources, run research, wait for sources to index, then generate and download an audio overview.
import asyncio

from notebooklm import NotebookLMClient, AudioFormat, AudioLength


async def research_pipeline(topic: str, seed_url: str, output_path: str):
    async with await NotebookLMClient.from_storage() as client:
        # 1. Create notebook
        nb = await client.notebooks.create(f"Research: {topic}")
        print(f"Notebook: {nb.id}")

        # 2. Add seed source
        source = await client.sources.add_url(nb.id, seed_url)
        print(f"Seed source: {source.id}")

        # 3. Start deep web research (non-blocking)
        research = await client.research.start(nb.id, topic, source="web", mode="deep")
        task_id = research["task_id"]
        print(f"Research started: {task_id}")

        # 4. Poll until research completes
        while True:
            status = await client.research.poll(nb.id)
            if status["status"] == "completed":
                print(f"Research complete. Found {len(status['sources'])} sources.")
                break
            print(f"Research in progress… ({status['status']})")
            await asyncio.sleep(15)

        # 5. Import discovered sources
        imported = await client.research.import_sources(
            nb.id, task_id, status["sources"]
        )
        print(f"Imported {len(imported)} sources")

        # 6. Chat with the sources
        result = await client.chat.ask(nb.id, f"Summarize the key findings on {topic}")
        print(result.answer[:500])

        # 7. Generate a podcast
        gen_status = await client.artifacts.generate_audio(
            nb.id,
            audio_format=AudioFormat.DEEP_DIVE,
            audio_length=AudioLength.DEFAULT,
        )
        print(f"Generation task: {gen_status.task_id}")

        # 8. Wait for completion
        final = await client.artifacts.wait_for_completion(
            nb.id, gen_status.task_id, timeout=1200, poll_interval=15
        )
        if final.is_complete:
            path = await client.artifacts.download_audio(nb.id, output_path)
            print(f"Podcast saved to: {path}")
        else:
            print(f"Generation did not complete in time: {final.status}")


asyncio.run(research_pipeline(
    topic="AI safety regulations",
    seed_url="https://en.wikipedia.org/wiki/AI_safety",
    output_path="./ai-safety-podcast.mp3",
))
To set up NOTEBOOKLM_AUTH_JSON: run notebooklm login locally, then copy the contents of ~/.notebooklm/profiles/default/storage_state.json into a repository secret named NOTEBOOKLM_AUTH_JSON.
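In a CI job, the secret can be written back to the path that notebooklm login normally populates before any notebooklm command runs. A minimal sketch, assuming the CLI reads the same storage_state.json path it writes locally; the placeholder JSON is only so the snippet runs standalone and is not the real file format guarantee:

```python
import json
import os
from pathlib import Path

# In CI, NOTEBOOKLM_AUTH_JSON is injected from the repository secret;
# the placeholder below is illustrative for standalone runs.
auth_json = os.environ.get("NOTEBOOKLM_AUTH_JSON", '{"cookies": [], "origins": []}')
json.loads(auth_json)  # fail fast if the secret is missing a brace or was truncated

# Recreate the file that `notebooklm login` writes locally.
auth_path = Path.home() / ".notebooklm" / "profiles" / "default" / "storage_state.json"
auth_path.parent.mkdir(parents=True, exist_ok=True)
auth_path.write_text(auth_json)
print(f"Restored auth state to {auth_path}")
```

Run this as the first step of the job (before any CLI call or NotebookLMClient.from_storage()); the early json.loads check turns a corrupted secret into an immediate, readable failure instead of an opaque auth error later in the pipeline.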