Documentation Index
Fetch the complete documentation index at: https://mintlify.com/teng-lin/notebooklm-py/llms.txt
Use this file to discover all available pages before exploring further.
client.chat lets you interact with a notebook’s indexed content through natural language. Each call to ask() returns a typed AskResult that includes the answer text, inline citation markers, and a list of ChatReference objects pointing back to specific passages in your sources. You can continue any conversation across multiple turns by passing the conversation_id returned from the previous call.
Methods
ask(notebook_id, question, source_ids, conversation_id)
Asks a question against the notebook and returns an answer with source citations.
notebook_id: The notebook ID to query.
question: The question to ask, phrased in natural language.
source_ids: Restrict the answer to specific sources. None uses all sources in the notebook.
conversation_id: Pass the conversation_id from a previous AskResult to continue that conversation. Each follow-up builds on prior context.
Returns: A typed AskResult containing the answer, conversation metadata, and citation references.
- Single question
- Multi-turn conversation
- Source filtering
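The call pattern for the three cases above can be sketched as follows. Because this sketch cannot reach a live NotebookLM backend, it uses a stand-in client and made-up IDs; only `answer`, `conversation_id`, and `references` are field names taken from this page, and the stub is illustrative, not the library's implementation.

```python
from dataclasses import dataclass, field

# Stand-in result and client so the call pattern can run without a
# live NotebookLM backend. In real code, use the notebooklm-py client.
@dataclass
class StubAskResult:
    answer: str
    conversation_id: str
    references: list = field(default_factory=list)

class StubChat:
    def ask(self, notebook_id, question, source_ids=None, conversation_id=None):
        # A real client sends the question to the notebook's index and
        # returns a server-assigned conversation ID on the first turn.
        cid = conversation_id or "conv-001"
        return StubAskResult(answer=f"Answer to: {question} [1]", conversation_id=cid)

chat = StubChat()

# Single question: every source in the notebook is eligible.
first = chat.ask("nb-123", "What are the key findings?")

# Multi-turn: pass the previous conversation_id so the follow-up
# builds on prior context.
follow = chat.ask("nb-123", "Expand on the second finding.",
                  conversation_id=first.conversation_id)

# Source filtering: restrict the answer to two specific sources.
scoped = chat.ask("nb-123", "Summarize the methodology.",
                  source_ids=["src-a", "src-b"])
```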
configure(notebook_id, goal, response_length, custom_prompt)
Sets the chat persona for a notebook. This changes how the assistant formulates answers — its tone, depth, and focus.
notebook_id: The notebook ID.
goal: Controls the assistant’s purpose. DEFAULT (general), LEARNING_GUIDE (educational focus), CUSTOM (uses custom_prompt).
response_length: Controls answer verbosity. DEFAULT · LONGER · SHORTER.
custom_prompt: System prompt used when goal=ChatGoal.CUSTOM. Ignored for other goal values.
Returns: True when configuration was saved successfully.
get_history(notebook_id, limit, conversation_id)
Returns a list of question-answer pairs from a recent conversation.
The notebook ID.
Maximum number of turns to return.
conversation_id: Specific conversation to fetch. When None, returns history from the most recent conversation.
Returns: List of (question, answer) string pairs, oldest first.
get_conversation_id(notebook_id)
Fetches the ID of the most recent conversation for a notebook from the server.
The notebook ID.
Returns: The conversation ID string, or None if no conversation has been started.
Working with citations
The answer field contains inline citation markers like [1], [2]. Each marker corresponds to a ChatReference in result.references.
Use SourceFulltext.find_citation_context() to locate the citation in the original indexed text.
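Before looking anything up, you may want the list of markers an answer actually contains. A small helper (not part of notebooklm-py) can pull the numeric markers out of an answer string so each one can be matched against result.references:

```python
import re

def extract_citation_numbers(answer: str) -> list[int]:
    """Return the numeric citation markers, e.g. [1], [2], in order of appearance."""
    return [int(n) for n in re.findall(r"\[(\d+)\]", answer)]

answer = "The study found X [1] and later confirmed it [2][3]."
print(extract_citation_numbers(answer))  # [1, 2, 3]
```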
AskResult dataclass
- answer: The answer text with inline citation markers such as [1], [2].
- conversation_id: Conversation identifier. Pass to subsequent ask() calls to continue the conversation.
- The turn index within this conversation (starts at 1).
- True when a conversation_id was passed and this is a continuation.
- references: Source references cited in the answer. See ChatReference fields below.
- First 1000 characters of the raw API response. Useful for debugging.
ChatReference dataclass
- UUID of the source containing the cited passage.
- The numeric citation marker ([1], [2], …) that appears in the answer text.
- A snippet or section header from the cited passage. May not be the full quote.
- Start position in NotebookLM’s internal chunk index (not raw fulltext).
- End position in NotebookLM’s internal chunk index.
- Internal chunk identifier, useful for debugging.
Chat configuration enums
ChatGoal
| Value | Description |
|---|---|
| ChatGoal.DEFAULT | General-purpose assistant |
| ChatGoal.CUSTOM | Uses the custom_prompt you provide |
| ChatGoal.LEARNING_GUIDE | Educational focus with structured answers |
ChatResponseLength
| Value | Description |
|---|---|
| ChatResponseLength.DEFAULT | Standard length |
| ChatResponseLength.LONGER | More detailed answers |
| ChatResponseLength.SHORTER | Concise answers |
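The two enums can be pictured as plain Python Enums. The real classes ship with notebooklm-py, so in actual code you would import them rather than redefine them, and the string values below are placeholders for illustration:

```python
from enum import Enum

class ChatGoal(Enum):
    DEFAULT = "DEFAULT"                # general-purpose assistant
    CUSTOM = "CUSTOM"                  # uses the custom_prompt you provide
    LEARNING_GUIDE = "LEARNING_GUIDE"  # educational focus, structured answers

class ChatResponseLength(Enum):
    DEFAULT = "DEFAULT"  # standard length
    LONGER = "LONGER"    # more detailed answers
    SHORTER = "SHORTER"  # concise answers

# A configure() call then pairs a goal with a length, e.g.
# (client assumed to be an authenticated notebooklm-py client):
# ok = client.chat.configure("nb-123",
#                            goal=ChatGoal.LEARNING_GUIDE,
#                            response_length=ChatResponseLength.SHORTER)
print(ChatGoal.CUSTOM.name, ChatResponseLength.SHORTER.name)
```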
ChatMode
ChatMode is a service-level enum providing named presets for common configurations. It is distinct from ChatGoal, which is the low-level RPC enum used by configure().
| Value | Description |
|---|---|
| ChatMode.DEFAULT | General purpose |
| ChatMode.LEARNING_GUIDE | Educational focus |
| ChatMode.CONCISE | Brief responses |
| ChatMode.DETAILED | Verbose responses |
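One way to think about the presets is as (goal, response_length) pairs. The mapping below is an assumption inferred from the descriptions above, not the library's actual implementation; verify it against notebooklm-py before relying on it:

```python
# Hypothetical preset-to-configuration mapping (illustration only).
PRESETS = {
    "DEFAULT":        ("DEFAULT", "DEFAULT"),
    "LEARNING_GUIDE": ("LEARNING_GUIDE", "DEFAULT"),
    "CONCISE":        ("DEFAULT", "SHORTER"),
    "DETAILED":       ("DEFAULT", "LONGER"),
}

goal, length = PRESETS["CONCISE"]
print(goal, length)  # DEFAULT SHORTER
```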