Prism turns your personal document collection into a searchable, conversational knowledge base. This guide walks you through creating an account, uploading your first file, and getting your first AI-grounded answer.
1. Create your account

Go to the Prism app and click Sign up. You have two options:
  • Email and password — enter your name, email address, and a password, then click Sign Up.
  • Google OAuth — click Continue with Google to sign in with your Google account in one step.
After registering with email and password, you’ll be redirected to the login page. Sign in with your new credentials to reach the dashboard.
2. Log in to your dashboard

On the login page, sign in using email and password or Continue with Google. Successful authentication takes you directly to your dashboard.
New accounts start on the Free plan, which includes up to 10 documents and 5 GB of storage. You can upgrade to Pro (500 documents, 15 GB) or Enterprise (unlimited) later.
3. Upload your first file

From the dashboard, click Upload and drag a file into the drop zone, or click to browse. Prism supports:
  • Documents: PDF, DOCX, MD, TXT
  • Images: JPG, JPEG, PNG, GIF, WEBP, BMP, SVG
  • Code: JS, JSX, TS, TSX, PY, JAVA, CPP, C, H, HPP, CS, RB, GO, RS, PHP, SWIFT, KT, SCALA, R, CSS, SCSS, SASS, HTML, XML, JSON, YAML, YML, SQL, SH, BASH, PS1, BAT, CMAKE, Dockerfile
Once uploaded, Prism automatically extracts the text content, splits it into semantic chunks, generates 768-dimensional embeddings using Gemini’s text-embedding-004 model, and indexes everything in Qdrant. Images are handled differently: Gemini Vision generates a detailed text description that becomes the searchable content. Processing typically completes within a few seconds for most files.
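The ingestion steps above can be sketched in miniature. This is a conceptual illustration only, not Prism's implementation: the chunk sizes are made up, and `embed` is a deterministic stand-in for the real call to the text-embedding-004 model (which is what actually produces the 768-dimensional vectors):

```python
# Conceptual ingestion sketch: split extracted text into overlapping
# word chunks, embed each chunk, and collect (chunk, vector) pairs
# ready for indexing. embed() is a hash-based placeholder, NOT a real
# embedding model.
import hashlib
import math

def chunk_text(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into chunks of `size` words, adjacent chunks sharing `overlap` words."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(chunk: str, dims: int = 768) -> list[float]:
    """Placeholder: deterministic unit vector derived from a hash.
    A real pipeline would call the embedding model here."""
    digest = hashlib.sha256(chunk.encode()).digest()
    vec = [(digest[i % len(digest)] - 128) / 128 for i in range(dims)]
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

# Index every chunk of a document's extracted text.
index = [(c, embed(c)) for c in chunk_text("some long extracted document text " * 20)]
```

The overlap between chunks is a common retrieval trick: it keeps sentences that straddle a chunk boundary searchable from either side.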
4. Run your first semantic search

Click Search in the sidebar and type a question or phrase in natural language, for example “transformer attention mechanisms” or “authentication middleware”. Prism converts your query into an embedding and retrieves the most semantically similar chunks from your library. Results show the matched document name, file type, and the relevant excerpt. Unlike keyword search, this finds content based on meaning, so “neural network training loop” will surface Python files containing optimizer.step() even if the phrase never appears verbatim.
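Under the hood, "most semantically similar" usually means cosine similarity between the query vector and each stored chunk vector. A minimal sketch of that ranking step, with tiny 2-dimensional toy vectors standing in for real 768-dimensional embeddings:

```python
# Toy retrieval sketch (not Prism's implementation): rank stored chunk
# vectors by cosine similarity to the query vector.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product of the vectors over the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, index, k=3):
    """index is a list of (chunk_text, vector) pairs; return the k best matches."""
    scored = [(cosine(query_vec, vec), chunk) for chunk, vec in index]
    return sorted(scored, reverse=True)[:k]

# With real embeddings, semantically related chunks score highest even
# when they share no keywords with the query.
index = [("optimizer.step() in training loop", [0.9, 0.1]),
         ("CSS grid layout tips", [0.1, 0.9])]
print(top_k([0.8, 0.2], index, k=1))  # the training-loop chunk ranks first
```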
5. Start a RAG chat session

Click Chat in the sidebar and type a question about your documents. Prism retrieves the most relevant chunks from your library, passes them as context to Gemini, and streams back a grounded answer that cites which document sections it drew from.
Ask specific questions for best results — for example, “What are the key findings in my Q3 report?” rather than “Tell me about my documents.”
Chat sessions use RAG mode by default: every response is anchored to your actual content. If Prism cannot find relevant context in your library, it will say so clearly rather than hallucinating an answer.
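The grounding step described above amounts to assembling retrieved excerpts into the model's context. This is a hedged sketch of the general RAG pattern, not Prism's actual prompt; `build_rag_prompt` and its wording are illustrative:

```python
# Generic RAG prompt assembly (illustrative, not Prism's prompt):
# retrieved chunks are stitched into a context block, and the model is
# instructed to answer only from that context or say it cannot.
def build_rag_prompt(question: str, chunks: list[tuple[str, str]]) -> str:
    """chunks is a list of (source_document, excerpt) pairs."""
    context = "\n\n".join(f"[{doc}]\n{text}" for doc, text in chunks)
    return (
        "Answer the question using only the context below. "
        "Cite the source document for each claim. If the context "
        "does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

prompt = build_rag_prompt(
    "What are the key findings in my Q3 report?",
    [("q3-report.pdf", "Revenue grew 12% quarter over quarter...")],
)
```

The explicit "say so" instruction is what lets a RAG system decline to answer instead of hallucinating when retrieval finds nothing relevant.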
Free plan limits: 10 documents and 5 GB storage. Uploading beyond these limits requires upgrading to Pro or Enterprise. See Plans and limits for details.

What to explore next

Semantic search

Learn how vector similarity search works and how to filter by file type or category

RAG chat

Understand how Prism grounds answers in your documents and cites sources

Multimodal support

See how images are described and made searchable by Gemini Vision

Vector insights

Visualize relationships between your documents on an interactive force graph
