
The open-source search engine for AI

Chroma is the fastest way to build Python or JavaScript LLM apps that search over your data. Whether you’re building a chatbot, semantic search, or RAG application, Chroma makes it simple to store embeddings and search by nearest neighbors.

Simple

Fully-typed, fully-tested, fully-documented == happiness

Dev, Test, Prod

The same API that runs in your Python notebook scales to your cluster

Feature-rich

Queries, filtering, regex and more

Free & Open Source

Apache 2.0 Licensed

Quick Example

The core API is only 4 functions:
import chromadb
# Setup Chroma in-memory, for easy prototyping. Can add persistence easily!
client = chromadb.Client()

# Create collection. get_collection, get_or_create_collection, delete_collection also available!
collection = client.create_collection("all-my-documents")

# Add docs to the collection. Can also update and delete. Row-based API coming soon!
collection.add(
    documents=["This is document1", "This is document2"],
    # We handle tokenization, embedding, and indexing automatically.
    # You can also skip that and add your own embeddings.
    metadatas=[{"source": "notion"}, {"source": "google-docs"}],  # filter on these!
    ids=["doc1", "doc2"],  # unique for each doc
)

# Query/search 2 most similar results. You can also .get by id
results = collection.query(
    query_texts=["This is a query document"],
    n_results=2,
    # where={"metadata_field": "is_equal_to_this"}, # optional filter
    # where_document={"$contains":"search_string"}  # optional filter
)
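The query above returns a dictionary of lists, with one inner list per query text. A minimal sketch of unpacking it, using illustrative placeholder values rather than real query output:

```python
# Shape of a typical query result (placeholder values for illustration).
results = {
    "ids": [["doc1", "doc2"]],
    "documents": [["This is document1", "This is document2"]],
    "metadatas": [[{"source": "notion"}, {"source": "google-docs"}]],
    "distances": [[0.21, 0.48]],  # smaller distance = more similar
}

# Index 0 corresponds to the first (and here, only) query text.
for doc_id, doc, meta in zip(
    results["ids"][0], results["documents"][0], results["metadatas"][0]
):
    print(f"{doc_id} ({meta['source']}): {doc}")
```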

What are embeddings?

  • Literal: Embedding something turns it from image/text/audio into a list of numbers. 🖼️ or 📄 => [1.2, 2.1, ....]. This process makes documents “understandable” to a machine learning model.
  • By analogy: An embedding represents the essence of a document. This enables documents and queries with the same essence to be “near” each other and therefore easy to find.
  • Technical: An embedding is the latent-space position of a document at a layer of a deep neural network. For models trained specifically to embed data, this is the last layer.
  • A small example: If you search your photos for “famous bridge in San Francisco”, embedding the query and comparing it to the embeddings of your photos and their metadata should return photos of the Golden Gate Bridge.
Chroma allows you to store these embeddings and search by nearest neighbors rather than by substrings, as a traditional database would. By default, Chroma uses Sentence Transformers to embed for you, but you can also use OpenAI embeddings, Cohere (multilingual) embeddings, or your own.
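Nearest-neighbor search over embeddings boils down to a similarity measure between vectors. A minimal sketch with cosine similarity on toy 3-dimensional vectors (real embeddings have hundreds of dimensions, and Chroma's index computes this for you):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: the query points in nearly the same direction as doc1.
query = [1.0, 2.0, 0.5]
doc1 = [0.9, 2.1, 0.4]   # similar "essence" to the query
doc2 = [-1.0, 0.1, 3.0]  # unrelated

print(cosine_similarity(query, doc1))  # close to 1.0
print(cosine_similarity(query, doc2))  # close to 0.0
```

A nearest-neighbor query simply returns the documents whose embeddings score highest against the query embedding.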

Integrations

Chroma integrates with popular LLM frameworks such as LangChain and LlamaIndex.

Use case: ChatGPT for ______

For example, the “Chat your data” use case:
  1. Add documents to your database. You can pass in your own embeddings, embedding function, or let Chroma embed them for you.
  2. Query relevant documents with natural language.
  3. Compose the documents into the context window of an LLM like GPT-4 for additional summarization or analysis.
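The three steps above can be sketched end-to-end. The prompt-building helper below is illustrative, not part of Chroma's API (Chroma handles storage and retrieval; prompt construction is up to your application), and the actual LLM call is left out:

```python
def build_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Compose retrieved documents into an LLM context window.

    Illustrative helper: Chroma returns the documents (step 2);
    how you assemble the prompt (step 3) is your choice.
    """
    context = "\n\n".join(f"- {doc}" for doc in retrieved_docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Step 2's collection.query(...) would normally supply these documents.
docs = ["Chroma stores embeddings.", "Queries return nearest neighbors."]
prompt = build_prompt("What does Chroma store?", docs)
# Step 3: send `prompt` to an LLM such as GPT-4 for summarization or analysis.
```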

Chroma Cloud

Our hosted service, Chroma Cloud, powers serverless vector, hybrid, and full-text search. It’s extremely fast, cost-effective, scalable and painless. Create a DB and try it out in under 30 seconds with $5 of free credits. Get started with Chroma Cloud →

Next Steps

Quickstart

Get up and running in 5 minutes

Installation

Detailed installation guide for all platforms

Deployment

Deploy Chroma to production

API Reference

Explore the full API documentation
