This quickstart guide will walk you through a complete example of using LAFT for anomaly detection. You’ll learn how to load a CLIP model, construct a concept subspace from language prompts, and transform image features for anomaly detection.

Prerequisites

Make sure you have completed the installation steps before proceeding.

Basic Example

Here’s a complete working example that demonstrates the core LAFT workflow:
Step 1: Import and Setup

First, import LAFT and disable PyTorch’s autograd to prevent out-of-memory errors:
import laft
import torch

# Disable autograd (prevents OOM)
torch.set_grad_enabled(False)
Disabling autograd is important when working with large CLIP models to avoid memory issues during inference.
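As a standalone illustration (plain PyTorch, not LAFT-specific), you can also scope gradient disabling to just the inference calls with torch.inference_mode() instead of flipping the global switch:

```python
import torch

x = torch.randn(2, 3)

# Everything computed inside this context skips autograd bookkeeping,
# so no computation graph (and its memory) is retained.
with torch.inference_mode():
    y = x * 2

print(y.requires_grad)  # False
```

Either approach works for this guide; the global switch keeps the remaining snippets shorter.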
Step 2: Load CLIP Model

Load a pre-trained CLIP model and its preprocessing transform:
# Load CLIP model with DFN pre-trained weights
model, transform = laft.load_clip("ViT-B-16-quickgelu:dfn2b")
load_clip() returns both the CLIP model and its image preprocessing transform. The ViT-B-16-quickgelu:dfn2b variant uses a ViT-B/16 Vision Transformer backbone with QuickGELU activations, pre-trained on data curated with DFN (Data Filtering Networks).
Step 3: Prepare Data and Prompts

Load your dataset and define language prompts that describe the concept you want to guide or ignore:
# Get pre-configured prompts for Color MNIST
prompts = laft.prompts.get_prompts("color_mnist", "number")

# Preprocess images
# (assuming raw_images is a list of already-loaded PIL images;
# the CLIP transform operates on one image at a time)
images = torch.stack([transform(img) for img in raw_images])

# Encode images and text
image_features = model.encode_image(images)
text_features = model.encode_text(prompts["all"])
The get_prompts() function returns a dictionary with different prompt sets:
  • "all": All prompts for the concept
  • "normal": Only normal state prompts
  • "exact": Exact matching prompts
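As a structural illustration only (the actual prompt strings ship with laft.prompts; these stand-ins are made up), the returned dictionary maps each set name to a list of prompt strings:

```python
# Hypothetical stand-in for the structure returned by get_prompts();
# the real prompt wording comes from the library, not from this sketch.
prompts = {
    "all": [f"a photo of the number {d}" for d in range(10)],
    "normal": [f"a photo of the number {d}" for d in range(5)],
    "exact": [f"the number {d}" for d in range(10)],
}

print(len(prompts["all"]))   # 10
print(prompts["normal"][0])  # a photo of the number 0
```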
Step 4: Construct Concept Subspace

Build a concept subspace by computing pairwise differences between text embeddings and applying PCA:
# Compute pairwise differences between prompts
pair_diffs = laft.prompt_pair(text_features)

# Extract principal components as concept basis
concept_basis = laft.pca(pair_diffs, n_components=24)
The prompt_pair() function computes differences between all pairs of prompts, which helps capture semantic directions in the embedding space. PCA then extracts the most important directions as a basis for the concept subspace.
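To make the two calls above concrete, here is a minimal sketch of pairwise differencing plus PCA in plain PyTorch. This is an illustration of the idea, not the library's implementation: the real laft.prompt_pair and laft.pca may differ in details such as normalization or centering.

```python
import torch

def prompt_pair_sketch(text_features: torch.Tensor) -> torch.Tensor:
    # Differences t_i - t_j for every unordered pair of prompt embeddings;
    # each difference points along a semantic direction between two prompts.
    n = text_features.shape[0]
    i, j = torch.triu_indices(n, n, offset=1)
    return text_features[i] - text_features[j]

def pca_sketch(x: torch.Tensor, n_components: int) -> torch.Tensor:
    # Principal directions of the centered difference vectors via SVD;
    # the returned rows form an orthonormal basis of the concept subspace.
    x = x - x.mean(dim=0, keepdim=True)
    _, _, vh = torch.linalg.svd(x, full_matrices=False)
    return vh[:n_components]

text_features = torch.randn(10, 512)  # 10 prompts, 512-d embeddings
basis = pca_sketch(prompt_pair_sketch(text_features), n_components=4)
print(basis.shape)  # torch.Size([4, 512])
```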
Step 5: Transform Features

Apply language-assisted feature transformation to guide or ignore the concept:
# Guide: project features onto concept subspace
guided_features = laft.inner(image_features, concept_basis)

# Ignore: project features orthogonal to concept subspace
ignored_features = laft.orthogonal(image_features, concept_basis)
Choose the transformation based on your use case:
  • Use inner() when you want to guide detection toward the concept
  • Use orthogonal() when you want to ignore the concept
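Under the hood these are plain subspace projections. A minimal sketch, assuming the concept basis has orthonormal rows (the real laft.inner / laft.orthogonal may differ in details, e.g. returning subspace coordinates rather than re-projected vectors):

```python
import torch

def inner_sketch(features: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    # Keep only the component of each feature inside the concept subspace.
    return (features @ basis.T) @ basis

def orthogonal_sketch(features: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    # Remove the concept component, keeping the orthogonal remainder.
    return features - inner_sketch(features, basis)

features = torch.randn(8, 512)
q, _ = torch.linalg.qr(torch.randn(512, 4))  # orthonormal columns
basis = q.T                                  # orthonormal rows, shape (4, 512)

guided = inner_sketch(features, basis)
ignored = orthogonal_sketch(features, basis)

# The two parts sum back to the original features.
print(torch.allclose(guided + ignored, features, atol=1e-5))  # True
```

The decomposition is exact: every feature splits into a concept component (used for guiding) and an orthogonal remainder (used for ignoring).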
Step 6: Detect Anomalies

Use the transformed features for anomaly detection with k-NN:
# Use k-NN for anomaly scoring
anomaly_scores = laft.knn(
    train_features=guided_features[train_indices],
    test_features=guided_features[test_indices],
    n_neighbors=30
)

# Evaluate with metrics
from laft.metrics import binary_auroc, binary_auprc

auroc = binary_auroc(anomaly_scores, labels)
auprc = binary_auprc(anomaly_scores, labels)

print(f"AUROC: {auroc:.3f}")
print(f"AUPRC: {auprc:.3f}")
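The scoring step is a standard deep-feature k-NN scheme. A self-contained sketch of the idea (laft.knn may use a different distance, e.g. cosine similarity on normalized features):

```python
import torch

def knn_score_sketch(train: torch.Tensor, test: torch.Tensor,
                     n_neighbors: int = 30) -> torch.Tensor:
    # Anomaly score = mean Euclidean distance to the k nearest train
    # features; samples far from everything seen in training score high.
    dists = torch.cdist(test, train)  # (n_test, n_train)
    knn_dists, _ = dists.topk(n_neighbors, dim=1, largest=False)
    return knn_dists.mean(dim=1)

train = torch.randn(200, 16)            # normal training features
inlier = torch.zeros(1, 16)             # near the training cloud
outlier = torch.full((1, 16), 9.0)      # far from the training cloud
scores = knn_score_sketch(train, torch.cat([inlier, outlier]), n_neighbors=5)
print(scores[1] > scores[0])  # tensor(True)
```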

Complete Script

Here’s the complete code in one place:
import laft
import torch

# Setup
torch.set_grad_enabled(False)

# Load model
model, transform = laft.load_clip("ViT-B-16-quickgelu:dfn2b")

# Get prompts
prompts = laft.prompts.get_prompts("color_mnist", "number")

# Encode features (assuming images are already loaded)
image_features = model.encode_image(images)
text_features = model.encode_text(prompts["all"])

# Construct concept subspace
pair_diffs = laft.prompt_pair(text_features)
concept_basis = laft.pca(pair_diffs, n_components=24)

# Transform features
guided_features = laft.inner(image_features, concept_basis)

# Detect anomalies
anomaly_scores = laft.knn(
    train_features=guided_features[train_indices],
    test_features=guided_features[test_indices],
    n_neighbors=30
)

# Evaluate
from laft.metrics import binary_auroc

auroc = binary_auroc(anomaly_scores, labels)
print(f"AUROC: {auroc:.3f}")

Different Use Cases

For semantic datasets like Waterbirds or CelebA:
# Load Waterbirds dataset
model, data = laft.get_clip_cached_features(
    model_name="ViT-B-16-quickgelu:dfn2b",
    dataset_name="waterbirds",
    splits=["train", "test"]
)

# Get prompts for bird detection (guide)
prompts = laft.prompts.get_prompts("waterbirds", "bird")
text_features = model.encode_text(prompts["all"])

# Build concept subspace
pair_diffs = laft.prompt_pair(text_features)
concept_basis = laft.pca(pair_diffs, n_components=24)

# Transform and evaluate
train_features, train_attrs = data["train"]
test_features, test_attrs = data["test"]

guided_train = laft.inner(train_features, concept_basis)
guided_test = laft.inner(test_features, concept_basis)

Running the Scripts

The repository includes pre-configured scripts for different datasets. Here are some examples:
python scripts/semantic/laft.py \
  -m ViT-B-16-quickgelu:dfn2b \
  -d color_mnist \
  -g guide_number \
  -o results/color_mnist.txt \
  --mnist-seed 42

Next Steps

  • Core Concepts: Learn the theory behind LAFT
  • Datasets: Explore available datasets
  • API Reference: Dive into the API documentation
  • Guides: Follow detailed usage guides
