Language-Assisted Feature Transformation
LAFT transforms visual features under natural-language guidance for improved anomaly detection, leveraging vision-language models to steer feature transformations through concept subspaces.
Quick Start
Get up and running with LAFT in minutes
Install LAFT
Install LAFT and its dependencies using pip. Clone the repository and install the requirements:
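A minimal sketch of the install steps described above; the repository URL below is a placeholder, so substitute the actual LAFT repository:

```shell
# Clone the repository (placeholder URL -- use the actual LAFT repo)
git clone https://github.com/<user>/LAFT.git
cd LAFT

# Install the Python dependencies
pip install -r requirements.txt
```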
Key Features
Powerful tools for language-guided anomaly detection
Language-Guided Transformations
Use natural language to guide feature transformations through CLIP embeddings and concept subspaces
Concept Subspace Construction
Build robust concept representations using PCA on prompt pair differences
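The idea above can be sketched with NumPy: embed contrasting prompt pairs, take the differences, and run PCA (via SVD) to keep the dominant concept directions. The random vectors below are stand-ins for real CLIP text embeddings, and `concept_subspace` is an illustrative helper, not LAFT's actual API:

```python
import numpy as np

def concept_subspace(pos_embeds, neg_embeds, n_components=2):
    """Build a concept basis from paired prompt embeddings via PCA.

    The differences between matched prompt pairs (e.g. embeddings of
    "a photo of a red bird" vs. "a photo of a blue bird") span the
    concept direction; PCA keeps the dominant axes.
    """
    diffs = pos_embeds - neg_embeds             # (n_pairs, dim)
    diffs -= diffs.mean(axis=0, keepdims=True)  # center before PCA
    # SVD of the centered differences; rows of vt are principal axes.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:n_components]                    # (n_components, dim)

# Stand-ins for CLIP text embeddings of 8 prompt pairs.
rng = np.random.default_rng(0)
pos = rng.normal(size=(8, 16))
neg = rng.normal(size=(8, 16))
basis = concept_subspace(pos, neg, n_components=2)
print(basis.shape)  # (2, 16)
```

The returned rows are orthonormal, which makes the projection operations in the next card a simple matrix product.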
Flexible Projections
Project features onto or orthogonal to concept subspaces with inner() and orthogonal() operations
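With a row-orthonormal concept basis, both operations reduce to one matrix product each. The signatures below are a sketch of how `inner()` and `orthogonal()` might look, not necessarily LAFT's exact interface:

```python
import numpy as np

def inner(features, basis):
    """Project features onto the span of the concept basis.

    Assumes `basis` is row-orthonormal with shape (k, dim) and
    `features` has shape (n, dim).
    """
    return (features @ basis.T) @ basis

def orthogonal(features, basis):
    """Remove the concept component: project onto the orthogonal complement."""
    return features - inner(features, basis)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
q, _ = np.linalg.qr(rng.normal(size=(8, 2)))  # orthonormal columns
basis = q.T                                   # rows as basis vectors

proj = inner(feats, basis)
rest = orthogonal(feats, basis)
# The two parts recompose the original features.
print(np.allclose(proj + rest, feats))  # True
```

Projecting onto the subspace keeps only the concept-relevant variation; the orthogonal projection discards it, which is useful for ignoring a nuisance attribute named in the prompts.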
Comprehensive Metrics
Evaluate anomaly detection with AUROC, AUPRC, FPR95, and optimal threshold selection
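Two of these metrics are easy to reimplement from scratch; the functions below are minimal NumPy sketches for illustration, not LAFT's own evaluation code:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the rank-sum (Mann-Whitney U) formulation."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def fpr_at_tpr(scores, labels, tpr=0.95):
    """FPR at the score threshold that reaches the target TPR (FPR95)."""
    thresh = np.quantile(np.asarray(scores)[labels == 1], 1 - tpr)
    return float((np.asarray(scores)[labels == 0] >= thresh).mean())

scores = np.array([0.1, 0.4, 0.35, 0.8])  # anomaly scores
labels = np.array([0, 0, 1, 1])           # 1 = anomalous
print(auroc(scores, labels))  # 0.75
```

In practice a library such as scikit-learn would also supply AUPRC and threshold selection; the point here is only what the metrics measure.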
Multi-Domain Datasets
Built-in support for semantic (MNIST, Waterbirds, CelebA) and industrial datasets (MVTec AD, VisA)
Prompt Engineering
Pre-configured prompt templates and utilities for different guidance types
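As a rough sketch of how template expansion works, contrasting attributes can be filled into a shared set of templates to produce the matched prompt pairs used for concept subspace construction. The template strings and helper below are hypothetical, not LAFT's shipped templates:

```python
# Hypothetical templates -- the library's actual prompt sets may differ.
TEMPLATES = [
    "a photo of a {} object",
    "an image of a {} object",
]

def make_prompt_pairs(attr_a, attr_b, templates=TEMPLATES):
    """Expand two contrasting attributes into matched prompt pairs."""
    return [(t.format(attr_a), t.format(attr_b)) for t in templates]

pairs = make_prompt_pairs("red", "blue")
print(pairs[0])  # ('a photo of a red object', 'a photo of a blue object')
```

Each pair differs only in the guided attribute, so the embedding difference isolates that concept.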
Explore Documentation
Learn more about LAFT’s capabilities
Core Concepts
Understand how LAFT works under the hood
Datasets
Work with semantic and industrial anomaly datasets
Evaluation
Measure and compare anomaly detection performance
Ready to Get Started?
Follow our quickstart guide to build your first language-assisted anomaly detector in minutes
Get Started Now