Class Signature

class LPTMModel(Basemodel):
    def __init__(self, config=None)
The LPTMModel class implements the Large Pre-trained Time Series Model (LPTM), a foundation model supporting multiple tasks: forecasting, imputation, anomaly detection, and classification.

Initialization Parameters

config
dict
default:"None"
Model configuration dictionary. The model loads from the pre-trained checkpoint kage08/lptm-large2 with the provided configuration.

Configuration Parameters

The config dictionary supports the following parameters:
max_patch
int
Maximum patch length for the model.

Methods

finetune()

def finetune(dataset: LPTMDataset, task_name: str = "forecasting", **kwargs)
Finetune the LPTM model on a dataset.
dataset
LPTMDataset
required
Dataset used for finetuning. Use get_data_loader() to obtain the dataloader.
task_name
str
default:"forecasting"
Training task to perform. Options:
  • "forecasting": Standard forecasting task
  • "forecasting2": Alternative forecasting mode
  • "imputation": Fill in missing values
  • "detection": Anomaly detection
  • "classification": Time series classification
lr
float
default:"1e-4"
Learning rate for training.
epoch
int
default:"5"
Number of training epochs.
norm
float
default:"5.0"
Maximum norm for gradient clipping.
mask_ratio
float
default:"0.25"
Masking ratio for imputation and detection tasks.
quantization
bool
default:"False"
Whether to apply quantization during training.
return
nn.Module
The trained model instance.
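The norm argument caps the global gradient norm during finetuning. As a conceptual illustration of that clipping rule, here is a minimal NumPy sketch (the helper name clip_by_norm is illustrative, not part of the samay API):

```python
import numpy as np

def clip_by_norm(grad, max_norm=5.0):
    """Scale a gradient vector down so its L2 norm is at most max_norm."""
    total_norm = np.linalg.norm(grad)
    if total_norm > max_norm:
        grad = grad * (max_norm / total_norm)
    return grad

g = np.array([30.0, 40.0])               # L2 norm = 50
clipped = clip_by_norm(g, max_norm=5.0)  # rescaled to roughly [3, 4]
print(np.linalg.norm(clipped))           # ≈ 5.0
```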

evaluate()

def evaluate(dataset: LPTMDataset, task_name: str = "forecasting", metric_only=False, **kwargs)
Evaluate the LPTM model on a dataset.
dataset
LPTMDataset
required
Dataset for evaluation. Use get_data_loader() to obtain the dataloader.
task_name
str
default:"forecasting"
Evaluation task. Options:
  • "forecasting": Standard forecasting
  • "forecasting2": Alternative forecasting mode
  • "imputation": Imputation evaluation
  • "detection": Anomaly detection evaluation
  • "classification": Classification evaluation
metric_only
bool
default:"False"
If True, return only computed metrics.
return
Dict[str, float] | Tuple
For forecasting tasks (when metric_only=True): dictionary containing metrics:
  • mse: Mean Squared Error
  • mae: Mean Absolute Error
  • mase: Mean Absolute Scaled Error
  • mape: Mean Absolute Percentage Error
  • rmse: Root Mean Squared Error
  • nrmse: Normalized RMSE
  • smape: Symmetric Mean Absolute Percentage Error
  • msis: Mean Scaled Interval Score
  • nd: Normalized Deviation
When metric_only=False (forecasting): tuple of (metrics, trues, preds, histories):
  • metrics: Dictionary of metrics (as above)
  • trues: Ground truth values, shape (num_samples, num_ts, horizon_len)
  • preds: Predictions, shape (num_samples, num_ts, horizon_len)
  • histories: Historical context, shape (num_samples, num_ts, context_len)
For imputation: returns a (trues, preds, masks) tuple.
For detection: returns a (trues, preds, labels) tuple.
For classification: returns an (accuracy, embeddings, labels) tuple.
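As a reference for how the forecasting metrics above are commonly defined, here is a hedged NumPy sketch computing a subset of them (mse, mae, rmse, smape); the library's exact implementations may differ in edge-case handling:

```python
import numpy as np

def forecast_metrics(trues, preds):
    """Compute a subset of common forecasting metrics."""
    trues, preds = np.asarray(trues, float), np.asarray(preds, float)
    err = preds - trues
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    # symmetric MAPE, in percent
    smape = 100.0 * np.mean(2.0 * np.abs(err) / (np.abs(trues) + np.abs(preds)))
    return {"mse": mse, "mae": mae, "rmse": rmse, "smape": smape}

m = forecast_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])
print(m)  # mse ≈ 1.333, mae ≈ 0.667, rmse ≈ 1.155, smape ≈ 16.67
```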

plot()

def plot(dataset: LPTMDataset, task_name="forecasting", **kwargs)
Visualize forecasting results.
dataset
LPTMDataset
required
Dataset for plotting.
task_name
str
default:"forecasting"
Task to visualize (currently only "forecasting" is supported).
**kwargs
dict
Additional keyword arguments forwarded to visualization.

quantize()

def quantize(quant_type="int8", device="cuda")
Quantize the model for efficient inference.
quant_type
str
default:"int8"
Quantization type to apply.
device
str
default:"cuda"
Device to perform quantization on.
return
nn.Module
Quantized model instance.
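To illustrate what int8 quantization does to model weights, here is a minimal, self-contained NumPy sketch of symmetric int8 quantization; this is conceptual only and not samay's actual implementation:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric int8 quantization: map floats to [-127, 127] with one scale."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

w = np.array([0.1, -0.5, 0.25, 1.27], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(np.max(np.abs(w - w_hat)))  # small quantization error
```

Quantization trades a small amount of accuracy for lower memory use and faster inference, which is why evaluate-after-quantize comparisons are worth running on your own data.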

Usage Example

from samay.model import LPTMModel
from samay.dataset import LPTMDataset

# Initialize model
config = {"max_patch": 512}
model = LPTMModel(config=config)

# Prepare dataset
dataset = LPTMDataset(...)

# Finetune for forecasting
model.finetune(dataset, task_name="forecasting", lr=1e-4, epoch=10)

# Evaluate
metrics = model.evaluate(dataset, metric_only=True)
print(f"MSE: {metrics['mse']}")

# Quantize for efficient inference
model.quantize(quant_type="int8")

Notes

  • LPTM supports multiple tasks: forecasting, imputation, detection, and classification
  • The model automatically loads from the kage08/lptm-large2 checkpoint
  • Mixed precision training is used automatically with gradient scaling
  • OneCycleLR scheduler is applied during training