
Class Signature

class MomentModel(Basemodel):
    def __init__(self, config=None, repo=None)
The MomentModel class implements the MOMENT (Multi-task Open Model for Time Series) foundation model, supporting forecasting, imputation, anomaly detection, and classification tasks.

Initialization Parameters

config
dict
default:"None"
Model configuration dictionary containing model-specific parameters.
repo
str
default:"None"
Hugging Face model repository ID. If not provided, initializes a new model without pre-trained weights.

Methods

finetune()

def finetune(dataset: MomentDataset, task_name: str = "forecasting", **kwargs)
Finetune the model on the given dataset.
dataset
MomentDataset
required
Dataset for finetuning. Call get_data_loader() to get the dataloader.
task_name
str
default:"forecasting"
Task name. Options:
  • "forecasting": Time series forecasting
  • "imputation": Missing value imputation
  • "detection": Anomaly detection
  • "classification": Time series classification
lr
float
default:"1e-4"
Learning rate for training.
epoch
int
default:"5"
Number of training epochs.
norm
float
default:"1.0"
Maximum norm for gradient clipping.
mask_ratio
float
default:"0.25"
Masking ratio for imputation and detection tasks.
return
MOMENTPipeline
The finetuned model.
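The mask_ratio kwarg controls what fraction of input time steps is hidden during imputation and detection finetuning. A minimal NumPy sketch of this style of random masking (illustrative only, independent of the library's actual implementation):

```python
import numpy as np

def random_mask(series: np.ndarray, mask_ratio: float = 0.25, seed: int = 0):
    """Mask a fixed fraction of time steps; masked positions are zeroed.

    Returns the masked series and a boolean mask (True = masked).
    """
    rng = np.random.default_rng(seed)
    n = series.shape[-1]
    n_masked = int(n * mask_ratio)  # e.g. 4 of 16 steps for mask_ratio=0.25
    idx = rng.choice(n, size=n_masked, replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    masked = series.copy()
    masked[..., mask] = 0.0
    return masked, mask

series = np.arange(16, dtype=float)
masked, mask = random_mask(series, mask_ratio=0.25)
print(mask.mean())  # 0.25 -> exactly a quarter of the steps are masked
```

During finetuning, the model is trained to reconstruct the masked positions from the visible ones; the mask is then reused at evaluation time to score only the hidden values.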

evaluate()

def evaluate(dataset: MomentDataset, task_name: str = "forecasting", metric_only: bool = False, **kwargs)
Evaluate the model on a dataset.
dataset
MomentDataset
required
Dataset for evaluation. Call get_data_loader() to get the dataloader.
task_name
str
default:"forecasting"
Task name: "forecasting", "imputation", "detection", or "classification".
metric_only
bool
default:"False"
If True, return only metrics.
return
Dict[str, float] | Tuple
For forecasting (when metric_only=True): Dictionary containing:
  • mse: Mean Squared Error
  • mae: Mean Absolute Error
  • mase: Mean Absolute Scaled Error
  • mape: Mean Absolute Percentage Error
  • rmse: Root Mean Squared Error
  • nrmse: Normalized RMSE
  • smape: Symmetric Mean Absolute Percentage Error
  • msis: Mean Scaled Interval Score
  • nd: Normalized Deviation
When metric_only=False (forecasting): Tuple of (metrics, trues, preds, histories):
  • metrics: Dictionary of metrics
  • trues: Ground truth, shape (num_samples, num_ts, horizon_len)
  • preds: Predictions, shape (num_samples, num_ts, horizon_len)
  • histories: Historical context, shape (num_samples, num_ts, context_len)
For imputation: Returns a (trues, preds, masks) tuple.
For detection: Returns a (trues, preds, labels) tuple.
For classification: Returns an (accuracy, embeddings, labels) tuple.
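For reference, a few of the forecasting metrics above follow standard definitions, sketched here in NumPy on flat arrays (the library's own implementations may differ in detail, e.g. in epsilon handling or aggregation over series):

```python
import numpy as np

def forecast_metrics(trues: np.ndarray, preds: np.ndarray) -> dict:
    """Compute a subset of the listed forecasting metrics."""
    err = preds - trues
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(mse))
    # Symmetric MAPE: mean of 2|p - t| / (|t| + |p|)
    smape = float(np.mean(2 * np.abs(err) / (np.abs(trues) + np.abs(preds))))
    return {"mse": mse, "mae": mae, "rmse": rmse, "smape": smape}

m = forecast_metrics(np.array([1.0, 2.0, 3.0, 4.0]),
                     np.array([1.0, 2.0, 3.0, 5.0]))
print(m["rmse"])  # 0.5
```

With the shapes documented above, the returned trues and preds arrays can be flattened (or reduced per series) before being passed to such a function.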

plot()

def plot(dataset: MomentDataset, task_name: str = "forecasting")
Visualize results from the MOMENT model.
dataset
MomentDataset
required
Dataset for plotting. Use get_data_loader() to obtain the dataloader.
task_name
str
default:"forecasting"
Task to visualize. Options: "forecasting", "imputation", "detection", or "classification".
return
None
This method does not return a value. It displays visualizations.

quantize()

def quantize(quant_type="int8", device="cuda")
Quantize the model for efficient inference.
quant_type
str
default:"int8"
Quantization type to apply.
device
str
default:"cuda"
Device to perform quantization on.
return
MOMENTPipeline
Quantized model instance.
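Conceptually, int8 quantization maps floating-point weights to 8-bit integers plus a scale factor, trading a small round-trip error for memory and compute savings. A hedged sketch of symmetric per-tensor int8 quantization in NumPy (illustrative of the idea only; the actual quantize() implementation is library-internal):

```python
import numpy as np

def int8_quantize(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w is approximated by q * scale."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([-1.0, -0.5, 0.0, 0.25, 1.0], dtype=np.float32)
q, scale = int8_quantize(w)
w_hat = int8_dequantize(q, scale)
# Round-trip error is bounded by roughly half a quantization step (scale / 2)
print(q[0], q[-1])  # -127 127
```

The same principle (integer codes plus scales, usually per channel rather than per tensor) underlies common int8 inference backends.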

Usage Example

from samay.model import MomentModel
from samay.dataset import MomentDataset

# Load pre-trained model
model = MomentModel(repo="AutonLab/MOMENT-1-large")

# Or initialize without pre-trained weights
config = {...}
model = MomentModel(config=config)

# Prepare dataset
dataset = MomentDataset(...)

# Finetune for forecasting
model.finetune(dataset, task_name="forecasting", lr=5e-5, epoch=10)

# Evaluate
metrics, trues, preds, histories = model.evaluate(dataset, metric_only=False)
print(f"RMSE: {metrics['rmse']}")

# Visualize results
model.plot(dataset, task_name="forecasting")

# Quantize for deployment
model.quantize(quant_type="int8")

Notes

  • MOMENT is a multi-task foundation model supporting forecasting, imputation, detection, and classification
  • The model uses mixed precision training with gradient scaling for efficiency
  • OneCycleLR scheduler is applied during training
  • MSELoss is used for forecasting/imputation/detection, CrossEntropyLoss for classification
  • Data denormalization is automatically applied during evaluation if the dataset was normalized
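The gradient clipping controlled by the norm kwarg to finetune() is standard global-norm clipping: if the combined L2 norm of all gradients exceeds the maximum, every gradient is scaled down proportionally. A minimal NumPy sketch of that rule (independent of the library's training loop):

```python
import numpy as np

def clip_grad_norm(grads: list, max_norm: float = 1.0) -> list:
    """Scale gradients so their combined L2 norm is at most max_norm."""
    total_norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

grads = [np.array([3.0, 0.0]), np.array([0.0, 4.0])]  # global norm = 5
clipped = clip_grad_norm(grads, max_norm=1.0)
print(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))  # ~1.0
```

With the default norm=1.0, any update whose global gradient norm exceeds 1 is rescaled to exactly that norm, which stabilizes mixed-precision training.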
