
Class Signature

class MoiraiTSModel(Basemodel):
    def __init__(
        self,
        config=None,
        repo=None,
        model_type="moirai-moe",
        model_size="small",
        **kwargs
    )
The MoiraiTSModel class implements Salesforce’s Moirai, a universal time series forecasting model with support for multiple variants including standard Moirai, Moirai-MoE (Mixture of Experts), and Moirai2.

Initialization Parameters

config
dict
default:"None"
Model configuration dictionary containing hyperparameters.
repo
str
default:"None"
Hugging Face model repository ID. If not provided, defaults to Salesforce repositories based on model_type and model_size.
model_type
str
default:"moirai-moe"
Type of Moirai model:
  • "moirai": Standard Moirai model
  • "moirai-moe": Moirai with Mixture of Experts
  • "moirai2": Moirai 2.0 with quantile predictions
model_size
str
default:"small"
Model size: "small", "base", or "large".

Configuration Parameters

horizon_len
int
default:"32"
Forecast horizon length.
context_len
int
default:"128"
Context window length.
patch_size
int
default:"16"
Patch size for tokenization.
batch_size
int
default:"16"
Batch size for training and inference.
num_samples
int
default:"100"
Number of samples for probabilistic forecasting (ignored for moirai2).
quantiles
list[float]
Quantile levels for moirai2 (e.g., [0.1, 0.5, 0.9]).
target_dim
int
default:"1"
Dimension of target variables.
feat_dynamic_real_dim
int
default:"0"
Dimension of dynamic real-valued features.
past_feat_dynamic_real_dim
int
default:"0"
Dimension of past dynamic real-valued features.
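
An illustrative configuration dictionary built from the parameters documented above. The values shown are examples, not recommended settings; "quantiles" only applies when model_type="moirai2", and "num_samples" is ignored for that variant.

```python
# Example configuration using the documented keys. Values are illustrative.
config = {
    "horizon_len": 64,
    "context_len": 256,
    "patch_size": 16,
    "batch_size": 16,
    "num_samples": 100,            # ignored for moirai2
    "quantiles": [0.1, 0.5, 0.9],  # moirai2 only
    "target_dim": 1,
    "feat_dynamic_real_dim": 0,
    "past_feat_dynamic_real_dim": 0,
}
```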

Methods

finetune()

def finetune(dataset: MoiraiDataset, **kwargs)
Finetune the model on the given dataset.
dataset
MoiraiDataset
required
Dataset providing the input data and its dataloaders.
max_epochs
int
required
Maximum number of training epochs.
batch_size
int
Override default batch size.
num_batches_per_epoch
int
Number of batches per epoch (optional).
tf32
bool
Enable TF32 precision on supported NVIDIA GPUs for faster training.
return
None
The finetuned model is stored in self.finetuned_model.

evaluate()

def evaluate(
    dataset: MoiraiDataset,
    metrics: list[str] = ["MSE"],
    output_transforms: transforms.Compose = None,
    num_sample_flag: bool = False,
    zero_shot: bool = True,
    metric_only: bool = False,
    **kwargs
)
Evaluate the model on the given test dataset using the specified metrics.
dataset
MoiraiDataset
required
Dataset to evaluate the model on.
metrics
list[str]
default:"['MSE']"
Metrics to evaluate. Options: "MSE", "MASE".
output_transforms
transforms.Compose
default:"None"
Transforms to apply on the model output.
num_sample_flag
bool
default:"False"
If True, use the configured num_samples when sampling forecasts from the predictive distribution.
zero_shot
bool
default:"True"
If True, evaluate the pretrained model zero-shot; otherwise evaluate the finetuned model.
metric_only
bool
default:"False"
If True, only return metrics.
return
Dict[str, float] | Tuple
When metric_only=True: Dictionary containing:
  • mse: Mean Squared Error
  • mae: Mean Absolute Error
  • mase: Mean Absolute Scaled Error
  • mape: Mean Absolute Percentage Error
  • rmse: Root Mean Squared Error
  • nrmse: Normalized RMSE
  • smape: Symmetric Mean Absolute Percentage Error
  • msis: Mean Scaled Interval Score
  • nd: Normalized Deviation
  • mwsq: Mean Weighted Scaled Quantile Loss
  • crps: Continuous Ranked Probability Score
When metric_only=False: Tuple of (metrics, trues, preds, histories, quantile_preds):
  • metrics: Dictionary of metrics
  • trues: Ground truth values, shape (num_samples, num_ts, horizon_len)
  • preds: Mean predictions, shape (num_samples, num_ts, horizon_len)
  • histories: Historical values, shape (num_samples, num_ts, context_len)
  • quantile_preds: Quantile predictions, shape (num_samples, num_ts, horizon_len, num_quantiles)
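
A minimal sketch of consuming the return value when metric_only=False. The dummy tuple below only mimics the documented layout (metrics dict plus four arrays) so the unpacking is concrete; in practice it would come from model.evaluate(dataset, metric_only=False).

```python
# Dummy stand-in for model.evaluate(dataset, metric_only=False),
# with num_samples=1, num_ts=1, horizon_len=2, context_len=3, 3 quantiles.
metrics, trues, preds, histories, quantile_preds = (
    {"mse": 1.23, "mae": 0.98},
    [[[10.0, 11.0]]],            # trues: (1, 1, horizon_len)
    [[[10.5, 10.8]]],            # preds (mean), same shape as trues
    [[[8.0, 9.0, 9.5]]],         # histories: (1, 1, context_len)
    [[[[10.1, 10.5, 10.9],       # quantile_preds:
       [10.4, 10.8, 11.2]]]],    # (1, 1, horizon_len, num_quantiles)
)

# e.g. the median forecast for the first series is the middle quantile:
median_forecast = [step[1] for step in quantile_preds[0][0]]
```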

plot()

def plot(dataset: MoiraiDataset, zero_shot: bool = False, **kwargs)
Plot the results of the model on the given dataset.
dataset
MoiraiDataset
required
Dataset providing the input data and its dataloaders.
zero_shot
bool
default:"False"
If True, plot predictions from the pretrained model zero-shot; otherwise from the finetuned model.
return
None
This method does not return a value. It displays visualizations.

Usage Example

from samay.model import MoiraiTSModel
from samay.dataset import MoiraiDataset

# Initialize Moirai-MoE model
config = {
    "horizon_len": 96,
    "context_len": 512,
    "patch_size": 32,
    "batch_size": 32,
    "model_type": "moirai-moe"
}
model = MoiraiTSModel(
    config=config,
    model_type="moirai-moe",
    model_size="small"
)

# Prepare dataset
dataset = MoiraiDataset(...)

# Zero-shot evaluation
metrics = model.evaluate(dataset, zero_shot=True, metric_only=True)
print(f"Zero-shot MSE: {metrics['mse']}")

# Finetune model
model.finetune(dataset, max_epochs=10, tf32=True)

# Evaluate finetuned model
metrics = model.evaluate(dataset, zero_shot=False, metric_only=True)
print(f"Finetuned MSE: {metrics['mse']}")

# Visualize
model.plot(dataset, zero_shot=False)

Notes

  • Moirai supports three variants: standard, MoE (Mixture of Experts), and Moirai2
  • The model uses patch-based tokenization for efficient processing
  • Moirai2 directly outputs quantile predictions, while earlier versions use sampling
  • The model supports variable context and horizon lengths through autoregressive generation
  • Finetuning uses frozen initial layers and only trains the last encoder layers and projection heads
  • For long horizons exceeding max patches, autoregressive forecasting is automatically applied
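
The autoregressive note above reduces to simple arithmetic: if a single forward pass covers a fixed number of future points, a longer horizon needs several passes, each conditioned on the previous one's output. The helper below is an illustration of that count only (points_per_pass is an assumed quantity, not an internal samay name).

```python
import math

def num_ar_passes(horizon_len: int, points_per_pass: int) -> int:
    """Illustrative: number of autoregressive passes needed to cover
    horizon_len when one pass yields points_per_pass future points."""
    return math.ceil(horizon_len / points_per_pass)

# A 96-step horizon with 32 points per pass needs 3 passes.
```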
