# Your First Forecast in 5 Minutes
This guide will walk you through making your first time-series forecast using Samay. We’ll use the LPTM model for this example.
## Import required libraries

```python
from samay.model import LPTMModel
from samay.dataset import LPTMDataset
```
## Load a pre-trained model
Configure and load the LPTM model:

```python
config = {
    "task_name": "forecasting",
    "forecast_horizon": 192,
    "freeze_encoder": True,   # Freeze the transformer encoder
    "freeze_embedder": True,  # Freeze the patch embedding layer
    "freeze_head": False,     # The linear forecasting head must be trained
}
model = LPTMModel(config)
```
The model will automatically download pre-trained weights from HuggingFace on first use.
## Prepare your dataset
Load your time-series data. For this example, we’ll use the ETT (Electricity Transformer Temperature) dataset:

```python
train_dataset = LPTMDataset(
    name="ett",
    datetime_col="date",
    path="./data/ETTh1.csv",
    mode="train",
    horizon=192,
)

val_dataset = LPTMDataset(
    name="ett",
    datetime_col="date",
    path="./data/ETTh1.csv",
    mode="test",
    horizon=192,
)
```
The dataset expects a CSV file with a datetime column and one or more value columns.
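If you are bringing your own data, the minimal file layout is one datetime column plus one or more value columns. The snippet below writes a toy example with the standard library; the column names `date` and `OT` are illustrative (ETTh1 happens to use them), and you should pass your actual datetime column name via `datetime_col`:

```python
import csv
from datetime import datetime, timedelta

# Build a toy CSV with one datetime column and one value column,
# the minimal layout the dataset loader expects. "date" and "OT"
# are illustrative names, not requirements.
start = datetime(2016, 7, 1)
with open("toy_series.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "OT"])
    for i in range(5):
        ts = start + timedelta(hours=i)
        writer.writerow([ts.strftime("%Y-%m-%d %H:%M:%S"), 20.0 + 0.1 * i])
```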
## Fine-tune the model (optional)
For better performance on your specific dataset, fine-tune the model:

```python
finetuned_model = model.finetune(train_dataset)
```
This will train the forecasting head while keeping the pre-trained encoder frozen.
## Make predictions
Evaluate the model and get predictions:

```python
metrics, trues, preds, histories = model.evaluate(
    val_dataset,
    task_name="forecasting",
)
print(f"Metrics: {metrics}")
```
## Visualize results
Plot the forecasts against the ground truth:

```python
import matplotlib.pyplot as plt
import numpy as np

# Convert to numpy arrays
trues = np.array(trues)
preds = np.array(preds)
histories = np.array(histories)

# Pick a random sample to visualize (ETTh1 has 7 channels)
channel_idx = np.random.randint(0, 7)
time_index = np.random.randint(0, trues.shape[0])

history = histories[time_index, channel_idx, :]
true = trues[time_index, channel_idx, :]
pred = preds[time_index, channel_idx, :]

plt.figure(figsize=(12, 4))

# Plot history
plt.plot(range(len(history)), history, label="History", c="darkblue")

# Plot ground truth and prediction after the history window
offset = len(history)
plt.plot(range(offset, offset + len(true)), true,
         label="Ground Truth", color="darkblue",
         linestyle="--", alpha=0.5)
plt.plot(range(offset, offset + len(pred)), pred,
         label="Forecast", color="red", linestyle="--")

plt.title(f"Forecast Example (idx={time_index}, channel={channel_idx})")
plt.xlabel("Time")
plt.ylabel("Value")
plt.legend()
plt.show()
```
## Zero-Shot Forecasting Example
You can also use models without fine-tuning for zero-shot forecasting:
```python
from samay.model import TimesfmModel
from samay.dataset import TimesfmDataset

# Load model
repo = "google/timesfm-1.0-200m-pytorch"
config = {
    "context_len": 512,
    "horizon_len": 192,
    "backend": "gpu",
    "per_core_batch_size": 32,
}
model = TimesfmModel(config=config, repo=repo)

# Prepare dataset
val_dataset = TimesfmDataset(
    name="ett",
    datetime_col="date",
    path="data/ETTh1.csv",
    mode="test",
    context_len=512,
    horizon_len=192,
)

# Zero-shot evaluation
metrics, trues, preds, histories = model.evaluate(val_dataset)
print(metrics)
```
## Understanding the Outputs

When you call `model.evaluate()`, you get four outputs:

1. `metrics`: a dictionary of evaluation metrics, such as:
    - `mse`: Mean Squared Error
    - `mae`: Mean Absolute Error
    - `rmse`: Root Mean Squared Error
    - `mape`: Mean Absolute Percentage Error
    - `smape`: Symmetric Mean Absolute Percentage Error
    - and more, depending on the model
2. `trues`: ground truth values with shape `(num_samples, num_channels, horizon_length)`
3. `preds`: model predictions with shape `(num_samples, num_channels, horizon_length)`
4. `histories`: the historical context used for each prediction, with shape `(num_samples, num_channels, context_length)`
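To make the metric definitions concrete, here is how the common ones can be computed from `trues` and `preds` with NumPy. This is a sketch of the standard formulas; the library's exact implementations may differ in details such as percentage scaling or epsilon handling:

```python
import numpy as np

# Toy arrays in the same layout as the evaluate() outputs:
# (num_samples, num_channels, horizon_length)
trues = np.array([[[1.0, 2.0, 3.0]]])
preds = np.array([[[1.5, 2.0, 2.0]]])

err = preds - trues
mse = np.mean(err ** 2)                                   # Mean Squared Error
mae = np.mean(np.abs(err))                                # Mean Absolute Error
rmse = np.sqrt(mse)                                       # Root Mean Squared Error
mape = np.mean(np.abs(err) / np.abs(trues)) * 100         # Mean Absolute % Error
smape = np.mean(2 * np.abs(err)
                / (np.abs(trues) + np.abs(preds))) * 100  # Symmetric MAPE

print(mse, mae)
```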
## Common Use Cases

- **Single-step forecasting**: predict the next time step
- **Multi-step forecasting**: predict multiple future time steps
- **Multivariate forecasting**: handle multiple time series simultaneously
- **Transfer learning**: fine-tune on your domain-specific data
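Multi-step forecasting models like these consume fixed-length (context, horizon) windows sliced from a longer series. A minimal NumPy sketch of that windowing idea (illustrative only, not Samay's internal implementation):

```python
import numpy as np

def make_windows(series, context_len, horizon_len):
    """Slice a 1-D series into (context, horizon) training pairs."""
    contexts, horizons = [], []
    last_start = len(series) - context_len - horizon_len
    for start in range(last_start + 1):
        split = start + context_len
        contexts.append(series[start:split])
        horizons.append(series[split:split + horizon_len])
    return np.array(contexts), np.array(horizons)

series = np.arange(10.0)
X, y = make_windows(series, context_len=4, horizon_len=2)
print(X.shape, y.shape)  # (5, 4) (5, 2)
```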
## Next Steps

- **Model Guides**: learn about specific models in detail
- **Example Notebooks**: explore complete code examples
- **API Reference**: detailed API documentation
## Tips for Success

- **Start with zero-shot**: try models without fine-tuning first to establish a baseline.
- **Normalize your data**: most models perform better with normalized time series.
- **Experiment with context length**: a longer context can improve accuracy but increases computation.
- **Monitor GPU memory**: large models may require reducing the batch size on smaller GPUs.
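For the normalization tip, a simple z-score transform works well; keep the statistics so you can map forecasts back to the original scale. A NumPy sketch (note that some models also normalize internally, so check the model guide before normalizing twice):

```python
import numpy as np

EPS = 1e-8  # guards against division by zero for constant series

def zscore(series, eps=EPS):
    """Standardize a series to roughly zero mean and unit variance."""
    mean, std = series.mean(), series.std()
    return (series - mean) / (std + eps), mean, std

raw = np.array([10.0, 12.0, 14.0, 16.0])
normed, mean, std = zscore(raw)

# After forecasting in normalized space, invert the transform:
restored = normed * (std + EPS) + mean
```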