HybridLossFunction

Hybrid loss function combining multiple loss components for robust MOS prediction. Components:
  • Smooth L1 loss for basic regression
  • Ranking loss for preserving relative order
  • Scale-aware loss for emphasizing extreme quality values

from qualivision.utils.training import HybridLossFunction

loss_fn = HybridLossFunction(
    smooth_l1_beta=0.1,
    ranking_margin=0.2,
    use_adaptive_weighting=True
)

loss, loss_components = loss_fn(predictions, targets, epoch=0)
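
The exact composition is internal to the class, but conceptually the three terms are blended into a weighted sum. A minimal sketch of such a combination (the pairwise ranking formulation, quality thresholds, and component weights below are illustrative assumptions, not the class's internals):

import torch
import torch.nn.functional as F

def hybrid_loss_sketch(pred, target, beta=0.1, margin=0.2,
                       w_l1=0.7, w_rank=0.3, w_scale=0.2):
    # Smooth L1 term for basic regression accuracy.
    smooth_l1 = F.smooth_l1_loss(pred, target, beta=beta)

    # Pairwise ranking term: penalize pairs whose predicted order
    # disagrees with the target order by more than the margin.
    diff_pred = pred.unsqueeze(0) - pred.unsqueeze(1)     # (B, B, 5)
    diff_tgt = target.unsqueeze(0) - target.unsqueeze(1)  # (B, B, 5)
    mask = diff_tgt.abs() > 1e-6  # ignore tied pairs
    ranking = (F.relu(margin - diff_pred * torch.sign(diff_tgt)) * mask).mean()

    # Scale-aware term: up-weight errors on extreme quality values
    # (thresholds here assume a 1-5 MOS scale).
    weights = torch.where((target < 2.0) | (target > 4.0),
                          torch.full_like(target, 1.5),
                          torch.ones_like(target))
    scale = (weights * (pred - target).abs()).mean()

    return w_l1 * smooth_l1 + w_rank * ranking + w_scale * scale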

Parameters

smooth_l1_beta
float
default:"0.1"
Beta parameter for smooth L1 loss
ranking_margin
float
default:"0.2"
Margin for ranking loss
scale_weights
Dict[str, float]
default:"None"
Weights for different quality ranges. If None, defaults to {'low_quality': 1.5, 'high_quality': 1.5, 'normal': 1.0}
use_adaptive_weighting
bool
default:"True"
Whether to use adaptive loss weighting that adjusts during training
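
For example, to emphasize errors on low-quality clips more strongly than the default weighting (a usage sketch; the keys follow the default dict above):

loss_fn = HybridLossFunction(
    smooth_l1_beta=0.1,
    ranking_margin=0.2,
    # Raise the low-quality weight above the default 1.5.
    scale_weights={'low_quality': 2.0, 'high_quality': 1.5, 'normal': 1.0},
    use_adaptive_weighting=True
)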

Methods

__call__(pred, target, epoch=0)

Compute hybrid loss. Parameters:
  • pred (torch.Tensor): Predicted MOS scores with shape (B, 5)
  • target (torch.Tensor): Target MOS scores with shape (B, 5)
  • epoch (int): Current training epoch (default: 0)
Returns: Tuple[torch.Tensor, Dict[str, float]] containing:
  • total_loss (torch.Tensor): Combined loss value
  • loss_components (dict): Dictionary with:
    • total_loss (float): Total loss value
    • smooth_l1_loss (float): Smooth L1 component
    • ranking_loss (float): Ranking component
    • scale_loss (float): Scale-aware component
    • alpha (float): Weight for smooth L1 loss
    • beta (float): Weight for ranking loss
    • gamma (float): Weight for scale loss
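
For example, the components dictionary can be unpacked for per-step logging (a usage sketch):

loss, components = loss_fn(pred, target, epoch=epoch)
loss.backward()
print(f"total={components['total_loss']:.4f} "
      f"l1={components['smooth_l1_loss']:.4f} "
      f"rank={components['ranking_loss']:.4f} "
      f"scale={components['scale_loss']:.4f}")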

train_epoch

Train model for one epoch with gradient accumulation and mixed precision.

from qualivision.utils.training import train_epoch

metrics = train_epoch(
    model=model,
    train_loader=train_loader,
    optimizer=optimizer,
    scheduler=scheduler,
    scaler=scaler,
    loss_fn=loss_fn,
    accumulation_steps=8,
    epoch=0
)

Parameters

model
nn.Module
required
Model to train
train_loader
DataLoader
required
Training data loader
optimizer
torch.optim.Optimizer
required
Optimizer for training
scheduler
_LRScheduler
default:"None"
Learning rate scheduler (optional)
scaler
GradScaler
required
Gradient scaler for mixed precision training
loss_fn
HybridLossFunction
required
Loss function
accumulation_steps
int
default:"8"
Number of gradient accumulation steps
epoch
int
default:"0"
Current epoch number
device
str
default:"'cuda'"
Device to use for training
max_grad_norm
float
default:"1.0"
Maximum gradient norm for clipping
log_interval
int
default:"50"
Logging interval in batches

Returns

metrics
Dict[str, float]
Dictionary containing:
  • train_loss (float): Average total loss
  • train_smooth_l1 (float): Average smooth L1 loss
  • train_ranking (float): Average ranking loss
  • train_scale (float): Average scale loss
  • num_batches (int): Number of batches processed
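
Internally this follows the standard PyTorch pattern for gradient accumulation under automatic mixed precision. A minimal sketch of the core step (simplified; the batch layout and model signature here are assumptions, and the real function also handles scheduler stepping and logging):

import torch
from torch.cuda.amp import autocast

def accumulation_step_sketch(model, batch, loss_fn, optimizer, scaler,
                             step, accumulation_steps=8, max_grad_norm=1.0):
    frames, prompts, targets = batch  # assumed batch layout
    with autocast():  # mixed-precision forward pass
        preds = model(frames, prompts)  # assumed model signature
        loss, _ = loss_fn(preds, targets)
        loss = loss / accumulation_steps  # normalize for accumulation

    scaler.scale(loss).backward()  # scaled backward for fp16 stability

    if (step + 1) % accumulation_steps == 0:
        scaler.unscale_(optimizer)  # unscale before clipping
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()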

evaluate

Evaluate model on validation set.

from qualivision.utils.training import evaluate

metrics, predictions, targets = evaluate(
    model=model,
    val_loader=val_loader,
    loss_fn=loss_fn,
    device='cuda'
)

Parameters

model
nn.Module
required
Model to evaluate
val_loader
DataLoader
required
Validation data loader
loss_fn
HybridLossFunction
required
Loss function
device
str
default:"'cuda'"
Device to use for evaluation

Returns

metrics
Dict[str, float]
Dictionary containing:
  • val_loss (float): Average validation loss
  • val_smooth_l1 (float): Average smooth L1 loss
  • val_ranking (float): Average ranking loss
  • val_scale (float): Average scale loss
  • num_val_batches (int): Number of batches processed
predictions
List[float]
List of predicted overall MOS scores
targets
List[float]
List of target overall MOS scores
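
The returned prediction and target lists are convenient inputs for standard MOS correlation metrics, for example with scipy (a usage sketch):

from scipy.stats import pearsonr, spearmanr

# predictions/targets from the evaluate() call above
plcc, _ = pearsonr(predictions, targets)   # linear correlation (PLCC)
srcc, _ = spearmanr(predictions, targets)  # rank correlation (SRCC)
print(f"val_loss={metrics['val_loss']:.4f} PLCC={plcc:.4f} SRCC={srcc:.4f}")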

create_optimizer

Create optimizer with optional discriminative learning rates.

from qualivision.utils.training import create_optimizer

optimizer = create_optimizer(
    model=model,
    learning_rate=1e-4,
    weight_decay=1e-2,
    discriminative_lr={'text': 0.5, 'video': 0.1, 'head': 1.0}
)

Parameters

model
nn.Module
required
Model to optimize
learning_rate
float
default:"1e-4"
Base learning rate
weight_decay
float
default:"1e-2"
Weight decay for regularization
discriminative_lr
Dict[str, float]
default:"None"
Dictionary with component-specific LR multipliers. Keys: 'text', 'video', 'head'

Returns

optimizer
torch.optim.AdamW
Configured AdamW optimizer
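
Conceptually, discriminative_lr maps parameter-name substrings to learning rate multipliers. A minimal sketch of how such parameter groups can be built (the substring-matching rule is an assumption about the model's module naming, not the function's exact implementation):

import torch

def build_optimizer_sketch(model, learning_rate=1e-4, weight_decay=1e-2,
                           discriminative_lr=None):
    if not discriminative_lr:
        return torch.optim.AdamW(model.parameters(), lr=learning_rate,
                                 weight_decay=weight_decay)
    groups = []
    for key, mult in discriminative_lr.items():
        params = [p for n, p in model.named_parameters() if key in n]
        if params:
            # Each component trains at the base LR scaled by its multiplier.
            groups.append({'params': params, 'lr': learning_rate * mult})
    # Note: a full implementation would also collect parameters matching
    # none of the keys and train them at the base learning rate.
    return torch.optim.AdamW(groups, weight_decay=weight_decay)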

create_scheduler

Create learning rate scheduler.

from qualivision.utils.training import create_scheduler

scheduler = create_scheduler(
    optimizer=optimizer,
    num_training_steps=1000,
    warmup_steps=100,
    scheduler_type='cosine'
)

Parameters

optimizer
torch.optim.Optimizer
required
Optimizer to schedule
num_training_steps
int
required
Total number of training steps
warmup_steps
int
default:"100"
Number of warmup steps
scheduler_type
str
default:"'cosine'"
Type of scheduler: 'cosine', 'linear', or 'constant'

Returns

scheduler
_LRScheduler
Configured learning rate scheduler
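
A cosine schedule with linear warmup can be expressed as a LambdaLR multiplier on the base learning rate. A sketch of the 'cosine' variant (the actual helper may differ in details):

import math
from torch.optim.lr_scheduler import LambdaLR

def cosine_with_warmup_sketch(optimizer, num_training_steps, warmup_steps=100):
    def lr_lambda(step):
        if step < warmup_steps:
            # Linear warmup from 0 up to the base learning rate.
            return step / max(1, warmup_steps)
        # Cosine decay from the base learning rate toward 0.
        progress = (step - warmup_steps) / max(1, num_training_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))
    return LambdaLR(optimizer, lr_lambda)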

AdaptiveLossManager

Manager that dynamically adjusts loss component weights during training.

from qualivision.utils.training import AdaptiveLossManager

loss_manager = AdaptiveLossManager(
    initial_alpha=0.7,
    initial_beta=0.3,
    adaptation_rate=0.1
)

loss_manager.update_weights(mae_loss=0.5, ranking_loss=0.3)
alpha, beta = loss_manager.get_weights()

Parameters

initial_alpha
float
default:"0.7"
Initial weight for smooth L1 loss
initial_beta
float
default:"0.3"
Initial weight for ranking loss
adaptation_rate
float
default:"0.1"
Rate of adaptation for weight updates

Methods

update_weights(mae_loss, ranking_loss)

Update loss weights based on recent loss trends. Parameters:
  • mae_loss (float): Current MAE loss value
  • ranking_loss (float): Current ranking loss value

get_weights()

Get current loss weights. Returns: Tuple[float, float] - Current (alpha, beta) weights
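
One common adaptation rule nudges each weight toward that component's share of the recent total loss while keeping alpha + beta = 1. A sketch of such a rule (an illustrative assumption, not the class's exact update):

class AdaptiveWeightsSketch:
    def __init__(self, alpha=0.7, beta=0.3, rate=0.1):
        self.alpha, self.beta, self.rate = alpha, beta, rate

    def update_weights(self, mae_loss, ranking_loss):
        total = mae_loss + ranking_loss
        if total > 0:
            # Move alpha toward the MAE component's share of the total,
            # so the currently harder objective receives more weight.
            target_alpha = mae_loss / total
            self.alpha += self.rate * (target_alpha - self.alpha)
            self.beta = 1.0 - self.alpha

    def get_weights(self):
        return self.alpha, self.beta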
