
Overview

The DeviceManager class provides utility methods for automatically detecting and selecting the best available PyTorch device for model inference. It supports MPS (Apple Silicon), CUDA (NVIDIA GPUs), and CPU fallback.

Class Definition

from trash_classificator.segmentation.device_manager import DeviceManager

# Get the best available device
device = DeviceManager.get_device()

Static Methods

get_device() -> torch.device

Automatically detects and returns the optimal PyTorch device based on hardware availability.
Returns

  device (torch.device): A PyTorch device object representing the selected compute device. The selection follows this priority order:
  1. MPS (Metal Performance Shaders) - Apple Silicon GPUs
  2. CUDA - NVIDIA GPUs
  3. CPU - Fallback option

Device Selection Logic

The method uses the following priority-based selection:
if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
The device selection is automatic and prioritizes GPU acceleration when available. MPS is checked first for Apple Silicon users, followed by CUDA for NVIDIA GPU users.

Behavior

  • Checks for MPS availability (Apple Silicon M1/M2/M3 chips)
  • Falls back to CUDA if MPS is not available
  • Falls back to CPU if neither GPU option is available
  • Automatically logs the selected device using log_device()
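Putting the documented behavior together, the class can be sketched as follows. This is a minimal reconstruction from the behavior described on this page, not the actual source (which lives in trash_classificator/segmentation/device_manager.py):

```python
import logging

import torch

log = logging.getLogger(__name__)


class DeviceManager:
    """Sketch of the documented class, reconstructed from its described behavior."""

    @staticmethod
    def get_device() -> torch.device:
        # Priority order: MPS (Apple Silicon) > CUDA (NVIDIA) > CPU fallback
        if torch.backends.mps.is_available():
            device = torch.device("mps")
        elif torch.cuda.is_available():
            device = torch.device("cuda")
        else:
            device = torch.device("cpu")
        DeviceManager.log_device(device)
        return device

    @staticmethod
    def log_device(device: torch.device) -> None:
        # Resolve a human-readable name for the device before logging
        if device.type == "mps":
            device_name = "MPS"
        elif device.type == "cuda":
            device_name = torch.cuda.get_device_name(device)
        else:
            device_name = "CPU"
        log.info(f"Model is using device: {device_name}")
```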

log_device(device: torch.device)

Logs information about the selected device to the console.
Parameters

  device (torch.device, required): The PyTorch device to log information about.

Logging Behavior

  • MPS devices: Logs as “MPS”
  • CUDA devices: Logs the specific GPU name (e.g., “NVIDIA GeForce RTX 3080”)
  • CPU devices: Logs as “CPU”
if device.type == "mps":
    device_name = "MPS"
elif device.type == "cuda":
    device_name = torch.cuda.get_device_name(device)
else:
    device_name = "CPU"
log.info(f"Model is using device: {device_name}")
The device information is automatically logged when using get_device(), so you typically don’t need to call log_device() manually.

Return Specifications

| Method | Return Type | Possible Values | Description |
| --- | --- | --- | --- |
| get_device() | torch.device | mps, cuda, cpu | The optimal available device |
| log_device() | None | N/A | Logs device info, no return value |

Usage Examples

Basic Usage

from trash_classificator.segmentation.device_manager import DeviceManager
from trash_classificator.segmentation.model_loader import ModelLoader

# Automatically select best device
device = DeviceManager.get_device()
# Output: "2026-03-07 10:30:45 - INFO - Model is using device: NVIDIA GeForce RTX 3080"

# Use the device to load a model
loader = ModelLoader(device=device)

Manual Device Logging

import torch
from trash_classificator.segmentation.device_manager import DeviceManager

# Create a device manually
device = torch.device("cuda")

# Log device information
DeviceManager.log_device(device)
# Output: "2026-03-07 10:30:45 - INFO - Model is using device: NVIDIA GeForce RTX 3080"

Check Device Type

from trash_classificator.segmentation.device_manager import DeviceManager

device = DeviceManager.get_device()

if device.type == "cuda":
    print("Using GPU acceleration")
elif device.type == "mps":
    print("Using Apple Silicon GPU")
else:
    print("Using CPU (slower performance)")
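Once a device is selected, standard PyTorch `.to()` calls move tensors and models onto it. This is plain PyTorch usage, independent of DeviceManager (the CPU/CUDA check below mirrors its selection logic without the MPS branch):

```python
import torch

# Select a device the same way DeviceManager does (CPU fallback shown here)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move a tensor onto the selected device
x = torch.randn(2, 3).to(device)

# Computation happens wherever the tensor lives
y = x @ x.T
print(y.device)  # matches the selected device
```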

Complete Pipeline Example

import torch
from trash_classificator.segmentation.device_manager import DeviceManager
from trash_classificator.segmentation.model_loader import ModelLoader

# Step 1: Get optimal device
device = DeviceManager.get_device()
# Automatically logs: "Model is using device: MPS"

# Step 2: Load model on selected device
loader = ModelLoader(device=device)
model = loader.get_model()

# Step 3: Run inference
results = model("trash_image.jpg")

Device Selection Priority

The device selection follows this priority order:
  1. MPS (Metal Performance Shaders)
    • Available on: Apple Silicon (M1, M2, M3+)
    • Best for: MacBook Pro, Mac Studio, Mac Mini with Apple chips
    • Performance: Excellent GPU acceleration
  2. CUDA
    • Available on: Systems with NVIDIA GPUs
    • Best for: Workstations and servers with NVIDIA graphics cards
    • Performance: Excellent GPU acceleration
  3. CPU
    • Available on: All systems
    • Best for: Systems without GPU support
    • Performance: Slower than GPU options
GPU acceleration (MPS or CUDA) can provide 5-10x faster inference compared to CPU processing for deep learning models.
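The exact speedup depends on the model and hardware. One rough way to measure it on your own machine is an illustrative micro-benchmark like the one below; this is not part of the library, and matrix multiplication only approximates deep-learning workloads:

```python
import time

import torch


def time_matmul(device: torch.device, n: int = 1024, iters: int = 10) -> float:
    """Time repeated n-by-n matrix multiplications on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    # Warm-up pass so lazy initialization does not skew the measurement
    _ = a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()  # CUDA kernels launch asynchronously
    start = time.perf_counter()
    for _ in range(iters):
        _ = a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start


cpu_time = time_matmul(torch.device("cpu"))
print(f"CPU: {cpu_time:.3f}s for 10 matmuls")
```

Run the same function with your GPU device (e.g. `torch.device("mps")` or `torch.device("cuda")`) to compare.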

Logging Format

The DeviceManager uses Python’s logging module with the following format:
%(asctime)s - %(levelname)s - %(message)s
Example output:
2026-03-07 10:30:45 - INFO - Model is using device: NVIDIA GeForce RTX 3080
2026-03-07 10:30:46 - INFO - Model is using device: MPS
2026-03-07 10:30:47 - INFO - Model is using device: CPU
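To reproduce this format in your own scripts, configure the root logger before calling DeviceManager. This uses only the standard library logging module, no project-specific code:

```python
import logging

# Same format string shown above
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
)

log = logging.getLogger(__name__)
log.info("Model is using device: CPU")
# e.g. "2026-03-07 10:30:47 - INFO - Model is using device: CPU"
```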

Source Reference

Implementation: trash_classificator/segmentation/device_manager.py:6-27
