
Overview

TrashDetect.py is a command-line interface for real-time waste detection using your webcam. It provides a lightweight, OpenCV-based detection window with bounding boxes and confidence scores.
TrashDetect.py requires a connected camera (default: camera index 0) and the trained YOLOv8 model at Modelos/best.pt.

Running the CLI Detector

1. Navigate to Project Directory

Open your terminal and navigate to the Reciclaje AI source directory:
cd /path/to/reciclaje-ai/source
2. Run TrashDetect.py

Execute the detection script:
python TrashDetect.py
The script will immediately start capturing video from your default webcam (camera index 0).
3. Interact with Detection Window

A window titled “Waste Detect” will appear showing:
  • Live video feed from your camera
  • Red bounding boxes around detected waste items
  • Classification labels (Metal, Glass, Plastic, Carton, Medical)
  • Confidence percentages
To exit, press ESC (key code 27).

Detection Output

The CLI detector provides two types of output:

Visual Output

The OpenCV window displays:
  • Bounding boxes: Red rectangles (BGR: 0, 0, 255) around detected objects
  • Labels: Class name and confidence percentage above each detection
  • Font: HERSHEY_COMPLEX with size 1 and thickness 2
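The overlay text is a plain formatted string built from the class name and confidence. A minimal sketch of how it is composed (the `make_label` helper is hypothetical; the class list and format string come from TrashDetect.py):

```python
# Class names in the order the model was trained (from TrashDetect.py)
clsName = ['Metal', 'Glass', 'Plastic', 'Carton', 'Medical']

def make_label(cls: int, conf: float) -> str:
    """Build the text drawn above a bounding box, e.g. 'Plastic 100%'."""
    return f'{clsName[cls]} {int(conf * 100)}%'

print(make_label(2, 1))  # Plastic 100%
```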

Console Output

Real-time detection information is printed to the console:
Clase: 2 Confidence: 1
Clase: 0 Confidence: 1
Clase: 1 Confidence: 1
Where:
  • Clase (Class): Integer ID (0=Metal, 1=Glass, 2=Plastic, 3=Carton, 4=Medical)
  • Confidence: Detection confidence on a 0-1 scale. Because the script applies math.ceil to the raw value, any positive confidence is printed as 1.
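The "Confidence: 1" values above are a direct consequence of math.ceil: any confidence in (0, 1] rounds up to exactly 1. A quick illustration:

```python
import math

# Raw YOLO confidences are floats between 0 and 1...
raw_confidences = [0.31, 0.87, 0.995]

# ...but math.ceil (as used in TrashDetect.py) rounds every positive
# value up to 1, which is why the console always prints "Confidence: 1".
printed = [math.ceil(c) for c in raw_confidences]
print(printed)  # [1, 1, 1]
```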

Detected Waste Classes

The model can detect five types of waste materials:

Metal

Class ID: 0. Recyclable metals such as aluminum cans and steel containers.

Glass

Class ID: 1. Glass bottles, jars, and containers.

Plastic

Class ID: 2. Plastic bottles, containers, and packaging.

Carton

Class ID: 3. Cardboard boxes and paper cartons.

Medical

Class ID: 4. Medical waste requiring special disposal.
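The class cards above map directly to list indices in the script. A small helper (hypothetical, not part of TrashDetect.py) to translate the integer IDs printed to the console into labels:

```python
# Class names indexed by the integer IDs that TrashDetect.py prints
CLASS_NAMES = ['Metal', 'Glass', 'Plastic', 'Carton', 'Medical']

def id_to_name(class_id: int) -> str:
    """Translate a YOLO class ID into its human-readable label."""
    if not 0 <= class_id < len(CLASS_NAMES):
        raise ValueError(f'unknown class ID: {class_id}')
    return CLASS_NAMES[class_id]

print(id_to_name(3))  # Carton
```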

Code Structure

Here’s the complete TrashDetect.py implementation:
# Import libraries
from ultralytics import YOLO
import cv2
import math

# Model
model = YOLO('Modelos/best.pt')

# Capture
cap = cv2.VideoCapture(0)
cap.set(3, 1280)  # frame width
cap.set(4, 720)   # frame height

# Classes
clsName = ['Metal', 'Glass', 'Plastic', 'Carton', 'Medical']

# Inference loop
while True:
    # Read a frame
    ret, frame = cap.read()

    # YOLO inference
    results = model(frame, stream=True, verbose=False)
    for res in results:
        # Boxes
        boxes = res.boxes
        for box in boxes:
            # Bounding box coordinates
            x1, y1, x2, y2 = box.xyxy[0]
            x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)

            # Clamp negative coordinates to zero
            if x1 < 0: x1 = 0
            if y1 < 0: y1 = 0
            if x2 < 0: x2 = 0
            if y2 < 0: y2 = 0

            # Class ID
            cls = int(box.cls[0])

            # Confidence (math.ceil rounds any positive value up to 1)
            conf = math.ceil(box.conf[0])
            print(f"Clase: {cls} Confidence: {conf}")

            if conf > 0:
                # Draw box and label
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
                cv2.putText(frame, f'{clsName[cls]} {int(conf * 100)}%', (x1, y1 - 20),
                            cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)

    # Show window
    cv2.imshow("Waste Detect", frame)

    # Exit on ESC
    t = cv2.waitKey(5)
    if t == 27:
        break

cap.release()
cv2.destroyAllWindows()

Key Features

Model Inference

TrashDetect.py:23
results = model(frame, stream=True, verbose=False)
  • stream=True: Enables generator-based inference for memory efficiency
  • verbose=False: Suppresses detailed logging output

Bounding Box Validation

TrashDetect.py:33-36
if x1 < 0: x1 = 0
if y1 < 0: y1 = 0
if x2 < 0: x2 = 0
if y2 < 0: y2 = 0
Clamps negative coordinates to zero before drawing. Note that this guards only the zero side; coordinates beyond the frame's width or height are not clamped.
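A fuller variant would also clamp against the frame's width and height. A sketch (the `clamp_box` helper is hypothetical, not part of TrashDetect.py; the 1280x720 dimensions match the script's capture settings):

```python
def clamp_box(x1, y1, x2, y2, width, height):
    """Clamp box corners into [0, width] x [0, height]."""
    x1 = min(max(x1, 0), width)
    y1 = min(max(y1, 0), height)
    x2 = min(max(x2, 0), width)
    y2 = min(max(y2, 0), height)
    return x1, y1, x2, y2

# A box spilling off the left and right edges of a 1280x720 frame:
print(clamp_box(-15, 40, 1400, 500, 1280, 720))  # (0, 40, 1280, 500)
```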

Confidence Thresholding

TrashDetect.py:45
if conf > 0:
Intended to skip low-confidence detections, but since math.ceil rounds any positive confidence up to 1, this check passes for every detection the model returns.
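A conventional threshold would compare the raw float confidence before any rounding. A hypothetical sketch (not what the script currently does; the 0.5 cutoff is an assumption to tune per model):

```python
CONF_THRESHOLD = 0.5  # hypothetical cutoff; tune for your model

def keep_detection(raw_conf: float) -> bool:
    """Keep only detections whose raw confidence clears the threshold."""
    return raw_conf > CONF_THRESHOLD

# Raw confidences as YOLO would report them, before math.ceil:
detections = [0.92, 0.41, 0.77, 0.12]
kept = [c for c in detections if keep_detection(c)]
print(kept)  # [0.92, 0.77]
```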

Keyboard Controls

Press ESC at any time to exit the detection loop and close the application gracefully.
  • ESC (27): Exit application and release camera

Troubleshooting

If you see a camera initialization error:
  1. Ensure your webcam is connected
  2. Check if another application is using the camera
  3. Try changing the camera index in line 10:
cap = cv2.VideoCapture(1)  # Try index 1, 2, etc.
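Rather than guessing indices one at a time, a small probe can try each in turn. A sketch (the `probe_cameras` helper is hypothetical; the `opener` parameter exists only so the logic can be exercised without camera hardware):

```python
def probe_cameras(max_index, opener):
    """Return the indices in range(max_index) whose capture opens.

    `opener` is typically cv2.VideoCapture; it is injected as a
    parameter so the function can be tested without a real camera.
    """
    available = []
    for idx in range(max_index):
        cap = opener(idx)
        if cap.isOpened():
            available.append(idx)
        cap.release()
    return available

# Usage with OpenCV (requires connected cameras):
#   import cv2
#   print(probe_cameras(4, cv2.VideoCapture))
```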
Verify the model path exists:
ls Modelos/best.pt
If missing, ensure you’ve trained the model or downloaded the pre-trained weights.
Detection accuracy depends on:
  • Proper lighting conditions
  • Camera positioning and angle
  • Distance to objects (optimal: 30-100cm)
  • Object visibility and size in frame
See Camera Configuration for optimization tips.
To improve performance:
  1. Reduce camera resolution (see Camera Configuration)
  2. Ensure GPU support is enabled for YOLOv8
  3. Close other resource-intensive applications

Next Steps

GUI Application

Learn about the full-featured Tkinter GUI interface

Camera Configuration

Optimize camera settings for best detection results
