Get started with Reciclaje AI and run your first waste detection in under 10 minutes. This guide will walk you through installation, model setup, and running both CLI and GUI detection modes.
## Prerequisites

Before you begin, ensure you have the following installed:

- **Python** 3.8 or later (required by ultralytics)
- **Webcam**: built-in or external USB camera
- **pip**: Python package installer (included with Python)
- **Git**: for cloning the repository

Recommended: 4GB+ RAM and a modern CPU for real-time detection. GPU acceleration (CUDA) is optional but improves performance.
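A quick way to confirm the interpreter meets the version requirement before installing anything (a minimal stdlib sketch, not part of the repository):

```python
import sys

def python_ok(min_version=(3, 8)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= min_version

if __name__ == "__main__":
    status = "OK" if python_ok() else "too old -- upgrade before installing"
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: {status}")
```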
## Installation

### Clone the Repository

Clone the Reciclaje AI repository to your local machine:

```bash
git clone https://github.com/AprendeIngenia/reciclaje_ai.git
cd reciclaje_ai
```
### Install Dependencies

Install the required Python packages:

```bash
pip install ultralytics opencv-python pillow imutils numpy
```

If you encounter permission errors, try `pip install --user`, or create a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install ultralytics opencv-python pillow imutils numpy
```
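If you are unsure whether every dependency landed, this small stdlib script (an illustrative helper, not part of the repo) reports which of the required packages are importable — note that for two of them the import name differs from the pip name:

```python
from importlib.util import find_spec

# pip name -> import name (opencv-python imports as cv2, pillow as PIL)
REQUIRED = {
    "ultralytics": "ultralytics",
    "opencv-python": "cv2",
    "pillow": "PIL",
    "imutils": "imutils",
    "numpy": "numpy",
}

def missing_packages(required=REQUIRED):
    """Return the pip names of packages whose import name cannot be found."""
    return [pip_name for pip_name, import_name in required.items()
            if find_spec(import_name) is None]

if __name__ == "__main__":
    gaps = missing_packages()
    if gaps:
        print("Missing: pip install " + " ".join(gaps))
    else:
        print("All dependencies installed.")
```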
### Download the YOLOv8 Model

Create a `Modelos` directory, then download the trained model from HuggingFace and place `best.pt` inside it. The model file should be located at `Modelos/best.pt` relative to your script location.
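Before running detection, it is worth confirming the file ended up in the expected place. A small sketch (assuming you run it from the repository root):

```python
from pathlib import Path

MODEL_PATH = Path("Modelos") / "best.pt"

def model_status(path=MODEL_PATH):
    """Describe whether the model file is present, with its size if found."""
    if not path.exists():
        return f"Missing: expected the model at {path}"
    size_mb = path.stat().st_size / 1e6
    return f"Found {path} ({size_mb:.1f} MB)"

if __name__ == "__main__":
    print(model_status())
```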
### Verify Camera Access

Test that your webcam is accessible:

```python
import cv2

cap = cv2.VideoCapture(0)
if cap.isOpened():
    print("Camera is working!")
    cap.release()
else:
    print("Camera not found")
```
## Quick Start: CLI Detection

The fastest way to start detecting waste is the command-line interface, `TrashDetect.py`.
### Running CLI Detection

Run the script from the repository root:

```bash
python TrashDetect.py
```

This will:

- Initialize the YOLOv8 model
- Open your webcam feed
- Start detecting waste in real time
- Display bounding boxes and confidence scores
### Understanding the Output

The detection window shows:

- **Red bounding boxes** around detected objects
- **Class labels**: Metal, Glass, Plastic, Carton, or Medical
- **Confidence percentage** (e.g., "Plastic 85%")

Press **ESC** to stop the detection and close the application.
### CLI Code Example

Here's the complete CLI detection script:

```python
# Import libraries
from ultralytics import YOLO
import cv2
import math

# Load model
model = YOLO('Modelos/best.pt')

# Initialize webcam
cap = cv2.VideoCapture(0)
cap.set(3, 1280)  # Width
cap.set(4, 720)   # Height

# Detection classes
clsName = ['Metal', 'Glass', 'Plastic', 'Carton', 'Medical']

# Start inference loop
while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Run YOLO detection
    results = model(frame, stream=True, verbose=False)
    for res in results:
        boxes = res.boxes
        for box in boxes:
            # Get bounding box coordinates
            x1, y1, x2, y2 = box.xyxy[0]
            x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)

            # Ensure coordinates are non-negative
            x1, y1 = max(0, x1), max(0, y1)
            x2, y2 = max(0, x2), max(0, y2)

            # Get class and confidence (0-1, rounded up to two decimals)
            cls = int(box.cls[0])
            conf = math.ceil(box.conf[0] * 100) / 100

            if conf > 0:
                # Draw rectangle and label
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
                cv2.putText(frame, f'{clsName[cls]} {int(conf * 100)}%',
                            (x1, y1 - 20), cv2.FONT_HERSHEY_COMPLEX,
                            1, (0, 0, 255), 2)

    # Display frame
    cv2.imshow("Waste Detect", frame)

    # Exit on ESC key
    if cv2.waitKey(5) == 27:
        break

cap.release()
cv2.destroyAllWindows()
```
## Advanced: GUI Application
For a more user-friendly experience with visual classification feedback, use the GUI application.
### Prerequisites for GUI

Before running the GUI, you'll need:

- **UI assets**: create a `setUp` folder with the required images:
  - `Canva.png` (background)
  - Icon images: `metal.png`, `vidrio.png`, `plastico.png`, `carton.png`, `medical.png`
  - Text images: `metaltxt.png`, `vidriotxt.png`, `plasticotxt.png`, `cartontxt.png`, `medicaltxt.png`
- **Tkinter**: usually included with Python, but verify:

```bash
python -c "import tkinter"
```
### Running GUI Application

The GUI application requires the `setUp` folder with all UI assets. If any are missing, the application will fail to start.
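A quick pre-flight check for those files can save a confusing startup failure. A stdlib sketch (filenames taken from the asset list above; the `setUp` path is relative to where you launch the app):

```python
from pathlib import Path

ASSET_DIR = Path("setUp")
REQUIRED_ASSETS = [
    "Canva.png",
    "metal.png", "vidrio.png", "plastico.png", "carton.png", "medical.png",
    "metaltxt.png", "vidriotxt.png", "plasticotxt.png", "cartontxt.png",
    "medicaltxt.png",
]

def missing_assets(asset_dir=ASSET_DIR, required=REQUIRED_ASSETS):
    """Return the names of required UI images not found in asset_dir."""
    return [name for name in required if not (asset_dir / name).exists()]

if __name__ == "__main__":
    gaps = missing_assets()
    print("All assets present." if not gaps else f"Missing from setUp/: {gaps}")
```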
### GUI Features

- **Real-time detection**: live webcam feed with object-detection overlay
- **Visual classification**: color-coded bounding boxes for each waste type
- **Category display**: dynamic icons showing detected waste categories
- **Confidence scores**: percentage-based accuracy for each detection
## Detection Classes

Reciclaje AI can identify five categories of waste:

| Class | Name | Examples | Bounding box color |
| --- | --- | --- | --- |
| 0 | Metal | Aluminum cans, metal containers, foil | Yellow |
| 1 | Glass | Glass bottles, jars, containers | White |
| 2 | Plastic | Plastic bottles, containers, packaging | Red |
| 3 | Carton | Cardboard boxes, paper cartons, paperboard | Gray |
| 4 | Medical | Medical waste, syringes, medical containers | Blue |
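The class indices and box colors above can be collected into one lookup table for drawing. A sketch (the BGR tuples are plausible matches for the colors listed, not taken verbatim from the repo):

```python
# Class index -> (label, BGR color) for drawing with OpenCV
CLASS_STYLES = {
    0: ("Metal",   (0, 255, 255)),    # yellow
    1: ("Glass",   (255, 255, 255)),  # white
    2: ("Plastic", (0, 0, 255)),      # red
    3: ("Carton",  (128, 128, 128)),  # gray
    4: ("Medical", (255, 0, 0)),      # blue
}

def style_for(cls_index):
    """Look up the label and draw color for a predicted class index."""
    return CLASS_STYLES[cls_index]
```

In the drawing loop you would then pass `style_for(cls)` to `cv2.rectangle` and `cv2.putText` instead of hard-coding red.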
## Troubleshooting

### Camera Not Opening

If your camera doesn't open, try:

```python
# Try different camera indices
cap = cv2.VideoCapture(1)  # or 2, 3, etc.

# On Windows, use DirectShow
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)

# On Linux, try V4L2
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
```
### Model Not Found Error

```
FileNotFoundError: [Errno 2] No such file or directory: 'Modelos/best.pt'
```

**Solution**: ensure the model file is in the correct location:

```bash
ls Modelos/best.pt  # Should show the file
```
### Low Detection Accuracy

The model may struggle with:

- Poor lighting conditions
- Noisy or blurry images
- Unusual perspectives
- Rare waste items not in the training data

Tips for better detection:

- Ensure good lighting
- Hold objects closer to the camera
- Keep the background clean
- Adjust the camera position for a better angle
### CUDA/GPU Issues

YOLOv8 will automatically use the GPU if one is available. To force CPU mode:

```python
model = YOLO('Modelos/best.pt')
results = model(frame, device='cpu')
```
## Next Steps

- **Detection Classes**: learn about the 5 waste categories and their characteristics
- **Model Overview**: understand the YOLOv8 model specifications
- **Camera Configuration**: optimize camera settings for better detection
- **Troubleshooting**: solve common issues and errors
## Example Use Cases

### Basic Detection Script

Modify the detection threshold for more selective results:

```python
# Only show detections with >70% confidence (conf is in the 0-1 range)
conf_threshold = 0.7

if conf > conf_threshold:
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.putText(frame, f'{clsName[cls]} {int(conf * 100)}%',
                (x1, y1 - 20), cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
```
### Save Detection Results

Capture and save images with detections:

```python
import os
from datetime import datetime

# Create the output folder once, before the loop
os.makedirs("detections", exist_ok=True)

# Inside detection loop
if conf > 0:
    # Save annotated frame
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    cv2.imwrite(f"detections/{clsName[cls]}_{timestamp}.jpg", frame)
```
### Count Detections by Category

Track how many items of each type are detected:

```python
counts = {'Metal': 0, 'Glass': 0, 'Plastic': 0, 'Carton': 0, 'Medical': 0}

# Inside detection loop
if conf > 0:
    counts[clsName[cls]] += 1

# Print summary
print(f"Detection Summary: {counts}")
```
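The same tally can be kept with `collections.Counter`, which avoids pre-declaring every category and makes it easy to find the most frequent one:

```python
from collections import Counter

clsName = ['Metal', 'Glass', 'Plastic', 'Carton', 'Medical']
counts = Counter()

# Inside the detection loop you would do: counts[clsName[cls]] += 1
# Simulated class indices stand in for real detections here:
for cls in [2, 2, 0, 1]:
    counts[clsName[cls]] += 1

print(f"Detection Summary: {dict(counts)}")
# most_common() sorts categories by frequency
print(counts.most_common(1))  # [('Plastic', 2)]
```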