Overview

ThinkEx supports two storage backends for file uploads:
  1. Supabase Storage (default) - Cloud-based object storage
  2. Local Filesystem - Server-side file storage
The storage backend is configured via environment variables and applies globally to all file uploads.
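That selection can be pictured as a small helper that reads STORAGE_TYPE at startup. The function below is an illustrative sketch, not ThinkEx's actual code; it only shows the documented behavior (defaulting to supabase when the variable is unset):

```typescript
// Illustrative helper: pick the storage backend from STORAGE_TYPE.
// Defaults to 'supabase', matching the documented default.
type StorageBackend = 'supabase' | 'local';

function getStorageBackend(env: Record<string, string | undefined>): StorageBackend {
  const value = (env.STORAGE_TYPE ?? 'supabase').toLowerCase();
  return value === 'local' ? 'local' : 'supabase';
}
```

In a Next.js app this would typically be called with `process.env` once on the server and used to route uploads to the appropriate handler.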

Storage Backends

Supabase Storage

Recommended for production deployments. Provides scalable, reliable cloud storage with CDN support.

Configuration

.env
STORAGE_TYPE=supabase
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
STORAGE_TYPE (string, default: "supabase")
  Set to supabase to use Supabase Storage (default)

NEXT_PUBLIC_SUPABASE_URL (string, required)
  Your Supabase project URL. Found in Project Settings > API > Project URL

SUPABASE_SERVICE_ROLE_KEY (string, required)
  Service role key with storage permissions. Found in Project Settings > API > service_role key.
  Warning: Keep this secret - never expose to client code

Bucket Setup

Create a file-upload bucket in your Supabase project:
  1. Go to Storage in Supabase dashboard
  2. Click New bucket
  3. Name: file-upload
  4. Public bucket: Enable (for public file access)
  5. File size limit: 200MB (or higher if needed)
  6. Allowed MIME types: Leave empty to allow all types

Bucket Policies

For authenticated-only uploads with public reads:
SQL Policy
-- Allow authenticated users to upload
CREATE POLICY "Authenticated users can upload"
ON storage.objects FOR INSERT
TO authenticated
WITH CHECK (bucket_id = 'file-upload');

-- Allow public reads
CREATE POLICY "Public can read files"
ON storage.objects FOR SELECT
TO public
USING (bucket_id = 'file-upload');

-- Users can delete their own files (optional)
CREATE POLICY "Users can delete own files"
ON storage.objects FOR DELETE
TO authenticated
USING (bucket_id = 'file-upload' AND owner = auth.uid());

Upload Flow

  1. Client requests signed URL from /api/upload-url
  2. Server generates 5-minute signed upload URL using service role key
  3. Client uploads file directly to Supabase Storage
  4. File immediately accessible at public URL
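The object keys in this flow follow a `{timestamp}-{random}-{originalName}` pattern, visible in the example URLs later on this page. A hypothetical generator for such keys might look like the following (the exact scheme ThinkEx uses may differ; this only reproduces the shape seen in the examples):

```typescript
// Illustrative key generator producing names like
// "1234567890-abc123-document.pdf".
function makeObjectKey(originalName: string): string {
  const timestamp = Date.now();
  const random = Math.random().toString(36).slice(2, 8);
  // Strip any path components from the client-supplied name.
  const base = originalName.split(/[\\/]/).pop() ?? 'file';
  return `${timestamp}-${random}-${base}`;
}
```

Prefixing a timestamp and random token avoids collisions between users uploading files with the same name.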

Benefits

  • Scalable: No server storage limits
  • Fast: Global CDN distribution
  • Reliable: Automatic backups and replication
  • Secure: Signed URLs with expiration
  • Cost-effective: Pay only for storage and bandwidth used

Limitations

  • Requires Supabase project (free tier available)
  • Network dependency for uploads and downloads
  • Additional configuration complexity

Local Filesystem

Useful for development, testing, or when you need full control over file storage.

Configuration

.env
STORAGE_TYPE=local
UPLOADS_DIR=/path/to/uploads  # Optional
NEXT_PUBLIC_APP_URL=http://localhost:3000
STORAGE_TYPE (string)
  Set to local to use local filesystem storage

UPLOADS_DIR (string, default: "./uploads")
  Directory for storing uploaded files (relative or absolute path). Created automatically if it doesn't exist

NEXT_PUBLIC_APP_URL (string, required)
  Base URL of your application (used to generate file URLs)

Upload Flow

  1. Client requests upload URL from /api/upload-url
  2. Server responds with mode: 'local'
  3. Client falls back to /api/upload-file endpoint
  4. Server saves file to UPLOADS_DIR
  5. Returns public URL: {APP_URL}/api/files/{filename}
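Steps 4 and 5 can be sketched as a small server-side helper. This is an illustrative implementation, not ThinkEx's actual handler; it assumes the caller already has the file bytes and a sanitized filename, and `appUrl` corresponds to NEXT_PUBLIC_APP_URL:

```typescript
import { mkdirSync, writeFileSync } from 'fs';
import { resolve, sep } from 'path';

// Illustrative sketch of the local save step: write the file under
// UPLOADS_DIR and return the public URL the client receives.
function saveUploadLocally(
  data: Buffer,
  filename: string,
  uploadsDir: string,
  appUrl: string
): string {
  const root = resolve(uploadsDir);
  mkdirSync(root, { recursive: true }); // created automatically if missing
  const target = resolve(root, filename);
  // Refuse names that would escape the uploads directory.
  if (!target.startsWith(root + sep)) {
    throw new Error('Invalid filename');
  }
  writeFileSync(target, data);
  return `${appUrl}/api/files/${filename}`;
}
```

Checking the resolved path against the uploads root (rather than the filename string alone) rejects traversal attempts like `../escape.txt`.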

File Serving

You need to create an API route to serve uploaded files:
src/app/api/files/[filename]/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { readFile } from 'fs/promises';
import { join, resolve, sep } from 'path';

export async function GET(
  request: NextRequest,
  { params }: { params: { filename: string } }
) {
  const uploadsDir = resolve(process.env.UPLOADS_DIR || join(process.cwd(), 'uploads'));
  // Resolve the full path and reject anything that escapes the
  // uploads directory (e.g. "../" traversal in the filename).
  const filePath = resolve(uploadsDir, params.filename);
  if (!filePath.startsWith(uploadsDir + sep)) {
    return new NextResponse('Invalid filename', { status: 400 });
  }

  try {
    const file = await readFile(filePath);
    const contentType = getContentType(params.filename);

    return new NextResponse(file, {
      headers: {
        'Content-Type': contentType,
        'Cache-Control': 'public, max-age=31536000',
      },
    });
  } catch (error) {
    return new NextResponse('File not found', { status: 404 });
  }
}

function getContentType(filename: string): string {
  const ext = filename.split('.').pop()?.toLowerCase();
  const types: Record<string, string> = {
    pdf: 'application/pdf',
    jpg: 'image/jpeg',
    jpeg: 'image/jpeg',
    png: 'image/png',
    gif: 'image/gif',
    webp: 'image/webp',
  };
  return types[ext || ''] || 'application/octet-stream';
}

Benefits

  • Simple: No external dependencies
  • Fast: Direct filesystem access
  • Private: Full control over file access
  • Free: No storage costs beyond server disk space
  • Offline: Works without internet connectivity

Limitations

  • Not scalable: Limited by server disk space
  • No CDN: Files served from application server
  • Deployment complexity: Files must be persisted across deployments
  • No backup: You’re responsible for backups
  • Single server: Not suitable for multi-server deployments

Storage Migration

To migrate between storage backends:

Supabase to Local

# Download all files from Supabase bucket
supabase storage download file-upload/ ./uploads/

# Update environment variables
STORAGE_TYPE=local
UPLOADS_DIR=./uploads

Local to Supabase

# Upload all local files to Supabase
for file in ./uploads/*; do
  supabase storage upload file-upload/$(basename "$file") "$file"
done

# Update environment variables
STORAGE_TYPE=supabase
Note: Update database records with new file URLs after migration.
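That URL rewrite can be scripted. The helper below is a hypothetical sketch for the local-to-Supabase direction; the URL shapes match the examples in the next section, but how you iterate over database records depends entirely on your schema:

```typescript
// Illustrative: rewrite a local file URL to its Supabase equivalent
// after a local-to-Supabase migration. Assumes the filename is the
// last path segment in both URL schemes.
function localToSupabaseUrl(
  localUrl: string,
  projectUrl: string,
  bucket = 'file-upload'
): string {
  const filename = localUrl.split('/').pop();
  if (!filename) throw new Error('No filename in URL: ' + localUrl);
  return `${projectUrl}/storage/v1/object/public/${bucket}/${filename}`;
}
```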

File URL Structure

Supabase URLs

https://your-project.supabase.co/storage/v1/object/public/file-upload/1234567890-abc123-document.pdf
  • Globally accessible via CDN
  • No authentication required for public buckets
  • Cached at edge locations

Local URLs

http://localhost:3000/api/files/1234567890-abc123-document.pdf
  • Served by Next.js API route
  • Requires /api/files/[filename] route implementation
  • No CDN caching (unless behind reverse proxy)

Security Best Practices

Supabase Storage

  1. Never expose service role key: Only use in server-side code
  2. Use signed URLs: For temporary access to private files
  3. Set bucket policies: Restrict upload/delete to authenticated users
  4. Validate file types: Check MIME types before upload
  5. Scan for malware: Implement virus scanning if accepting user uploads
  6. Set size limits: Prevent abuse with reasonable file size caps
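For points 4 and 6, a simple server-side pre-upload check might look like this. The allowlist and the 200 MB cap are illustrative (the cap mirrors the bucket setup above) and should match your own policy:

```typescript
// Illustrative pre-upload validation: allowlist MIME types and cap size.
const ALLOWED_TYPES = new Set([
  'application/pdf',
  'image/jpeg',
  'image/png',
  'image/gif',
  'image/webp',
]);
const MAX_BYTES = 200 * 1024 * 1024; // 200 MB

// Returns null when the upload passes, or an error message otherwise.
function validateUpload(mimeType: string, sizeBytes: number): string | null {
  if (!ALLOWED_TYPES.has(mimeType)) return `Type not allowed: ${mimeType}`;
  if (sizeBytes > MAX_BYTES) return `File too large: ${sizeBytes} bytes`;
  return null;
}
```

Note that client-reported MIME types can be spoofed; for stricter checks, inspect the file's magic bytes server-side as well.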

Local Storage

  1. Sanitize filenames: Remove path traversal characters (handled automatically)
  2. Validate paths: Never use user input directly in file paths
  3. Set permissions: Ensure uploads directory has correct permissions (chmod 755)
  4. Implement access control: Add authentication to file serving route if needed
  5. Monitor disk space: Prevent disk full errors
  6. Regular backups: Automate backup of uploads directory
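Point 1 notes that filename sanitization is handled automatically; as a reference, a minimal sanitizer along these lines would cover the common traversal cases (this is a sketch, not ThinkEx's actual implementation):

```typescript
// Illustrative filename sanitizer: drop path separators, parent-directory
// references, and control characters before using a name on disk.
function sanitizeFilename(name: string): string {
  const base = name.split(/[\\/]/).pop() ?? '';
  const cleaned = base
    .replace(/\.\./g, '') // no parent-directory references
    .replace(/[\x00-\x1f]/g, '') // no control characters
    .trim();
  return cleaned.length > 0 ? cleaned : 'unnamed';
}
```

Combine this with the resolved-path check shown in the file-serving route; defense in depth matters because encoded traversal sequences can survive naive string filtering.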

Monitoring and Maintenance

Supabase Storage

-- Check storage usage
SELECT 
  bucket_id,
  COUNT(*) as file_count,
  SUM(metadata->>'size')::bigint as total_bytes,
  pg_size_pretty(SUM(metadata->>'size')::bigint) as total_size
FROM storage.objects
GROUP BY bucket_id;

-- List large files
SELECT 
  name,
  pg_size_pretty((metadata->>'size')::bigint) as size,
  created_at
FROM storage.objects
WHERE bucket_id = 'file-upload'
ORDER BY (metadata->>'size')::bigint DESC
LIMIT 10;

Local Storage

# Check disk usage
du -sh /path/to/uploads

# Count files
find /path/to/uploads -type f | wc -l

# List large files
find /path/to/uploads -type f -exec du -h {} + | sort -rh | head -10

# Clean up old files (older than 30 days)
find /path/to/uploads -type f -mtime +30 -delete

Troubleshooting

Supabase Upload Fails

Error: “Failed to create upload URL”
  • Verify NEXT_PUBLIC_SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY
  • Check bucket exists and is named file-upload
  • Ensure service role key has storage permissions
Error: “Direct upload failed: 403 Forbidden”
  • Check bucket policies allow uploads
  • Verify signed URL hasn’t expired (5 minute limit)
  • Ensure bucket is public or user is authenticated

Local Upload Fails

Error: “ENOENT: no such file or directory”
  • Check UPLOADS_DIR path exists or can be created
  • Verify write permissions on parent directory
  • Use absolute path if relative path issues
Error: “EACCES: permission denied”
  • Fix directory permissions: chmod 755 /path/to/uploads
  • Ensure process user has write access

Files Not Accessible

Supabase: Check bucket is public or RLS policies allow reads.
Local: Implement /api/files/[filename] route (see example above).
