Create repository instances once and reuse them throughout your application. Don’t create new instances in every function.
```typescript
// repositories/index.ts
import { FirestoreRepository } from '@spacelabstech/firestoreorm';
import { db } from '../config/firebase';
import { userSchema, User } from '../schemas';

// ✅ Single instance, reused everywhere
export const userRepo = FirestoreRepository.withSchema<User>(
  db,
  'users',
  userSchema
);
```
Why this matters: Repository initialization is lightweight, but creating instances repeatedly is unnecessary and makes hook management inconsistent. Hooks registered on different instances won’t share state or behavior.
For large datasets, cursor-based pagination is significantly more efficient than offset pagination.
```typescript
// ✅ Scales well - jumps directly to position
const { items, nextCursorId } = await userRepo.query()
  .orderBy('createdAt', 'desc')
  .paginate(20, lastCursorId);

// Next page
const nextPage = await userRepo.query()
  .orderBy('createdAt', 'desc')
  .paginate(20, nextCursorId);
```
Performance Impact: Offset pagination requires Firestore to scan and skip all documents before your offset. For page 100 with 20 items per page, Firestore reads and discards 1,980 documents before returning your results. You’re charged for all those reads.
The efficient approach reads documents once and updates them in batches. The inefficient approach reads all documents, transfers them to your application, then sends them back for updates - doubling network traffic and operation time.
```typescript
// ✅ Aggregation query - charges per 1,000 docs counted
const total = await userRepo.query()
  .where('status', '==', 'active')
  .count();
```
Firestore’s count() aggregation is significantly cheaper than fetching documents. It charges 1 read per 1,000 documents counted, versus 1 read per document when fetching.
When you only need certain fields, use select() to reduce bandwidth.
```typescript
// Reduces network transfer (still charges for a full document read)
const emails = await userRepo.query()
  .where('subscribed', '==', true)
  .select('email', 'name')
  .get();

// Returns: [{ email: '...', name: '...' }, ...]
// instead of full user objects with all fields
```
Billing Note: You’re still charged for reading the full document, but select() reduces network bandwidth and deserialization time, which can improve performance in bandwidth-constrained environments.
When processing large datasets (exports, migrations, batch jobs), use streaming to avoid memory issues.
```typescript
import { createWriteStream } from 'node:fs';

// ✅ Processes one document at a time
const csvStream = createWriteStream('users.csv');
csvStream.write('name,email,status\n');

for await (const user of userRepo.query().stream()) {
  csvStream.write(`${user.name},${user.email},${user.status}\n`);
}

csvStream.end();
```
Performance Cost: Streaming still reads all matching documents, so you’re charged for every document read. Use with appropriate filters and limits. The benefit is memory efficiency, not reduced billing.