Philosophy & Scope
What Pushduck does (and doesn't do) - our principles, boundaries, and integration approach
Our Philosophy
Pushduck is a focused upload library, not a platform. We believe in doing one thing exceptionally well:
The fastest, most lightweight way to add S3 file uploads to any web application
This document defines the boundaries of what Pushduck will and won't do, and explains why.
Core Principles
🪶 Lightweight First
Bundle size is a feature, not an afterthought. Every dependency is carefully considered.
We use:
- `aws4fetch` (6.4KB) instead of the AWS SDK (500KB+)
- Native `fetch()` API - zero unnecessary dependencies

Result: Core library stays under 10KB minified + gzipped
🎯 Focused Scope
Do one thing (uploads) exceptionally well, rather than many things poorly.
We believe:
- Specialized tools beat all-in-one solutions
- Small, focused libraries are easier to maintain
- Users prefer composing tools over vendor lock-in
Result: You can replace Pushduck easily if needed, or use it alongside other tools
🔌 Extensibility Over Features
Provide hooks and APIs, not built-in everything.
We provide:
- Middleware system for custom logic
- Lifecycle hooks for integration points
- Type-safe APIs for extension
You implement:
- Your specific business logic
- Integration with your services
- Custom workflows
Result: Maximum flexibility without bloat
📝 Document, Don't Implement
Show users how to integrate, don't build the integration.
We provide:
- Clear integration patterns
- Example code
- Best practices documentation
We don't build:
- Database adapters
- Auth providers
- Email services
- Analytics platforms
Result: Works with any stack, no vendor lock-in
✅ What Pushduck Does
Core Upload Features
Direct-to-S3 Uploads
Secure presigned URL generation
Upload files directly to S3 without touching your server. Reduces bandwidth costs and improves performance.
Progress Tracking
Real-time upload monitoring
Track upload progress, speed, and ETA. Per-file and overall progress metrics for multi-file uploads.
File Validation
Client and server-side checks
Validate file size, type, count, and custom rules. Prevent invalid uploads before they reach S3.
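The rule names below (`maxSizeBytes`, `allowedTypes`, `maxFiles`) are illustrative, not Pushduck's actual schema; this standalone sketch just shows the kind of checks such a validation layer performs before any bytes reach S3:

```typescript
// Illustrative rule shape - not Pushduck's actual API.
interface ValidationRules {
  maxSizeBytes: number;
  allowedTypes: string[];
  maxFiles: number;
}

interface FileLike {
  name: string;
  size: number;
  type: string;
}

// Collects every violation instead of failing on the first one,
// so the UI can show all problems at once.
function validateFiles(files: FileLike[], rules: ValidationRules): string[] {
  const errors: string[] = [];
  if (files.length > rules.maxFiles) {
    errors.push(`Too many files: ${files.length} > ${rules.maxFiles}`);
  }
  for (const file of files) {
    if (file.size > rules.maxSizeBytes) {
      errors.push(`${file.name} exceeds ${rules.maxSizeBytes} bytes`);
    }
    if (!rules.allowedTypes.includes(file.type)) {
      errors.push(`${file.name} has disallowed type ${file.type}`);
    }
  }
  return errors;
}

// Example: one oversized file, one disallowed type
const errors = validateFiles(
  [
    { name: "big.png", size: 10_000_000, type: "image/png" },
    { name: "doc.exe", size: 1_000, type: "application/x-msdownload" },
  ],
  { maxSizeBytes: 5_000_000, allowedTypes: ["image/png", "image/jpeg"], maxFiles: 5 }
);
console.log(errors.length); // 2
```

Running the same rules on both client and server means fast feedback in the browser plus a trust boundary on the server.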
Multi-Provider Support
S3-compatible storage
Works with AWS S3, Cloudflare R2, DigitalOcean Spaces, MinIO, and any S3-compatible provider.
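Because every S3-compatible provider speaks the same API, "multi-provider support" mostly comes down to swapping the endpoint and credentials. The endpoints below are illustrative placeholders, not a confirmed Pushduck config shape:

```typescript
// Illustrative endpoints only - the point is that switching providers
// is a configuration change, not a code change.
interface ProviderConfig {
  endpoint: string;
  region: string;
  bucket: string;
}

const providers: Record<string, ProviderConfig> = {
  awsS3: {
    endpoint: "https://s3.us-east-1.amazonaws.com",
    region: "us-east-1",
    bucket: "my-bucket",
  },
  cloudflareR2: {
    endpoint: "https://<account-id>.r2.cloudflarestorage.com",
    region: "auto",
    bucket: "my-bucket",
  },
  minio: {
    endpoint: "http://localhost:9000",
    region: "us-east-1",
    bucket: "my-bucket",
  },
};

// Upload code stays identical regardless of which entry you pick.
console.log(Object.keys(providers).length); // 3
```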
Storage Operations
```typescript
// List files
const files = await storage.list.files({
  prefix: "uploads/",
  maxResults: 50
});

// Delete files
await storage.delete.file("uploads/old.jpg");
await storage.delete.byPrefix("temp/");

// Get metadata
const info = await storage.metadata.getInfo("uploads/doc.pdf");

// Generate download URLs
const url = await storage.download.presignedUrl("uploads/file.pdf", 3600);
```

What we provide:
- ✅ List files with pagination and filtering
- ✅ Delete files (single, batch, by prefix)
- ✅ Get file metadata (size, date, content-type)
- ✅ Generate presigned URLs (upload/download)
- ✅ Check file existence

What we don't provide:
- ❌ File search/indexing (use Algolia, Elasticsearch)
- ❌ File versioning (use S3 versioning)
- ❌ Storage analytics (provide hooks for your analytics)
- ❌ Duplicate detection (implement via hooks)
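As one example of what "implement via hooks" can look like, duplicate detection reduces to hashing file content in an upload hook and checking the hash against your own records. A minimal standalone sketch, where the in-memory `seenHashes` set stands in for your database:

```typescript
import { createHash } from "node:crypto";

// Illustrative in-memory store; in practice you'd query your database.
const seenHashes = new Set<string>();

// Hash file bytes; identical content yields an identical key.
function contentHash(bytes: Uint8Array): string {
  return createHash("sha256").update(bytes).digest("hex");
}

// Returns true the first time content is seen, false for duplicates.
function registerUpload(bytes: Uint8Array): boolean {
  const hash = contentHash(bytes);
  if (seenHashes.has(hash)) return false;
  seenHashes.add(hash);
  return true;
}

const a = new TextEncoder().encode("hello");
console.log(registerUpload(a)); // true  (first upload)
console.log(registerUpload(a)); // false (duplicate)
```

In a real integration you would run this inside an `onUploadComplete` hook and decide yourself whether duplicates get deleted, deduplicated, or simply flagged.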
Developer Experience
Type-Safe APIs
Full TypeScript support
Intelligent type inference from server to client. Catch errors at compile time.
Framework Adapters
Universal compatibility
Works with Next.js, React, Express, Fastify, and more. Web Standards-based.
CLI Tools
Zero-config setup
Interactive setup wizard, automatic provider detection, and project scaffolding.
Testing Utilities
Mock providers
Test your upload flows without hitting real S3. Perfect for CI/CD.
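Pushduck's actual test helpers may differ; the general pattern is to swap the storage layer for a recording fake so tests never touch real S3. A hypothetical sketch (`MockStorage` and `uploadAvatar` are illustrative names, not the library's API):

```typescript
// Records uploads instead of performing them.
class MockStorage {
  uploaded = new Map<string, number>();

  putObject(key: string, bytes: Uint8Array): void {
    this.uploaded.set(key, bytes.length);
  }

  exists(key: string): boolean {
    return this.uploaded.has(key);
  }
}

// Code under test: stores a file under a fixed key.
function uploadAvatar(storage: MockStorage, bytes: Uint8Array): string {
  const key = "uploads/avatar.png";
  storage.putObject(key, bytes);
  return key;
}

const storage = new MockStorage();
const key = uploadAvatar(storage, new TextEncoder().encode("fake"));
console.log(storage.exists(key)); // true
```

Because the fake records keys and sizes, assertions in CI can verify the full flow deterministically, with no credentials or network access.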
Optional UI Components
Following the shadcn/ui approach:
```bash
# Copy components into your project
npx @pushduck/cli add upload-dropzone
npx @pushduck/cli add file-list
```

What we provide:
- ✅ Basic upload UI components (dropzone, file-list, progress-bar)
- ✅ Headless/unstyled components you can customize
- ✅ Copy-paste, not installed as a dependency

What we don't provide:
- ❌ Full-featured file manager UI
- ❌ Image gallery/carousel components
- ❌ File preview modals
- ❌ Admin dashboard components
Philosophy: You own the code, you customize it. We provide starting points, not rigid components.
❌ What Pushduck Doesn't Do
File Processing
Out of Scope - Use specialized tools for these tasks
We don't process files. Use these instead:
| Task | Recommended Tool | Why |
|---|---|---|
| Image optimization | Sharp | Best-in-class, battle-tested |
| Video transcoding | FFmpeg | Industry standard |
| PDF generation | PDFKit | Specialized library |
| Image transformations | Cloudflare Images | Edge-optimized |
| Content moderation | AWS Rekognition, Cloudflare | Purpose-built services |
Why not?
- These tools do it better than we ever could
- Adding them would balloon our bundle size
- Creates unnecessary dependencies
- Limits user choice
Our approach: Document integration patterns
⚠️ Bandwidth Note: Server-side processing requires downloading from S3 (inbound) and uploading variants (outbound). This negates the "server never touches files" benefit.
Better options:
- Client-side preprocessing (before upload) - Zero server bandwidth
- URL-based transforms (Cloudinary, Imgix) - Zero server bandwidth
- See Image Uploads Guide for detailed patterns
```typescript
// Example: Integrate with Sharp
import sharp from 'sharp';

const router = s3.createRouter({
  imageUpload: s3.image()
    .onUploadComplete(async ({ key }) => {
      // ⚠️ Downloads file from S3 to server
      const buffer = await s3.download(key);

      // Process with Sharp
      const optimized = await sharp(buffer)
        .resize(800, 600)
        .webp({ quality: 80 })
        .toBuffer();

      // ⚠️ Uploads processed file back to S3
      await storage.upload.file(optimized, `optimized/${key}`);
    })
});
```

```typescript
// ✅ Better: Client-side preprocessing (recommended)
import imageCompression from 'browser-image-compression';

function ImageUpload() {
  const { uploadFiles } = upload.images();

  const handleUpload = async (file: File) => {
    // ✅ Compress on client BEFORE upload
    const compressed = await imageCompression(file, {
      maxSizeMB: 1,
      maxWidthOrHeight: 1920,
    });

    // Upload already-optimized file
    await uploadFiles([compressed]);
  };
}
```

Backend Services
Integration Pattern - We provide hooks, you connect services
We don't implement these services:
| Service | What We Provide | You Implement |
|---|---|---|
| Webhooks | Lifecycle hooks | Webhook delivery |
| Notifications | onUploadComplete hook | Email/SMS sending |
| Database | File metadata in hooks | DB storage logic |
| Queue Systems | Hooks with context | Queue integration |
| Background Jobs | Async hook support | Job processing |
| Analytics | Hooks with event data | Analytics tracking |
Example Integration:
```typescript
// Database: store file metadata on completion
import { db } from '@/lib/database';

const router = s3.createRouter({
  fileUpload: s3.file()
    .onUploadComplete(async ({ file, key, url, metadata }) => {
      // You implement database logic
      await db.files.create({
        data: {
          name: file.name,
          size: file.size,
          url: url,
          s3Key: key,
          userId: metadata.userId,
          uploadedAt: new Date()
        }
      });
    })
});
```

```typescript
// Webhooks: deliver events to your endpoints
import { sendWebhook } from '@/lib/webhooks';

const router = s3.createRouter({
  fileUpload: s3.file()
    .onUploadComplete(async ({ file, url }) => {
      // You implement webhook delivery
      await sendWebhook({
        event: 'file.uploaded',
        data: {
          filename: file.name,
          url: url,
          timestamp: new Date().toISOString()
        }
      });
    })
});
```

```typescript
// Notifications: send email on completion
import { sendEmail } from '@/lib/email';

const router = s3.createRouter({
  fileUpload: s3.file()
    .onUploadComplete(async ({ file, metadata }) => {
      // You implement email notifications
      await sendEmail({
        to: metadata.userEmail,
        subject: 'File Upload Complete',
        body: `Your file "${file.name}" has been uploaded successfully.`
      });
    })
});
```

```typescript
// Queue: enqueue background processing
import { queue } from '@/lib/queue';

const router = s3.createRouter({
  fileUpload: s3.file()
    .onUploadComplete(async ({ file, key }) => {
      // You implement queue integration
      await queue.add('process-file', {
        fileKey: key,
        fileName: file.name,
        processType: 'thumbnail-generation'
      });
    })
});
```

```typescript
// Analytics: track the full upload lifecycle
import { analytics } from '@/lib/analytics';

const router = s3.createRouter({
  fileUpload: s3.file()
    .onUploadStart(async ({ file, metadata }) => {
      // Track upload start
      await analytics.track('upload_started', {
        userId: metadata.userId,
        fileSize: file.size,
        fileType: file.type
      });
    })
    .onUploadComplete(async ({ file, url, metadata }) => {
      // Track successful upload
      await analytics.track('upload_completed', {
        userId: metadata.userId,
        fileName: file.name,
        fileSize: file.size,
        fileUrl: url
      });
    })
    .onUploadError(async ({ error, metadata }) => {
      // Track errors
      await analytics.track('upload_failed', {
        userId: metadata.userId,
        error: error.message
      });
    })
});
```

Why this approach?
- ✅ You're not locked into our choice of services
- ✅ Use your existing infrastructure
- ✅ Switch services without changing your upload library
- ✅ Keeps our bundle size minimal
Platform Features
Not a Platform - Pushduck is a library, not a SaaS
We will never build:
❌ User Management - Use NextAuth, Clerk, Supabase Auth, etc.
❌ Team/Organization Systems - Build in your application
❌ Permission/Role Management - Implement in your middleware
❌ Analytics Dashboards - We provide hooks for your analytics
❌ Admin Panels - Build with your UI framework
❌ Billing/Subscriptions - Use Stripe, Paddle, etc.
❌ API Key Management - Implement in your system
❌ Audit Logs - Log via hooks to your logging service
Why not?
- Every app has different requirements
- Would require a backend service (we're a library)
- Creates vendor lock-in
- Massive scope creep from our core mission
Our approach: Provide middleware hooks
```typescript
import { auth } from '@/lib/auth';
import { checkPermission } from '@/lib/permissions';
import { logAudit } from '@/lib/audit';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async ({ req, metadata }) => {
      // YOUR auth system
      const user = await auth.getUser(req);
      if (!user) throw new Error('Unauthorized');

      // YOUR permissions system
      if (!checkPermission(user, 'upload:create')) {
        throw new Error('Forbidden');
      }

      // YOUR audit logging
      await logAudit({
        userId: user.id,
        action: 'file.upload.started',
        metadata: metadata
      });

      return { userId: user.id };
    })
});
```

Authentication & Authorization
What we provide:
- ✅ Middleware hooks for auth checks
- ✅ Access to request context (headers, cookies, etc.)
- ✅ Integration examples with popular auth providers

What we don't provide:
- ❌ Built-in auth system
- ❌ Session management
- ❌ OAuth providers
- ❌ API key generation
- ❌ User database
Example Integrations:
```typescript
// Session-based auth (e.g. Better Auth)
import { auth } from '@/lib/auth';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async ({ req }) => {
      const session = await auth.api.getSession({
        headers: req.headers
      });
      if (!session?.user) {
        throw new Error('Please sign in to upload files');
      }
      return {
        userId: session.user.id,
        userEmail: session.user.email
      };
    })
});
```

```typescript
// NextAuth
import { getServerSession } from 'next-auth';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async ({ req }) => {
      const session = await getServerSession();
      if (!session?.user) {
        throw new Error('Please sign in to upload files');
      }
      return {
        userId: session.user.id,
        userEmail: session.user.email
      };
    })
});
```

```typescript
// Clerk
import { auth } from '@clerk/nextjs';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async () => {
      const { userId } = auth();
      if (!userId) {
        throw new Error('Unauthorized');
      }
      return { userId };
    })
});
```

```typescript
// Supabase
import { createServerClient } from '@supabase/ssr';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async ({ req }) => {
      const supabase = createServerClient(/* config */);
      const { data: { user } } = await supabase.auth.getUser();
      if (!user) {
        throw new Error('Unauthorized');
      }
      return { userId: user.id };
    })
});
```

```typescript
// Custom JWT verification
import { verifyToken } from '@/lib/auth';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async ({ req }) => {
      const token = req.headers.get('authorization')?.replace('Bearer ', '');
      if (!token) {
        throw new Error('No token provided');
      }
      const user = await verifyToken(token);
      if (!user) {
        throw new Error('Invalid token');
      }
      return { userId: user.id };
    })
});
```

🎯 The Integration Pattern
This is our core philosophy in action:
```typescript
// 1. We handle uploads
// 2. You connect your services via hooks
// 3. Everyone wins

const router = s3.createRouter({
  fileUpload: s3.file()
    // YOUR auth
    .middleware(async ({ req }) => {
      const user = await yourAuth.getUser(req);
      return { userId: user.id };
    })
    // YOUR business logic
    .onUploadStart(async ({ file, metadata }) => {
      await yourAnalytics.track('upload_started', {
        userId: metadata.userId,
        fileSize: file.size
      });
    })
    // YOUR database
    .onUploadComplete(async ({ file, url, key, metadata }) => {
      await yourDatabase.files.create({
        userId: metadata.userId,
        url: url,
        s3Key: key,
        name: file.name,
        size: file.size
      });

      // YOUR notifications
      await yourEmailService.send({
        to: metadata.userEmail,
        template: 'upload-complete',
        data: { fileName: file.name }
      });

      // YOUR webhooks
      await yourWebhooks.trigger({
        event: 'file.uploaded',
        data: { url, fileName: file.name }
      });

      // YOUR queue
      await yourQueue.add('process-file', {
        fileKey: key,
        userId: metadata.userId
      });

      // YOUR analytics
      await yourAnalytics.track('upload_completed', {
        userId: metadata.userId,
        fileName: file.name,
        fileSize: file.size,
        fileUrl: url,
        fileKey: key
      });
    })
    // YOUR error handling
    .onUploadError(async ({ error, metadata }) => {
      await yourErrorTracking.log({
        error: error,
        userId: metadata.userId
      });
    })
});
```

Benefits:
- 🪶 Pushduck stays lightweight (only upload logic)
- 🔌 You use your preferred services
- 🎯 No vendor lock-in
- ⚡ No unnecessary code in your bundle
- 🔧 Maximum flexibility
🤔 Decision Framework
When considering new features, we ask:
✅ Add if:
- Core to uploads - Directly helps files get to S3
- Universally needed - 80%+ of users need it
- Can't be solved externally - Must be part of upload flow
- Lightweight - Doesn't balloon bundle size
- Framework agnostic - Works everywhere
❌ Don't add if:
- Better tools exist - Sharp does image processing better
- Service-specific - Requires backend infrastructure
- Opinion-heavy - Database choice, auth provider, etc.
- UI-specific - Every app needs different UI
- Platform feature - User management, billing, etc.
🔌 Provide hooks if:
- Common integration point - Many users need it
- Can be external - Services can be swapped
- Timing matters - Needs to happen at specific point in upload lifecycle
📋 What This Means For You
As a User
You get:
- ✅ Lightweight, focused upload library
- ✅ Freedom to choose your own tools
- ✅ No vendor lock-in
- ✅ Clear integration patterns
- ✅ Stable, predictable API

You're responsible for:
- 🔧 Choosing and integrating your services
- 🔧 Building your UI (or copying ours)
- 🔧 Implementing your business logic
- 🔧 Managing your infrastructure
As a Contributor
Focus contributions on:
- ✅ Core upload features (resumable, queuing, etc.)
- ✅ Framework adapters
- ✅ Testing utilities
- ✅ Documentation & examples
- ✅ Integration guides

We'll reject PRs for:
- ❌ File processing features
- ❌ Backend services (webhooks, notifications)
- ❌ Database adapters
- ❌ Auth providers
- ❌ Platform features
📚 Further Reading
Roadmap
See what we're building next
Our development roadmap and planned features
Contributing
Help build Pushduck
Guidelines for contributing to the project
Integration Guides
Connect your services
Patterns for integrating databases, auth, notifications, and more
Examples
Real-world implementations
Complete examples showing integration patterns
💬 Questions?
Have questions about scope or philosophy?
- 🗣 GitHub Discussions
- 💬 Discord Community
- 🐛 GitHub Issues
Remember: We're focused on being the best upload library, not the biggest. Every feature we say "no" to keeps Pushduck fast, lightweight, and maintainable.