Pushduck

S3 uploads for any framework

Philosophy & Scope

What Pushduck does (and doesn't do) - our principles, boundaries, and integration approach

Our Philosophy

Pushduck is a focused upload library, not a platform. We believe in doing one thing exceptionally well:

The fastest, most lightweight way to add S3 file uploads to any web application

This document defines the boundaries of what Pushduck will and won't do, and explains why.


Core Principles

🪶 Lightweight First

Bundle size is a feature, not an afterthought. Every dependency is carefully considered.

We use:

  • aws4fetch (6.4KB) instead of AWS SDK (500KB+)
  • Native fetch() API
  • Zero unnecessary dependencies

Result: Core library stays under 10KB minified + gzipped


🎯 Focused Scope

Do one thing (uploads) exceptionally well, rather than many things poorly.

We believe:

  • Specialized tools beat all-in-one solutions
  • Small, focused libraries are easier to maintain
  • Users prefer composing tools over vendor lock-in

Result: You can replace Pushduck easily if needed, or use it alongside other tools


🔌 Extensibility Over Features

Provide hooks and APIs, not built-in everything.

We provide:

  • Middleware system for custom logic
  • Lifecycle hooks for integration points
  • Type-safe APIs for extension

You implement:

  • Your specific business logic
  • Integration with your services
  • Custom workflows

Result: Maximum flexibility without bloat


📚 Document, Don't Implement

Show users how to integrate, don't build the integration.

We provide:

  • Clear integration patterns
  • Example code
  • Best practices documentation

We don't build:

  • Database adapters
  • Auth providers
  • Email services
  • Analytics platforms

Result: Works with any stack, no vendor lock-in


✅ What Pushduck Does

Core Upload Features

Direct-to-S3 Uploads

Secure presigned URL generation

Upload files directly to S3 without touching your server. Reduces bandwidth costs and improves performance.
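Under the hood, a direct upload is just an HTTP PUT of the file bytes to a presigned URL. A minimal sketch using the native fetch() API (this is the general pattern, not pushduck's internal code; the injectable `fetchImpl` parameter exists only to make the sketch easy to test):

```typescript
// Sketch: PUT a file straight to S3 via a presigned URL with native
// fetch(). In a browser you would simply use the global fetch.
async function putToPresignedUrl(
  url: string,
  file: Blob,
  fetchImpl: typeof fetch = fetch
): Promise<boolean> {
  const res = await fetchImpl(url, {
    method: "PUT",
    headers: { "Content-Type": file.type || "application/octet-stream" },
    body: file,
  });
  return res.ok; // S3 answers 200 on a successful PUT
}
```

Because the browser talks to S3 directly, your server only signs the URL and never relays the file bytes.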

Progress Tracking

Real-time upload monitoring

Track upload progress, speed, and ETA. Per-file and overall progress metrics for multi-file uploads.
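The percent, speed, and ETA figures are simple arithmetic over bytes transferred and elapsed time. A hypothetical helper (plain math, not pushduck's hook surface) might look like:

```typescript
// Illustrative progress math: given bytes transferred so far, the
// total size, and elapsed milliseconds, derive percent, speed, ETA.
function uploadProgress(loadedBytes: number, totalBytes: number, elapsedMs: number) {
  const bytesPerSec = loadedBytes / (elapsedMs / 1000);
  return {
    percent: (loadedBytes / totalBytes) * 100,
    bytesPerSec,
    etaSeconds: (totalBytes - loadedBytes) / bytesPerSec,
  };
}

// uploadProgress(500_000, 1_000_000, 1000)
// → { percent: 50, bytesPerSec: 500000, etaSeconds: 1 }
```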

File Validation

Client and server-side checks

Validate file size, type, count, and custom rules. Prevent invalid uploads before they reach S3.
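As a sketch of what such checks involve, here is a standalone validator in plain TypeScript. The rule shape is hypothetical, not pushduck's actual configuration API:

```typescript
// Illustrative pre-upload validation: size, MIME type, and count.
interface UploadRules {
  maxSizeBytes: number;
  allowedTypes: string[];
  maxFiles: number;
}

function validateFiles(
  files: { name: string; size: number; type: string }[],
  rules: UploadRules
): string[] {
  const errors: string[] = [];
  if (files.length > rules.maxFiles) {
    errors.push(`too many files (${files.length} > ${rules.maxFiles})`);
  }
  for (const f of files) {
    if (f.size > rules.maxSizeBytes) errors.push(`${f.name}: file too large`);
    if (!rules.allowedTypes.includes(f.type)) errors.push(`${f.name}: type not allowed`);
  }
  return errors; // an empty array means the batch is valid
}
```

Running the same checks on the server as well matters because client-side code can always be bypassed.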

Multi-Provider Support

S3-compatible storage

Works with AWS S3, Cloudflare R2, DigitalOcean Spaces, MinIO, and any S3-compatible provider.
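Switching providers typically only changes the endpoint, region, and credentials; the upload code path stays the same. A hypothetical settings shape (the names are illustrative, not pushduck's actual config keys):

```typescript
// Hypothetical per-provider settings: the same code works against any
// S3-compatible endpoint; only these values differ.
const providers = {
  aws:   { endpoint: "https://s3.us-east-1.amazonaws.com", region: "us-east-1" },
  r2:    { endpoint: "https://<account-id>.r2.cloudflarestorage.com", region: "auto" },
  minio: { endpoint: "http://localhost:9000", region: "us-east-1" },
};
```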

Storage Operations

// List files
const files = await storage.list.files({
  prefix: "uploads/",
  maxResults: 50
});

// Delete files
await storage.delete.file("uploads/old.jpg");
await storage.delete.byPrefix("temp/");

// Get metadata
const info = await storage.metadata.getInfo("uploads/doc.pdf");

// Generate download URLs
const url = await storage.download.presignedUrl("uploads/file.pdf", 3600);

What we provide:

  • ✅ List files with pagination and filtering
  • ✅ Delete files (single, batch, by prefix)
  • ✅ Get file metadata (size, date, content-type)
  • ✅ Generate presigned URLs (upload/download)
  • ✅ Check file existence

What we don't provide:

  • ❌ File search/indexing (use Algolia, Elasticsearch)
  • ❌ File versioning (use S3 versioning)
  • ❌ Storage analytics (provide hooks for your analytics)
  • ❌ Duplicate detection (implement via hooks)

Developer Experience

Type-Safe APIs

Full TypeScript support

Intelligent type inference from server to client. Catch errors at compile time.
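The idea behind "inference from server to client" can be sketched in a few lines of generic TypeScript. This shows the pattern, not pushduck's actual internals: whatever the middleware returns becomes the compile-time-checked metadata type in later hooks.

```typescript
// Sketch of hook-to-hook type inference via a generic parameter.
function defineRoute<TMeta>(config: {
  middleware: () => TMeta;
  onUploadComplete: (meta: TMeta) => string;
}): string {
  return config.onUploadComplete(config.middleware());
}

const result = defineRoute({
  middleware: () => ({ userId: "u_123" }),
  // `meta` is inferred as { userId: string } with no annotation needed;
  // a typo like meta.userID would fail to compile.
  onUploadComplete: (meta) => meta.userId,
});
```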

Framework Adapters

Universal compatibility

Works with Next.js, React, Express, Fastify, and more. Web Standards-based.

CLI Tools

Zero-config setup

Interactive setup wizard, automatic provider detection, and project scaffolding.

Testing Utilities

Mock providers

Test your upload flows without hitting real S3. Perfect for CI/CD.
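The shape of such a mock is simple. A hand-rolled in-memory stand-in (illustrative only, not the pushduck testing API) shows the idea:

```typescript
// In-memory stand-in for an S3-like store, so upload flows can be
// exercised in CI without network access or credentials.
class MockStorage {
  private objects = new Map<string, Uint8Array>();

  async put(key: string, data: Uint8Array): Promise<void> {
    this.objects.set(key, data);
  }
  async exists(key: string): Promise<boolean> {
    return this.objects.has(key);
  }
  async list(prefix: string): Promise<string[]> {
    return [...this.objects.keys()].filter((k) => k.startsWith(prefix));
  }
  async remove(key: string): Promise<void> {
    this.objects.delete(key);
  }
}
```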


Optional UI Components

Following the shadcn/ui approach:

# Copy components into your project
npx @pushduck/cli add upload-dropzone
npx @pushduck/cli add file-list

What we provide:

  • ✅ Basic upload UI components (dropzone, file-list, progress-bar)
  • ✅ Headless/unstyled components you can customize
  • ✅ Copy-paste, not installed as dependency

What we don't provide:

  • ❌ Full-featured file manager UI
  • ❌ Image gallery/carousel components
  • ❌ File preview modals
  • ❌ Admin dashboard components

Philosophy: You own the code, you customize it. We provide starting points, not rigid components.


❌ What Pushduck Doesn't Do

File Processing

Out of Scope - Use specialized tools for these tasks

We don't process files. Use these instead:

  • Image optimization: Sharp (best-in-class, battle-tested)
  • Video transcoding: FFmpeg (industry standard)
  • PDF generation: PDFKit (specialized library)
  • Image transformations: Cloudflare Images (edge-optimized)
  • Content moderation: AWS Rekognition, Cloudflare (purpose-built services)

Why not?

  • These tools do it better than we ever could
  • Adding them would balloon our bundle size
  • Creates unnecessary dependencies
  • Limits user choice

Our approach: Document integration patterns

⚠ïļ Bandwidth Note: Server-side processing requires downloading from S3 (inbound) and uploading variants (outbound). This negates the "server never touches files" benefit.

Better options:

  • Client-side preprocessing (before upload) - Zero server bandwidth
  • URL-based transforms (Cloudinary, Imgix) - Zero server bandwidth
  • See the Image Uploads Guide for detailed patterns

// Example: Integrate with Sharp
import sharp from 'sharp';

const router = s3.createRouter({
  imageUpload: s3.image()
    .onUploadComplete(async ({ key }) => {
      // ⚠ïļ Downloads file from S3 to server
      const buffer = await s3.download(key);
      
      // Process with Sharp
      const optimized = await sharp(buffer)
        .resize(800, 600)
        .webp({ quality: 80 })
        .toBuffer();
      
      // ⚠ïļ Uploads processed file back to S3
      await storage.upload.file(optimized, `optimized/${key}`);
    })
});

// ✅ Better: Client-side preprocessing (recommended)
import imageCompression from 'browser-image-compression';

function ImageUpload() {
  const { uploadFiles } = upload.images();
  
  const handleUpload = async (file: File) => {
    // ✅ Compress on client BEFORE upload
    const compressed = await imageCompression(file, {
      maxSizeMB: 1,
      maxWidthOrHeight: 1920,
    });
    
    // Upload already-optimized file
    await uploadFiles([compressed]);
  };
}

Backend Services

Integration Pattern - We provide hooks, you connect services

We don't implement these services:

  • Webhooks: we provide lifecycle hooks; you implement webhook delivery
  • Notifications: we provide the onUploadComplete hook; you implement email/SMS sending
  • Database: we provide file metadata in hooks; you implement the storage logic
  • Queue systems: we provide hooks with context; you implement the queue integration
  • Background jobs: we provide async hook support; you implement the job processing
  • Analytics: we provide hooks with event data; you implement the tracking

Example Integration:

import { db } from '@/lib/database';

const router = s3.createRouter({
  fileUpload: s3.file()
    .onUploadComplete(async ({ file, key, url, metadata }) => {
      // You implement database logic
      await db.files.create({
        data: {
          name: file.name,
          size: file.size,
          url: url,
          s3Key: key,
          userId: metadata.userId,
          uploadedAt: new Date()
        }
      });
    })
});

import { sendWebhook } from '@/lib/webhooks';

const router = s3.createRouter({
  fileUpload: s3.file()
    .onUploadComplete(async ({ file, url }) => {
      // You implement webhook delivery
      await sendWebhook({
        event: 'file.uploaded',
        data: {
          filename: file.name,
          url: url,
          timestamp: new Date().toISOString()
        }
      });
    })
});

import { sendEmail } from '@/lib/email';

const router = s3.createRouter({
  fileUpload: s3.file()
    .onUploadComplete(async ({ file, metadata }) => {
      // You implement email notifications
      await sendEmail({
        to: metadata.userEmail,
        subject: 'File Upload Complete',
        body: `Your file "${file.name}" has been uploaded successfully.`
      });
    })
});

import { queue } from '@/lib/queue';

const router = s3.createRouter({
  fileUpload: s3.file()
    .onUploadComplete(async ({ file, key }) => {
      // You implement queue integration
      await queue.add('process-file', {
        fileKey: key,
        fileName: file.name,
        processType: 'thumbnail-generation'
      });
    })
});

import { analytics } from '@/lib/analytics';

const router = s3.createRouter({
  fileUpload: s3.file()
    .onUploadStart(async ({ file, metadata }) => {
      // Track upload start
      await analytics.track('upload_started', {
        userId: metadata.userId,
        fileSize: file.size,
        fileType: file.type
      });
    })
    .onUploadComplete(async ({ file, url, metadata }) => {
      // Track successful upload
      await analytics.track('upload_completed', {
        userId: metadata.userId,
        fileName: file.name,
        fileSize: file.size,
        fileUrl: url
      });
    })
    .onUploadError(async ({ error, metadata }) => {
      // Track errors
      await analytics.track('upload_failed', {
        userId: metadata.userId,
        error: error.message
      });
    })
});

Why this approach?

  • ✅ You're not locked into our choice of services
  • ✅ Use your existing infrastructure
  • ✅ Switch services without changing upload library
  • ✅ Keeps our bundle size minimal

Platform Features

Not a Platform - Pushduck is a library, not a SaaS

We will never build:

❌ User Management - Use NextAuth, Clerk, Supabase Auth, etc.
❌ Team/Organization Systems - Build in your application
❌ Permission/Role Management - Implement in your middleware
❌ Analytics Dashboards - We provide hooks for your analytics
❌ Admin Panels - Build with your UI framework
❌ Billing/Subscriptions - Use Stripe, Paddle, etc.
❌ API Key Management - Implement in your system
❌ Audit Logs - Log via hooks to your logging service

Why not?

  • Every app has different requirements
  • Would require a backend service (we're a library)
  • Creates vendor lock-in
  • Massive scope creep from our core mission

Our approach: Provide middleware hooks

import { auth } from '@/lib/auth';
import { checkPermission } from '@/lib/permissions';
import { logAudit } from '@/lib/audit';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async ({ req, metadata }) => {
      // YOUR auth system
      const user = await auth.getUser(req);
      if (!user) throw new Error('Unauthorized');
      
      // YOUR permissions system
      if (!checkPermission(user, 'upload:create')) {
        throw new Error('Forbidden');
      }
      
      // YOUR audit logging
      await logAudit({
        userId: user.id,
        action: 'file.upload.started',
        metadata: metadata
      });
      
      return { userId: user.id };
    })
});

Authentication & Authorization

What we provide:

  • ✅ Middleware hooks for auth checks
  • ✅ Access to request context (headers, cookies, etc.)
  • ✅ Integration examples with popular auth providers

What we don't provide:

  • ❌ Built-in auth system
  • ❌ Session management
  • ❌ OAuth providers
  • ❌ API key generation
  • ❌ User database

Example Integrations:

import { auth } from '@/lib/auth';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async ({ req }) => {
      const session = await auth.api.getSession({
        headers: req.headers
      });
      
      if (!session?.user) {
        throw new Error('Please sign in to upload files');
      }
      
      return {
        userId: session.user.id,
        userEmail: session.user.email
      };
    })
});

import { getServerSession } from 'next-auth';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async ({ req }) => {
      const session = await getServerSession();
      
      if (!session?.user) {
        throw new Error('Please sign in to upload files');
      }
      
      return {
        userId: session.user.id,
        userEmail: session.user.email
      };
    })
});

import { auth } from '@clerk/nextjs';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async () => {
      const { userId } = auth();
      
      if (!userId) {
        throw new Error('Unauthorized');
      }
      
      return { userId };
    })
});

import { createServerClient } from '@supabase/ssr';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async ({ req }) => {
      const supabase = createServerClient(/* config */);
      const { data: { user } } = await supabase.auth.getUser();
      
      if (!user) {
        throw new Error('Unauthorized');
      }
      
      return { userId: user.id };
    })
});

import { verifyToken } from '@/lib/auth';

const router = s3.createRouter({
  fileUpload: s3.file()
    .middleware(async ({ req }) => {
      const token = req.headers.get('authorization')?.replace('Bearer ', '');
      
      if (!token) {
        throw new Error('No token provided');
      }
      
      const user = await verifyToken(token);
      
      if (!user) {
        throw new Error('Invalid token');
      }
      
      return { userId: user.id };
    })
});

🎯 The Integration Pattern

This is our core philosophy in action:

// 1. We handle uploads
// 2. You connect your services via hooks
// 3. Everyone wins

const router = s3.createRouter({
  fileUpload: s3.file()
    // YOUR auth
    .middleware(async ({ req }) => {
      const user = await yourAuth.getUser(req);
      return { userId: user.id };
    })
    
    // YOUR business logic  
    .onUploadStart(async ({ file, metadata }) => {
      await yourAnalytics.track('upload_started', {
        userId: metadata.userId,
        fileSize: file.size
      });
    })
    
    // YOUR database
    .onUploadComplete(async ({ file, url, key, metadata }) => {
      await yourDatabase.files.create({
        userId: metadata.userId,
        url: url,
        s3Key: key,
        name: file.name,
        size: file.size
      });
      
      // YOUR notifications
      await yourEmailService.send({
        to: metadata.userEmail,
        template: 'upload-complete',
        data: { fileName: file.name }
      });
      
      // YOUR webhooks
      await yourWebhooks.trigger({
        event: 'file.uploaded',
        data: { url, fileName: file.name }
      });
      
      // YOUR queue
      await yourQueue.add('process-file', {
        fileKey: key,
        userId: metadata.userId
      });

      // YOUR analytics
      await yourAnalytics.track('upload_completed', {
        userId: metadata.userId,
        fileName: file.name,
        fileSize: file.size,
        fileUrl: url,
        fileKey: key
      });
    })
    
    // YOUR error handling
    .onUploadError(async ({ error, metadata }) => {
      await yourErrorTracking.log({
        error: error,
        userId: metadata.userId
      });
    })
});

Benefits:

  • 🪶 Pushduck stays lightweight (only upload logic)
  • 🔌 You use your preferred services
  • 🎯 No vendor lock-in
  • ⚡ No unnecessary code in your bundle
  • 🔧 Maximum flexibility

🤔 Decision Framework

When considering new features, we ask:

✅ Add if:

  1. Core to uploads - Directly helps files get to S3
  2. Universally needed - 80%+ of users need it
  3. Can't be solved externally - Must be part of upload flow
  4. Lightweight - Doesn't balloon bundle size
  5. Framework agnostic - Works everywhere

❌ Don't add if:

  1. Better tools exist - Sharp does image processing better
  2. Service-specific - Requires backend infrastructure
  3. Opinion-heavy - Database choice, auth provider, etc.
  4. UI-specific - Every app needs different UI
  5. Platform feature - User management, billing, etc.

🔌 Provide hooks if:

  1. Common integration point - Many users need it
  2. Can be external - Services can be swapped
  3. Timing matters - Needs to happen at specific point in upload lifecycle

🌟 What This Means For You

As a User

You get:

  • ✅ Lightweight, focused upload library
  • ✅ Freedom to choose your own tools
  • ✅ No vendor lock-in
  • ✅ Clear integration patterns
  • ✅ Stable, predictable API

You're responsible for:

  • 🔧 Choosing and integrating your services
  • 🔧 Building your UI (or copy ours)
  • 🔧 Implementing your business logic
  • 🔧 Managing your infrastructure

As a Contributor

Focus contributions on:

  • ✅ Core upload features (resumable, queuing, etc.)
  • ✅ Framework adapters
  • ✅ Testing utilities
  • ✅ Documentation & examples
  • ✅ Integration guides

We'll reject PRs for:

  • ❌ File processing features
  • ❌ Backend services (webhooks, notifications)
  • ❌ Database adapters
  • ❌ Auth providers
  • ❌ Platform features


💎 Questions?

Have questions about scope or philosophy?

Remember: We're focused on being the best upload library, not the biggest. Every feature we say "no" to keeps Pushduck fast, lightweight, and maintainable.