# AI & LLM Integration (/docs/ai-integration)
## AI & LLM Features
Pushduck documentation provides AI-friendly endpoints that make it easy for large language models (LLMs) and automated tools to access and process our documentation content.
## Available Endpoints
### 📚 Complete Documentation Export
Access all documentation content in a single, structured format:
```
GET /llms.txt
```
This endpoint returns all documentation pages in a clean, AI-readable format with:
* Page titles and URLs
* Descriptions and metadata
* Full content with proper formatting
* Structured sections and hierarchies
**Example Usage:**
```bash
curl https://your-domain.com/llms.txt
```
### 📄 Individual Page Access
Access any documentation page's raw content by appending `.mdx` to its URL:
```
GET /docs/{page-path}.mdx
```
**Examples:**
* `/docs/quick-start.mdx` - Quick start guide content
* `/docs/api/client/use-upload-route.mdx` - Hook documentation
* `/docs/providers/aws-s3.mdx` - AWS S3 setup guide
## Use Cases
### 🤖 **AI Assistant Integration**
* Train custom AI models on our documentation
* Create chatbots that can answer questions about Pushduck
* Build intelligent documentation search systems
### 🔧 **Development Tools**
* Generate code examples and snippets
* Create automated documentation tests
* Build CLI tools that reference our docs
### 📊 **Content Analysis**
* Analyze documentation completeness
* Track content changes over time
* Generate documentation metrics
## Content Format
The LLM endpoints return content in a structured format:
```
# Page Title
URL: /docs/page-path
Page description here
# Section Headers
Content with proper markdown formatting...
## Subsections
- Lists and bullet points
- Code blocks with syntax highlighting
- Tables and structured data
```
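If you need per-page chunks, the export can be split on its top-level headings. A minimal sketch, assuming the `# Title` / `URL:` layout shown above:

```typescript
// Split an llms.txt export into rough per-page chunks.
// Assumes each page starts with a top-level "# " heading as shown above.
function splitPages(llmsTxt: string): { title: string; body: string }[] {
  return llmsTxt.split(/\n(?=# )/).map((chunk) => {
    const [firstLine, ...rest] = chunk.split("\n");
    return {
      title: firstLine.replace(/^#\s*/, ""), // page title
      body: rest.join("\n"),                 // URL line, description, content
    };
  });
}
```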
## Technical Details
* **Caching**: Content is cached for optimal performance
* **Processing**: Uses Remark pipeline with MDX and GFM support
* **Format**: Clean markdown with frontmatter removed
* **Encoding**: UTF-8 text format
* **CORS**: Enabled for cross-origin requests
## Rate Limiting
These endpoints are designed for programmatic access and don't have aggressive rate limiting. However, please be respectful:
* Cache responses when possible (see the sketch below)
* Avoid excessive automated requests
* Use appropriate user agents for your tools
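For example, a small client that follows these guidelines might look like this (a sketch; the one-hour cache and the `my-docs-tool` user agent are illustrative choices):

```typescript
// Cache the llms.txt export in memory and identify the tool via User-Agent.
let cached: { body: string; fetchedAt: number } | null = null;
const TTL_MS = 60 * 60 * 1000; // re-fetch at most once per hour

export async function getDocs(baseUrl: string): Promise<string> {
  if (cached && Date.now() - cached.fetchedAt < TTL_MS) {
    return cached.body; // serve from cache instead of re-requesting
  }
  const res = await fetch(`${baseUrl}/llms.txt`, {
    headers: { "User-Agent": "my-docs-tool/1.0" },
  });
  if (!res.ok) throw new Error(`Failed to fetch docs: ${res.status}`);
  cached = { body: await res.text(), fetchedAt: Date.now() };
  return cached.body;
}
```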
## Examples
### Python Script
```python
import requests
# Get all documentation
response = requests.get('https://your-domain.com/llms.txt')
docs_content = response.text
# Get specific page
page_response = requests.get('https://your-domain.com/docs/quick-start.mdx')
page_content = page_response.text
```
### Node.js/JavaScript
```javascript
// Fetch all documentation
const allDocs = await fetch("/llms.txt").then((r) => r.text());
// Fetch specific page
const quickStart = await fetch("/docs/quick-start.mdx").then((r) => r.text());
```
### cURL
```bash
# Download all docs to file
curl -o pushduck-docs.txt https://your-domain.com/llms.txt
# Get specific page content
curl https://your-domain.com/docs/api/client/use-upload-route.mdx
```
## Integration with Popular AI Tools
### OpenAI GPT
Use the `/llms.txt` endpoint to provide context about Pushduck in your GPT conversations.
### Claude/Anthropic
Feed documentation content to Claude for detailed analysis and code generation.
### Local LLMs
Download content for training or fine-tuning local language models.
***
These AI-friendly endpoints make it easy to integrate Pushduck documentation into your development workflow and AI-powered tools!
# Comparisons (/docs/comparisons)
import { Callout } from "fumadocs-ui/components/callout";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## Overview
Choosing the right file upload solution depends on your project's requirements. This page compares Pushduck with popular alternatives to help you make an informed decision.
**TL;DR:** Pushduck is ideal if you want a **lightweight, self-hosted** solution with **full control** over your S3 storage, without vendor lock-in or ongoing upload fees.
**Note:** Pricing, features, and bundle sizes are approximate and current as of October 2025. Always verify current details from official sources before making decisions.
***
## Quick Comparison
| Feature | Pushduck | UploadThing | Uploadcare | AWS SDK | Uppy |
| --------------------- | ---------------------------------------- | -------------------- | -------------------- | -------------------- | -------------------- |
| **Bundle Size** | \~7KB | \~200KB+ | \~150KB+ | \~500KB+ | \~50KB+ (core) |
| **Setup Time** | 5 minutes | 10 minutes | 15 minutes | 15-20 hours | 30-60 minutes |
| **Edge Runtime**      | ✅ Yes                                     | ✅ Yes                | ✅ Yes                | ❌ No                 | ✅ Partial            |
| **Self-Hosted**       | ✅ Yes                                     | ❌ No                 | ❌ No                 | ✅ Yes                | ✅ Yes                |
| **Pricing Model**     | Free (S3 costs only)                      | Per upload           | Per upload + storage | Free (S3 costs only) | Free (S3 costs only) |
| **Type Safety**       | ✅ Full                                    | ✅ Full               | ⚠️ Partial            | ⚠️ Manual             | ❌ None               |
| **Multi-Provider**    | ✅ 6 (AWS, R2, DO, MinIO, GCS, S3-compat) | ❌ Own infrastructure | ❌ Own infrastructure | ❌ AWS only           | ✅ Yes                |
| **React Hooks**       | ✅ Built-in                                | ✅ Built-in           | ✅ Built-in           | ❌ Build yourself     | ✅ Available          |
| **Progress Tracking** | ✅ Automatic                               | ✅ Automatic          | ✅ Automatic          | ❌ Build yourself     | ✅ Automatic          |
| **Presigned URLs**    | ✅ Automatic                               | ✅ Automatic          | N/A (managed)        | ❌ Build yourself     | ⚠️ Manual             |
| **Best For** | Developers | Rapid prototyping | Enterprises | Full AWS control | UI flexibility |
***
## Detailed Comparisons
### vs UploadThing
**UploadThing** is a managed file upload service with tight Next.js integration and developer-friendly DX.
**When to choose Pushduck:**
* ✅ You want to avoid per-upload fees
* ✅ You need edge runtime support (UploadThing requires Node.js runtime)
* ✅ You want full control over storage and file URLs
* ✅ You're using multiple S3-compatible providers (R2, DigitalOcean, etc.)
**When to choose UploadThing:**
* ✅ You want zero infrastructure setup
* ✅ You prefer managed service over self-hosted
* ✅ You're building a rapid prototype or MVP
```
Pushduck: ~7KB (minified + gzipped)
UploadThing: ~200KB+ (includes server runtime)
Difference: 28x smaller
```
**Why Pushduck is smaller:**
* Uses `aws4fetch` (lightweight AWS signer) instead of heavy dependencies
* No built-in UI components (bring your own)
* Focused on upload logic only
* Optimized for tree-shaking
**Pushduck:**
* Library: **Free (MIT license)**
* Costs: **S3 storage only** (\~$0.023/GB on AWS, free tier: 5GB + 20k requests/month)
* Example: 10k uploads/month @ 2MB each = \~$0.50/month
**UploadThing:**
* Free tier: 2GB storage + 100 uploads/month
* Pro: $20/month (50GB storage + 10k uploads)
* Enterprise: Custom pricing
**Cost Comparison (10k monthly uploads):**
* Pushduck + AWS S3: **\~$0.50/month**
* UploadThing Pro: **$20/month** (if within limits)
| Aspect | Pushduck | UploadThing |
| -------------------- | ------------------------- | ------------------------------ |
| **Storage Provider** | Your choice (6 providers) | UploadThing's infrastructure |
| **File URLs** | Your domain/CDN | UploadThing's CDN |
| **Data Ownership** | 100% yours | Stored on their infrastructure |
| **Migration** | Easy (standard S3) | Requires re-uploading files |
| **Vendor Lock-in** | None | Medium |
***
### vs AWS SDK
**AWS SDK (`@aws-sdk/client-s3`)** is the official AWS library for S3 operations.
**When to choose Pushduck:**
* ✅ You need edge runtime support (AWS SDK requires Node.js)
* ✅ You want a smaller bundle (\~7KB vs \~500KB)
* ✅ You need React hooks and type-safe APIs
* ✅ You want multi-provider support (R2, DigitalOcean, MinIO)
* ✅ You prefer declarative schemas over imperative code
* ✅ You want presigned URLs handled automatically
* ✅ You want to avoid implementing upload infrastructure from scratch
**When to choose AWS SDK:**
* ✅ You need advanced S3 features (lifecycle policies, bucket management)
* ✅ You're already heavily invested in the AWS ecosystem
* ✅ You need multipart uploads for very large files (100GB+)
* ✅ You need direct, low-level control over every S3 operation
```
Pushduck: ~7KB (core client, minified + gzipped)
AWS SDK: ~500KB (@aws-sdk/client-s3)
Difference: 71x smaller
```
**Why it matters:**
* Faster page loads
* Lower bandwidth costs
* Better mobile experience
* Improved Core Web Vitals
**Pushduck:**
```typescript
// Declarative schema
const router = s3.createRouter({
imageUpload: s3.image()
.maxFileSize('5MB')
.middleware(async ({ req }) => {
const user = await auth(req);
return { userId: user.id };
}),
});
// Client (React)
const { uploadFiles } = useUploadRoute('imageUpload');
```
**AWS SDK:**
```typescript
// Imperative code
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
const s3 = new S3Client({ region: 'us-east-1' });
const uploadFile = async (file: File) => {
const command = new PutObjectCommand({
Bucket: 'my-bucket',
Key: `uploads/${file.name}`,
Body: await file.arrayBuffer(),
ContentType: file.type,
});
await s3.send(command);
};
// + Manual progress tracking
// + Manual validation
// + Manual React state management
```
**Pushduck provides:**
* Type-safe schemas
* Built-in React hooks
* Automatic progress tracking
* Middleware system
* Multi-provider support
**AWS SDK provides:**
* Direct S3 control
* Advanced features
* Official AWS support
### What AWS SDK Requires You to Build
With AWS SDK, you need to **manually implement everything**:
#### 1. Presigned URL Generation
```typescript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
// ❌ You must handle:
// - Creating S3 client with credentials
// - Generating unique file keys
// - Setting correct content types
// - Configuring expiration times
// - Handling CORS headers
// - Managing bucket permissions
const s3Client = new S3Client({ region: 'us-east-1' });
const generatePresignedUrl = async (fileName: string) => {
const key = `uploads/${Date.now()}-${fileName}`;
const command = new PutObjectCommand({
Bucket: 'my-bucket',
Key: key,
ContentType: 'application/octet-stream', // Must set manually
});
const url = await getSignedUrl(s3Client, command, { expiresIn: 3600 });
return { url, key };
};
```
#### 2. Client-Side Upload Logic
```typescript
// ❌ You must build:
// - XMLHttpRequest wrapper for progress tracking
// - Error handling and retry logic
// - AbortController for cancellation
// - State management for multiple files
// - Progress aggregation
// - File validation (size, type)
const uploadFile = (file: File, presignedUrl: string) => {
return new Promise((resolve, reject) => {
const xhr = new XMLHttpRequest();
xhr.upload.onprogress = (e) => {
const progress = (e.loaded / e.total) * 100;
// Update UI manually
};
xhr.onload = () => {
if (xhr.status === 200) {
resolve(xhr.response);
} else {
reject(new Error('Upload failed'));
}
};
xhr.onerror = () => reject(new Error('Network error'));
xhr.open('PUT', presignedUrl);
xhr.setRequestHeader('Content-Type', file.type);
xhr.send(file);
});
};
```
#### 3. API Route Handler
```typescript
// ❌ You must implement:
// - Request parsing and validation
// - Authentication/authorization
// - File metadata validation
// - Error responses
// - Type safety
export async function POST(request: Request) {
const { fileName, fileSize, fileType } = await request.json();
// Validate manually
if (fileSize > 10 * 1024 * 1024) {
return Response.json({ error: 'File too large' }, { status: 400 });
}
// Auth manually
const user = await authenticateUser(request);
if (!user) {
return Response.json({ error: 'Unauthorized' }, { status: 401 });
}
// Generate presigned URL
const { url, key } = await generatePresignedUrl(fileName);
return Response.json({ url, key });
}
```
#### 4. React Component State Management
```typescript
// ❌ You must manage:
// - File state (idle, uploading, success, error)
// - Progress for each file
// - Overall progress
// - Error messages
// - Upload speed and ETA calculations
// - Cleanup on unmount
const [files, setFiles] = useState([]);
const [isUploading, setIsUploading] = useState(false);
const [progress, setProgress] = useState(0);
const [errors, setErrors] = useState([]);
// Implement all the upload logic...
```
#### 5. CORS Configuration
```json
// ❌ You must configure S3 bucket CORS manually:
{
"CORSRules": [
{
"AllowedOrigins": ["https://your-domain.com"],
"AllowedMethods": ["PUT", "POST", "GET"],
"AllowedHeaders": ["*"],
"ExposeHeaders": ["ETag"]
}
]
}
```
***
### What Pushduck Handles For You
```typescript
// ✅ Pushduck handles ALL of the above:
// Server (3 lines)
const router = s3.createRouter({
imageUpload: s3.image().maxFileSize('5MB'),
});
export const { GET, POST } = router.handlers;
// Client (1 line)
const { uploadFiles, progress, isUploading } = useUploadRoute('imageUpload');
```
**Everything included:**
* ✅ Presigned URL generation (automatic)
* ✅ File validation (declarative)
* ✅ Progress tracking (real-time)
* ✅ Error handling (built-in)
* ✅ Multi-file support (automatic)
* ✅ React state management (handled)
* ✅ Type safety (end-to-end)
* ✅ Authentication hooks (middleware)
* ✅ CORS headers (automatic)
* ✅ Cancellation (AbortController)
***
### Time to Production
| Task | AWS SDK | Pushduck |
| ---------------------- | ----------------- | ---------------- |
| **Initial setup** | 2-4 hours | 5 minutes |
| **Progress tracking** | 1-2 hours | Included |
| **Error handling** | 1-2 hours | Included |
| **Multi-file uploads** | 2-3 hours | Included |
| **Type safety** | 2-4 hours | Included |
| **Testing** | 4-6 hours | Minimal |
| **Total** | **\~15-20 hours** | **\~30 minutes** |
**AWS SDK is a low-level tool.** You're responsible for building the entire upload infrastructure, handling edge cases, security, validation, progress tracking, and state management.
**Pushduck is a high-level framework.** All the infrastructure is built-in, tested, and production-ready out of the box.
***
### vs Uploadcare / Filestack
**Uploadcare** and **Filestack** are managed file upload platforms with built-in CDN, transformations, and processing.
**When to choose Pushduck:**
* ✅ You want to avoid per-upload and storage fees
* ✅ You need full control over file storage and URLs
* ✅ You prefer self-hosted over managed services
* ✅ You don't need built-in image processing (can integrate Sharp, Cloudinary, etc.)
* ✅ You want to avoid vendor lock-in
**When to choose Uploadcare/Filestack:**
* ✅ You need built-in image/video processing
* ✅ You want zero infrastructure management
* ✅ You need global CDN with automatic optimization
* ✅ You have budget for managed services
**Pushduck:**
* Library: **Free**
* Storage: **S3 costs** (\~$0.023/GB on AWS)
* CDN: **Optional** (CloudFront, Cloudflare, BunnyCDN)
* Processing: **Integrate your choice** (Sharp, Cloudinary, Imgix)
**Uploadcare:**
* Free: 3k uploads + 3GB storage/month
* Start: $25/month (10k uploads + 10GB)
* Pro: $99/month (50k uploads + 100GB)
* Enterprise: Custom
**Filestack:**
* Free: 100 uploads + 100 transformations/month
* Starter: $49/month (1k uploads)
* Professional: $249/month (10k uploads)
**Cost Example (10k monthly uploads, 20GB storage):**
* Pushduck + S3: **\~$0.50/month** (+ optional CDN \~$1-5)
* Uploadcare Start: **$25/month**
* Filestack Professional: **$249/month**
| Aspect | Pushduck | Uploadcare/Filestack |
| ------------------ | ------------------- | -------------------- |
| **Storage** | Your S3 bucket | Their infrastructure |
| **File URLs** | Your domain | Their CDN |
| **Processing** | Integrate as needed | Built-in |
| **Data Migration** | Standard S3 API | Proprietary API |
| **Privacy** | Full control | Trust third party |
| **Vendor Lock-in** | None | High |
***
### vs Uppy
**Uppy** is a modular JavaScript file uploader with a focus on UI components and extensibility.
**When to choose Pushduck:**
* ✅ You're using React (Pushduck has first-class React support)
* ✅ You need type-safe APIs with TypeScript inference
* ✅ You want tighter Next.js integration
* ✅ You prefer schema-based validation over manual configuration
**When to choose Uppy:**
* ✅ You need a rich, pre-built UI (Dashboard, Drag & Drop, Webcam, Screen Capture)
* ✅ You're using vanilla JS or other frameworks (Vue, Svelte)
* ✅ You need resumable uploads (tus protocol)
* ✅ You want highly customizable UI components
**Pushduck:**
* **Focus:** Direct-to-S3 uploads with presigned URLs
* **UI:** Bring your own (minimal bundle size)
* **Type Safety:** First-class TypeScript, runtime validation
* **Backend:** Server-first (schema definitions, middleware, hooks)
**Uppy:**
* **Focus:** Modular file uploader with rich UI
* **UI:** Built-in components (Dashboard, Drag & Drop, etc.)
* **Type Safety:** TypeScript definitions available
* **Backend:** Agnostic (works with any backend)
**Choose Pushduck if:**
```typescript
// You want schema-based validation
const router = s3.createRouter({
profilePic: s3.image()
.maxFileSize('2MB')
.types(['image/jpeg', 'image/png'])
.middleware(auth)
.onUploadComplete(updateDatabase),
});
```
**Choose Uppy if:**
```typescript
// You want rich UI out of the box
import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import Webcam from '@uppy/webcam';
import ScreenCapture from '@uppy/screen-capture';
const uppy = new Uppy()
.use(Dashboard, { inline: true })
.use(Webcam, { target: Dashboard })
.use(ScreenCapture, { target: Dashboard });
```
***
## Decision Matrix
### Choose Pushduck if:
✅ You want **full control** over storage and infrastructure\
✅ You need **edge runtime** compatibility (Vercel, Cloudflare Workers)\
✅ You prefer **self-hosted** over managed services\
✅ You want to **minimize costs** (pay only S3 storage fees)\
✅ You need **type-safe APIs** with TypeScript inference\
✅ You're building with **React** and **Next.js**\
✅ You want to avoid **vendor lock-in**\
✅ You need support for **multiple S3-compatible providers**
***
### Choose UploadThing if:
✅ You want **zero infrastructure setup**\
✅ You prefer **managed service** over self-hosting\
✅ You're building a **rapid prototype** or **MVP**\
✅ You want to **avoid S3 configuration**\
✅ You're **okay with per-upload pricing**
***
### Choose AWS SDK if:
✅ You need **advanced S3 features** (lifecycle, versioning, bucket management)\
✅ You're **AWS-only** (not using other providers)\
✅ You need **multipart uploads** for very large files (100GB+)\
✅ You don't need **edge runtime** compatibility\
✅ Bundle size is **not a concern**\
✅ You have **15-20 hours** to build upload infrastructure from scratch\
✅ You want **full low-level control** over every S3 operation\
⚠️ You're comfortable **manually implementing** presigned URLs, progress tracking, validation, error handling, and React state management
***
### Choose Uploadcare/Filestack if:
✅ You need **built-in image/video processing**\
✅ You want **zero infrastructure** management\
✅ You need a **global CDN** with automatic optimization\
✅ You have **budget for managed services** ($25-250/month)\
✅ You want **all-in-one** (upload + storage + processing + CDN)
***
### Choose Uppy if:
✅ You need **rich, pre-built UI** components\
✅ You want **highly customizable** upload widgets\
✅ You need **resumable uploads** (tus protocol)\
✅ You're **not using React** (vanilla JS, Vue, Svelte)\
✅ You want **modular architecture** (pick and choose plugins)
***
## Feature Comparison
### Core Features
| Feature | Pushduck | UploadThing | AWS SDK | Uploadcare | Uppy |
| --------------------- | -------- | ----------- | ---------- | ---------- | ---------- |
| **Direct-to-S3**      | ✅       | ✅           | ✅          | ❌          | ✅          |
| **Presigned URLs**    | ✅       | ✅           | ✅          | ❌          | ⚠️          |
| **Progress Tracking** | ✅       | ✅           | ⚠️ Manual   | ✅          | ✅          |
| **Multi-file Upload** | ✅       | ✅           | ✅          | ✅          | ✅          |
| **Type Safety**       | ✅ Full  | ✅ Full      | ⚠️ Partial  | ⚠️ Partial  | ⚠️ Partial  |
| **Schema Validation** | ✅       | ✅           | ❌          | ⚠️          | ⚠️          |
| **Middleware System** | ✅       | ✅           | ❌          | ❌          | ⚠️          |
| **Lifecycle Hooks**   | ✅       | ✅           | ❌          | ⚠️          | ✅          |
| **React Hooks**       | ✅       | ✅           | ❌          | ✅          | ✅          |
### Storage & Providers
| Feature | Pushduck | UploadThing | AWS SDK | Uploadcare | Uppy |
| ------------------------ | -------- | ----------- | ------- | ---------- | ---- |
| **AWS S3**               | ✅       | ❌           | ✅      | ❌          | ✅    |
| **Cloudflare R2**        | ✅       | ❌           | ❌      | ❌          | ✅    |
| **DigitalOcean Spaces**  | ✅       | ❌           | ❌      | ❌          | ✅    |
| **Google Cloud Storage** | ✅       | ❌           | ❌      | ❌          | ✅    |
| **MinIO**                | ✅       | ❌           | ❌      | ❌          | ✅    |
| **Backblaze B2**         | ✅       | ❌           | ❌      | ❌          | ✅    |
| **Custom Domain**        | ✅       | ⚠️ Limited   | ✅      | ✅          | ✅    |
### Runtime & Compatibility
| Feature | Pushduck | UploadThing | AWS SDK | Uploadcare | Uppy |
| ------------------------ | -------- | ----------- | ------- | ---------- | ---- |
| **Edge Runtime**         | ✅       | ✅           | ❌      | ✅          | ✅    |
| **Node.js**              | ✅       | ✅           | ✅      | ✅          | ✅    |
| **Cloudflare Workers**   | ✅       | ✅           | ❌      | ✅          | ✅    |
| **Vercel Edge**          | ✅       | ✅           | ❌      | ✅          | ✅    |
| **Next.js App Router**   | ✅       | ✅           | ✅      | ✅          | ✅    |
| **Next.js Pages Router** | ✅       | ✅           | ✅      | ✅          | ✅    |
| **Remix**                | ✅       | ⚠️           | ✅      | ✅          | ✅    |
| **SvelteKit**            | ✅       | ❌           | ✅      | ✅          | ✅    |
***
## Bundle Size Breakdown
```
┌──────────────────────────────────────────────────┐
│ Bundle Size Comparison (minified + gzipped)      │
├──────────────────────────────────────────────────┤
│ Pushduck:     ██ 7KB                             │
│ Uppy (core):  ████████ 50KB                      │
│ Uploadcare:   ██████████████ 150KB               │
│ UploadThing:  ██████████████████ 200KB           │
│ AWS SDK:      ████████████████████████████ 500KB │
└──────────────────────────────────────────────────┘
```
**Why bundle size matters:**
* **Faster initial page load** (especially on mobile)
* **Lower bandwidth costs**
* **Better Core Web Vitals** (LCP, FCP)
* **Improved SEO** (Google considers page speed)
***
## Pricing Breakdown
### Monthly Cost Example
**Scenario:** 10,000 uploads/month, 2MB average file size, 20GB total storage
| Solution | Monthly Cost | Breakdown |
| ---------------------------- | ------------ | ---------------------------------- |
| **Pushduck + AWS S3** | **$0.50** | Storage: $0.46, Requests: $0.04 |
| **Pushduck + Cloudflare R2** | **$0.30** | Storage: $0.30, Egress: $0 |
| **AWS SDK + S3** | **$0.50** | Same as Pushduck (library is free) |
| **UploadThing Pro** | **$20** | (if within 10k upload limit) |
| **Uploadcare Start** | **$25** | (if within 10k upload limit) |
| **Filestack Professional** | **$249** | (10k uploads tier) |
**Note:** Pushduck and AWS SDK are libraries, not services. You pay only for S3 storage. Managed services (UploadThing, Uploadcare, Filestack) handle infrastructure but charge per-upload fees.
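For reference, the arithmetic behind the Pushduck + AWS S3 row: storage is 20 GB × \~$0.023/GB-month ≈ $0.46, and 10,000 PUT requests (at roughly $0.005 per 1,000) add about $0.04-0.05, for \~$0.50/month before any free-tier credits.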
***
## Migration Guide
### From AWS SDK to Pushduck
```typescript
// Before (AWS SDK)
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
const s3 = new S3Client({ region: 'us-east-1' });
const uploadFile = async (file: File) => {
const command = new PutObjectCommand({
Bucket: 'my-bucket',
Key: `uploads/${file.name}`,
Body: await file.arrayBuffer(),
});
await s3.send(command);
};
// After (Pushduck)
// Server
const { s3 } = createUploadConfig()
.provider('aws', { bucket: 'my-bucket', region: 'us-east-1' })
.build();
const router = s3.createRouter({
fileUpload: s3.file().maxFileSize('10MB'),
});
export const { GET, POST } = router.handlers;
// Client
const { uploadFiles } = useUploadRoute('fileUpload');
```
**Benefits:**
* ✅ 71x smaller bundle (\~7KB vs \~500KB)
* ✅ Edge runtime compatible
* ✅ Built-in React hooks
* ✅ Type-safe APIs
* ✅ Automatic progress tracking
***
### From UploadThing to Pushduck
**Migration Steps:**
1. **Set up S3 bucket** (one-time setup)
2. **Replace UploadThing config with Pushduck config**
3. **Update client imports**
4. **Migrate files** (optional, can use UploadThing's S3 export if available)
**Data Ownership:**
* UploadThing: Files on their infrastructure (requires export)
* Pushduck: Files in your S3 bucket (you own them)
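As a sketch of step 2, the server-side swap amounts to replacing your UploadThing file router with a Pushduck router. The route name and size limit below are illustrative; reuse your existing route names so client code maps one-to-one:

```typescript
// app/api/upload/route.ts - Pushduck equivalent of an UploadThing file route
import { createUploadConfig } from 'pushduck/server';

const { s3 } = createUploadConfig()
  .provider('aws', {
    bucket: process.env.AWS_BUCKET_NAME!,
    region: process.env.AWS_REGION!,
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  })
  .build();

const router = s3.createRouter({
  // keep the same route name you used with UploadThing, e.g. "imageUploader"
  imageUploader: s3.image().maxFileSize('4MB'),
});

export const { GET, POST } = router.handlers;
```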
***
## Frequently Asked Questions
### "Should I use Pushduck or a managed service?"
**Use Pushduck if:**
* You want full control and ownership
* You want to minimize costs
* You're comfortable with basic S3 setup
**Use managed service if:**
* You want zero infrastructure setup
* You need built-in processing (images, videos)
* You have budget for convenience
***
### "Is Pushduck production-ready?"
✅ Yes! Pushduck is used in production by multiple projects.
**Production features:**
* Comprehensive error handling
* Health checks and metrics
* Battle-tested S3 upload flow
* Type-safe APIs
* Extensive test coverage
See: [Production Checklist](/docs/guides/production-checklist)
***
### "Can I migrate from Pushduck later?"
✅ Yes, easily! Your files are in standard S3 buckets.
**Migration path:**
1. Your files are already in S3 (standard format)
2. Switch to any S3-compatible solution
3. No data migration needed (files stay in your bucket)
4. No vendor lock-in
***
### "Does Pushduck support image processing?"
⚠️ **Not built-in** (by design - keeps bundle small).
**Integration options:**
* [Sharp](/docs/guides/image-uploads) - Server-side processing
* [Cloudinary](/docs/guides/image-uploads) - API-based
* [Imgix](/docs/guides/image-uploads) - URL-based
* Any image processing tool
See: [Image Uploads Guide](/docs/guides/image-uploads)
***
## Conclusion
**Pushduck** is ideal for developers who want:
* 🪶 Lightweight library (\~7KB)
* 🔒 Full control over storage
* 💰 Minimal costs (S3 only)
* 🌍 Edge runtime support
* 🔓 No vendor lock-in
If you need a managed service with built-in processing and global CDN, consider **Uploadcare** or **Filestack**.
If you want rapid prototyping with zero S3 setup, consider **UploadThing**.
For advanced S3 features and AWS-only projects, **AWS SDK** is the right choice.
For rich UI components and resumable uploads, **Uppy** is a great option.
***
**Ready to get started?** Head to the [Quick Start](/docs/quick-start) guide to set up Pushduck in 5 minutes.
# Examples & Demos (/docs/examples)
import { Callout } from "fumadocs-ui/components/callout";
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
**Live Demos:** These are fully functional demos using real Cloudflare R2 storage. Files are uploaded to a demo bucket and may be automatically cleaned up. Don't upload sensitive information.
**Having Issues?** If uploads aren't working (especially with `next dev --turbo`), check our [Troubleshooting Guide](/docs/api/troubleshooting) for common solutions including the known Turbo mode compatibility issue.
## Interactive Upload Demo
The full-featured demo showcasing all capabilities:
**ETA & Speed Tracking:** Upload speed (MB/s) and estimated time remaining (ETA) appear below the progress bar during active uploads. Try uploading larger files (1MB+) to see these metrics in action! ETA becomes more accurate after the first few seconds of upload.
## Image-Only Upload
Focused demo for image uploads with preview capabilities:
## Document Upload
Streamlined demo for document uploads:
## Key Features Demonstrated
### ✅ **Type-Safe Client**
```typescript
// Property-based access with full TypeScript inference
const imageUpload = upload.imageUpload();
const fileUpload = upload.fileUpload();
// No string literals, no typos, full autocomplete
await imageUpload.uploadFiles(selectedFiles);
```
### ⚡ **Real-Time Progress**
* Individual file progress tracking with percentage completion
* Upload speed monitoring (MB/s) with live updates
* ETA calculations showing estimated time remaining
* Pause/resume functionality (coming soon)
* Comprehensive error handling with retry mechanisms
### 🔒 **Built-in Validation**
* File type validation (MIME types)
* File size limits with user-friendly errors
* Custom validation middleware
* Malicious file detection
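These rules are declared on the route itself. A minimal sketch using the builder methods shown elsewhere in these docs (route names are examples; `s3` comes from `createUploadConfig().build()` as in the server example below):

```typescript
const uploadRouter = s3.createRouter({
  // format and size limits are checked before a presigned URL is issued
  imageUpload: s3
    .image()
    .maxFileSize('5MB')
    .formats(['jpeg', 'png', 'webp']),
  documentUpload: s3
    .file()
    .maxFileSize('50MB')
    .types(['application/pdf']),
});
```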
### 🌐 **Provider Agnostic**
* Same code works with any S3-compatible provider
* Switch between Cloudflare R2, AWS S3, DigitalOcean Spaces
* Zero vendor lock-in
## Code Examples
```typescript
"use client";
import { upload } from "@/lib/upload-client";
export function SimpleUpload() {
  const { uploadFiles, files, isUploading } = upload.imageUpload();

  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => uploadFiles(Array.from(e.target.files || []))}
        disabled={isUploading}
      />
      {files.map((file) => (
        <div key={file.name}>
          <span>{file.name}</span>
          <span>{file.status}</span>
          {file.url && <a href={file.url}>View</a>}
        </div>
      ))}
    </div>
  );
}
```
```typescript
"use client";
import { upload } from "@/lib/upload-client";
import { useState } from "react";
export function MetadataUpload() {
  const [albumId, setAlbumId] = useState('vacation-2025');
  const [tags, setTags] = useState(['summer']);

  const { uploadFiles, files, isUploading } = upload.imageUpload({
    onSuccess: (results) => {
      console.log(`Uploaded ${results.length} images to album: ${albumId}`);
    }
  });

  const handleUpload = (e: React.ChangeEvent<HTMLInputElement>) => {
    const selectedFiles = Array.from(e.target.files || []);
    // Pass client-side context as metadata
    uploadFiles(selectedFiles, {
      albumId: albumId,
      tags: tags,
      visibility: 'private',
      uploadSource: 'web-app'
    });
  };

  return (
    <div>
      <select value={albumId} onChange={(e) => setAlbumId(e.target.value)}>
        <option value="vacation-2025">Vacation 2025</option>
        <option value="family-photos">Family Photos</option>
        <option value="work-events">Work Events</option>
      </select>
      <input type="file" multiple onChange={handleUpload} disabled={isUploading} />
      {files.map((file) => (
        <div key={file.name}>
          <span>{file.name}</span>
          <span>{file.status}</span>
          {file.url && <a href={file.url}>View</a>}
        </div>
      ))}
    </div>
  );
}
```
```typescript
// app/api/upload/route.ts
import { createUploadConfig } from "pushduck/server";
import { getServerSession } from "next-auth"; // assumed: next-auth powers the session check below
const { s3 } = createUploadConfig()
  .provider("cloudflareR2", {
accountId: process.env.CLOUDFLARE_ACCOUNT_ID!,
bucket: process.env.R2_BUCKET!,
})
.defaults({
maxFileSize: "10MB",
acl: "public-read",
})
.build();
const uploadRouter = s3.createRouter({
imageUpload: s3
.image()
.maxFileSize("5MB")
.formats(["jpeg", "png", "webp"])
.middleware(async ({ file, metadata }) => {
// Custom authentication and metadata
const session = await getServerSession();
if (!session) throw new Error("Unauthorized");
return {
...metadata,
userId: session.user.id,
uploadedAt: new Date().toISOString(),
};
})
.onUploadComplete(async ({ file, url, metadata }) => {
// Post-upload processing
console.log(`Upload complete: ${url}`);
await saveToDatabase({ url, metadata });
}),
});
export const { GET, POST } = uploadRouter.handlers;
export type AppRouter = typeof uploadRouter;
```
```typescript
"use client";
import { upload } from "@/lib/upload-client";
export function RobustUpload() {
  const { uploadFiles, files, errors, reset } = upload.imageUpload();

  const handleUpload = async (fileList: FileList) => {
    try {
      await uploadFiles(Array.from(fileList));
    } catch (error) {
      console.error("Upload failed:", error);
      // Error is automatically added to the errors array
    }
  };

  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => e.target.files && handleUpload(e.target.files)}
      />
      {/* Display errors */}
      {errors.length > 0 && (
        <div>
          <h4>Upload Errors:</h4>
          {errors.map((error, index) => (
            <p key={index}>{error}</p>
          ))}
          <button onClick={reset}>Clear Errors</button>
        </div>
      )}
      {/* Display files with status */}
      {files.map((file) => (
        <div key={file.name}>
          <span>{file.name}</span>
          <span>{file.status}</span>
          {file.status === "uploading" && (
            <progress value={file.progress} max={100} />
          )}
          {file.status === "error" && (
            <p>{file.error}</p>
          )}
          {file.status === "success" && file.url && (
            <a href={file.url}>View File</a>
          )}
        </div>
      ))}
    </div>
  );
}
```
## Real-World Use Cases
### **Profile Picture Upload**
Single image upload with instant preview and crop functionality.
### **Document Management**
Multi-file document upload with categorization and metadata.
### **Media Gallery**
Batch image upload with automatic optimization and thumbnail generation.
### **File Sharing**
Secure file upload with expiration dates and access controls.
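Each of these composes from the same primitives shown above. For instance, a profile-picture route might look like this (a sketch; `getUser` and `db.users.update` are hypothetical stand-ins for your own auth and persistence layers):

```typescript
const router = s3.createRouter({
  profilePicture: s3
    .image()
    .maxFileSize('2MB')
    .formats(['jpeg', 'png', 'webp'])
    .middleware(async ({ req }) => {
      const user = await getUser(req); // your auth system (hypothetical helper)
      if (!user) throw new Error('Unauthorized');
      return { userId: user.id };
    })
    .onUploadComplete(async ({ url, metadata }) => {
      // persist the new avatar URL (hypothetical persistence helper)
      await db.users.update(metadata.userId, { avatarUrl: url });
    }),
});
```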
## Next Steps
# How It Works (/docs/how-it-works)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Mermaid } from "@/components/mdx/mermaid";
## Architecture Overview
Pushduck is built on a **direct-to-S3 upload pattern** using presigned URLs, eliminating the need for your server to handle file data.
**Core Principle:** Files go directly from the client to S3 storage, bypassing your server entirely. This enables infinite scalability and edge-compatible deployments.
***
## Upload Flow
### Complete Upload Process
**Key Benefits:**
* ✅ Server never touches file data (saves bandwidth)
* ✅ Scales infinitely (S3 handles the load)
* ✅ Edge-compatible (no file streaming needed)
* ✅ Real-time progress tracking on client
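In code terms, the whole flow reduces to two requests, both initiated by the client. A simplified sketch of what Pushduck automates (the endpoint shape and field names here are illustrative):

```typescript
async function directUpload(file: File): Promise<string> {
  // 1. Ask your API route for a presigned URL - a small JSON exchange;
  //    no file bytes ever reach your server
  const { url, key } = await fetch('/api/upload/presign', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: file.name, size: file.size, type: file.type }),
  }).then((r) => r.json());

  // 2. PUT the file bytes straight to S3 using that URL
  const res = await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);

  return key; // the object key where the file now lives
}
```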
***
## Component Architecture
***
## Configuration Flow
***
## Type Safety System
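The mechanism is the exported router type: the server defines the schema, and the client is parameterized by `typeof router`, so route names and payloads are checked at compile time. A condensed sketch based on the Quick Start setup:

```typescript
// server: app/api/upload/route.ts
const router = s3.createRouter({
  imageUpload: s3.image().maxFileSize('5MB'),
});
export type AppRouter = typeof router;

// client: lib/upload-client.ts
import { createUploadClient } from 'pushduck/client';
import type { AppRouter } from '@/app/api/upload/route';

export const upload = createUploadClient<AppRouter>({ endpoint: '/api/upload' });

// usage: `upload.imageUpload` is inferred from the server schema;
// a typo like `upload.imagesUpload` fails to compile
const { uploadFiles } = upload.imageUpload();
```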
***
## Middleware Chain
***
## Storage Provider System
**Key Insight:** All providers use the same S3-compatible API, so switching is just a configuration change.
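For example, moving from AWS S3 to Cloudflare R2 changes only the `.provider()` block; the router and client code stay identical (environment variable names are placeholders):

```typescript
import { createUploadConfig } from 'pushduck/server';

// AWS S3
const { s3: s3Aws } = createUploadConfig()
  .provider('aws', {
    bucket: process.env.AWS_BUCKET_NAME!,
    region: process.env.AWS_REGION!,
  })
  .build();

// Cloudflare R2 - same router code, different provider block
const { s3: s3R2 } = createUploadConfig()
  .provider('cloudflareR2', {
    accountId: process.env.CLOUDFLARE_ACCOUNT_ID!,
    bucket: process.env.R2_BUCKET!,
  })
  .build();
```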
***
## Client State Management
***
## Integration Points
***
## Comparison: Pushduck vs AWS SDK
### What You Need to Build with AWS SDK
***
## Key Takeaways
* Files upload directly to S3 storage, bypassing your server. This enables infinite scalability and edge deployment.
* \~7KB total bundle using `aws4fetch` instead of AWS SDK (\~500KB). 71x smaller, edge-compatible.
* End-to-end TypeScript inference from server schema to client hook. Catch errors at compile-time.
* Middleware and lifecycle hooks provide integration points without bloating the library with built-in features.
* Universal Web Standard handlers work with 16+ frameworks via thin adapters.
* Unified API works with 6 S3-compatible providers. Switch providers with just a config change.
***
## Next Steps
**Ready to build?** Check out the [Quick Start](/docs/quick-start) guide to get Pushduck running in 5 minutes.
**Learn More:**
* [Philosophy & Scope](/docs/philosophy) - What Pushduck does (and doesn't do)
* [API Reference](/docs/api) - Complete API documentation
* [Examples](/docs/examples) - Live demos and code samples
# Pushduck (/docs)
import { Card, Cards } from "fumadocs-ui/components/card";
import { Step, Steps } from "fumadocs-ui/components/steps";
## Simple S3 Uploads, Zero Vendor Lock-in
Upload files directly to S3-compatible storage. Lightweight (6KB), type-safe, and works everywhere. No monthly fees, no vendor lock-inβjust **3 files and \~50 lines of code**.
```typescript
import { createUploadClient } from 'pushduck/client';

// Create your upload client
const upload = createUploadClient({
  endpoint: '/api/upload'
});

// Use anywhere in your app
export function MyComponent() {
  const { uploadFiles, files, isUploading } = upload.imageUpload();

  return (
    <input
      type="file"
      multiple
      onChange={(e) => uploadFiles(Array.from(e.target.files || []))}
      disabled={isUploading}
    />
  );
}
```
## Why Choose Pushduck?
**Alternative to UploadThing** - Own your infrastructure, zero recurring costs.
| Feature | Pushduck | UploadThing |
| ------------------ | -------------------- | ----------------------- |
| **Cost** | $0 (use your S3) | $10-25/month |
| **Bundle Size** | 6KB | Managed client |
| **Vendor Lock-in** | None - S3 compatible | Locked to their service |
| **File Ownership** | Your S3 bucket | Their storage |
| **Type Safety** | Full TypeScript | TypeScript support |
| **Setup Time** | \~2 minutes | \~2 minutes |
**Key benefits:**
* ✅ **6KB bundle** - No heavy AWS SDK
* ✅ **Type-safe** - Compile-time route validation
* ✅ **Own your files** - Any S3-compatible provider
* ✅ **No monthly fees** - Use your own S3
* ✅ **Focused library** - Does uploads, nothing else
## More Resources
## What's Included
* ✅ **Progress Tracking** - Real-time progress, speed, and ETA
* ✅ **Type Safety** - Full TypeScript from server to client
* ✅ **Multi-Provider** - AWS S3, Cloudflare R2, DigitalOcean, MinIO
* ✅ **Validation** - File type, size, and custom rules
* ✅ **Storage Operations** - List, delete, and manage files
* ✅ **Framework Support** - Next.js, Remix, Express, Fastify, and more
* ✅ **Drag & Drop Components** - Copy-paste UI components via CLI
**What we don't do** - File processing, analytics, team management. See [Philosophy](/docs/philosophy) for our focused scope.
# Manual Setup (/docs/manual-setup)
import { Step, Steps } from "fumadocs-ui/components/steps";
import { Callout } from "fumadocs-ui/components/callout";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## Prerequisites
* Next.js 13+ with App Router
* An S3-compatible storage provider (we recommend Cloudflare R2 for best performance and cost)
* Node.js 18+
## Install Pushduck
npm
pnpm
yarn
bun
```bash
npm install pushduck
```
```bash
pnpm add pushduck
```
```bash
yarn add pushduck
```
```bash
bun add pushduck
```
## Set Environment Variables
Create a `.env.local` file in your project root with your storage credentials:
Cloudflare R2
AWS S3
```dotenv title=".env.local"
# Cloudflare R2 Configuration
CLOUDFLARE_R2_ACCESS_KEY_ID=your_access_key
CLOUDFLARE_R2_SECRET_ACCESS_KEY=your_secret_key
CLOUDFLARE_R2_ACCOUNT_ID=your_account_id
CLOUDFLARE_R2_BUCKET_NAME=your-bucket-name
```
```dotenv title=".env.local"
# AWS S3 Configuration
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=us-east-1
AWS_S3_BUCKET_NAME=your-bucket-name
```
**Don't have credentials yet?** Follow our [Provider setup guide](/docs/providers) to create a bucket and get your credentials in 2 minutes.
## Configure Upload Settings
First, create your upload configuration:
```typescript
// lib/upload.ts
import { createUploadConfig } from "pushduck/server";
// Configure your S3-compatible storage
export const { s3, storage } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.CLOUDFLARE_R2_ACCESS_KEY_ID!,
secretAccessKey: process.env.CLOUDFLARE_R2_SECRET_ACCESS_KEY!,
region: "auto",
endpoint: `https://${process.env.CLOUDFLARE_R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
bucket: process.env.CLOUDFLARE_R2_BUCKET_NAME!,
accountId: process.env.CLOUDFLARE_R2_ACCOUNT_ID!,
})
.build();
```
## Create Your Upload Router
Create an API route to handle file uploads:
```typescript
// app/api/s3-upload/route.ts
import { s3 } from "@/lib/upload";
const s3Router = s3.createRouter({
// Define your upload routes with validation
imageUpload: s3
.image()
.maxFileSize("10MB")
.formats(["jpg", "jpeg", "png", "webp"]),
documentUpload: s3.file().maxFileSize("50MB").types(["application/pdf", "application/msword", "application/vnd.openxmlformats-officedocument.wordprocessingml.document"]),
});
export const { GET, POST } = s3Router.handlers;
// Export the router type for client-side type safety
export type Router = typeof s3Router;
```
**What's happening here?**
* `s3.createRouter()` creates a type-safe upload handler
* `s3.image()` and `s3.file()` provide validation and TypeScript inference
* The router automatically handles presigned URLs, validation, and errors
* Exporting the type enables full client-side type safety
## Create Upload Client
Create a type-safe client for your components:
```typescript
// lib/upload-client.ts
import { createUploadClient } from "pushduck";
import type { Router } from "@/app/api/s3-upload/route";
// Create a type-safe upload client
export const upload = createUploadClient<Router>({
baseUrl: "/api/s3-upload",
});
// You can also export specific upload methods
export const { imageUpload, documentUpload } = upload;
```
## Use in Your Components
Now you can use the upload client in any component with full type safety:
```typescript
// components/image-uploader.tsx
"use client";
import { upload } from "@/lib/upload-client";
export function ImageUploader() {
  const { uploadFiles, uploadedFiles, isUploading, progress, error } =
    upload.imageUpload();

  const handleFileChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    const files = e.target.files;
    if (files) {
      uploadFiles(Array.from(files));
    }
  };

  return (
    <div>
      <input type="file" multiple accept="image/*" onChange={handleFileChange} />
      {isUploading && (
        <p>Uploading... {Math.round(progress)}%</p>
      )}
      {error && (
        <p>Error: {String(error)}</p>
      )}
      {uploadedFiles.length > 0 && (
        <ul>
          {uploadedFiles.map((file) => (
            <li key={file.name}>{file.name}</li>
          ))}
        </ul>
      )}
    </div>
  );
}
```
## Add to Your Page
Finally, use your upload component in any page:
```typescript
// app/page.tsx
import { ImageUploader } from "@/components/image-uploader";
export default function HomePage() {
  return (
    <main>
      <h1>Upload Images</h1>
      <ImageUploader />
    </main>
  );
}
```
## 🎉 Congratulations!
You now have **production-ready file uploads** working in your Next.js app! Here's what you accomplished:
* ✅ **Type-safe uploads** with full TypeScript inference
* ✅ **Automatic validation** for file types and sizes
* ✅ **Progress tracking** with loading states
* ✅ **Error handling** with user-friendly messages
* ✅ **Secure uploads** using presigned URLs
* ✅ **Multiple file support** with image preview
**Turbo Mode Issue:** If you're using `next dev --turbo` and experiencing upload issues, try removing the `--turbo` flag from your dev script. There's a known compatibility issue with Turbo mode that can affect file uploads.
## What's Next?
Now that you have the basics working, explore these advanced features:
{" "}
{" "}
βοΈ Other Providers
Try Cloudflare R2 for better performance, or AWS S3, DigitalOcean, MinIO
Provider Setup β
## Need Help?
* 📚 **Documentation**: Explore our comprehensive [guides](/docs/guides)
* 💬 **Community**: Join our [Discord community](https://pushduck.dev/discord)
* 🐛 **Issues**: Report bugs on [GitHub](https://github.com/abhay-ramesh/pushduck)
* 📧 **Support**: Email us at [support@pushduck.com](mailto:support@pushduck.com)
**Loving Pushduck?** Give us a ⭐ on [GitHub](https://github.com/abhay-ramesh/pushduck) and help spread the word!
# Philosophy & Scope (/docs/philosophy)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## Our Philosophy
Pushduck is a **focused upload library**, not a platform. We believe in doing one thing exceptionally well:
> The fastest, most lightweight way to add S3 file uploads to any web application
This document defines the boundaries of what Pushduck will and won't do, and explains why.
***
## Core Principles
### 🪶 Lightweight First
Bundle size is a feature, not an afterthought. Every dependency is carefully considered.
**We use:**
* `aws4fetch` (6.4KB) instead of AWS SDK (500KB+)
* Native `fetch()` API
* Zero unnecessary dependencies
**Result:** Core library stays under 10KB minified + gzipped
***
### 🎯 Focused Scope
Do one thing (uploads) exceptionally well, rather than many things poorly.
**We believe:**
* Specialized tools beat all-in-one solutions
* Small, focused libraries are easier to maintain
* Users prefer composing tools over vendor lock-in
**Result:** You can replace Pushduck easily if needed, or use it alongside other tools
***
### 🔌 Extensibility Over Features
Provide hooks and APIs, not built-in everything.
**We provide:**
* Middleware system for custom logic
* Lifecycle hooks for integration points
* Type-safe APIs for extension
**You implement:**
* Your specific business logic
* Integration with your services
* Custom workflows
**Result:** Maximum flexibility without bloat
***
### 📝 Document, Don't Implement
Show users how to integrate, don't build the integration.
**We provide:**
* Clear integration patterns
* Example code
* Best practices documentation
**We don't build:**
* Database adapters
* Auth providers
* Email services
* Analytics platforms
**Result:** Works with any stack, no vendor lock-in
***
## ✅ What Pushduck Does
### Core Upload Features
* Upload files directly to S3 without touching your server. Reduces bandwidth costs and improves performance.
* Track upload progress, speed, and ETA. Per-file and overall progress metrics for multi-file uploads.
* Validate file size, type, count, and custom rules. Prevent invalid uploads before they reach S3.
* Works with AWS S3, Cloudflare R2, DigitalOcean Spaces, MinIO, and any S3-compatible provider.
### Storage Operations
```typescript
// List files
const files = await storage.list.files({
prefix: "uploads/",
maxResults: 50
});
// Delete files
await storage.delete.file("uploads/old.jpg");
await storage.delete.byPrefix("temp/");
// Get metadata
const info = await storage.metadata.getInfo("uploads/doc.pdf");
// Generate download URLs
const url = await storage.download.presignedUrl("uploads/file.pdf", 3600);
```
**What we provide:**
* ✅ List files with pagination and filtering
* ✅ Delete files (single, batch, by prefix)
* ✅ Get file metadata (size, date, content-type)
* ✅ Generate presigned URLs (upload/download)
* ✅ Check file existence
**What we don't provide:**
* ❌ File search/indexing (use Algolia, Elasticsearch)
* ❌ File versioning (use S3 versioning)
* ❌ Storage analytics (provide hooks for your analytics)
* ❌ Duplicate detection (implement via hooks)
***
### Developer Experience
* Intelligent type inference from server to client. Catch errors at compile time.
* Works with Next.js, React, Express, Fastify, and more. Web Standards-based.
* Interactive setup wizard, automatic provider detection, and project scaffolding.
* Test your upload flows without hitting real S3. Perfect for CI/CD.
***
### Optional UI Components
Following the [shadcn/ui](https://ui.shadcn.com) approach:
```bash
# Copy components into your project
npx @pushduck/cli add upload-dropzone
npx @pushduck/cli add file-list
```
**What we provide:**
* ✅ Basic upload UI components (dropzone, file-list, progress-bar)
* ✅ Headless/unstyled components you can customize
* ✅ Copy-paste, not installed as dependency
**What we don't provide:**
* ❌ Full-featured file manager UI
* ❌ Image gallery/carousel components
* ❌ File preview modals
* ❌ Admin dashboard components
**Philosophy:** You own the code, you customize it. We provide starting points, not rigid components.
***
## ❌ What Pushduck Doesn't Do
### File Processing
**Out of Scope** - Use specialized tools for these tasks
**We don't process files. Use these instead:**
| Task | Recommended Tool | Why |
| --------------------- | -------------------------------------------------- | ---------------------------- |
| Image optimization | [Sharp](https://sharp.pixelplumbing.com/) | Best-in-class, battle-tested |
| Video transcoding | [FFmpeg](https://ffmpeg.org/) | Industry standard |
| PDF generation | [PDFKit](https://pdfkit.org/) | Specialized library |
| Image transformations | [Cloudflare Images](https://cloudflare.com/images) | Edge-optimized |
| Content moderation | AWS Rekognition, Cloudflare | Purpose-built services |
**Why not?**
* These tools do it better than we ever could
* Adding them would balloon our bundle size
* Creates unnecessary dependencies
* Limits user choice
**Our approach:** Document integration patterns
**⚠️ Bandwidth Note:** Server-side processing requires downloading from S3 (inbound) and uploading variants (outbound). This negates the "server never touches files" benefit.
**Better options:**
* **Client-side preprocessing** (before upload) - Zero server bandwidth
* **URL-based transforms** (Cloudinary, Imgix) - Zero server bandwidth
* See [Image Uploads Guide](/docs/guides/image-uploads) for detailed patterns
```typescript
// Example: Integrate with Sharp
import sharp from 'sharp';
const router = s3.createRouter({
imageUpload: s3.image()
.onUploadComplete(async ({ key }) => {
// ⚠️ Downloads file from S3 to server
const buffer = await s3.download(key);
// Process with Sharp
const optimized = await sharp(buffer)
.resize(800, 600)
.webp({ quality: 80 })
.toBuffer();
// ⚠️ Uploads processed file back to S3
await storage.upload.file(optimized, `optimized/${key}`);
})
});
```
```typescript
// ✅ Better: Client-side preprocessing (recommended)
import imageCompression from 'browser-image-compression';
function ImageUpload() {
const { uploadFiles } = upload.images();
const handleUpload = async (file: File) => {
// ✅ Compress on client BEFORE upload
const compressed = await imageCompression(file, {
maxSizeMB: 1,
maxWidthOrHeight: 1920,
});
// Upload already-optimized file
await uploadFiles([compressed]);
};
}
```
***
### Backend Services
**Integration Pattern** - We provide hooks, you connect services
**We don't implement these services:**
| Service | What We Provide | You Implement |
| --------------- | ----------------------- | ------------------ |
| Webhooks | Lifecycle hooks | Webhook delivery |
| Notifications | `onUploadComplete` hook | Email/SMS sending |
| Database | File metadata in hooks | DB storage logic |
| Queue Systems | Hooks with context | Queue integration |
| Background Jobs | Async hook support | Job processing |
| Analytics | Hooks with event data | Analytics tracking |
**Example Integration:**
```typescript
import { db } from '@/lib/database';
const router = s3.createRouter({
fileUpload: s3.file()
.onUploadComplete(async ({ file, key, url, metadata }) => {
// You implement database logic
await db.files.create({
data: {
name: file.name,
size: file.size,
url: url,
s3Key: key,
userId: metadata.userId,
uploadedAt: new Date()
}
});
})
});
```
```typescript
import { sendWebhook } from '@/lib/webhooks';
const router = s3.createRouter({
fileUpload: s3.file()
.onUploadComplete(async ({ file, url }) => {
// You implement webhook delivery
await sendWebhook({
event: 'file.uploaded',
data: {
filename: file.name,
url: url,
timestamp: new Date().toISOString()
}
});
})
});
```
```typescript
import { sendEmail } from '@/lib/email';
const router = s3.createRouter({
fileUpload: s3.file()
.onUploadComplete(async ({ file, metadata }) => {
// You implement email notifications
await sendEmail({
to: metadata.userEmail,
subject: 'File Upload Complete',
body: `Your file "${file.name}" has been uploaded successfully.`
});
})
});
```
```typescript
import { queue } from '@/lib/queue';
const router = s3.createRouter({
fileUpload: s3.file()
.onUploadComplete(async ({ file, key }) => {
// You implement queue integration
await queue.add('process-file', {
fileKey: key,
fileName: file.name,
processType: 'thumbnail-generation'
});
})
});
```
```typescript
import { analytics } from '@/lib/analytics';
const router = s3.createRouter({
fileUpload: s3.file()
.onUploadStart(async ({ file, metadata }) => {
// Track upload start
await analytics.track('upload_started', {
userId: metadata.userId,
fileSize: file.size,
fileType: file.type
});
})
.onUploadComplete(async ({ file, url, metadata }) => {
// Track successful upload
await analytics.track('upload_completed', {
userId: metadata.userId,
fileName: file.name,
fileSize: file.size,
fileUrl: url
});
})
.onUploadError(async ({ error, metadata }) => {
// Track errors
await analytics.track('upload_failed', {
userId: metadata.userId,
error: error.message
});
})
});
```
**Why this approach?**
* ✅ You're not locked into our choice of services
* ✅ Use your existing infrastructure
* ✅ Switch services without changing upload library
* ✅ Keeps our bundle size minimal
***
### Platform Features
**Not a Platform** - Pushduck is a library, not a SaaS
**We will never build:**
❌ **User Management** - Use NextAuth, Clerk, Supabase Auth, etc.\
❌ **Team/Organization Systems** - Build in your application\
❌ **Permission/Role Management** - Implement in your middleware\
❌ **Analytics Dashboards** - We provide hooks for your analytics\
❌ **Admin Panels** - Build with your UI framework\
❌ **Billing/Subscriptions** - Use Stripe, Paddle, etc.\
❌ **API Key Management** - Implement in your system\
❌ **Audit Logs** - Log via hooks to your logging service
**Why not?**
* Every app has different requirements
* Would require a backend service (we're a library)
* Creates vendor lock-in
* Massive scope creep from our core mission
**Our approach:** Provide middleware hooks
```typescript
import { auth } from '@/lib/auth';
import { checkPermission } from '@/lib/permissions';
import { logAudit } from '@/lib/audit';
const router = s3.createRouter({
fileUpload: s3.file()
.middleware(async ({ req, metadata }) => {
// YOUR auth system
const user = await auth.getUser(req);
if (!user) throw new Error('Unauthorized');
// YOUR permissions system
if (!checkPermission(user, 'upload:create')) {
throw new Error('Forbidden');
}
// YOUR audit logging
await logAudit({
userId: user.id,
action: 'file.upload.started',
metadata: metadata
});
return { userId: user.id };
})
});
```
***
### Authentication & Authorization
**What we provide:**
* ✅ Middleware hooks for auth checks
* ✅ Access to request context (headers, cookies, etc.)
* ✅ Integration examples with popular auth providers
**What we don't provide:**
* ❌ Built-in auth system
* ❌ Session management
* ❌ OAuth providers
* ❌ API key generation
* ❌ User database
**Example Integrations:**
```typescript
import { auth } from '@/lib/auth';
const router = s3.createRouter({
fileUpload: s3.file()
.middleware(async ({ req }) => {
const session = await auth.api.getSession({
headers: req.headers
});
if (!session?.user) {
throw new Error('Please sign in to upload files');
}
return {
userId: session.user.id,
userEmail: session.user.email
};
})
});
```
```typescript
import { getServerSession } from 'next-auth';
const router = s3.createRouter({
fileUpload: s3.file()
.middleware(async ({ req }) => {
const session = await getServerSession();
if (!session?.user) {
throw new Error('Please sign in to upload files');
}
return {
userId: session.user.id,
userEmail: session.user.email
};
})
});
```
```typescript
import { auth } from '@clerk/nextjs';
const router = s3.createRouter({
fileUpload: s3.file()
.middleware(async () => {
const { userId } = auth();
if (!userId) {
throw new Error('Unauthorized');
}
return { userId };
})
});
```
```typescript
import { createServerClient } from '@supabase/ssr';
const router = s3.createRouter({
fileUpload: s3.file()
.middleware(async ({ req }) => {
const supabase = createServerClient(/* config */);
const { data: { user } } = await supabase.auth.getUser();
if (!user) {
throw new Error('Unauthorized');
}
return { userId: user.id };
})
});
```
```typescript
import { verifyToken } from '@/lib/auth';
const router = s3.createRouter({
fileUpload: s3.file()
.middleware(async ({ req }) => {
const token = req.headers.get('authorization')?.replace('Bearer ', '');
if (!token) {
throw new Error('No token provided');
}
const user = await verifyToken(token);
if (!user) {
throw new Error('Invalid token');
}
return { userId: user.id };
})
});
```
***
## 🎯 The Integration Pattern
This is our core philosophy in action:
```typescript
// 1. We handle uploads
// 2. You connect your services via hooks
// 3. Everyone wins
const router = s3.createRouter({
fileUpload: s3.file()
// YOUR auth
.middleware(async ({ req }) => {
const user = await yourAuth.getUser(req);
return { userId: user.id };
})
// YOUR business logic
.onUploadStart(async ({ file, metadata }) => {
await yourAnalytics.track('upload_started', {
userId: metadata.userId,
fileSize: file.size
});
})
// YOUR database
.onUploadComplete(async ({ file, url, key, metadata }) => {
await yourDatabase.files.create({
userId: metadata.userId,
url: url,
s3Key: key,
name: file.name,
size: file.size
});
// YOUR notifications
await yourEmailService.send({
to: metadata.userEmail,
template: 'upload-complete',
data: { fileName: file.name }
});
// YOUR webhooks
await yourWebhooks.trigger({
event: 'file.uploaded',
data: { url, fileName: file.name }
});
// YOUR queue
await yourQueue.add('process-file', {
fileKey: key,
userId: metadata.userId
});
// YOUR analytics
await yourAnalytics.track('upload_completed', {
userId: metadata.userId,
fileName: file.name,
fileSize: file.size,
fileUrl: url,
fileKey: key
});
})
// YOUR error handling
.onUploadError(async ({ error, metadata }) => {
await yourErrorTracking.log({
error: error,
userId: metadata.userId
});
})
});
```
**Benefits:**
* 🪶 Pushduck stays lightweight (only upload logic)
* 🔌 You use your preferred services
* 🎯 No vendor lock-in
* ⚡ No unnecessary code in your bundle
* 🔧 Maximum flexibility
***
## 🤔 Decision Framework
When considering new features, we ask:
### ✅ Add if:
1. **Core to uploads** - Directly helps files get to S3
2. **Universally needed** - 80%+ of users need it
3. **Can't be solved externally** - Must be part of upload flow
4. **Lightweight** - Doesn't balloon bundle size
5. **Framework agnostic** - Works everywhere
### ❌ Don't add if:
1. **Better tools exist** - Sharp does image processing better
2. **Service-specific** - Requires backend infrastructure
3. **Opinion-heavy** - Database choice, auth provider, etc.
4. **UI-specific** - Every app needs different UI
5. **Platform feature** - User management, billing, etc.
### 🔌 Provide hooks if:
1. **Common integration point** - Many users need it
2. **Can be external** - Services can be swapped
3. **Timing matters** - Needs to happen at specific point in upload lifecycle
***
## 📋 What This Means For You
### As a User
**You get:**
* ✅ Lightweight, focused upload library
* ✅ Freedom to choose your own tools
* ✅ No vendor lock-in
* ✅ Clear integration patterns
* ✅ Stable, predictable API
**You're responsible for:**
* 🔧 Choosing and integrating your services
* 🔧 Building your UI (or copy ours)
* 🔧 Implementing your business logic
* 🔧 Managing your infrastructure
### As a Contributor
**Focus contributions on:**
* ✅ Core upload features (resumable, queuing, etc.)
* ✅ Framework adapters
* ✅ Testing utilities
* ✅ Documentation & examples
* ✅ Integration guides
**We'll reject PRs for:**
* ❌ File processing features
* ❌ Backend services (webhooks, notifications)
* ❌ Database adapters
* ❌ Auth providers
* ❌ Platform features
***
## 📚 Further Reading
* Our development roadmap and planned features
* Guidelines for contributing to the project
* Patterns for integrating databases, auth, notifications, and more
* Complete examples showing integration patterns
***
## 💬 Questions?
Have questions about scope or philosophy?
* π [GitHub Discussions](https://github.com/abhay-ramesh/pushduck/discussions)
* π¬ [Discord Community](https://pushduck.dev/discord)
* π [GitHub Issues](https://github.com/abhay-ramesh/pushduck/issues)
**Remember:** We're focused on being the best upload library, not the biggest. Every feature we say "no" to keeps Pushduck fast, lightweight, and maintainable.
# Quick Start (/docs/quick-start)
import { Step, Steps } from "fumadocs-ui/components/steps";
import { Callout } from "fumadocs-ui/components/callout";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Accordion, Accordions } from "fumadocs-ui/components/accordion";
Get file uploads working in **3 simple steps**. No overwhelming configuration, just the essentials.
**Prefer automated setup?** Use `npx @pushduck/cli init` for zero-config setup. This guide is for manual installation.
### Install Pushduck
npm
pnpm
yarn
bun
```bash
npm install pushduck
```
```bash
pnpm add pushduck
```
```bash
yarn add pushduck
```
```bash
bun add pushduck
```
### Create API Route
One file with your S3 config and upload route:
```ts title="app/api/upload/route.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3 } = createUploadConfig()
.provider("aws", {
bucket: process.env.AWS_BUCKET_NAME!,
region: process.env.AWS_REGION!,
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
})
.build();
const router = s3.createRouter({
imageUpload: s3.image().maxFileSize('5MB'),
});
export const { GET, POST } = router.handlers;
export type AppRouter = typeof router;
```
**Environment variables**: Add your S3 credentials to `.env.local`. See [Provider Setup](/docs/providers) for getting credentials.
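For reference, a minimal `.env.local` sketch matching the config above (placeholder values; use your real credentials):
```bash title=".env.local"
AWS_BUCKET_NAME=your-bucket-name
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key_here
AWS_SECRET_ACCESS_KEY=your_secret_key_here
```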
### Create Client & Use in Components
Create a reusable client and use it in your components:
```ts title="lib/upload-client.ts"
import { createUploadClient } from 'pushduck/client';
import type { AppRouter } from '@/app/api/upload/route';
export const upload = createUploadClient<AppRouter>({
endpoint: '/api/upload'
});
```
```tsx title="app/upload-demo.tsx"
'use client';
import { upload } from '@/lib/upload-client';
export function UploadDemo() {
const { uploadFiles, files, isUploading } = upload.imageUpload();
  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => uploadFiles(Array.from(e.target.files || []))}
        disabled={isUploading}
      />
      {files.map((file) => (
        <div key={file.id}>
          {file.name} - {file.progress}%
          {file.status === 'success' && <a href={file.url}>View</a>}
        </div>
      ))}
    </div>
  );
}
```
## ✅ Done!
That's it - **3 steps, 3 files, \~40 lines of code**, and you have production-ready file uploads!
***
## Need More?
For production apps, you'll want to add authentication and custom paths:
```ts title="app/api/upload/route.ts"
const { s3 } = createUploadConfig()
.provider("aws", { /* ... */ })
.paths({
prefix: 'uploads',
generateKey: (file, metadata) =>
`${metadata.userId}/${Date.now()}/${file.name}`
})
.build();
const router = s3.createRouter({
imageUpload: s3.image()
.maxFileSize('5MB')
.middleware(async ({ req }) => {
const user = await getUser(req);
if (!user) throw new Error('Unauthorized');
return { userId: user.id };
}),
});
```
See [Configuration Guide](/docs/api/configuration/upload-config) for all options.
Using Remix, SvelteKit, Hono, or another framework? See the [Integrations](/docs/integrations) page for framework-specific examples.
Configure CORS on your S3 bucket:
```json
[
{
"AllowedOrigins": ["http://localhost:3000", "https://yourdomain.com"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
"AllowedHeaders": ["*"],
"ExposeHeaders": ["ETag"]
}
]
```
See [Provider Setup](/docs/providers) for detailed CORS configuration.
Want drag & drop, progress bars, and styled components?
```bash
npx @pushduck/cli add upload-dropzone
npx @pushduck/cli add file-list
```
Or see our [Examples](/docs/examples) for production-ready components.
## Next Steps
# Roadmap (/docs/roadmap)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { File, Folder, Files } from "fumadocs-ui/components/files";
import { TypeTable } from "fumadocs-ui/components/type-table";
## Development Roadmap
Our mission is to make file uploads **simple**, **secure**, and **scalable** for every developer and every use case.
## ✅ Completed
### Core Foundation
✅ **Universal Compatibility** - Works with 16+ frameworks and edge runtimes\
✅ **Type-Safe APIs** - Full TypeScript inference from server to client\
✅ **Multi-Provider Support** - AWS S3, Cloudflare R2, DigitalOcean Spaces, MinIO\
✅ **Production Security** - Presigned URLs, file validation, CORS handling\
✅ **Developer Experience** - Property-based client, comprehensive error handling\
✅ **Overall Progress Tracking** - Now provides real-time aggregate progress metrics
* `progress` - 0-100% completion across all files
* `uploadSpeed` - Combined transfer rate in bytes/second
* `eta` - Overall time remaining in seconds
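A minimal sketch of reading these aggregate metrics on the client (assuming the structured client `upload` from the client docs and an `imageUpload` route):
```typescript
const { uploadFiles, progress, uploadSpeed, eta } = upload.imageUpload();
// progress: 0-100 across all files in the batch
// uploadSpeed: combined transfer rate in bytes/second
// eta: estimated seconds remaining for the whole batch
```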
### Setup & Tooling
✅ **Interactive CLI** - Guided setup with smart defaults and auto-detection\
✅ **Code Generation** - Type-safe API routes and client components\
✅ **Framework Detection** - Automatic Next.js App Router/Pages Router detection\
✅ **Environment Setup** - Automated credential configuration
### Documentation & Examples
✅ **Comprehensive Docs** - Complete API reference and integration guides\
✅ **Live Examples** - Working demos for all supported frameworks\
✅ **Migration Guides** - Step-by-step migration from other solutions\
✅ **Best Practices** - Security, performance, and architecture guidance
## ⚠️ Current Limitations
### Upload Control Limitations
Current upload management has constraints for handling real-world scenarios.
**The following features are NOT yet implemented but are planned for Q3 2025:**
❌ **No Resumable Uploads** - Cannot resume interrupted uploads from where they left off\
❌ **No Pausable Uploads** - Cannot pause ongoing uploads and resume later\
❌ **No Cancel Support** - Cannot cancel individual uploads in progress\
❌ **Limited Network Resilience** - No automatic retry on network failures or connection switching
These features are actively being designed and will be released based on community feedback and use case requirements.
## 🚧 In Progress
### Developer Experience
🚧 **Enhanced Error Messages** - Contextual help and troubleshooting suggestions\
🚧 **Testing Utilities** - Mock S3 providers and testing helpers for CI/CD\
🚧 **Performance Profiling** - Debugging hooks for upload performance analysis
## 📋 Planned
### Q3 2025 - Core Upload Features
* ✅ **Enhanced Hook APIs** - onProgress callbacks and advanced upload state management (with `onStart` lifecycle)
* **Advanced Upload Control** - Resumable, pausable uploads with cancel support and network resilience
* **Upload Queue Management** - Concurrent upload limits, prioritization, and bandwidth throttling
* **Chunk Upload Optimization** - Configurable chunk sizes and parallel chunk uploads for large files
* **Better Error Recovery** - Automatic retry with exponential backoff and network change detection
### Q4 2025 - Framework Support
* **Mobile SDKs** - React Native and Expo support for mobile apps
* **Additional Framework Adapters** - Astro, Qwik, Solid.js, and Fresh
* **Testing Framework** - Built-in test utilities and mock providers
* **Migration Tools** - Automated migration from uploadthing, uppy, and other libraries
### Q1 2026 - Developer Experience
* **Enhanced TypeScript** - Better type inference and IDE support
* **Plugin System** - Hooks-based extension system for custom upload logic
* **Debug Mode** - Detailed logging and upload lifecycle visualization
* **Performance Monitoring Hooks** - Metrics collection for custom analytics integration
### Q2 2026 - Integration Ecosystem
* **Integration Guides** - Documented patterns for Sharp, FFmpeg, Cloudflare Images, etc.
* **Example Applications** - Production-ready examples for common use cases
* **Community Adapters** - Framework adapters maintained by the community
* **Best Practices Library** - Shared configurations and patterns
## 🎯 Long-term Vision
### Stay Focused, Stay Lightweight
Pushduck will remain a **focused upload library**, not a platform. Our mission:
> "The fastest, most lightweight way to add S3 file uploads to any web application"
**What we'll always prioritize:**
* 🪶 **Minimal bundle size** - Keep dependencies light
* 🎯 **Core upload features** - Do one thing exceptionally well
* 🔧 **Extensibility** - Provide hooks, not built-in features for everything
* 📚 **Integration guides** - Document how to integrate with other tools
* 🌍 **Universal compatibility** - Work everywhere JavaScript runs
**What we won't build:**
* ❌ File processing (use Sharp, FFmpeg, etc.)
* ❌ Content moderation (use Cloudflare, AWS services)
* ❌ Analytics dashboards (provide hooks for your analytics)
* ❌ Team management (that's your app's concern)
* ❌ Platform features (we're a library, not a SaaS)
📖 **Read our full philosophy** - For detailed scope boundaries, integration patterns, and decision framework, see [Philosophy & Scope](/docs/philosophy)
## 💡 Ideas & Suggestions
Have ideas for pushduck? We'd love to hear them!
* [Feature Requests](https://github.com/abhay-ramesh/pushduck/discussions/categories/ideas)
* [Community Discord](https://pushduck.dev/discord)
* [GitHub Issues](https://github.com/abhay-ramesh/pushduck/issues)
## 🤝 Contributing
Want to help build the future of file uploads? Check out our [Contributing Guide](https://github.com/abhay-ramesh/pushduck/blob/main/CONTRIBUTING.md) to get started.
### Current Priorities
We're actively looking for contributors in these areas:
* **Core Upload Features** - Resumable uploads, better error recovery
* **Framework Adapters** - Help us support more frameworks and platforms
* **Documentation** - Improve guides, examples, and API documentation
* **Testing** - Expand test coverage and add integration tests
* **Performance** - Optimize bundle size and runtime performance
* **Integration Guides** - Document patterns for common tools (Sharp, FFmpeg, etc.)
***
*Last updated: June 2025*
This roadmap is community-driven. **Your feedback shapes our priorities.**
Join our [Discord](https://pushduck.dev/discord) or open an issue on
[GitHub](https://github.com/abhay-ramesh/pushduck) to influence what we
build next.
## Current Status
We've already solved the core problems that have frustrated developers for years:
✅ **Interactive CLI** - Guided setup with smart defaults and auto-detection\
✅ **Type Safety** - Full TypeScript inference for upload schemas\
✅ **Multiple Providers** - Cloudflare R2, AWS S3, Google Cloud, and more\
✅ **Production Ready** - Used by teams processing millions of uploads\
✅ **Developer Experience** - Property-based client access with enhanced IntelliSense
## What's Next
### 🚀 Q3 2025: Core Upload Features
**Planned for Q3 2025** - Complete control over upload lifecycle with automatic recovery from network issues:
```typescript
// PLANNED API - Not yet implemented
const { files, uploadFiles, pauseUpload, resumeUpload, cancelUpload } = upload.images
// Pause individual uploads
await pauseUpload(fileId)
// Resume from where it left off
await resumeUpload(fileId)
// Cancel with cleanup
await cancelUpload(fileId)
// Automatic network resilience
const config = {
retryAttempts: 3,
networkSwitchTolerance: true,
resumeOnReconnect: true
}
```
Manage upload queues with prioritization and bandwidth throttling:
```typescript
// PLANNED API
const { uploadFiles } = upload.images({
queue: {
maxConcurrent: 3,
maxBandwidth: '5MB/s',
priority: 'high'
}
})
```
Optimize large file uploads with configurable chunk sizes and parallel processing:
```typescript
// PLANNED API
const { uploadFiles } = upload.videos({
chunking: {
chunkSize: '10MB',
parallelChunks: 3,
retryChunks: true
}
})
```
Robust error handling with automatic retry and network change detection:
```typescript
// PLANNED API
const { uploadFiles } = upload.files({
retry: {
attempts: 3,
exponentialBackoff: true,
detectNetworkChange: true
}
})
```
### 🚀 Q4 2025: Framework Support
Complete framework support with the same developer experience:
```typescript
// Vue 3 Composition API
import { createUploadClient } from '@pushduck/vue'
const upload = createUploadClient({
endpoint: '/api/upload'
})
const { files, uploadFiles, isUploading } = upload.imageUpload
```
```typescript
// Svelte stores
import { uploadStore } from '@pushduck/svelte'
const upload = uploadStore('/api/upload')
// Reactive stores for upload state
$: ({ files, isUploading } = $upload.imageUpload)
```
```typescript
// Pure JavaScript
import { UploadClient } from '@pushduck/core'
const client = new UploadClient('/api/upload')
client.upload('imageUpload', files)
.on('progress', (progress) => console.log(progress))
.on('complete', (urls) => console.log(urls))
```
{" "}
Upload support for React Native and Expo apps with the same type-safe API. Platform-specific optimizations for iOS and Android.
Built-in mock S3 providers, upload simulation utilities, and testing helpers for comprehensive test coverage in your CI/CD pipeline.
## Community Feedback
### What We're Hearing From You
Based on community feedback, GitHub issues, and Discord discussions, here are the most requested **in-scope** features:
**🔥 High Priority:**
* ✅ **Upload Resume & Retry** - Automatic retry and resume for failed uploads (Q3 2025)
* **Better Error Messages** - More helpful error descriptions with suggested fixes
* **Queue Management** - Control concurrent uploads and bandwidth throttling
* **Progress Customization** - More granular progress tracking hooks
* **Example Library** - More real-world examples and integration patterns
**💭 Under Discussion:**
* **GraphQL Integration** - Native GraphQL subscription support for upload progress
* **Webhook Support** - Custom webhook configuration for upload lifecycle events
* **Component Library** - Headless UI components for common upload patterns
* **Performance Profiling** - Built-in profiling tools for debugging upload issues
**Out of Scope:**
We won't build features like file processing (use Sharp/FFmpeg), content moderation (use specialized services), or platform features (team management, dashboards). We're staying focused as a lightweight upload library.
## How We Prioritize
Our roadmap is driven by three key factors:
1. **Community Impact** - Features that solve real problems for the most developers
2. **Technical Excellence** - Maintaining our high standards for type safety and DX
3. **Ecosystem Health** - Building a sustainable, long-term solution
### Voting on Features
Have an idea or want to prioritize something? Here's how to influence our roadmap:
Use our feature request template with use cases and expected API design. Include code examples and real-world scenarios.
{" "}
Join our Discord server where we run monthly polls on upcoming features. Your
vote directly influences our development priorities.
First Friday of every month at 10 AM PT - open to all developers. Share your use cases and help shape the future.
## Development Principles
As we build new features, we never compromise on:
* **🪶 Lightweight First** - Bundle size is a feature, not an afterthought
* **🎯 Focused Scope** - Do one thing (uploads) exceptionally well
* **🔒 Type Safety** - Every feature must have full TypeScript support
* **🔄 Zero Breaking Changes** - Backward compatibility is non-negotiable
* **⚡ Performance** - New features can't slow down existing workflows
* **🔌 Extensibility** - Provide hooks, not built-in everything
Follow our [GitHub project board](https://github.com/abhay-ramesh/pushduck/projects) for real-time updates on development progress.
## Get Involved
This roadmap exists because of developers like you. Here's how to shape the future:
### For Users
* **Share your use case** - Tell us what you're building
* **Report pain points** - What's still too complicated?
* **Request integrations** - Which providers or tools do you need?
### For Contributors
* **Code contributions** - Check our [contributing guide](https://github.com/abhay-ramesh/pushduck/blob/main/CONTRIBUTING.md)
* **Documentation** - Help improve examples and guides
* **Community support** - Answer questions in Discord and GitHub
### For Organizations
* **Sponsorship** - Support full-time development
* **Enterprise feedback** - Share your scale challenges
* **Partnership** - Integrate pushduck with your platform
***
**Ready to build the future of file uploads?** Join our [Discord
community](https://pushduck.dev/discord) and help us make file
uploads delightful for every Next.js developer.
# CLI Reference (/docs/api/cli)
import { Callout } from 'fumadocs-ui/components/callout'
import { Card, Cards } from 'fumadocs-ui/components/card'
import { Steps, Step } from 'fumadocs-ui/components/steps'
import { Tab, Tabs } from 'fumadocs-ui/components/tabs'
import { Files, Folder, File } from 'fumadocs-ui/components/files'
**🚀 Recommended**: Use our CLI for the fastest setup experience
**Next.js Only**: The pushduck CLI currently only supports Next.js projects. Support for other frameworks is coming soon.
## Quick Start
Get your file uploads working in under 2 minutes with our interactive CLI tool.
npm
pnpm
yarn
bun
```bash
npx @pushduck/cli@latest init
```
```bash
pnpm dlx @pushduck/cli@latest init
```
```bash
yarn dlx @pushduck/cli@latest init
```
```bash
bun x @pushduck/cli@latest init
```
The CLI will automatically:
* 🔍 **Detect your package manager** (npm, pnpm, yarn, bun)
* 📦 **Install dependencies** using your preferred package manager
* ⚙️ **Set up your storage provider** (Cloudflare R2, AWS S3, etc.)
* 🛠️ **Generate type-safe code** (API routes, client, components)
* ⚙️ **Configure environment** variables and bucket setup
## What the CLI Does
* Detects App Router vs Pages Router
* Finds existing TypeScript configuration
* Checks for existing upload implementations
* Validates project structure
* AWS S3, Cloudflare R2, DigitalOcean Spaces
* Google Cloud Storage, MinIO
* Automatic bucket creation
* CORS configuration
* Type-safe API routes
* Upload client configuration
* Example components
* Environment variable templates
The CLI walks you through each step, asking only what's necessary for your specific setup.
## CLI Commands
### `init` - Initialize Setup
npm
pnpm
yarn
bun
```bash
npx @pushduck/cli@latest init [options]
```
```bash
pnpm dlx @pushduck/cli@latest init [options]
```
```bash
yarn dlx @pushduck/cli@latest init [options]
```
```bash
bun x @pushduck/cli@latest init [options]
```
**Options:**
* `--provider <provider>` - Skip provider selection (aws|cloudflare-r2|digitalocean|minio|gcs)
* `--skip-examples` - Don't generate example components
* `--skip-bucket` - Don't create S3 bucket automatically
* `--api-path <path>` - Custom API route path (default: `/api/upload`)
* `--dry-run` - Show what would be created without creating
* `--verbose` - Show detailed output
**Examples:**
Quick Setup
AWS Direct
Custom API Path
Components Only
```bash
# Interactive setup with all prompts
npx @pushduck/cli@latest init
```
```bash
# Skip provider selection, use AWS S3
npx @pushduck/cli@latest init --provider aws
```
```bash
# Use custom API route path
npx @pushduck/cli@latest init --api-path /api/files
```
```bash
# Generate only components, skip bucket creation
npx @pushduck/cli@latest init --skip-bucket --skip-examples
```
### `add` - Add Upload Route
```bash
npx @pushduck/cli@latest add
```
Add new upload routes to existing configuration:
```bash
# Interactive route builder
npx @pushduck/cli@latest add
# Example output:
# ✨ Added imageUpload route for profile pictures
# ✨ Added documentUpload route for file attachments
# ✨ Updated router types
```
### `test` - Test Configuration
```bash
npx @pushduck/cli@latest test [options]
```
**Options:**
* `--verbose` - Show detailed test output
Validates your current setup:
```bash
npx @pushduck/cli@latest test
# Example output:
# ✅ Environment variables configured
# ✅ S3 bucket accessible
# ✅ CORS configuration valid
# ✅ API routes responding
# ✅ Types generated correctly
```
## Interactive Setup Walkthrough
### Step 1: Project Detection
```
🔍 Detecting your project...
✓ Next.js App Router detected
✓ TypeScript configuration found
✓ Package manager: pnpm detected
✓ No existing upload configuration
✓ Project structure validated
```
### Step 2: Provider Selection
```
? Which cloud storage provider would you like to use?
❯ Cloudflare R2 (recommended)
AWS S3 (classic, widely supported)
DigitalOcean Spaces (simple, affordable)
Google Cloud Storage (enterprise-grade)
MinIO (self-hosted, open source)
Custom S3-compatible endpoint
```
### Step 3: Credential Setup
```
🔧 Setting up Cloudflare R2...
🔍 Checking for existing credentials...
✓ Found CLOUDFLARE_R2_ACCESS_KEY_ID
✓ Found CLOUDFLARE_R2_SECRET_ACCESS_KEY
✓ Found CLOUDFLARE_R2_ACCOUNT_ID
✗ CLOUDFLARE_R2_BUCKET_NAME not found
? Enter your R2 bucket name: my-app-uploads
? Create bucket automatically? Yes
```
### Step 4: API Configuration
```
? Where should we create the upload API?
❯ app/api/upload/route.ts (recommended)
app/api/s3-upload/route.ts (classic)
Custom path
? Generate example upload page?
❯ Yes, create app/upload/page.tsx with full example
Yes, just add components to components/ui/
No, I'll build my own
```
### Step 5: File Generation
```
🛠️ Generating files...
✨ Created files:
├── app/api/upload/route.ts
├── app/upload/page.tsx
├── components/ui/upload-button.tsx
├── components/ui/upload-dropzone.tsx
├── lib/upload-client.ts
└── .env.example
📦 Installing dependencies...
✓ pushduck
✓ react-dropzone
🎉 Setup complete! Your uploads are ready.
```
## Generated Project Structure
After running the CLI, your project will have:
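These match the files reported in the CLI's setup summary:
```
app/api/upload/route.ts
app/upload/page.tsx
components/ui/upload-button.tsx
components/ui/upload-dropzone.tsx
lib/upload-client.ts
.env.example
```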
### Generated API Route
```typescript title="app/api/upload/route.ts"
import { s3 } from '@/lib/upload'
import { getServerSession } from 'next-auth'
import { authOptions } from '@/lib/auth'
const s3Router = s3.createRouter({
// Image uploads for profile pictures
imageUpload: s3.image()
.maxFileSize("5MB")
.maxFiles(1)
.formats(["jpeg", "png", "webp"])
.middleware(async ({ req, metadata }) => {
const session = await getServerSession(authOptions)
if (!session?.user?.id) {
throw new Error("Authentication required")
}
return {
...metadata,
userId: session.user.id,
folder: `uploads/${session.user.id}`
}
}),
// Document uploads
documentUpload: s3.file()
.maxFileSize("10MB")
.maxFiles(5)
.types(["application/pdf", "text/plain", "application/msword"])
.middleware(async ({ req, metadata }) => {
const session = await getServerSession(authOptions)
if (!session?.user?.id) {
throw new Error("Authentication required")
}
return {
...metadata,
userId: session.user.id,
folder: `documents/${session.user.id}`
}
})
})
export type AppRouter = typeof s3Router
export const { GET, POST } = s3Router.handlers
```
### Generated Upload Client
```typescript title="lib/upload-client.ts"
import { createUploadClient } from 'pushduck/client'
import type { AppRouter } from '@/app/api/upload/route'
export const upload = createUploadClient<AppRouter>({
endpoint: '/api/upload'
})
```
### Generated Example Page
```typescript title="app/upload/page.tsx"
import { UploadButton } from '@/components/ui/upload-button'
import { UploadDropzone } from '@/components/ui/upload-dropzone'
export default function UploadPage() {
  return (
    <main>
      <h1>File Upload Demo</h1>
      <h2>Profile Picture</h2>
      <UploadButton />
      <h2>Documents</h2>
      <UploadDropzone />
    </main>
  )
}
```
## Environment Variables
The CLI automatically creates `.env.example` and prompts for missing values:
```bash title=".env.example"
# Cloudflare R2 Configuration (Recommended)
CLOUDFLARE_R2_ACCESS_KEY_ID=your_access_key_here
CLOUDFLARE_R2_SECRET_ACCESS_KEY=your_secret_key_here
CLOUDFLARE_R2_ACCOUNT_ID=your_account_id_here
CLOUDFLARE_R2_BUCKET_NAME=your-bucket-name
# Alternative: AWS S3 Configuration
# AWS_ACCESS_KEY_ID=your_access_key_here
# AWS_SECRET_ACCESS_KEY=your_secret_key_here
# AWS_REGION=us-east-1
# AWS_S3_BUCKET_NAME=your-bucket-name
# Next.js Configuration
NEXTAUTH_SECRET=your_nextauth_secret_here
NEXTAUTH_URL=http://localhost:3000
# Optional: Custom S3 endpoint (for MinIO, etc.)
# S3_ENDPOINT=https://your-custom-endpoint.com
```
## Provider-Specific Setup
```bash
npx @pushduck/cli@latest init --provider cloudflare-r2
```
**What gets configured:**
* Cloudflare R2 S3-compatible endpoints
* Global edge network optimization
* Zero egress fee configuration
* CORS settings for web uploads
```bash
npx @pushduck/cli@latest init --provider aws
```
**What gets configured:**
* AWS S3 regional endpoints
* IAM permissions and policies
* Bucket lifecycle management
* CloudFront CDN integration (optional)
```bash
npx @pushduck/cli@latest init --provider digitalocean
```
**Required Environment Variables:**
* `AWS_ACCESS_KEY_ID` (DO Spaces key)
* `AWS_SECRET_ACCESS_KEY` (DO Spaces secret)
* `AWS_REGION` (DO region)
* `AWS_S3_BUCKET_NAME`
* `S3_ENDPOINT` (DO Spaces endpoint)
**What the CLI does:**
* Configures DigitalOcean Spaces endpoints
* Sets up CDN configuration
* Validates access permissions
* Configures CORS policies
```bash
npx @pushduck/cli@latest init --provider minio
```
**Required Environment Variables:**
* `AWS_ACCESS_KEY_ID` (MinIO access key)
* `AWS_SECRET_ACCESS_KEY` (MinIO secret key)
* `AWS_REGION=us-east-1`
* `AWS_S3_BUCKET_NAME`
* `S3_ENDPOINT` (MinIO server URL)
**What the CLI does:**
* Configures self-hosted MinIO endpoints
* Sets up bucket policies
* Validates server connectivity
* Configures development-friendly settings
## Troubleshooting
### CLI Not Found
```bash
# If you get "command not found"
npm install -g @pushduck/cli
# Or use npx for one-time usage
npx @pushduck/cli@latest init
```
### Permission Errors
```bash
# If you get permission errors during setup
sudo npx @pushduck/cli@latest init
# Or fix npm permissions
npm config set prefix ~/.npm-global
export PATH=~/.npm-global/bin:$PATH
```
### Existing Configuration
```bash
# Force overwrite existing configuration
npx @pushduck/cli@latest init --force
# Or backup and regenerate
cp app/api/upload/route.ts app/api/upload/route.ts.backup
npx @pushduck/cli@latest init
```
### Bucket Creation Failed
```bash
# Test your credentials first
npx @pushduck/cli@latest test
# Skip automatic bucket creation
npx @pushduck/cli@latest init --skip-bucket
# Create bucket manually, then run:
npx @pushduck/cli@latest test
```
## Advanced Usage
### Custom Templates
```bash
# Use custom file templates
npx @pushduck/cli@latest init --template enterprise
# Available templates:
# - default: Basic setup with examples
# - minimal: Just API routes, no examples
# - enterprise: Full security and monitoring
# - ecommerce: Product images and documents
```
### Monorepo Support
```bash
# For monorepos, specify the Next.js app directory
cd apps/web
npx @pushduck/cli@latest init
# Or use the --cwd flag
npx @pushduck/cli@latest init --cwd apps/web
```
### CI/CD Integration
```bash
# Non-interactive mode for CI/CD
npx @pushduck/cli@latest init \
--provider aws \
--skip-examples \
--api-path /api/upload \
--no-interactive
```
***
**Complete CLI Reference**: This guide covers all CLI commands, options, and use cases. For a quick start, see our [Quick Start guide](/docs/quick-start).
# API Reference (/docs/api)
import { Card, Cards } from "fumadocs-ui/components/card";
import { Callout } from "fumadocs-ui/components/callout";
## Complete API Documentation
Complete reference documentation for all pushduck APIs, from client-side hooks to server configuration and storage operations.
**Type-Safe by Design**: All pushduck APIs are built with TypeScript-first design, providing excellent developer experience with full type inference and autocompletion.
## Client APIs
* `useUpload` - Core upload hook with progress tracking
* `useUploadRoute` - Route-specific uploads with validation
**Perfect for**: React applications, reactive UIs
* `createUploadClient` - Type-safe upload client
* Property-based route access
* Enhanced type inference
**Perfect for**: Complex applications, better DX
## Server Configuration
* Route definitions and validation
* File type and size restrictions
* Custom naming strategies
**Essential for**: Setting up upload routes
* Router configuration options
* Middleware integration
* Advanced routing patterns
**Essential for**: Server setup and customization
* Default upload options
* Error handling configuration
* Progress tracking settings
**Essential for**: Client-side configuration
* Dynamic path generation
* Custom naming strategies
* Folder organization
**Essential for**: File organization
## Server APIs
* Route definition and configuration
* Built-in validation and middleware
* Type-safe request/response handling
**Core API**: The heart of pushduck
* File listing and metadata
* Delete operations
* Presigned URL generation
**Perfect for**: File management features
## Developer Tools
* Project initialization
* Component generation
* Development utilities
**Perfect for**: Quick setup and scaffolding
* Error diagnosis and solutions
* Performance optimization
* Common gotchas and fixes
**Essential for**: Problem solving
## Quick Reference
### Basic Server Setup
```typescript
import { createS3Router, s3 } from 'pushduck/server';
const uploadRouter = createS3Router({
storage: {
provider: 'aws-s3',
region: 'us-east-1',
bucket: 'my-bucket',
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
},
},
routes: {
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB"),
},
});
export const { GET, POST } = uploadRouter.handlers;
```
### Basic Client Usage
```typescript
import { useUpload } from 'pushduck/client';
function UploadComponent() {
const { upload, uploading, progress } = useUpload({
endpoint: '/api/upload',
route: 'imageUpload',
});
const handleUpload = async (file: File) => {
const result = await upload(file);
console.log('Public URL:', result.url); // Permanent access
console.log('Download URL:', result.presignedUrl); // Temporary access (1 hour)
};
  return (
    <div>
      <input type="file" onChange={(e) => handleUpload(e.target.files![0])} />
      {uploading && <p>Progress: {progress}%</p>}
    </div>
  );
}
```
## Architecture Overview
**Getting Started**: New to pushduck? Start with the [Quick Start](/docs/quick-start) guide, then explore the specific APIs you need for your use case.
## API Categories
| Category | Purpose | Best For |
| ------------------- | ----------------------- | ---------------------------------------- |
| **Client APIs** | Frontend file uploads | React components, user interactions |
| **Server APIs** | Backend upload handling | Route definitions, validation |
| **Storage APIs** | File management | Listing, deleting, URL generation |
| **Configuration** | Setup and customization | Project configuration, advanced features |
| **Developer Tools** | Development workflow | Setup, debugging, optimization |
## Next Steps
1. **New to pushduck?** → Start with [Quick Start](/docs/quick-start)
2. **Setting up uploads?** → Check [S3 Router](/docs/api/s3-router)
3. **Building UI?** → Explore [React Hooks](/docs/api/client)
4. **Managing files?** → Use [Storage API](/docs/api/storage)
5. **Need help?** → Visit [Troubleshooting](/docs/api/troubleshooting)
# S3 Router (/docs/api/s3-router)
## S3 Router Configuration
The S3 router provides a type-safe way to define upload endpoints with schema validation, middleware, and lifecycle hooks.
## Basic Router Setup
```typescript title="app/api/upload/route.ts"
// app/api/upload/route.ts
import { s3 } from '@/lib/upload'
const s3Router = s3.createRouter({
imageUpload: s3
.image()
.maxFileSize('5MB')
.formats(['jpeg', 'jpg', 'png', 'webp'])
.middleware(async ({ file, metadata }) => { // [!code highlight]
// Add authentication and user context
return {
...metadata,
userId: 'user-123',
uploadedAt: new Date().toISOString(),
}
}),
documentUpload: s3
.file()
.maxFileSize('10MB')
.types(['application/pdf', 'text/plain'])
.paths({
prefix: 'documents',
}),
})
// Export the handlers
export const { GET, POST } = s3Router.handlers; // [!code highlight]
```
## Schema Builders
### Image Schema
```typescript title="Image Schema Configuration"
s3.image()
.maxFileSize('5MB') // [!code highlight]
.formats(['jpeg', 'jpg', 'png', 'webp', 'gif'])
.dimensions({ minWidth: 100, maxWidth: 2000 })
.quality(0.8) // JPEG quality
```
### File Schema
```typescript title="File Schema Configuration"
s3.file()
.maxFileSize('10MB') // [!code highlight]
.types(['application/pdf', 'text/plain', 'application/json'])
.extensions(['pdf', 'txt', 'json'])
```
### Object Schema (Multiple Files)
```typescript title="Object Schema Configuration"
s3.object({
images: s3.image().maxFileSize('5MB').maxFiles(5), // [!code highlight]
documents: s3.file().maxFileSize('10MB').maxFiles(2),
thumbnail: s3.image().maxFileSize('1MB').maxFiles(1),
})
```
## Route Configuration
### Middleware
Add authentication, validation, and metadata:
```typescript title="Middleware Example"
.middleware(async ({ file, metadata, req }) => { // [!code highlight]
// Authentication
const user = await authenticateUser(req)
if (!user) {
throw new Error('Authentication required') // [!code highlight]
}
// File validation
if (file.size > 10 * 1024 * 1024) {
throw new Error('File too large')
}
// Return enriched metadata
return {
...metadata, // Client-provided metadata (e.g., albumId, tags)
userId: user.id, // [!code highlight]
userRole: user.role,
uploadedAt: new Date().toISOString(),
ipAddress: req.headers.get('x-forwarded-for'),
}
})
```
**Client Metadata Support:** The `metadata` parameter contains data sent from the client via `uploadFiles(files, metadata)`. This allows passing UI context like album selections, tags, or form data. The middleware can then enrich this client metadata with server-side data like authenticated user information.
**Example with Client Metadata:**
```typescript
// Client component
const { uploadFiles } = upload.imageUpload();
uploadFiles(files, {
albumId: 'vacation-2025',
tags: ['beach', 'sunset'],
visibility: 'private'
});
// Server middleware receives and validates
.middleware(async ({ req, metadata }) => {
const user = await authenticateUser(req);
// Validate client-provided albumId
if (metadata?.albumId) {
const album = await db.albums.findFirst({
where: {
id: metadata.albumId,
userId: user.id // Ensure user owns the album
}
});
if (!album) throw new Error('Album not found or access denied');
}
return {
// Client metadata (validated)
albumId: metadata?.albumId,
tags: metadata?.tags || [],
visibility: metadata?.visibility || 'private',
// Server metadata (trusted)
userId: user.id, // From auth, NOT from client
role: user.role, // From auth, NOT from client
uploadedAt: new Date().toISOString()
};
})
```
**Security Warning:** Client metadata is UNTRUSTED user input. Always validate and never trust client-provided identity claims (userId, role, permissions, etc.). Extract identity from authenticated sessions on the server.
### Path Configuration
Control where files are stored:
```typescript
.paths({
// Simple prefix
prefix: 'user-uploads',
// Custom path generation
generateKey: (ctx) => {
const { file, metadata, routeName } = ctx
const userId = metadata.userId
const timestamp = Date.now()
return `${routeName}/${userId}/${timestamp}/${file.name}`
},
// Simple suffix
suffix: 'processed',
})
```
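With the `generateKey` function above, an upload of `photo.jpg` by user `user-123` through the `imageUpload` route would be stored under a key like the following (timestamp illustrative):
```
imageUpload/user-123/1736100000000/photo.jpg
```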
### Lifecycle Hooks
React to upload events:
```typescript
.onUploadStart(async ({ file, metadata }) => {
console.log(`Starting upload: ${file.name}`)
// Log to analytics
await analytics.track('upload_started', {
userId: metadata.userId,
filename: file.name,
fileSize: file.size,
})
})
.onUploadComplete(async ({ file, url, metadata }) => {
console.log(`Upload complete: ${file.name} -> ${url}`)
// Save to database
await db.files.create({
filename: file.name,
url,
userId: metadata.userId,
size: file.size,
contentType: file.type,
uploadedAt: new Date(),
})
// Send notification
await notificationService.send({
userId: metadata.userId,
type: 'upload_complete',
message: `${file.name} uploaded successfully`,
})
})
.onUploadError(async ({ file, error, metadata }) => {
console.error(`Upload failed: ${file.name}`, error)
// Log error
await errorLogger.log({
operation: 'file_upload',
error: error.message,
userId: metadata.userId,
filename: file.name,
})
})
```
## Advanced Examples
### E-commerce Product Images
```typescript
const productRouter = s3.createRouter({
productImages: s3
.image()
.maxFileSize('5MB')
.formats(['jpeg', 'jpg', 'png', 'webp'])
.dimensions({ minWidth: 800, maxWidth: 2000 })
.middleware(async ({ metadata, req }) => {
const user = await authenticateUser(req)
const productId = metadata.productId
// Verify user owns the product
const product = await db.products.findFirst({
where: { id: productId, ownerId: user.id }
})
if (!product) {
throw new Error('Product not found or access denied')
}
return {
...metadata,
userId: user.id,
productId,
productName: product.name,
}
})
.paths({
generateKey: (ctx) => {
const { metadata } = ctx
return `products/${metadata.productId}/images/${Date.now()}.jpg`
}
})
.onUploadComplete(async ({ url, metadata }) => {
// Update product with new image
await db.products.update({
where: { id: metadata.productId },
data: {
images: {
push: url
}
}
})
}),
productDocuments: s3
.file()
.maxFileSize('10MB')
.types(['application/pdf'])
.paths({
prefix: 'product-docs',
})
.onUploadComplete(async ({ url, metadata }) => {
await db.productDocuments.create({
productId: metadata.productId,
documentUrl: url,
type: 'specification',
})
}),
})
```
### User Profile System
```typescript
const profileRouter = s3.createRouter({
avatar: s3
.image()
.maxFileSize('2MB')
.formats(['jpeg', 'jpg', 'png'])
.dimensions({ minWidth: 100, maxWidth: 500 })
.middleware(async ({ req }) => {
const user = await authenticateUser(req)
return { userId: user.id, type: 'avatar' }
})
.paths({
generateKey: (ctx) => {
return `users/${ctx.metadata.userId}/avatar.jpg`
}
})
.onUploadComplete(async ({ url, metadata }) => {
// Update user profile
await db.users.update({
where: { id: metadata.userId },
data: { avatarUrl: url }
})
// Invalidate cache
await cache.del(`user:${metadata.userId}`)
}),
documents: s3
.object({
resume: s3.file().maxFileSize('5MB').types(['application/pdf']).maxFiles(1),
portfolio: s3.file().maxFileSize('10MB').maxFiles(3),
})
.middleware(async ({ req }) => {
const user = await authenticateUser(req)
return { userId: user.id }
})
.paths({
prefix: 'user-documents',
}),
})
```
## Client-Side Usage
Once you have your router set up, use it from the client:
```typescript
// components/FileUploader.tsx
import { useUploadRoute } from 'pushduck/client'
export function FileUploader() {
const { uploadFiles, isUploading } = useUploadRoute('imageUpload')
const handleUpload = async (files: FileList) => {
try {
const results = await uploadFiles(Array.from(files), {
// This metadata will be passed to middleware
productId: 'product-123',
category: 'main-images',
})
console.log('Upload complete:', results)
} catch (error) {
console.error('Upload failed:', error)
}
}
  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => e.target.files && handleUpload(e.target.files)}
        disabled={isUploading}
      />
      {isUploading && <p>Uploading...</p>}
    </div>
  )
}
```
## Type Safety
The router provides full TypeScript support:
```typescript
// Types are automatically inferred
type RouterType = typeof s3Router
// Get route names
type RouteNames = keyof RouterType // 'imageUpload' | 'documentUpload'
// Get route input types
type ImageUploadInput = InferRouteInput
// Get route metadata types
type ImageUploadMetadata = InferRouteMetadata
```
# Troubleshooting (/docs/api/troubleshooting)
import { Callout } from "fumadocs-ui/components/callout";
import { Tabs, Tab } from "fumadocs-ui/components/tabs";
## Common Issues and Solutions
Common issues and solutions when using pushduck.
## Development Issues
### Next.js Turbo Mode Compatibility
**Known Issue:** pushduck has compatibility issues with Next.js Turbo mode (`--turbo` flag).
**Problem:** Uploads fail or behave unexpectedly when using `next dev --turbo`.
**Solution:** Remove the `--turbo` flag from your development script:
```json
{
"scripts": {
    // ❌ This may cause issues
    "dev": "next dev --turbo",
    // ✅ Use this instead
    "dev": "next dev"
}
}
```
```bash
# ❌ This may cause issues
npm run dev --turbo
# ✅ Use this instead
npm run dev
```
**Why this happens:** Turbo mode's aggressive caching and bundling can interfere with the upload process, particularly with presigned URL generation and file streaming.
## Upload Failures
### CORS Errors
**Problem:** Browser console shows CORS errors when uploading files.
**Symptoms:**
```
Access to XMLHttpRequest at 'https://bucket.s3.amazonaws.com/...'
from origin 'http://localhost:3000' has been blocked by CORS policy
```
**Solution:** Configure CORS on your storage bucket.
**Comprehensive CORS Guide:** For detailed CORS configuration, testing, and troubleshooting across all providers, see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl).
**Quick fixes:**
* See the [provider setup guides](/docs/providers) for basic CORS configuration
* Ensure your domain is included in `AllowedOrigins`
* Verify all required HTTP methods are allowed (`PUT`, `POST`, `GET`)
* Check that required headers are included in `AllowedHeaders`
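As a starting point, a minimal bucket CORS policy like this one (adapted from the quick-start guide; adjust `AllowedOrigins` to your domains) resolves most upload CORS errors:
```json
[
  {
    "AllowedOrigins": ["http://localhost:3000", "https://yourdomain.com"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"]
  }
]
```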
### Environment Variables Not Found
**Problem:** Errors about missing environment variables.
**Symptoms:**
```
Error: Environment variable CLOUDFLARE_R2_ACCESS_KEY_ID is not defined
```
**Solution:** Ensure your environment variables are properly set:
1. **Check your `.env.local` file exists** in your project root
2. **Verify variable names** match exactly (case-sensitive)
3. **Restart your development server** after adding new variables
```bash
# .env.local
CLOUDFLARE_R2_ACCESS_KEY_ID=your_access_key
CLOUDFLARE_R2_SECRET_ACCESS_KEY=your_secret_key
CLOUDFLARE_R2_ACCOUNT_ID=your_account_id
CLOUDFLARE_R2_BUCKET_NAME=your-bucket-name
```
### File Size Limits
**Problem:** Large files fail to upload.
**Solution:** Check and adjust size limits:
```typescript
// app/api/upload/route.ts
const uploadRouter = s3.createRouter({
imageUpload: s3
.image()
.maxFileSize("10MB") // Increase as needed
.formats(["jpeg", "png", "webp"]),
});
```
## Type Errors
### TypeScript Inference Issues
**Problem:** TypeScript errors with upload client.
**Solution:** Ensure proper type exports:
```typescript
// app/api/upload/route.ts
export const { GET, POST } = uploadRouter.handlers;
export type AppRouter = typeof uploadRouter; // ✅ Export the type

// lib/upload-client.ts
import type { AppRouter } from "@/app/api/upload/route";

export const upload = createUploadClient<AppRouter>({ // ✅ Use the type
endpoint: "/api/upload",
});
```
## Performance Issues
### Slow Upload Speeds
**Problem:** Uploads are slower than expected.
**Solutions:**
1. **Choose the right provider region** close to your users
2. **Check your internet connection** and server resources
3. **Consider your provider's performance characteristics**
### Memory Issues with Large Files
**Problem:** Browser crashes or high memory usage with large files.
**Solution:** File streaming is handled automatically by pushduck:
```typescript
// File streaming is handled automatically
// No additional configuration needed
const { uploadFiles } = upload.fileUpload();
await uploadFiles(largeFiles); // ✅ Streams automatically
```
## Getting Help
If you're still experiencing issues:
1. **Check the documentation** for your specific provider
2. **For CORS/ACL issues** see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl)
3. **Enable debug logging** by setting `NODE_ENV=development`
4. **Check browser console** for detailed error messages
5. **Verify your provider configuration** is correct
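For example, to run a local dev server with debug logging enabled (assuming a standard `dev` script):
```bash
NODE_ENV=development npm run dev
```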
**Need more help?** Create an issue on [GitHub](https://github.com/abhay-ramesh/pushduck/issues) with detailed information about your setup and the error you're experiencing.
# Client-Side Approaches (/docs/guides/client-approaches)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
## Client-Side Approaches
Pushduck provides **two ways** to integrate file uploads in your React components. Both approaches now provide **identical functionality** including per-route callbacks, progress tracking, and error handling.
**Recommendation**: Use the **Enhanced Structured Client** approach for the best developer experience. It now provides the same flexibility as hooks while maintaining superior type safety and centralized configuration.
## Quick Comparison
```typescript
const upload = createUploadClient({
endpoint: '/api/upload'
})
// Simple usage
const { uploadFiles, files } = upload.imageUpload()
// With per-route callbacks (NEW!)
const { uploadFiles, files } = upload.imageUpload({
onStart: (files) => setUploadStarted(true),
onSuccess: (results) => handleSuccess(results),
onError: (error) => handleError(error),
onProgress: (progress) => setProgress(progress)
})
```
**Best for**: Most projects - provides superior DX, type safety, and full flexibility
```typescript
const { uploadFiles, files } = useUploadRoute('imageUpload', {
onStart: (files) => setUploadStarted(true),
onSuccess: (results) => handleSuccess(results),
onError: (error) => handleError(error),
onProgress: (progress) => setProgress(progress)
})
```
**Best for**: Teams that strongly prefer React hooks, legacy code migration
## Feature Parity
Both approaches now support **identical functionality**:
| Feature | Enhanced Structured Client | Hook-Based |
| --------------------- | -------------------------------- | ---------------------------- |
| ✅ Type Safety | **Superior** - Property-based | Good - Generic types |
| ✅ Per-route Callbacks | **✅ Full support** | ✅ Full support |
| ✅ Progress Tracking | **✅ Full support** | ✅ Full support |
| ✅ Error Handling | **✅ Full support** | ✅ Full support |
| ✅ Multiple Endpoints | **✅ Per-route endpoints** | ✅ Per-route endpoints |
| ✅ Upload Control | **✅ Enable/disable uploads** | ✅ Enable/disable uploads |
| ✅ Auto-upload | **✅ Per-route control** | ✅ Per-route control |
| ✅ Overall Progress | **✅ progress, uploadSpeed, eta** | ✅ progress, uploadSpeed, eta |
## API Comparison: Identical Capabilities
Both approaches now return **exactly the same** properties and accept **exactly the same** configuration options:
```typescript
// Hook-Based Approach
const {
uploadFiles, // (files: File[]) => Promise
files, // S3UploadedFile[]
isUploading, // boolean
errors, // string[]
reset, // () => void
progress, // number (0-100) - overall progress
uploadSpeed, // number (bytes/sec) - overall speed
eta // number (seconds) - overall ETA
} = useUploadRoute('imageUpload', {
onStart: (files) => setUploadStarted(true),
onSuccess: (results) => handleSuccess(results),
onError: (error) => handleError(error),
onProgress: (progress) => setProgress(progress),
endpoint: '/api/custom-upload',
});
// Enhanced Structured Client - IDENTICAL capabilities
const {
uploadFiles, // (files: File[]) => Promise
files, // S3UploadedFile[]
isUploading, // boolean
errors, // string[]
reset, // () => void
progress, // number (0-100) - overall progress
uploadSpeed, // number (bytes/sec) - overall speed
eta // number (seconds) - overall ETA
} = upload.imageUpload({
onStart: (files) => setUploadStarted(true),
onSuccess: (results) => handleSuccess(results),
onError: (error) => handleError(error),
onProgress: (progress) => setProgress(progress),
endpoint: '/api/custom-upload',
});
```
## Complete Options Parity
Both approaches support **identical configuration options**:
```typescript
interface CommonUploadOptions {
onStart?: (files: S3FileMetadata[]) => void;
onSuccess?: (results: UploadResult[]) => void;
onError?: (error: Error) => void;
onProgress?: (progress: number) => void;
endpoint?: string; // Custom endpoint per route
}
// Hook-based: useUploadRoute(routeName, options)
// Structured: upload.routeName(options)
// Both accept the same CommonUploadOptions interface
```
## Return Value Parity
Both approaches return **identical properties**:
```typescript
interface CommonUploadReturn {
uploadFiles: (files: File[]) => Promise;
files: S3UploadedFile[];
isUploading: boolean;
errors: string[];
reset: () => void;
// Overall progress tracking (NEW in both!)
progress?: number; // 0-100 percentage across all files
uploadSpeed?: number; // bytes per second across all files
eta?: number; // seconds remaining for all files
}
```
## Enhanced Structured Client Examples
### Basic Usage (Unchanged)
```typescript
import { createUploadClient } from 'pushduck/client'
import type { AppRouter } from '@/lib/upload'
const upload = createUploadClient<AppRouter>({ endpoint: '/api/upload' })
export function SimpleUpload() {
const { uploadFiles, files, isUploading } = upload.imageUpload()
  return (
    <input
      type="file"
      multiple
      onChange={(e) => uploadFiles(Array.from(e.target.files || []))}
      disabled={isUploading}
    />
  )
}
```
### With Per-Route Configuration (NEW!)
```typescript
export function AdvancedUpload() {
const [progress, setProgress] = useState(0)
const { uploadFiles, files, isUploading, errors, reset } =
upload.imageUpload({
onStart: (files) => {
console.log('🚀 Upload starting!', files)
setUploadStarted(true)
},
onSuccess: (results) => {
console.log('✅ Upload successful!', results)
results.forEach(file => {
console.log('Public URL:', file.url); // Permanent access
console.log('Download URL:', file.presignedUrl); // Temporary access (1 hour)
});
showNotification('Images uploaded successfully!')
setUploadStarted(false)
},
onError: (error) => {
console.error('❌ Upload failed:', error)
showErrorNotification(error.message)
setUploadStarted(false)
},
onProgress: (progress) => {
console.log(`📊 Progress: ${progress}%`)
setProgress(progress)
}
})
  return (
    <div>{/* upload UI wired to uploadFiles, files, errors, reset */}</div>
  )
}
```
### Multiple Routes with Different Configurations
```typescript
export function MultiUploadComponent() {
// Images with progress tracking
const images = upload.imageUpload({
onStart: (files) => setUploadingImages(true),
onProgress: (progress) => setImageProgress(progress)
})
// Documents with different endpoint and success handler
const documents = upload.documentUpload({
endpoint: '/api/secure-upload',
onStart: (files) => setUploadingDocuments(true),
onSuccess: (results) => {
// Use presigned URLs for private document downloads
updateDocumentLibrary(results.map(file => ({
id: file.id,
name: file.name,
downloadUrl: file.presignedUrl, // Secure, time-limited access
permanentUrl: file.url, // For internal operations
key: file.key
})));
}
})
// Videos with conditional logic in component
const videos = upload.videoUpload({
onStart: (files) => setUploadingVideos(true)
})
  return (
    <div>{/* render the images, documents, and videos upload sections */}</div>
  )
}
```
### Global Configuration with Per-Route Overrides
```typescript
const upload = createUploadClient({
endpoint: '/api/upload',
// Global defaults (optional)
defaultOptions: {
onStart: (files) => console.log(`Starting upload of ${files.length} files`),
onProgress: (progress) => console.log(`Global progress: ${progress}%`),
onError: (error) => logError(error)
}
})
// This route inherits global defaults
const basic = upload.imageUpload()
// This route overrides specific options
const custom = upload.documentUpload({
endpoint: '/api/secure-upload', // Override endpoint
onSuccess: (results) => handleSecureUpload(results) // Add success handler
// Still inherits global onProgress and onError
})
```
## Hook-Based Approach (Unchanged)
```typescript
import { useUploadRoute } from 'pushduck/client'
export function HookBasedUpload() {
const { uploadFiles, files, isUploading, error } = useUploadRoute('imageUpload', {
onStart: (files) => console.log('Starting upload:', files),
onSuccess: (results) => console.log('Success:', results),
onError: (error) => console.error('Error:', error),
onProgress: (progress) => console.log('Progress:', progress)
})
  return (
    <input
      type="file"
      multiple
      onChange={(e) => uploadFiles(Array.from(e.target.files || []))}
      disabled={isUploading}
    />
  )
}
```
## Migration Guide
### From Hook-Based to Enhanced Structured Client
```typescript
// Before: Hook-based
const { uploadFiles, files } = useUploadRoute('imageUpload', {
onStart: handleStart,
onSuccess: handleSuccess,
onError: handleError
})
// After: Enhanced structured client
const upload = createUploadClient({ endpoint: '/api/upload' })
const { uploadFiles, files } = upload.imageUpload({
onStart: handleStart,
onSuccess: handleSuccess,
onError: handleError
})
```
### Benefits of Migration
1. **Better Type Safety**: Route names are validated at compile time
2. **Enhanced IntelliSense**: Auto-completion for all available routes
3. **Centralized Configuration**: Single place to configure endpoints and defaults
4. **Refactoring Support**: Rename routes safely across your codebase
5. **No Performance Impact**: Same underlying implementation
## When to Use Each Approach
### Use Enhanced Structured Client When:
* ✅ **Starting a new project** - best overall developer experience
* ✅ **Want superior type safety** - compile-time route validation
* ✅ **Need centralized configuration** - single place for settings
* ✅ **Value refactoring support** - safe route renames
### Use Hook-Based When:
* ✅ **Migrating existing code** - minimal changes required
* ✅ **Dynamic route names** - routes determined at runtime
* ✅ **Team strongly prefers hooks** - familiar React patterns
* ✅ **Legacy compatibility** - maintaining older codebases
## Performance Considerations
Both approaches have **identical performance** characteristics:
* Same underlying `useUploadRoute` implementation
* Same network requests and upload logic
* Same React hooks rules and lifecycle
The enhanced structured client adds zero runtime overhead while providing compile-time benefits.
***
**Full Feature Parity**: Both approaches now support the same functionality. The choice comes down to developer experience preferences rather than feature limitations.
## Detailed Comparison
### Type Safety & Developer Experience
```typescript
// ✅ Complete type inference from server router
const upload = createUploadClient<AppRouter>({
  endpoint: '/api/upload'
})

// ✅ Property-based access - no string literals
const { uploadFiles, files } = upload.imageUpload()

// ✅ IntelliSense shows all available endpoints
upload. // <- Shows: imageUpload, documentUpload, videoUpload...

// ✅ Compile-time validation
upload.nonExistentRoute() // ❌ TypeScript error

// ✅ Refactoring safety
// Rename routes in router → TypeScript shows all usage locations
```
**Benefits:**
* 🎯 **Full type inference** from server to client
* 🔍 **IntelliSense support** - discover endpoints through IDE
* 🛡️ **Refactoring safety** - rename with confidence
* 🚫 **No string literals** - eliminates typos
* ⚡ **Better DX** - property-based access feels natural
```typescript
// ✅ With type parameter - recommended for better type safety
const { uploadFiles, files } = useUploadRoute<AppRouter>('imageUpload')

// ✅ Without type parameter - also works
const { uploadFiles, files } = useUploadRoute('imageUpload')

// Type parameter provides compile-time validation
const typed = useUploadRoute<AppRouter>('imageUpload') // Route validated
const untyped = useUploadRoute('imageUpload')          // Any string accepted
```
**Characteristics:**
* 🪝 **React hook pattern** - familiar to React developers
* 🤝 **Flexible usage** - works with or without type parameter
* 🧩 **Component-level state** - each hook manages its own state
* 🎯 **Type safety** - enhanced when using `<AppRouter>`
* 🔍 **IDE support** - best with type parameter
### Code Examples
**Structured Client:**
```typescript
import { upload } from '@/lib/upload-client'
export function ImageUploader() {
const { uploadFiles, files, isUploading, error } = upload.imageUpload()
  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => uploadFiles(Array.from(e.target.files || []))}
        disabled={isUploading}
      />
      {/* Upload UI */}
    </div>
  )
}
```
**Hook-Based:**
```typescript
import { useUploadRoute } from 'pushduck/client'
export function ImageUploader() {
const { uploadFiles, files, isUploading, error } = useUploadRoute('imageUpload')
  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => uploadFiles(Array.from(e.target.files || []))}
        disabled={isUploading}
      />
      {/* Same upload UI */}
    </div>
  )
}
```
**Structured Client:**
```typescript
export function FileManager() {
const images = upload.imageUpload()
const documents = upload.documentUpload()
const videos = upload.videoUpload()
  return (
    <div>{/* render images, documents, and videos upload UIs */}</div>
  )
}
```
**Hook-Based:**
```typescript
export function FileManager() {
const images = useUploadRoute('imageUpload')
const documents = useUploadRoute('documentUpload')
const videos = useUploadRoute('videoUpload')
  return (
    <div>{/* render images, documents, and videos upload UIs */}</div>
  )
}
```
**Structured Client:**
```typescript
// lib/upload-client.ts
export const upload = createUploadClient({
endpoint: '/api/upload',
headers: {
Authorization: `Bearer ${getAuthToken()}`
}
})
// components/secure-uploader.tsx
export function SecureUploader() {
const { uploadFiles } = upload.secureUpload()
// Authentication handled globally
}
```
**Hook-Based:**
```typescript
export function SecureUploader() {
const { uploadFiles } = useUploadRoute('secureUpload', {
headers: {
Authorization: `Bearer ${getAuthToken()}`
}
})
// Authentication per hook usage
}
```
## Conclusion
**Our Recommendation**: Use the **Enhanced Structured Client** approach (`createUploadClient`) for most projects. It provides superior developer experience, better refactoring safety, and enhanced type inference.
**Both approaches are supported**: The hook-based approach (`useUploadRoute`) is fully supported and valid for teams that prefer traditional React patterns.
**Quick Decision Guide:**
* **Most projects** → Use `createUploadClient` (recommended)
* **Strongly prefer React hooks** → Use `useUploadRoute`
* **Want best DX and type safety** → Use `createUploadClient`
* **Need component-level control** → Use `useUploadRoute`
### Next Steps
* **New Project**: Start with [createUploadClient](/docs/api/client/create-upload-client)
* **Existing Hook Code**: Consider [migrating gradually](/docs/guides/migrate-to-enhanced-client)
* **Need Help**: Join our [Discord community](https://pushduck.dev/discord) for guidance
# Client-Side Metadata (/docs/guides/client-metadata)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
## Client-Side Metadata
Client-side metadata allows you to pass contextual information from your UI directly to the server during file uploads. This enables dynamic file organization, categorization, and processing based on user selections and application state.
**New Feature:** As of v0.1.23, you can now pass metadata from the client when calling `uploadFiles()`. This metadata flows through to your middleware, lifecycle hooks, and path generation functions.
## Why Use Client Metadata?
Client metadata bridges the gap between UI context and server-side processing:
* **UI State** - Pass album selections, categories, or form data
* **Multi-tenant Context** - Send workspace, project, or organization IDs
* **User Preferences** - Include tags, visibility settings, or custom fields
* **Dynamic Organization** - Organize files based on client context
* **Flexible Workflows** - Adapt to different use cases without API changes
## Basic Usage
**Client: Pass metadata with uploadFiles**
```typescript
import { upload } from '@/lib/upload-client';
export function ImageUploader() {
const { uploadFiles } = upload.imageUpload();
const handleUpload = (files: File[]) => {
// Pass metadata as second parameter
uploadFiles(files, {
albumId: 'vacation-2025',
tags: ['beach', 'sunset'],
visibility: 'private'
});
};
  return <input type="file" multiple onChange={(e) => handleUpload(Array.from(e.target.files || []))} />;
}
```
**Server: Receive in middleware**
```typescript
// app/api/upload/route.ts
const s3Router = s3.createRouter({
imageUpload: s3.image()
.middleware(async ({ req, metadata }) => {
const user = await authenticateUser(req);
return {
...metadata, // Client data: { albumId, tags, visibility }
userId: user.id, // Server data from auth
};
})
});
```
**Use in hooks and path generation**
```typescript
.paths({
generateKey: (ctx) => {
// Metadata available in path generation
return `users/${ctx.metadata.userId}/albums/${ctx.metadata.albumId}/${ctx.file.name}`;
}
})
.onUploadComplete(async ({ metadata, url }) => {
// Metadata available in lifecycle hooks
await db.images.create({
url,
albumId: metadata.albumId,
tags: metadata.tags,
userId: metadata.userId
});
})
```
## Real-World Examples
### Multi-Tenant SaaS Application
```typescript
// Client component
export function WorkspaceFileUpload({ workspace, project }: Props) {
const { uploadFiles } = upload.documentUpload();
const handleUpload = (files: File[]) => {
uploadFiles(files, {
workspaceId: workspace.id,
projectId: project.id,
teamId: workspace.team.id,
folder: selectedFolder.path,
permissions: {
canEdit: currentUser.role === 'admin',
canDelete: currentUser.role === 'admin',
canShare: true
}
});
};
  return <input type="file" multiple onChange={(e) => handleUpload(Array.from(e.target.files || []))} />;
}
```
```typescript
// Server middleware - validates tenant isolation
.middleware(async ({ req, metadata }) => {
const user = await authenticateUser(req);
// Verify user belongs to workspace
const membership = await db.workspaceMemberships.findFirst({
where: {
workspaceId: metadata?.workspaceId,
userId: user.id
}
});
if (!membership) {
throw new Error('Access denied to workspace');
}
// Verify user can access project
const project = await db.projects.findFirst({
where: {
id: metadata?.projectId,
workspaceId: metadata?.workspaceId
}
});
if (!project) {
throw new Error('Project not found');
}
return {
// Validated client metadata
workspaceId: metadata.workspaceId,
projectId: metadata.projectId,
folder: metadata.folder || '/',
// Server metadata
userId: user.id,
userRole: membership.role,
uploadedAt: new Date().toISOString()
};
})
.paths({
generateKey: (ctx) => {
const { metadata, file } = ctx;
return `workspaces/${metadata.workspaceId}/projects/${metadata.projectId}${metadata.folder}/${file.name}`;
}
})
```
### E-Commerce Product Images
```typescript
// Client: Product image upload with variants
export function ProductImageManager({ product }: { product: Product }) {
const [imageType, setImageType] = useState<'main' | 'gallery' | 'thumbnail'>('gallery');
const [selectedVariant, setSelectedVariant] = useState<string | null>(null);
const { uploadFiles, files } = upload.productImages();
const handleUpload = (files: File[]) => {
uploadFiles(files, {
productId: product.id,
variantId: selectedVariant,
imageType: imageType,
sortOrder: product.images.length + 1,
altText: `${product.name} - ${imageType} image`
});
};
  return (
    <div>
      <select value={imageType} onChange={(e) => setImageType(e.target.value as any)}>
        <option value="main">Main Product Image</option>
        <option value="gallery">Gallery Image</option>
        <option value="thumbnail">Thumbnail</option>
      </select>
      {product.variants.length > 0 && (
        <select onChange={(e) => setSelectedVariant(e.target.value || null)}>
          <option value="">All Variants</option>
          {product.variants.map(v => (
            <option key={v.id} value={v.id}>{v.name}</option>
          ))}
        </select>
      )}
      <input type="file" multiple onChange={(e) => e.target.files && handleUpload(Array.from(e.target.files))} />
    </div>
  );
}
```
```typescript
// Server: Organize by product and variant
.middleware(async ({ req, metadata }) => {
const user = await authenticateUser(req);
// Verify user owns product
const product = await db.products.findFirst({
where: {
id: metadata?.productId,
ownerId: user.id
}
});
if (!product) {
throw new Error('Product not found or access denied');
}
return {
productId: metadata.productId,
variantId: metadata.variantId,
imageType: metadata.imageType,
sortOrder: metadata.sortOrder,
userId: user.id,
merchantId: product.merchantId
};
})
.paths({
generateKey: (ctx) => {
const { metadata, file } = ctx;
const variantPath = metadata.variantId ? `/variants/${metadata.variantId}` : '';
return `products/${metadata.productId}${variantPath}/${metadata.imageType}/${metadata.sortOrder}-${file.name}`;
}
})
.onUploadComplete(async ({ metadata, url }) => {
await db.productImages.create({
productId: metadata.productId,
variantId: metadata.variantId,
type: metadata.imageType,
url: url,
sortOrder: metadata.sortOrder,
altText: metadata.altText
});
})
```
### Content Management System
```typescript
// Client: Content upload with categorization
export function CMSMediaUpload() {
const [contentType, setContentType] = useState('article');
const [category, setCategory] = useState('technology');
const [tags, setTags] = useState<string[]>([]);
const [publishDate, setPublishDate] = useState('');
const { uploadFiles } = upload.mediaUpload();
const handleUpload = (files: File[]) => {
uploadFiles(files, {
contentType: contentType,
category: category,
tags: tags,
publishDate: publishDate || new Date().toISOString(),
featured: false,
author: currentUser.username
});
};
  return (
    <div>
      {/* Content type, category, tag, and date inputs plus a file input that calls handleUpload */}
    </div>
  );
}
```
```typescript
// Server: CMS organization
.middleware(async ({ req, metadata }) => {
const user = await authenticateUser(req);
// Verify user has content creation permissions
if (!user.permissions.includes('create:content')) {
throw new Error('Insufficient permissions');
}
return {
contentType: metadata?.contentType || 'article',
category: metadata?.category,
tags: metadata?.tags || [],
publishDate: metadata?.publishDate,
authorId: user.id,
authorName: user.name,
status: 'draft'
};
})
.paths({
generateKey: (ctx) => {
const { metadata, file } = ctx;
const date = new Date(metadata.publishDate);
const year = date.getFullYear();
const month = String(date.getMonth() + 1).padStart(2, '0');
return `content/${metadata.contentType}/${year}/${month}/${metadata.category}/${file.name}`;
}
})
```
## Security Best Practices
**⚠️ CRITICAL: Client metadata is UNTRUSTED user input.**
Never trust identity claims, permissions, or security-related data from the client. Always validate and extract identity from authenticated sessions on the server.
### ❌ DON'T: Trust Client Identity
```typescript
// ❌ BAD: Trusting client-provided userId
uploadFiles(files, {
userId: currentUser.id, // Client can fake this
isAdmin: true, // Client can lie about this
role: 'admin' // Never trust this from client
});
.middleware(async ({ metadata }) => {
return {
userId: metadata.userId, // ❌ DANGEROUS!
role: metadata.role // ❌ SECURITY RISK!
};
})
```
### ✅ DO: Validate and Override
```typescript
// ✅ GOOD: Server determines identity
uploadFiles(files, {
albumId: selectedAlbum.id, // ✅ OK - contextual data
tags: selectedTags, // ✅ OK - user input (validate)
visibility: 'private' // ✅ OK - user preference
});
.middleware(async ({ req, metadata }) => {
const user = await authenticateUser(req); // Server auth
// Validate client data
if (metadata?.albumId) {
const album = await db.albums.findFirst({
where: {
id: metadata.albumId,
userId: user.id // Verify ownership
}
});
if (!album) throw new Error('Invalid album');
}
return {
// Client metadata (validated)
albumId: metadata?.albumId,
tags: sanitizeTags(metadata?.tags || []),
// Server metadata (trusted)
userId: user.id, // ✅ From auth
role: user.role, // ✅ From auth
uploadedAt: new Date().toISOString()
};
})
```
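The `sanitizeTags` helper used above is not part of pushduck; a minimal sketch of what it might look like:

```typescript
// Hypothetical helper: trims, lowercases, strips unsafe characters,
// drops empty values, and caps how many tags the client may send.
function sanitizeTags(tags: unknown): string[] {
  if (!Array.isArray(tags)) return [];
  return tags
    .filter((tag): tag is string => typeof tag === 'string')
    .map((tag) => tag.trim().toLowerCase().replace(/[^a-z0-9-]/g, ''))
    .filter((tag) => tag.length > 0)
    .slice(0, 20); // arbitrary cap
}
```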
## Validation Strategies
### Type Validation
```typescript
.middleware(async ({ metadata }) => {
// Validate data types
if (metadata?.albumId && typeof metadata.albumId !== 'string') {
throw new Error('Invalid albumId type');
}
if (metadata?.tags && !Array.isArray(metadata.tags)) {
throw new Error('Tags must be an array');
}
if (metadata?.sortOrder && typeof metadata.sortOrder !== 'number') {
throw new Error('Invalid sortOrder type');
}
return { ...metadata };
})
```
### Value Validation
```typescript
.middleware(async ({ metadata }) => {
// Validate UUIDs
const uuidRegex = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
if (metadata?.albumId && !uuidRegex.test(metadata.albumId)) {
throw new Error('Invalid album ID format');
}
// Validate enums
const validVisibilities = ['public', 'private', 'unlisted'];
if (metadata?.visibility && !validVisibilities.includes(metadata.visibility)) {
throw new Error('Invalid visibility setting');
}
// Sanitize strings
const sanitizedTags = (metadata?.tags || []).map((tag: string) =>
tag.trim().toLowerCase().replace(/[^a-z0-9-]/g, '')
);
return {
...metadata,
tags: sanitizedTags
};
})
```
### Database Validation
```typescript
.middleware(async ({ req, metadata }) => {
const user = await authenticateUser(req);
// Verify referenced entities exist and user has access
if (metadata?.projectId) {
const project = await db.projects.findFirst({
where: {
id: metadata.projectId,
members: {
some: { userId: user.id }
}
}
});
if (!project) {
throw new Error('Project not found or access denied');
}
}
if (metadata?.folderId) {
const folder = await db.folders.findFirst({
where: {
id: metadata.folderId,
projectId: metadata.projectId,
deleted: false
}
});
if (!folder) {
throw new Error('Folder not found');
}
}
return {
...metadata,
userId: user.id
};
})
```
## TypeScript Support
Define metadata interfaces for better type safety:
```typescript
// Define your metadata interface
interface UploadMetadata {
albumId: string;
tags: string[];
visibility: 'public' | 'private' | 'unlisted';
featured?: boolean;
}
// Client usage with type safety
const handleUpload = (files: File[]) => {
const metadata: UploadMetadata = {
albumId: selectedAlbum.id,
tags: selectedTags,
visibility: visibilityOption,
featured: isFeatured
};
uploadFiles(files, metadata);
};
// Server middleware with typed metadata
.middleware(async ({ req, metadata }: { req: NextRequest; metadata?: UploadMetadata }) => {
const user = await authenticateUser(req);
return {
...metadata,
userId: user.id
};
})
```
## Common Use Cases
### Album/Gallery Organization
```typescript
// Pass album selection from UI
uploadFiles(files, {
albumId: selectedAlbum.id,
albumName: selectedAlbum.name,
tags: selectedTags,
visibility: albumSettings.defaultVisibility
});
```
### Document Management
```typescript
// Pass folder structure and metadata
uploadFiles(files, {
folderId: currentFolder.id,
folderPath: currentFolder.fullPath,
category: documentCategory,
confidential: isConfidential,
expiresAt: expirationDate
});
```
### User Profile Assets
```typescript
// Pass asset type and purpose
uploadFiles(files, {
assetType: 'profile-picture',
purpose: 'avatar',
aspectRatio: '1:1',
previousAssetId: currentAvatar?.id // For cleanup
});
```
### Form Submissions
```typescript
// Pass form context with uploads
uploadFiles(files, {
formId: formSubmission.id,
formType: 'contact',
attachmentType: 'supporting-document',
relatedTo: formData.ticketId
});
```
## Advanced Patterns
### Conditional Metadata
```typescript
const handleUpload = (files: File[]) => {
const metadata: any = {
uploadSource: 'web-app',
timestamp: Date.now()
};
// Conditionally add metadata
if (selectedAlbum) {
metadata.albumId = selectedAlbum.id;
}
if (tags.length > 0) {
metadata.tags = tags;
}
if (isAdminUser) {
metadata.priority = 'high';
metadata.skipModeration = true;
}
uploadFiles(files, metadata);
};
```
### Metadata from Form State
```typescript
import { useForm } from 'react-hook-form';
export function FormWithUploads() {
const { register, handleSubmit } = useForm();
const { uploadFiles } = upload.attachments();
const onSubmit = async (formData: any) => {
// Upload files with form context
await uploadFiles(selectedFiles, {
formId: formData.id,
category: formData.category,
priority: formData.priority,
department: formData.department,
requestedBy: formData.requesterEmail
});
// Then submit form
await submitForm(formData);
};
return ;
}
```
### Dynamic Path Generation
```typescript
// Client
uploadFiles(files, {
organizationId: org.id,
departmentId: dept.id,
projectCode: project.code,
fileClass: 'confidential'
});
// Server - organize by metadata
.paths({
generateKey: (ctx) => {
const { metadata, file } = ctx;
const date = new Date();
const year = date.getFullYear();
const quarter = `Q${Math.ceil((date.getMonth() + 1) / 3)}`;
return [
'organizations',
metadata.organizationId,
'departments',
metadata.departmentId,
year.toString(),
quarter,
metadata.fileClass,
metadata.projectCode,
file.name
].join('/');
}
})
```
## Metadata Size Considerations
**Size Limits:** While there's no hard limit on metadata size, keep it reasonable (< 10KB). Large metadata objects increase request size and processing time.
### ✅ Good Metadata
```typescript
// Compact and purposeful
{
albumId: 'abc123',
tags: ['vacation', 'beach'],
visibility: 'private',
featured: false
}
```
### ⚠️ Avoid Large Metadata
```typescript
// Too large - send via separate API call
{
albumId: 'abc123',
fullImageData: base64EncodedImage, // ❌ Don't embed files
entireUserProfile: { ... }, // ❌ Too much data
allPreviousUploads: [ ... ], // ❌ Unnecessary
complexNestedStructure: { ... } // ⚠️ Keep it simple
}
```
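To enforce the soft limit client-side, you can measure the serialized payload before calling `uploadFiles`. A small sketch (the 10KB threshold mirrors the guideline above; pushduck itself does not enforce it):

```typescript
// Throws if serialized metadata exceeds the soft limit.
function assertMetadataSize(metadata: unknown, maxBytes = 10 * 1024): void {
  const size = new Blob([JSON.stringify(metadata)]).size;
  if (size > maxBytes) {
    throw new Error(`Metadata too large: ${size} bytes (max ${maxBytes})`);
  }
}

// Usage before upload:
// assertMetadataSize(metadata);
// uploadFiles(files, metadata);
```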
## Error Handling
Handle metadata-related errors gracefully:
```typescript
const { uploadFiles } = upload.imageUpload({
onError: (error) => {
if (error.message.includes('album')) {
toast.error('Selected album is invalid or you don\'t have access');
resetAlbumSelection();
} else if (error.message.includes('metadata')) {
toast.error('Invalid upload settings. Please try again.');
} else {
toast.error('Upload failed: ' + error.message);
}
}
});
```
## Testing with Metadata
```typescript
import { render, fireEvent } from '@testing-library/react';
import { vi } from 'vitest';
test('uploads files with correct metadata', async () => {
const mockUploadFiles = vi.fn();
const { getByLabelText, getByTestId } = render(
  <ImageUploaderWithMetadata uploadFiles={mockUploadFiles} /> // hypothetical component under test
);
// Select album
const albumSelect = getByLabelText('Album');
fireEvent.change(albumSelect, { target: { value: 'vacation-2025' } });
// Add tags
const tagsInput = getByLabelText('Tags');
fireEvent.change(tagsInput, { target: { value: 'beach,sunset' } });
// Upload files
const fileInput = getByTestId('file-input');
const files = [new File(['content'], 'photo.jpg', { type: 'image/jpeg' })];
fireEvent.change(fileInput, { target: { files } });
// Verify metadata was passed correctly
expect(mockUploadFiles).toHaveBeenCalledWith(
expect.arrayContaining(files),
expect.objectContaining({
albumId: 'vacation-2025',
tags: ['beach', 'sunset']
})
);
});
```
## Best Practices
### ✅ DO
* Pass UI state and user selections
* Validate metadata in middleware
* Use metadata for dynamic path generation
* Keep metadata size reasonable (< 10KB)
* Define TypeScript interfaces for metadata
* Sanitize user input (tags, descriptions)
* Verify access to referenced entities (albums, projects)
### ❌ DON'T
* Trust client-provided identity (userId, role, permissions)
* Send sensitive data (passwords, tokens, secrets)
* Embed large objects (base64 files, entire datasets)
* Skip validation in middleware
* Use metadata for authentication
* Trust metadata without verification
## Migration Guide
If you're upgrading from a version without metadata support:
```typescript
// Before: No metadata support
const { uploadFiles } = upload.imageUpload();
uploadFiles(files);
// After: Add metadata (backward compatible)
const { uploadFiles } = upload.imageUpload();
// Still works without metadata
uploadFiles(files);
// Or add metadata
uploadFiles(files, { albumId: album.id });
```
The feature is **100% backward compatible** - existing code continues to work without changes.
***
# Image Uploads (/docs/guides/image-uploads)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { TypeTable } from "fumadocs-ui/components/type-table";
import { Files, Folder, File } from "fumadocs-ui/components/files";
## Image Upload Guide
Handle image uploads with built-in optimization, validation, and processing features for the best user experience.
Images are the most common upload type. This guide covers everything from
basic setup to advanced optimization techniques for production apps.
## Basic Image Upload Setup
### Server Configuration
```typescript
// app/api/upload/route.ts
import { s3 } from "@/lib/upload";
const s3Router = s3.createRouter({
// Basic image upload
profilePicture: s3.image()
.maxFileSize('5MB')
.maxFiles(1)
.formats(['jpeg', 'png', 'webp']),
// Multiple images with optimization
galleryImages: s3.image()
.maxFileSize('10MB')
.maxFiles(10)
.formats(['jpeg', 'png', 'webp', 'gif']),
});
export type AppS3Router = typeof s3Router;
export const { GET, POST } = s3Router.handlers;
```
### Client Implementation
```typescript
// components/image-uploader.tsx
import { upload } from "@/lib/upload-client";
export function ImageUploader() {
const { uploadFiles, files, isUploading } = upload.galleryImages;
const handleImageSelect = (e: React.ChangeEvent<HTMLInputElement>) => {
const selectedFiles = Array.from(e.target.files || []);
uploadFiles(selectedFiles);
};
  return (
    <div>
      <input type="file" multiple accept="image/*" onChange={handleImageSelect} />
      {files.map((file) => (
        <div key={file.id}>
          {file.status === "success" && (
            <img src={file.url} alt={file.name} />
          )}
          {file.status === "uploading" && (
            <progress value={file.progress} max={100} />
          )}
        </div>
      ))}
    </div>
  );
}
```
## Client-Side Metadata for Images
Pass contextual information from your UI to organize and categorize images:
**New Feature:** You can now pass metadata directly from the client when uploading images. This allows you to send album IDs, tags, categories, or any contextual data from your UI to the server.
```typescript
// Client component
import { upload } from '@/lib/upload-client';
import { useState } from 'react';
export function GalleryUpload() {
const [selectedAlbum, setSelectedAlbum] = useState('vacation-2025');
const [tags, setTags] = useState<string[]>([]);
const [isFeatured, setIsFeatured] = useState(false);
const { uploadFiles, files } = upload.galleryImages({
onSuccess: (results) => {
console.log(`Uploaded to album: ${selectedAlbum}`);
}
});
const handleUpload = (selectedFiles: File[]) => {
// Pass UI state as metadata
uploadFiles(selectedFiles, {
albumId: selectedAlbum,
tags: tags,
featured: isFeatured,
uploadSource: 'gallery-manager',
category: 'user-content'
});
};
  return (
    <div>
      <select value={selectedAlbum} onChange={(e) => setSelectedAlbum(e.target.value)}>
        <option value="vacation-2025">Vacation 2025</option>
        <option value="family-photos">Family Photos</option>
      </select>
      <label>
        <input type="checkbox" checked={isFeatured} onChange={(e) => setIsFeatured(e.target.checked)} />
        Mark as Featured
      </label>
      <input type="file" multiple onChange={(e) => e.target.files && handleUpload(Array.from(e.target.files))} />
    </div>
  );
}
```
**Server-side validation:**
```typescript
// Server router
.middleware(async ({ req, metadata }) => {
const user = await authenticateUser(req);
// Validate client-provided album access
if (metadata?.albumId) {
const hasAccess = await db.albums.canUserAccess(metadata.albumId, user.id);
if (!hasAccess) throw new Error('Access denied to album');
}
return {
// Client metadata (validated)
albumId: metadata?.albumId,
tags: metadata?.tags || [],
featured: metadata?.featured || false,
// Server metadata (trusted)
userId: user.id,
uploadedAt: new Date().toISOString()
};
})
.paths({
generateKey: (ctx) => {
const { metadata, file } = ctx;
// Use metadata in path generation
return `albums/${metadata.albumId}/${metadata.featured ? 'featured/' : ''}${file.name}`;
}
})
```
**Security:** Always validate client metadata in middleware. Never trust client-provided user IDs or permissions.
## Image Validation & Processing
### Format Validation
```typescript
const s3Router = s3.createRouter({
productImages: s3.image()
.maxFileSize('8MB')
.maxFiles(5)
.formats(['jpeg', 'png', 'webp'])
.dimensions({
minWidth: 800,
maxWidth: 4000,
minHeight: 600,
maxHeight: 3000,
})
.aspectRatio(16 / 9, { tolerance: 0.1 })
.middleware(async ({ req, file, metadata }) => {
// Custom validation
const imageMetadata = await getImageMetadata(file);
if (
imageMetadata.hasTransparency &&
!["png", "webp"].includes(imageMetadata.format)
) {
throw new Error("Transparent images must be PNG or WebP format");
}
if (imageMetadata.colorProfile !== "sRGB") {
console.warn(
`Image ${file.name} uses ${imageMetadata.colorProfile} color profile`
);
}
return {
...metadata,
userId: await getUserId(req),
...imageMetadata
};
}),
});
```
## Image Processing Integration
**⚠️ Important:** Pushduck handles **file uploads only**. Image processing (resizing, optimization, format conversion) requires external tools. The examples below show **integration patterns** using `.onUploadComplete()` hooks.
**Popular image processing tools:**
* **Sharp** - Fast Node.js processing (recommended)
* **Cloudinary** - Full-service image CDN with transformations
* **Imgix** - Real-time URL-based transformations
See [Philosophy](/docs/philosophy#what-pushduck-doesnt-do) for why we don't include processing.
### Integration Example: Sharp for Resizing
**⚠️ Bandwidth Tradeoff:** Server-side processing requires **downloading** the file from S3 to your server (inbound bandwidth), processing it, then **uploading** variants back (outbound bandwidth).
**This negates Pushduck's "server never touches files" benefit!**
**Better Alternatives:**
* **Client-side preprocessing** - Resize before upload (see below)
* **URL-based processing** - Cloudinary/Imgix transform via URL (no download)
* **Async queue** - Process in background worker, not blocking upload
```typescript
// First: npm install sharp
import sharp from 'sharp';
const s3Router = s3.createRouter({
optimizedImages: s3.image()
.maxFileSize('15MB')
.maxFiles(10)
.formats(['jpeg', 'png', 'webp'])
.dimensions({ maxWidth: 1920, maxHeight: 1080 })
.onUploadComplete(async ({ file, url, metadata }) => {
// ⚠️ This downloads the file from S3 (inbound bandwidth)
const imageBuffer = await fetch(url).then(r => r.arrayBuffer());
// Process with Sharp (external tool)
const variants = await Promise.all([
sharp(imageBuffer).resize(150, 150, { fit: 'cover' }).toBuffer(),
sharp(imageBuffer).resize(800, 600, { fit: 'inside' }).toBuffer(),
sharp(imageBuffer).resize(1920, 1080, { fit: 'inside' }).toBuffer(),
]);
// ⚠️ Upload variants back to S3 (outbound bandwidth)
// await uploadVariantsToS3(variants);
}),
});
```
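The commented-out `uploadVariantsToS3` call above is left as an exercise; a minimal sketch using `@aws-sdk/client-s3` (the bucket variable and key naming scheme are assumptions):

```typescript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3Client = new S3Client({ region: process.env.AWS_REGION });

// Hypothetical helper: writes processed variants back to S3 under
// an illustrative "<baseKey>-<variant>.jpg" naming scheme.
async function uploadVariantsToS3(baseKey: string, variants: Buffer[]) {
  const names = ['thumbnail', 'medium', 'large'];
  await Promise.all(
    variants.map((body, i) =>
      s3Client.send(
        new PutObjectCommand({
          Bucket: process.env.S3_BUCKET_NAME,
          Key: `${baseKey}-${names[i]}.jpg`,
          Body: body,
          ContentType: 'image/jpeg',
        })
      )
    )
  );
}
```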
***
## Better Alternative: Client-Side Preprocessing
**✅ Recommended Approach:** Process images **before** upload on the client side. This maintains Pushduck's "server never touches files" architecture and saves bandwidth.
### Client-Side Resize Before Upload
```typescript
// Use browser-image-compression for client-side resizing
// npm install browser-image-compression
import imageCompression from 'browser-image-compression';
import { upload } from '@/lib/upload-client';
export function ClientSideImageUpload() {
const { uploadFiles, isUploading } = upload.profilePicture();
const handleImageSelect = async (e: React.ChangeEvent<HTMLInputElement>) => {
const file = e.target.files?.[0];
if (!file) return;
// ✅ Resize on client BEFORE upload (no server bandwidth)
const options = {
maxSizeMB: 1,
maxWidthOrHeight: 1920,
useWebWorker: true,
};
try {
const compressedFile = await imageCompression(file, options);
// Upload the already-processed file
await uploadFiles([compressedFile]);
console.log('Original:', file.size / 1024, 'KB');
console.log('Compressed:', compressedFile.size / 1024, 'KB');
} catch (error) {
console.error('Compression error:', error);
}
};
  return <input type="file" accept="image/*" onChange={handleImageSelect} disabled={isUploading} />;
}
```
**Benefits:**
* ✅ **No server bandwidth** - file never touches your server
* ✅ **Faster uploads** - smaller files upload quicker
* ✅ **Lower S3 costs** - store smaller files
* ✅ **Edge compatible** - no Node.js processing
* ✅ **Better UX** - instant preview of processed image
***
## Alternative: URL-Based Processing (Cloudinary/Imgix)
**✅ Best of Both Worlds:** Upload original to S3, transform via URL without downloading. Zero server bandwidth!
```typescript
const s3Router = s3.createRouter({
images: s3.image()
.maxFileSize('10MB')
.onUploadComplete(async ({ file, url, metadata }) => {
// Save original URL - NO download needed
await db.images.create({
userId: metadata.userId,
originalUrl: url,
s3Key: file.key,
});
// ✅ Cloudinary can fetch from your S3 URL and transform
// No bandwidth on your server!
}),
});
// Client: Use Cloudinary URLs for transformations
function ImageDisplay({ s3Url }: { s3Url: string }) {
// Cloudinary fetches from S3 and transforms (their bandwidth, not yours)
const cloudinaryUrl = `https://res.cloudinary.com/your-cloud/image/fetch/w_400,h_400,c_fill/${encodeURIComponent(s3Url)}`;
  return <img src={cloudinaryUrl} alt="" />;
}
```
***
## Advanced Patterns (Optional)
### Responsive Image Generation
```typescript
interface ImageVariant {
name: string;
width: number;
height?: number;
quality?: number;
format?: "jpeg" | "png" | "webp";
}
const imageVariants: ImageVariant[] = [
{ name: "thumbnail", width: 150, height: 150, quality: 80 },
{ name: "small", width: 400, quality: 85 },
{ name: "medium", width: 800, quality: 85 },
{ name: "large", width: 1200, quality: 85 },
{ name: "xlarge", width: 1920, quality: 90 },
];
const s3Router = s3.createRouter({
responsiveImages: s3.image()
.maxFileSize('20MB')
.maxFiles(5)
.formats(['jpeg', 'png', 'webp'])
.onUploadComplete(async ({ file, url, metadata }) => {
// Generate responsive variants
const variants = await Promise.all(
imageVariants.map((variant) => generateImageVariant(file, variant))
);
// Save variant information to database
await saveImageVariants(file.key, variants, metadata.userId);
}),
});
// Client-side responsive image component
export function ResponsiveImage({
src,
alt,
sizes = "(max-width: 768px) 100vw, (max-width: 1200px) 50vw, 33vw",
}: {
src: string;
alt: string;
sizes?: string;
}) {
const variants = useImageVariants(src);
  if (!variants) return <img src={src} alt={alt} />;
const srcSet = [
`${variants.small} 400w`,
`${variants.medium} 800w`,
`${variants.large} 1200w`,
`${variants.xlarge} 1920w`,
].join(", ");
  return (
    <img src={variants.medium} srcSet={srcSet} sizes={sizes} alt={alt} />
  );
}
```
### Image Upload with Crop & Preview
```typescript
import { useState } from 'react'
import { ImageCropper } from './image-cropper'
import { upload } from '@/lib/upload-client'
export function ImageUploadWithCrop() {
const [selectedFile, setSelectedFile] = useState<File | null>(null)
const [croppedImage, setCroppedImage] = useState<Blob | null>(null)
const { uploadFiles, isUploading } = upload.profilePicture()
const handleFileSelect = (e: React.ChangeEvent<HTMLInputElement>) => {
const file = e.target.files?.[0]
if (file) setSelectedFile(file)
}
const handleCropComplete = (croppedBlob: Blob) => {
setCroppedImage(croppedBlob)
}
const handleUpload = async () => {
if (!croppedImage) return
const file = new File([croppedImage], 'cropped-image.jpg', {
type: 'image/jpeg'
})
await uploadFiles([file])
// Reset state
setSelectedFile(null)
setCroppedImage(null)
}
  return (
    <div>
      {!selectedFile && (
        <input type="file" accept="image/*" onChange={handleFileSelect} />
      )}
      {selectedFile && !croppedImage && (
        <ImageCropper image={selectedFile} onCropComplete={handleCropComplete} />
      )}
      {croppedImage && (
        <div>
          <button onClick={() => setCroppedImage(null)}>Recrop</button>
          <button onClick={handleUpload} disabled={isUploading}>
            {isUploading ? 'Uploading...' : 'Upload'}
          </button>
        </div>
      )}
    </div>
  )
}
```
```typescript
import { useRef, useState, useCallback } from 'react'
import ReactCrop, { Crop, PixelCrop } from 'react-image-crop'
import 'react-image-crop/dist/ReactCrop.css'
interface ImageCropperProps {
image: File
aspectRatio?: number
onCropComplete: (croppedBlob: Blob) => void
}
export function ImageCropper({
image,
aspectRatio = 1,
onCropComplete
}: ImageCropperProps) {
const imgRef = useRef<HTMLImageElement>(null)
const [crop, setCrop] = useState<Crop>({
unit: '%',
x: 25,
y: 25,
width: 50,
height: 50
})
const imageUrl = URL.createObjectURL(image)
const getCroppedImage = useCallback(async (
image: HTMLImageElement,
crop: PixelCrop
): Promise<Blob> => {
const canvas = document.createElement('canvas')
const ctx = canvas.getContext('2d')!
const scaleX = image.naturalWidth / image.width
const scaleY = image.naturalHeight / image.height
canvas.width = crop.width * scaleX
canvas.height = crop.height * scaleY
ctx.imageSmoothingQuality = 'high'
ctx.drawImage(
image,
crop.x * scaleX,
crop.y * scaleY,
crop.width * scaleX,
crop.height * scaleY,
0,
0,
canvas.width,
canvas.height
)
return new Promise<Blob>(resolve => {
canvas.toBlob(blob => resolve(blob!), 'image/jpeg', 0.9)
})
}, [])
const handleCropComplete = useCallback(async (crop: PixelCrop) => {
if (imgRef.current && crop.width && crop.height) {
const croppedBlob = await getCroppedImage(imgRef.current, crop)
onCropComplete(croppedBlob)
}
}, [getCroppedImage, onCropComplete])
  return (
    <ReactCrop
      crop={crop}
      aspect={aspectRatio}
      onChange={(c) => setCrop(c)}
      onComplete={handleCropComplete}
    >
      <img ref={imgRef} src={imageUrl} alt="Crop source" />
    </ReactCrop>
  )
}
```
```typescript
// Server-side image processing after upload
const s3Router = s3.createRouter({
profilePicture: s3.image()
.maxFileSize('10MB')
.maxFiles(1)
.formats(['jpeg', 'png', 'webp'])
.onUploadComplete(async ({ file, url, metadata }) => {
// Generate avatar sizes
await Promise.all([
generateImageVariant(file, {
name: 'avatar-small',
width: 32,
height: 32,
fit: 'cover',
quality: 90
}),
generateImageVariant(file, {
name: 'avatar-medium',
width: 64,
height: 64,
fit: 'cover',
quality: 90
}),
generateImageVariant(file, {
name: 'avatar-large',
width: 128,
height: 128,
fit: 'cover',
quality: 95
})
])
// Update user profile with new avatar
await updateUserAvatar(metadata.userId, {
original: url,
small: getVariantUrl(file.key, 'avatar-small'),
medium: getVariantUrl(file.key, 'avatar-medium'),
large: getVariantUrl(file.key, 'avatar-large')
})
})
})
```
## Image Upload Patterns
### Drag & Drop Image Gallery
```typescript
import { useDropzone } from "react-dropzone";
import { upload } from "@/lib/upload-client";
export function ImageGalleryUploader() {
const { uploadFiles, files, isUploading } = upload.galleryImages;
const { getRootProps, getInputProps, isDragActive } = useDropzone({
accept: {
"image/*": [".jpeg", ".jpg", ".png", ".webp", ".gif"],
},
maxFiles: 10,
onDrop: (acceptedFiles) => {
uploadFiles(acceptedFiles);
},
});
const removeFile = (fileId: string) => {
// Implementation to remove file from gallery
};
  return (
    <div>
      <div {...getRootProps()}>
        <input {...getInputProps()} />
        {isDragActive ? (
          <p>Drop the images here...</p>
        ) : (
          <p>
            Drag & drop images here, or click to select
            <br />
            Up to 10 images, max 10MB each
          </p>
        )}
      </div>
      {files.length > 0 && (
        <div>
          {files.map((file) => (
            <div key={file.id}>
              {file.status === "success" && (
                <div>
                  <img src={file.url} alt={file.name} />
                  <button onClick={() => removeFile(file.id)}>×</button>
                </div>
              )}
              {file.status === "uploading" && (
                <progress value={file.progress} max={100} />
              )}
              {file.status === "error" && (
                <div>
                  ⚠️ Upload failed
                  <button onClick={() => uploadFiles([file.originalFile])}>Retry</button>
                </div>
              )}
            </div>
          ))}
        </div>
      )}
    </div>
  );
}
```
### Image Upload with Metadata
```typescript
const s3Router = s3.createRouter({
portfolioImages: s3.image()
.maxFileSize('15MB')
.maxFiles(20)
.formats(['jpeg', 'png', 'webp'])
.middleware(async ({ req, file, metadata }) => {
const { userId } = await authenticateUser(req);
// Extract and validate metadata
const imageMetadata = await extractImageMetadata(file);
// Return enriched metadata
return {
...metadata,
userId,
uploadedBy: userId,
uploadedAt: new Date(),
originalFilename: file.name,
fileHash: await calculateFileHash(file),
...imageMetadata,
};
})
.onUploadComplete(async ({ file, url, metadata }) => {
// Save detailed image information
await saveImageToDatabase({
userId: metadata.userId,
s3Key: file.key,
url: url,
filename: metadata.originalFilename,
size: file.size,
dimensions: {
width: metadata.width,
height: metadata.height,
},
format: metadata.format,
colorProfile: metadata.colorProfile,
hasTransparency: metadata.hasTransparency,
exifData: metadata.exif,
hash: metadata.fileHash,
});
}),
});
```
## Performance Best Practices
```typescript
import { compress } from 'image-conversion'
export async function optimizeImage(file: File): Promise<File> {
  const blob = await compress(file, {
    quality: 0.8,
    type: 'image/webp',
    width: 1920,
    height: 1080,
    orientation: true // Auto-rotate based on EXIF
  })
  // Wrap the compressed Blob back into a File so it can be uploaded directly
  return new File([blob], file.name.replace(/\.\w+$/, '.webp'), { type: 'image/webp' })
}
// Usage in upload component
const handleFileSelect = async (files: File[]) => {
const optimizedFiles = await Promise.all(
files.map(file => optimizeImage(file))
)
uploadFiles(optimizedFiles)
}
```
```typescript
import { useState } from 'react'

export function ProgressiveImage({
src,
blurDataURL,
alt
}: {
src: string
blurDataURL: string
alt: string
}) {
const [isLoaded, setIsLoaded] = useState(false)
  return (
    <img
      src={isLoaded ? src : blurDataURL}
      alt={alt}
      onLoad={() => setIsLoaded(true)}
    />
  )
}
```
```typescript
import { useIntersectionObserver } from '@/hooks/use-intersection-observer'
export function LazyImage({ src, alt, ...props }) {
const [ref, isIntersecting] = useIntersectionObserver({
threshold: 0.1,
rootMargin: '50px'
})
  return (
    <div ref={ref}>
      {isIntersecting ? (
        <img src={src} alt={alt} {...props} />
      ) : (
        <span>Loading...</span>
      )}
    </div>
  )
}
```
***
**Image Excellence**: With proper optimization, validation, and processing,
your image uploads will provide an excellent user experience while maintaining
performance and quality.
# Guides & Tutorials (/docs/guides)
import { Card, Cards } from "fumadocs-ui/components/card";
import { Callout } from "fumadocs-ui/components/callout";
import { Steps, Step } from "fumadocs-ui/components/steps";
## Learning Path & Tutorials
Learn how to build robust file upload features with pushduck through comprehensive guides covering everything from basic uploads to advanced production patterns.
**Progressive Learning**: These guides are organized from basic concepts to advanced patterns. Start with client approaches and work your way up to production deployment.
## Getting Started
* Hook-based vs Property-based clients
* When to use each approach
* Migration strategies
* Performance considerations
**Perfect for**: Understanding client patterns
## Upload Patterns
* Image validation and processing
* Automatic resizing and optimization
* Format conversion
* Progressive loading patterns
**Perfect for**: Photo sharing, profile pictures, galleries
## Security & Authentication
* User authentication strategies
* Role-based access control
* JWT integration
* Session management
**Essential for**: Secure applications
* CORS setup for different providers
* Access Control Lists (ACL)
* Public vs private uploads
* Security best practices
**Essential for**: Production deployments
## Migration & Upgrades
* Step-by-step migration process
* Breaking changes and compatibility
* Performance improvements
* Type safety enhancements
**Perfect for**: Upgrading existing projects
## Production Deployment
* Environment configuration
* Security considerations
* Performance optimization
* Monitoring and logging
**Essential for**: Going live safely
## Common Patterns
### Basic Upload Flow
**Configure Server Router**
Set up your upload routes with validation:
```typescript
const uploadRouter = createS3Router({
routes: {
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB"),
},
});
```
**Implement Client Upload**
Use hooks or client for reactive uploads:
```typescript
const { upload, uploading, progress } = useUpload({
endpoint: '/api/upload',
route: 'imageUpload',
});
```
**Handle Upload Results**
Process successful uploads and errors:
```typescript
const result = await upload(file);
console.log('File uploaded:', result.url);
```
### Authentication Pattern
```typescript
// Server: Add authentication middleware
const uploadRouter = createS3Router({
middleware: [
async (req) => {
const user = await authenticate(req);
if (!user) throw new Error('Unauthorized');
return { user };
}
],
routes: {
userAvatar: s3.image()
.maxFileSize("2MB")
.path(({ metadata }) => `avatars/${metadata.user.id}`),
},
});
// Client: Include auth headers
const { upload } = useUpload({
endpoint: '/api/upload',
route: 'userAvatar',
headers: {
Authorization: `Bearer ${token}`,
},
});
```
## Architecture Patterns
### Multi-Provider Setup
```typescript
// Support multiple storage providers
const uploadRouter = createS3Router({
storage: process.env.NODE_ENV === 'production'
? { provider: 'aws-s3', ... }
: { provider: 'minio', ... },
routes: {
// Your routes remain the same
},
});
```
### Route-Based Organization
```typescript
const uploadRouter = createS3Router({
routes: {
// Public uploads
publicImages: s3.image().maxFileSize("5MB").public(),
// User-specific uploads
userDocuments: s3.file()
.maxFileSize("10MB")
.path(({ metadata }) => `users/${metadata.userId}/documents`),
// Admin uploads
adminAssets: s3.file()
.maxFileSize("50MB")
.middleware([requireAdmin]),
},
});
```
## Performance Tips
**Optimization Strategies**:
* Use appropriate file size limits for your use case
* Implement client-side validation before upload (see the sketch after this list)
* Consider using presigned URLs for large files
* Enable CDN for frequently accessed files
* Implement progressive upload for large files
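For example, a lightweight pre-upload validation pass (the limits shown are illustrative; match them to your server-side routes):

```typescript
// Returns an error message, or null if all files pass.
function validateBeforeUpload(files: File[]): string | null {
  const MAX_SIZE = 5 * 1024 * 1024; // 5MB, mirroring the server limit
  const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'image/webp'];
  for (const file of files) {
    if (file.size > MAX_SIZE) return `${file.name} exceeds 5MB`;
    if (!ALLOWED_TYPES.includes(file.type)) return `${file.name}: unsupported type`;
  }
  return null;
}
```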
## Troubleshooting Quick Links
| Issue | Solution |
| ----------------- | -------------------------------------------------------------------------- |
| **CORS errors** | Check [CORS Configuration](/docs/guides/security/cors-and-acl) |
| **Auth failures** | Review [Authentication Guide](/docs/guides/security/authentication) |
| **Slow uploads** | See [Production Checklist](/docs/guides/production-checklist) |
| **Type errors** | Check [Enhanced Client Migration](/docs/guides/migrate-to-enhanced-client) |
## What's Next?
1. **New to pushduck?** → Start with [Client Approaches](/docs/guides/client-approaches)
2. **Building image features?** → Check [Image Uploads](/docs/guides/image-uploads)
3. **Adding security?** → Review [Authentication](/docs/guides/security/authentication)
4. **Going to production?** → Use [Production Checklist](/docs/guides/production-checklist)
5. **Need help?** → Visit our [troubleshooting guide](/docs/api/troubleshooting)
**Community Guides**: Have a useful pattern or solution? Consider contributing to our documentation to help other developers!
# Enhanced Client Migration (/docs/guides/migrate-to-enhanced-client)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
## Migrating to Enhanced Client
Upgrade to the new property-based client API for enhanced type safety, better developer experience, and elimination of string literals.
The enhanced client API is **100% backward compatible**. You can migrate
gradually without breaking existing code.
## Why Migrate?
```typescript
// ❌ Old: String literals, no type safety
const { uploadFiles } = useUploadRoute("imageUpload")

// ✅ New: Property-based, full type inference
const { uploadFiles } = upload.imageUpload
```
```typescript
// ✅ Autocomplete shows all your endpoints
upload.
// imageUpload, documentUpload, videoUpload...
// ^ No more guessing endpoint names
```
```typescript
// When you rename routes in your router,
// TypeScript shows errors everywhere they're used
// Making refactoring safe and easy
```
## Migration Steps
**Install Latest Version**
Ensure you're using the latest version of pushduck:
```bash
npm install pushduck@latest
```
```bash
yarn add pushduck@latest
```
```bash
pnpm add pushduck@latest
```
```bash
bun add pushduck@latest
```
**Create Upload Client**
Set up your typed upload client:
```typescript title="lib/upload-client.ts"
import { createUploadClient } from 'pushduck/client'
import type { AppRouter } from './upload' // Your router type
export const upload = createUploadClient<AppRouter>({
  endpoint: '/api/upload'
})
```
**Migrate Components Gradually**
Update your components one by one:
```typescript
import { useUploadRoute } from 'pushduck/client'
export function ImageUploader() {
const { uploadFiles, files, isUploading } = useUploadRoute('imageUpload')
  return (
    <div>
      <input type="file" onChange={(e) => uploadFiles(Array.from(e.target.files || []))} />
      {/* Upload UI */}
    </div>
  )
}
```
```typescript
import { upload } from '@/lib/upload-client'
export function ImageUploader() {
const { uploadFiles, files, isUploading } = upload.imageUpload
  return (
    <div>
      <input type="file" onChange={(e) => uploadFiles(Array.from(e.target.files || []))} />
      {/* Same upload UI */}
    </div>
  )
}
```
**Update Imports**
Once migrated, you can remove old hook imports:
```typescript
// Remove old imports
// import { useUploadRoute } from 'pushduck/client'
// Use new client import
import { upload } from '@/lib/upload-client'
```
## Migration Examples
### Basic Component Migration
```typescript
import { useUploadRoute } from 'pushduck/client'
export function DocumentUploader() {
const {
uploadFiles,
files,
isUploading,
error,
reset
} = useUploadRoute('documentUpload', {
onSuccess: (results) => {
console.log('Uploaded:', results)
},
onError: (error) => {
console.error('Error:', error)
}
})
  return (
    <div>
      <input
        type="file"
        onChange={(e) => uploadFiles(Array.from(e.target.files || []))}
        disabled={isUploading}
      />
      {files.map(file => (
        <div key={file.id}>{file.name}</div>
      ))}
      {error && <div>Error: {error.message}</div>}
      <button onClick={reset}>Reset</button>
    </div>
  )
}
```
```typescript
import { upload } from '@/lib/upload-client'
export function DocumentUploader() {
const {
uploadFiles,
files,
isUploading,
error,
reset
} = upload.documentUpload
// Handle callbacks with upload options
const handleUpload = async (selectedFiles: File[]) => {
try {
const results = await uploadFiles(selectedFiles)
console.log('Uploaded:', results)
} catch (error) {
console.error('Error:', error)
}
}
  return (
    <div>
      <input
        type="file"
        onChange={(e) => handleUpload(Array.from(e.target.files || []))}
        disabled={isUploading}
      />
      {files.map(file => (
        <div key={file.id}>{file.name}</div>
      ))}
      {error && <div>Error: {error.message}</div>}
      <button onClick={reset}>Reset</button>
    </div>
  )
}
```
### Form Integration Migration
```typescript
import { useForm } from 'react-hook-form'
import { useUploadRoute } from 'pushduck/client'
export function ProductForm() {
const { register, handleSubmit, setValue } = useForm()
const { uploadFiles, uploadedFiles } = useUploadRoute('productImages', {
onSuccess: (results) => {
setValue('images', results.map(r => r.url))
}
})
  return (
    <form>
      {/* Form fields plus a file input that calls uploadFiles */}
    </form>
  )
}
```
```typescript
import { useForm } from 'react-hook-form'
import { upload } from '@/lib/upload-client'
export function ProductForm() {
const { register, handleSubmit, setValue } = useForm()
const { uploadFiles } = upload.productImages
const handleImageUpload = async (files: File[]) => {
const results = await uploadFiles(files)
setValue('images', results.map(r => r.url))
}
  return (
    <form>
      {/* Form fields plus a file input that calls handleImageUpload */}
    </form>
  )
}
```
### Multiple Upload Types Migration
```typescript
export function MediaUploader() {
const images = useUploadRoute('imageUpload')
const videos = useUploadRoute('videoUpload')
const documents = useUploadRoute('documentUpload')
  return (
    <div>
      {/* Render an upload section for images, videos, and documents */}
    </div>
  )
}
```
```typescript
import { upload } from '@/lib/upload-client'
export function MediaUploader() {
const images = upload.imageUpload
const videos = upload.videoUpload
const documents = upload.documentUpload
  return (
    <div>
      {/* Render an upload section for images, videos, and documents */}
    </div>
  )
}
```
## Key Differences
### API Comparison
| Feature | Hook-Based API | Property-Based API |
| ------------------ | ------------------------- | --------------------------- |
| **Type Safety** | Runtime string validation | Compile-time type checking |
| **IntelliSense** | Limited autocomplete | Full endpoint autocomplete |
| **Refactoring** | Manual find/replace | Automatic TypeScript errors |
| **Bundle Size** | Slightly larger | Optimized tree-shaking |
| **Learning Curve** | Familiar React pattern | New property-based pattern |
### Callback Handling
```typescript
const { uploadFiles } = useUploadRoute('images', {
onSuccess: (results) => console.log('Success:', results),
onError: (error) => console.error('Error:', error),
onProgress: (progress) => console.log('Progress:', progress)
})
```
```typescript
const { uploadFiles } = upload.images
await uploadFiles(files, {
onSuccess: (results) => console.log('Success:', results),
onError: (error) => console.error('Error:', error),
onProgress: (progress) => console.log('Progress:', progress)
})
```
## Troubleshooting
### Common Migration Issues
**Type Errors:** If you see TypeScript errors after migration, ensure your
router type is properly exported and imported.
```typescript
// ❌ Missing router type
export const upload = createUploadClient({
  endpoint: "/api/upload",
});

// ✅ With proper typing
export const upload = createUploadClient<AppRouter>({
  endpoint: "/api/upload",
});
```
### Gradual Migration Strategy
You can use both APIs simultaneously during migration:
```typescript
// Keep existing hook-based components working
const hookUpload = useUploadRoute("imageUpload");
// Use new property-based API for new components
const propertyUpload = upload.imageUpload;
// Both work with the same backend!
```
## Benefits After Migration
* **Enhanced Type Safety**: Catch errors at compile time, not runtime
* **Better Performance**: Optimized bundle size with tree-shaking
* **Improved DX**: Full IntelliSense support for all endpoints
* **Safe Refactoring**: Rename endpoints without breaking your app
* **Future-Proof**: Built for the next generation of pushduck features
***
**Migration Complete!** You now have enhanced type safety and a better
developer experience. Need help? Join our [Discord
community](https://pushduck.dev/discord) for support.
# Production Checklist (/docs/guides/production-checklist)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Steps, Step } from "fumadocs-ui/components/steps";
## Going Live Checklist
Get your file uploads production-ready. Start with the 8 essentials below; most apps don't need more than this.
**Quick Path to Production:** Complete the essential checklist below (8 items) and you're ready to deploy. Advanced optimizations can be added later as you scale.
## ✅ Essential Checklist (Required)
These 8 items are critical for safe production deployment:
**1. Authentication**
* [ ] Auth middleware on all upload routes
* [ ] Unauthenticated requests are blocked
```typescript
const router = s3.createRouter({
userFiles: s3.image()
.middleware(async ({ req }) => {
const session = await getServerSession(req);
if (!session) throw new Error("Auth required");
return { userId: session.user.id };
})
});
```
**2. Environment Variables**
* [ ] S3 credentials in `.env` (not in code)
* [ ] Secrets are strong and unique
```bash
AWS_ACCESS_KEY_ID=xxx
AWS_SECRET_ACCESS_KEY=xxx
AWS_REGION=us-east-1
S3_BUCKET_NAME=your-bucket
```
**3. File Validation**
* [ ] File type restrictions (`.formats()`)
* [ ] File size limits (`.maxFileSize()`)
```typescript
userPhotos: s3.image()
.maxFileSize("10MB")
.maxFiles(5)
.formats(["jpeg", "png", "webp"])
```
**4. CORS Configuration**
* [ ] CORS set up on S3 bucket
* [ ] Only your domain is allowed
See [CORS Setup Guide](/docs/guides/security/cors-and-acl)
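For reference, a typical S3 CORS policy looks like this sketch (replace the origin with your domain; see the guide above for provider-specific steps):

```json
[
  {
    "AllowedOrigins": ["https://your-app.com"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```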
**5. Error Monitoring**
* [ ] Error tracking enabled (Sentry/LogRocket)
* [ ] Upload failures are logged
```typescript
.onUploadError(async ({ error }) => {
console.error('Upload failed:', error);
// Sentry.captureException(error);
})
```
**6. Basic Rate Limiting** (Optional but recommended)
* [ ] Prevent abuse with upload limits
Use Upstash or Vercel KV for simple rate limiting.
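A minimal sketch using `@upstash/ratelimit` inside your upload middleware (the package and limits are assumptions, not something pushduck ships):

```typescript
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';

// 20 uploads per user per minute - tune to your traffic.
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(20, '1 m'),
});

// Inside your route's middleware:
// const { success } = await ratelimit.limit(userId);
// if (!success) throw new Error('Too many uploads, slow down');
```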
**7. Test Uploads**
* [ ] Upload works in production environment
* [ ] Files appear in S3 bucket correctly
* [ ] URLs are accessible
**8. Backup Strategy**
* [ ] S3 versioning enabled (optional)
* [ ] Know how to restore deleted files
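Enabling versioning on an existing bucket is a one-liner with the AWS CLI:

```bash
aws s3api put-bucket-versioning \
  --bucket your-bucket \
  --versioning-configuration Status=Enabled
```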
**✅ Done!** If you've completed these 8 items, your upload system is production-ready.
***
## When You Need More
**Most apps are production-ready with the 8 essentials above.** As you scale, consider:
* **CDN integration** - For global audience or high traffic
* **Advanced auth** - RBAC/ABAC for enterprise permissions (see [Authentication Guide](/docs/guides/security/authentication))
* **Redis caching** - For 10k+ requests/minute
* **Multi-region** - For mission-critical redundancy
***
## Next Steps
* [Authentication](/docs/guides/security/authentication) - Deep dive into authentication patterns
* [CORS & ACL](/docs/guides/security/cors-and-acl) - Configure CORS for your provider
* [Troubleshooting](/docs/api/troubleshooting) - Common issues and solutions
# Astro (/docs/integrations/astro)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
**Client-Side In Development**: Astro server-side integration is fully functional with Web Standards APIs. However, Astro-specific client-side components and hooks are still in development. You can use the standard pushduck client APIs for now.
## Using pushduck with Astro
Astro is a modern web framework for building fast, content-focused websites with islands architecture. It uses Web Standards APIs and provides excellent performance with minimal JavaScript. Since Astro uses standard `Request`/`Response` objects, pushduck handlers work directly without any adapters!
**Web Standards Native**: Astro API routes use Web Standard `Request`/`Response` objects, making pushduck integration seamless with zero overhead.
## Quick Setup
**Install dependencies**
```bash
npm install pushduck
```
**Configure upload router**
```typescript title="src/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: import.meta.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: import.meta.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: import.meta.env.AWS_ENDPOINT_URL!,
bucket: import.meta.env.S3_BUCKET_NAME!,
accountId: import.meta.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create API route**
```typescript title="src/pages/api/upload/[...path].ts"
import type { APIRoute } from 'astro';
import { uploadRouter } from '../../../lib/upload';
// Direct usage - no adapter needed!
export const ALL: APIRoute = async ({ request }) => {
return uploadRouter.handlers(request);
};
```
## Basic Integration
### Simple Upload Route
```typescript title="src/pages/api/upload/[...path].ts"
import type { APIRoute } from 'astro';
import { uploadRouter } from '../../../lib/upload';
// Method 1: Combined handler (recommended)
export const ALL: APIRoute = async ({ request }) => {
return uploadRouter.handlers(request);
};
// Method 2: Separate handlers (if you need method-specific logic)
export const GET: APIRoute = async ({ request }) => {
return uploadRouter.handlers.GET(request);
};
export const POST: APIRoute = async ({ request }) => {
return uploadRouter.handlers.POST(request);
};
```
### With CORS Support
```typescript title="src/pages/api/upload/[...path].ts"
import type { APIRoute } from 'astro';
import { uploadRouter } from '../../../lib/upload';
export const ALL: APIRoute = async ({ request }) => {
// Handle CORS preflight
if (request.method === 'OPTIONS') {
return new Response(null, {
status: 200,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type',
},
});
}
const response = await uploadRouter.handlers(request);
// Add CORS headers to actual response
response.headers.set('Access-Control-Allow-Origin', '*');
return response;
};
```
## Advanced Configuration
### Authentication with Astro
```typescript title="src/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: import.meta.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: import.meta.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: import.meta.env.AWS_ENDPOINT_URL!,
bucket: import.meta.env.S3_BUCKET_NAME!,
accountId: import.meta.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with cookie-based authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
const cookies = req.headers.get('Cookie');
const sessionId = parseCookie(cookies)?.sessionId;
if (!sessionId) {
throw new Error('Authentication required');
}
const user = await getUserFromSession(sessionId);
if (!user) {
throw new Error('Invalid session');
}
return {
userId: user.id,
username: user.username,
};
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
export type AppUploadRouter = typeof uploadRouter;
// Helper functions
function parseCookie(cookieString: string | null) {
if (!cookieString) return {};
return Object.fromEntries(
cookieString.split('; ').map(c => {
const [key, ...v] = c.split('=');
return [key, v.join('=')];
})
);
}
async function getUserFromSession(sessionId: string) {
// Implement your session validation logic
// This could connect to a database, Redis, etc.
return { id: 'user-123', username: 'demo-user' };
}
```
## Client-Side Usage
### Upload Component (React)
```tsx title="src/components/FileUpload.tsx"
import { useUpload } from "pushduck/client";
import type { AppUploadRouter } from "../lib/upload";
const { UploadButton, UploadDropzone } = useUpload<AppUploadRouter>({
endpoint: "/api/upload",
});
export default function FileUpload() {
function handleUploadComplete(files: any[]) {
console.log("Files uploaded:", files);
alert("Upload completed!");
}
function handleUploadError(error: Error) {
console.error("Upload error:", error);
alert(`Upload failed: ${error.message}`);
}
  return (
    <div>
      <h2>Image Upload</h2>
      {/* Prop names below are illustrative - client components are in development */}
      <UploadDropzone
        route="imageUpload"
        onUploadComplete={handleUploadComplete}
        onUploadError={handleUploadError}
      />
      <h2>Document Upload</h2>
      <UploadButton
        route="documentUpload"
        onUploadComplete={handleUploadComplete}
        onUploadError={handleUploadError}
      />
    </div>
  );
}
```
### Upload Component (Vue)
```vue title="src/components/FileUpload.vue"
<template>
  <div>
    <h2>Image Upload</h2>
    <!-- Upload UI: Vue client components are in development -->
    <h2>Document Upload</h2>
    <!-- Upload UI -->
  </div>
</template>
```
### Using in Astro Pages
```astro title="src/pages/index.astro"
---
// Server-side code (runs at build time)
import FileUpload from '../components/FileUpload';
---
<html lang="en">
  <head>
    <title>File Upload Demo</title>
  </head>
  <body>
    <h1>File Upload Demo</h1>
    <FileUpload client:load />
  </body>
</html>
```
## File Management
### Server-Side File API
```typescript title="src/pages/api/files.ts"
import type { APIRoute } from 'astro';
export const GET: APIRoute = async ({ request, url }) => {
const searchParams = url.searchParams;
const userId = searchParams.get('userId');
if (!userId) {
return new Response(JSON.stringify({ error: 'User ID required' }), {
status: 400,
headers: { 'Content-Type': 'application/json' }
});
}
// Fetch files from database
const files = await getFilesForUser(userId);
return new Response(JSON.stringify({
files: files.map(file => ({
id: file.id,
name: file.name,
url: file.url,
size: file.size,
uploadedAt: file.createdAt,
})),
}), {
headers: { 'Content-Type': 'application/json' }
});
};
async function getFilesForUser(userId: string) {
// Implement your database query logic
return [];
}
```
### File Management Page
```astro title="src/pages/files.astro"
---
// This runs on the server at build time or request time
const files = await fetch(`${Astro.url.origin}/api/files?userId=current-user`)
  .then(res => res.json())
  .catch(() => ({ files: [] }));

// Simple size formatter for display
function formatFileSize(bytes: number) {
  return bytes < 1024 * 1024
    ? `${(bytes / 1024).toFixed(1)} KB`
    : `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}
---
<html lang="en">
  <head>
    <title>My Files</title>
  </head>
  <body>
    <h1>My Files</h1>
    <h2>Uploaded Files</h2>
    {files.files.length === 0 ? (
      <p>No files uploaded yet.</p>
    ) : (
      <ul>
        {files.files.map((file: any) => (
          <li>
            <strong>{file.name}</strong>
            <span>{formatFileSize(file.size)}</span>
            <span>{new Date(file.uploadedAt).toLocaleDateString()}</span>
            <a href={file.url} target="_blank">View File</a>
          </li>
        ))}
      </ul>
    )}
  </body>
</html>
```
## Deployment Options
**Vercel:**
```javascript title="astro.config.mjs"
import { defineConfig } from 'astro/config';
import vercel from '@astrojs/vercel/serverless';
export default defineConfig({
output: 'server',
adapter: vercel({
runtime: 'nodejs18.x',
}),
});
```
**Netlify:**
```javascript title="astro.config.mjs"
import { defineConfig } from 'astro/config';
import netlify from '@astrojs/netlify/functions';
export default defineConfig({
output: 'server',
adapter: netlify(),
});
```
**Node.js (standalone):**
```javascript title="astro.config.mjs"
import { defineConfig } from 'astro/config';
import node from '@astrojs/node';
export default defineConfig({
output: 'server',
adapter: node({
mode: 'standalone',
}),
});
```
**Cloudflare:**
```javascript title="astro.config.mjs"
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';
export default defineConfig({
output: 'server',
adapter: cloudflare(),
});
```
## Environment Variables
```bash title=".env"
# Cloudflare R2 / S3-compatible storage (must match lib/upload.ts)
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_ENDPOINT_URL=https://your-account.r2.cloudflarestorage.com
S3_BUCKET_NAME=your-bucket-name
R2_ACCOUNT_ID=your-cloudflare-account-id
# Astro
PUBLIC_UPLOAD_ENDPOINT=http://localhost:3000/api/upload
```
## Performance Benefits
Because Astro ships zero client-side JavaScript by default, upload UIs load as small interactive islands while the rest of the page stays static, keeping pages fast even with rich upload features.
## Real-Time Upload Progress
```tsx title="src/components/AdvancedUpload.tsx"
import { useState } from 'react';
export default function AdvancedUpload() {
const [uploadProgress, setUploadProgress] = useState(0);
const [isUploading, setIsUploading] = useState(false);
async function handleFileUpload(event: React.ChangeEvent<HTMLInputElement>) {
const files = event.target.files;
if (!files || files.length === 0) return;
setIsUploading(true);
setUploadProgress(0);
try {
// Simulate upload progress
for (let i = 0; i <= 100; i += 10) {
setUploadProgress(i);
await new Promise(resolve => setTimeout(resolve, 100));
}
alert('Upload completed!');
} catch (error) {
console.error('Upload failed:', error);
alert('Upload failed!');
} finally {
setIsUploading(false);
setUploadProgress(0);
}
}
  return (
    <div>
      <input type="file" onChange={handleFileUpload} disabled={isUploading} />
      {isUploading && (
        <p>{uploadProgress}% uploaded</p>
      )}
    </div>
  );
}
```
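The loop above only simulates progress for demonstration. pushduck's client hooks expose real per-file progress (the same `files` state used in the Expo examples later in this document). A minimal sketch, assuming an `upload` client created with `createUploadClient` in `src/lib/upload-client.ts` (that file and path are assumptions):
```tsx title="src/components/RealProgressUpload.tsx"
import { upload } from "../lib/upload-client"; // assumed createUploadClient instance

export function RealProgressUpload() {
  // files carries per-file status and progress as uploads run
  const { uploadFiles, files, isUploading } = upload.imageUpload();

  return (
    <div>
      <input
        type="file"
        disabled={isUploading}
        onChange={(e) => uploadFiles(Array.from(e.target.files ?? []))}
      />
      {files.map((file) => (
        <p key={file.name}>
          {file.name}: {file.status === "success" ? "done" : `${file.progress}%`}
        </p>
      ))}
    </div>
  );
}
```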
## Troubleshooting
**Common Issues**
1. **Route not found**: Ensure your route is `src/pages/api/upload/[...path].ts`
2. **Build errors**: Check that pushduck is properly installed and configured
3. **Environment variables**: Use `import.meta.env` instead of `process.env`
4. **Client components**: Remember to add `client:load` directive for interactive components
### Debug Mode
Enable debug logging:
```typescript title="src/lib/upload.ts"
export const uploadRouter = createS3Router({
// ... routes
}).middleware(async ({ req, file }) => {
if (import.meta.env.DEV) {
console.log("Upload request:", req.url);
console.log("File:", file.name, file.size);
}
return {};
});
```
### Astro Configuration
```javascript title="astro.config.mjs"
import { defineConfig } from 'astro/config';
import react from '@astrojs/react';
import vue from '@astrojs/vue';
export default defineConfig({
integrations: [
react(), // For React components
vue(), // For Vue components
],
output: 'server', // Required for API routes
vite: {
define: {
// Make environment variables available
'import.meta.env.AWS_ACCESS_KEY_ID': JSON.stringify(process.env.AWS_ACCESS_KEY_ID),
}
}
});
```
Astro provides an excellent foundation for building fast, content-focused websites with pushduck, combining the power of islands architecture with Web Standards APIs for optimal performance and developer experience.
# Bun Runtime (/docs/integrations/bun)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
## Using pushduck with Bun
Bun is an ultra-fast JavaScript runtime with native Web Standards support. Since Bun uses Web Standard `Request` and `Response` objects natively, pushduck handlers work directly without any adapters!
**Web Standards Native**: Bun's `Bun.serve()` uses Web Standard `Request` objects directly, making pushduck integration seamless with zero overhead.
## Quick Setup
**Install dependencies**
```bash
bun add pushduck
```
```bash
npm install pushduck
```
```bash
yarn add pushduck
```
```bash
pnpm add pushduck
```
**Configure upload router**
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create Bun server with upload routes**
```typescript title="server.ts"
import { uploadRouter } from './lib/upload';
// Direct usage - no adapter needed!
Bun.serve({
port: 3000,
fetch(request) {
const url = new URL(request.url);
if (url.pathname.startsWith('/api/upload/')) {
return uploadRouter.handlers(request);
}
return new Response('Not found', { status: 404 });
},
});
console.log('π Bun server running on http://localhost:3000');
```
## Basic Integration
### Simple Upload Server
```typescript title="server.ts"
import { uploadRouter } from './lib/upload';
Bun.serve({
port: 3000,
fetch(request) {
const url = new URL(request.url);
// Method 1: Combined handler (recommended)
if (url.pathname.startsWith('/api/upload/')) {
return uploadRouter.handlers(request);
}
// Health check
if (url.pathname === '/health') {
return new Response(JSON.stringify({ status: 'ok' }), {
headers: { 'Content-Type': 'application/json' }
});
}
return new Response('Not found', { status: 404 });
},
});
console.log('π Bun server running on http://localhost:3000');
```
### With CORS and Routing
```typescript title="server.ts"
import { uploadRouter } from './lib/upload';
function handleCORS(request: Request) {
const origin = request.headers.get('origin');
const allowedOrigins = ['http://localhost:3000', 'https://your-domain.com'];
const headers = new Headers();
if (origin && allowedOrigins.includes(origin)) {
headers.set('Access-Control-Allow-Origin', origin);
}
headers.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
headers.set('Access-Control-Allow-Headers', 'Content-Type, Authorization');
return headers;
}
Bun.serve({
port: 3000,
fetch(request) {
const url = new URL(request.url);
const corsHeaders = handleCORS(request);
// Handle preflight requests
if (request.method === 'OPTIONS') {
return new Response(null, { status: 200, headers: corsHeaders });
}
// Upload routes
if (url.pathname.startsWith('/api/upload/')) {
return uploadRouter.handlers(request).then(response => {
// Add CORS headers to response
corsHeaders.forEach((value, key) => {
response.headers.set(key, value);
});
return response;
});
}
// Health check
if (url.pathname === '/health') {
return new Response(JSON.stringify({
status: 'ok',
runtime: 'Bun',
timestamp: new Date().toISOString()
}), {
headers: {
'Content-Type': 'application/json',
...Object.fromEntries(corsHeaders)
}
});
}
return new Response('Not found', { status: 404 });
},
});
console.log('π Bun server running on http://localhost:3000');
```
## Advanced Configuration
### Authentication and Rate Limiting
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authorization required');
}
const token = authHeader.substring(7);
try {
const payload = await verifyJWT(token);
return {
userId: payload.sub as string,
userRole: payload.role as string
};
} catch (error) {
throw new Error('Invalid token');
}
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
async function verifyJWT(token: string) {
// Your JWT verification logic here
// Using Bun's built-in crypto or a JWT library
return { sub: 'user-123', role: 'user' };
}
export type AppUploadRouter = typeof uploadRouter;
```
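On the client, the `privateUpload` route then expects a Bearer token. A minimal sketch following the client pattern shown in the Fastify section later in this document (the token storage helper is an assumption):
```typescript title="client/upload-client.ts"
import { createUploadClient } from 'pushduck/client';
import type { AppUploadRouter } from './lib/upload';

export const upload = createUploadClient<AppUploadRouter>({
  endpoint: 'http://localhost:3000/api/upload',
  headers: {
    Authorization: `Bearer ${getAuthToken()}`,
  },
});

function getAuthToken(): string {
  // Placeholder: read the JWT from wherever your app stores it
  return localStorage.getItem('auth-token') || '';
}
```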
### Production Server with Full Features
```typescript title="server.ts"
import { uploadRouter } from './lib/upload';
// Simple rate limiting store
const rateLimitStore = new Map();
function rateLimit(ip: string, maxRequests = 100, windowMs = 15 * 60 * 1000) {
const now = Date.now();
const key = ip;
const record = rateLimitStore.get(key);
if (!record || now > record.resetTime) {
rateLimitStore.set(key, { count: 1, resetTime: now + windowMs });
return true;
}
if (record.count >= maxRequests) {
return false;
}
record.count++;
return true;
}
function getClientIP(request: Request): string {
// In production, you might get this from headers like X-Forwarded-For
return request.headers.get('x-forwarded-for') ||
request.headers.get('x-real-ip') ||
'unknown';
}
Bun.serve({
port: process.env.PORT ? parseInt(process.env.PORT) : 3000,
fetch(request) {
const url = new URL(request.url);
const clientIP = getClientIP(request);
// Rate limiting
if (!rateLimit(clientIP)) {
return new Response(JSON.stringify({
error: 'Too many requests'
}), {
status: 429,
headers: { 'Content-Type': 'application/json' }
});
}
// CORS
const corsHeaders = {
'Access-Control-Allow-Origin': process.env.NODE_ENV === 'production'
? 'https://your-domain.com'
: '*',
'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type, Authorization',
};
// Handle preflight
if (request.method === 'OPTIONS') {
return new Response(null, { status: 200, headers: corsHeaders });
}
// Upload routes
if (url.pathname.startsWith('/api/upload/')) {
return uploadRouter.handlers(request).then(response => {
Object.entries(corsHeaders).forEach(([key, value]) => {
response.headers.set(key, value);
});
return response;
}).catch(error => {
console.error('Upload error:', error);
return new Response(JSON.stringify({
error: 'Upload failed',
message: process.env.NODE_ENV === 'development' ? error.message : 'Internal server error'
}), {
status: 500,
headers: {
'Content-Type': 'application/json',
...corsHeaders
}
});
});
}
// API info
if (url.pathname === '/api') {
return new Response(JSON.stringify({
name: 'Bun Upload API',
version: '1.0.0',
runtime: 'Bun',
endpoints: {
health: '/health',
upload: '/api/upload/*'
}
}), {
headers: {
'Content-Type': 'application/json',
...corsHeaders
}
});
}
// Health check
if (url.pathname === '/health') {
return new Response(JSON.stringify({
status: 'ok',
runtime: 'Bun',
version: Bun.version,
timestamp: new Date().toISOString(),
uptime: process.uptime()
}), {
headers: {
'Content-Type': 'application/json',
...corsHeaders
}
});
}
return new Response('Not found', { status: 404, headers: corsHeaders });
},
});
console.log(`π Bun server running on http://localhost:${process.env.PORT || 3000}`);
console.log(`π Environment: ${process.env.NODE_ENV || 'development'}`);
```
## File-based Routing
### Structured Application
```typescript title="routes/upload.ts"
import { uploadRouter } from '../lib/upload';
export function handleUpload(request: Request) {
return uploadRouter.handlers(request);
}
```
```typescript title="routes/api.ts"
export function handleAPI(request: Request) {
return new Response(JSON.stringify({
name: 'Bun Upload API',
version: '1.0.0',
runtime: 'Bun'
}), {
headers: { 'Content-Type': 'application/json' }
});
}
```
```typescript title="server.ts"
import { handleUpload } from './routes/upload';
import { handleAPI } from './routes/api';
// Note: startsWith matching means more specific paths must come before their prefixes
const routes = {
'/api/upload': handleUpload,
'/api': handleAPI,
'/health': () => new Response(JSON.stringify({ status: 'ok' }), {
headers: { 'Content-Type': 'application/json' }
})
};
Bun.serve({
port: 3000,
fetch(request) {
const url = new URL(request.url);
for (const [path, handler] of Object.entries(routes)) {
if (url.pathname.startsWith(path)) {
return handler(request);
}
}
return new Response('Not found', { status: 404 });
},
});
```
## Performance Benefits
- **Speed**: Bun's HTTP server is significantly faster than Node.js in Bun's own benchmarks, great for upload-heavy workloads.
- **Zero overhead**: No adapter layer, so pushduck handlers run directly in Bun.
- **All-in-one**: Built-in bundler, test runner, and package manager, with no extra tooling needed.
- **TypeScript first**: Run TypeScript directly without a compile step, perfect for rapid development.
## Deployment
### Docker Deployment
```dockerfile title="Dockerfile"
FROM oven/bun:1 as base
WORKDIR /usr/src/app
# Install dependencies
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile
# Copy source code
COPY . .
# Expose port
EXPOSE 3000
# Run the app
CMD ["bun", "run", "server.ts"]
```
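Build and run the image (the image name is illustrative):
```bash
docker build -t bun-upload-server .
docker run -p 3000:3000 bun-upload-server
```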
### Production Scripts
```json title="package.json"
{
"name": "bun-upload-server",
"version": "1.0.0",
"scripts": {
"dev": "bun run --watch server.ts",
"start": "bun run server.ts",
"build": "bun build server.ts --outdir ./dist --target bun",
"test": "bun test"
},
"dependencies": {
"pushduck": "latest"
},
"devDependencies": {
"bun-types": "latest"
}
}
```
***
**Bun + Pushduck**: The perfect combination for ultra-fast file uploads with zero configuration overhead and exceptional developer experience.
# Elysia (/docs/integrations/elysia)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
## Using pushduck with Elysia
Elysia is a TypeScript-first web framework designed for Bun. Since Elysia uses Web Standard `Request` objects natively, pushduck handlers work directly without any adapters!
**Web Standards Native**: Elysia exposes `context.request` as a Web Standard `Request` object, making pushduck integration seamless with zero overhead.
## Quick Setup
**Install dependencies**
```bash
bun add pushduck
```
```bash
npm install pushduck
```
```bash
yarn add pushduck
```
```bash
pnpm add pushduck
```
**Configure upload router**
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create Elysia app with upload routes**
```typescript title="server.ts"
import { Elysia } from 'elysia';
import { uploadRouter } from './lib/upload';
const app = new Elysia();
// Direct usage - no adapter needed!
app.all('/api/upload/*', (context) => {
return uploadRouter.handlers(context.request);
});
app.listen(3000);
```
## Basic Integration
### Simple Upload Route
```typescript title="server.ts"
import { Elysia } from 'elysia';
import { uploadRouter } from './lib/upload';
const app = new Elysia();
// Method 1: Combined handler (recommended)
app.all('/api/upload/*', (context) => {
return uploadRouter.handlers(context.request);
});
// Method 2: Separate handlers (if you need method-specific logic)
app.get('/api/upload/*', (context) => uploadRouter.handlers.GET(context.request));
app.post('/api/upload/*', (context) => uploadRouter.handlers.POST(context.request));
app.listen(3000);
```
### With Middleware and CORS
```typescript title="server.ts"
import { Elysia } from 'elysia';
import { cors } from '@elysiajs/cors';
import { uploadRouter } from './lib/upload';
const app = new Elysia()
.use(cors({
origin: ['http://localhost:3000', 'https://your-domain.com'],
allowedHeaders: ['Content-Type', 'Authorization'],
methods: ['GET', 'POST']
}))
// Upload routes
.all('/api/upload/*', (context) => uploadRouter.handlers(context.request))
// Health check
.get('/health', () => ({ status: 'ok' }))
.listen(3000);
console.log(`π¦ Elysia is running at http://localhost:3000`);
```
## Advanced Configuration
### Authentication with JWT
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with JWT authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authorization required');
}
const token = authHeader.substring(7);
try {
        // Use your JWT verification logic here (e.g. jose, as in the Expo section)
        const payload = await verifyJWT(token);
return {
userId: payload.sub as string,
userRole: payload.role as string
};
} catch (error) {
throw new Error('Invalid token');
}
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
async function verifyJWT(token: string) {
  // Plug in your actual JWT verification library here
  return { sub: 'user-123', role: 'user' };
}

export type AppUploadRouter = typeof uploadRouter;
```
### Full Production Setup
```typescript title="server.ts"
import { Elysia } from 'elysia';
import { cors } from '@elysiajs/cors';
import { rateLimit } from '@elysiajs/rate-limit';
import { swagger } from '@elysiajs/swagger';
import { uploadRouter } from './lib/upload';
const app = new Elysia()
// Swagger documentation
.use(swagger({
documentation: {
info: {
title: 'Upload API',
version: '1.0.0'
}
}
}))
// CORS
.use(cors({
origin: process.env.NODE_ENV === 'production'
? ['https://your-domain.com']
: true,
allowedHeaders: ['Content-Type', 'Authorization'],
methods: ['GET', 'POST']
}))
// Rate limiting
.use(rateLimit({
max: 100,
windowMs: 15 * 60 * 1000, // 15 minutes
}))
// Upload routes
.all('/api/upload/*', (context) => uploadRouter.handlers(context.request))
// Health check
.get('/health', () => ({
status: 'ok',
timestamp: new Date().toISOString()
}))
.listen(process.env.PORT || 3000);
console.log(`π¦ Elysia is running at http://localhost:${process.env.PORT || 3000}`);
```
## TypeScript Integration
### Type-Safe Client
```typescript title="lib/upload-client.ts"
import { createUploadClient } from 'pushduck/client';
import type { AppUploadRouter } from './upload';
export const uploadClient = createUploadClient<AppUploadRouter>({
  endpoint: `${process.env.NEXT_PUBLIC_API_URL || 'http://localhost:3000'}/api/upload`
});
```
### Client Usage
```typescript title="components/upload.tsx"
import { uploadClient } from '../lib/upload-client';
export function UploadComponent() {
const handleUpload = async (files: File[]) => {
try {
const results = await uploadClient.upload('imageUpload', {
files,
// Type-safe metadata based on your router configuration
metadata: { userId: 'user-123' }
});
console.log('Upload successful:', results);
} catch (error) {
console.error('Upload failed:', error);
}
};
  return (
    <input
      type="file"
      multiple
      onChange={(e) => {
        if (e.target.files) {
          handleUpload(Array.from(e.target.files));
        }
      }}
    />
  );
}
```
## Performance Benefits
- **Zero overhead**: No adapter layer, so pushduck handlers run directly in Elysia.
- **Bun-native speed**: Built for Bun's exceptional performance, perfect for high-throughput upload APIs.
- **End-to-end types**: Full TypeScript support from server to client with compile-time safety.
- **Plugin ecosystem**: Extensive plugins for authentication, validation, rate limiting, and more.
## Deployment
### Production Deployment
```dockerfile title="Dockerfile"
FROM oven/bun:1 as base
WORKDIR /usr/src/app
# Install dependencies
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile
# Copy source code
COPY . .
# Expose port
EXPOSE 3000
# Run the app
CMD ["bun", "run", "server.ts"]
```
```bash
# Build and run
docker build -t my-upload-api .
docker run -p 3000:3000 my-upload-api
```
***
**Perfect TypeScript Integration**: Elysia's TypeScript-first approach combined with pushduck's type-safe design creates an exceptional developer experience with full end-to-end type safety.
# Expo Router (/docs/integrations/expo)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
## Using pushduck with Expo Router
Expo Router is a file-based router for React Native and web applications that enables full-stack development with API routes. Since Expo Router uses Web Standards APIs, pushduck handlers work directly without any adapters!
**Web Standards Native**: Expo Router API routes use standard `Request`/`Response` objects, making pushduck integration seamless with zero overhead. Perfect for universal React Native apps!
## Quick Setup
**Install dependencies**
```bash
npx expo install expo-router pushduck
# For file uploads on mobile
npx expo install expo-document-picker expo-image-picker
# For file system operations
npx expo install expo-file-system
```
```bash
yarn expo install expo-router pushduck
# For file uploads on mobile
yarn expo install expo-document-picker expo-image-picker
# For file system operations
yarn expo install expo-file-system
```
```bash
pnpm expo install expo-router pushduck
# For file uploads on mobile
pnpm expo install expo-document-picker expo-image-picker
# For file system operations
pnpm expo install expo-file-system
```
```bash
bun expo install expo-router pushduck
# For file uploads on mobile
bun expo install expo-document-picker expo-image-picker
# For file system operations
bun expo install expo-file-system
```
**Configure server output**
Enable server-side rendering in your `app.json`:
```json title="app.json"
{
"expo": {
"web": {
"output": "server"
},
"plugins": [
[
"expo-router",
{
"origin": "https://your-domain.com"
}
]
]
}
}
```
**Configure upload router**
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3 } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = s3.createRouter({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create API route**
```typescript title="app/api/upload/[...slug]+api.ts"
import { uploadRouter } from '../../../lib/upload';
// Direct usage - no adapter needed!
export async function GET(request: Request) {
return uploadRouter.handlers(request);
}
export async function POST(request: Request) {
return uploadRouter.handlers(request);
}
```
## Basic Integration
### Simple Upload Route
```typescript title="app/api/upload/[...slug]+api.ts"
import { uploadRouter } from '../../../lib/upload';
// Method 1: Combined handler (recommended)
export async function GET(request: Request) {
return uploadRouter.handlers(request);
}
export async function POST(request: Request) {
return uploadRouter.handlers(request);
}
// Method 2: Individual methods (if you need method-specific logic)
export async function PUT(request: Request) {
return uploadRouter.handlers(request);
}
export async function DELETE(request: Request) {
return uploadRouter.handlers(request);
}
```
### With CORS Headers
```typescript title="app/api/upload/[...slug]+api.ts"
import { uploadRouter } from '../../../lib/upload';
function addCorsHeaders(response: Response) {
response.headers.set('Access-Control-Allow-Origin', '*');
response.headers.set('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
response.headers.set('Access-Control-Allow-Headers', 'Content-Type, Authorization');
return response;
}
export async function OPTIONS() {
return addCorsHeaders(new Response(null, { status: 200 }));
}
export async function GET(request: Request) {
const response = await uploadRouter.handlers(request);
return addCorsHeaders(response);
}
export async function POST(request: Request) {
const response = await uploadRouter.handlers(request);
return addCorsHeaders(response);
}
```
## Advanced Configuration
### Authentication with Expo Auth
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
import { jwtVerify } from 'jose';
const { s3 } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = s3.createRouter({
// Private uploads with JWT authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.formats(['jpeg', 'jpg', 'png', 'webp'])
.middleware(async ({ req }) => {
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authorization required');
}
const token = authHeader.substring(7);
try {
const secret = new TextEncoder().encode(process.env.JWT_SECRET!);
const { payload } = await jwtVerify(token, secret);
return {
userId: payload.sub as string,
platform: 'mobile'
};
} catch (error) {
throw new Error('Invalid token');
}
}),
// User profile pictures
profilePicture: s3
.image()
.maxFileSize("2MB")
.maxFiles(1)
.formats(['jpeg', 'jpg', 'png', 'webp'])
.middleware(async ({ req }) => {
const userId = await authenticateUser(req);
return { userId, category: 'profile' };
})
.paths({
generateKey: ({ metadata, file }) => {
return `profiles/${metadata.userId}/avatar.${file.name.split('.').pop()}`;
}
}),
// Document uploads
documents: s3
.file()
.maxFileSize("10MB")
.types(['application/pdf', 'text/plain'])
.maxFiles(5)
.middleware(async ({ req }) => {
const userId = await authenticateUser(req);
return { userId, category: 'documents' };
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
async function authenticateUser(req: Request): Promise<string> {
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authorization required');
}
const token = authHeader.substring(7);
const secret = new TextEncoder().encode(process.env.JWT_SECRET!);
const { payload } = await jwtVerify(token, secret);
return payload.sub as string;
}
export type AppUploadRouter = typeof uploadRouter;
```
## Client-Side Usage (React Native)
### Upload Hook
```typescript title="hooks/useUpload.ts"
import { createUploadClient } from 'pushduck/client';
import type { AppUploadRouter } from '../lib/upload';
export const upload = createUploadClient<AppUploadRouter>({
  endpoint: '/api/upload'
});
```
### Image Upload Component
```typescript title="components/ImageUploader.tsx"
import React, { useState } from 'react';
import { View, Text, TouchableOpacity, Image, Alert, Platform } from 'react-native';
import * as ImagePicker from 'expo-image-picker';
import { upload } from '../hooks/useUpload';
export default function ImageUploader() {
const [selectedImage, setSelectedImage] = useState<string | null>(null);
const { uploadFiles, files, isUploading, error } = upload.imageUpload();
const pickImage = async () => {
// Request permission
if (Platform.OS !== 'web') {
const { status } = await ImagePicker.requestMediaLibraryPermissionsAsync();
if (status !== 'granted') {
Alert.alert('Permission needed', 'Camera roll permission is required');
return;
}
}
const result = await ImagePicker.launchImageLibraryAsync({
mediaTypes: ImagePicker.MediaTypeOptions.Images,
allowsEditing: true,
aspect: [4, 3],
quality: 1,
});
if (!result.canceled) {
const asset = result.assets[0];
setSelectedImage(asset.uri);
// Create File object for upload
const file = {
uri: asset.uri,
name: asset.fileName || 'image.jpg',
type: asset.type || 'image/jpeg',
} as any;
uploadFiles([file]);
}
};
  return (
    <View>
      <TouchableOpacity onPress={pickImage} disabled={isUploading}>
        <Text>{isUploading ? 'Uploading...' : 'Pick Image'}</Text>
      </TouchableOpacity>
      {error && (
        <Text>Error: {error.message}</Text>
      )}
      {selectedImage && (
        <Image source={{ uri: selectedImage }} style={{ width: 200, height: 150 }} />
      )}
      {files.length > 0 && (
        <View>
          {files.map((file, index) => (
            <View key={index}>
              <Text>{file.name}</Text>
              <Text>{file.status === 'success' ? 'Complete' : `${file.progress}%`}</Text>
              {file.status === 'success' && file.url && (
                <Text>✓ Uploaded</Text>
              )}
            </View>
          ))}
        </View>
      )}
    </View>
  );
}
```
### Document Upload Component
```typescript title="components/DocumentUploader.tsx"
import React, { useState } from 'react';
import { View, Text, TouchableOpacity, Alert, FlatList } from 'react-native';
import * as DocumentPicker from 'expo-document-picker';
import { upload } from '../hooks/useUpload';
interface UploadedFile {
  name: string;
  size: number;
  url: string;
  downloadUrl?: string; // temporary presigned URL
}
export default function DocumentUploader() {
const [uploadedFiles, setUploadedFiles] = useState<UploadedFile[]>([]);
const { uploadFiles, isUploading, error } = upload.documents();
const pickDocument = async () => {
try {
const result = await DocumentPicker.getDocumentAsync({
type: ['application/pdf', 'text/plain'],
multiple: true,
});
if (!result.canceled) {
const files = result.assets.map(asset => ({
uri: asset.uri,
name: asset.name,
type: asset.mimeType || 'application/octet-stream',
})) as any[];
const uploadResult = await uploadFiles(files);
if (uploadResult.success) {
const newFiles = uploadResult.results.map(file => ({
name: file.name,
size: file.size,
url: file.url, // Permanent URL
downloadUrl: file.presignedUrl, // Temporary download URL (1 hour)
}));
setUploadedFiles(prev => [...prev, ...newFiles]);
Alert.alert('Success', `${files.length} file(s) uploaded successfully!`);
}
}
} catch (error) {
Alert.alert('Error', 'Failed to pick document');
}
};
  return (
    <View>
      <TouchableOpacity onPress={pickDocument} disabled={isUploading}>
        <Text>{isUploading ? 'Uploading...' : 'Pick Documents'}</Text>
      </TouchableOpacity>
      {error && (
        <Text>Error: {error.message}</Text>
      )}
      <FlatList
        data={uploadedFiles}
        keyExtractor={(item, index) => index.toString()}
        renderItem={({ item }) => (
          <View>
            <Text>{item.name}</Text>
            <Text>{(item.size / 1024).toFixed(1)} KB</Text>
          </View>
        )}
      />
    </View>
  );
}
```
## Project Structure
Here's a recommended project structure for Expo Router with pushduck:
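A sketch based on the file paths used throughout this guide:
```
my-app/
├── app/
│   ├── (tabs)/
│   │   ├── _layout.tsx
│   │   └── upload.tsx
│   └── api/
│       └── upload/
│           └── [...slug]+api.ts
├── components/
│   ├── ImageUploader.tsx
│   └── DocumentUploader.tsx
├── hooks/
│   └── useUpload.ts
├── lib/
│   └── upload.ts
└── app.json
```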
## Complete Example
### Main Upload Screen
```typescript title="app/(tabs)/upload.tsx"
import React from 'react';
import { View, Text, ScrollView, StyleSheet } from 'react-native';
import ImageUploader from '../../components/ImageUploader';
import DocumentUploader from '../../components/DocumentUploader';
export default function UploadScreen() {
  return (
    <ScrollView style={styles.container}>
      <Text style={styles.title}>File Upload Demo</Text>
      <View style={styles.section}>
        <Text style={styles.sectionTitle}>Image Upload</Text>
        <ImageUploader />
      </View>
      <View style={styles.section}>
        <Text style={styles.sectionTitle}>Document Upload</Text>
        <DocumentUploader />
      </View>
    </ScrollView>
  );
}
const styles = StyleSheet.create({
container: {
flex: 1,
backgroundColor: '#fff',
},
title: {
fontSize: 24,
fontWeight: 'bold',
textAlign: 'center',
marginVertical: 20,
},
section: {
padding: 20,
borderBottomWidth: 1,
borderBottomColor: '#eee',
},
sectionTitle: {
fontSize: 18,
fontWeight: '600',
marginBottom: 15,
},
});
```
### Tab Layout
```typescript title="app/(tabs)/_layout.tsx"
import { Tabs } from 'expo-router';
import { Ionicons } from '@expo/vector-icons';
export default function TabLayout() {
  // Screen names, titles, and icons are illustrative; match them to your routes
  return (
    <Tabs>
      <Tabs.Screen
        name="index"
        options={{
          title: 'Home',
          tabBarIcon: ({ color, size }) => (
            <Ionicons name="home" color={color} size={size} />
          ),
        }}
      />
      <Tabs.Screen
        name="upload"
        options={{
          title: 'Upload',
          tabBarIcon: ({ color, size }) => (
            <Ionicons name="cloud-upload" color={color} size={size} />
          ),
        }}
      />
    </Tabs>
  );
}
```
## Deployment Options
### EAS Build Configuration
Configure automatic server deployment in your `eas.json`:
```json title="eas.json"
{
"cli": {
"version": ">= 5.0.0"
},
"build": {
"development": {
"developmentClient": true,
"distribution": "internal",
"env": {
"EXPO_UNSTABLE_DEPLOY_SERVER": "1"
}
},
"preview": {
"distribution": "internal",
"env": {
"EXPO_UNSTABLE_DEPLOY_SERVER": "1"
}
},
"production": {
"env": {
"EXPO_UNSTABLE_DEPLOY_SERVER": "1"
}
}
}
}
```
Deploy with automatic server:
```bash
# Build for all platforms
eas build --platform all
# Deploy server only
npx expo export --platform web
eas deploy
```
### Development Build Setup
```bash
# Install dev client
npx expo install expo-dev-client
# Create development build
eas build --profile development
# Or run locally
npx expo run:ios --configuration Release
npx expo run:android --variant release
```
Configure local server origin:
```json title="app.json"
{
"expo": {
"plugins": [
[
"expo-router",
{
"origin": "http://localhost:8081"
}
]
]
}
}
```
### Local Development Server
```bash
# Start Expo development server
npx expo start
# Test API routes
curl http://localhost:8081/api/upload/presigned-url
# Clear cache if needed
npx expo start --clear
```
For production testing:
```bash
# Export for production
npx expo export
# Serve locally
npx expo serve
```
## Environment Variables
```bash title=".env"
# AWS/Cloudflare R2 Configuration
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=auto
AWS_ENDPOINT_URL=https://your-account.r2.cloudflarestorage.com
S3_BUCKET_NAME=your-bucket-name
R2_ACCOUNT_ID=your-cloudflare-account-id
# JWT Authentication
JWT_SECRET=your-jwt-secret
# Expo Configuration (for client-side, use EXPO_PUBLIC_ prefix)
EXPO_PUBLIC_API_URL=https://your-domain.com
```
**Important**: Server environment variables (without `EXPO_PUBLIC_` prefix) are only available in API routes, not in client code. Client-side variables must use the `EXPO_PUBLIC_` prefix.
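For example (file paths taken from this guide):
```typescript
// app/api/upload/[...slug]+api.ts (server code: no prefix needed)
const secret = process.env.JWT_SECRET;

// components/ImageUploader.tsx (client code: EXPO_PUBLIC_ prefix required)
const apiUrl = process.env.EXPO_PUBLIC_API_URL;
```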
## Performance Benefits
- **Universal**: Share upload logic between web and native platforms with a single codebase.
- **Native file access**: Direct access to native file system APIs for optimal performance on mobile.
- **Progress tracking**: Built-in support for upload progress tracking and real-time status updates.
- **Multi-platform**: Deploy to iOS, Android, and web with the same upload infrastructure.
## Troubleshooting
**File Permissions**: Always request proper permissions for camera and photo library access on mobile devices before file operations.
**Server Bundle**: Expo Router API routes require server output to be enabled in your `app.json` configuration.
### Common Issues
**Metro bundler errors:**
```bash
# Clear Metro cache
npx expo start --clear
# Reset Expo cache
npx expo r -c
```
**Permission denied errors:**
```typescript
// Always check permissions before file operations
import * as ImagePicker from 'expo-image-picker';
const { status } = await ImagePicker.requestMediaLibraryPermissionsAsync();
if (status !== 'granted') {
Alert.alert('Permission needed', 'Camera roll permission is required');
return;
}
```
**Network errors in development:**
```typescript
// Make sure your development server is accessible
const { upload } = useUpload('/api/upload', {
endpoint: __DEV__ ? 'http://localhost:8081' : 'https://your-domain.com',
});
```
**File upload timeout:**
```typescript
const { upload } = useUpload('/api/upload', {
timeout: 60000, // 60 seconds
});
```
### Debug Mode
Enable debug logging for development:
```typescript title="lib/upload.ts"
const { s3 } = createUploadConfig()
.provider("cloudflareR2",{ /* config */ })
.defaults({
debug: __DEV__, // Only in development
})
.build();
```
This will log detailed information about upload requests, file processing, and S3 operations to help diagnose issues during development.
## Framework-Specific Notes
1. **File System Access**: Use `expo-file-system` for advanced file operations
2. **Permissions**: Always request permissions before accessing camera or photo library
3. **Web Compatibility**: Components work on web out of the box with Expo Router
4. **Platform Detection**: Use `Platform.OS` to handle platform-specific logic
5. **Environment Variables**: Server variables don't need `EXPO_PUBLIC_` prefix in API routes
# Express (/docs/integrations/express)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
## Using pushduck with Express
Express uses the traditional Node.js `req`/`res` API pattern. Pushduck provides a simple adapter that converts Web Standard handlers to Express middleware format.
**Custom Request/Response API**: Express uses `req`/`res` objects instead of Web Standards, so pushduck provides the `toExpressHandler` adapter for seamless integration.
## Quick Setup
**Install dependencies**
```bash
npm install pushduck
```
```bash
yarn add pushduck
```
```bash
pnpm add pushduck
```
```bash
bun add pushduck
```
**Configure upload router**
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create Express server with upload routes**
```typescript title="server.ts"
import express from 'express';
import { uploadRouter } from './lib/upload';
import { toExpressHandler } from 'pushduck/adapters';
const app = express();
// Convert pushduck handlers to Express middleware
app.all('/api/upload/*', toExpressHandler(uploadRouter.handlers));
app.listen(3000, () => {
console.log('Server running on http://localhost:3000');
});
```
## Basic Integration
### Simple Upload Route
```typescript title="server.ts"
import express from 'express';
import cors from 'cors';
import { uploadRouter } from './lib/upload';
import { toExpressHandler } from 'pushduck/adapters';
const app = express();
// Middleware
app.use(cors());
app.use(express.json());
// Upload routes using adapter
app.all('/api/upload/*', toExpressHandler(uploadRouter.handlers));
// Health check
app.get('/health', (req, res) => {
res.json({ status: 'healthy', timestamp: new Date().toISOString() });
});
const port = process.env.PORT || 3000;
app.listen(port, () => {
console.log(`π Server running on http://localhost:${port}`);
});
```
### With Authentication Middleware
```typescript title="server.ts"
import express from 'express';
import jwt from 'jsonwebtoken';
import { uploadRouter } from './lib/upload';
import { toExpressHandler } from 'pushduck/adapters';
const app = express();
app.use(express.json());
// Authentication middleware
const authenticateToken = (req: express.Request, res: express.Response, next: express.NextFunction) => {
const authHeader = req.headers['authorization'];
const token = authHeader && authHeader.split(' ')[1];
if (!token) {
return res.sendStatus(401);
}
jwt.verify(token, process.env.JWT_SECRET!, (err, user) => {
if (err) return res.sendStatus(403);
req.user = user;
next();
});
};
// Public upload route (no auth)
app.all('/api/upload/public/*', toExpressHandler(uploadRouter.handlers));
// Private upload route (with auth)
app.all('/api/upload/private/*', authenticateToken, toExpressHandler(uploadRouter.handlers));
app.listen(3000);
```
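Note that `req.user` is not part of Express's built-in `Request` type. If you use TypeScript, a small declaration merge keeps the examples above compiling; the payload type is an assumption, so shape it to your JWT claims:
```typescript title="types/express.d.ts"
declare global {
  namespace Express {
    interface Request {
      user?: unknown; // replace with your decoded JWT payload type
    }
  }
}

export {};
```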
## Advanced Configuration
### Upload Configuration with Express Context
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Profile pictures with authentication
profilePicture: s3
.image()
.maxFileSize("2MB")
.maxFiles(1)
.formats(["jpeg", "png", "webp"])
.middleware(async ({ req }) => {
// Extract user from JWT token in Authorization header
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authentication required');
}
const token = authHeader.substring(7);
const user = await verifyJWT(token);
return {
userId: user.id,
userRole: user.role,
category: "profile"
};
}),
// Document uploads for authenticated users
documents: s3
.file()
.maxFileSize("10MB")
.maxFiles(5)
.types([
"application/pdf",
"application/msword",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"text/plain"
])
.middleware(async ({ req }) => {
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authentication required');
}
const token = authHeader.substring(7);
const user = await verifyJWT(token);
return {
userId: user.id,
category: "documents"
};
}),
// Public uploads (no authentication)
publicImages: s3
.image()
.maxFileSize("1MB")
.maxFiles(1)
.formats(["jpeg", "png"])
// No middleware = public access
});
async function verifyJWT(token: string) {
// Your JWT verification logic
const jwt = await import('jsonwebtoken');
return jwt.verify(token, process.env.JWT_SECRET!) as any;
}
export type AppUploadRouter = typeof uploadRouter;
```
### Complete Express Application
```typescript title="server.ts"
import express from 'express';
import cors from 'cors';
import helmet from 'helmet';
import rateLimit from 'express-rate-limit';
import { uploadRouter } from './lib/upload';
import { toExpressHandler } from 'pushduck/adapters';
const app = express();
// Security middleware
app.use(helmet());
app.use(cors({
origin: process.env.NODE_ENV === 'production'
? ['https://your-domain.com']
: ['http://localhost:3000'],
credentials: true
}));
// Rate limiting
const uploadLimiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // limit each IP to 100 requests per windowMs
message: 'Too many upload requests from this IP, please try again later.',
standardHeaders: true,
legacyHeaders: false,
});
// Body parsing middleware
app.use(express.json({ limit: '50mb' }));
app.use(express.urlencoded({ extended: true, limit: '50mb' }));
// Logging middleware
app.use((req, res, next) => {
console.log(`${new Date().toISOString()} - ${req.method} ${req.path}`);
next();
});
// Health check endpoint
app.get('/health', (req, res) => {
res.json({
status: 'healthy',
timestamp: new Date().toISOString(),
uptime: process.uptime(),
memory: process.memoryUsage(),
version: process.env.npm_package_version || '1.0.0'
});
});
// API info endpoint
app.get('/api', (req, res) => {
res.json({
name: 'Express Upload API',
version: '1.0.0',
endpoints: {
health: '/health',
upload: '/api/upload/*'
},
uploadTypes: [
'profilePicture - Single profile picture (2MB max)',
'documents - PDF, Word, text files (10MB max, 5 files)',
'publicImages - Public images (1MB max)'
]
});
});
// Upload routes with rate limiting
app.all('/api/upload/*', uploadLimiter, toExpressHandler(uploadRouter.handlers));
// 404 handler
app.use('*', (req, res) => {
res.status(404).json({
error: 'Not Found',
message: `Route ${req.originalUrl} not found`,
timestamp: new Date().toISOString()
});
});
// Error handler
app.use((err: Error, req: express.Request, res: express.Response, next: express.NextFunction) => {
console.error('Express error:', err);
res.status(500).json({
error: 'Internal Server Error',
message: process.env.NODE_ENV === 'development' ? err.message : 'Something went wrong',
timestamp: new Date().toISOString()
});
});
const port = process.env.PORT || 3000;
app.listen(port, () => {
console.log(`π Express server running on http://localhost:${port}`);
console.log(`π Upload endpoint: http://localhost:${port}/api/upload`);
});
```
## Project Structure
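A sketch based on the files used in this guide:
```
express-upload-api/
├── server.ts
├── lib/
│   └── upload.ts
├── middleware/
│   └── auth.ts
└── routes/
    └── uploads.ts
```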
## Modular Route Organization
### Separate Upload Routes
```typescript title="routes/uploads.ts"
import { Router } from 'express';
import { uploadRouter } from '../lib/upload';
import { toExpressHandler } from 'pushduck/adapters';
import { authenticateToken } from '../middleware/auth';
const router = Router();
// Public uploads
router.all('/public/*', toExpressHandler(uploadRouter.handlers));
// Private uploads (requires authentication)
router.all('/private/*', authenticateToken, toExpressHandler(uploadRouter.handlers));
export default router;
```
```typescript title="middleware/auth.ts"
import { Request, Response, NextFunction } from 'express';
import jwt from 'jsonwebtoken';
export const authenticateToken = (req: Request, res: Response, next: NextFunction) => {
const authHeader = req.headers['authorization'];
const token = authHeader && authHeader.split(' ')[1];
if (!token) {
return res.status(401).json({ error: 'Access token required' });
}
jwt.verify(token, process.env.JWT_SECRET!, (err, user) => {
if (err) {
return res.status(403).json({ error: 'Invalid or expired token' });
}
req.user = user;
next();
});
};
```
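Mount the router in your main server file so the routes above resolve to `/api/upload/public/*` and `/api/upload/private/*`:
```typescript title="server.ts"
import express from 'express';
import uploadRoutes from './routes/uploads';

const app = express();

// Nested router: /api/upload + /public/* | /private/*
app.use('/api/upload', uploadRoutes);

app.listen(3000);
```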
# Fastify (/docs/integrations/fastify)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
## Using pushduck with Fastify
Fastify is a high-performance Node.js web framework that uses custom `request`/`reply` objects. Pushduck provides a simple adapter that converts Web Standard handlers to Fastify handler format.
**Custom Request/Response API**: Fastify uses `request`/`reply` objects instead of Web Standards, so pushduck provides the `toFastifyHandler` adapter for seamless integration.
## Quick Setup
**Install dependencies**
```bash
npm install pushduck
```
```bash
yarn add pushduck
```
```bash
pnpm add pushduck
```
```bash
bun add pushduck
```
**Configure upload router**
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create Fastify server with upload routes**
```typescript title="server.ts"
import Fastify from 'fastify';
import { uploadRouter } from './lib/upload';
import { toFastifyHandler } from 'pushduck/adapters';
const fastify = Fastify({ logger: true });
// Convert pushduck handlers to Fastify handler
fastify.all('/api/upload/*', toFastifyHandler(uploadRouter.handlers));
const start = async () => {
try {
await fastify.listen({ port: 3000 });
console.log('π Fastify server running on http://localhost:3000');
} catch (err) {
fastify.log.error(err);
process.exit(1);
}
};
start();
```
## Basic Integration
### Simple Upload Route
```typescript title="server.ts"
import Fastify from 'fastify';
import cors from '@fastify/cors';
import { uploadRouter } from './lib/upload';
import { toFastifyHandler } from 'pushduck/adapters';
const fastify = Fastify({
logger: {
level: 'info',
transport: {
target: 'pino-pretty'
}
}
});
// Register CORS
await fastify.register(cors, {
origin: ['http://localhost:3000', 'https://your-domain.com']
});
// Upload routes using adapter
fastify.all('/api/upload/*', toFastifyHandler(uploadRouter.handlers));
// Health check
fastify.get('/health', async (request, reply) => {
return {
status: 'healthy',
timestamp: new Date().toISOString(),
framework: 'Fastify'
};
});
const start = async () => {
try {
await fastify.listen({ port: 3000, host: '0.0.0.0' });
} catch (err) {
fastify.log.error(err);
process.exit(1);
}
};
start();
```
### With Authentication Hook
```typescript title="server.ts"
import Fastify from 'fastify';
import jwt from '@fastify/jwt';
import { uploadRouter } from './lib/upload';
import { toFastifyHandler } from 'pushduck/adapters';
const fastify = Fastify({ logger: true });
// Register JWT
await fastify.register(jwt, {
secret: process.env.JWT_SECRET!
});
// Authentication hook
fastify.addHook('preHandler', async (request, reply) => {
// Only protect upload routes
if (request.url.startsWith('/api/upload/private/')) {
try {
await request.jwtVerify();
} catch (err) {
reply.send(err);
}
}
});
// Public upload routes
fastify.all('/api/upload/public/*', toFastifyHandler(uploadRouter.handlers));
// Private upload routes (protected by hook)
fastify.all('/api/upload/private/*', toFastifyHandler(uploadRouter.handlers));
const start = async () => {
try {
await fastify.listen({ port: 3000 });
} catch (err) {
fastify.log.error(err);
process.exit(1);
}
};
start();
```
## Advanced Configuration
### Upload Configuration with Fastify Context
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Profile pictures with authentication
profilePicture: s3
.image()
.maxFileSize("2MB")
.maxFiles(1)
.formats(["jpeg", "png", "webp"])
.middleware(async ({ req }) => {
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authentication required');
}
const token = authHeader.substring(7);
const user = await verifyJWT(token);
return {
userId: user.id,
userRole: user.role,
category: "profile"
};
}),
// Document uploads for authenticated users
documents: s3
.file()
.maxFileSize("10MB")
.maxFiles(5)
.types([
"application/pdf",
"application/msword",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"text/plain"
])
.middleware(async ({ req }) => {
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authentication required');
}
const token = authHeader.substring(7);
const user = await verifyJWT(token);
return {
userId: user.id,
category: "documents"
};
}),
// Public uploads (no authentication)
publicImages: s3
.image()
.maxFileSize("1MB")
.maxFiles(1)
.formats(["jpeg", "png"])
// No middleware = public access
});
async function verifyJWT(token: string) {
// Your JWT verification logic
const jwt = await import('jsonwebtoken');
return jwt.verify(token, process.env.JWT_SECRET!) as any;
}
export type AppUploadRouter = typeof uploadRouter;
```
### Complete Fastify Application
```typescript title="server.ts"
import Fastify from 'fastify';
import cors from '@fastify/cors';
import helmet from '@fastify/helmet';
import rateLimit from '@fastify/rate-limit';
import { uploadRouter } from './lib/upload';
import { toFastifyHandler } from 'pushduck/adapters';
const fastify = Fastify({
logger: {
level: process.env.NODE_ENV === 'production' ? 'warn' : 'info',
transport: process.env.NODE_ENV !== 'production' ? {
target: 'pino-pretty'
} : undefined
}
});
// Security middleware
await fastify.register(helmet, {
contentSecurityPolicy: false
});
// CORS configuration
await fastify.register(cors, {
origin: process.env.NODE_ENV === 'production'
? ['https://your-domain.com']
: true,
credentials: true
});
// Rate limiting
await fastify.register(rateLimit, {
max: 100,
timeWindow: '15 minutes',
errorResponseBuilder: (request, context) => ({
error: 'Rate limit exceeded',
message: `Too many requests from ${request.ip}. Try again later.`,
retryAfter: Math.round(context.ttl / 1000)
})
});
// Request logging
fastify.addHook('onRequest', async (request, reply) => {
request.log.info({ url: request.url, method: request.method }, 'incoming request');
});
// Health check endpoint
fastify.get('/health', async (request, reply) => {
return {
status: 'healthy',
timestamp: new Date().toISOString(),
uptime: process.uptime(),
memory: process.memoryUsage(),
version: process.env.npm_package_version || '1.0.0',
framework: 'Fastify'
};
});
// API info endpoint
fastify.get('/api', async (request, reply) => {
return {
name: 'Fastify Upload API',
version: '1.0.0',
endpoints: {
health: '/health',
upload: '/api/upload/*'
},
uploadTypes: [
'profilePicture - Single profile picture (2MB max)',
'documents - PDF, Word, text files (10MB max, 5 files)',
'publicImages - Public images (1MB max)'
]
};
});
// Upload routes with rate limiting
fastify.register(async function (fastify) {
await fastify.register(rateLimit, {
max: 50,
timeWindow: '15 minutes'
});
fastify.all('/api/upload/*', toFastifyHandler(uploadRouter.handlers));
});
// 404 handler
fastify.setNotFoundHandler(async (request, reply) => {
reply.status(404).send({
error: 'Not Found',
message: `Route ${request.method} ${request.url} not found`,
timestamp: new Date().toISOString()
});
});
// Error handler
fastify.setErrorHandler(async (error, request, reply) => {
request.log.error(error, 'Fastify error');
reply.status(500).send({
error: 'Internal Server Error',
message: process.env.NODE_ENV === 'development' ? error.message : 'Something went wrong',
timestamp: new Date().toISOString()
});
});
// Graceful shutdown
const gracefulShutdown = () => {
fastify.log.info('Shutting down gracefully...');
fastify.close().then(() => {
fastify.log.info('Server closed');
process.exit(0);
}).catch((err) => {
fastify.log.error(err, 'Error during shutdown');
process.exit(1);
});
};
process.on('SIGTERM', gracefulShutdown);
process.on('SIGINT', gracefulShutdown);
const start = async () => {
try {
const port = Number(process.env.PORT) || 3000;
const host = process.env.HOST || '0.0.0.0';
await fastify.listen({ port, host });
fastify.log.info(`π Fastify server running on http://${host}:${port}`);
fastify.log.info(`π Upload endpoint: http://${host}:${port}/api/upload`);
} catch (err) {
fastify.log.error(err);
process.exit(1);
}
};
start();
```
## Plugin-Based Architecture
### Upload Plugin
```typescript title="plugins/upload.ts"
import { FastifyPluginAsync } from 'fastify';
import { uploadRouter } from '../lib/upload';
import { toFastifyHandler } from 'pushduck/adapters';
const uploadPlugin: FastifyPluginAsync = async (fastify) => {
// Upload routes
fastify.all('/upload/*', toFastifyHandler(uploadRouter.handlers));
// Upload status endpoint
fastify.get('/upload-status', async (request, reply) => {
return {
status: 'ready',
supportedTypes: ['images', 'documents', 'publicImages'],
maxSizes: {
profilePicture: '2MB',
documents: '10MB',
publicImages: '1MB'
}
};
});
};
export default uploadPlugin;
```
### Main Server with Plugins
```typescript title="server.ts"
import Fastify from 'fastify';
import uploadPlugin from './plugins/upload';
const fastify = Fastify({ logger: true });
// Register upload plugin
await fastify.register(uploadPlugin, { prefix: '/api' });
const start = async () => {
try {
await fastify.listen({ port: 3000 });
} catch (err) {
fastify.log.error(err);
process.exit(1);
}
};
start();
```
## Client Usage
The client-side integration is identical regardless of your backend framework:
```typescript title="client/upload-client.ts"
import { createUploadClient } from 'pushduck/client';
import type { AppUploadRouter } from '../lib/upload';
export const upload = createUploadClient<AppUploadRouter>({
endpoint: 'http://localhost:3000/api/upload',
headers: {
'Authorization': `Bearer ${getAuthToken()}`
}
});
function getAuthToken(): string {
return localStorage.getItem('auth-token') || '';
}
```
```typescript title="client/upload-form.tsx"
import { upload } from './upload-client';
export function DocumentUploader() {
const { uploadFiles, files, isUploading, error } = upload.documents();
  const handleFileSelect = (e: React.ChangeEvent<HTMLInputElement>) => {
const selectedFiles = Array.from(e.target.files || []);
uploadFiles(selectedFiles);
};
  return (
    <div>
      <input type="file" multiple onChange={handleFileSelect} disabled={isUploading} />
      {error && (
        <p>Error: {error.message}</p>
      )}
      {files.map((file) => (
        <div key={file.name}>
          <span>{file.name}</span>
          {file.status === 'success' && (
            <a href={file.url}>Download</a>
          )}
        </div>
      ))}
    </div>
  );
}
```
## Deployment
### Docker Deployment
```dockerfile title="Dockerfile"
FROM node:18-alpine
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install all dependencies (the TypeScript build needs devDependencies)
RUN npm ci
# Copy source code
COPY . .
# Build TypeScript
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
```
### Package Configuration
```json title="package.json"
{
"name": "fastify-upload-api",
"version": "1.0.0",
"scripts": {
"dev": "tsx watch src/server.ts",
"build": "tsc",
"start": "node dist/server.js"
},
"dependencies": {
"fastify": "^4.24.0",
"pushduck": "latest",
"@fastify/cors": "^8.4.0",
"@fastify/helmet": "^11.1.0",
"@fastify/rate-limit": "^8.0.0",
"@fastify/jwt": "^7.2.0"
},
"devDependencies": {
"@types/node": "^20.0.0",
"tsx": "^3.12.7",
"typescript": "^5.0.0",
"pino-pretty": "^10.2.0"
}
}
```
### Environment Variables
```bash title=".env"
# Server Configuration
PORT=3000
HOST=0.0.0.0
NODE_ENV=development
JWT_SECRET=your-super-secret-jwt-key
# Cloudflare R2 Configuration
AWS_ACCESS_KEY_ID=your_r2_access_key
AWS_SECRET_ACCESS_KEY=your_r2_secret_key
AWS_ENDPOINT_URL=https://your-account-id.r2.cloudflarestorage.com
S3_BUCKET_NAME=your-bucket-name
R2_ACCOUNT_ID=your-account-id
```
## Performance Benefits
* Fastify is one of the fastest Node.js frameworks, perfect for high-throughput upload APIs.
* Leverage Fastify's extensive plugin ecosystem alongside pushduck's upload capabilities.
* Excellent TypeScript support with full type safety for both Fastify and pushduck.
* Built-in schema validation, logging, and error handling for production deployments.
***
**Fastify + Pushduck**: High-performance file uploads with Fastify's speed and pushduck's universal design, connected through a simple adapter.
# Fresh (/docs/integrations/fresh)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
**🚧 Client-Side In Development**: Fresh server-side integration is fully functional with Web Standards APIs. However, Fresh-specific client-side components and hooks are still in development. You can use the standard pushduck client APIs for now.
## Using pushduck with Fresh
Fresh is a modern web framework for Deno that uses islands architecture for optimal performance. It uses Web Standards APIs and provides server-side rendering with minimal client-side JavaScript. Since Fresh uses standard `Request`/`Response` objects, pushduck handlers work directly without any adapters!
**Web Standards Native**: Fresh API routes use Web Standard `Request`/`Response` objects, making pushduck integration seamless with zero overhead.
## Quick Setup
**Install Fresh and pushduck**
```bash
# Create a new Fresh project
deno run -A -r https://fresh.deno.dev my-app
cd my-app
# Add pushduck to import_map.json
```
```json title="import_map.json"
{
"imports": {
"$fresh/": "https://deno.land/x/fresh@1.6.1/",
"preact": "https://esm.sh/preact@10.19.2",
"preact/": "https://esm.sh/preact@10.19.2/",
"pushduck/server": "https://esm.sh/pushduck@latest/server",
"pushduck/client": "https://esm.sh/pushduck@latest/client"
}
}
```
```bash
# Create a new Fresh project
deno run -A -r https://fresh.deno.dev my-app
cd my-app
# Install pushduck via npm (requires Node.js compatibility)
npm install pushduck
```
```bash
# Create a new Fresh project
deno run -A -r https://fresh.deno.dev my-app
cd my-app
# Install pushduck via yarn (requires Node.js compatibility)
yarn add pushduck
```
```bash
# Create a new Fresh project
deno run -A -r https://fresh.deno.dev my-app
cd my-app
# Install pushduck via pnpm (requires Node.js compatibility)
pnpm add pushduck
```
```bash
# Create a new Fresh project
deno run -A -r https://fresh.deno.dev my-app
cd my-app
# Install pushduck via bun (requires Node.js compatibility)
bun add pushduck
```
**Configure upload router**
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: Deno.env.get("AWS_ACCESS_KEY_ID")!,
secretAccessKey: Deno.env.get("AWS_SECRET_ACCESS_KEY")!,
region: 'auto',
endpoint: Deno.env.get("AWS_ENDPOINT_URL")!,
bucket: Deno.env.get("S3_BUCKET_NAME")!,
accountId: Deno.env.get("R2_ACCOUNT_ID")!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create API route**
```typescript title="routes/api/upload/[...path].ts"
import { Handlers } from "$fresh/server.ts";
import { uploadRouter } from "../../../lib/upload.ts";
// Direct usage - no adapter needed!
export const handler: Handlers = {
async GET(req) {
return uploadRouter.handlers(req);
},
async POST(req) {
return uploadRouter.handlers(req);
},
};
```
## Basic Integration
### Simple Upload Route
```typescript title="routes/api/upload/[...path].ts"
import { Handlers } from "$fresh/server.ts";
import { uploadRouter } from "../../../lib/upload.ts";
// Combined handler: pushduck routes GET and POST internally.
// The OPTIONS handler is only needed if you serve cross-origin uploads.
export const handler: Handlers = {
  async GET(req) {
    return uploadRouter.handlers(req);
  },
  async POST(req) {
    return uploadRouter.handlers(req);
  },
  async OPTIONS(_req) {
    return new Response(null, {
      status: 200,
      headers: {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
        'Access-Control-Allow-Headers': 'Content-Type',
      },
    });
  },
};
```
### With Middleware
```typescript title="routes/_middleware.ts"
import { MiddlewareHandlerContext } from "$fresh/server.ts";
export async function handler(
req: Request,
ctx: MiddlewareHandlerContext,
) {
// Add CORS headers for upload routes
if (ctx.destination === "route" && req.url.includes("/api/upload")) {
const response = await ctx.next();
response.headers.set("Access-Control-Allow-Origin", "*");
response.headers.set("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
response.headers.set("Access-Control-Allow-Headers", "Content-Type");
return response;
}
return ctx.next();
}
```
## Advanced Configuration
### Authentication with Fresh
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
import { getCookies } from "https://deno.land/std@0.208.0/http/cookie.ts";
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: Deno.env.get("AWS_ACCESS_KEY_ID")!,
secretAccessKey: Deno.env.get("AWS_SECRET_ACCESS_KEY")!,
region: 'auto',
endpoint: Deno.env.get("AWS_ENDPOINT_URL")!,
bucket: Deno.env.get("S3_BUCKET_NAME")!,
accountId: Deno.env.get("R2_ACCOUNT_ID")!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with cookie-based authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
const cookies = getCookies(req.headers);
const sessionId = cookies.sessionId;
if (!sessionId) {
throw new Error('Authentication required');
}
const user = await getUserFromSession(sessionId);
if (!user) {
throw new Error('Invalid session');
}
return {
userId: user.id,
username: user.username,
};
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
export type AppUploadRouter = typeof uploadRouter;
// Helper function
async function getUserFromSession(sessionId: string) {
// Implement your session validation logic
// This could connect to a database, Deno KV, etc.
return { id: 'user-123', username: 'demo-user' };
}
```
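The `getUserFromSession` stub above can be backed by Deno KV. A minimal sketch, assuming sessions are stored under `["sessions", sessionId]` keys with the user shape shown above:

```typescript title="lib/session.ts"
// Sketch: session lookup in Deno KV (the key layout is an assumption)
interface SessionUser {
  id: string;
  username: string;
}

export async function getUserFromSession(sessionId: string): Promise<SessionUser | null> {
  const kv = await Deno.openKv();
  const entry = await kv.get<SessionUser>(["sessions", sessionId]);
  return entry.value; // null when no such session exists
}
```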
## Client-Side Usage
### Upload Island Component
```tsx title="islands/FileUpload.tsx"
import { useUpload } from "pushduck/client";
import type { AppUploadRouter } from "../lib/upload.ts";
const { UploadButton, UploadDropzone } = useUpload({
endpoint: "/api/upload",
});
export default function FileUpload() {
function handleUploadComplete(files: any[]) {
console.log("Files uploaded:", files);
alert("Upload completed!");
}
function handleUploadError(error: Error) {
console.error("Upload error:", error);
alert(`Upload failed: ${error.message}`);
}
  return (
    <div>
      <h3>Image Upload</h3>
      {/* Prop names are illustrative while Fresh-specific components are in development */}
      <UploadDropzone
        route="imageUpload"
        onUploadComplete={handleUploadComplete}
        onUploadError={handleUploadError}
      />
      <h3>Document Upload</h3>
      <UploadButton
        route="documentUpload"
        onUploadComplete={handleUploadComplete}
        onUploadError={handleUploadError}
      />
    </div>
  );
}
```
### Using in Pages
```tsx title="routes/index.tsx"
import { Head } from "$fresh/runtime.ts";
import FileUpload from "../islands/FileUpload.tsx";
export default function Home() {
  return (
    <>
      <Head>
        <title>File Upload Demo</title>
      </Head>
      <main>
        <h1>File Upload Demo</h1>
        <FileUpload />
      </main>
    </>
  );
}
```
## File Management
### Server-Side File API
```typescript title="routes/api/files.ts"
import { Handlers } from "$fresh/server.ts";
export const handler: Handlers = {
async GET(req) {
const url = new URL(req.url);
const userId = url.searchParams.get('userId');
if (!userId) {
return new Response(JSON.stringify({ error: 'User ID required' }), {
status: 400,
headers: { 'Content-Type': 'application/json' }
});
}
// Fetch files from database/Deno KV
const files = await getFilesForUser(userId);
return new Response(JSON.stringify({
files: files.map(file => ({
id: file.id,
name: file.name,
url: file.url,
size: file.size,
uploadedAt: file.createdAt,
})),
}), {
headers: { 'Content-Type': 'application/json' }
});
},
};
async function getFilesForUser(userId: string) {
// Example using Deno KV
const kv = await Deno.openKv();
const files = [];
for await (const entry of kv.list({ prefix: ["files", userId] })) {
files.push(entry.value);
}
return files;
}
```
### File Management Page
```tsx title="routes/files.tsx"
import { Head } from "$fresh/runtime.ts";
import { Handlers, PageProps } from "$fresh/server.ts";
import FileUpload from "../islands/FileUpload.tsx";
interface FileData {
id: string;
name: string;
url: string;
size: number;
uploadedAt: string;
}
interface PageData {
files: FileData[];
}
export const handler: Handlers<PageData> = {
  async GET(_req, ctx) {
    // Fetch files for the current user
    const files = await getFilesForUser("current-user");
    return ctx.render({ files });
  },
};

export default function FilesPage({ data }: PageProps<PageData>) {
  function formatFileSize(bytes: number): string {
    const sizes = ['Bytes', 'KB', 'MB', 'GB'];
    if (bytes === 0) return '0 Bytes';
    const i = Math.floor(Math.log(bytes) / Math.log(1024));
    return Math.round(bytes / Math.pow(1024, i) * 100) / 100 + ' ' + sizes[i];
  }

  return (
    <>
      <Head>
        <title>My Files</title>
      </Head>
      <main>
        <h1>My Files</h1>
        <FileUpload />
        <h2>Uploaded Files</h2>
        {data.files.length === 0 ? (
          <p>No files uploaded yet.</p>
        ) : (
          <ul>
            {data.files.map((file) => (
              <li key={file.id}>
                <span>{file.name}</span>
                <span>{formatFileSize(file.size)}</span>
                <span>{new Date(file.uploadedAt).toLocaleDateString()}</span>
                <a href={file.url} target="_blank">View File</a>
              </li>
            ))}
          </ul>
        )}
      </main>
    </>
  );
}

async function getFilesForUser(userId: string): Promise<FileData[]> {
  // Implementation depends on your storage solution
  return [];
}
```
## Deployment Options
```bash
# Deploy to Deno Deploy
deno task build
deployctl deploy --project=my-app --include=. --exclude=node_modules
```
```json title="deno.json"
{
"tasks": {
"build": "deno run -A dev.ts build",
"preview": "deno run -A main.ts",
"start": "deno run -A --watch=static/,routes/ dev.ts",
"deploy": "deployctl deploy --project=my-app --include=. --exclude=node_modules"
}
}
```
```dockerfile title="Dockerfile"
FROM denoland/deno:1.38.0
WORKDIR /app
# Copy dependency files
COPY deno.json deno.lock import_map.json ./
# Cache dependencies
RUN deno cache --import-map=import_map.json main.ts
# Copy source code
COPY . .
# Build the application
RUN deno task build
EXPOSE 8000
CMD ["deno", "run", "-A", "main.ts"]
```
```bash
# Install Deno
curl -fsSL https://deno.land/install.sh | sh
# Clone and run your app
git clone <your-repo-url>
cd <your-app>
deno task start
```
```systemd title="/etc/systemd/system/fresh-app.service"
[Unit]
Description=Fresh App
After=network.target
[Service]
Type=simple
User=deno
WorkingDirectory=/opt/fresh-app
ExecStart=/home/deno/.deno/bin/deno run -A main.ts
Restart=always
[Install]
WantedBy=multi-user.target
```
## Environment Variables
```bash title=".env"
# AWS Configuration
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_S3_BUCKET=your-bucket-name
# Fresh
PORT=8000
```
## Real-Time Upload Progress
```tsx title="islands/AdvancedUpload.tsx"
import { useState } from "preact/hooks";
export default function AdvancedUpload() {
const [uploadProgress, setUploadProgress] = useState(0);
const [isUploading, setIsUploading] = useState(false);
async function handleFileUpload(event: Event) {
const target = event.target as HTMLInputElement;
const files = target.files;
if (!files || files.length === 0) return;
setIsUploading(true);
setUploadProgress(0);
try {
// Simulate upload progress
for (let i = 0; i <= 100; i += 10) {
setUploadProgress(i);
await new Promise(resolve => setTimeout(resolve, 100));
}
alert('Upload completed!');
} catch (error) {
console.error('Upload failed:', error);
alert('Upload failed!');
} finally {
setIsUploading(false);
setUploadProgress(0);
}
}
  return (
    <div>
      <input type="file" onChange={handleFileUpload} disabled={isUploading} />
      {isUploading && (
        <p>{uploadProgress}% uploaded</p>
      )}
    </div>
  );
}
```
## Deno KV Integration
```typescript title="lib/storage.ts"
// Example using Deno KV for file metadata storage
export class FileStorage {
private kv: Deno.Kv;
  // Deno.openKv() is async, so construction goes through a static factory
  private constructor(kv: Deno.Kv) {
    this.kv = kv;
  }

  static async create(): Promise<FileStorage> {
    return new FileStorage(await Deno.openKv());
  }
async saveFileMetadata(userId: string, file: {
id: string;
name: string;
url: string;
size: number;
type: string;
}) {
const key = ["files", userId, file.id];
await this.kv.set(key, {
...file,
createdAt: new Date().toISOString(),
});
}
async getFilesForUser(userId: string) {
const files = [];
for await (const entry of this.kv.list({ prefix: ["files", userId] })) {
files.push(entry.value);
}
return files;
}
async deleteFile(userId: string, fileId: string) {
const key = ["files", userId, fileId];
await this.kv.delete(key);
}
}
// Top-level await is supported in Deno modules
export const fileStorage = await FileStorage.create();
```
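A sketch of wiring this class into the file-listing route shown earlier (the route path and response shape follow that example):

```typescript title="routes/api/files.ts"
import { Handlers } from "$fresh/server.ts";
import { fileStorage } from "../../lib/storage.ts";

export const handler: Handlers = {
  async GET(req) {
    const userId = new URL(req.url).searchParams.get("userId");
    if (!userId) {
      return new Response(JSON.stringify({ error: "User ID required" }), {
        status: 400,
        headers: { "Content-Type": "application/json" },
      });
    }
    // Delegates to the Deno KV-backed FileStorage defined above
    const files = await fileStorage.getFilesForUser(userId);
    return new Response(JSON.stringify({ files }), {
      headers: { "Content-Type": "application/json" },
    });
  },
};
```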
## Troubleshooting
**Common Issues**
1. **Route not found**: Ensure your route is `routes/api/upload/[...path].ts`
2. **Import errors**: Check your `import_map.json` configuration
3. **Permissions**: Deno requires explicit permissions (`-A` flag for all permissions)
4. **Environment variables**: Use `Deno.env.get()` instead of `process.env`
### Debug Mode
Enable debug logging:
```typescript title="lib/upload.ts"
export const uploadRouter = createS3Router({
// ... routes
}).middleware(async ({ req, file }) => {
if (Deno.env.get("DENO_ENV") === "development") {
console.log("Upload request:", req.url);
console.log("File:", file.name, file.size);
}
return {};
});
```
### Fresh Configuration
```typescript title="fresh.config.ts"
import { defineConfig } from "$fresh/server.ts";
export default defineConfig({
plugins: [],
// Enable static file serving
staticDir: "./static",
// Custom build options
build: {
target: ["chrome99", "firefox99", "safari15"],
},
});
```
Fresh provides an excellent foundation for building modern web applications with Deno and pushduck, combining the power of islands architecture with Web Standards APIs and Deno's secure runtime environment.
# Hono (/docs/integrations/hono)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
## Using pushduck with Hono
Hono is a fast, lightweight web framework built on Web Standards. Since Hono uses `Request` and `Response` objects natively, pushduck handlers work directly without any adapters!
**Web Standards Native**: Hono exposes `c.req.raw` as a Web Standard `Request` object, making pushduck integration seamless with zero overhead.
## Quick Setup
**Install dependencies**
```bash
npm install pushduck
```
```bash
yarn add pushduck
```
```bash
pnpm add pushduck
```
```bash
bun add pushduck
```
**Configure upload router**
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create Hono app with upload routes**
```typescript title="app.ts"
import { Hono } from 'hono';
import { uploadRouter } from './lib/upload';
const app = new Hono();
// Direct usage - no adapter needed!
app.all('/api/upload/*', (c) => {
return uploadRouter.handlers(c.req.raw);
});
export default app;
```
## Basic Integration
### Simple Upload Route
```typescript title="app.ts"
import { Hono } from 'hono';
import { uploadRouter } from './lib/upload';
const app = new Hono();
// Method 1: Combined handler (recommended)
app.all('/api/upload/*', (c) => {
return uploadRouter.handlers(c.req.raw);
});
// Method 2: Separate handlers (if you need method-specific logic)
app.get('/api/upload/*', (c) => uploadRouter.handlers.GET(c.req.raw));
app.post('/api/upload/*', (c) => uploadRouter.handlers.POST(c.req.raw));
export default app;
```
### With Middleware
```typescript title="app.ts"
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { logger } from 'hono/logger';
import { uploadRouter } from './lib/upload';
const app = new Hono();
// Global middleware
app.use('*', logger());
app.use('*', cors({
origin: ['http://localhost:3000', 'https://your-domain.com'],
allowMethods: ['GET', 'POST'],
allowHeaders: ['Content-Type'],
}));
// Upload routes
app.all('/api/upload/*', (c) => uploadRouter.handlers(c.req.raw));
// Health check
app.get('/health', (c) => c.json({ status: 'ok' }));
export default app;
```
## Advanced Configuration
### Authentication with Hono
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
import { verify } from 'hono/jwt';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with JWT authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authorization required');
}
const token = authHeader.substring(7);
try {
const payload = await verify(token, process.env.JWT_SECRET!);
return {
userId: payload.sub as string,
userRole: payload.role as string
};
} catch (error) {
throw new Error('Invalid token');
}
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
export type AppUploadRouter = typeof uploadRouter;
```
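On the client, the JWT can be attached through the upload client's `headers` option, as in the Fastify client example earlier. A sketch (`getAuthToken` is an illustrative helper; `privateUpload` matches the route above):

```typescript title="lib/upload-client.ts"
import { createUploadClient } from 'pushduck/client';
import type { AppUploadRouter } from './upload';

// Illustrative helper: read the JWT from wherever your app stores it
function getAuthToken(): string {
  return localStorage.getItem('auth-token') || '';
}

export const upload = createUploadClient({
  endpoint: '/api/upload',
  headers: {
    Authorization: `Bearer ${getAuthToken()}`,
  },
});

// Inside a component: privateUpload requires the token, publicUpload does not
// const { uploadFiles, isUploading } = upload.privateUpload();
```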
## Deployment Options
```typescript title="src/index.ts"
import { Hono } from 'hono';
import { uploadRouter } from './lib/upload';
const app = new Hono();
app.all('/api/upload/*', (c) => uploadRouter.handlers(c.req.raw));
export default app;
```
```toml title="wrangler.toml"
name = "my-upload-api"
main = "src/index.ts"
compatibility_date = "2023-12-01"
[env.production]
vars = { NODE_ENV = "production" }
```
```bash
# Deploy to Cloudflare Workers
npx wrangler deploy
```
```typescript title="server.ts"
import { Hono } from 'hono';
import { uploadRouter } from './lib/upload';
const app = new Hono();
app.all('/api/upload/*', (c) => uploadRouter.handlers(c.req.raw));
export default {
port: 3000,
fetch: app.fetch,
};
```
```bash
# Run with Bun
bun run server.ts
```
```typescript title="server.ts"
import { serve } from '@hono/node-server';
import { Hono } from 'hono';
import { uploadRouter } from './lib/upload';
const app = new Hono();
app.all('/api/upload/*', (c) => uploadRouter.handlers(c.req.raw));
const port = 3000;
console.log(`Server is running on port ${port}`);
serve({
fetch: app.fetch,
port
});
```
```bash
# Run with Node.js
npm run dev
```
```typescript title="server.ts"
import { Hono } from 'hono';
import { uploadRouter } from './lib/upload.ts';
const app = new Hono();
app.all('/api/upload/*', (c) => uploadRouter.handlers(c.req.raw));
Deno.serve(app.fetch);
```
```bash
# Run with Deno
deno run --allow-net --allow-env server.ts
```
## Performance Benefits
* No adapter layer means zero performance overhead: pushduck handlers run directly in Hono.
* Hono is one of the fastest web frameworks, perfect for high-performance upload APIs.
* Works on Cloudflare Workers, Bun, Node.js, and Deno with the same code.
* Hono + pushduck creates incredibly lightweight upload services.
***
**Perfect Match**: Hono's Web Standards foundation and pushduck's universal design create a powerful, fast, and lightweight file upload solution that works everywhere.
# Framework Integrations (/docs/integrations)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
## Supported Frameworks
Pushduck provides **universal file upload handlers** that work with any web framework through a single, consistent API. Write your upload logic once and deploy it anywhere!
**Universal Design**: Pushduck uses Web Standards (Request/Response) at its core, making it compatible with both Web Standards frameworks and those with custom request/response APIs without framework-specific code.
## 🚀 Universal API
All frameworks use the same core API:
```typescript
import { createS3Router, s3 } from 'pushduck/server';
const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB"),
videoUpload: s3.file().maxFileSize("100MB").types(["video/*"])
});
// Universal handlers - work with ANY framework
export const { GET, POST } = uploadRouter.handlers;
```
## Framework Categories
Pushduck supports frameworks in two categories:
**No adapter needed!** Use `uploadRouter.handlers` directly.
* Hono
* Elysia
* Bun Runtime
* TanStack Start
* SolidJS Start
**Simple adapters provided** for seamless integration.
* Next.js (App & Pages Router)
* Express
* Fastify
## Quick Start by Framework
```typescript
// Works with: Hono, Elysia, Bun, TanStack Start, SolidJS Start
import { uploadRouter } from '@/lib/upload';
// Direct usage - no adapter needed!
app.all('/api/upload/*', (ctx) => {
return uploadRouter.handlers(ctx.request); // or c.req.raw
});
```
```typescript
// app/api/upload/route.ts
import { uploadRouter } from '@/lib/upload';
// Direct usage (recommended)
export const { GET, POST } = uploadRouter.handlers;
// Or with explicit adapter for extra type safety
import { toNextJsHandler } from 'pushduck/adapters';
export const { GET, POST } = toNextJsHandler(uploadRouter.handlers);
```
```typescript
import express from 'express';
import { uploadRouter } from '@/lib/upload';
import { toExpressHandler } from 'pushduck/adapters';
const app = express();
app.all("/api/upload/*", toExpressHandler(uploadRouter.handlers));
```
```typescript
import Fastify from 'fastify';
import { uploadRouter } from '@/lib/upload';
import { toFastifyHandler } from 'pushduck/adapters';
const fastify = Fastify();
fastify.all('/api/upload/*', toFastifyHandler(uploadRouter.handlers));
```
## Why Universal Handlers Work
**Web Standards Foundation**
Pushduck is built on Web Standards (`Request` and `Response` objects) that are supported by all modern JavaScript runtimes.
```typescript
// Core handler signature
type Handler = (request: Request) => Promise<Response>
```
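Because each handler is just a function of a Web `Request`, you can call it directly, which is handy for quick smoke tests. A sketch reusing the `imageUpload` route from the example above:

```typescript
import { uploadRouter } from './lib/upload';

// Exercise the universal handler without any framework in the loop;
// the exact response depends on the route configuration
const response = await uploadRouter.handlers(
  new Request('http://localhost:3000/api/upload/imageUpload', { method: 'GET' })
);
console.log(response.status);
```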
**Framework Compatibility**
Modern frameworks expose Web Standard objects directly:
* **Hono**: `c.req.raw` is a Web `Request`
* **Elysia**: `context.request` is a Web `Request`
* **Bun**: Native Web `Request` support
* **TanStack Start**: `{ request }` is a Web `Request`
* **SolidJS Start**: `event.request` is a Web `Request`
**Framework Adapters**
For frameworks with custom request/response APIs, simple adapters convert between formats:
```typescript
// Express adapter example (simplified)
import type { Request as ExpressRequest, Response as ExpressResponse, NextFunction } from 'express';

export function toExpressHandler(handlers: UniversalHandlers) {
  return async (req: ExpressRequest, res: ExpressResponse, next: NextFunction) => {
    const webRequest = convertExpressToWebRequest(req);
    const webResponse = await handlers[req.method as 'GET' | 'POST'](webRequest);
    convertWebResponseToExpress(webResponse, res);
  };
}
```
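For illustration, the two conversion helpers referenced above might look roughly like this. This is a simplified sketch, not pushduck's actual adapter code: `convertExpressToWebRequest` and `convertWebResponseToExpress` are illustrative names, and a production adapter must also handle streaming bodies:

```typescript
import type { Request as ExpressRequest, Response as ExpressResponse } from 'express';

// Build a Web Standard Request from an Express request (sketch)
function convertExpressToWebRequest(req: ExpressRequest): Request {
  const url = `${req.protocol}://${req.get('host')}${req.originalUrl}`;
  return new Request(url, {
    method: req.method,
    headers: new Headers(req.headers as Record<string, string>),
    // Sketch only: buffers a JSON body; GET/HEAD requests carry none
    body: ['GET', 'HEAD'].includes(req.method) ? undefined : JSON.stringify(req.body),
  });
}

// Copy a Web Standard Response back onto the Express response (sketch)
async function convertWebResponseToExpress(webResponse: Response, res: ExpressResponse) {
  res.status(webResponse.status);
  webResponse.headers.forEach((value, key) => res.setHeader(key, value));
  res.send(Buffer.from(await webResponse.arrayBuffer()));
}
```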
## Configuration (Same for All Frameworks)
Your upload configuration is identical across all frameworks:
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Image uploads with validation
imageUpload: s3
.image()
.maxFileSize("5MB")
.formats(["jpeg", "png", "webp"])
.middleware(async ({ req }) => {
const userId = await getUserId(req);
return { userId, category: "images" };
}),
// Document uploads
documentUpload: s3
.file()
.maxFileSize("10MB")
.types(["application/pdf", "text/plain"])
.middleware(async ({ req }) => {
const userId = await getUserId(req);
return { userId, category: "documents" };
}),
// Video uploads
videoUpload: s3
.file()
.maxFileSize("100MB")
.types(["video/mp4", "video/quicktime"])
.middleware(async ({ req }) => {
const userId = await getUserId(req);
return { userId, category: "videos" };
})
});
export type AppUploadRouter = typeof uploadRouter;
```
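The `getUserId` helper above is left to your auth layer. A minimal sketch that trusts a bearer token (the verification step is an assumption you should replace with your provider's check):

```typescript
// Sketch: derive a user id from the incoming Web Request
async function getUserId(req: Request): Promise<string> {
  const authHeader = req.headers.get('authorization');
  if (!authHeader?.startsWith('Bearer ')) {
    throw new Error('Authentication required');
  }
  // Verify the token with your auth provider here; this sketch returns it as-is
  return authHeader.substring(7);
}
```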
## Client Usage (Framework Independent)
The client-side code is identical regardless of your backend framework:
```typescript title="lib/upload-client.ts"
import { createUploadClient } from 'pushduck/client';
import type { AppUploadRouter } from './upload';
export const upload = createUploadClient({
endpoint: '/api/upload'
});
```
```typescript title="components/upload-form.tsx"
import { upload } from '@/lib/upload-client';
export function UploadForm() {
// Property-based access with full type safety
const { uploadFiles, files, isUploading } = upload.imageUpload();
const handleUpload = async (selectedFiles: File[]) => {
await uploadFiles(selectedFiles);
};
  return (
    <div>
      <input
        type="file"
        multiple
        disabled={isUploading}
        onChange={(e) => handleUpload(Array.from(e.target.files || []))}
      />
      {files.map(file => (
        <div key={file.name}>
          <span>{file.name}</span>
          {file.url && <a href={file.url}>View</a>}
        </div>
      ))}
    </div>
  );
}
```
## Benefits of Universal Design
* Migrate from Express to Hono, or from Next.js to Bun, without changing your upload implementation.
* Web Standards native frameworks get direct handler access with no adapter overhead.
* Master pushduck once and use it with any framework in your toolkit.
* As more frameworks adopt Web Standards, they automatically work with pushduck.
## Next Steps
Choose your framework integration guide:
* **Next.js**: Complete guide for Next.js App Router and Pages Router
* **Hono**: Fast, lightweight, built on Web Standards
* **Elysia**: TypeScript-first framework with Bun
* **Express**: Classic Node.js framework integration
***
**Universal by Design**: Write once, run anywhere. Pushduck's universal handlers make file uploads work seamlessly across the entire JavaScript ecosystem.
# Next.js (/docs/integrations/nextjs)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
## Next.js Integration
Pushduck provides seamless integration with both Next.js App Router and Pages Router through universal handlers that work with Next.js's Web Standards-based API.
**Next.js 13+**: App Router uses Web Standards (Request/Response), so pushduck handlers work directly. Pages Router requires a simple adapter for the legacy req/res API.
## Quick Setup
**Install pushduck**
```bash
npm install pushduck
```
```bash
pnpm add pushduck
```
```bash
yarn add pushduck
```
```bash
bun add pushduck
```
**Configure your upload router**
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{ // [!code highlight]
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!, // [!code highlight]
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"), // [!code highlight]
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create API route**
```typescript title="app/api/upload/route.ts"
import { uploadRouter } from '@/lib/upload';
// Direct usage (recommended)
export const { GET, POST } = uploadRouter.handlers;
```
```typescript title="pages/api/upload/[...path].ts"
import { uploadRouter } from '@/lib/upload';
import { toNextJsPagesHandler } from 'pushduck/adapters';
export default toNextJsPagesHandler(uploadRouter.handlers);
```
## App Router Integration
Next.js App Router uses Web Standards, making integration seamless:
### Basic API Route
```typescript title="app/api/upload/route.ts"
import { uploadRouter } from '@/lib/upload';
// Direct usage - works because Next.js App Router uses Web Standards
export const { GET, POST } = uploadRouter.handlers;
```
### With Type Safety Adapter
For extra type safety and better IDE support:
```typescript title="app/api/upload/route.ts"
import { uploadRouter } from '@/lib/upload';
import { toNextJsHandler } from 'pushduck/adapters';
// Explicit adapter for enhanced type safety
export const { GET, POST } = toNextJsHandler(uploadRouter.handlers);
```
### Advanced Configuration
```typescript title="app/api/upload/route.ts"
import { createUploadConfig } from 'pushduck/server';
import { getServerSession } from 'next-auth';
import { authOptions } from '@/lib/auth';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
const uploadRouter = createS3Router({
// Profile pictures with authentication
profilePicture: s3
.image()
.maxFileSize("2MB")
.formats(["jpeg", "png", "webp"])
.middleware(async ({ req }) => {
const session = await getServerSession(authOptions);
if (!session?.user?.id) {
throw new Error("Authentication required");
}
return {
userId: session.user.id,
category: "profile"
};
}),
// Document uploads for authenticated users
documents: s3
.file()
.maxFileSize("10MB")
.types(["application/pdf", "text/plain", "application/msword"])
.middleware(async ({ req }) => {
const session = await getServerSession(authOptions);
if (!session?.user?.id) {
throw new Error("Authentication required");
}
return {
userId: session.user.id,
category: "documents"
};
}),
// Public image uploads (no auth required)
publicImages: s3
.image()
.maxFileSize("5MB")
.formats(["jpeg", "png", "webp"])
// No middleware = publicly accessible
});
export type AppUploadRouter = typeof uploadRouter;
export const { GET, POST } = uploadRouter.handlers;
```
## Pages Router Integration
Pages Router uses the legacy req/res API, so we provide a simple adapter:
### Basic API Route
```typescript title="pages/api/upload/[...path].ts"
import { uploadRouter } from '@/lib/upload';
import { toNextJsPagesHandler } from 'pushduck/adapters';
export default toNextJsPagesHandler(uploadRouter.handlers);
```
### With Authentication
```typescript title="pages/api/upload/[...path].ts"
import { createUploadConfig } from 'pushduck/server';
import { toNextJsPagesHandler } from 'pushduck/adapters';
import { getSession } from 'next-auth/react';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
// ... your config
})
.build();
const uploadRouter = createS3Router({
imageUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
// Convert Web Request to get session
const session = await getSession({ req: req as any });
if (!session?.user?.id) {
throw new Error("Authentication required");
}
return {
userId: session.user.id
};
})
});
export default toNextJsPagesHandler(uploadRouter.handlers);
```
## Client-Side Usage
The client-side code is identical for both App Router and Pages Router:
### Setup Upload Client
```typescript title="lib/upload-client.ts"
import { createUploadClient } from 'pushduck/client';
import type { AppUploadRouter } from './upload';
export const upload = createUploadClient({
endpoint: '/api/upload'
});
```
### React Component
```typescript title="components/upload-form.tsx"
'use client'; // App Router
// or just a regular component for Pages Router
import { upload } from '@/lib/upload-client';

export function UploadForm() {
  const { uploadFiles, files, isUploading, error } = upload.imageUpload();

  const handleFileSelect = (e: React.ChangeEvent<HTMLInputElement>) => {
    const selectedFiles = Array.from(e.target.files || []);
    uploadFiles(selectedFiles);
  };

  return (
    <div>
      <input type="file" multiple onChange={handleFileSelect} disabled={isUploading} />
      {error && <p>Error: {error.message}</p>}
      {files.length > 0 && (
        <ul>
          {files.map((file) => (
            <li key={file.name}>
              <span>{file.name}</span>
              <span>{(file.size / 1024 / 1024).toFixed(2)} MB</span>
              <span>{file.status === 'success' ? 'Complete' : `${file.progress}%`}</span>
              {file.status === 'success' && file.url && (
                <a href={file.url} target="_blank" rel="noreferrer">View</a>
              )}
            </li>
          ))}
        </ul>
      )}
    </div>
  );
}
```
## Complete Example
### Upload Configuration
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
import { getServerSession } from 'next-auth';
import { authOptions } from './auth';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
const timestamp = Date.now();
const randomId = Math.random().toString(36).substring(2, 8);
return `${metadata.userId}/${timestamp}/${randomId}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Profile pictures - single image, authenticated
profilePicture: s3
.image()
.maxFileSize("2MB")
.maxFiles(1)
.formats(["jpeg", "png", "webp"])
.middleware(async ({ req }) => {
const session = await getServerSession(authOptions);
if (!session?.user?.id) throw new Error("Authentication required");
return { userId: session.user.id, type: "profile" };
}),
// Gallery images - multiple images, authenticated
gallery: s3
.image()
.maxFileSize("5MB")
.maxFiles(10)
.formats(["jpeg", "png", "webp"])
.middleware(async ({ req }) => {
const session = await getServerSession(authOptions);
if (!session?.user?.id) throw new Error("Authentication required");
return { userId: session.user.id, type: "gallery" };
}),
// Documents - various file types, authenticated
documents: s3
.file()
.maxFileSize("10MB")
.maxFiles(5)
.types([
"application/pdf",
"application/msword",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"text/plain"
])
.middleware(async ({ req }) => {
const session = await getServerSession(authOptions);
if (!session?.user?.id) throw new Error("Authentication required");
return { userId: session.user.id, type: "documents" };
}),
// Public uploads - no authentication required
public: s3
.image()
.maxFileSize("1MB")
.maxFiles(1)
.formats(["jpeg", "png"])
// No middleware = public access
});
export type AppUploadRouter = typeof uploadRouter;
```
### API Route (App Router)
```typescript title="app/api/upload/route.ts"
import { uploadRouter } from '@/lib/upload';
export const { GET, POST } = uploadRouter.handlers;
```
### Upload Page
```typescript title="app/upload/page.tsx"
'use client';
import { upload } from '@/lib/upload-client';
import { useState } from 'react';

export default function UploadPage() {
  const [activeTab, setActiveTab] = useState<'profile' | 'gallery' | 'documents'>('profile');
  const profileUpload = upload.profilePicture();
  const galleryUpload = upload.gallery();
  const documentsUpload = upload.documents();

  const currentUpload = {
    profile: profileUpload,
    gallery: galleryUpload,
    documents: documentsUpload
  }[activeTab];

  return (
    <div className="max-w-2xl mx-auto p-6">
      <h1>File Upload Demo</h1>

      {/* Tab Navigation */}
      <nav className="flex border-b">
        {[
          { key: 'profile', label: 'Profile Picture', icon: '👤' },
          { key: 'gallery', label: 'Gallery', icon: '🖼️' },
          { key: 'documents', label: 'Documents', icon: '📄' }
        ].map(tab => (
          <button
            key={tab.key}
            onClick={() => setActiveTab(tab.key as any)}
            className={`px-4 py-2 text-sm font-medium border-b-2 ${
              activeTab === tab.key
                ? 'border-blue-500 text-blue-600'
                : 'border-transparent text-gray-500 hover:text-gray-700'
            }`}
          >
            {tab.icon} {tab.label}
          </button>
        ))}
      </nav>

      {/* Upload Interface */}
      <input
        type="file"
        multiple
        onChange={(e) => {
          const files = Array.from(e.target.files || []);
          currentUpload.uploadFiles(files);
        }}
        disabled={currentUpload.isUploading}
        className="block w-full text-sm text-gray-500 file:mr-4 file:py-2 file:px-4 file:rounded-full file:border-0 file:text-sm file:font-semibold file:bg-blue-50 file:text-blue-700 hover:file:bg-blue-100"
      />

      {/* File List */}
      {currentUpload.files.length > 0 && (
        <ul>
          {currentUpload.files.map((file) => (
            <li key={file.name}>
              <span>{file.name}</span>
              <span>{(file.size / 1024 / 1024).toFixed(2)} MB</span>
              <span>
                {file.status === 'success' && '✅'}
                {file.status === 'error' && '❌'}
                {file.status === 'uploading' && '⏳'}
                {file.status === 'pending' && '⏸️'}
              </span>
              {file.status === 'success' && file.url && (
                <a href={file.url} target="_blank" rel="noreferrer">View</a>
              )}
            </li>
          ))}
        </ul>
      )}
    </div>
  );
}
```
## Environment Variables
```bash title=".env.local"
# Cloudflare R2 Configuration
AWS_ACCESS_KEY_ID=your_r2_access_key
AWS_SECRET_ACCESS_KEY=your_r2_secret_key
AWS_ENDPOINT_URL=https://your-account-id.r2.cloudflarestorage.com
S3_BUCKET_NAME=your-bucket-name
R2_ACCOUNT_ID=your-account-id
# Next.js Configuration
NEXTAUTH_SECRET=your-nextauth-secret
NEXTAUTH_URL=http://localhost:3000
```
## Deployment Considerations
* Environment variables configured in dashboard
* Edge Runtime compatible
* Automatic HTTPS
* Configure environment variables
* Works with Netlify Functions
* CDN integration available
* Complete Next.js compatibility
* Environment variable management
* Automatic deployments
***
**Next.js Ready**: Pushduck works seamlessly with both Next.js App Router and Pages Router, providing the same great developer experience across all Next.js versions.
# Nitro/H3 (/docs/integrations/nitro-h3)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
## Using pushduck with Nitro/H3
Nitro is a universal web server framework that powers Nuxt.js, built on top of the H3 HTTP framework. It uses Web Standards APIs and provides excellent performance with universal deployment. Since Nitro/H3 works with standard `Request`/`Response` objects, pushduck handlers work directly without any adapters!
**Web Standards Native**: Nitro/H3 uses Web Standard `Request`/`Response` objects, making pushduck integration seamless with zero overhead and universal deployment capabilities.
## Quick Setup
**Install dependencies**
```bash
npm install pushduck
```
```bash
yarn add pushduck
```
```bash
pnpm add pushduck
```
```bash
bun add pushduck
```
**Configure upload router**
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create API route**
```typescript title="routes/api/upload/[...path].ts"
import { uploadRouter } from '~/lib/upload';
import { defineEventHandler, toWebRequest } from 'h3';

// Direct usage - h3's toWebRequest yields a Web Standard Request for pushduck
export default defineEventHandler(async (event) => {
  return uploadRouter.handlers(toWebRequest(event));
});
```
## Basic Integration
### Simple Upload Route
```typescript title="routes/api/upload/[...path].ts"
import { uploadRouter } from '~/lib/upload';
import { defineEventHandler, getMethod, createError, toWebRequest } from 'h3';

// Combined handler: pushduck routes GET and POST internally
export default defineEventHandler(async (event) => {
  const method = getMethod(event);
  if (method !== 'GET' && method !== 'POST') {
    throw createError({
      statusCode: 405,
      statusMessage: 'Method Not Allowed'
    });
  }
  return uploadRouter.handlers(toWebRequest(event));
});
```
### With H3 Utilities
```typescript title="routes/api/upload/[...path].ts"
import { uploadRouter } from '~/lib/upload';
import {
  defineEventHandler,
  getMethod,
  setHeader,
  createError,
  toWebRequest
} from 'h3';
export default defineEventHandler(async (event) => {
// Handle CORS
setHeader(event, 'Access-Control-Allow-Origin', '*');
setHeader(event, 'Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
setHeader(event, 'Access-Control-Allow-Headers', 'Content-Type');
// Handle preflight requests
if (getMethod(event) === 'OPTIONS') {
return '';
}
try {
    return await uploadRouter.handlers(toWebRequest(event));
} catch (error) {
throw createError({
statusCode: 500,
statusMessage: 'Upload failed',
data: error
});
}
});
```
## Advanced Configuration
### Authentication with H3
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
import { getCookie } from 'h3';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with session authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
      const cookieHeader = req.headers.get('cookie');
      const sessionId = parseCookie(cookieHeader)?.sessionId;
if (!sessionId) {
throw new Error('Authentication required');
}
const user = await getUserFromSession(sessionId);
if (!user) {
throw new Error('Invalid session');
}
return {
userId: user.id,
username: user.username,
};
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
export type AppUploadRouter = typeof uploadRouter;
// Helper functions
function parseCookie(cookieString: string | null | undefined) {
if (!cookieString) return {};
return Object.fromEntries(
cookieString.split('; ').map(c => {
const [key, ...v] = c.split('=');
return [key, v.join('=')];
})
);
}
async function getUserFromSession(sessionId: string) {
// Implement your session validation logic
return { id: 'user-123', username: 'demo-user' };
}
```
## Standalone Nitro App
### Basic Nitro Setup
```typescript title="nitro.config.ts"
export default defineNitroConfig({
srcDir: 'server',
routeRules: {
'/api/upload/**': {
cors: true,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type'
}
}
},
experimental: {
wasm: true
}
});
```
### Server Entry Point
```typescript title="server/index.ts"
import { createApp, toNodeListener, defineEventHandler, toWebRequest } from 'h3';
import { uploadRouter } from './lib/upload';
const app = createApp();
// Upload routes
app.use('/api/upload/**', defineEventHandler(async (event) => {
  return uploadRouter.handlers(toWebRequest(event));
}));
// Health check
app.use('/health', defineEventHandler(() => ({ status: 'ok' })));
export default toNodeListener(app);
```
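`toNodeListener` returns a plain Node request listener, so the app can be served with `node:http`. A minimal sketch:

```typescript title="server/start.ts"
import { createServer } from 'node:http';
import listener from './index';

// Serve the H3 app created in server/index.ts
createServer(listener).listen(3000, () => {
  console.log('H3 app listening on http://localhost:3000');
});
```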
## Client-Side Usage
### HTML with Vanilla JavaScript
```html title="public/index.html"
<!DOCTYPE html>
<title>File Upload Demo</title>
<h1>File Upload Demo</h1>
<!-- upload form and script go here -->
```
### With Framework Integration
```typescript title="plugins/upload.client.ts"
import { useUpload } from "pushduck/client";
import type { AppUploadRouter } from "~/lib/upload";
export const { UploadButton, UploadDropzone } = useUpload({
endpoint: "/api/upload",
});
```
## File Management
### File API Route
```typescript title="routes/api/files.get.ts"
import { defineEventHandler, getQuery, createError } from 'h3';
export default defineEventHandler(async (event) => {
const query = getQuery(event);
const userId = query.userId as string;
if (!userId) {
throw createError({
statusCode: 400,
statusMessage: 'User ID required'
});
}
// Fetch files from database
const files = await getFilesForUser(userId);
return {
files: files.map(file => ({
id: file.id,
name: file.name,
url: file.url,
size: file.size,
uploadedAt: file.createdAt,
})),
};
});
async function getFilesForUser(userId: string) {
// Implement your database query logic
return [];
}
```
### File Management Page
```html title="public/files.html"
<!DOCTYPE html>
<title>My Files</title>
<h1>My Files</h1>
<!-- file list markup and script go here -->
```
## Deployment Options
```typescript title="nitro.config.ts"
export default defineNitroConfig({
preset: 'vercel-edge',
// or 'vercel' for Node.js runtime
});
```
```typescript title="nitro.config.ts"
export default defineNitroConfig({
preset: 'netlify-edge',
// or 'netlify' for Node.js runtime
});
```
```typescript title="nitro.config.ts"
export default defineNitroConfig({
preset: 'node-server',
});
```
```typescript title="nitro.config.ts"
export default defineNitroConfig({
preset: 'cloudflare-workers',
});
```
## Environment Variables
```bash title=".env"
# AWS Configuration
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_S3_BUCKET=your-bucket-name
# Nitro
NITRO_PORT=3000
NITRO_HOST=0.0.0.0
```
## Middleware and Plugins
```typescript title="middleware/cors.ts"
export default defineEventHandler(async (event) => {
if (event.node.req.url?.startsWith('/api/upload')) {
setHeader(event, 'Access-Control-Allow-Origin', '*');
setHeader(event, 'Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
setHeader(event, 'Access-Control-Allow-Headers', 'Content-Type');
if (getMethod(event) === 'OPTIONS') {
return '';
}
}
});
```
```typescript title="plugins/database.ts"
export default async (nitroApp) => {
// Initialize database connection
console.log('Database plugin initialized');
// Add database to context
nitroApp.hooks.hook('request', async (event) => {
event.context.db = await getDatabase();
});
};
```
## Real-Time Upload Progress
```html title="public/advanced-upload.html"
<!DOCTYPE html>
<title>Advanced Upload</title>
<h1>Advanced Upload</h1>
<!-- progress UI goes here; see the sketch below -->
```
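For real progress reporting in plain JavaScript, the browser-native `XMLHttpRequest` API exposes upload progress events. Here is a minimal sketch that PUTs a file to a presigned URL and reports percentage progress (obtaining the presigned URL from your upload route is assumed):

```typescript
// Sketch: real upload progress via XMLHttpRequest against a presigned URL
function uploadWithProgress(
  file: File,
  presignedUrl: string,
  onProgress: (percent: number) => void
): Promise<void> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.upload.addEventListener('progress', (e) => {
      if (e.lengthComputable) {
        onProgress(Math.round((e.loaded / e.total) * 100));
      }
    });
    xhr.addEventListener('load', () => {
      xhr.status < 300 ? resolve() : reject(new Error(`Upload failed: ${xhr.status}`));
    });
    xhr.addEventListener('error', () => reject(new Error('Network error')));
    xhr.open('PUT', presignedUrl);
    xhr.setRequestHeader('Content-Type', file.type);
    xhr.send(file);
  });
}
```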
## Troubleshooting
**Common Issues**
1. **Route not found**: Ensure your route is `routes/api/upload/[...path].ts`
2. **Build errors**: Check that pushduck and h3 are properly installed
3. **CORS issues**: Use Nitro's built-in CORS handling or middleware
4. **Environment variables**: Make sure they're accessible in your deployment environment
### Debug Mode
Enable debug logging:
```typescript title="lib/upload.ts"
export const uploadRouter = createS3Router({
// ... routes
}).middleware(async ({ req, file }) => {
if (process.env.NODE_ENV === "development") {
console.log("Upload request:", req.url);
console.log("File:", file.name, file.size);
}
return {};
});
```
### Nitro Configuration
```typescript title="nitro.config.ts"
export default defineNitroConfig({
srcDir: 'server',
buildDir: '.nitro',
output: {
dir: '.output',
serverDir: '.output/server',
publicDir: '.output/public'
},
runtimeConfig: {
awsAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
awsSecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
awsRegion: process.env.AWS_REGION,
s3BucketName: process.env.S3_BUCKET_NAME,
},
experimental: {
wasm: true
}
});
```
Nitro/H3 provides an excellent foundation for building universal web applications with pushduck, offering flexibility, performance, and deployment options across any platform while maintaining full compatibility with Web Standards APIs.
# Nuxt.js (/docs/integrations/nuxtjs)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
**🚧 Client-Side In Development**: Nuxt.js server-side integration is fully functional with Web Standards APIs. However, Nuxt.js-specific client-side components and hooks are still in development. You can use the standard pushduck client APIs for now.
## Using pushduck with Nuxt.js
Nuxt.js is the intuitive Vue.js framework for building full-stack web applications. It uses Web Standards APIs and provides excellent performance with server-side rendering. Since Nuxt.js uses standard `Request`/`Response` objects, pushduck handlers work directly without any adapters!
**Web Standards Native**: Nuxt.js server routes use Web Standard `Request`/`Response` objects, making pushduck integration seamless with zero overhead.
## Quick Setup
**Install dependencies**
```bash
npm install pushduck
```
**Configure upload router**
```typescript title="server/utils/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create API route**
```typescript title="server/api/upload/[...path].ts"
import { uploadRouter } from '~/server/utils/upload';
import { toWebRequest } from 'h3';

// Direct usage - toWebRequest yields a Web Standard Request for pushduck
export default defineEventHandler(async (event) => {
  return uploadRouter.handlers(toWebRequest(event));
});
```
## Basic Integration
### Simple Upload Route
```typescript title="server/api/upload/[...path].ts"
import { uploadRouter } from '~/server/utils/upload';
import { toWebRequest, createError } from 'h3';

// Combined handler: pushduck routes GET and POST internally
export default defineEventHandler(async (event) => {
  const method = event.node.req.method;
  if (method !== 'GET' && method !== 'POST') {
    throw createError({
      statusCode: 405,
      statusMessage: 'Method Not Allowed'
    });
  }
  return uploadRouter.handlers(toWebRequest(event));
});
```
### With Server Middleware
```typescript title="server/middleware/cors.ts"
export default defineEventHandler(async (event) => {
if (event.node.req.url?.startsWith('/api/upload')) {
// Handle CORS for upload routes
setHeader(event, 'Access-Control-Allow-Origin', '*');
setHeader(event, 'Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
setHeader(event, 'Access-Control-Allow-Headers', 'Content-Type');
if (event.node.req.method === 'OPTIONS') {
return '';
}
}
});
```
## Advanced Configuration
### Authentication with Nuxt
```typescript title="server/utils/upload.ts"
import { createUploadConfig } from 'pushduck/server';
import jwt from 'jsonwebtoken';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with JWT authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
      const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authorization required');
}
const token = authHeader.substring(7);
try {
const payload = jwt.verify(token, process.env.JWT_SECRET!) as any;
return {
userId: payload.sub,
userRole: payload.role
};
} catch (error) {
throw new Error('Invalid token');
}
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
export type AppUploadRouter = typeof uploadRouter;
```
## Client-Side Usage
### Upload Composable
```typescript title="composables/useUpload.ts"
import { useUpload } from "pushduck/client";
import type { AppUploadRouter } from "~/server/utils/upload";
export const { UploadButton, UploadDropzone } = useUpload({
endpoint: "/api/upload",
});
```
### Upload Component
```vue title="components/FileUpload.vue"
<template>
  <h3>Image Upload</h3>
  <!-- image upload UI goes here -->
  <h3>Document Upload</h3>
  <!-- document upload UI goes here -->
</template>
```
### Using in Pages
```vue title="pages/index.vue"
<template>
  <h1>File Upload Demo</h1>
  <FileUpload />
</template>
```
## File Management
### Server-Side File API
```typescript title="server/api/files.get.ts"
export default defineEventHandler(async (event) => {
const query = getQuery(event);
const userId = query.userId as string;
if (!userId) {
throw createError({
statusCode: 400,
statusMessage: 'User ID required'
});
}
// Fetch files from database
const files = await $fetch('/api/database/files', {
query: { userId }
});
return {
files: files.map((file: any) => ({
id: file.id,
name: file.name,
url: file.url,
size: file.size,
uploadedAt: file.createdAt,
})),
};
});
```
### File Management Page
```vue title="pages/files.vue"
<template>
  <h1>My Files</h1>
  <h2>Uploaded Files</h2>
  <p v-if="files.length === 0">No files uploaded yet.</p>
  <ul v-else>
    <li v-for="file in files" :key="file.id">
      <span>{{ file.name }}</span>
      <span>{{ formatFileSize(file.size) }}</span>
      <span>{{ new Date(file.uploadedAt).toLocaleDateString() }}</span>
      <a :href="file.url" target="_blank">View File</a>
    </li>
  </ul>
</template>
<!-- script setup: fetch files from /api/files and define formatFileSize -->
```
## Deployment Options
```typescript title="nuxt.config.ts"
export default defineNuxtConfig({
nitro: {
preset: 'vercel-edge',
// or 'vercel' for Node.js runtime
},
runtimeConfig: {
awsAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
awsSecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
awsRegion: process.env.AWS_REGION,
s3BucketName: process.env.S3_BUCKET_NAME,
}
});
```
```typescript title="nuxt.config.ts"
export default defineNuxtConfig({
nitro: {
preset: 'netlify-edge',
// or 'netlify' for Node.js runtime
},
runtimeConfig: {
awsAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
awsSecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
awsRegion: process.env.AWS_REGION,
s3BucketName: process.env.S3_BUCKET_NAME,
}
});
```
```typescript title="nuxt.config.ts"
export default defineNuxtConfig({
nitro: {
preset: 'node-server',
},
runtimeConfig: {
awsAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
awsSecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
awsRegion: process.env.AWS_REGION,
s3BucketName: process.env.S3_BUCKET_NAME,
}
});
```
```typescript title="nuxt.config.ts"
export default defineNuxtConfig({
nitro: {
preset: 'cloudflare-pages',
},
runtimeConfig: {
awsAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
awsSecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
awsRegion: process.env.AWS_REGION,
s3BucketName: process.env.S3_BUCKET_NAME,
}
});
```
## Environment Variables
```bash title=".env"
# AWS Configuration
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_S3_BUCKET=your-bucket-name
# JWT Secret (for authentication)
JWT_SECRET=your-jwt-secret
# Nuxt
NUXT_PUBLIC_UPLOAD_ENDPOINT=http://localhost:3000/api/upload
```
## Troubleshooting
**Common Issues**
1. **Route not found**: Ensure your route is `server/api/upload/[...path].ts`
2. **Build errors**: Check that pushduck is properly installed
3. **CORS issues**: Use server middleware for CORS configuration
4. **Runtime config**: Make sure environment variables are properly configured
### Debug Mode
Enable debug logging:
```typescript title="server/utils/upload.ts"
export const uploadRouter = createS3Router({
// ... routes
}).middleware(async ({ req, file }) => {
if (process.dev) {
console.log("Upload request:", req.url);
console.log("File:", file.name, file.size);
}
return {};
});
```
### Nitro Configuration
```typescript title="nuxt.config.ts"
export default defineNuxtConfig({
nitro: {
experimental: {
wasm: true
},
// Enable debugging in development
devProxy: {
'/api/upload': {
target: 'http://localhost:3000/api/upload',
changeOrigin: true
}
}
}
});
```
Nuxt.js provides an excellent foundation for building full-stack Vue.js applications with pushduck, combining the power of Vue's reactive framework with Web Standards APIs and Nitro's universal deployment capabilities.
# Qwik (/docs/integrations/qwik)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
**🚧 Client-Side In Development**: Qwik server-side integration is fully functional with Web Standards APIs. However, Qwik-specific client-side components and hooks are still in development. You can use the standard pushduck client APIs for now.
## Using pushduck with Qwik
Qwik is a revolutionary web framework focused on resumability and edge optimization. It uses Web Standards APIs and provides instant loading with minimal JavaScript. Since Qwik uses standard `Request`/`Response` objects, pushduck handlers work directly without any adapters!
**Web Standards Native**: Qwik server endpoints use Web Standard `Request`/`Response` objects, making pushduck integration seamless with zero overhead and perfect for edge deployment.
## Quick Setup
**Install dependencies**
```bash
npm install pushduck
```
**Configure upload router**
```typescript title="src/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: import.meta.env.VITE_AWS_ACCESS_KEY_ID!,
secretAccessKey: import.meta.env.VITE_AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: import.meta.env.VITE_AWS_ENDPOINT_URL!,
bucket: import.meta.env.VITE_S3_BUCKET_NAME!,
accountId: import.meta.env.VITE_R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create API route**
```typescript title="src/routes/api/upload/[...path]/index.ts"
import type { RequestHandler } from '@builder.io/qwik-city';
import { uploadRouter } from '~/lib/upload';
// Direct usage - no adapter needed!
export const onGet: RequestHandler = async ({ request }) => {
return uploadRouter.handlers(request);
};
export const onPost: RequestHandler = async ({ request }) => {
return uploadRouter.handlers(request);
};
```
## Basic Integration
### Simple Upload Route
```typescript title="src/routes/api/upload/[...path]/index.ts"
import type { RequestHandler } from '@builder.io/qwik-city';
import { uploadRouter } from '~/lib/upload';
// Method 1: Combined handler (recommended)
export const onRequest: RequestHandler = async ({ request }) => {
return uploadRouter.handlers(request);
};
// Method 2: Separate handlers (if you need method-specific logic).
// Use these *instead of* Method 1 - exporting onRequest alongside
// onGet/onPost would run both handlers for the same request.
//
// export const onGet: RequestHandler = async ({ request }) => {
//   return uploadRouter.handlers.GET(request);
// };
//
// export const onPost: RequestHandler = async ({ request }) => {
//   return uploadRouter.handlers.POST(request);
// };
```
### With CORS Support
```typescript title="src/routes/api/upload/[...path]/index.ts"
import type { RequestHandler } from '@builder.io/qwik-city';
import { uploadRouter } from '~/lib/upload';
export const onRequest: RequestHandler = async ({ request, headers }) => {
// Handle CORS preflight
if (request.method === 'OPTIONS') {
headers.set('Access-Control-Allow-Origin', '*');
headers.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
headers.set('Access-Control-Allow-Headers', 'Content-Type');
return new Response(null, { status: 200 });
}
const response = await uploadRouter.handlers(request);
// Add CORS headers to actual response
headers.set('Access-Control-Allow-Origin', '*');
return response;
};
```
## Advanced Configuration
### Authentication with Qwik
```typescript title="src/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: import.meta.env.VITE_AWS_ACCESS_KEY_ID!,
secretAccessKey: import.meta.env.VITE_AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: import.meta.env.VITE_AWS_ENDPOINT_URL!,
bucket: import.meta.env.VITE_S3_BUCKET_NAME!,
accountId: import.meta.env.VITE_R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with cookie-based authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
const cookies = req.headers.get('Cookie');
const sessionId = parseCookie(cookies)?.sessionId;
if (!sessionId) {
throw new Error('Authentication required');
}
const user = await getUserFromSession(sessionId);
if (!user) {
throw new Error('Invalid session');
}
return {
userId: user.id,
username: user.username,
};
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
export type AppUploadRouter = typeof uploadRouter;
// Helper functions
function parseCookie(cookieString: string | null) {
if (!cookieString) return {};
return Object.fromEntries(
cookieString.split('; ').map(c => {
const [key, ...v] = c.split('=');
return [key, v.join('=')];
})
);
}
async function getUserFromSession(sessionId: string) {
// Implement your session validation logic
return { id: 'user-123', username: 'demo-user' };
}
```
## Client-Side Usage
### Upload Component
```tsx title="src/components/file-upload.tsx"
import { component$, useSignal, $ } from '@builder.io/qwik';
import { useUpload } from "pushduck/client";
import type { AppUploadRouter } from "~/lib/upload";
export const FileUpload = component$(() => {
const uploadProgress = useSignal(0);
const isUploading = useSignal(false);
const { UploadButton, UploadDropzone } = useUpload<AppUploadRouter>({
endpoint: "/api/upload",
});
const handleUploadComplete = $((files: any[]) => {
console.log("Files uploaded:", files);
alert("Upload completed!");
});
const handleUploadError = $((error: Error) => {
console.error("Upload error:", error);
alert(`Upload failed: ${error.message}`);
});
return (
Image Upload
Document Upload
);
});
```
### Using in Routes
```tsx title="src/routes/index.tsx"
import { component$ } from '@builder.io/qwik';
import type { DocumentHead } from '@builder.io/qwik-city';
import { FileUpload } from '~/components/file-upload';
export default component$(() => {
  return (
    <div>
      <h1>File Upload Demo</h1>
      <FileUpload />
    </div>
  );
});
export const head: DocumentHead = {
title: 'File Upload Demo',
meta: [
{
name: 'description',
content: 'Qwik file upload demo with pushduck',
},
],
};
```
## File Management
### Server-Side File Loader
```typescript title="src/routes/files/index.tsx"
import { component$ } from '@builder.io/qwik';
import type { DocumentHead } from '@builder.io/qwik-city';
import { routeLoader$ } from '@builder.io/qwik-city';
import { FileUpload } from '~/components/file-upload';
export const useFiles = routeLoader$(async (requestEvent) => {
const userId = 'current-user'; // Get from session/auth
// Fetch files from database
const files = await getFilesForUser(userId);
return {
files: files.map(file => ({
id: file.id,
name: file.name,
url: file.url,
size: file.size,
uploadedAt: file.createdAt,
})),
};
});
export default component$(() => {
const filesData = useFiles();
const formatFileSize = (bytes: number): string => {
const sizes = ['Bytes', 'KB', 'MB', 'GB'];
if (bytes === 0) return '0 Bytes';
const i = Math.floor(Math.log(bytes) / Math.log(1024));
return Math.round(bytes / Math.pow(1024, i) * 100) / 100 + ' ' + sizes[i];
};
  return (
    <div>
      <h1>My Files</h1>
      <FileUpload />
      <h2>Uploaded Files</h2>
      {filesData.value.files.length === 0 ? (
        <p>No files uploaded yet.</p>
      ) : (
        <ul>
          {filesData.value.files.map((file) => (
            <li key={file.id}>
              <strong>{file.name}</strong>
              <span>{formatFileSize(file.size)}</span>
              <span>{new Date(file.uploadedAt).toLocaleDateString()}</span>
              <a href={file.url}>View File</a>
            </li>
          ))}
        </ul>
      )}
    </div>
  );
});
export const head: DocumentHead = {
title: 'My Files',
};
async function getFilesForUser(userId: string) {
// Implement your database query logic
return [];
}
```
## Deployment Options
### Cloudflare Pages
```typescript title="vite.config.ts"
import { defineConfig } from 'vite';
import { qwikVite } from '@builder.io/qwik/optimizer';
import { qwikCity } from '@builder.io/qwik-city/vite';
import { qwikCloudflarePages } from '@builder.io/qwik-city/adapters/cloudflare-pages/vite';
export default defineConfig(() => {
return {
plugins: [
qwikCity({
adapter: qwikCloudflarePages(),
}),
qwikVite(),
],
};
});
```
### Vercel Edge
```typescript title="vite.config.ts"
import { defineConfig } from 'vite';
import { qwikVite } from '@builder.io/qwik/optimizer';
import { qwikCity } from '@builder.io/qwik-city/vite';
import { qwikVercel } from '@builder.io/qwik-city/adapters/vercel-edge/vite';
export default defineConfig(() => {
return {
plugins: [
qwikCity({
adapter: qwikVercel(),
}),
qwikVite(),
],
};
});
```
### Netlify Edge
```typescript title="vite.config.ts"
import { defineConfig } from 'vite';
import { qwikVite } from '@builder.io/qwik/optimizer';
import { qwikCity } from '@builder.io/qwik-city/vite';
import { qwikNetlifyEdge } from '@builder.io/qwik-city/adapters/netlify-edge/vite';
export default defineConfig(() => {
return {
plugins: [
qwikCity({
adapter: qwikNetlifyEdge(),
}),
qwikVite(),
],
};
});
```
### Deno
```typescript title="vite.config.ts"
import { defineConfig } from 'vite';
import { qwikVite } from '@builder.io/qwik/optimizer';
import { qwikCity } from '@builder.io/qwik-city/vite';
import { qwikDeno } from '@builder.io/qwik-city/adapters/deno/vite';
export default defineConfig(() => {
return {
plugins: [
qwikCity({
adapter: qwikDeno(),
}),
qwikVite(),
],
};
});
```
## Environment Variables
```bash title=".env"
# AWS Configuration
VITE_AWS_REGION=us-east-1
VITE_AWS_ACCESS_KEY_ID=your_access_key
VITE_AWS_SECRET_ACCESS_KEY=your_secret_key
VITE_S3_BUCKET_NAME=your-bucket-name
# Qwik
VITE_PUBLIC_UPLOAD_ENDPOINT=http://localhost:5173/api/upload
```
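Note that `VITE_`-prefixed variables are statically replaced by Vite and can end up in client bundles, so they are best reserved for genuinely public values. For secrets, a safer pattern is reading them server-side from the request event. A minimal sketch, using a hypothetical health route just to illustrate `env.get()`:
```typescript title="src/routes/api/health/index.ts"
import type { RequestHandler } from '@builder.io/qwik-city';

// Sketch: env.get() reads server-side environment variables without
// exposing them to the client bundle.
export const onGet: RequestHandler = async ({ env, json }) => {
  const bucket = env.get('S3_BUCKET_NAME');
  json(200, { bucketConfigured: Boolean(bucket) });
};
```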
## Performance Benefits
No adapter layer means zero performance overhead - pushduck handlers run directly in Qwik City, complementing Qwik's resumable, minimal-JavaScript approach.
## Real-Time Upload Progress
```tsx title="src/components/advanced-upload.tsx"
import { component$, useSignal, $ } from '@builder.io/qwik';
export const AdvancedUpload = component$(() => {
const uploadProgress = useSignal(0);
const isUploading = useSignal(false);
const handleFileUpload = $(async (event: Event) => {
const target = event.target as HTMLInputElement;
const files = target.files;
if (!files || files.length === 0) return;
isUploading.value = true;
uploadProgress.value = 0;
try {
// Simulate upload progress
for (let i = 0; i <= 100; i += 10) {
uploadProgress.value = i;
await new Promise(resolve => setTimeout(resolve, 100));
}
alert('Upload completed!');
} catch (error) {
console.error('Upload failed:', error);
alert('Upload failed!');
} finally {
isUploading.value = false;
uploadProgress.value = 0;
}
});
  return (
    <div>
      <input type="file" multiple onChange$={handleFileUpload} disabled={isUploading.value} />
      {isUploading.value && (
        <p>{uploadProgress.value}% uploaded</p>
      )}
    </div>
  );
});
```
## Qwik City Form Integration
```tsx title="src/routes/upload-form/index.tsx"
import { component$ } from '@builder.io/qwik';
import type { DocumentHead } from '@builder.io/qwik-city';
import { routeAction$, Form, zod$, z } from '@builder.io/qwik-city';
import { FileUpload } from '~/components/file-upload';
export const useUploadAction = routeAction$(async (data, requestEvent) => {
// Handle form submission
// Files are already uploaded via pushduck, just save metadata
console.log('Form data:', data);
// Redirect to files page
throw requestEvent.redirect(302, '/files');
}, zod$({
title: z.string().min(1),
description: z.string().optional(),
}));
export default component$(() => {
const uploadAction = useUploadAction();
  return (
    <Form action={uploadAction}>
      <h1>Upload Files</h1>
      <input type="text" name="title" required />
      <textarea name="description" />
      <FileUpload />
      <button type="submit">Upload Files</button>
    </Form>
  );
});
export const head: DocumentHead = {
title: 'Upload Form',
};
```
## Troubleshooting
**Common Issues**
1. **Route not found**: Ensure your route is `src/routes/api/upload/[...path]/index.ts`
2. **Build errors**: Check that pushduck is properly installed and configured
3. **Environment variables**: Use `import.meta.env.VITE_` prefix for client-side variables
4. **Resumability**: Remember to use `$` suffix for event handlers and functions
### Debug Mode
Enable debug logging:
```typescript title="src/lib/upload.ts"
export const uploadRouter = createS3Router({
// ... routes
}).middleware(async ({ req, file }) => {
if (import.meta.env.DEV) {
console.log("Upload request:", req.url);
console.log("File:", file.name, file.size);
}
return {};
});
```
### Qwik Configuration
```typescript title="vite.config.ts"
import { defineConfig } from 'vite';
import { qwikVite } from '@builder.io/qwik/optimizer';
import { qwikCity } from '@builder.io/qwik-city/vite';
export default defineConfig(() => {
return {
plugins: [qwikCity(), qwikVite()],
preview: {
headers: {
'Cache-Control': 'public, max-age=600',
},
},
// Environment variables configuration
define: {
'import.meta.env.VITE_AWS_ACCESS_KEY_ID': JSON.stringify(process.env.VITE_AWS_ACCESS_KEY_ID),
}
};
});
```
Qwik provides a revolutionary approach to web development with pushduck, offering instant loading and resumability while maintaining full compatibility with Web Standards APIs for optimal edge performance.
# Remix (/docs/integrations/remix)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
## Using pushduck with Remix
Remix is a full-stack React framework focused on web standards and modern UX. It uses Web Standards APIs and provides server-side rendering with client-side hydration. Since Remix uses standard `Request`/`Response` objects, pushduck handlers work directly without any adapters!
**Web Standards Native**: Remix loader and action functions use Web Standard `Request`/`Response` objects, making pushduck integration seamless with zero overhead.
## Quick Setup
**Install dependencies**
```bash
npm install pushduck
```
**Configure upload router**
```typescript title="app/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create API route**
```typescript title="app/routes/api.upload.$.tsx"
import type { ActionFunctionArgs, LoaderFunctionArgs } from "@remix-run/node";
import { uploadRouter } from "~/lib/upload";
// Direct usage - no adapter needed!
export async function loader({ request }: LoaderFunctionArgs) {
return uploadRouter.handlers(request);
}
export async function action({ request }: ActionFunctionArgs) {
return uploadRouter.handlers(request);
}
```
## Basic Integration
### Simple Upload Route
```typescript title="app/routes/api.upload.$.tsx"
import type { ActionFunctionArgs, LoaderFunctionArgs } from "@remix-run/node";
import { uploadRouter } from "~/lib/upload";
// Method 1: Combined handler (recommended)
export async function loader({ request }: LoaderFunctionArgs) {
return uploadRouter.handlers(request);
}
export async function action({ request }: ActionFunctionArgs) {
return uploadRouter.handlers(request);
}
// Method 2: Method-specific handlers (if you need different logic).
// Use these *instead of* Method 1 - a route module can only export
// one loader and one action.
//
// export async function loader({ request }: LoaderFunctionArgs) {
//   if (request.method === 'GET') {
//     return uploadRouter.handlers.GET(request);
//   }
//   throw new Response("Method not allowed", { status: 405 });
// }
//
// export async function action({ request }: ActionFunctionArgs) {
//   if (request.method === 'POST') {
//     return uploadRouter.handlers.POST(request);
//   }
//   throw new Response("Method not allowed", { status: 405 });
// }
```
### With Resource Route
```typescript title="app/routes/api.upload.$.tsx"
import type { ActionFunctionArgs, LoaderFunctionArgs } from "@remix-run/node";
import { uploadRouter } from "~/lib/upload";
// Handle CORS for cross-origin requests
export async function loader({ request }: LoaderFunctionArgs) {
// Handle preflight requests
if (request.method === 'OPTIONS') {
return new Response(null, {
status: 200,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type',
},
});
}
const response = await uploadRouter.handlers(request);
// Add CORS headers
response.headers.set('Access-Control-Allow-Origin', '*');
return response;
}
export async function action({ request }: ActionFunctionArgs) {
const response = await uploadRouter.handlers(request);
// Add CORS headers
response.headers.set('Access-Control-Allow-Origin', '*');
return response;
}
```
## Advanced Configuration
### Authentication with Remix
```typescript title="app/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
import { getSession } from '~/sessions';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with session authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
const cookie = req.headers.get("Cookie");
const session = await getSession(cookie);
if (!session.has("userId")) {
throw new Error("Authentication required");
}
return {
userId: session.get("userId"),
username: session.get("username"),
};
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
export type AppUploadRouter = typeof uploadRouter;
```
## Client-Side Usage
### Remix Upload Hook
```typescript title="app/hooks/useUpload.ts"
import { useUpload } from "pushduck/client";
import type { AppUploadRouter } from "~/lib/upload";
export const { UploadButton, UploadDropzone } = useUpload<AppUploadRouter>({
endpoint: "/api/upload",
});
```
### Upload Component
```tsx title="app/components/FileUpload.tsx"
import { UploadButton, UploadDropzone } from "~/hooks/useUpload";
export function FileUpload() {
function handleUploadComplete(files: any[]) {
console.log("Files uploaded:", files);
alert("Upload completed!");
}
function handleUploadError(error: Error) {
console.error("Upload error:", error);
alert(`Upload failed: ${error.message}`);
}
return (
Image Upload
Document Upload
);
}
```
### Using in Routes
```tsx title="app/routes/_index.tsx"
import { FileUpload } from "~/components/FileUpload";
export default function Index() {
  return (
    <div>
      <h1>File Upload Demo</h1>
      <FileUpload />
    </div>
  );
}
```
## File Management
### Server-Side File Loader
```typescript title="app/routes/files.tsx"
import type { LoaderFunctionArgs } from "@remix-run/node";
import { json } from "@remix-run/node";
import { useLoaderData } from "@remix-run/react";
import { FileUpload } from "~/components/FileUpload";
import { getSession } from "~/sessions";
import { db } from "~/db.server"; // placeholder path - your database client module
export async function loader({ request }: LoaderFunctionArgs) {
const cookie = request.headers.get("Cookie");
const session = await getSession(cookie);
if (!session.has("userId")) {
throw new Response("Unauthorized", { status: 401 });
}
const userId = session.get("userId");
// Fetch files from database
const files = await db.file.findMany({
where: { userId },
orderBy: { createdAt: 'desc' },
});
return json({
files: files.map(file => ({
id: file.id,
name: file.name,
url: file.url,
size: file.size,
uploadedAt: file.createdAt,
})),
});
}
export default function FilesPage() {
const { files } = useLoaderData<typeof loader>();
function formatFileSize(bytes: number): string {
const sizes = ['Bytes', 'KB', 'MB', 'GB'];
if (bytes === 0) return '0 Bytes';
const i = Math.floor(Math.log(bytes) / Math.log(1024));
return Math.round(bytes / Math.pow(1024, i) * 100) / 100 + ' ' + sizes[i];
}
  return (
    <div>
      <h1>My Files</h1>
      <FileUpload />
      <h2>Uploaded Files</h2>
      {files.length === 0 ? (
        <p>No files uploaded yet.</p>
      ) : (
        <ul>
          {files.map((file) => (
            <li key={file.id}>
              <strong>{file.name}</strong>
              <span>{formatFileSize(file.size)}</span>
              <span>{new Date(file.uploadedAt).toLocaleDateString()}</span>
              <a href={file.url}>View File</a>
            </li>
          ))}
        </ul>
      )}
    </div>
  );
}
```
## Deployment Options
### Vercel
```typescript title="remix.config.js"
/** @type {import('@remix-run/dev').AppConfig} */
export default {
ignoredRouteFiles: ["**/.*"],
server: "./server.ts",
serverBuildPath: "api/index.js",
// Vercel configuration
serverConditions: ["workerd", "worker", "browser"],
serverDependenciesToBundle: "all",
serverMainFields: ["browser", "module", "main"],
serverMinify: true,
serverModuleFormat: "esm",
serverPlatform: "neutral",
};
```
### Netlify
```typescript title="remix.config.js"
/** @type {import('@remix-run/dev').AppConfig} */
export default {
ignoredRouteFiles: ["**/.*"],
server: "./server.ts",
serverBuildPath: ".netlify/functions-internal/server.js",
// Netlify configuration
serverConditions: ["deno", "worker", "browser"],
serverDependenciesToBundle: "all",
serverMainFields: ["browser", "module", "main"],
serverMinify: true,
serverModuleFormat: "esm",
serverPlatform: "neutral",
};
```
### Docker
```dockerfile title="Dockerfile"
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
```
```json title="package.json"
{
"scripts": {
"build": "remix build",
"dev": "remix dev",
"start": "remix-serve build",
"typecheck": "tsc"
}
}
```
## Environment Variables
```bash title=".env"
# AWS Configuration
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
S3_BUCKET_NAME=your-bucket-name
# Session Secret
SESSION_SECRET=your-session-secret
# Database
DATABASE_URL=your-database-url
```
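Since the upload config dereferences these variables with non-null assertions, it helps to validate them once at startup. A minimal sketch (the module name is an arbitrary choice):
```typescript title="app/lib/env.server.ts"
// Sketch: fail fast when a required variable is missing instead of
// failing later inside the upload config.
const required = [
  "AWS_REGION",
  "AWS_ACCESS_KEY_ID",
  "AWS_SECRET_ACCESS_KEY",
  "S3_BUCKET_NAME",
  "SESSION_SECRET",
] as const;

for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}
```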
## Performance Benefits
No adapter layer means zero performance overhead - pushduck handlers run directly in Remix loaders and actions.
## Real-Time Upload Progress
```tsx title="app/components/AdvancedUpload.tsx"
import { useState, type ChangeEvent } from "react";
export function AdvancedUpload() {
const [uploadProgress, setUploadProgress] = useState(0);
const [isUploading, setIsUploading] = useState(false);
async function handleFileUpload(event: ChangeEvent<HTMLInputElement>) {
const files = event.target.files;
if (!files || files.length === 0) return;
setIsUploading(true);
setUploadProgress(0);
try {
// Simulate upload progress
for (let i = 0; i <= 100; i += 10) {
setUploadProgress(i);
await new Promise(resolve => setTimeout(resolve, 100));
}
alert('Upload completed!');
} catch (error) {
console.error('Upload failed:', error);
alert('Upload failed!');
} finally {
setIsUploading(false);
setUploadProgress(0);
}
}
  return (
    <div>
      <input type="file" multiple onChange={handleFileUpload} disabled={isUploading} />
      {isUploading && (
        <p>{uploadProgress}% uploaded</p>
      )}
    </div>
  );
}
```
## Form Integration
```tsx title="app/routes/upload-form.tsx"
import type { ActionFunctionArgs } from "@remix-run/node";
import { redirect } from "@remix-run/node";
import { Form, useActionData, useNavigation } from "@remix-run/react";
export async function action({ request }: ActionFunctionArgs) {
const formData = await request.formData();
const title = formData.get("title") as string;
const description = formData.get("description") as string;
// Handle form submission with file uploads
// Files are already uploaded via pushduck, just save metadata
return redirect("/files");
}
export default function UploadForm() {
  const actionData = useActionData<typeof action>();
  const navigation = useNavigation();
  const isSubmitting = navigation.state === "submitting";
  return (
    <Form method="post">
      <h1>Upload Files</h1>
      <label>Title <input type="text" name="title" required /></label>
      <label>Description <textarea name="description" /></label>
      <label>Files {/* upload files with the pushduck upload components */}</label>
      <button type="submit" disabled={isSubmitting}>
        {isSubmitting ? "Uploading..." : "Upload Files"}
      </button>
    </Form>
  );
}
```
## Troubleshooting
**Common Issues**
1. **Route not found**: Ensure your route is `app/routes/api.upload.$.tsx`
2. **Build errors**: Check that pushduck is properly installed
3. **Session issues**: Make sure your session configuration is correct
4. **CORS errors**: Add proper CORS headers in your resource routes
### Debug Mode
Enable debug logging:
```typescript title="app/lib/upload.ts"
export const uploadRouter = createS3Router({
// ... routes
}).middleware(async ({ req, file }) => {
if (process.env.NODE_ENV === "development") {
console.log("Upload request:", req.url);
console.log("File:", file.name, file.size);
}
return {};
});
```
### Session Configuration
```typescript title="app/sessions.ts"
import { createCookieSessionStorage } from "@remix-run/node";
export const { getSession, commitSession, destroySession } =
createCookieSessionStorage({
cookie: {
name: "__session",
httpOnly: true,
maxAge: 60 * 60 * 24 * 30, // 30 days
path: "/",
sameSite: "lax",
secrets: [process.env.SESSION_SECRET!],
secure: process.env.NODE_ENV === "production",
},
});
```
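For completeness, a minimal login action sketch that stores the `userId` and `username` the upload middleware reads. `verifyCredentials` is a placeholder for your own check:
```typescript title="app/routes/login.tsx"
import type { ActionFunctionArgs } from "@remix-run/node";
import { redirect } from "@remix-run/node";
import { getSession, commitSession } from "~/sessions";

// Sketch: put the user into the cookie session the upload middleware checks.
export async function action({ request }: ActionFunctionArgs) {
  const form = await request.formData();
  const user = await verifyCredentials(
    form.get("email") as string,
    form.get("password") as string
  );

  const session = await getSession(request.headers.get("Cookie"));
  session.set("userId", user.id);
  session.set("username", user.username);

  return redirect("/files", {
    headers: { "Set-Cookie": await commitSession(session) },
  });
}

async function verifyCredentials(email: string, password: string) {
  // placeholder: look the user up and check the password hash
  return { id: "user-123", username: email };
}
```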
Remix provides an excellent foundation for building full-stack React applications with pushduck, combining the power of React with Web Standards APIs and progressive enhancement for optimal user experience.
# SolidJS Start (/docs/integrations/solidjs-start)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
## Using pushduck with SolidJS Start
SolidJS Start is a full-stack SolidJS framework with built-in Web Standards support. Since SolidJS Start provides `event.request` as a Web Standard `Request` object, pushduck handlers work directly without any adapters!
**Web Standards Native**: SolidJS Start's API handlers receive `event.request` as a Web Standard `Request` object, making pushduck integration seamless with zero overhead.
## Quick Setup
**Install dependencies**
```bash
npm install pushduck
```
```bash
yarn add pushduck
```
```bash
pnpm add pushduck
```
```bash
bun add pushduck
```
**Configure upload router**
```typescript title="src/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create API route**
```typescript title="src/routes/api/upload/[...path].ts"
import { APIEvent } from '@solidjs/start/server';
import { uploadRouter } from '~/lib/upload';
export async function GET(event: APIEvent) {
return uploadRouter.handlers(event.request);
}
export async function POST(event: APIEvent) {
return uploadRouter.handlers(event.request);
}
```
## Basic Integration
### API Route Handler
```typescript title="src/routes/api/upload/[...path].ts"
import { APIEvent } from '@solidjs/start/server';
import { uploadRouter } from '~/lib/upload';
// Method 1: Individual handlers
export async function GET(event: APIEvent) {
return uploadRouter.handlers.GET(event.request);
}
export async function POST(event: APIEvent) {
return uploadRouter.handlers.POST(event.request);
}
// Method 2: Combined handler (alternative approach)
// export async function handler(event: APIEvent) {
// return uploadRouter.handlers(event.request);
// }
```
### With CORS Middleware
```typescript title="src/routes/api/upload/[...path].ts"
import { APIEvent } from '@solidjs/start/server';
import { uploadRouter } from '~/lib/upload';
function addCORSHeaders(response: Response) {
response.headers.set('Access-Control-Allow-Origin', '*');
response.headers.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
response.headers.set('Access-Control-Allow-Headers', 'Content-Type, Authorization');
return response;
}
export async function GET(event: APIEvent) {
const response = await uploadRouter.handlers.GET(event.request);
return addCORSHeaders(response);
}
export async function POST(event: APIEvent) {
const response = await uploadRouter.handlers.POST(event.request);
return addCORSHeaders(response);
}
export async function OPTIONS() {
return new Response(null, {
status: 200,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type, Authorization',
}
});
}
```
## Advanced Configuration
### Authentication with SolidJS Start
```typescript title="src/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authorization required');
}
const token = authHeader.substring(7);
try {
const user = await verifyJWT(token);
return {
userId: user.id,
userRole: user.role
};
} catch (error) {
throw new Error('Invalid token');
}
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
async function verifyJWT(token: string) {
// Your JWT verification logic here
return { id: 'user-123', role: 'user' };
}
export type AppUploadRouter = typeof uploadRouter;
```
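The `verifyJWT` stub above can be implemented with any JWT library. A minimal sketch using `jose` (an assumption, not a pushduck requirement; the file name is also hypothetical):
```typescript title="src/lib/jwt.ts"
import { jwtVerify } from 'jose';

// Sketch: HMAC-verified JWTs; JWT_SECRET must match the signing secret.
const secret = new TextEncoder().encode(process.env.JWT_SECRET!);

export async function verifyJWT(token: string) {
  const { payload } = await jwtVerify(token, secret);
  return {
    id: payload.sub as string,
    role: (payload.role as string) ?? 'user',
  };
}
```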
### Protected API Routes
```typescript title="src/routes/api/upload/[...path].ts"
import { APIEvent } from '@solidjs/start/server';
import { uploadRouter } from '~/lib/upload';
async function requireAuth(request: Request) {
const authHeader = request.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Response(JSON.stringify({ error: 'Authorization required' }), {
status: 401,
headers: { 'Content-Type': 'application/json' }
});
}
const token = authHeader.substring(7);
// Verify token logic here
return { userId: 'user-123' };
}
export async function GET(event: APIEvent) {
await requireAuth(event.request);
return uploadRouter.handlers.GET(event.request);
}
export async function POST(event: APIEvent) {
await requireAuth(event.request);
return uploadRouter.handlers.POST(event.request);
}
```
## Client-Side Integration
### Upload Component
```typescript title="src/components/UploadDemo.tsx"
import { createSignal } from 'solid-js';
import { createUploadClient } from 'pushduck/client';
import type { AppUploadRouter } from '~/lib/upload';
const uploadClient = createUploadClient<AppUploadRouter>({
baseUrl: import.meta.env.DEV
? 'http://localhost:3000'
: 'https://your-domain.com'
});
export function UploadDemo() {
const [uploading, setUploading] = createSignal(false);
const [results, setResults] = createSignal<any[]>([]);
const handleUpload = async (files: File[]) => {
if (!files.length) return;
setUploading(true);
try {
const uploadResults = await uploadClient.upload('imageUpload', {
files,
metadata: { userId: 'demo-user' }
});
setResults(uploadResults);
console.log('Upload successful:', uploadResults);
} catch (error) {
console.error('Upload failed:', error);
} finally {
setUploading(false);
}
};
  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => {
          const files = e.currentTarget.files;
          if (files) {
            handleUpload(Array.from(files));
          }
        }}
        disabled={uploading()}
        class="block w-full text-sm text-gray-500 file:mr-4 file:py-2 file:px-4 file:rounded-full file:border-0 file:text-sm file:font-semibold file:bg-blue-50 file:text-blue-700 hover:file:bg-blue-100"
      />
      {uploading() && (
        <p>Uploading...</p>
      )}
      {results().length > 0 && (
        <div>
          <h3>Upload Results:</h3>
          {results().map((result) => (
            <p>{result.name}</p>
          ))}
        </div>
      )}
    </div>
  );
}
```
### Page Integration
```typescript title="src/routes/upload.tsx"
import { Title } from '@solidjs/meta';
import { UploadDemo } from '~/components/UploadDemo';
export default function UploadPage() {
  return (
    <>
      <Title>File Upload Demo</Title>
      <UploadDemo />
    </>
  );
}
```
## Full Application Example
### Project Structure
```
src/
├── components/
│   └── UploadDemo.tsx
├── lib/
│   └── upload.ts
├── routes/
│   ├── api/
│   │   └── upload/
│   │       └── [...path].ts
│   ├── upload.tsx
│   └── (home).tsx
├── app.tsx
└── entry-server.tsx
```
### Root Layout
```typescript title="src/app.tsx"
import { Router } from '@solidjs/router';
import { FileRoutes } from '@solidjs/start/router';
import { Suspense } from 'solid-js';
import './app.css';
export default function App() {
  return (
    <Router
      root={(props) => (
        <>
          <h1>Upload Demo</h1>
          <Suspense>{props.children}</Suspense>
        </>
      )}
    >
      <FileRoutes />
    </Router>
  );
}
```
### Home Page
```typescript title="src/routes/(home).tsx"
import { A } from '@solidjs/router';
import { Title } from '@solidjs/meta';
export default function Home() {
  return (
    <>
      <Title>SolidJS Start + Pushduck</Title>
      <h1>SolidJS Start + Pushduck</h1>
      <p>High-performance file uploads with SolidJS</p>
      <A href="/upload">Try Upload Demo</A>
    </>
  );
}
```
## Performance Benefits
- No adapter layer means zero performance overhead - pushduck handlers run directly in SolidJS Start.
- SolidJS provides exceptional performance with fine-grained reactivity and no virtual DOM.
- Complete type safety from server to client with shared types.
- Clean, organized API routes with SolidJS Start's file-based routing system.
## Deployment
### Vercel Deployment
```typescript title="app.config.ts"
import { defineConfig } from '@solidjs/start/config';
export default defineConfig({
server: {
preset: 'vercel'
}
});
```
### Netlify Deployment
```typescript title="app.config.ts"
import { defineConfig } from '@solidjs/start/config';
export default defineConfig({
server: {
preset: 'netlify'
}
});
```
### Node.js Deployment
```typescript title="app.config.ts"
import { defineConfig } from '@solidjs/start/config';
export default defineConfig({
server: {
preset: 'node-server'
}
});
```
### Docker Deployment
```dockerfile title="Dockerfile"
FROM node:18-alpine as base
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci --only=production
# Copy source code
COPY . .
# Build the app
RUN npm run build
# Expose port
EXPOSE 3000
# Start the app
CMD ["npm", "start"]
```
***
**SolidJS + Pushduck**: The perfect combination of SolidJS's exceptional performance and pushduck's universal design for lightning-fast file upload experiences.
# SvelteKit (/docs/integrations/sveltekit)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
**🚧 Client-Side In Development**: SvelteKit server-side integration is fully functional with Web Standards APIs. However, SvelteKit-specific client-side components and hooks are still in development. You can use the standard pushduck client APIs for now.
## Using pushduck with SvelteKit
SvelteKit is the official application framework for Svelte. It uses Web Standards APIs and provides excellent performance with minimal JavaScript. Since SvelteKit uses standard `Request`/`Response` objects, pushduck handlers work directly without any adapters!
**Web Standards Native**: SvelteKit server endpoints use Web Standard `Request`/`Response` objects, making pushduck integration seamless with zero overhead.
## Quick Setup
**Install dependencies**
```bash
npm install pushduck
```
**Configure upload router**
```typescript title="src/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create API route**
```typescript title="src/routes/api/upload/[...path]/+server.ts"
import type { RequestHandler } from './$types';
import { uploadRouter } from '$lib/upload';
// Direct usage - no adapter needed!
export const GET: RequestHandler = async ({ request }) => {
return uploadRouter.handlers(request);
};
export const POST: RequestHandler = async ({ request }) => {
return uploadRouter.handlers(request);
};
```
## Basic Integration
### Simple Upload Route
```typescript title="src/routes/api/upload/[...path]/+server.ts"
import type { RequestHandler } from './$types';
import { uploadRouter } from '$lib/upload';
// Method 1: Combined handler (recommended)
export const GET: RequestHandler = async ({ request }) => {
return uploadRouter.handlers(request);
};
export const POST: RequestHandler = async ({ request }) => {
return uploadRouter.handlers(request);
};
// Method 2: Separate handlers (if you need method-specific logic)
// export const { GET, POST } = uploadRouter.handlers;
```
### With SvelteKit Hooks
```typescript title="src/hooks.server.ts"
import type { Handle } from '@sveltejs/kit';
import { sequence } from '@sveltejs/kit/hooks';
const corsHandler: Handle = async ({ event, resolve }) => {
if (event.url.pathname.startsWith('/api/upload')) {
if (event.request.method === 'OPTIONS') {
return new Response(null, {
status: 200,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type',
},
});
}
}
const response = await resolve(event);
if (event.url.pathname.startsWith('/api/upload')) {
response.headers.set('Access-Control-Allow-Origin', '*');
}
return response;
};
export const handle = sequence(corsHandler);
```
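The same hooks file is a natural place to resolve the session into `event.locals`, so load functions (like the files page below) can read the current user. A minimal sketch reusing the `getUserFromSession` helper from `$lib/auth`:
```typescript title="src/hooks.server.ts"
import type { Handle } from '@sveltejs/kit';
import { sequence } from '@sveltejs/kit/hooks';
import { getUserFromSession } from '$lib/auth';

// Sketch: populate event.locals.user for downstream load functions.
// Declare the `user` field on App.Locals in src/app.d.ts.
const authHandler: Handle = async ({ event, resolve }) => {
  const sessionId = event.cookies.get('sessionId');
  event.locals.user = sessionId ? await getUserFromSession(sessionId) : null;
  return resolve(event);
};

// corsHandler is the handler defined in the snippet above (same file)
export const handle = sequence(authHandler, corsHandler);
```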
## Advanced Configuration
### Authentication with SvelteKit
```typescript title="src/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
import { getUserFromSession } from '$lib/auth';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with session authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
const cookies = req.headers.get('Cookie');
const sessionId = parseCookie(cookies)?.sessionId;
if (!sessionId) {
throw new Error('Authentication required');
}
const user = await getUserFromSession(sessionId);
if (!user) {
throw new Error('Invalid session');
}
return {
userId: user.id,
username: user.username,
};
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
export type AppUploadRouter = typeof uploadRouter;
```
## Client-Side Usage
### Upload Component
```svelte title="src/lib/components/FileUpload.svelte"
Image Upload
Document Upload
```
### Using in Pages
```svelte title="src/routes/+page.svelte"
File Upload Demo
File Upload Demo
```
## File Management
### Server-Side File Listing
```typescript title="src/routes/files/+page.server.ts"
import type { PageServerLoad } from './$types';
import { db } from '$lib/database';
export const load: PageServerLoad = async ({ locals }) => {
const files = await db.file.findMany({
where: { userId: locals.user?.id },
orderBy: { createdAt: 'desc' },
});
return {
files: files.map(file => ({
id: file.id,
name: file.name,
url: file.url,
size: file.size,
uploadedAt: file.createdAt,
})),
};
};
```
```svelte title="src/routes/files/+page.svelte"
<script lang="ts">
  import type { PageData } from './$types';

  export let data: PageData;

  function formatFileSize(bytes: number): string {
    const sizes = ['Bytes', 'KB', 'MB', 'GB'];
    if (bytes === 0) return '0 Bytes';
    const i = Math.floor(Math.log(bytes) / Math.log(1024));
    return Math.round((bytes / Math.pow(1024, i)) * 100) / 100 + ' ' + sizes[i];
  }
</script>

<svelte:head>
  <title>My Files</title>
</svelte:head>

<h1>My Files</h1>
<h2>Uploaded Files</h2>

{#if data.files.length === 0}
  <p>No files uploaded yet.</p>
{:else}
  <ul>
    {#each data.files as file}
      <li>
        <strong>{file.name}</strong>
        <span>{formatFileSize(file.size)}</span>
        <span>{new Date(file.uploadedAt).toLocaleDateString()}</span>
        <a href={file.url}>View File</a>
      </li>
    {/each}
  </ul>
{/if}
```
## Deployment Options
### Vercel
```javascript title="svelte.config.js"
import adapter from '@sveltejs/adapter-vercel';
import { vitePreprocess } from '@sveltejs/kit/vite';
/** @type {import('@sveltejs/kit').Config} */
const config = {
preprocess: vitePreprocess(),
kit: {
adapter: adapter({
runtime: 'nodejs18.x',
regions: ['iad1'],
}),
}
};
export default config;
```
### Netlify
```javascript title="svelte.config.js"
import adapter from '@sveltejs/adapter-netlify';
/** @type {import('@sveltejs/kit').Config} */
const config = {
kit: {
adapter: adapter({
edge: false,
split: false
}),
}
};
export default config;
```
### Node.js
```javascript title="svelte.config.js"
import adapter from '@sveltejs/adapter-node';
/** @type {import('@sveltejs/kit').Config} */
const config = {
kit: {
adapter: adapter({
out: 'build'
}),
}
};
export default config;
```
### Cloudflare
```javascript title="svelte.config.js"
import adapter from '@sveltejs/adapter-cloudflare';
/** @type {import('@sveltejs/kit').Config} */
const config = {
kit: {
adapter: adapter({
routes: {
include: ['/*'],
exclude: ['']
}
}),
}
};
export default config;
```
## Environment Variables
```bash title=".env"
# AWS Configuration
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
S3_BUCKET_NAME=your-bucket-name
# SvelteKit
PUBLIC_UPLOAD_ENDPOINT=http://localhost:5173/api/upload
```
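SvelteKit also exposes server-only variables through its `$env` modules, which fail the build when a referenced variable is missing. A minimal sketch of the equivalent provider values:
```typescript title="src/lib/upload.ts"
// Sketch: $env/static/private is a server-only alternative to process.env;
// referenced variables are checked at build time.
import {
  AWS_ACCESS_KEY_ID,
  AWS_SECRET_ACCESS_KEY,
  S3_BUCKET_NAME,
} from '$env/static/private';

export const providerConfig = {
  accessKeyId: AWS_ACCESS_KEY_ID,
  secretAccessKey: AWS_SECRET_ACCESS_KEY,
  bucket: S3_BUCKET_NAME,
};
```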
## Performance Benefits
No adapter layer means zero performance overhead - pushduck handlers run directly in SvelteKit server endpoints.
## Real-Time Upload Progress
A minimal sketch with simulated progress, in the same style as the Qwik and Remix examples:
```svelte title="src/lib/components/AdvancedUpload.svelte"
<script lang="ts">
  import { writable } from 'svelte/store';

  const uploadProgress = writable(0);
  const isUploading = writable(false);

  async function handleFileUpload(event: Event) {
    const files = (event.target as HTMLInputElement).files;
    if (!files || files.length === 0) return;

    isUploading.set(true);
    uploadProgress.set(0);
    try {
      // Simulate upload progress
      for (let i = 0; i <= 100; i += 10) {
        uploadProgress.set(i);
        await new Promise((resolve) => setTimeout(resolve, 100));
      }
    } finally {
      isUploading.set(false);
      uploadProgress.set(0);
    }
  }
</script>

<input type="file" multiple on:change={handleFileUpload} disabled={$isUploading} />

{#if $isUploading}
  <p>{$uploadProgress}% uploaded</p>
{/if}
```
## Troubleshooting
**Common Issues**
1. **Route not found**: Ensure your route is `src/routes/api/upload/[...path]/+server.ts`
2. **Build errors**: Check that pushduck is properly installed
3. **CORS issues**: Same-origin requests don't need CORS; for cross-origin access, add headers via the hooks shown above
### Debug Mode
Enable debug logging:
```typescript title="src/lib/upload.ts"
export const uploadRouter = createS3Router({
// ... routes
}).middleware(async ({ req, file }) => {
if (import.meta.env.DEV) {
console.log("Upload request:", req.url);
console.log("File:", file.name, file.size);
}
return {};
});
```
SvelteKit provides an excellent foundation for building fast, modern web applications with pushduck, combining the power of Svelte's reactive framework with Web Standards APIs.
# TanStack Start (/docs/integrations/tanstack-start)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
## Using pushduck with TanStack Start
TanStack Start is a full-stack React framework with built-in Web Standards support. Since TanStack Start provides `event.request` as a Web Standard `Request` object, pushduck handlers work directly without any adapters!
**Web Standards Native**: TanStack Start's API handlers receive `{ request }` as a Web Standard `Request` object, making pushduck integration seamless with zero overhead.
## Quick Setup
**Install dependencies**
```bash
npm install pushduck
```
```bash
yarn add pushduck
```
```bash
pnpm add pushduck
```
```bash
bun add pushduck
```
**Configure upload router**
```typescript title="app/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create API route**
```typescript title="app/routes/api.upload.$.ts"
import { createAPIFileRoute } from '@tanstack/start/api';
import { uploadRouter } from '../lib/upload';
export const Route = createAPIFileRoute('/api/upload/$')({
GET: ({ request }) => uploadRouter.handlers(request),
POST: ({ request }) => uploadRouter.handlers(request),
});
```
## Basic Integration
### API Route Handler
```typescript title="app/routes/api.upload.$.ts"
import { createAPIFileRoute } from '@tanstack/start/api';
import { uploadRouter } from '../lib/upload';
export const Route = createAPIFileRoute('/api/upload/$')({
// Method 1: Individual handlers
GET: ({ request }) => uploadRouter.handlers.GET(request),
POST: ({ request }) => uploadRouter.handlers.POST(request),
// Method 2: Universal handler (alternative approach)
// You could also create a single handler that delegates:
// handler: ({ request }) => uploadRouter.handlers(request)
});
```
### With Middleware
```typescript title="app/routes/api.upload.$.ts"
import { createAPIFileRoute } from '@tanstack/start/api';
import { uploadRouter } from '../lib/upload';
// Simple CORS middleware
function withCORS(handler: (ctx: any) => Promise<Response>) {
return async (ctx: any) => {
const response = await handler(ctx);
response.headers.set('Access-Control-Allow-Origin', '*');
response.headers.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
response.headers.set('Access-Control-Allow-Headers', 'Content-Type, Authorization');
return response;
};
}
export const Route = createAPIFileRoute('/api/upload/$')({
GET: withCORS(({ request }) => uploadRouter.handlers.GET(request)),
POST: withCORS(({ request }) => uploadRouter.handlers.POST(request)),
OPTIONS: () => new Response(null, {
status: 200,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
'Access-Control-Allow-Headers': 'Content-Type, Authorization',
}
}),
});
```
## Advanced Configuration
### Authentication with TanStack Start
```typescript title="app/lib/upload.ts"
import { createUploadConfig } from 'pushduck/server';
const { s3, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
export const uploadRouter = createS3Router({
// Private uploads with authentication
privateUpload: s3
.image()
.maxFileSize("5MB")
.middleware(async ({ req }) => {
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Error('Authorization required');
}
const token = authHeader.substring(7);
try {
const user = await verifyJWT(token);
return {
userId: user.id,
userRole: user.role
};
} catch (error) {
throw new Error('Invalid token');
}
}),
// Public uploads (no auth)
publicUpload: s3
.image()
.maxFileSize("2MB")
// No middleware = public access
});
async function verifyJWT(token: string) {
// Your JWT verification logic here
return { id: 'user-123', role: 'user' };
}
export type AppUploadRouter = typeof uploadRouter;
```
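To exercise the Bearer-token middleware above you need a signed token. A minimal sketch that mints one with `jose` (an assumption; use whatever library signs your real tokens, and note the script path is hypothetical):
```typescript title="scripts/mint-token.ts"
import { SignJWT } from 'jose';

// Sketch: mint a short-lived test token compatible with the middleware
// above. JWT_SECRET must match whatever verifyJWT checks against.
const secret = new TextEncoder().encode(process.env.JWT_SECRET!);

const token = await new SignJWT({ role: 'user' })
  .setProtectedHeader({ alg: 'HS256' })
  .setSubject('user-123')
  .setExpirationTime('1h')
  .sign(secret);

console.log(`Authorization: Bearer ${token}`);
```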
### Protected API Routes
```typescript title="app/routes/api.upload.$.ts"
import { createAPIFileRoute } from '@tanstack/start/api';
import { uploadRouter } from '../lib/upload';
// Authentication middleware
async function requireAuth(request: Request) {
const authHeader = request.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
throw new Response(JSON.stringify({ error: 'Authorization required' }), {
status: 401,
headers: { 'Content-Type': 'application/json' }
});
}
const token = authHeader.substring(7);
// Verify token logic here
return { userId: 'user-123' };
}
export const Route = createAPIFileRoute('/api/upload/$')({
GET: async ({ request }) => {
await requireAuth(request);
return uploadRouter.handlers.GET(request);
},
POST: async ({ request }) => {
await requireAuth(request);
return uploadRouter.handlers.POST(request);
},
});
```
## Client-Side Integration
### Upload Component
```typescript title="app/components/UploadDemo.tsx"
import { useState } from 'react';
import { createUploadClient } from 'pushduck/client';
import type { AppUploadRouter } from '../lib/upload';
const uploadClient = createUploadClient<AppUploadRouter>({
baseUrl: process.env.NODE_ENV === 'development'
? 'http://localhost:3000'
: 'https://your-domain.com'
});
export function UploadDemo() {
const [uploading, setUploading] = useState(false);
const [results, setResults] = useState<any[]>([]);
const handleUpload = async (files: File[]) => {
if (!files.length) return;
setUploading(true);
try {
const uploadResults = await uploadClient.upload('imageUpload', {
files,
metadata: { userId: 'demo-user' }
});
setResults(uploadResults);
console.log('Upload successful:', uploadResults);
} catch (error) {
console.error('Upload failed:', error);
} finally {
setUploading(false);
}
};
  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => {
          if (e.target.files) {
            handleUpload(Array.from(e.target.files));
          }
        }}
        disabled={uploading}
        className="block w-full text-sm text-gray-500 file:mr-4 file:py-2 file:px-4 file:rounded-full file:border-0 file:text-sm file:font-semibold file:bg-blue-50 file:text-blue-700 hover:file:bg-blue-100"
      />
      {uploading && (
        <p>Uploading...</p>
      )}
      {results.length > 0 && (
        <div>
          <h3>Upload Results:</h3>
          {results.map((result, index) => (
            <div key={index}>
              <p>File: {result.name}</p>
              {result.presignedUrl && (
                <a href={result.presignedUrl}>View</a>
              )}
            </div>
          ))}
        </div>
      )}
    </div>
  );
}
```
### Page Integration
```typescript title="app/routes/upload.tsx"
import { createFileRoute } from '@tanstack/start';
import { UploadDemo } from '../components/UploadDemo';
export const Route = createFileRoute('/upload')({
component: UploadPage,
});
function UploadPage() {
  return (
    <UploadDemo />
  );
}
```
## Full Application Example
### Project Structure
```
app/
├── components/
│   └── UploadDemo.tsx
├── lib/
│   └── upload.ts
├── routes/
│   ├── api.upload.$.ts
│   ├── upload.tsx
│   └── __root.tsx
├── router.ts
└── main.tsx
```
### Root Layout
```typescript title="app/routes/__root.tsx"
import { createRootRoute, Outlet } from '@tanstack/start';
export const Route = createRootRoute({
component: RootComponent,
});
function RootComponent() {
  return (
    <>
      <h1>TanStack Start + Pushduck</h1>
      <Outlet />
    </>
  );
}
```
## Performance Benefits
- No adapter layer means zero performance overhead - pushduck handlers run directly in TanStack Start.
- Built on the latest React features with streaming and concurrent rendering.
- Complete type safety from server to client with shared types.
- Organized API routes with TanStack Start's file-based routing system.
## Deployment
### Vercel Deployment
```typescript title="app.config.ts"
import { defineConfig } from '@tanstack/start/config';
export default defineConfig({
server: {
preset: 'vercel'
}
});
```
### Netlify Deployment
```typescript title="app.config.ts"
import { defineConfig } from '@tanstack/start/config';
export default defineConfig({
server: {
preset: 'netlify'
}
});
```
### Docker Deployment
```dockerfile title="Dockerfile"
FROM node:18-alpine as base
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci --only=production
# Copy source code
COPY . .
# Build the app
RUN npm run build
# Expose port
EXPOSE 3000
# Start the app
CMD ["npm", "start"]
```
***
**Modern React + Pushduck**: TanStack Start's cutting-edge React architecture combined with pushduck's universal design creates a powerful, type-safe file upload solution.
# tRPC (/docs/integrations/trpc)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
## Using pushduck with tRPC
tRPC enables end-to-end typesafe APIs with excellent TypeScript integration. With pushduck, you use **tRPC for storage operations** (listing, deleting, metadata) while **file uploads happen through your framework's routes** using pushduck handlers. This gives you the best of both worlds: framework-native uploads with type-safe storage management.
**Separation of Concerns**: File uploads use your framework's native upload routes with pushduck handlers, while tRPC procedures handle all storage-related CRUD operations using pushduck's storage API for full type safety.
## Architecture Overview
```mermaid
graph TD
    A[Client] --> B[Framework Upload Route]
    A --> C[tRPC Storage Procedures]
    B --> D[pushduck Upload Handler]
    C --> E[pushduck Storage API]
    D --> F[S3/R2 Storage]
    E --> F
    B --> G[File Upload Success]
    G --> C
```
## Quick Setup
**Install dependencies**
```bash
npm install @trpc/server @trpc/client pushduck
# Framework-specific tRPC packages:
# @trpc/next (for Next.js)
# @trpc/react-query (for React)
```
```bash
yarn add @trpc/server @trpc/client pushduck
# Framework-specific tRPC packages:
# @trpc/next (for Next.js)
# @trpc/react-query (for React)
```
```bash
pnpm add @trpc/server @trpc/client pushduck
# Framework-specific tRPC packages:
# @trpc/next (for Next.js)
# @trpc/react-query (for React)
```
```bash
bun add @trpc/server @trpc/client pushduck
# Framework-specific tRPC packages:
# @trpc/next (for Next.js)
# @trpc/react-query (for React)
```
**Configure storage and upload router**
```typescript title="lib/storage.ts"
import { createUploadConfig } from 'pushduck/server';
// Configure storage
export const { s3, storage, createS3Router } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build();
// Upload router for framework routes
export const uploadRouter = createS3Router({
imageUpload: s3.image().maxFileSize("5MB"),
documentUpload: s3.file().maxFileSize("10MB")
});
export type AppUploadRouter = typeof uploadRouter;
```
**Create tRPC router with storage operations**
```typescript title="server/trpc.ts"
import { initTRPC, TRPCError } from '@trpc/server';
import { z } from 'zod';
import { storage } from '~/lib/storage';
const t = initTRPC.create();
export const appRouter = t.router({
files: t.router({
// List files with pagination
list: t.procedure
.input(z.object({
prefix: z.string().optional(),
limit: z.number().min(1).max(100).default(20),
cursor: z.string().optional()
}))
.query(async ({ input }) => {
const result = await storage.list.paginated({
prefix: input.prefix,
maxKeys: input.limit,
continuationToken: input.cursor,
});
return {
files: result.files,
nextCursor: result.nextContinuationToken,
hasMore: result.isTruncated,
};
}),
// Get file metadata
getInfo: t.procedure
.input(z.object({ key: z.string() }))
.query(async ({ input }) => {
const info = await storage.metadata.getInfo(input.key);
if (!info.exists) {
throw new TRPCError({
code: 'NOT_FOUND',
message: 'File not found'
});
}
return info;
}),
// Get multiple files info
getBatch: t.procedure
.input(z.object({ keys: z.array(z.string()) }))
.query(async ({ input }) => {
return await storage.metadata.getBatch(input.keys);
}),
// Delete file
delete: t.procedure
.input(z.object({ key: z.string() }))
.mutation(async ({ input }) => {
const result = await storage.delete.file(input.key);
return { success: result.success };
}),
// Delete multiple files
deleteBatch: t.procedure
.input(z.object({ keys: z.array(z.string()) }))
.mutation(async ({ input }) => {
const result = await storage.delete.files(input.keys);
return {
success: result.success,
deleted: result.deleted,
errors: result.errors,
};
}),
// Generate download URL
getDownloadUrl: t.procedure
.input(z.object({
key: z.string(),
expiresIn: z.number().optional()
}))
.query(async ({ input }) => {
const result = await storage.download.presignedUrl(
input.key,
input.expiresIn
);
return { url: result.url, expiresAt: result.expiresAt };
}),
// Search files by extension
searchByExtension: t.procedure
.input(z.object({
extension: z.string(),
prefix: z.string().optional()
}))
.query(async ({ input }) => {
return await storage.list.byExtension(
input.extension,
input.prefix
);
}),
// Search files by size range
searchBySize: t.procedure
.input(z.object({
minSize: z.number().optional(),
maxSize: z.number().optional(),
prefix: z.string().optional()
}))
.query(async ({ input }) => {
return await storage.list.bySize(
input.minSize,
input.maxSize,
input.prefix
);
}),
// Get storage statistics
getStats: t.procedure
.input(z.object({ prefix: z.string().optional() }))
.query(async ({ input }) => {
const files = await storage.list.files({ prefix: input.prefix });
const stats = files.files.reduce((acc, file) => {
acc.totalSize += file.size;
acc.count += 1;
const ext = file.key.split('.').pop()?.toLowerCase() || 'unknown';
acc.byExtension[ext] = (acc.byExtension[ext] || 0) + 1;
return acc;
}, {
totalSize: 0,
count: 0,
      byExtension: {} as Record<string, number>,
});
return stats;
}),
}),
});
export type AppRouter = typeof appRouter;
```
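You can also invoke these procedures server-side (scripts, cron jobs, React Server Components) without an HTTP round-trip via tRPC's `createCaller`. A minimal sketch, assuming the empty context used above:
```typescript title="server/caller-example.ts"
import { appRouter } from './trpc';

// Call procedures directly, bypassing the HTTP adapter
const caller = appRouter.createCaller({});

const page = await caller.files.list({ limit: 10 });
console.log(page.files.map((f) => f.key));
```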
**Create framework upload route**
```typescript title="app/api/upload/[...path]/route.ts"
import { uploadRouter } from '~/lib/storage';
// Handle file uploads through framework route
export const { GET, POST } = uploadRouter.handlers;
```
```typescript title="app/routes/api.upload.$.tsx"
import type { ActionFunctionArgs, LoaderFunctionArgs } from "@remix-run/node";
import { uploadRouter } from "~/lib/storage";
export async function loader({ request }: LoaderFunctionArgs) {
return uploadRouter.handlers(request);
}
export async function action({ request }: ActionFunctionArgs) {
return uploadRouter.handlers(request);
}
```
```typescript title="src/routes/api/upload/[...path]/+server.ts"
import type { RequestHandler } from './$types';
import { uploadRouter } from '$lib/storage';
export const GET: RequestHandler = async ({ request }) => {
return uploadRouter.handlers(request);
};
export const POST: RequestHandler = async ({ request }) => {
return uploadRouter.handlers(request);
};
```
**Create tRPC API route**
```typescript title="app/api/trpc/[trpc]/route.ts"
import { fetchRequestHandler } from '@trpc/server/adapters/fetch';
import { appRouter } from '~/server/trpc';
const handler = (req: Request) =>
fetchRequestHandler({
endpoint: '/api/trpc',
req,
router: appRouter,
createContext: () => ({}),
});
export { handler as GET, handler as POST };
```
```typescript title="server/index.ts"
import express from 'express';
import * as trpcExpress from '@trpc/server/adapters/express';
import { appRouter } from './trpc';
import { uploadRouter } from './storage';
import { toExpressHandler } from 'pushduck/adapters';
const app = express();
// tRPC for storage operations
app.use('/api/trpc', trpcExpress.createExpressMiddleware({
router: appRouter,
createContext: () => ({}),
}));
// pushduck for file uploads
app.all('/api/upload/*', toExpressHandler(uploadRouter.handlers));
app.listen(3000);
```
```typescript title="server/index.ts"
import { createHTTPServer } from '@trpc/server/adapters/standalone';
import { appRouter } from './trpc';
// tRPC server for storage operations
const server = createHTTPServer({
router: appRouter,
createContext: () => ({}),
});
server.listen(3001); // Different port for tRPC
```
## Client-Side Integration
### React with tRPC and pushduck
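The component below imports a `trpc` client from `~/lib/trpc`, which this guide doesn't show. A minimal sketch of that file using `@trpc/react-query`:
```typescript title="lib/trpc.ts"
import { createTRPCReact } from '@trpc/react-query';
import type { AppRouter } from '~/server/trpc';

// Typed hooks (trpc.files.list.useInfiniteQuery, etc.) are derived from AppRouter
export const trpc = createTRPCReact<AppRouter>();
```
On the client you'd connect it with `httpBatchLink` pointing at `/api/trpc`, wrapping your app in `trpc.Provider` and React Query's `QueryClientProvider`.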
```tsx title="components/FileManager.tsx"
import { trpc } from '~/lib/trpc';
import { useUpload } from 'pushduck/client';
import type { AppUploadRouter } from '~/lib/storage';
// Upload hooks are from pushduck (framework-native)
const { UploadButton, UploadDropzone } = useUpload({
endpoint: "/api/upload",
});
export function FileManager() {
// Storage operations are from tRPC (type-safe)
const {
data: files,
refetch,
fetchNextPage,
hasNextPage
} = trpc.files.list.useInfiniteQuery(
{ limit: 20 },
{
getNextPageParam: (lastPage) => lastPage.nextCursor,
}
);
const deleteFile = trpc.files.delete.useMutation({
onSuccess: () => refetch(),
});
const getDownloadUrl = trpc.files.getDownloadUrl.useMutation();
const fileStats = trpc.files.getStats.useQuery({});
const handleUploadComplete = async (uploadedFiles: any[]) => {
// Files are uploaded, refresh the list
await refetch();
console.log('Upload completed:', uploadedFiles);
};
const handleDelete = async (key: string) => {
if (confirm('Are you sure you want to delete this file?')) {
await deleteFile.mutateAsync({ key });
}
};
const handleDownload = async (key: string) => {
const result = await getDownloadUrl.mutateAsync({ key });
window.open(result.url, '_blank');
};
  return (
    <div className="space-y-8">
      {/* Upload Section - Uses pushduck hooks */}
      {/* Dropzone prop names are illustrative; match them to your pushduck client API */}
      <section>
        <h2>Upload Files</h2>
        <h3>Images</h3>
        <UploadDropzone
          onUploadComplete={handleUploadComplete}
          onUploadError={(error) => alert(`Upload failed: ${error.message}`)}
        />
        <h3>Documents</h3>
        <UploadDropzone
          onUploadComplete={handleUploadComplete}
          onUploadError={(error) => alert(`Upload failed: ${error.message}`)}
        />
      </section>

      {/* Stats Section - Uses tRPC */}
      <section>
        <h2>Storage Statistics</h2>
        {fileStats.data && (
          <ul>
            <li>Total Files: {fileStats.data.count}</li>
            <li>Total Size: {formatFileSize(fileStats.data.totalSize)}</li>
            <li>Extensions: {Object.keys(fileStats.data.byExtension).join(', ')}</li>
          </ul>
        )}
      </section>

      {/* File List Section - Uses tRPC */}
      <section>
        <h2>Your Files</h2>
        {files?.pages[0]?.files.length === 0 ? (
          <p>No files uploaded yet.</p>
        ) : (
          <>
            {files?.pages.flatMap((page) => page.files).map((file) => (
              <div key={file.key} className="flex items-center gap-4">
                <span>{file.key.split('/').pop()}</span>
                <span>{formatFileSize(file.size)}</span>
                <span>{new Date(file.lastModified).toLocaleDateString()}</span>
                <button
                  onClick={() => handleDownload(file.key)}
                  className="text-blue-500 hover:underline text-sm"
                  disabled={getDownloadUrl.isLoading}
                >
                  Download
                </button>
                <button
                  onClick={() => handleDelete(file.key)}
                  className="text-red-500 hover:underline text-sm"
                  disabled={deleteFile.isLoading}
                >
                  Delete
                </button>
              </div>
            ))}
            {hasNextPage && (
              <button
                onClick={() => fetchNextPage()}
                className="w-full bg-blue-500 text-white p-2 rounded hover:bg-blue-600"
              >
                Load More Files
              </button>
            )}
          </>
        )}
      </section>
    </div>
  );
}
function formatFileSize(bytes: number): string {
const sizes = ['Bytes', 'KB', 'MB', 'GB'];
if (bytes === 0) return '0 Bytes';
const i = Math.floor(Math.log(bytes) / Math.log(1024));
return Math.round(bytes / Math.pow(1024, i) * 100) / 100 + ' ' + sizes[i];
}
```
### Advanced File Search
```tsx title="components/FileSearch.tsx"
import { useState } from 'react';
import { trpc } from '~/lib/trpc';
export function FileSearch() {
const [searchType, setSearchType] = useState<'extension' | 'size'>('extension');
const [extension, setExtension] = useState('');
  const [minSize, setMinSize] = useState<number | undefined>();
  const [maxSize, setMaxSize] = useState<number | undefined>();
// Type-safe search operations via tRPC
const searchByExtension = trpc.files.searchByExtension.useQuery(
{ extension },
{ enabled: searchType === 'extension' && !!extension }
);
const searchBySize = trpc.files.searchBySize.useQuery(
{ minSize, maxSize },
{ enabled: searchType === 'size' && (!!minSize || !!maxSize) }
);
const results = searchType === 'extension' ? searchByExtension.data : searchBySize.data;
  // formatFileSize: reuse the helper defined in FileManager.tsx
  return (
    <div>
      {/* Search inputs omitted here; wire setSearchType, setExtension,
          setMinSize, and setMaxSize to your form controls */}
      {results && (
        <ul>
          {results.files.map((file) => (
            <li key={file.key}>
              {file.key} ({formatFileSize(file.size)})
            </li>
          ))}
        </ul>
      )}
    </div>
  );
}
```
## Authentication Integration
```typescript title="server/trpc.ts"
import { initTRPC, TRPCError } from '@trpc/server';
import { z } from 'zod';
import { storage } from '~/lib/storage';
// Create context with user authentication
export const createContext = async ({ req }: { req: Request }) => {
const authHeader = req.headers.get('authorization');
if (!authHeader?.startsWith('Bearer ')) {
return { user: null };
}
try {
const token = authHeader.substring(7);
const user = await validateToken(token); // Your auth logic
return { user };
} catch {
return { user: null };
}
};
type Context = Awaited<ReturnType<typeof createContext>>;
const t = initTRPC.context<Context>().create();
// Auth middleware
const isAuthenticated = t.middleware(({ next, ctx }) => {
if (!ctx.user) {
throw new TRPCError({ code: 'UNAUTHORIZED' });
}
return next({ ctx: { ...ctx, user: ctx.user } });
});
const protectedProcedure = t.procedure.use(isAuthenticated);
export const appRouter = t.router({
files: t.router({
// User's files only
list: protectedProcedure
.input(z.object({ prefix: z.string().optional() }))
.query(async ({ input, ctx }) => {
// Scope to user's folder
const userPrefix = `users/${ctx.user.id}/${input.prefix || ''}`;
return await storage.list.files({ prefix: userPrefix });
}),
// User can only delete their own files
delete: protectedProcedure
.input(z.object({ key: z.string() }))
.mutation(async ({ input, ctx }) => {
// Ensure user owns the file
if (!input.key.startsWith(`users/${ctx.user.id}/`)) {
throw new TRPCError({ code: 'FORBIDDEN' });
}
return await storage.delete.file(input.key);
}),
}),
});
```
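For the context to actually be populated, the API route's `createContext` must forward the incoming request; a sketch updating the fetch-adapter route from earlier:
```typescript title="app/api/trpc/[trpc]/route.ts"
import { fetchRequestHandler } from '@trpc/server/adapters/fetch';
import { appRouter, createContext } from '~/server/trpc';

const handler = (req: Request) =>
  fetchRequestHandler({
    endpoint: '/api/trpc',
    req,
    router: appRouter,
    // Forward the request so createContext can read the Authorization header
    createContext: () => createContext({ req }),
  });

export { handler as GET, handler as POST };
```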
## Real-time Updates with Subscriptions
```typescript title="server/trpc.ts"
import { observable } from '@trpc/server/observable';
import { EventEmitter } from 'events';
const fileEventEmitter = new EventEmitter();
export const appRouter = t.router({
files: t.router({
// ... other procedures
// Real-time file updates
onUpdate: protectedProcedure
.subscription(({ ctx }) => {
return observable<{ type: 'uploaded' | 'deleted'; file: any }>((emit) => {
const onFileEvent = (data: any) => {
// Only emit events for this user's files
if (data.userId === ctx.user.id) {
emit.next(data);
}
};
fileEventEmitter.on('file:event', onFileEvent);
return () => {
fileEventEmitter.off('file:event', onFileEvent);
};
});
}),
}),
});
// Emit events from upload completion
export const emitFileEvent = (type: 'uploaded' | 'deleted', file: any, userId: string) => {
fileEventEmitter.emit('file:event', { type, file, userId });
};
```
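Note that subscriptions need a WebSocket transport; the fetch and Express adapters above are HTTP-only. A minimal sketch using tRPC's `ws` adapter (assumes the `ws` package; the port is arbitrary):
```typescript title="server/ws.ts"
import { applyWSSHandler } from '@trpc/server/adapters/ws';
import { WebSocketServer } from 'ws';
import { appRouter } from './trpc';

// Serve subscriptions over WebSocket on a separate port
const wss = new WebSocketServer({ port: 3002 });

applyWSSHandler({
  wss,
  router: appRouter,
  // Plug in the same auth logic as createContext; a null user will be
  // rejected by the isAuthenticated middleware.
  createContext: () => ({ user: null }),
});
```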
## Key Advantages
### **Clear Separation**
* **Uploads**: Framework-native routes with pushduck handlers
* **Storage Operations**: Type-safe tRPC procedures with pushduck storage API
* **Client**: Framework upload hooks + tRPC queries/mutations
### **Best of Both Worlds**
* **Framework-optimized uploads** (progress, validation, middleware)
* **Type-safe storage management** (list, delete, search, metadata)
* **Unified developer experience** with consistent patterns
### **Scalable Architecture**
* **Independent scaling** of upload and API operations
* **Flexible deployment** (separate services if needed)
* **Framework agnostic** storage operations
## Troubleshooting
**Common Issues**
1. **Mixed responsibilities**: Don't try to handle uploads in tRPC procedures - use framework routes
2. **Type mismatches**: Ensure storage operations use the same config as upload routes
3. **Authentication**: Sync auth between upload middleware and tRPC context
4. **CORS**: Configure CORS for both `/api/trpc` and `/api/upload` endpoints
### Debug Mode
```typescript title="lib/debug.ts"
// Enable debug logging for both systems
export const debugConfig = {
trpc: process.env.NODE_ENV === 'development',
storage: process.env.NODE_ENV === 'development',
};
// Storage debug logging (illustrative; adapt to however you construct `storage`)
export const storage = createStorage(config).middleware?.(async (operation, params) => {
if (debugConfig.storage) {
console.log("Storage operation:", operation, params);
}
});
// Upload debug logging
export const uploadRouter = createS3Router({
// ... routes
}).middleware(async ({ req, file }) => {
if (debugConfig.storage) {
console.log("Upload request:", req.url);
console.log("File:", file.name, file.size);
}
return {};
});
```
This architecture gives you **framework-native file uploads** with **type-safe storage management**, combining the strengths of both pushduck and tRPC for a superior developer experience.
# AWS S3 (/docs/providers/aws-s3)
import { Step, Steps } from "fumadocs-ui/components/steps";
import { Callout } from "fumadocs-ui/components/callout";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## AWS S3 Setup
Get **AWS S3** configured for production file uploads in under 5 minutes. This guide covers everything from bucket creation to security best practices.
**Why AWS S3?** The most trusted object storage service with 99.999999999%
durability, global CDN integration, and predictable pricing. Perfect for
applications that need reliable, scalable file storage.
## What You'll Accomplish
By the end of this guide, you'll have:
* ✅ A secure S3 bucket configured for web uploads
* ✅ IAM user with minimal required permissions
* ✅ CORS configuration for your domain
* ✅ Environment variables ready for production
* ✅ Cost optimization settings enabled
## Create AWS Account & S3 Bucket
If you don't have an AWS account, [sign up for free](https://aws.amazon.com/free/) - you get 5GB of S3 storage free for 12 months.
1. **Open S3 Console**: Go to [S3 Console](https://console.aws.amazon.com/s3/)
2. **Create Bucket**: Click "Create bucket"
3. **Configure Basic Settings**:
```bash
Bucket name: your-app-uploads-prod
Region: us-east-1 (or closest to your users)
```
**Bucket Naming**: Use a unique, descriptive name. Bucket names are global
across all AWS accounts and cannot be changed later.
4. **Block Public Access**: Keep all "Block public access" settings **enabled** (this is secure - we'll use presigned URLs)
5. **Enable Versioning**: Recommended for data protection
6. **Create Bucket**: Click "Create bucket"
## Configure CORS for Web Access
**Comprehensive CORS Guide**: For detailed CORS configuration, testing, and troubleshooting, see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl).
Your web application needs permission to upload files directly to S3.
1. **Open Your Bucket**: Click on your newly created bucket
2. **Go to Permissions Tab**: Click "Permissions"
3. **Edit CORS Configuration**: Scroll to "Cross-origin resource sharing (CORS)" and click "Edit"
4. **Add CORS Rules**:
```json
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
"AllowedOrigins": [
"http://localhost:3000",
"http://localhost:3001",
"https://localhost:3000"
],
"ExposeHeaders": ["ETag"],
"MaxAgeSeconds": 3000
}
]
```
```json
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
"AllowedOrigins": [
"https://yourdomain.com",
"https://www.yourdomain.com",
"https://staging.yourdomain.com"
],
"ExposeHeaders": ["ETag"],
"MaxAgeSeconds": 86400
}
]
```
5. **Save Changes**: Click "Save changes"
**Security Note**: Only add origins you trust. Wildcards (`*`) should never be
used in production - they allow any website to upload to your bucket.
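You can sanity-check the rules without a browser by sending a preflight-style request from Node (18+, which has global `fetch`); the bucket URL below is a placeholder for your own:
```typescript title="scripts/cors-check.ts"
// Simulate the preflight the browser sends before an upload
const res = await fetch('https://your-app-uploads-prod.s3.us-east-1.amazonaws.com/', {
  method: 'OPTIONS',
  headers: {
    Origin: 'https://yourdomain.com',
    'Access-Control-Request-Method': 'PUT',
  },
});

// An allowed origin is echoed back; an unknown origin gets no CORS headers
console.log(res.status, res.headers.get('access-control-allow-origin'));
```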
## Create IAM User with Minimal Permissions
Create a dedicated user for your application with only the permissions it needs.
1. **Open IAM Console**: Go to [IAM Console](https://console.aws.amazon.com/iam/)
2. **Create User**:
* Click "Users" β "Create user"
* Username: `your-app-s3-user`
* Select "Programmatic access" only
3. **Create Custom Policy**:
* Click "Attach policies directly"
* Click "Create policy"
* Use JSON editor and paste:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
"Resource": "arn:aws:s3:::your-app-uploads-prod/*"
},
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": "arn:aws:s3:::your-app-uploads-prod"
}
]
}
```
4. **Name the Policy**: `YourApp-S3-Upload-Policy`
5. **Attach to User**: Go back to user creation and attach your new policy
6. **Create User**: Complete the user creation
**Replace Bucket Name**: Make sure to replace `your-app-uploads-prod` with
your actual bucket name in the policy JSON.
## Get Access Keys
Your application needs these credentials to generate presigned URLs.
1. **Select Your User**: In IAM Users, click on your newly created user
2. **Security Credentials Tab**: Click "Security credentials"
3. **Create Access Key**:
* Click "Create access key"
* Select "Application running outside AWS"
* Click "Next"
4. **Copy Credentials**:
* **Access Key ID**: Copy this value
* **Secret Access Key**: Copy this value (you'll only see it once!)
**Security Alert**: Never commit these keys to version control or share them
publicly. Use environment variables or secure key management services.
## Configure Environment Variables
Add your AWS credentials to your application.
```bash
# .env.local
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
AWS_REGION=us-east-1
AWS_S3_BUCKET_NAME=your-app-uploads-prod
# Optional: Custom domain for public files (CDN, CloudFront, etc.)
S3_CUSTOM_DOMAIN=https://cdn.yourdomain.com
# Optional: Enable S3 debug logging
DEBUG=aws-sdk:*
```
```bash
# Use your hosting platform's environment variable system
# Never store production keys in .env files
# Vercel:
# vercel env add AWS_ACCESS_KEY_ID
# vercel env add AWS_SECRET_ACCESS_KEY
# Netlify:
# Add in Site settings > Environment variables
# Railway:
# Add in Variables tab
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
AWS_REGION=us-east-1
AWS_S3_BUCKET_NAME=your-app-uploads-prod
# Optional: Custom domain for public files (CDN, CloudFront, etc.)
S3_CUSTOM_DOMAIN=https://cdn.yourdomain.com
```
## Configure Custom Domain (Optional)
For better performance and branding, you can use a custom domain for your files.
### Option 1: CloudFront CDN (Recommended)
1. **Create CloudFront Distribution**:
* Go to [CloudFront Console](https://console.aws.amazon.com/cloudfront/)
* Click "Create Distribution"
* Origin Domain: Select your S3 bucket
* Origin Access: Use "Origin access control settings"
* Default Cache Behavior: Allow all HTTP methods
* Alternate Domain Names: Add your custom domain (e.g., `cdn.yourdomain.com`)
* SSL Certificate: Request or import your certificate
2. **Add DNS Record**:
```bash
# Add CNAME record in your DNS
cdn.yourdomain.com -> your-cloudfront-distribution.cloudfront.net
```
3. **Update Environment Variables**:
```bash
# Add to your .env.local
S3_CUSTOM_DOMAIN=https://cdn.yourdomain.com
```
### Option 2: Public Bucket with Custom Domain
If your bucket is public (not recommended for production):
1. **Configure Bucket for Website Hosting**:
* Go to your S3 bucket β Properties
* Enable "Static website hosting"
* Set index document to `index.html`
* Note the website endpoint
2. **Add DNS Record**:
```bash
# Add CNAME record in your DNS
uploads.yourdomain.com -> your-bucket.s3-website-region.amazonaws.com
```
3. **Update Environment Variables**:
```bash
# Add to your .env.local
S3_CUSTOM_DOMAIN=https://uploads.yourdomain.com
```
**Security Note**: Public buckets are not recommended for production. Use CloudFront with proper access controls instead.
## Configure Your App
Set up pushduck with your AWS S3 configuration:
```typescript
// lib/upload.ts
import { createUploadConfig } from "pushduck/server";
export const { s3, config } = createUploadConfig()
.provider("aws", {
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: process.env.AWS_REGION!,
bucket: process.env.AWS_S3_BUCKET_NAME!,
// Optional: Custom domain for public files
customDomain: process.env.S3_CUSTOM_DOMAIN,
})
.defaults({
maxFileSize: "10MB",
acl: "private", // Use 'public-read' for public files
})
.build();
```
**Custom Domain**: When `customDomain` is configured, public URLs will use your custom domain instead of the S3 URL. Internal operations (upload, delete) still use S3 endpoints.
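From here, route definitions and the API handler follow the same pattern as the rest of the docs; a minimal sketch, assuming you also destructure `createS3Router` from the builder:
```typescript
// lib/upload.ts (continued) — sketch
export const { s3, createS3Router } = createUploadConfig()
  .provider("aws", {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
    region: process.env.AWS_REGION!,
    bucket: process.env.AWS_S3_BUCKET_NAME!,
  })
  .build();

export const uploadRouter = createS3Router({
  imageUpload: s3.image().maxFileSize("5MB"),
});

// app/api/upload/[...path]/route.ts then re-exports the handlers:
// export const { GET, POST } = uploadRouter.handlers;
```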
## Test Your Configuration
Verify everything works by testing an upload.
1. **Start Your App**: Run your development server
2. **Test Upload**: Try uploading a file using your upload component
3. **Check S3**: Verify the file appears in your S3 bucket
4. **Check Access**: Verify you can access the uploaded file via its URL
If something's not working:
* ✅ Check CORS configuration matches your domain
* ✅ Verify IAM policy has correct bucket name
* ✅ Confirm environment variables are loaded
* ✅ Check browser console for specific error messages
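A small script can also rule out credential and bucket issues before you debug the browser side; a sketch, assuming you also export `storage` from `createUploadConfig().build()` as in the tRPC guide:
```typescript title="scripts/check-s3.ts"
import { storage } from "../lib/upload";

// Listing files proves credentials, region, and bucket name are all correct
const result = await storage.list.files({ prefix: "" });
console.log(`Bucket reachable, ${result.files.length} file(s) visible.`);
```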
## Congratulations!
Your AWS S3 bucket is now ready for production! Here's what you've accomplished:
* ✅ **Secure Storage**: Files are stored in AWS's enterprise-grade infrastructure
* ✅ **Cost Efficient**: Pay only for what you use, with free tier coverage
* ✅ **Globally Accessible**: Files available worldwide with low latency
* ✅ **Scalable**: Handles millions of files without configuration changes
* ✅ **Secure Access**: Minimal IAM permissions and proper CORS setup
## Cost Optimization
Keep your AWS bills low with these optimization tips:
### Storage Classes
```bash
# Standard: $0.023 per GB/month - for frequently accessed files
# Standard-IA: $0.0125 per GB/month - for infrequently accessed files
# Glacier: $0.004 per GB/month - for archival (retrieval takes hours)
```
### Lifecycle Policies
Set up automatic transitions to save money:
1. **Go to Your Bucket** → Management → Lifecycle rules
2. **Create Rule**:
* Transition to Standard-IA after 30 days
* Transition to Glacier after 90 days
* Delete incomplete multipart uploads after 1 day
### Request Optimization
* **Use CloudFront**: Cache files globally to reduce S3 requests
* **Batch Operations**: Group multiple operations when possible
* **Monitor Usage**: Set up billing alerts for unexpected costs
## Security Best Practices
### Access Control
```json
// Example: User-specific upload paths
{
"prefix": "users/${user.id}/*",
"maxFileSize": "10MB",
"types": ["image/jpeg", "image/png"]
}
```
### Monitoring
1. **Enable CloudTrail**: Track all S3 API calls
2. **Set Up Alerts**: Monitor unusual access patterns
3. **Regular Audits**: Review IAM permissions quarterly
### Backup Strategy
```bash
# Cross-region replication for critical data
Source Bucket: us-east-1
Replica Bucket: us-west-2
Replication: Real-time
```
## What's Next?
Now that AWS S3 is configured, explore these advanced features:
* **Image Uploads**: Handle image uploads with optimization and validation ([Image Guide →](/docs/guides/image-uploads))
* **API Reference**: Complete API documentation and examples (API Docs →)
## Pro Tips
**Naming Convention**: Use consistent bucket naming like `{company}-{app}-{environment}-uploads` for easy management across multiple projects.
**Cost Alert**: Set up AWS billing alerts at $5, $20, and $50 to avoid
surprise charges during development.
**Performance**: Place your S3 bucket in the same region as your application
server for fastest presigned URL generation.
***
**Need help with AWS S3 setup?** Join our [Discord community](https://pushduck.dev/discord) or check out the [troubleshooting guide](/docs/api/troubleshooting) for common issues.
# Cloudflare R2 (/docs/providers/cloudflare-r2)
## Using Cloudflare R2
Set up Cloudflare R2 for lightning-fast file uploads with zero egress fees.
## Why Choose Cloudflare R2?
* **Global Performance**: Cloudflare's edge network for fast uploads worldwide
* **Cost Effective**: 10x cheaper than S3 with zero egress fees
* **S3 Compatible**: Works with existing S3 tools and libraries
* **Built-in CDN**: Automatic content distribution via Cloudflare's network
## 1. Create an R2 Bucket
1. Go to [Cloudflare Dashboard](https://dash.cloudflare.com/)
2. Click **"R2 Object Storage"** in the sidebar
3. Click **"Create bucket"**
4. Choose a unique bucket name (e.g., `my-app-uploads`)
5. Select your preferred location (Auto for global performance)
6. Click **"Create bucket"**
## 2. Configure Public Access (Optional)
### For Public Files (Images, Documents)
1. Go to your bucket settings
2. Click **"Settings"** tab
3. Under **"Public access"**, click **"Allow Access"**
4. Choose **"Custom domain"** or use the R2.dev subdomain
**Custom Domain Setup:**
```bash
# Add a CNAME record in your DNS:
# uploads.yourdomain.com -> your-bucket.r2.cloudflarestorage.com
```
### For Private Files
Keep public access disabled - files will only be accessible via presigned URLs.
## 3. Generate API Token
1. Go to **"Manage R2 API tokens"**
2. Click **"Create API token"**
3. Set permissions:
* **Object:Read** ✅
* **Object:Write** ✅
* **Bucket:Read** ✅
4. Choose **"Specify bucket"** and select your bucket
5. Click **"Create API token"**
6. **Save your Access Key ID and Secret Access Key**
## 4. Configure CORS (If Using Custom Domain)
**Comprehensive CORS Guide**: For detailed CORS configuration, testing, and troubleshooting, see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl).
In your R2 bucket settings, add this CORS policy:
```json
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
"AllowedOrigins": ["http://localhost:3000", "https://yourdomain.com"],
"ExposeHeaders": ["ETag", "Content-Length"]
}
]
```
## 5. Configure Your App
### Environment Variables
Add to your `.env.local`:
```bash
# Cloudflare R2 Configuration
AWS_ACCESS_KEY_ID=your_r2_access_key_id
AWS_SECRET_ACCESS_KEY=your_r2_secret_access_key
AWS_ENDPOINT_URL=https://account-id.r2.cloudflarestorage.com
AWS_REGION=auto
S3_BUCKET_NAME=your-bucket-name
# Optional: Custom domain for public files
CLOUDFLARE_R2_CUSTOM_DOMAIN=https://uploads.yourdomain.com
```
### Custom Domain Setup (Optional)
For better performance and branding, you can use a custom domain:
1. **Add Custom Domain in R2 Dashboard**:
* Go to your R2 bucket β Settings
* Click "Add Custom Domain"
* Enter your domain (e.g., `uploads.yourdomain.com`)
* Cloudflare will provide DNS records to add
2. **Add DNS Records**:
```bash
# Add CNAME record in your DNS
uploads.yourdomain.com -> your-bucket.r2.cloudflarestorage.com
```
3. **SSL Certificate**: Cloudflare automatically provides SSL certificates for custom domains
4. **Update Environment Variables**:
```bash
# Add to your .env.local
CLOUDFLARE_R2_CUSTOM_DOMAIN=https://uploads.yourdomain.com
```
**Built-in CDN**: Cloudflare R2 custom domains automatically include global CDN acceleration with 250+ locations worldwide.
## 6. Update Your Upload Configuration
```typescript
// lib/upload.ts
import { createUploadConfig } from "pushduck/server";
export const { s3 } = createUploadConfig()
  .provider("cloudflareR2", {
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
accountId: process.env.R2_ACCOUNT_ID!, // Found in R2 dashboard
bucket: process.env.S3_BUCKET_NAME!,
// Optional: Custom domain for faster access
customDomain: process.env.CLOUDFLARE_R2_CUSTOM_DOMAIN,
})
.defaults({
maxFileSize: "10MB",
acl: "public-read", // For public access
})
.build();
```
## 7. Test Your Setup
```bash
npx @pushduck/cli@latest test --provider r2
```
This will verify your R2 connection and upload a test file.
## ✅ You're Ready!
Your Cloudflare R2 is now configured! Files will be:
* **Uploaded globally** via Cloudflare's edge network
* **Served fast** with built-in CDN
* **Cost effective** with zero egress fees
## Performance Benefits
### Global Upload Acceleration
R2 automatically routes uploads to the nearest Cloudflare data center:
```typescript
// Automatic performance optimization
const { uploadFiles } = upload.imageUpload();
// Uploads are automatically optimized for:
// - Nearest edge location
// - Fastest route to storage
// - Automatic retry on connection issues
```
### Built-in CDN
Your uploaded files are automatically cached globally:
```typescript
// Files are served from 250+ locations worldwide
const imageUrl = file.url; // Automatically CDN-accelerated
```
## Advanced Configuration
### Worker Integration
Integrate with Cloudflare Workers for server-side processing:
```typescript
// Advanced R2 setup with Workers
export const { s3 } = createUploadConfig()
  .provider("cloudflareR2", {
// ... basic config
workerScript: "image-transform", // Optional: Transform images on upload
webhookUrl: "https://api.yourdomain.com/webhook", // Optional: Post-upload webhook
})
.build();
```
### Analytics & Monitoring
Track upload performance:
```typescript
.hooks({
onUploadComplete: async ({ file, metadata }) => {
// Track successful uploads
await analytics.track("file_uploaded", {
provider: "cloudflare-r2",
size: file.size,
type: file.type,
location: metadata.cfRay, // Cloudflare location
});
}
})
```
## Security Best Practices
* **Use scoped API tokens** - Only grant permissions to specific buckets
* **Enable custom domain** - Better security than r2.dev subdomain
* **Set up WAF rules** - Protect against abuse via Cloudflare dashboard
* **Monitor usage** - Set up billing alerts for unexpected usage
## Common Issues
**CORS errors?** → Check your domain is in AllowedOrigins and verify custom domain setup. For detailed CORS configuration, see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl).\
**Access denied?** → Verify API token has Object:Read and Object:Write permissions\
**Slow uploads?** → Ensure you're using the correct endpoint URL with your account ID\
**Custom domain not working?** → Verify CNAME record and bucket public access settings
## Cost Comparison
| Provider | Storage | Egress | Requests |
| ----------------- | ------------ | ------------- | ------------- |
| **Cloudflare R2** | $0.015/GB | **FREE** | $0.36/million |
| AWS S3 | $0.023/GB | $0.09/GB | $0.40/million |
| **Savings** | **35% less** | **100% less** | **10% less** |
***
**Next:** [Upload Your First Image](/docs/guides/image-uploads) or try [DigitalOcean Spaces](/docs/providers/digitalocean-spaces)
# DigitalOcean Spaces (/docs/providers/digitalocean-spaces)
## Using DigitalOcean Spaces
Set up DigitalOcean Spaces for fast, affordable file uploads with built-in CDN.
## Why Choose DigitalOcean Spaces?
* **Predictable Pricing**: Simple pricing with generous free tier
* **Built-in CDN**: Free CDN with all Spaces for global performance
* **S3 Compatible**: Works seamlessly with S3 tools and libraries
* **Easy Setup**: Simple configuration with great developer experience
## 1. Create a Space
1. Go to [DigitalOcean Cloud Panel](https://cloud.digitalocean.com/spaces)
2. Click **"Create a Space"**
3. Choose a datacenter region (closest to your users)
4. Enter a unique Space name (e.g., `my-app-uploads`)
5. Choose **"Restrict File Listing"** for security
6. Enable **"CDN"** for global distribution
7. Click **"Create a Space"**
## 2. Configure CDN (Recommended)
DigitalOcean automatically creates a CDN endpoint:
* **Space URL**: `https://my-app-uploads.nyc3.digitaloceanspaces.com`
* **CDN URL**: `https://my-app-uploads.nyc3.cdn.digitaloceanspaces.com`
### Custom Domain (Optional)
Set up your own domain for branded URLs:
1. Go to **"Settings"** in your Space
2. Click **"Add Custom Domain"**
3. Enter your domain (e.g., `uploads.yourdomain.com`)
4. Add a CNAME record in your DNS:
```bash
# DNS Record:
uploads.yourdomain.com -> my-app-uploads.nyc3.cdn.digitaloceanspaces.com
```
## 3. Generate API Keys
1. Go to **"API"** in your DigitalOcean dashboard
2. Click **"Spaces access keys"**
3. Click **"Generate New Key"**
4. Enter a name (e.g., "My App Uploads")
5. **Save your Access Key and Secret Key**
## 4. Configure CORS
**Comprehensive CORS Guide**: For detailed CORS configuration, testing, and troubleshooting, see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl).
In your Space settings, add this CORS configuration:
```json
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
"AllowedOrigins": ["http://localhost:3000", "https://yourdomain.com"],
"ExposeHeaders": ["ETag", "Content-Length"],
"MaxAgeSeconds": 3000
}
]
```
## 5. Configure Your App
Add to your `.env.local`:
```bash
# DigitalOcean Spaces Configuration
AWS_ACCESS_KEY_ID=your_spaces_access_key
AWS_SECRET_ACCESS_KEY=your_spaces_secret_key
AWS_ENDPOINT_URL=https://nyc3.digitaloceanspaces.com
AWS_REGION=nyc3
S3_BUCKET_NAME=your-space-name
# CDN Configuration (recommended)
DO_SPACES_CUSTOM_DOMAIN=https://your-space-name.nyc3.cdn.digitaloceanspaces.com
# Or your custom domain:
# DO_SPACES_CUSTOM_DOMAIN=https://uploads.yourdomain.com
```
## 6. Update Your Upload Configuration
```typescript
// lib/upload.ts
import { createUploadConfig } from "pushduck/server";
export const { s3 } = createUploadConfig()
  .provider("digitalOceanSpaces", {
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: process.env.AWS_REGION!, // e.g., 'nyc3', 'sfo3', 'ams3'
bucket: process.env.S3_BUCKET_NAME!,
// Use CDN for faster file serving
customDomain: process.env.DO_SPACES_CUSTOM_DOMAIN,
})
.defaults({
maxFileSize: "50MB", // Spaces supports larger files
acl: "public-read",
})
.build();
```
## 7. Test Your Setup
```bash
npx @pushduck/cli@latest test --provider digitalocean
```
This will verify your Spaces connection and upload a test file.
## ✅ You're Ready!
Your DigitalOcean Space is configured! Benefits:
* **Global CDN** - Files served from 12+ locations worldwide
* **Affordable pricing** - $5/month for 250GB + 1TB transfer
* **High performance** - Built-in CDN acceleration
## CDN Benefits
### Automatic Global Distribution
Your files are automatically cached worldwide:
```typescript
// Files are served from the nearest CDN location
const imageUrl = file.url; // Automatically CDN-accelerated
// CDN locations include:
// - North America: NYC, SF, Toronto
// - Europe: Amsterdam, London, Frankfurt
// - Asia: Singapore, Bangalore
```
### Cache Control
Optimize caching for different file types:
```typescript
// Configure caching per upload type
const imageUpload = s3
.image()
.maxFileSize("10MB")
.onUploadComplete(async ({ file, key }) => {
// Set cache headers for images (long cache)
await setObjectMetadata(key, {
"Cache-Control": "public, max-age=31536000", // 1 year
"Content-Type": file.type,
});
});
const documentUpload = s3
.file()
.types(["pdf", "docx"])
.onUploadComplete(async ({ file, key }) => {
// Shorter cache for documents
await setObjectMetadata(key, {
"Cache-Control": "public, max-age=86400", // 1 day
});
});
```
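The `setObjectMetadata` helper above isn't part of pushduck; on S3-compatible stores, metadata is replaced by copying an object onto itself, so one possible implementation with the AWS SDK v3 looks like:
```typescript
import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({
  region: process.env.AWS_REGION,
  endpoint: process.env.AWS_ENDPOINT_URL, // e.g. https://nyc3.digitaloceanspaces.com
});

// Hypothetical helper: rewrite an object's headers in place
async function setObjectMetadata(key: string, headers: Record<string, string>) {
  const bucket = process.env.S3_BUCKET_NAME!;
  await client.send(
    new CopyObjectCommand({
      Bucket: bucket,
      Key: key,
      CopySource: `${bucket}/${encodeURIComponent(key)}`,
      MetadataDirective: "REPLACE",
      CacheControl: headers["Cache-Control"],
      ContentType: headers["Content-Type"],
    })
  );
}
```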
## Advanced Configuration
### Multiple Regions
Deploy across multiple regions for redundancy:
```typescript
// Primary region (closest to users)
const primarySpaces = createUploadConfig().provider("digitalOceanSpaces",{
region: "nyc3", // New York
bucket: "my-app-uploads-us",
});
// Backup region
const backupSpaces = createUploadConfig().provider("digitalOceanSpaces",{
region: "ams3", // Amsterdam
bucket: "my-app-uploads-eu",
});
```
### Lifecycle Policies
Automatically manage old files:
```typescript
// Configure automatic cleanup
.hooks({
onUploadComplete: async ({ file, metadata }) => {
// Schedule deletion of temporary files
if (metadata.category === 'temp') {
await scheduleCleanup(file.key, { days: 7 });
}
}
})
```
## Pricing Breakdown
| Resource | Included | Overage |
| ------------ | --------- | -------- |
| **Storage** | 250 GB | $0.02/GB |
| **Transfer** | 1 TB | $0.01/GB |
| **CDN** | Included | Free |
| **Requests** | Unlimited | Free |
**Total**: $5/month for most small-medium apps
## Security Features
### Private Spaces
For sensitive files:
```typescript
// Configure private access
export const { s3 } = createUploadConfig()
  .provider("digitalOceanSpaces", {
// ... config
acl: "private", // Files not publicly accessible
})
.defaults({
generatePresignedUrl: true, // Generate secure URLs
urlExpirationHours: 1, // URLs expire after 1 hour
})
.build();
```
### File Access Control
Control who can access what:
```typescript
.middleware(async ({ req, file }) => {
const user = await authenticate(req);
// Only allow users to upload to their own folder
const userPrefix = `users/${user.id}/`;
return {
userId: user.id,
keyPrefix: userPrefix,
};
})
```
## Monitoring & Analytics
### Usage Monitoring
Track your Space usage:
```typescript
// Monitor uploads in real-time
.hooks({
onUploadStart: async ({ file }) => {
await analytics.track("upload_started", {
provider: "digitalocean-spaces",
fileSize: file.size,
fileType: file.type,
});
},
onUploadComplete: async ({ file, metadata }) => {
await analytics.track("upload_completed", {
provider: "digitalocean-spaces",
duration: metadata.uploadTime,
cdnEnabled: true,
});
}
})
```
## Common Issues
**CORS errors?** → Verify your domain is in AllowedOrigins and CORS is enabled. For detailed CORS configuration, see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl).\
**Slow uploads?** → Check you're using the correct regional endpoint\
**CDN not working?** → Verify CDN is enabled and using the cdn.digitaloceanspaces.com endpoint\
**Access denied?** → Check your API keys have Spaces read/write permissions\
**File not found?** → Ensure you're using the CDN endpoint for file access
## Performance Tips
1. **Use CDN endpoints** for all file access (not direct Space URLs)
2. **Choose closest region** to your primary user base
3. **Enable gzip compression** for text files
4. **Set proper cache headers** for different file types
5. **Use progressive image formats** (WebP, AVIF) when possible
***
**Next:** [Upload Your First Image](/docs/guides/image-uploads) or try [MinIO Setup](/docs/providers/minio)
# Google Cloud Storage (/docs/providers/google-cloud)
## Using Google Cloud Storage
Set up Google Cloud Storage (GCS) for scalable, global file uploads with Google's infrastructure.
## Why Choose Google Cloud Storage?
* **Global Infrastructure**: Google's worldwide network for fast access
* **S3 Compatible**: Works with S3-compatible libraries via XML API
* **Competitive Pricing**: Cost-effective with multiple storage classes
* **Enterprise Security**: Google-grade security and compliance
* **High Performance**: Optimized for speed and reliability
## 1. Create a GCS Bucket
### Using Google Cloud Console
1. Go to [Google Cloud Console](https://console.cloud.google.com/storage)
2. Select or create a project
3. Click **"Create bucket"**
4. Configure your bucket:
* **Name**: Choose a globally unique name (e.g., `my-app-uploads-bucket`)
* **Location**: Choose region closest to your users
* **Storage class**: Standard (for frequently accessed files)
* **Access control**: Fine-grained (recommended)
5. Click **"Create"**
### Using gcloud CLI
```bash
# Install gcloud CLI (if not already installed)
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init
# Create bucket
gsutil mb -p your-project-id -c STANDARD -l us-central1 gs://my-app-uploads-bucket
```
## 2. Configure Public Access (Optional)
### For Public Files (Images, Documents)
```bash
# Make bucket publicly readable
gsutil iam ch allUsers:objectViewer gs://my-app-uploads-bucket
# Or using Console:
# Storage → Bucket → Permissions → Add → "allUsers" → "Storage Object Viewer"
```
### For Private Files
Keep default settings - files will only be accessible via signed URLs.
## 3. Create Service Account
1. Go to [IAM & Admin](https://console.cloud.google.com/iam-admin/serviceaccounts)
2. Click **"Create Service Account"**
3. Enter details:
* **Name**: `upload-service`
* **Description**: "Service account for file uploads"
4. Click **"Create and Continue"**
5. Grant roles:
* **Storage Admin** (or **Storage Object Admin** for bucket-specific access)
6. Click **"Continue"** β **"Done"**
## 4. Generate Service Account Key
1. Click on your service account
2. Go to **"Keys"** tab
3. Click **"Add Key"** β **"Create new key"**
4. Choose **"JSON"** format
5. **Download and securely store the JSON file**
## 5. Configure CORS
**Comprehensive CORS Guide**: For detailed CORS configuration, testing, and troubleshooting, see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl).
```bash
# Create cors.json file
cat > cors.json << EOF
[
{
"origin": ["http://localhost:3000", "https://yourdomain.com"],
"method": ["GET", "PUT", "POST", "DELETE", "HEAD"],
"responseHeader": ["Content-Type", "ETag", "Content-Length"],
"maxAgeSeconds": 3600
}
]
EOF
# Apply CORS configuration
gsutil cors set cors.json gs://my-app-uploads-bucket
```
## 6. Configure Your App
Add to your `.env.local`:
```bash
# Google Cloud Storage Configuration
GOOGLE_APPLICATION_CREDENTIALS=./path/to/service-account-key.json
GCS_PROJECT_ID=your-project-id
GCS_BUCKET_NAME=my-app-uploads-bucket
# Optional: Custom domain for public files
GCS_CUSTOM_DOMAIN=https://storage.googleapis.com/my-app-uploads-bucket
# Or with custom domain: GCS_CUSTOM_DOMAIN=https://uploads.yourdomain.com
```
## 7. Update Your Upload Configuration
```typescript
// lib/upload.ts
import { createUploadConfig } from "pushduck/server";
export const { s3 } = createUploadConfig()
  .provider("gcs", {
projectId: process.env.GCS_PROJECT_ID!,
keyFilename: process.env.GOOGLE_APPLICATION_CREDENTIALS!,
bucket: process.env.GCS_BUCKET_NAME!,
// Optional: Custom domain for public files
customDomain: process.env.GCS_CUSTOM_DOMAIN,
})
.defaults({
maxFileSize: "100MB", // GCS supports large files
acl: "publicRead", // For public access
})
.build();
```
## 8. Test Your Setup
```bash
npx @pushduck/cli@latest test --provider gcs
```
This will verify your GCS connection and upload a test file.
## ✅ You're Ready!
Your Google Cloud Storage is configured! Benefits:
* **Global performance** via Google's network
* **Automatic scaling** to handle any load
* **Enterprise-grade security** and compliance
* **Multiple storage classes** for cost optimization
## Advanced Features
### Multi-Regional Storage
```typescript
// Configure multi-regional bucket for global performance
export const { s3 } = createUploadConfig()
  .provider("gcs", {
// ... basic config
bucket: process.env.GCS_BUCKET_NAME!,
// Multi-regional configuration
location: "US", // or "EU", "ASIA"
storageClass: "MULTI_REGIONAL",
})
.build();
```
### Storage Classes
Optimize costs with different storage classes:
```typescript
// Configure lifecycle policies for cost optimization
.hooks({
onUploadComplete: async ({ file, key, metadata }) => {
// Move old files to cheaper storage classes
if (metadata.category === 'archive') {
await moveToStorageClass(key, 'COLDLINE');
} else if (metadata.category === 'backup') {
await moveToStorageClass(key, 'ARCHIVE');
}
}
})
```
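`moveToStorageClass` is not defined in this guide; with the official `@google-cloud/storage` client it could be as simple as:
```typescript
import { Storage } from "@google-cloud/storage";

const gcs = new Storage({ projectId: process.env.GCS_PROJECT_ID });

// Hypothetical helper used in the hook above
async function moveToStorageClass(
  key: string,
  storageClass: "NEARLINE" | "COLDLINE" | "ARCHIVE"
) {
  // Rewrites the object under the new storage class
  await gcs.bucket(process.env.GCS_BUCKET_NAME!).file(key).setStorageClass(storageClass);
}
```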
### CDN Integration
```typescript
// Use Cloud CDN for faster global delivery
export const { s3 } = createUploadConfig()
  .provider("gcs", {
// ... config
// Configure CDN endpoint
cdnUrl: "https://your-cdn-domain.com",
// Cache control headers
defaultCacheControl: "public, max-age=31536000",
})
.build();
```
## Security Best Practices
### Bucket-Level IAM
```bash
# Create custom role with minimal permissions
gcloud iam roles create upload_service_role \
--project=your-project-id \
--title="Upload Service Role" \
--permissions="storage.objects.create,storage.objects.get"
# Bind role to service account
gcloud projects add-iam-policy-binding your-project-id \
--member="serviceAccount:upload-service@your-project-id.iam.gserviceaccount.com" \
--role="projects/your-project-id/roles/upload_service_role"
```
### Object-Level Security
```typescript
// Implement user-based access control
.middleware(async ({ req, file }) => {
const user = await authenticate(req);
// Generate user-specific path
const userPath = `users/${user.id}/${file.name}`;
return {
userId: user.id,
keyPrefix: `users/${user.id}/`,
metadata: {
uploadedBy: user.id,
uploadedAt: new Date().toISOString(),
}
};
})
```
### Signed URLs for Private Access
```typescript
// Generate time-limited access URLs
.hooks({
onUploadComplete: async ({ file, key }) => {
if (file.private) {
// Generate signed URL valid for 1 hour
const signedUrl = await generateSignedUrl(key, {
action: 'read',
expires: Date.now() + 60 * 60 * 1000, // 1 hour
});
return { ...file, url: signedUrl };
}
return file;
}
})
```
## Cost Optimization
### Storage Class Strategy
| Use Case | Storage Class | Cost | Access Pattern |
| ---------------- | ------------- | ------ | ---------------- |
| **Active files** | Standard | Higher | Frequent access |
| **Backups** | Nearline | Medium | Monthly access |
| **Archives** | Coldline | Lower | Quarterly access |
| **Long-term** | Archive | Lowest | Yearly access |
### Lifecycle Management
```typescript
// Automatic lifecycle transitions
const lifecyclePolicy = {
rule: [
{
action: { type: "SetStorageClass", storageClass: "NEARLINE" },
condition: { age: 30 }, // Move to Nearline after 30 days
},
{
action: { type: "SetStorageClass", storageClass: "COLDLINE" },
condition: { age: 90 }, // Move to Coldline after 90 days
},
{
action: { type: "Delete" },
condition: { age: 365 }, // Delete after 1 year
},
],
};
```
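Lifecycle rules live on the bucket, so the policy object above has to be applied once; a sketch with the official client (`lifecyclePolicy` is the object defined above):
```typescript
import { Storage } from "@google-cloud/storage";

const gcs = new Storage({ projectId: process.env.GCS_PROJECT_ID });

async function applyLifecyclePolicy() {
  // Overwrites the bucket's lifecycle configuration with the rules above
  await gcs
    .bucket(process.env.GCS_BUCKET_NAME!)
    .setMetadata({ lifecycle: lifecyclePolicy });
}
```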
## Monitoring & Analytics
### Cloud Monitoring Integration
```typescript
// Track upload metrics
.hooks({
onUploadStart: async ({ file }) => {
await cloudMonitoring.createTimeSeries({
name: 'custom.googleapis.com/upload/started',
value: 1,
labels: {
file_type: file.type,
file_size_mb: Math.round(file.size / 1024 / 1024),
}
});
},
onUploadComplete: async ({ file, metadata }) => {
await cloudMonitoring.createTimeSeries({
name: 'custom.googleapis.com/upload/completed',
value: metadata.uploadDuration,
labels: {
success: 'true',
provider: 'gcs',
}
});
}
})
```
### Usage Analytics
```typescript
// Track storage usage and costs (uses the official @google-cloud/storage client)
import { Storage } from "@google-cloud/storage";

const gcs = new Storage({ projectId: process.env.GCS_PROJECT_ID });

const getStorageMetrics = async () => {
  const bucket = gcs.bucket(process.env.GCS_BUCKET_NAME!);
  const [metadata] = await bucket.getMetadata();
  const [files] = await bucket.getFiles();

  return {
    objectCount: files.length,
    totalSize: files.reduce((sum, f) => sum + Number(f.metadata.size ?? 0), 0),
    storageClass: metadata.storageClass,
    location: metadata.location,
  };
};
```
## Custom Domain Setup
### 1. Verify Domain Ownership
```bash
# Add verification record to DNS
# TXT record: google-site-verification=your-verification-code
# Verify domain
gcloud domains verify yourdomain.com
```
### 2. Configure CNAME
```bash
# Add CNAME record to DNS
# uploads.yourdomain.com -> c.storage.googleapis.com
```
### 3. Update Configuration
```typescript
// Use custom domain for file URLs
export const { s3 } = createUploadConfig()
  .provider("gcs", {
// ... config
customDomain: "https://uploads.yourdomain.com",
})
.build();
```
## Common Issues
**Authentication errors?** → Check service account key path and permissions\
**CORS errors?** → Verify CORS configuration and allowed origins. For detailed CORS configuration, see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl).\
**Access denied?** → Check IAM roles and bucket permissions\
**Slow uploads?** → Choose region closer to your users\
**Quota exceeded?** → Check project quotas and billing account
## Performance Tips
1. **Choose the right region** - closest to your users
2. **Use multi-regional** for global applications
3. **Enable CDN** for frequently accessed files
4. **Optimize image sizes** before upload
5. **Use parallel uploads** for multiple files
6. **Implement proper caching** headers
***
**Next:** [Upload Your First Image](/docs/guides/image-uploads) or check out our [Examples](/docs/examples)
# Storage Providers (/docs/providers)
import { Card, Cards } from "fumadocs-ui/components/card";
import { Callout } from "fumadocs-ui/components/callout";
## Supported Storage Providers
Pushduck supports multiple cloud storage providers through a unified S3-compatible API. Choose the provider that best fits your needs, budget, and infrastructure requirements.
**Universal S3 API**: All providers use the same configuration pattern, making it easy to switch between them or use multiple providers in the same project.
## Supported Providers
### AWS S3
**Best for**: Enterprise applications, complex workflows, global scale
✅ Global edge locations\
✅ Advanced security features\
✅ Comprehensive ecosystem\
✅ Trusted by millions
### Cloudflare R2
**Best for**: High-traffic applications, cost optimization
✅ No egress fees\
✅ Global edge network\
✅ Fast performance\
✅ Simple pricing
### DigitalOcean Spaces
**Best for**: Small to medium applications, predictable costs
✅ Flat-rate pricing\
✅ Built-in CDN\
✅ Simple setup\
✅ Developer-friendly
### Google Cloud Storage
**Best for**: AI/ML workloads, Google ecosystem integration
✅ AI integration\
✅ Global network\
✅ Strong consistency\
✅ Advanced analytics
### MinIO
**Best for**: Self-hosted deployments, data sovereignty
✅ Self-hosted\
✅ Full control\
✅ High performance\
✅ Open source
### S3-Compatible
**Best for**: Custom deployments, other S3-compatible services
✅ Universal compatibility\
✅ Custom endpoints\
✅ Flexible configuration\
✅ Vendor independence
## Quick Setup
All providers follow the same configuration pattern:
```typescript
import { createUploadConfig } from 'pushduck/server';

// Configure your storage provider
const { s3, createS3Router } = createUploadConfig()
  .provider("aws", { // or "cloudflareR2", "digitalOceanSpaces", "minio", ...
    region: 'us-east-1',
    bucket: 'your-bucket-name',
    accessKeyId: process.env.ACCESS_KEY_ID!,
    secretAccessKey: process.env.SECRET_ACCESS_KEY!,
  })
  .build();

// Define your upload routes
export const uploadRouter = createS3Router({
  imageUpload: s3.image().maxFileSize("5MB"),
  documentUpload: s3.file().maxFileSize("10MB"),
});
```
## Provider Comparison
| Provider | Pricing Model | Egress Fees | CDN Included | Best For |
| ----------------------- | ------------- | -------------- | ---------------- | ---------------------------- |
| **AWS S3** | Pay-per-use | Yes | Separate service | Enterprise, global scale |
| **Cloudflare R2** | Pay-per-use | **No** | Yes | High-traffic, cost-sensitive |
| **DigitalOcean Spaces** | Flat-rate | Included quota | Yes | Predictable costs |
| **Google Cloud** | Pay-per-use | Yes | Separate service | AI/ML integration |
| **MinIO** | Self-hosted | None | Self-managed | Data sovereignty |
## Need Help Choosing?
**Quick Recommendation**:
* **Starting out?** → DigitalOcean Spaces (simple, predictable)
* **High traffic?** → Cloudflare R2 (no egress fees)
* **Enterprise?** → AWS S3 (full ecosystem)
* **Self-hosted?** → MinIO (complete control)
## Next Steps
1. Choose your provider from the cards above
2. Follow the provider-specific setup guide
3. Configure your upload routes
4. Start uploading files!
Each provider guide includes:
* Step-by-step setup instructions
* Environment variable configuration
* Production deployment tips
* Troubleshooting common issues
# MinIO (/docs/providers/minio)
## Using MinIO
Set up MinIO for self-hosted, S3-compatible object storage with full control and privacy.
## Why Choose MinIO?
* **Self-Hosted**: Complete control over your data and infrastructure
* **S3 Compatible**: Drop-in replacement for AWS S3 API
* **Cost Effective**: No cloud provider fees - just your server costs
* **Private**: Keep sensitive data on your own infrastructure
* **High Performance**: Optimized for speed and throughput
## Quick Start with Docker
### 1. Start MinIO Server
```bash
# Create data directory
mkdir -p ~/minio/data
# Start MinIO server
docker run -d \
--name minio \
-p 9000:9000 \
-p 9001:9001 \
-v ~/minio/data:/data \
-e "MINIO_ROOT_USER=minioadmin" \
-e "MINIO_ROOT_PASSWORD=minioadmin123" \
quay.io/minio/minio server /data --console-address ":9001"
```
### 2. Access MinIO Console
1. Open `http://localhost:9001` in your browser
2. Login with:
* **Username**: `minioadmin`
* **Password**: `minioadmin123`
### 3. Create a Bucket
1. Click **"Create Bucket"**
2. Enter bucket name (e.g., `my-app-uploads`)
3. Click **"Create Bucket"**
### 4. Create Access Keys
1. Go to **"Identity"** β **"Service Accounts"**
2. Click **"Create service account"**
3. Enter a name (e.g., "Upload Service")
4. Click **"Create"**
5. **Save your Access Key and Secret Key**
## Production Docker Setup
### Docker Compose Configuration
```yaml
# docker-compose.yml
version: "3.8"
services:
minio:
image: quay.io/minio/minio:latest
container_name: minio
ports:
- "9000:9000" # API
- "9001:9001" # Console
volumes:
- minio_data:/data
environment:
MINIO_ROOT_USER: ${MINIO_ROOT_USER}
MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
command: server /data --console-address ":9001"
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
interval: 30s
timeout: 20s
retries: 3
volumes:
minio_data:
```
### Environment Variables
```bash
# .env
MINIO_ROOT_USER=your-admin-username
MINIO_ROOT_PASSWORD=your-secure-password-min-8-chars
```
### Start Production Server
```bash
docker-compose up -d
```
## Production Deployment
### 1. Reverse Proxy Setup (Nginx)
```nginx
# /etc/nginx/sites-available/minio
server {
listen 80;
server_name uploads.yourdomain.com;
location / {
proxy_pass http://localhost:9000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Handle large uploads
client_max_body_size 100M;
}
}
# Console (optional - for admin access)
server {
listen 80;
server_name minio-console.yourdomain.com;
location / {
proxy_pass http://localhost:9001;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
### 2. SSL/TLS Setup
```bash
# Install Certbot
sudo apt install certbot python3-certbot-nginx
# Get SSL certificates
sudo certbot --nginx -d uploads.yourdomain.com
sudo certbot --nginx -d minio-console.yourdomain.com
```
## Configure Your App
### Environment Variables
Add to your `.env.local`:
```bash
# MinIO Configuration
AWS_ACCESS_KEY_ID=your_minio_access_key
AWS_SECRET_ACCESS_KEY=your_minio_secret_key
AWS_ENDPOINT_URL=http://localhost:9000
# For production: AWS_ENDPOINT_URL=https://uploads.yourdomain.com
AWS_REGION=us-east-1
S3_BUCKET_NAME=my-app-uploads
# Optional: Custom domain for public files
MINIO_CUSTOM_DOMAIN=https://uploads.yourdomain.com
```
### Custom Domain Setup (Optional)
For production deployments, you can use a custom domain:
1. **Configure Reverse Proxy** (Nginx/Apache):
```nginx
server {
listen 80;
server_name uploads.yourdomain.com;
location / {
proxy_pass http://localhost:9000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
```
2. **Add DNS Record**:
```bash
# Add A record in your DNS
uploads.yourdomain.com -> your-server-ip
```
3. **SSL Certificate** (Recommended):
```bash
# Using Let's Encrypt
certbot --nginx -d uploads.yourdomain.com
```
## Update Your Upload Configuration
```typescript
// lib/upload.ts
import { createUploadConfig } from "pushduck/server";
export const { s3 } = createUploadConfig()
  .provider("minio", {
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
endpoint: process.env.AWS_ENDPOINT_URL!,
region: process.env.AWS_REGION!,
bucket: process.env.S3_BUCKET_NAME!,
// Force path-style URLs for MinIO
forcePathStyle: true,
// Optional: Custom domain for public files
customDomain: process.env.MINIO_CUSTOM_DOMAIN,
})
.defaults({
maxFileSize: "100MB", // MinIO handles large files well
acl: "public-read",
})
.build();
```
## Test Your Setup
```bash
npx @pushduck/cli@latest test --provider minio
```
This will verify your MinIO connection and upload a test file.
## ✅ You're Ready!
Your MinIO server is configured! Benefits:
* **Full control** over your data
* **No cloud fees** - just server costs
* **High performance** for local/regional traffic
* **Complete privacy** for sensitive files
## Security Configuration
### 1. Bucket Policies
Set up access control for your bucket:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-app-uploads/*"
}
]
}
```
### 2. Access Key Management
Create restricted access keys:
```typescript
// Service account with limited permissions
const uploadOnlyPolicy = {
Version: "2012-10-17",
Statement: [
{
Effect: "Allow",
Action: ["s3:PutObject", "s3:PutObjectAcl", "s3:GetObject"],
Resource: "arn:aws:s3:::my-app-uploads/*",
},
],
};
```
### 3. Network Security
```yaml
# docker-compose.yml with network isolation
version: "3.8"
services:
minio:
# ... other config
networks:
- minio-network
ports:
# Only expose what's needed
- "127.0.0.1:9000:9000" # Bind to localhost only
- "127.0.0.1:9001:9001"
networks:
minio-network:
driver: bridge
```
## Monitoring & Maintenance
### Health Checks
```typescript
// Add health monitoring
.hooks({
onUploadStart: async () => {
// Check MinIO connectivity
const isHealthy = await checkMinIOHealth();
if (!isHealthy) {
throw new Error("MinIO server unavailable");
}
}
})
async function checkMinIOHealth() {
try {
const response = await fetch(`${process.env.AWS_ENDPOINT_URL}/minio/health/live`);
return response.ok;
} catch {
return false;
}
}
```
### Backup Strategy
```bash
#!/bin/bash
# backup-minio.sh - regular MinIO data backups
BACKUP_DIR="/backups/minio/$(date +%Y-%m-%d)"
mkdir -p $BACKUP_DIR
# Backup MinIO data
docker run --rm \
-v minio_data:/source:ro \
-v $BACKUP_DIR:/backup \
alpine tar czf /backup/minio-data.tar.gz -C /source .
# Backup configuration
docker exec minio mc admin config export /backup/
```
## Performance Optimization
### 1. Storage Configuration
```yaml
# Optimized for performance
services:
minio:
# ... other config
environment:
# Performance tuning
MINIO_CACHE: "on"
MINIO_CACHE_DRIVES: "/tmp/cache"
MINIO_CACHE_QUOTA: "90"
volumes:
- minio_data:/data
- /tmp/minio-cache:/tmp/cache # Fast SSD cache
```
### 2. Connection Optimization
```typescript
// Optimize for high throughput
export const { s3 } = createUploadConfig()
  .provider("minio", {
// ... config
maxRetries: 3,
retryDelayOptions: {
base: 300,
customBackoff: (retryCount) => Math.pow(2, retryCount) * 100,
},
// Connection pooling
maxSockets: 25,
timeout: 120000,
})
.build();
```
## Advanced Features
### Multi-Tenant Setup
```typescript
// Different buckets per tenant
const createTenantConfig = (tenantId: string) =>
createUploadConfig()
.provider("minio",{
// ... base config
bucket: `tenant-${tenantId}-uploads`,
})
.middleware(async ({ req }) => {
const tenant = await getTenantFromRequest(req);
return { tenantId: tenant.id };
})
.build();
```
### Distributed Setup
```yaml
# Multi-node MinIO cluster
version: "3.8"
services:
minio1:
image: quay.io/minio/minio:latest
command: server http://minio{1...4}/data{1...2} --console-address ":9001"
# ... configuration
minio2:
image: quay.io/minio/minio:latest
command: server http://minio{1...4}/data{1...2} --console-address ":9001"
# ... configuration
# minio3, minio4...
```
## Common Issues
**Connection refused?** → Check MinIO is running and port 9000 is accessible\
**Access denied?** → Verify access keys and bucket permissions\
**CORS errors?** → Set bucket policy to allow your domain. For detailed CORS configuration, see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl).\
**Slow uploads?** → Check network connection and server resources\
**SSL errors?** → Verify certificate configuration for custom domains
## Use Cases
### Development Environment
* **Local testing** without cloud dependency
* **Offline development** for air-gapped environments
* **Cost-free** development and testing
### Production Scenarios
* **Data sovereignty** requirements
* **High-security** environments
* **Edge computing** deployments
* **Hybrid cloud** strategies
***
**Next:** [Upload Your First Image](/docs/guides/image-uploads) or explore [Google Cloud Storage](/docs/providers/google-cloud)
# S3-Compatible (/docs/providers/s3-compatible)
import { Step, Steps } from "fumadocs-ui/components/steps";
import { Callout } from "fumadocs-ui/components/callout";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
## S3-Compatible Storage
Connect to any S3-compatible storage service for flexible, vendor-agnostic file uploads with full type safety.
## Why Choose S3-Compatible?
* **Vendor Flexibility**: Works with any S3-compatible service
* **Self-Hosted Options**: Perfect for custom or self-hosted solutions
* **Standard API**: Uses the familiar S3 API across all providers
* **Cost Control**: Choose providers based on your budget needs
* **Data Sovereignty**: Keep data where you need it geographically
**Perfect for**: Self-hosted MinIO, SeaweedFS, Garage, custom storage solutions, or any S3-compatible service not explicitly supported by dedicated providers.
## Common S3-Compatible Services
| Service | Use Case | Best For |
| -------------------- | ------------------- | -------------------------------- |
| **SeaweedFS** | Distributed storage | High-performance clusters |
| **Garage** | Lightweight storage | Self-hosted, minimal resources |
| **Ceph RadosGW** | Enterprise storage | Large-scale deployments |
| **Wasabi** | Cloud storage | Cost-effective cloud alternative |
| **Backblaze B2** | Backup storage | Archive and backup scenarios |
| **Custom Solutions** | Specialized needs | Custom implementations |
## Identify Your S3-Compatible Service
First, gather the required information from your storage provider:
### Required Information
* **Endpoint URL**: The API endpoint for your service
* **Access Key ID**: Your access key or username
* **Secret Access Key**: Your secret key or password
* **Bucket Name**: The bucket/container where files will be stored
### Common Endpoint Patterns
```bash
# Self-hosted MinIO
https://minio.yourdomain.com
# SeaweedFS
https://seaweedfs.yourdomain.com:8333
# Wasabi (if not using dedicated provider)
https://s3.wasabisys.com
# Backblaze B2 (S3-compatible endpoint)
https://s3.us-west-000.backblazeb2.com
# Custom deployment
https://storage.yourcompany.com
```
## Verify S3 API Compatibility
Ensure your service supports the required S3 operations:
### Required Operations
* `PutObject` - Upload files
* `GetObject` - Download files
* `DeleteObject` - Delete files
* `ListObjects` - List bucket contents
* `CreateMultipartUpload` - Large file uploads
### Test API Access
```bash
# Test basic connectivity
curl -X GET "https://your-endpoint.com" \
-H "Authorization: AWS ACCESS_KEY:SECRET_KEY"
# Test bucket access
curl -X GET "https://your-endpoint.com/your-bucket" \
-H "Authorization: AWS ACCESS_KEY:SECRET_KEY"
```
```bash
# Configure AWS CLI for testing
aws configure set aws_access_key_id YOUR_ACCESS_KEY
aws configure set aws_secret_access_key YOUR_SECRET_KEY
aws configure set default.region us-east-1
# Test with custom endpoint
aws s3 ls s3://your-bucket --endpoint-url https://your-endpoint.com
```
## Configure CORS (If Required)
**Comprehensive CORS Guide**: For detailed CORS configuration, testing, and troubleshooting, see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl).
Many S3-compatible services require CORS configuration for web uploads:
### Standard CORS Configuration
```bash
# Using MinIO client (mc)
mc cors set your-bucket --rule "effect=Allow&origin=*&methods=GET,PUT,POST,DELETE&headers=*"
# For production, restrict origins:
mc cors set your-bucket --rule "effect=Allow&origin=https://yourdomain.com&methods=GET,PUT,POST,DELETE&headers=*"
```
```json
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
"AllowedOrigins": [
"http://localhost:3000",
"https://yourdomain.com"
],
"ExposeHeaders": ["ETag", "Content-Length"],
"MaxAgeSeconds": 3600
}
]
```
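If your service also implements the S3 bucket-management API, you can usually apply this configuration with the AWS CLI; note that `put-bucket-cors` expects the rules wrapped in a `CORSRules` object (bucket name and endpoint are placeholders):
```bash
# cors.json: {"CORSRules": [ ...the rules shown above... ]}
aws s3api put-bucket-cors \
  --bucket your-bucket \
  --cors-configuration file://cors.json \
  --endpoint-url https://your-endpoint.com
```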
## Configure Environment Variables
Set up your environment variables for the S3-compatible service:
```bash
# .env.local
S3_ENDPOINT=https://your-storage-service.com
S3_BUCKET_NAME=your-bucket-name
S3_ACCESS_KEY_ID=your_access_key
S3_SECRET_ACCESS_KEY=your_secret_key
S3_REGION=us-east-1
# Optional: Force path-style URLs (required for most self-hosted)
S3_FORCE_PATH_STYLE=true
# Optional: Custom domain for public files
S3_CUSTOM_DOMAIN=https://files.yourdomain.com
```
```bash
# Use your hosting platform's environment system
# Never store production keys in .env files
S3_ENDPOINT=https://your-storage-service.com
S3_BUCKET_NAME=your-bucket-name
S3_ACCESS_KEY_ID=your_access_key
S3_SECRET_ACCESS_KEY=your_secret_key
S3_REGION=us-east-1
S3_FORCE_PATH_STYLE=true
S3_CUSTOM_DOMAIN=https://files.yourdomain.com
```
## Configure Custom Domain (Optional)
For better performance and branding, you can use a custom domain for your files.
### Option 1: CDN/Reverse Proxy
1. **Set up a CDN or reverse proxy** (Nginx, Cloudflare, etc.):
```nginx
# Nginx configuration example
server {
listen 80;
server_name files.yourdomain.com;
location / {
proxy_pass https://your-storage-service.com;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
```
2. **Add DNS Record**:
```bash
# Add CNAME record in your DNS
files.yourdomain.com -> your-cdn-or-proxy-domain.com
```
3. **Update Environment Variables**:
```bash
# Add to your .env.local
S3_CUSTOM_DOMAIN=https://files.yourdomain.com
```
### Option 2: Public Bucket with Custom Domain
If your S3-compatible service supports public buckets:
1. **Configure public access** in your storage service
2. **Set up DNS** to point to your storage endpoint
3. **Update environment variables** with your custom domain
**Security Note**: Public buckets should only be used for non-sensitive files. Consider using presigned URLs for private content.
## Update Your Upload Configuration
Configure pushduck to use your S3-compatible service:
```typescript
// lib/upload.ts
import { createUploadConfig } from "pushduck/server";
export const { s3, config } = createUploadConfig()
.provider("s3Compatible", {
endpoint: process.env.S3_ENDPOINT!,
bucket: process.env.S3_BUCKET_NAME!,
accessKeyId: process.env.S3_ACCESS_KEY_ID!,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
region: process.env.S3_REGION || "us-east-1",
// Most S3-compatible services need path-style URLs
forcePathStyle: process.env.S3_FORCE_PATH_STYLE === "true",
// Optional: Custom domain for file access
customDomain: process.env.S3_CUSTOM_DOMAIN,
})
.defaults({
maxFileSize: "50MB",
acl: "public-read", // Adjust based on your needs
})
.build();
```
### Advanced Configuration
```typescript
// For services with specific requirements
export const { s3, config } = createUploadConfig()
.provider("s3Compatible", {
endpoint: process.env.S3_ENDPOINT!,
bucket: process.env.S3_BUCKET_NAME!,
accessKeyId: process.env.S3_ACCESS_KEY_ID!,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
region: process.env.S3_REGION || "us-east-1",
forcePathStyle: true,
})
.paths({
// Organize files by service type
prefix: "uploads",
generateKey: (file, metadata) => {
const date = new Date().toISOString().split('T')[0];
const random = Math.random().toString(36).substring(2, 8);
return `${date}/${random}/${file.name}`;
},
})
.security({
allowedOrigins: [
process.env.FRONTEND_URL!,
"http://localhost:3000",
],
})
.build();
```
## Test Your Configuration
Verify everything works correctly:
```bash
# Test with pushduck CLI
npx @pushduck/cli@latest test --provider s3-compatible
# Or test manually in your app
npm run dev
```
### Manual Testing
```typescript
// Create a simple test route
// pages/api/test-upload.ts or app/api/test-upload/route.ts
import { s3 } from '@/lib/upload';
export async function POST() {
try {
// Test creating an upload route
const imageUpload = s3.image().maxFileSize("5MB");
return Response.json({
success: true,
message: "S3-compatible storage configured correctly"
});
  } catch (error) {
    return Response.json({
      success: false,
      error: error instanceof Error ? error.message : String(error)
    }, { status: 500 });
  }
}
}
```
## You're Ready!
Your S3-compatible storage is configured. You can now:
* **Upload files** to your custom storage service
* **Generate secure URLs** for file access
* **Use familiar S3 APIs** with any compatible service
* **Maintain vendor independence** with standard protocols
## Service-Specific Configurations
### SeaweedFS
```typescript
export const { s3, config } = createUploadConfig()
.provider("s3Compatible", {
endpoint: "https://seaweedfs.yourdomain.com:8333",
bucket: "uploads",
accessKeyId: process.env.SEAWEEDFS_ACCESS_KEY!,
secretAccessKey: process.env.SEAWEEDFS_SECRET_KEY!,
region: "us-east-1",
forcePathStyle: true, // Required for SeaweedFS
})
.build();
```
### Garage
```typescript
export const { s3, config } = createUploadConfig()
.provider("s3Compatible", {
endpoint: "https://garage.yourdomain.com",
bucket: "my-app-files",
accessKeyId: process.env.GARAGE_ACCESS_KEY!,
secretAccessKey: process.env.GARAGE_SECRET_KEY!,
region: "garage", // Garage-specific region
forcePathStyle: true,
})
.build();
```
### Wasabi (Alternative to dedicated provider)
```typescript
export const { s3, config } = createUploadConfig()
.provider("s3Compatible", {
endpoint: "https://s3.wasabisys.com",
bucket: "my-wasabi-bucket",
accessKeyId: process.env.WASABI_ACCESS_KEY!,
secretAccessKey: process.env.WASABI_SECRET_KEY!,
region: "us-east-1",
forcePathStyle: false, // Wasabi supports virtual-hosted style
})
.build();
```
### Backblaze B2 (S3-Compatible API)
```typescript
export const { s3, config } = createUploadConfig()
.provider("s3Compatible", {
endpoint: "https://s3.us-west-000.backblazeb2.com",
bucket: "my-b2-bucket",
accessKeyId: process.env.B2_ACCESS_KEY!,
secretAccessKey: process.env.B2_SECRET_KEY!,
region: "us-west-000",
forcePathStyle: false,
})
.build();
```
## Security Best Practices
### Access Control
```typescript
// Implement user-based access control
.middleware(async ({ req, file }) => {
const user = await authenticate(req);
// Create user-specific paths
const userPath = `users/${user.id}`;
return {
userId: user.id,
keyPrefix: userPath,
metadata: {
uploadedBy: user.id,
uploadedAt: new Date().toISOString(),
}
};
})
```
### Bucket Policies
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::your-bucket/public/*"
},
{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::your-bucket/private/*"
}
]
}
```
### Environment Security
```typescript
// Validate configuration at startup
const validateConfig = () => {
const required = [
'S3_ENDPOINT',
'S3_BUCKET_NAME',
'S3_ACCESS_KEY_ID',
'S3_SECRET_ACCESS_KEY'
];
for (const key of required) {
if (!process.env[key]) {
throw new Error(`Missing required environment variable: ${key}`);
}
}
};
validateConfig();
```
## Monitoring & Analytics
### Health Monitoring
```typescript
// Monitor storage service health
.hooks({
onUploadStart: async ({ file }) => {
// Check service availability
const isHealthy = await checkStorageHealth();
if (!isHealthy) {
throw new Error("Storage service unavailable");
}
},
onUploadComplete: async ({ file, metadata }) => {
// Track successful uploads
await analytics.track("upload_completed", {
provider: "s3-compatible",
service: process.env.S3_ENDPOINT,
fileSize: file.size,
duration: metadata.uploadTime,
});
}
})

async function checkStorageHealth(): Promise<boolean> {
try {
const response = await fetch(`${process.env.S3_ENDPOINT}/health`);
return response.ok;
} catch {
return false;
}
}
```
### Usage Analytics
```typescript
// Track storage usage patterns
const getStorageMetrics = async () => {
try {
// Use your service's API to get metrics
const metrics = await fetch(`${process.env.S3_ENDPOINT}/metrics`, {
headers: {
'Authorization': `Bearer ${process.env.S3_ACCESS_KEY_ID}`,
}
});
return await metrics.json();
} catch (error) {
console.error("Failed to fetch storage metrics:", error);
return null;
}
};
```
## Performance Optimization
### Connection Pooling
```typescript
// Optimize for high throughput
export const { s3, config } = createUploadConfig()
.provider("s3Compatible", {
// ... config
maxRetries: 3,
retryDelayOptions: {
base: 300,
customBackoff: (retryCount) => Math.pow(2, retryCount) * 100,
},
timeout: 60000,
})
.build();
```
### Parallel Uploads
```typescript
// Enable multipart uploads for large files
.defaults({
maxFileSize: "100MB",
// Configure multipart threshold
multipartUploadThreshold: "25MB",
multipartUploadSize: "5MB",
})
```
## Common Issues
### Connection Issues
**Certificate errors?** → Add SSL certificate or use `NODE_TLS_REJECT_UNAUTHORIZED=0` for development\
**Connection refused?** → Verify endpoint URL and port are correct\
**Timeout errors?** → Increase timeout settings or check network connectivity
### Authentication Issues
**Access denied?** → Verify access keys and bucket permissions\
**Invalid signature?** → Check secret key and ensure clock synchronization\
**Region mismatch?** → Verify the region setting matches your service
### Upload Issues
**CORS errors?** → Configure CORS policy on your storage service. For detailed CORS configuration, see the [CORS & ACL Configuration Guide](/docs/guides/security/cors-and-acl).\
**File size errors?** → Check service limits and adjust `maxFileSize`\
**Path errors?** → Enable `forcePathStyle: true` for most self-hosted services
### Debugging Commands
```bash
# Test connectivity
curl -v "https://your-endpoint.com/your-bucket"
# Check bucket contents
aws s3 ls s3://your-bucket --endpoint-url https://your-endpoint.com
# Test upload
aws s3 cp test.txt s3://your-bucket/ --endpoint-url https://your-endpoint.com
```
## Use Cases
### Self-Hosted Solutions
* **Data sovereignty** requirements
* **Air-gapped** environments
* **Custom compliance** needs
* **Cost optimization** for high usage
### Hybrid Cloud
* **Multi-cloud** strategies
* **Disaster recovery** setups
* **Geographic distribution**
* **Vendor diversification**
### Development & Testing
* **Local development** without cloud dependencies
* **CI/CD pipelines** with custom storage
* **Testing environments** with controlled data
***
**Next:** [Upload Your First Image](/docs/guides/image-uploads) or explore [Configuration Options](/docs/api/configuration/upload-config)
# createUploadClient (/docs/api/client/create-upload-client)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { TypeTable } from "fumadocs-ui/components/type-table";
## createUploadClient Function
Create a type-safe upload client with **property-based access** and optional **per-route configuration**. This is the recommended approach for most projects.
**Enhanced in v2.0**: Now supports per-route callbacks, progress tracking, and error handling while maintaining superior type safety.
## Why Use This Approach?
* **Superior Type Safety** - Route names validated at compile time
* **Property-Based Access** - No string literals, full IntelliSense
* **Per-Route Configuration** - Callbacks, endpoints, and options per route
* **Centralized Setup** - Single configuration for all routes
* **Refactoring Safety** - Rename routes safely across codebase
This utility function provides property-based access to your upload routes. You can also use the `useUploadRoute()` hook if you prefer traditional React patterns.
## Basic Setup
**Create the upload client**
```typescript title="lib/upload-client.ts"
import { createUploadClient } from 'pushduck/client'
import type { AppRouter } from './upload'
export const upload = createUploadClient<AppRouter>({
endpoint: '/api/upload' // [!code highlight]
})
```
**Use in components**
```typescript title="components/upload-form.tsx"
import { upload } from '@/lib/upload-client'
export function UploadForm() {
const { uploadFiles, files, isUploading } = upload.imageUpload() // [!code highlight]
  return (
    <input
      type="file"
      multiple
      onChange={(e) => uploadFiles(Array.from(e.target.files || []))} // [!code highlight]
      disabled={isUploading}
    />
  )
}
```
## Configuration Options
Promise",
},
}}
/>
## Per-Route Configuration
Each route method now accepts optional configuration:
void",
},
onError: {
description: "Callback when upload fails",
type: "(error: Error) => void",
},
onProgress: {
description: "Callback for progress updates",
type: "(progress: number) => void",
},
endpoint: {
description: "Override endpoint for this specific route",
type: "string",
},
}}
/>
## Examples
### Basic Usage
```typescript
import { upload } from '@/lib/upload-client'
export function BasicUpload() {
// Simple usage - no configuration needed
const { uploadFiles, files, isUploading, reset } = upload.imageUpload()
  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => uploadFiles(Array.from(e.target.files || []))}
        disabled={isUploading}
      />
      {files.map(file => (
        <div key={file.id}>
          <span>{file.name}</span>
          {file.status === 'success' && <span>Uploaded</span>}
        </div>
      ))}
      <button onClick={reset}>Reset</button>
    </div>
  )
}
```
### With Metadata
```typescript
import { upload } from '@/lib/upload-client'
import { useState } from 'react'
export function MetadataUpload() {
const [selectedAlbum, setSelectedAlbum] = useState('vacation-2025');
const [tags, setTags] = useState<string[]>([]);
const { uploadFiles, files } = upload.imageUpload({
onSuccess: (results) => {
console.log(`Uploaded ${results.length} images to album:`, selectedAlbum);
}
});
const handleUpload = (e: React.ChangeEvent<HTMLInputElement>) => {
const selectedFiles = Array.from(e.target.files || []);
// Pass client-side context as metadata
uploadFiles(selectedFiles, {
albumId: selectedAlbum,
tags: tags,
visibility: 'private',
uploadSource: 'web-app',
timestamp: Date.now()
});
};
  return (
    <input type="file" multiple onChange={handleUpload} />
  );
}
```
### With Callbacks
```typescript
import { upload } from '@/lib/upload-client'
import { toast } from 'sonner'
import { useState } from 'react'
export function CallbackUpload() {
const [progress, setProgress] = useState(0)
const { uploadFiles, files, isUploading } = upload.imageUpload({
onSuccess: (results) => {
      toast.success(`Uploaded ${results.length} images!`)
console.log('Upload results:', results)
},
onError: (error) => {
      toast.error(`Upload failed: ${error.message}`)
console.error('Upload error:', error)
},
onProgress: (progress) => {
setProgress(progress)
      console.log(`Progress: ${progress}%`)
}
})
  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => uploadFiles(Array.from(e.target.files || []))}
        disabled={isUploading}
      />
      {isUploading && <progress value={progress} max={100} />}
    </div>
  )
}
```
### Multiple Routes
```typescript
import { upload } from '@/lib/upload-client'
export function MultiUploadForm() {
// Different configuration for each upload type
const images = upload.imageUpload({
onSuccess: (results) => {
// Store images with both permanent and temporary URLs
updateImageGallery(results.map(file => ({
id: file.id,
url: file.url, // Permanent access URL
downloadUrl: file.presignedUrl, // Temporary download URL (1 hour)
name: file.name,
key: file.key
})));
}
})
const documents = upload.documentUpload({
onSuccess: (results) => {
// Store documents with metadata
updateDocumentLibrary(results.map(file => ({
id: file.id,
url: file.url,
name: file.name,
size: file.size,
uploadedAt: new Date().toISOString()
})));
}
})
  return (
    <div>{/* render file inputs wired to images.uploadFiles and documents.uploadFiles */}</div>
  )
}
```
### Global + Per-Route
```typescript
// Global configuration with per-route overrides
const upload = createUploadClient({
endpoint: '/api/upload',
// These apply to all routes by default
defaultOptions: {
onProgress: (progress) => updateGlobalProgress(progress),
onError: (error) => logError(error)
}
})
export function MixedConfigUpload() {
// Inherits global onProgress and onError
const basic = upload.imageUpload()
// Overrides global settings + adds success handler
const premium = upload.documentUpload({
endpoint: '/api/premium-upload', // Different endpoint
onSuccess: (results) => {
// This overrides global behavior
handlePremiumUpload(results)
}
// Still inherits global onProgress and onError
})
  return (
    <div>{/* upload UI using basic.uploadFiles and premium.uploadFiles */}</div>
  )
}
```
### Advanced Config
```typescript
const upload = createUploadClient({
endpoint: '/api/upload',
// Custom fetch function
fetcher: async (input, init) => {
const token = await getAuthToken()
return fetch(input, {
...init,
headers: {
...init?.headers,
'Authorization': `Bearer ${token}`
}
})
},
defaultOptions: {
onError: (error) => {
// Global error tracking
analytics.track('upload_error', { error: error.message })
toast.error('Upload failed. Please try again.')
}
}
})
export function AdvancedUpload() {
const { uploadFiles, files } = upload.secureUpload({
endpoint: '/api/secure-upload',
onSuccess: (results) => {
// Audit log for secure uploads
auditLog('secure_upload_success', {
files: results.length,
user: user.id
})
}
})
// Handle permissions in component logic
const handleUpload = (files: File[]) => {
if (user.hasPermission('upload')) {
uploadFiles(files)
} else {
toast.error('You don\'t have permission to upload files')
}
}
  return (
    <input type="file" multiple onChange={(e) => handleUpload(Array.from(e.target.files || []))} />
  )
}
```
## Type Safety Benefits
The structured client provides superior TypeScript integration:
```typescript
const upload = createUploadClient({ endpoint: '/api/upload' })
// IntelliSense shows available routes
upload.imageUpload() // Autocomplete suggests this
upload.documentUpload() // And this
upload.videoUpload() // And this

// TypeScript error for non-existent routes
upload.invalidRoute() // Error: Property 'invalidRoute' does not exist

// Route rename safety
// If you rename 'imageUpload' to 'photoUpload' in your router,
// TypeScript will show errors everywhere it's used, making refactoring safe

// Callback type inference
upload.imageUpload({
onSuccess: (results) => {
// `results` is fully typed based on your router configuration
results.forEach(result => {
console.log(result.url) // TypeScript knows this exists
console.log(result.key) // And this
})
}
})
```
## Comparison with Hooks
| Feature          | Enhanced Structured Client       | Hook-Based           |
| ---------------- | -------------------------------- | -------------------- |
| Type Safety      | **Superior** - Property-based    | Good - Generic types |
| IntelliSense     | **Full route autocomplete**      | String-based routes  |
| Refactoring      | **Safe rename across codebase**  | Manual find/replace  |
| Callbacks        | **Full support**                 | Full support         |
| Per-route Config | **Full support**                 | Full support         |
| Bundle Size      | **Same**                         | Same                 |
| Performance      | **Identical**                    | Identical            |
## Migration from Hooks
Easy migration from hook-based approach:
```typescript
// Before: Hook-based
import { useUploadRoute } from 'pushduck/client'
const { uploadFiles, files } = useUploadRoute('imageUpload', {
onSuccess: handleSuccess,
onError: handleError
})
// After: Enhanced structured client
import { upload } from '@/lib/upload-client'
const { uploadFiles, files } = upload.imageUpload({
onSuccess: handleSuccess,
onError: handleError
})
```
Benefits of migration:
* **Better type safety** - Route names validated at compile time
* **Enhanced IntelliSense** - Autocomplete for all routes
* **Centralized config** - Single place for endpoint and defaults
* **Refactoring safety** - Rename routes safely
* **Same performance** - Zero runtime overhead
***
**Recommended Approach**: Use `createUploadClient` for the best developer experience with full flexibility and type safety.
# Client API (/docs/api/client)
import { Card, Cards } from "fumadocs-ui/components/card";
import { Callout } from "fumadocs-ui/components/callout";
## Client API Overview
Pushduck provides two powerful client-side approaches for handling file uploads: **useUploadRoute hook** for reactive state management and **Property-Based Client** for enhanced type safety and modern developer experience.
All hooks follow React's rules of hooks - call them only from React function components or custom hooks, not from regular JavaScript functions.
## Available APIs
### createUploadClient
* Enhanced type safety with IntelliSense
* Property-based access (`client.imageUpload()` returns hook)
* Built-in progress tracking and error handling
* Modern developer experience
**Perfect for**: New projects, enhanced type safety, modern React patterns
### useUploadRoute
* Route-specific validation
* Enhanced type inference
* Multi-file support with progress tracking
* Advanced configuration and callbacks
* Reactive state management
**Perfect for**: All upload scenarios, the main React hook for uploads
## Quick Comparison
| Feature                  | `createUploadClient`          | `useUploadRoute`      |
| ------------------------ | ----------------------------- | --------------------- |
| **Approach**             | Property-based                | React Hook            |
| **Type Safety**          | Excellent (IntelliSense)      | Excellent             |
| **Developer Experience** | Modern                        | Advanced              |
| **Simplicity**           | Intuitive                     | Straightforward       |
| **Validation**           | Enhanced                      | Advanced              |
| **Multi-route**          | Multiple routes               | Single route per hook |
| **Progress Tracking**    | Built-in                      | Built-in              |
| **Error Handling**       | Enhanced                      | Comprehensive         |
| **Best For**             | New projects, modern patterns | All upload scenarios  |
## Basic Usage Examples
### createUploadClient (Property-Based)
```typescript
import { createUploadClient } from 'pushduck/client';
const client = createUploadClient({
endpoint: '/api/upload',
});
function ModernUpload() {
// Each route returns a hook with all upload functionality
const {
uploadFiles,
files,
isUploading,
progress
} = client.imageUpload({
onSuccess: (results) => console.log('Upload successful!', results),
onError: (error) => console.error('Upload failed:', error),
});
const handleFileSelect = async (selectedFiles: File[]) => {
await uploadFiles(selectedFiles);
};
  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => e.target.files && handleFileSelect(Array.from(e.target.files))}
        disabled={isUploading}
      />
      {isUploading && <p>Progress: {progress}%</p>}
      <p>Uploaded: {files.filter(f => f.status === 'success').length} files</p>
    </div>
  );
}
```
### useUploadRoute Hook
```typescript
import { useUploadRoute } from 'pushduck/client';
function AdvancedUpload() {
const {
uploadFiles,
files,
isUploading,
progress,
errors
} = useUploadRoute('imageUpload', {
endpoint: '/api/upload',
onSuccess: (results) => console.log('Upload successful!', results),
onError: (error) => console.error('Upload failed:', error),
});
const handleFileSelect = async (selectedFiles: File[]) => {
await uploadFiles(selectedFiles);
};
  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => e.target.files && handleFileSelect(Array.from(e.target.files))}
        disabled={isUploading}
      />
      {isUploading && <p>Progress: {progress}%</p>}
      {errors.length > 0 && <p>Errors: {errors.join(', ')}</p>}
      <p>Uploaded files: {files.filter(f => f.status === 'success').length}</p>
    </div>
  );
}
```
## Hook Features
### State Management
Both approaches provide reactive state management:
```typescript
// useUploadRoute returns:
const {
uploadFiles, // Function to start upload
files, // Array of uploaded files with status
isUploading, // Boolean: upload in progress
progress, // Number: overall progress (0-100)
errors, // Array of error messages
reset, // Function to reset state
uploadSpeed, // Current upload speed (bytes/sec)
eta // Estimated time remaining (seconds)
} = useUploadRoute('routeName', config);
```
### Progress Tracking
Real-time progress updates during file uploads:
```typescript
// Progress updates automatically during upload
{isUploading && (
  <div>
    <p>{progress}% uploaded</p>
    <p>Speed: {formatUploadSpeed(uploadSpeed)}</p>
    <p>ETA: {formatETA(eta)}</p>
  </div>
)}
```
### Error Handling
Comprehensive error handling with detailed messages:
```typescript
{errors.length > 0 && (
  <ul>
    {errors.map((error, index) => (
      <li key={index}>{error}</li>
    ))}
  </ul>
)}
```
## Configuration Options
### Common Options
Both approaches support these configuration options:
```typescript
const config = {
endpoint: '/api/upload', // Upload endpoint (default: '/api/s3-upload')
onStart: (files) => {}, // Called when upload starts
onProgress: (progress) => {}, // Progress callback (0-100)
onSuccess: (results) => {}, // Success callback with file results
onError: (error) => {}, // Error callback
};
```
### useUploadRoute Specific Options
```typescript
const routeConfig = {
// All common options plus:
endpoint: '/api/upload', // Route-specific endpoint
metadata: { userId: '123' }, // Upload metadata
};
```
## Best Practices
**Performance**: Use `createUploadClient` for multiple upload types in the same component. Use `useUploadRoute` directly for single-purpose upload components.
**File Validation**: Always validate files on both client and server. Client validation provides immediate feedback, server validation ensures security.
**Error Recovery**: Both approaches include built-in retry logic for network errors and provide clear error messages for validation failures.
## Next Steps
1. **Property-based client?** → Start with [createUploadClient](/docs/api/client/create-upload-client)
2. **Direct hook usage?** → Use [useUploadRoute](/docs/api/client/use-upload-route)
3. **Complete examples?** → Check [examples documentation](/docs/examples)
4. **Server setup?** → See [server configuration](/docs/api/s3-router)
# useUploadRoute (/docs/api/client/use-upload-route)
import { Callout } from "fumadocs-ui/components/callout";
import { TypeTable } from "fumadocs-ui/components/type-table";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { formatETA, formatUploadSpeed } from "pushduck";
## useUploadRoute Hook
A React hook that provides reactive state management for file uploads with progress tracking and error handling.
This hook follows familiar React patterns and is perfect for teams that prefer
the traditional hook-based approach. Both this and `createUploadClient` are equally valid ways to handle uploads.
## When to Use This Hook
Use `useUploadRoute` when:
* **Prefer React hooks** - familiar pattern for React developers
* **Granular control** needed over individual upload state
* **Component-level state** management preferred
* **Team preference** for hook-based patterns
## Alternative Approach
You can also use the structured client approach:
```typescript title="Hook-based approach"
// Hook-based approach
import { useUploadRoute } from 'pushduck/client'
const { uploadFiles, files } = useUploadRoute('imageUpload') // [!code highlight]
```
```typescript title="Structured client approach"
// Structured client approach
import { createUploadClient } from 'pushduck/client'
const upload = createUploadClient({
endpoint: '/api/upload' // [!code highlight]
})
const { uploadFiles, files } = upload.imageUpload() // [!code highlight]
```
Both approaches provide the same functionality and type safety - choose what feels more natural for your team.
## Basic Usage
```typescript title="Basic Usage Example"
import { useUploadRoute } from "pushduck/client";
import { formatETA, formatUploadSpeed } from "pushduck";
import type { AppRouter } from "@/lib/upload";
export function ImageUploader() {
// With type parameter (recommended for better type safety)
const { uploadFiles, files, isUploading, error, reset, progress, uploadSpeed, eta } =
  useUploadRoute<AppRouter>("imageUpload"); // [!code highlight]
const handleFileSelect = (e: React.ChangeEvent<HTMLInputElement>) => {
const selectedFiles = Array.from(e.target.files || []);
uploadFiles(selectedFiles); // [!code highlight]
};
  return (
    <div>
      {/* Overall Progress Tracking */}
      {isUploading && files.length > 1 && progress !== undefined && (
        <div>
          <p>Overall Progress: {Math.round(progress)}%</p>
          <p>Speed: {uploadSpeed ? formatUploadSpeed(uploadSpeed) : '0 B/s'}</p>
          <p>ETA: {eta ? formatETA(eta) : '--'}</p>
        </div>
      )}
      {/* Individual File Progress */}
      {files.map((file) => (
        <div key={file.id}>
          <span>{file.name}</span>
          {file.status === "success" && <a href={file.url}>View</a>} // [!code highlight]
        </div>
      ))}
      {error && <p>{error.message}</p>}
      <button onClick={reset}>Reset</button>
    </div>
  );
}
```
## Overall Progress Tracking
The hook provides real-time overall progress metrics when uploading multiple files:
Overall progress tracking is especially useful for batch uploads and provides a better user experience when uploading multiple files simultaneously.
```typescript
const { progress, uploadSpeed, eta } = useUploadRoute("imageUpload");
// Progress: 0-100 percentage across all files
console.log(`Overall progress: ${progress}%`);
// Upload speed: Combined transfer rate in bytes/second
console.log(`Transfer rate: ${formatUploadSpeed(uploadSpeed)}`);
// ETA: Time remaining in seconds
console.log(`Time remaining: ${formatETA(eta)}`);
```
### Progress Calculation
* **progress**: Weighted by file sizes, not just file count
* **uploadSpeed**: Sum of all active file upload speeds
* **eta**: Calculated based on remaining bytes and current speed
* Values are `undefined` when no uploads are active
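Conceptually, the overall numbers combine per-file state roughly like this (a sketch of the calculation under those assumptions, not the library's internal code):
```typescript
// Illustrative types - the real hook tracks this state internally
interface FileProgress {
  size: number;          // total bytes for this file
  uploadedBytes: number; // bytes transferred so far
  speed: number;         // current bytes/sec for this file
}

function overallProgress(files: FileProgress[]) {
  const totalBytes = files.reduce((sum, f) => sum + f.size, 0);
  const uploadedBytes = files.reduce((sum, f) => sum + f.uploadedBytes, 0);
  const uploadSpeed = files.reduce((sum, f) => sum + f.speed, 0);

  // No active uploads: all values are undefined
  if (totalBytes === 0) {
    return { progress: undefined, uploadSpeed: undefined, eta: undefined };
  }

  const progress = (uploadedBytes / totalBytes) * 100; // weighted by size
  const eta = uploadSpeed > 0
    ? (totalBytes - uploadedBytes) / uploadSpeed       // seconds remaining
    : undefined;

  return { progress, uploadSpeed, eta };
}
```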
## Hook Signature
```typescript
// With type parameter (recommended)
function useUploadRoute<TRouter>(
  route: keyof TRouter,
  options?: UseUploadOptions
): UseUploadReturn;

// Without type parameter (also works)
function useUploadRoute(
  route: string,
  options?: UseUploadOptions
): UseUploadReturn;
```
## Parameters
### Type Parameter Benefits
```typescript
// With type parameter - better type safety
const { uploadFiles } = useUploadRoute<AppRouter>("imageUpload");
// - Route names are validated at compile time
// - IntelliSense shows available routes
// - Typos caught during development

// Without type parameter - still works
const { uploadFiles } = useUploadRoute("imageUpload");
// - Works with any string
// - Less type safety but more flexible
// - Good for dynamic route names
```
## Options
void",
},
onSuccess: {
description: "Callback when uploads complete successfully",
type: "(results: UploadResult[]) => void",
},
onError: {
description: "Callback when upload fails",
type: "(error: UploadError) => void",
},
onProgress: {
description: "Callback for progress updates",
type: "(progress: number) => void",
},
}}
/>
## Return Value
Promise",
},
files: {
description: "Array of files with upload status",
type: "UploadFile[]",
},
isUploading: {
description: "Whether any upload is in progress",
type: "boolean",
},
uploadedFiles: {
description: "Successfully uploaded files",
type: "UploadResult[]",
},
error: {
description: "Upload error if any",
type: "UploadError | null",
},
reset: {
description: "Reset upload state",
type: "() => void",
},
progress: {
description: "Overall progress across all files (0-100)",
type: "number | undefined",
},
uploadSpeed: {
description: "Combined transfer rate in bytes per second",
type: "number | undefined",
},
eta: {
description: "Overall time remaining in seconds",
type: "number | undefined",
},
}}
/>
## Callback Execution Order
The callbacks follow a predictable order to provide clear upload lifecycle management:
**Proper Callback Sequence:** `onStart` → `onProgress(0)` → `onProgress(n)` → `onSuccess`/`onError`
```typescript
const { uploadFiles } = useUploadRoute('imageUpload', {
// 1. Called first after validation passes
onStart: (files) => {
console.log('Upload starting for', files.length, 'files');
setUploading(true);
},
// 2. Called with progress updates (0-100)
onProgress: (progress) => {
console.log('Progress:', progress + '%');
setProgress(progress);
},
// 3. Called on completion
onSuccess: (results) => {
console.log('Upload complete!');
results.forEach(file => {
console.log('File URL:', file.url); // Permanent URL
console.log('Download URL:', file.presignedUrl); // Temporary access (1 hour)
});
setUploading(false);
},
// OR 3. Called on error (no progress callbacks for validation errors)
onError: (error) => {
console.log('Upload failed:', error.message);
setUploading(false);
}
});
```
### Validation Errors vs Upload Errors
* **Validation errors** (size limits, file types): Only `onError` is called
* **Upload errors** (network issues): `onStart` → `onProgress(0)` → `onError`
## Upload Result Structure
Each successfully uploaded file includes the following properties:
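The exact exported type lives in pushduck; as a rough sketch inferred from the fields used in these examples, each result looks like:
```typescript
// Approximate shape - inferred from the examples in this guide
interface UploadResult {
  id: string;             // unique identifier for this upload
  key: string;            // object key in the bucket
  name: string;           // original file name
  size: number;           // file size in bytes
  url?: string;           // permanent URL for public files
  presignedUrl?: string;  // temporary download URL (about 1 hour)
}
```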
### URL Usage Examples
```typescript
const { uploadFiles } = useUploadRoute('fileUpload', {
onSuccess: (results) => {
results.forEach(file => {
// Use permanent URL for public files
if (file.url) {
console.log('Public URL:', file.url);
}
// Use presigned URL for private files or temporary access
if (file.presignedUrl) {
console.log('Download URL:', file.presignedUrl);
// This URL expires in 1 hour and can be used for secure downloads
}
});
}
});
```
## Advanced Examples
### With All Callbacks
```typescript
const { uploadFiles, files } = useUploadRoute('documentUpload', {
onStart: (files) => {
toast.info(`Starting upload of ${files.length} files...`);
setUploadStarted(true);
},
onSuccess: (results) => {
toast.success(`Uploaded ${results.length} files`);
// Store both permanent and temporary URLs
updateDocuments(results.map(file => ({
url: file.url, // Permanent access
downloadUrl: file.presignedUrl, // Temporary access (1 hour)
name: file.name,
key: file.key
})));
setUploadStarted(false);
},
onError: (error) => {
toast.error(`Upload failed: ${error.message}`);
setUploadStarted(false);
},
onProgress: (progress) => {
setGlobalProgress(progress);
}
})
```
### With Metadata
```typescript
import { useUploadRoute } from 'pushduck/client'
import type { AppRouter } from '@/lib/upload'
import { useState } from 'react'
export function ProductImageUpload({ productId }: { productId: string }) {
const [imageType, setImageType] = useState<'main' | 'gallery' | 'thumbnail'>('gallery');
const [sortOrder, setSortOrder] = useState(1);
const { uploadFiles, files, isUploading } = useUploadRoute('productImages', {
onSuccess: (results) => {
toast.success(`Uploaded ${results.length} ${imageType} images for product`);
// Update product in database with new image URLs
updateProduct(productId, results);
}
});
const handleUpload = (selectedFiles: File[]) => {
// Send product context to server via metadata
uploadFiles(selectedFiles, {
productId: productId,
imageType: imageType,
sortOrder: sortOrder,
category: 'product-media',
uploadedFrom: 'admin-dashboard'
});
};
  return (
    <input
      type="file"
      multiple
      onChange={(e) => handleUpload(Array.from(e.target.files || []))}
      disabled={isUploading}
    />
  );
}
```
### Multiple Routes
```typescript
const images = useUploadRoute('imageUpload')
const documents = useUploadRoute('documentUpload')
return (
  <div>{/* separate upload UIs wired to images and documents */}</div>
)
```
### Form Integration
```typescript
const { uploadFiles, uploadedFiles } = useUploadRoute('attachments', {
onSuccess: (results) => {
setValue('attachments', results.map(r => r.url))
}
})
const onSubmit = (data) => {
// Form data includes uploaded file URLs
console.log(data.attachments)
}
```
***
**Flexible API:** Use this hook when you prefer React's familiar hook patterns
or need more granular control over upload state.
# Client Configuration (/docs/api/configuration/client-options)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
import { TypeTable } from "fumadocs-ui/components/type-table";
## Client Setup Options
The upload client provides multiple APIs to suit different needs: **property-based access** for enhanced type safety, and **hook-based access** for familiar React patterns.
This guide focuses on the **enhanced client API** with property-based access.
This provides the best developer experience with full TypeScript inference and
eliminates string literals.
## Client Setup Structure
Organize your client configuration for maximum reusability:
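A typical layout keeps the server router next to the client wrapper (the file names below are illustrative):
<Files>
  <Folder name="lib" defaultOpen>
    <File name="upload.ts" />
    <File name="upload-client.ts" />
  </Folder>
  <Folder name="components">
    <File name="upload-form.tsx" />
  </Folder>
</Files>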
## Basic Client Configuration
**Import your router types**
Start by importing the router type from your server configuration:
```typescript title="lib/upload-client.ts"
import { createUploadClient } from 'pushduck/client'
import type { AppRouter } from './upload'
export const upload = createUploadClient<AppRouter>({
endpoint: '/api/upload'
})
```
**Use property-based access**
Access your upload endpoints as properties with full type safety:
```typescript title="components/upload-form.tsx"
import { upload } from '@/lib/upload-client'
export function ImageUploadForm() {
const { uploadFiles, files, isUploading, error } = upload.imageUpload
// ^ Full TypeScript inference from your server router
const handleUpload = async (selectedFiles: File[]) => {
await uploadFiles(selectedFiles)
}
  return (
    <div>
      <input type="file" multiple onChange={(e) => handleUpload(Array.from(e.target.files || []))} />
      {files.map(file => (
        <div key={file.id}>
          <span>{file.name}</span>
          <span>{file.status}</span> {/* 'pending' | 'uploading' | 'success' | 'error' */}
          {file.url && <a href={file.url}>View</a>}
        </div>
      ))}
    </div>
  )
}
```
**Handle upload results**
Process upload results with full type safety:
```typescript
const { uploadFiles, files, reset } = upload.documentUpload
const handleDocumentUpload = async (files: File[]) => {
try {
const results = await uploadFiles(files)
// results is fully typed based on your router configuration
console.log('Upload successful:', results.map(r => r.url))
// Reset the upload state
reset()
} catch (error) {
console.error('Upload failed:', error)
}
}
```
## Client Configuration Options
",
},
timeout: {
description: "Request timeout in milliseconds",
type: "number",
default: "30000",
},
retries: {
description: "Number of retry attempts for failed uploads",
type: "number",
default: "3",
},
onProgress: {
description: "Global progress callback for all uploads",
type: "(progress: UploadProgress) => void",
},
onError: {
description: "Global error handler for upload failures",
type: "(error: UploadError) => void",
},
}}
/>
### Advanced Client Configuration
```typescript title="lib/upload-client.ts"
import { createUploadClient } from "pushduck/client";
import type { AppRouter } from "./upload";
export const upload = createUploadClient<AppRouter>({
endpoint: "/api/upload",
// Custom headers (e.g., authentication)
headers: {
Authorization: `Bearer ${getAuthToken()}`,
"X-Client-Version": "1.0.0",
},
// Upload timeout (30 seconds)
timeout: 30000,
// Retry failed uploads
retries: 3,
// Global progress tracking
onProgress: (progress) => {
console.log(`Upload progress: ${progress.percentage}%`);
},
// Global error handling
onError: (error) => {
console.error("Upload error:", error);
// Send to error tracking service
trackError(error);
},
});
```
## Upload Method Options
Each upload method accepts configuration options:
void",
},
onSuccess: {
description: "Success callback when upload completes",
type: "(results: UploadResult[]) => void",
},
onError: {
description: "Error callback for this upload",
type: "(error: UploadError) => void",
},
metadata: {
description: "Custom metadata to include with upload",
type: "Record",
},
abortSignal: {
description: "AbortSignal to cancel the upload",
type: "AbortSignal",
},
}}
/>
### Upload Method Examples
```typescript
const { uploadFiles } = upload.imageUpload
// Simple upload
const results = await uploadFiles(selectedFiles)
console.log('Uploaded files:', results)
```
```typescript
const { uploadFiles } = upload.imageUpload
await uploadFiles(selectedFiles, {
onProgress: (progress) => {
console.log(`Upload ${progress.percentage}% complete`)
updateProgressBar(progress.percentage)
},
onSuccess: (results) => {
console.log('Upload successful!', results)
showSuccessNotification()
},
onError: (error) => {
console.error('Upload failed:', error)
showErrorNotification(error.message)
}
})
```
```typescript
const { uploadFiles } = upload.documentUpload
// Pass metadata as second parameter (not in options)
await uploadFiles(selectedFiles, {
category: 'contracts',
department: 'legal',
priority: 'high',
tags: ['confidential', 'urgent'],
projectId: currentProject.id,
uploadedBy: currentUser.email
})
```
```typescript
const { uploadFiles } = upload.videoUpload
const abortController = new AbortController()
// Start upload
const uploadPromise = uploadFiles(selectedFiles, {
abortSignal: abortController.signal,
onProgress: (progress) => {
if (progress.percentage > 50 && shouldCancel) {
abortController.abort()
}
}
})
// Cancel upload after 10 seconds
setTimeout(() => abortController.abort(), 10000)
try {
await uploadPromise
} catch (error) {
if (error.name === 'AbortError') {
console.log('Upload was cancelled')
}
}
```
## Hook-Based API (Alternative)
For teams that prefer React hooks, the hook-based API provides a familiar pattern:
void",
},
onError: {
description: "Error callback for upload failures",
type: "(error: UploadError) => void",
},
}}
/>
### Hook Usage Examples
```typescript
import { useUpload } from 'pushduck/client'
import type { AppRouter } from '@/lib/upload'
export function ImageUploadComponent() {
const { uploadFiles, files, isUploading, error, reset } = useUpload('imageUpload', {
onSuccess: (results) => {
console.log('Upload completed:', results)
},
onError: (error) => {
console.error('Upload failed:', error)
}
})
  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => uploadFiles(Array.from(e.target.files || []))}
        disabled={isUploading}
      />
      {files.map(file => (
        <div key={file.id}>
          <span>{file.name}</span>
          {file.status === 'error' && <span>Failed: {file.error}</span>}
          {file.status === 'success' && <a href={file.url}>View</a>}
        </div>
      ))}
      <button onClick={reset}>Reset</button>
    </div>
  )
}
```
```typescript
export function MultiUploadComponent() {
const images = useUpload('imageUpload')
const documents = useUpload('documentUpload')
  return (
    <div>{/* separate upload UIs wired to images and documents */}</div>
  )
}
```
```typescript
import { useUpload } from 'pushduck/client'
import { useCallback } from 'react'
import type { AppRouter } from '@/lib/upload'
export function useImageUpload() {
const upload = useUpload('imageUpload', {
onSuccess: (results) => {
// Show success toast
toast.success(`Uploaded ${results.length} images`)
},
onError: (error) => {
// Show error toast
toast.error(`Upload failed: ${error.message}`)
}
})
const uploadImages = useCallback(async (files: File[]) => {
// Validate files before upload
const validFiles = files.filter(file => {
if (file.size > 5 * 1024 * 1024) { // 5MB
toast.error(`${file.name} is too large`)
return false
}
return true
})
if (validFiles.length > 0) {
await upload.uploadFiles(validFiles)
}
}, [upload.uploadFiles])
return {
...upload,
uploadImages
}
}
```
## Property-Based Client Access
The property-based client provides enhanced type inference and eliminates string literals:
### Type Safety Benefits
```typescript
const { uploadFiles } = upload.imageUpload
// ^ TypeScript knows this exists
const { uploadFiles: docUpload } = upload.nonExistentEndpoint
// ^ TypeScript error!
```
```typescript
upload. // IntelliSense shows: imageUpload, documentUpload, videoUpload
// ^ All your router endpoints are available with autocomplete
```
```typescript
// If you rename 'imageUpload' to 'images' in your router,
// TypeScript will show errors everywhere it's used,
// making refactoring safe and easy
```
### Enhanced Type Inference
The property-based client provides complete type inference from your server router:
```typescript
// Server router definition
export const router = createUploadRouter({
profilePictures: uploadSchema({
image: { maxSize: "2MB", maxCount: 1 },
}).middleware(async ({ req }) => {
const userId = await getUserId(req);
return { userId, category: "profile" };
}),
// ... other endpoints
});
// Client usage with full type inference
const { uploadFiles, files, isUploading } = upload.profilePictures;
// ^ uploadFiles knows it accepts File[]
// ^ files has type UploadFile[]
// ^ isUploading is boolean
// Upload files with inferred return type
const results = await uploadFiles(selectedFiles);
// ^ results is UploadResult[] with your specific metadata shape
```
## Framework-Specific Configuration
```typescript
// app/lib/upload-client.ts
import { createUploadClient } from 'pushduck/client'
import type { AppRouter } from './upload'
export const upload = createUploadClient<AppRouter>({
endpoint: '/api/upload',
headers: {
// Next.js specific headers
'x-requested-with': 'pushduck'
}
})
// app/components/upload-form.tsx
'use client'
import { upload } from '@/lib/upload-client'
export function UploadForm() {
const { uploadFiles, files, isUploading } = upload.imageUpload
// Component implementation...
}
```
```typescript
// src/lib/upload-client.ts
import { createUploadClient } from 'pushduck/client'
import type { AppRouter } from './upload'
export const upload = createUploadClient<AppRouter>({
endpoint: process.env.REACT_APP_UPLOAD_ENDPOINT || '/api/upload'
})
// src/components/UploadForm.tsx
import React from 'react'
import { upload } from '../lib/upload-client'
export function UploadForm() {
const { uploadFiles, files, isUploading } = upload.imageUpload
// Component implementation...
}
```
```typescript
// lib/upload-client.ts
import { createUploadClient } from '@pushduck/vue'
import type { AppRouter } from './upload'
export const upload = createUploadClient({
endpoint: '/api/upload'
})
// components/UploadForm.vue
```
```typescript
// lib/upload-client.ts
import { uploadStore } from '@pushduck/svelte'
import type { AppRouter } from './upload'
export const upload = uploadStore('/api/upload')
// components/UploadForm.svelte
```
## Error Handling Configuration
Configure comprehensive error handling for robust applications:
boolean",
},
maxConcurrentUploads: {
description: "Maximum number of concurrent file uploads",
type: "number",
default: "3",
},
chunkSize: {
description: "Size of upload chunks for large files",
type: "number",
default: "5242880",
},
}}
/>
### Advanced Error Handling
```typescript
export const upload = createUploadClient({
endpoint: "/api/upload",
// Custom retry configuration
retries: 3,
retryDelays: [1000, 2000, 4000], // 1s, 2s, 4s
// Custom retry logic
retryCondition: (error, attemptNumber) => {
// Don't retry client errors (4xx)
if (error.status >= 400 && error.status < 500) {
return false;
}
// Retry server errors up to 3 times
return attemptNumber < 3;
},
// Concurrent upload limits
maxConcurrentUploads: 2,
// Large file chunking
chunkSize: 10 * 1024 * 1024, // 10MB chunks
// Global error handler
onError: (error) => {
// Log to error tracking service
if (error.status >= 500) {
logError("Server error during upload", error);
}
// Show user-friendly message
if (error.code === "FILE_TOO_LARGE") {
showToast("File is too large. Please choose a smaller file.");
} else if (error.code === "NETWORK_ERROR") {
showToast("Network error. Please check your connection.");
} else {
showToast("Upload failed. Please try again.");
}
},
});
```
## Performance Optimization
Configure the client for optimal performance:
### Upload Performance
```typescript
export const upload = createUploadClient({
endpoint: "/api/upload",
// Optimize for performance
maxConcurrentUploads: 3, // Balance between speed and resource usage
chunkSize: 5 * 1024 * 1024, // 5MB chunks for large files
timeout: 60000, // 60 second timeout for large files
// Compression for images
compressImages: {
enabled: true,
quality: 0.8, // 80% quality
maxWidth: 1920, // Resize large images
maxHeight: 1080,
},
// Connection pooling
keepAlive: true,
maxSockets: 5,
// Progress throttling to avoid UI updates spam
progressThrottle: 100, // Update progress every 100ms
});
```
## Real-World Configuration Examples
### E-commerce Application
```typescript
export const ecommerceUpload = createUploadClient({
endpoint: "/api/upload",
headers: {
Authorization: `Bearer ${getAuthToken()}`,
"X-Store-ID": getStoreId(),
},
onProgress: (progress) => {
// Update global upload progress indicator
updateGlobalProgress(progress);
},
onError: (error) => {
// Track upload failures for analytics
analytics.track("upload_failed", {
error_code: error.code,
file_type: error.metadata?.fileType,
store_id: getStoreId(),
});
},
// E-commerce specific settings
retries: 2, // Quick retries for better UX
maxConcurrentUploads: 5, // Allow multiple product images
compressImages: {
enabled: true,
quality: 0.9, // High quality for product images
},
});
// Usage in product form
export function ProductImageUpload() {
const { uploadFiles, files, isUploading } = ecommerceUpload.productImages;
const handleImageUpload = async (files: File[]) => {
await uploadFiles(files, {
metadata: {
productId: getCurrentProductId(),
category: "product-images",
},
onSuccess: (results) => {
updateProductImages(results.map((r) => r.url));
},
});
};
  return (
    <div>{/* Upload UI */}</div>
  );
}
```
### Content Management System
```typescript
export const cmsUpload = createUploadClient({
endpoint: "/api/upload",
headers: {
Authorization: `Bearer ${getAuthToken()}`,
"X-Workspace": getCurrentWorkspace(),
},
// CMS-specific configuration
timeout: 120000, // 2 minutes for large documents
retries: 3,
maxConcurrentUploads: 2, // Conservative for large files
onError: (error) => {
// Show contextual error messages
if (error.code === "QUOTA_EXCEEDED") {
showUpgradeModal();
} else if (error.code === "UNAUTHORIZED") {
redirectToLogin();
}
},
});
// Usage in content editor
export function MediaLibrary() {
const images = cmsUpload.images;
const documents = cmsUpload.documents;
const videos = cmsUpload.videos;
  return (
    <div>{/* media library UI using images, documents, and videos */}</div>
  );
}
```
***
**Ready to upload?** Check out our [complete examples](/docs/examples) to see
these configurations in action, or explore our [provider setup
guides](/docs/providers/aws-s3) to configure your storage backend.
# Configuration (/docs/api/configuration)
import { Cards, Card } from "fumadocs-ui/components/card";
import { Callout } from "fumadocs-ui/components/callout";
## Configuration Overview
Configure pushduck to match your application's specific needs. From basic upload settings to advanced path generation and middleware, pushduck provides flexible configuration options for every use case.
## Configuration Categories
### Upload Configuration
**Core upload settings** for your application.
* File size and type validation
* Schema definitions (image, file, object)
* Default configurations
* Environment-based settings
**Start here** for basic pushduck setup.
### Client Options
**Enhanced client configuration** with property-based access.
* Property-based vs hook-based clients
* Upload callbacks and progress tracking
* Error handling and retry logic
* TypeScript integration
**Perfect for** modern React applications.
### Server Router
**Server route configuration** for API endpoints.
* Router setup and handlers
* Middleware integration
* Lifecycle hooks (onStart, onComplete, onError)
* Multi-route configurations
**Essential for** API route setup.
### Path Configuration
**Advanced path management** for organized storage.
* Custom path generation
* Dynamic naming patterns
* Hierarchical organization
* User-based separation
**Great for** complex applications.
## Quick Configuration Examples
### Basic Setup
```typescript
// lib/upload.ts
import { createUploadConfig } from 'pushduck/server'
export const { s3, storage } = createUploadConfig()
.provider("cloudflareR2", {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
accountId: process.env.R2_ACCOUNT_ID,
bucket: process.env.S3_BUCKET_NAME,
})
.defaults({
maxFileSize: '10MB',
acl: 'public-read',
})
.build()
```
### Advanced Configuration
```typescript
// lib/upload.ts - Advanced setup
export const { s3, storage } = createUploadConfig()
.provider("aws", { /* credentials */ })
.defaults({
maxFileSize: '100MB',
acl: 'private',
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
const userId = metadata.userId || 'anonymous'
const timestamp = Date.now()
return `${userId}/${timestamp}/${file.name}`
},
})
.hooks({
onUploadStart: async ({ file, metadata }) => {
console.log(`Starting upload: ${file.name}`)
},
onUploadComplete: async ({ file, url, metadata }) => {
await saveToDatabase({ file, url, userId: metadata.userId })
},
})
.build()
```
## Configuration Flow
The configuration system follows a builder pattern:
1. **Provider Setup** → Choose your storage provider
2. **Defaults** → Set global upload settings
3. **Paths** → Configure file organization
4. **Hooks** → Add lifecycle callbacks
5. **Build** → Generate final configuration
**Best Practice**: Start with basic configuration and gradually add advanced features as your application grows.
## Environment Variables
Common environment variables across all configurations:
```bash
# Storage Provider
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=us-east-1
S3_BUCKET_NAME=your-bucket
# Optional: Custom endpoints
AWS_ENDPOINT_URL=https://your-custom-endpoint.com
```
## TypeScript Integration
All configuration options are fully typed:
```typescript
import type { UploadConfig, S3ProviderConfig } from 'pushduck/server'
// Full type safety throughout configuration
const config: UploadConfig = createUploadConfig()
.provider("aws", providerConfig as S3ProviderConfig)
.build()
```
***
## Next Steps
* **New to pushduck?** Start with [Upload Configuration](/docs/api/configuration/upload-config)
* **Building a client?** See [Client Options](/docs/api/configuration/client-options)
* **Setting up routes?** Check [Server Router](/docs/api/configuration/server-router)
* **Need custom paths?** Explore [Path Configuration](/docs/api/configuration/path-configuration)
# Path Configuration (/docs/api/configuration/path-configuration)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
import { TypeTable } from "fumadocs-ui/components/type-table";
## Custom Route Paths
Organize your uploads with powerful hierarchical path structures that provide clean file organization, prevent conflicts, and enable scalable storage patterns.
**New in v2.0:** The hierarchical path system allows global configuration to provide the foundation while route-level paths extend and nest within it - **no more overrides that lose configuration!**
## How Hierarchical Paths Work
The path system works in **layers** that build upon each other:
**Path Structure:** `{globalPrefix}/{routePrefix}/{globalGenerated}`
* **Global prefix:** `uploads` (foundation for all files)
* **Route prefix:** `images`, `documents` (category organization)
* **Global generated:** `{userId}/{timestamp}/{randomId}/{filename}` (file structure)
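For example, with the global prefix `uploads` and a route prefix `images`, a single upload composes into one key (values illustrative):
```
uploads / images / user-123/1718000000000/abc123/photo.jpg
 global    route   global generateKey output
 prefix    prefix
```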
## Basic Path Configuration
### Global Foundation
Configure the base structure that all uploads will use:
```typescript title="lib/upload.ts"
import { createUploadConfig } from "pushduck/server";
const { s3 } = createUploadConfig()
.provider("cloudflareR2",{
// ... provider config
})
.paths({
// Global prefix - foundation for ALL uploads
prefix: "uploads",
// Global structure - used when routes don't override
generateKey: (file, metadata) => {
const userId = metadata.userId || "anonymous";
const timestamp = Date.now();
const randomId = Math.random().toString(36).substring(2, 8);
const sanitizedName = file.name.replace(/[^a-zA-Z0-9.-]/g, "_");
// Return ONLY the file path part (no prefix)
return `${userId}/${timestamp}/${randomId}/${sanitizedName}`;
},
})
.build();
```
### Route-Level Extensions
Extend the global foundation with route-specific organization:
```typescript title="app/api/upload/route.ts"
import { s3 } from "@/lib/upload";
const s3Router = s3.createRouter({
// Images: uploads/images/{userId}/{timestamp}/{randomId}/photo.jpg
imageUpload: s3
.image()
.maxFileSize("5MB")
.paths({
prefix: "images", // Nests under global prefix
}),
// Documents: uploads/documents/{userId}/{timestamp}/{randomId}/report.pdf
documentUpload: s3
.file()
.maxFileSize("10MB")
.paths({
prefix: "documents", // Nests under global prefix
}),
// General: uploads/{userId}/{timestamp}/{randomId}/file.ext
generalUpload: s3
.file()
.maxFileSize("20MB")
// No .paths() - uses pure global configuration
});
```
**✨ Result:** Clean, predictable paths that scale with your application. Global config provides consistency while routes add organization.
## Advanced Path Patterns
### Custom Route Generation
Override the default structure for specific use cases:
```typescript
galleryUpload: s3
.image()
.maxFileSize("5MB")
.paths({
generateKey: (ctx) => {
const { file, metadata, globalConfig } = ctx;
const globalPrefix = globalConfig.prefix || "uploads";
const date = new Date();
const year = date.getFullYear();
const month = String(date.getMonth() + 1).padStart(2, "0");
const userId = metadata.userId || "anonymous";
// Custom path: uploads/gallery/2024/06/demo-user/photo.jpg
return `${globalPrefix}/gallery/${year}/${month}/${userId}/${file.name}`;
},
})
```
**Result:** `uploads/gallery/2024/06/demo-user/photo.jpg`
```typescript
productUpload: s3
.image()
.maxFileSize("8MB")
.paths({
generateKey: (ctx) => {
const { file, metadata, globalConfig } = ctx;
const globalPrefix = globalConfig.prefix || "uploads";
const category = metadata.category || "general";
const productId = metadata.productId || "unknown";
const timestamp = Date.now();
// Custom path: uploads/products/electronics/prod-123/1234567890/image.jpg
return `${globalPrefix}/products/${category}/${productId}/${timestamp}/${file.name}`;
},
})
```
**Result:** `uploads/products/electronics/prod-123/1234567890/image.jpg`
```typescript
profileUpload: s3
.image()
.maxFileSize("2MB")
.paths({
generateKey: (ctx) => {
const { file, metadata, globalConfig } = ctx;
const globalPrefix = globalConfig.prefix || "uploads";
const userId = metadata.userId || "anonymous";
const fileType = file.type.split('/')[0]; // image, video, etc.
// Custom path: uploads/users/user-123/profile/image/avatar.jpg
return `${globalPrefix}/users/${userId}/profile/${fileType}/${file.name}`;
},
})
```
**Result:** `uploads/users/user-123/profile/image/avatar.jpg`
### Environment-Based Paths
Organize files by environment for clean separation:
```typescript title="lib/upload.ts"
const { s3 } = createUploadConfig()
.provider("cloudflareR2",{
// ... provider config
})
.paths({
// Environment-aware global prefix
prefix: process.env.NODE_ENV === "production" ? "prod" : "dev",
generateKey: (file, metadata) => {
const userId = metadata.userId || "anonymous";
const timestamp = Date.now();
const randomId = Math.random().toString(36).substring(2, 8);
return `${userId}/${timestamp}/${randomId}/${file.name}`;
},
})
.build();
```
**Result Paths:**
* **Development:** `dev/images/user123/1234567890/abc123/photo.jpg`
* **Production:** `prod/images/user123/1234567890/abc123/photo.jpg`
## Path Configuration API
### Global Configuration
string",
},
}}
/>
```typescript
.paths({
prefix: "uploads",
generateKey: (file, metadata) => {
// Return the file path structure (without prefix)
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
```
### Route Configuration
string",
},
}}
/>
```typescript
.paths({
prefix: "images", // Nested under global prefix
generateKey: (ctx) => {
const { file, metadata, globalConfig, routeName } = ctx;
// Full control with access to global configuration
return `${globalConfig.prefix}/custom/${file.name}`;
}
})
```
### PathContext Reference
When using custom `generateKey` functions at the route level, you receive a context object:
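A sketch of that object, inferred from the examples on this page (exact types may differ slightly in the published typings):
```typescript
interface PathContext {
  file: { name: string; type: string }  // the file being uploaded
  metadata: Record<string, any>         // values returned from middleware
  globalConfig: { prefix?: string }     // the global .paths() settings
  routeName: string                     // name of the route in the router
}
```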
## Real-World Examples
### E-commerce Platform
```typescript title="app/api/upload/route.ts"
const s3Router = s3.createRouter({
// Product images: uploads/products/{category}/{productId}/images/photo.jpg
productImages: s3
.image()
.maxFileSize("8MB")
.formats(["jpeg", "png", "webp"])
.middleware(async ({ req }) => {
const { productId, category } = await getProductContext(req);
return { productId, category, userId: await getUserId(req) };
})
.paths({
generateKey: (ctx) => {
const { file, metadata, globalConfig } = ctx;
const { productId, category } = metadata;
return `${globalConfig.prefix}/products/${category}/${productId}/images/${file.name}`;
},
}),
// User avatars: uploads/users/{userId}/avatar/profile.jpg
userAvatar: s3
.image()
.maxFileSize("2MB")
.formats(["jpeg", "png", "webp"])
.middleware(async ({ req }) => {
const userId = await getUserId(req);
return { userId, type: "avatar" };
})
.paths({
generateKey: (ctx) => {
const { file, metadata, globalConfig } = ctx;
return `${globalConfig.prefix}/users/${metadata.userId}/avatar/${file.name}`;
},
}),
// Order documents: uploads/orders/{orderId}/documents/{timestamp}/receipt.pdf
orderDocuments: s3
.file()
.maxFileSize("10MB")
.types(["application/pdf", "image/*"])
.middleware(async ({ req }) => {
const { orderId } = await getOrderContext(req);
return { orderId, uploadedAt: new Date().toISOString() };
})
.paths({
generateKey: (ctx) => {
const { file, metadata, globalConfig } = ctx;
const timestamp = Date.now();
return `${globalConfig.prefix}/orders/${metadata.orderId}/documents/${timestamp}/${file.name}`;
},
}),
});
```
### Content Management System
```typescript title="app/api/upload/route.ts"
const s3Router = s3.createRouter({
// Media library: uploads/media/{year}/{month}/{type}/filename.ext
mediaLibrary: s3
.file()
.maxFileSize("50MB")
.middleware(async ({ req, file }) => {
const userId = await getUserId(req);
const mediaType = file.type.split('/')[0]; // image, video, audio
return { userId, mediaType };
})
.paths({
generateKey: (ctx) => {
const { file, metadata, globalConfig } = ctx;
const date = new Date();
const year = date.getFullYear();
const month = String(date.getMonth() + 1).padStart(2, "0");
const randomId = Math.random().toString(36).substring(2, 8);
return `${globalConfig.prefix}/media/${year}/${month}/${metadata.mediaType}/${randomId}-${file.name}`;
},
}),
// Page assets: uploads/pages/{pageSlug}/assets/image.jpg
pageAssets: s3
.image()
.maxFileSize("10MB")
.middleware(async ({ req }) => {
const { pageSlug } = await getPageContext(req);
return { pageSlug, userId: await getUserId(req) };
})
.paths({
generateKey: (ctx) => {
const { file, metadata, globalConfig } = ctx;
return `${globalConfig.prefix}/pages/${metadata.pageSlug}/assets/${file.name}`;
},
}),
// Temp uploads: uploads/temp/{userId}/{sessionId}/file.ext
tempUploads: s3
.file()
.maxFileSize("20MB")
.middleware(async ({ req }) => {
const userId = await getUserId(req);
const sessionId = req.headers.get('x-session-id') || 'unknown';
return { userId, sessionId, temporary: true };
})
.paths({
prefix: "temp", // Simple prefix approach
}),
});
```
## Best Practices
### 🎯 Path Organization
```typescript
// ✅ Good
.paths({ prefix: "user-avatars" })
.paths({ prefix: "product-images" })
// ❌ Avoid
.paths({ prefix: "imgs" })
.paths({ prefix: "files" })
```
```typescript
// ✅ Good - includes user and timestamp
return `${prefix}/users/${userId}/${timestamp}/${file.name}`;
// ❌ Avoid - no traceability
return `${prefix}/${file.name}`;
```
```typescript
// ✅ Good - timestamp + random ID
const timestamp = Date.now();
const randomId = Math.random().toString(36).substring(2, 8);
return `${prefix}/${userId}/${timestamp}/${randomId}/${file.name}`;
```
### ⚡ Performance Tips
**Path Length Limits:** Most S3-compatible services have a 1024-character limit for object keys. Keep your paths reasonable!
* **Use short, meaningful prefixes** instead of long descriptive names
* **Avoid deep nesting** beyond 5-6 levels for better performance
* **Include timestamps** for natural chronological ordering
* **Sanitize filenames** to prevent special character issues
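As a concrete guard for the key-length limit called out above, a small check like this sketch (the helper name is ours) can run inside `generateKey`:
```typescript
// Reject keys above the common 1024-byte limit for S3 object keys
function assertKeyLength(key: string): string {
  const bytes = new TextEncoder().encode(key).length
  if (bytes > 1024) {
    throw new Error(`Object key is ${bytes} bytes (limit: 1024): ${key.slice(0, 80)}...`)
  }
  return key
}
```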
### 🔒 Security Considerations
```typescript
// ✅ Sanitize user input in paths
const sanitizePathSegment = (input: string) => {
return input.replace(/[^a-zA-Z0-9.-]/g, "_").substring(0, 50);
};
.paths({
generateKey: (ctx) => {
const { file, metadata, globalConfig } = ctx;
const safeUserId = sanitizePathSegment(metadata.userId);
const safeFilename = sanitizePathSegment(file.name);
return `${globalConfig.prefix}/users/${safeUserId}/${Date.now()}/${safeFilename}`;
}
})
```
## Migration from Legacy Paths
**Upgrading from v1.x?** The old `pathPrefix` option still works but is deprecated. Use the new hierarchical system for better organization.
### Before (Legacy)
```typescript
// Old way - a single flat `pathPrefix` option (deprecated)
// e.g. pathPrefix: "uploads" applied to every route
export const { POST, GET } = s3Router.handlers;
```
### After (Hierarchical)
```typescript
// New way - hierarchical configuration
const { s3 } = createUploadConfig()
.provider("cloudflareR2",{ /* config */ })
.paths({
prefix: "uploads",
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
const s3Router = s3.createRouter({
images: s3.image().paths({ prefix: "images" }),
documents: s3.file().paths({ prefix: "documents" })
});
```
**Benefits of upgrading:**
* ✅ **Better organization** with route-specific prefixes
* ✅ **No configuration loss** - global settings are preserved
* ✅ **More flexibility** with custom generation functions
* ✅ **Type safety** with PathContext
* ✅ **Scalable patterns** for growing applications
**🎉 You're ready!** Your uploads now have clean, organized, and scalable path structures that will grow with your application.
# Server Router (/docs/api/configuration/server-router)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { Steps, Step } from "fumadocs-ui/components/steps";
import { File, Folder, Files } from "fumadocs-ui/components/files";
import { TypeTable } from "fumadocs-ui/components/type-table";
## Setting Up Server Routes
The server router is the heart of pushduck. It defines your upload endpoints with complete type safety and validates all uploads against your schema.
This guide covers the **enhanced router API** with property-based client
access. Looking for the legacy API? Check our [migration
guide](/docs/guides/migrate-to-enhanced-client).
## Project Structure
A typical Next.js project with pushduck follows this structure:
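Condensed to just the files this guide references:
```
app/
  api/
    upload/
      route.ts        # upload endpoints (router + exported handlers)
lib/
  upload.ts           # createUploadConfig, router definition, exported types
  upload-client.ts    # typed client via createUploadClient
```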
## Basic Router Setup
**Create your upload route**
Start by creating the API route that will handle your uploads:
```typescript title="app/api/upload/route.ts"
import { s3 } from '@/lib/upload'
const s3Router = s3.createRouter({
imageUpload: s3
.image()
.maxFileSize('4MB')
.maxFiles(10)
.middleware(async ({ req }) => {
// Add your authentication logic here
return { userId: "user_123" }
}),
documentUpload: s3
.file()
.maxFileSize('10MB')
.types(['application/pdf'])
.maxFiles(1)
})
export const { POST, GET } = s3Router.handlers
```
**Export router types**
Create a separate file to export your router types for client-side usage:
```typescript title="lib/upload.ts"
import { createUploadConfig } from 'pushduck/server'
// Configure upload with provider and settings
const { s3, storage } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL!,
bucket: process.env.S3_BUCKET_NAME!,
accountId: process.env.R2_ACCOUNT_ID!,
})
.build()
// Define your router
const s3Router = s3.createRouter({
imageUpload: s3.image().maxFileSize('4MB').maxFiles(10),
documentUpload: s3.file().maxFileSize('10MB').types(['application/pdf']).maxFiles(1)
})
// Export for use in API routes and client
export { s3, storage }
export type AppS3Router = typeof s3Router
```
**Create typed client**
Set up your client-side upload client with full type safety:
```typescript title="lib/upload-client.ts"
import { createUploadClient } from 'pushduck/client'
import type { AppS3Router } from './upload'
export const upload = createUploadClient<AppS3Router>({
endpoint: '/api/upload'
})
```
## Schema Builder Reference
The `s3` schema builders provide a fluent API for defining your upload requirements:
### FileConfig Options
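As a summary of the chainable options used throughout this guide (not an exhaustive list of the published API):
```typescript
s3.image()                          // image uploads (image MIME types)
  .maxFileSize("4MB")               // per-file size limit
  .maxFiles(10)                     // max files per request
  .formats(["jpeg", "png", "webp"]) // allowed image formats

s3.file()                           // arbitrary files
  .types(["application/pdf"])       // explicit MIME type whitelist

s3.object({ /* nested schemas */ }) // group multiple schemas together

// All builders also chain .middleware(), .paths(), and lifecycle hooks
```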
## Advanced Configuration
### Multiple File Types
You can define schemas that accept multiple file types:
```typescript
const s3Router = s3.createRouter({
mixedUpload: s3.object({
images: s3.image().maxFileSize('4MB').maxFiles(5),
pdfs: s3.file().maxFileSize('10MB').types(['application/pdf']).maxFiles(2),
documents: s3.file().maxFileSize('5MB').types(['application/vnd.openxmlformats-officedocument.wordprocessingml.document']).maxFiles(3)
})
})
```
```typescript
const s3Router = s3.createRouter({
mediaUpload: s3.object({
images: s3.image()
.maxFileSize('4MB')
.maxFiles(10)
.formats(['jpeg', 'jpg', 'png', 'webp']),
videos: s3.file()
.maxFileSize('100MB')
.maxFiles(2)
.types(['video/mp4', 'video/quicktime', 'video/avi'])
})
})
```
```typescript
const s3Router = s3.createRouter({
genericUpload: s3.file()
.maxFileSize('50MB')
.maxFiles(20)
.types([
'image/*', 'video/*', 'application/pdf',
'application/msword', 'text/plain'
])
})
```
### Global Configuration
Configure settings that apply to all upload endpoints:
**Deprecated:** The `pathPrefix` option is deprecated. Use the new [**hierarchical path configuration**](/docs/api/configuration/path-configuration) for better organization and flexibility.
```typescript
// ❌ Deprecated approach - use modern createUploadConfig() instead
// This section is kept for reference only
```
**For new projects, use the [hierarchical path system](/docs/api/configuration/path-configuration) instead:**
```typescript
// ✅ Modern approach with hierarchical paths
const { s3 } = createUploadConfig()
.provider("cloudflareR2",{ /* config */ })
.paths({
prefix: "uploads",
generateKey: (file, metadata) => {
return `${metadata.userId}/${Date.now()}/${file.name}`;
}
})
.build();
const s3Router = s3.createRouter({
images: s3.image().paths({ prefix: "images" }),
documents: s3.file().paths({ prefix: "documents" })
});
```
### Multiple Providers
Support different storage providers for different upload types:
```typescript
// ✅ Modern approach with multiple provider configurations
import { createUploadConfig } from "pushduck/server";
const primaryConfig = createUploadConfig()
.provider("cloudflareR2",{
// Primary R2 config for production files
accessKeyId: process.env.R2_ACCESS_KEY_ID!,
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
accountId: process.env.R2_ACCOUNT_ID!,
bucket: process.env.R2_BUCKET!,
})
.build();
const backupConfig = createUploadConfig()
.provider("aws",{
// AWS S3 config for backups
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
region: process.env.AWS_REGION!,
bucket: process.env.AWS_BACKUP_BUCKET!,
})
.build();
// Use primary config for main uploads
const s3Router = primaryConfig.s3.createRouter({
productImages: primaryConfig.s3.image().maxFileSize("4MB").maxFiles(10),
// Backup handling would be done in lifecycle hooks
});
```
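A sketch of that backup handling, assuming a hypothetical `mirrorToBackup` helper that copies an object into the backup bucket:
```typescript
const s3Router = primaryConfig.s3.createRouter({
  productImages: primaryConfig.s3
    .image()
    .maxFileSize("4MB")
    .onUploadComplete(async ({ file }) => {
      // mirrorToBackup is illustrative - e.g. copy the object via the
      // backup storage instance built above
      await mirrorToBackup(backupConfig.storage, file.key)
    }),
})
```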
## Middleware Integration
Add authentication, logging, and custom validation:
### Authentication Middleware
```typescript
const s3Router = s3.createRouter({
privateUploads: s3.image()
.maxFileSize("4MB")
.maxFiles(5)
.middleware(async ({ req, metadata }) => {
const session = await getServerSession(req);
if (!session?.user?.id) {
throw new Error("Unauthorized");
}
return {
...metadata,
userId: session.user.id,
userRole: session.user.role,
};
}),
publicUploads: s3.image()
.maxFileSize("1MB")
.maxFiles(1), // No middleware = publicly accessible
});
```
### File Validation Middleware
```typescript
import { z } from "zod";
const s3Router = s3.createRouter({
profilePicture: s3.image()
.maxFileSize("2MB")
.maxFiles(1)
.middleware(async ({ req, file, metadata }) => {
// Custom file validation
if (file.name.includes("temp") || file.name.includes("test")) {
throw new Error("Temporary files not allowed");
}
const userId = await getUserId(req);
return { ...metadata, userId };
}),
});
```
### Metadata Enhancement
```typescript
const s3Router = s3.createRouter({
documentUpload: s3.file()
.maxFileSize("10MB")
.maxFiles(1)
.types(["application/pdf"])
.middleware(async ({ req, metadata }) => {
const enrichedMetadata = {
...metadata,
uploadedAt: new Date().toISOString(),
userAgent: req.headers.get("user-agent"),
ip: req.headers.get("x-forwarded-for") || "unknown",
};
return enrichedMetadata;
}),
});
```
## Lifecycle Hooks
Handle upload events for processing, notifications, and cleanup:
* `onUploadStart` - called when an upload begins - `(context) => void | Promise<void>`
* `onUploadProgress` - called during upload progress - `(context, progress) => void | Promise<void>`
* `onUploadComplete` - called when upload succeeds - `(context, result) => void | Promise<void>`
* `onUploadError` - called when upload fails - `(context, error) => void | Promise<void>`
```typescript
const s3Router = s3.createRouter({
imageUpload: s3.image()
.maxFileSize("4MB")
.maxFiles(10)
.onUploadComplete(async ({ file, url, metadata }) => {
// Process uploaded image - use permanent URL for processing
await generateThumbnail(url);
await updateDatabase({
key: file.key,
publicUrl: url,
userId: metadata.userId
});
})
.onUploadError(async ({ error, metadata }) => {
// Log errors and notify admins
console.error("Upload failed:", error);
await notifyAdmins(`Upload failed for user ${metadata.userId}`);
}),
});
```
## Type Safety Features
### Router Type Export
Export your router type for end-to-end type safety:
```typescript title="lib/upload.ts"
import { createUploadConfig } from "pushduck/server";
const { s3 } = createUploadConfig()
.provider("cloudflareR2",{ /* your config */ })
.build();
const s3Router = s3.createRouter({
// ... your configuration
});
export { s3 };
export type AppS3Router = typeof s3Router;
// Extract individual endpoint types
export type ImageUploadType = AppS3Router["imageUpload"];
export type DocumentUploadType = AppS3Router["documentUpload"];
```
### Custom Context Types
Define custom context types for your middleware:
```typescript
interface CustomContext {
userId: string;
userRole: "admin" | "user" | "guest";
organizationId?: string;
}
const s3Router = s3.createRouter({
upload: s3.image()
.maxFileSize('4MB')
.maxFiles(5)
.middleware(async ({ req }): Promise<CustomContext> => {
// Your auth logic here
return {
userId: "user_123",
userRole: "user",
};
}),
});
```
## Real-World Examples
### E-commerce Product Images
```typescript
const ecommerceRouter = s3.createRouter({
productImages: s3.image()
.maxFileSize('5MB')
.maxFiles(8)
.formats(['webp', 'jpeg'])
.middleware(async ({ req }) => {
const vendorId = await getVendorId(req);
return { vendorId, category: "products" };
})
.onUploadComplete(async ({ files, metadata }) => {
// Update product catalog
await updateProductImages(metadata.vendorId, files);
}),
});
```
### Document Management System
```typescript
const docsRouter = s3.createRouter({
contracts: s3.file()
.maxFileSize('25MB')
.types(['application/pdf'])
.maxFiles(1)
.middleware(async ({ req }) => {
const { userId, companyId } = await validateContractUpload(req);
return { userId, companyId, confidential: true };
}),
proposals: s3.object({
pdfs: s3.file().maxFileSize('50MB').types(['application/pdf']).maxFiles(3),
documents: s3.file().maxFileSize('25MB').types(['application/vnd.openxmlformats-officedocument.wordprocessingml.document']).maxFiles(5),
}).middleware(async ({ req }) => {
const { userId, projectId } = await validateProposalUpload(req);
return { userId, projectId };
}),
});
```
### Social Media Platform
```typescript
export const socialRouter = createUploadRouter({
profilePicture: uploadSchema({
image: {
maxSize: "2MB",
maxCount: 1,
processing: {
resize: { width: 400, height: 400 },
format: "webp",
},
},
}),
postMedia: uploadSchema({
image: { maxSize: "8MB", maxCount: 4 },
video: { maxSize: "100MB", maxCount: 1 },
}).middleware(async ({ req }) => {
const userId = await authenticateUser(req);
return { userId, postType: "media" };
}),
});
```
## Security Best Practices
**Important:** Always implement proper authentication and file validation in
production environments.
### Content Type Validation
```typescript
export const router = createUploadRouter({
secureUpload: uploadSchema({
image: {
maxSize: "4MB",
maxCount: 5,
mimeTypes: ["image/jpeg", "image/png", "image/webp"], // Explicit whitelist
},
}).middleware(async ({ req, files }) => {
// Additional security checks
for (const file of files) {
// Validate file headers match content type
const isValidImage = await validateImageFile(file);
if (!isValidImage) {
throw new Error("Invalid image file");
}
}
return { userId: await getUserId(req) };
}),
});
```
### Rate Limiting
```typescript
import { ratelimit } from "@/lib/ratelimit";
export const router = createUploadRouter({
upload: uploadSchema({
any: { maxSize: "10MB", maxCount: 3 },
}).middleware(async ({ req }) => {
const ip = req.headers.get("x-forwarded-for") || "unknown";
const { success } = await ratelimit.limit(ip);
if (!success) {
throw new Error("Rate limit exceeded");
}
return { ip };
}),
});
```
***
**Next Steps:** Now that you have your router configured, learn how to
[configure your client](/docs/api/configuration/client-options) for the best
developer experience.
# Upload Configuration (/docs/api/configuration/upload-config)
## Upload Configuration Options
The `createUploadConfig()` builder provides a fluent API for configuring your S3 uploads with providers, security, paths, and lifecycle hooks.
## Basic Setup
```typescript title="lib/upload.ts"
// lib/upload.ts
import { createUploadConfig } from 'pushduck/server'
const { storage, s3, config } = createUploadConfig()
.provider("cloudflareR2",{ // [!code highlight]
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL,
bucket: process.env.S3_BUCKET_NAME,
accountId: process.env.R2_ACCOUNT_ID, // [!code highlight]
})
.build()
export { storage, s3 }
```
## Provider Type Safety
The `createUploadConfig().provider()` method provides full TypeScript type safety for provider configurations. When you specify a provider type, TypeScript will automatically infer the correct configuration interface and provide autocomplete and type checking.
### Features
* **Provider name autocomplete**: TypeScript suggests valid provider names
* **Configuration property validation**: Only valid properties for each provider are accepted
* **Required field checking**: TypeScript ensures all required fields are provided
* **Property type validation**: Each property must be the correct type
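For example, a config with a property that doesn't belong to the chosen provider fails to compile (the invalid property below is made up for illustration):
```typescript
createUploadConfig().provider("cloudflareR2", {
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  accountId: process.env.R2_ACCOUNT_ID,
  bucket: process.env.S3_BUCKET_NAME,
  // @ts-expect-error - properties that don't exist on the R2 config are rejected
  notARealOption: true,
})
```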
## Provider Configuration
### Cloudflare R2
```typescript title="Cloudflare R2 Configuration"
createUploadConfig().provider("cloudflareR2",{ // [!code highlight]
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'auto', // Always 'auto' for R2
endpoint: process.env.AWS_ENDPOINT_URL,
bucket: process.env.S3_BUCKET_NAME,
accountId: process.env.R2_ACCOUNT_ID, // Required for R2 // [!code highlight]
// Optional: Custom domain for public files
customDomain: process.env.R2_CUSTOM_DOMAIN,
})
```
### AWS S3
```typescript title="AWS S3 Configuration"
createUploadConfig().provider("aws",{ // [!code highlight]
region: 'us-east-1',
bucket: 'your-bucket', // [!code highlight]
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
// Optional: Custom domain for public files (CDN, CloudFront, etc.)
customDomain: process.env.S3_CUSTOM_DOMAIN,
})
```
### DigitalOcean Spaces
```typescript
createUploadConfig().provider("digitalOceanSpaces",{
region: 'nyc3',
bucket: 'your-space',
accessKeyId: process.env.DO_SPACES_ACCESS_KEY_ID,
secretAccessKey: process.env.DO_SPACES_SECRET_ACCESS_KEY,
// Optional: Custom domain for public files (CDN endpoint)
customDomain: process.env.DO_SPACES_CUSTOM_DOMAIN,
})
```
### MinIO
```typescript
createUploadConfig().provider("minio",{
endpoint: 'localhost:9000',
bucket: 'your-bucket',
accessKeyId: process.env.MINIO_ACCESS_KEY_ID,
secretAccessKey: process.env.MINIO_SECRET_ACCESS_KEY,
useSSL: false,
// Optional: Custom domain for public files
customDomain: process.env.MINIO_CUSTOM_DOMAIN,
})
```
### S3-Compatible (Generic)
For any S3-compatible storage service not explicitly supported:
```typescript
createUploadConfig().provider("s3Compatible",{
endpoint: 'https://your-s3-compatible-service.com',
bucket: 'your-bucket',
accessKeyId: process.env.S3_ACCESS_KEY_ID,
secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
region: 'us-east-1', // Optional, defaults to us-east-1
forcePathStyle: true, // Optional, defaults to true for compatibility
// Optional: Custom domain for public files
customDomain: process.env.S3_CUSTOM_DOMAIN,
})
```
## Configuration Methods
### .defaults()
Set default file constraints and options:
```typescript
.defaults({
maxFileSize: '10MB',
allowedFileTypes: ['image/*', 'application/pdf', 'text/*'],
acl: 'public-read', // or 'private'
metadata: {
uploadedBy: 'system',
environment: process.env.NODE_ENV,
},
})
```
### .paths()
Configure global path structure:
```typescript
.paths({
// Global prefix for all uploads
prefix: 'uploads',
// Global key generation function
generateKey: (file, metadata) => {
const userId = metadata.userId || 'anonymous'
const timestamp = Date.now()
const randomId = Math.random().toString(36).substring(2, 8)
const sanitizedName = file.name.replace(/[^a-zA-Z0-9.-]/g, '_')
return `${userId}/${timestamp}/${randomId}/${sanitizedName}`
},
})
```
### .security()
Configure security and access control:
```typescript
.security({
requireAuth: true,
allowedOrigins: [
'http://localhost:3000',
'https://your-domain.com',
],
rateLimiting: {
maxUploads: 10,
windowMs: 60000, // 1 minute
},
})
```
### .hooks()
Add lifecycle hooks for upload events:
```typescript
.hooks({
onUploadStart: async ({ file, metadata }) => {
console.log(`🚀 Upload started: ${file.name}`)
// Log to analytics, validate user, etc.
},
onUploadComplete: async ({ file, url, metadata }) => {
console.log(`✅ Upload completed: ${file.name} -> ${url}`)
// Save to database
await db.files.create({
filename: file.name,
url,
userId: metadata.userId,
size: file.size,
})
// Send notifications
await notificationService.send({
type: 'upload_complete',
userId: metadata.userId,
filename: file.name,
})
},
onUploadError: async ({ file, error, metadata }) => {
console.error(`❌ Upload failed: ${file.name}`, error)
// Log to error tracking service
await errorService.log({
operation: 'file_upload',
error: error.message,
userId: metadata.userId,
filename: file.name,
})
},
})
```
## Complete Example
```typescript
// lib/upload.ts
import { createUploadConfig } from 'pushduck/server'
const { storage, s3, config } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL,
bucket: process.env.S3_BUCKET_NAME,
accountId: process.env.R2_ACCOUNT_ID,
})
.defaults({
maxFileSize: '10MB',
acl: 'public-read',
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
const userId = metadata.userId || 'anonymous'
const date = new Date()
const year = date.getFullYear()
const month = String(date.getMonth() + 1).padStart(2, '0')
const day = String(date.getDate()).padStart(2, '0')
const randomId = Math.random().toString(36).substring(2, 8)
return `${userId}/${year}/${month}/${day}/${randomId}/${file.name}`
},
})
.security({
allowedOrigins: [
'http://localhost:3000',
'https://your-domain.com',
],
rateLimiting: {
maxUploads: 20,
windowMs: 60000,
},
})
.hooks({
onUploadComplete: async ({ file, url, metadata }) => {
// Save to your database
await saveFileRecord({
filename: file.name,
url,
userId: metadata.userId,
uploadedAt: new Date(),
})
},
})
.build()
export { storage, s3 }
```
## Environment Variables
The configuration automatically reads from environment variables:
```bash
# .env.local
# Cloudflare R2
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_ENDPOINT_URL=https://your-account-id.r2.cloudflarestorage.com
S3_BUCKET_NAME=your-bucket
R2_ACCOUNT_ID=your-account-id
# AWS S3 (alternative)
AWS_REGION=us-east-1
AWS_S3_BUCKET=your-s3-bucket
# DigitalOcean Spaces (alternative)
DO_SPACES_REGION=nyc3
DO_SPACES_BUCKET=your-space
DO_SPACES_ACCESS_KEY_ID=your_key
DO_SPACES_SECRET_ACCESS_KEY=your_secret
# Custom Domains (Optional)
S3_CUSTOM_DOMAIN=https://cdn.yourdomain.com
DO_SPACES_CUSTOM_DOMAIN=https://cdn.yourdomain.com
MINIO_CUSTOM_DOMAIN=https://uploads.yourdomain.com
GCS_CUSTOM_DOMAIN=https://storage.yourdomain.com
CLOUDFLARE_R2_CUSTOM_DOMAIN=https://uploads.yourdomain.com
```
## Custom Domain Configuration
When `customDomain` is configured, public URLs will use your custom domain instead of the storage provider's default URL. This is useful for:
* **CDN Integration**: Use CloudFront, Cloudflare, or other CDNs
* **Branding**: Serve files from your own domain
* **Performance**: Optimize file delivery with custom caching rules
* **Security**: Control access through your own domain
### How It Works
```typescript
// Without custom domain
file.url = "https://bucket.s3.region.amazonaws.com/path/file.jpg"
// With custom domain
file.url = "https://cdn.yourdomain.com/path/file.jpg"
```
**Note**: Internal operations (upload, delete) still use the storage provider's endpoints. Only public URLs are affected by the custom domain.
## Client Upload Results
After successful upload completion, clients receive file objects with both permanent and temporary URLs:
```typescript
// Client-side usage
const { uploadFiles } = useUploadRoute('fileUpload', {
onSuccess: (results) => {
results.forEach(file => {
console.log('File:', file.name);
console.log('Public URL:', file.url); // Permanent access
console.log('Download URL:', file.presignedUrl); // Temporary access (1 hour)
console.log('S3 Key:', file.key); // Object key/path
});
}
});
```
### URL Types
* **`url`** - Permanent URL for public file access, CDN caching, and embedding
* **`presignedUrl`** - Temporary download URL (expires in 1 hour) for secure access
* **`key`** - S3 object key/path for direct storage operations
Use the appropriate URL based on your access requirements:
* Public files: Use `url` for permanent access
* Private files: Use `presignedUrl` for time-limited access
* File operations: Use `key` with storage API
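Putting the three together (illustrative usage):
```typescript
// Public file: embed the permanent URL
const markup = `<img src="${file.url}" />`

// Private file: hand out the time-limited presigned URL
window.open(file.presignedUrl)

// Storage operation: address the object by key
await storage.delete.file(file.key)
```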
## Type Definitions
```typescript
interface UploadConfig {
provider: ProviderConfig
defaults?: {
maxFileSize?: string | number
allowedFileTypes?: string[]
acl?: 'public-read' | 'private'
metadata?: Record<string, string>
}
paths?: {
prefix?: string
generateKey?: (
file: { name: string; type: string },
metadata: any
) => string
}
security?: {
requireAuth?: boolean
allowedOrigins?: string[]
rateLimiting?: {
maxUploads?: number
windowMs?: number
}
}
hooks?: {
onUploadStart?: (ctx: { file: any; metadata: any }) => Promise<void> | void
onUploadComplete?: (ctx: {
file: any
url: string // Permanent file URL
metadata: any
}) => Promise<void> | void
onUploadError?: (ctx: {
file: any
error: Error
metadata: any
}) => Promise<void> | void
}
}
```
# File Operations (/docs/api/storage/file-operations)
## File Management Operations
## List Operations
### Basic File Listing
```typescript
// List all files
const files = await storage.list.files()
// List with options
const files = await storage.list.files({
prefix: 'uploads/',
maxResults: 50,
sortBy: 'lastModified',
sortOrder: 'desc'
})
```
### Paginated Listing
```typescript
// Get first page
const result = await storage.list.paginated({
maxResults: 20,
prefix: 'images/'
})
console.log(result.files) // FileInfo[]
console.log(result.hasMore) // boolean
console.log(result.nextToken) // string | undefined
// Get next page
if (result.hasMore) {
const nextPage = await storage.list.paginated({
maxResults: 20,
prefix: 'images/',
continuationToken: result.nextToken
})
}
```
### Filtered Listing
```typescript
// By file extension
const images = await storage.list.byExtension('jpg', 'photos/')
const pdfs = await storage.list.byExtension('pdf')
// By file size (bytes)
const largeFiles = await storage.list.bySize(1024 * 1024) // > 1MB
const mediumFiles = await storage.list.bySize(100000, 1024 * 1024) // 100KB - 1MB
// By date range
const recent = await storage.list.byDate(
new Date('2024-01-01'),
new Date('2024-12-31')
)
```
### Directory Listing
```typescript
// List "directories" (common prefixes)
const dirs = await storage.list.directories('uploads/')
// Returns: ['uploads/images/', 'uploads/documents/', 'uploads/videos/']
```
### Async Generator (Large Datasets)
```typescript
// Process large datasets efficiently
for await (const batch of storage.list.paginatedGenerator({ maxResults: 100 })) {
console.log(`Processing ${batch.files.length} files`)
// Process batch...
}
```
## Metadata Operations
### Single File Info
```typescript
const info = await storage.metadata.getInfo('uploads/image.jpg')
console.log(info.key) // 'uploads/image.jpg'
console.log(info.size) // 1024000
console.log(info.contentType) // 'image/jpeg'
console.log(info.lastModified) // Date object
console.log(info.etag) // '"abc123..."'
```
### Batch Metadata
```typescript
const keys = ['file1.jpg', 'file2.pdf', 'file3.mp4']
const results = await storage.metadata.getBatch(keys)
results.forEach(result => {
if (result.success) {
console.log(result.info.size)
} else {
console.log(result.error)
}
})
```
### Specific Metadata
```typescript
// Get individual properties
const size = await storage.metadata.getSize('file.jpg')
const contentType = await storage.metadata.getContentType('file.jpg')
const lastModified = await storage.metadata.getLastModified('file.jpg')
// Custom metadata
const customMeta = await storage.metadata.getCustom('file.jpg')
await storage.metadata.setCustom('file.jpg', {
userId: '123',
category: 'profile-image'
})
```
## Delete Operations
### Single File Delete
```typescript
const result = await storage.delete.file('uploads/old-file.jpg')
if (result.success) {
console.log('File deleted successfully')
} else {
console.log('Delete failed:', result.error)
}
```
### Batch Delete
```typescript
const keys = ['file1.jpg', 'file2.pdf', 'file3.mp4']
const result = await storage.delete.files(keys)
console.log(`Deleted: ${result.deleted.length}`)
console.log(`Failed: ${result.errors.length}`)
// Check individual results
result.errors.forEach(error => {
console.log(`Failed to delete ${error.key}: ${error.message}`)
})
```
### Delete by Prefix (Folder-like)
```typescript
// Delete all files with prefix
const result = await storage.delete.byPrefix('temp-uploads/')
console.log(`Deleted ${result.deletedCount} files`)
// With options
const result = await storage.delete.byPrefix('old-files/', {
dryRun: true, // Preview only, don't delete
batchSize: 500, // Process in batches
maxFiles: 1000 // Limit total files
})
if (result.dryRun) {
console.log(`Would delete ${result.totalFiles} files`)
}
```
## Validation Operations
### File Existence
```typescript
// Simple existence check
const exists = await storage.validation.exists('file.jpg')
// Existence with metadata
const result = await storage.validation.existsWithInfo('file.jpg')
if (result.exists) {
console.log('File size:', result.info.size)
}
```
### File Validation
```typescript
// Validate single file
const result = await storage.validation.validateFile('image.jpg', {
max: "5MB",
types: ['image/jpeg', 'image/png'],
min: "1KB"
})
if (result.valid) {
console.log('File is valid')
} else {
console.log('Validation errors:', result.errors)
}
// Validate multiple files
const results = await storage.validation.validateFiles(
['file1.jpg', 'file2.png'],
{ max: "10MB" }
)
```
### Connection Validation
```typescript
// Test S3 connection and permissions
const isHealthy = await storage.validation.connection()
console.log('S3 connection:', isHealthy ? 'OK' : 'Failed')
```
## Type Definitions
```typescript
interface FileInfo {
key: string
size: number
contentType: string
lastModified: Date
etag: string
metadata?: Record<string, string>
}
interface ListFilesOptions {
prefix?: string
maxResults?: number
sortBy?: 'key' | 'size' | 'lastModified'
sortOrder?: 'asc' | 'desc'
}
interface ValidationRules {
max?: string | number
min?: string | number
types?: string[]
requiredMetadata?: string[]
}
```
# Storage API (/docs/api/storage)
import { Cards, Card } from "fumadocs-ui/components/card";
import { Callout } from "fumadocs-ui/components/callout";
## Storage API Overview
The Storage API provides direct access to file operations, metadata retrieval, and storage management. Use it for building custom file management interfaces, bulk operations, and advanced storage workflows.
## API Categories
**Get started quickly** with common operations.
* File listing and searching
* Basic upload/download operations
* Quick delete and metadata operations
* Essential API patterns
**Perfect for** getting up to speed fast.
**Structured API interface** with organized methods.
* `storage.list.*` - File listing operations
* `storage.metadata.*` - File information
* `storage.delete.*` - Deletion operations
* `storage.download.*` - URL generation
**Great for** organized code architecture.
**Complete file management** capabilities.
* List files with filtering and pagination
* Delete single files or batches
* Move and copy operations
* File metadata retrieval
**Essential for** file management features.
**Secure, temporary URLs** for storage operations.
* Download presigned URLs
* Upload presigned URLs
* Custom expiration times
* Access control integration
**Perfect for** secure file access.
**Debug storage problems** effectively.
* Access denied errors
* Empty file lists
* Presigned URL issues
* Performance optimization
**When you need** to solve problems.
## Getting Started
### Basic Storage Setup
```typescript
// lib/upload.ts
import { createUploadConfig } from 'pushduck/server'
const { storage } = createUploadConfig()
.provider("cloudflareR2", {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
accountId: process.env.R2_ACCOUNT_ID,
bucket: process.env.S3_BUCKET_NAME,
})
.build()
export { storage }
```
### Using in API Routes
```typescript
// app/api/files/route.ts
import { storage } from '@/lib/upload'
export async function GET() {
try {
const files = await storage.list.files()
return Response.json({ files })
} catch (error) {
return Response.json({ error: 'Failed to list files' }, { status: 500 })
}
}
export async function DELETE(request: Request) {
const { key } = await request.json()
try {
await storage.delete.file(key)
return Response.json({ success: true })
} catch (error) {
return Response.json({ error: 'Failed to delete file' }, { status: 500 })
}
}
```
## Common Operations
### File Listing with Pagination
```typescript
// Get paginated file list
const result = await storage.list.paginated({
maxResults: 50,
prefix: 'uploads/',
})
console.log(`Found ${result.files.length} files`)
if (result.hasMore) {
console.log('More files available')
}
```
### Bulk File Operations
```typescript
// Delete multiple files
const filesToDelete = ['file1.jpg', 'file2.pdf', 'file3.png']
await storage.delete.files(filesToDelete)
// Get metadata for multiple files
const fileInfos = await storage.metadata.getBatch(fileKeys)
```
### Secure Downloads
```typescript
// Generate secure download URL
const downloadUrl = await storage.download.presignedUrl(
'private/document.pdf',
3600 // 1 hour expiration
)
// Use with fetch or window.open
window.open(downloadUrl)
```
## Storage Instance Structure
The storage instance organizes operations into logical namespaces:
```typescript
storage.list.* // File listing operations
storage.metadata.* // File metadata operations
storage.download.* // Download and URL operations
storage.upload.* // Upload operations
storage.delete.* // Delete operations
storage.validation.* // Validation operations
```
## Error Handling
All storage operations use structured error handling:
```typescript
import { isPushduckError } from 'pushduck/server'
try {
const files = await storage.list.files()
} catch (error) {
if (isPushduckError(error)) {
console.log('Storage error:', error.code, error.context)
switch (error.code) {
case 'ACCESS_DENIED':
// Handle permission errors
break
case 'NETWORK_ERROR':
// Handle connectivity issues
break
default:
// Handle other errors
}
}
}
```
**TypeScript Support**: All storage operations are fully typed with intelligent autocomplete and error detection.
## Performance Considerations
### Efficient File Listing
```typescript
// ❌ Inefficient - loads all files
const allFiles = await storage.list.files()
// ✅ Efficient - use pagination
const result = await storage.list.paginated({ maxResults: 50 })
// ✅ Even better - filter with prefix
const userFiles = await storage.list.files({
prefix: `users/${userId}/`,
maxResults: 100
})
```
### Batch Operations
```typescript
// ❌ Inefficient - multiple API calls
for (const key of fileKeys) {
await storage.delete.file(key)
}
// ✅ Efficient - single batch operation
await storage.delete.files(fileKeys)
```
***
## Next Steps
* **New to storage?** Start with [Quick Reference](/docs/api/storage/quick-reference)
* **Building file management?** See [File Operations](/docs/api/storage/file-operations)
* **Need secure access?** Check [Presigned URLs](/docs/api/storage/presigned-urls)
* **Having issues?** Visit [Troubleshooting](/docs/api/storage/troubleshooting)
# Presigned URLs (/docs/api/storage/presigned-urls)
## Generating Presigned URLs
Presigned URLs allow secure access to private S3 files without exposing credentials. They're essential for serving files from private buckets.
## Download URLs
### Automatic Download URLs
When files are uploaded through pushduck, download presigned URLs are automatically generated:
```typescript
const { uploadFiles } = useUploadRoute('fileUpload', {
onSuccess: (results) => {
results.forEach(file => {
console.log('Public URL:', file.url); // Permanent access
console.log('Download URL:', file.presignedUrl); // Temporary access (1 hour)
});
}
});
```
### Manual Presigned URL Generation
You can also generate presigned URLs manually using the storage API:
```typescript
// Generate URL valid for 1 hour (default)
const url = await storage.download.presignedUrl('private/document.pdf')
// Custom expiration (in seconds)
const url = await storage.download.presignedUrl('private/image.jpg', 3600) // 1 hour
const url = await storage.download.presignedUrl('private/video.mp4', 86400) // 24 hours
```
### Direct File URLs
For public buckets, get direct URLs:
```typescript
const publicUrl = await storage.download.url('public/image.jpg')
// Returns: https://bucket.s3.amazonaws.com/public/image.jpg
```
## Upload URLs
### Single Upload URL
```typescript
const uploadUrl = await storage.upload.presignedUrl({
key: 'uploads/new-file.jpg',
contentType: 'image/jpeg',
expiresIn: 300, // 5 minutes
maxFileSize: 5 * 1024 * 1024 // 5MB
})
console.log(uploadUrl.url) // Presigned URL for PUT request
console.log(uploadUrl.fields) // Form fields for multipart upload
```
### Batch Upload URLs
```typescript
const requests = [
{ key: 'file1.jpg', contentType: 'image/jpeg' },
{ key: 'file2.pdf', contentType: 'application/pdf' },
{ key: 'file3.mp4', contentType: 'video/mp4' }
]
const urls = await storage.upload.presignedBatch(requests)
urls.forEach((result, index) => {
if (result.success) {
console.log(`Upload URL for ${requests[index].key}:`, result.url)
} else {
console.log(`Failed to generate URL:`, result.error)
}
})
```
## Frontend Usage Examples
### Direct File Access
```typescript
// API Route (app/api/files/[key]/route.ts)
import { storage } from '@/lib/upload'
import { NextRequest, NextResponse } from 'next/server'
export async function GET(
request: NextRequest,
{ params }: { params: { key: string } }
) {
try {
const url = await storage.download.presignedUrl(params.key, 3600)
return NextResponse.redirect(url)
} catch (error) {
return NextResponse.json(
{ error: 'File not found' },
{ status: 404 }
)
}
}
```
### File Viewer Component
```tsx
'use client'
import { useState, useEffect } from 'react'
interface FileViewerProps {
fileKey: string
}
export function FileViewer({ fileKey }: FileViewerProps) {
const [url, setUrl] = useState<string | null>(null)
const [loading, setLoading] = useState(true)
useEffect(() => {
async function getUrl() {
try {
const response = await fetch('/api/presigned', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
operation: 'get-download-url',
key: fileKey
})
})
const data = await response.json()
if (data.success) {
setUrl(data.url)
}
} finally {
setLoading(false)
}
}
getUrl()
}, [fileKey])
if (loading) return <div>Loading...</div>
if (!url) return <div>Failed to load file</div>
return (
// Markup is illustrative - an iframe can preview most file types
<iframe src={url} title={fileKey} style={{ width: '100%', height: '600px' }} />
)
}
```
### API Route for Presigned URLs
```typescript
// app/api/presigned/route.ts
import { storage } from '@/lib/upload'
import { NextRequest, NextResponse } from 'next/server'
export async function POST(request: NextRequest) {
try {
const { operation, key, expiresIn = 3600 } = await request.json()
switch (operation) {
case 'get-download-url': {
const downloadUrl = await storage.download.presignedUrl(key, expiresIn)
return NextResponse.json({
success: true,
url: downloadUrl,
expiresIn
})
}
case 'get-upload-url': {
const uploadUrl = await storage.upload.presignedUrl({
key,
expiresIn,
contentType: 'application/octet-stream'
})
return NextResponse.json({
success: true,
...uploadUrl
})
}
default:
return NextResponse.json(
{ success: false, error: 'Unknown operation' },
{ status: 400 }
)
}
} catch (error) {
return NextResponse.json(
{ success: false, error: error.message },
{ status: 500 }
)
}
}
```
## Security Considerations
### Expiration Times
Choose appropriate expiration times:
```typescript
// Short-lived for sensitive files
const sensitiveUrl = await storage.download.presignedUrl('private/sensitive.pdf', 300) // 5 minutes
// Medium-lived for user content
const userUrl = await storage.download.presignedUrl('user/profile.jpg', 3600) // 1 hour
// Longer-lived for public content
const publicUrl = await storage.download.presignedUrl('public/banner.jpg', 86400) // 24 hours
```
### Access Control
Implement proper access control before generating URLs:
```typescript
// API Route with authentication
export async function POST(request: NextRequest) {
// Verify user authentication
const user = await getAuthenticatedUser(request)
if (!user) {
return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}
const { key } = await request.json()
// Check if user can access this file
if (!canUserAccessFile(user.id, key)) {
return NextResponse.json({ error: 'Forbidden' }, { status: 403 })
}
// Generate presigned URL
const url = await storage.download.presignedUrl(key, 3600)
return NextResponse.json({ url })
}
```
## Type Definitions
```typescript
interface PresignedUrlOptions {
key: string
contentType?: string
expiresIn?: number
maxFileSize?: number
metadata?: Record<string, string>
}
interface PresignedUrlResult {
url: string
fields?: Record<string, string>
expiresAt: Date
}
```
# Quick Reference (/docs/api/storage/quick-reference)
## Storage Operations Overview
## Setup
```typescript
// lib/upload.ts
import { createUploadConfig } from 'pushduck/server'
const { storage } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL,
bucket: process.env.S3_BUCKET_NAME,
accountId: process.env.R2_ACCOUNT_ID,
})
.build()
export { storage }
```
## Essential Operations
### List Files
```typescript
const files = await storage.list.files({ prefix: 'uploads/', maxResults: 50 })
const paginated = await storage.list.paginated({ maxResults: 20 })
const images = await storage.list.byExtension('jpg')
```
### File Info
```typescript
const info = await storage.metadata.getInfo('file.jpg')
const exists = await storage.validation.exists('file.jpg')
```
### Delete Files
```typescript
await storage.delete.file('old-file.jpg')
await storage.delete.files(['file1.jpg', 'file2.pdf'])
await storage.delete.byPrefix('temp/')
```
### Presigned URLs
```typescript
const downloadUrl = await storage.download.presignedUrl('private/file.pdf', 3600)
const uploadUrl = await storage.upload.presignedUrl({ key: 'new-file.jpg', contentType: 'image/jpeg' })
```
## API Route Example
```typescript
// app/api/files/route.ts
import { storage } from '@/lib/upload'
export async function GET() {
const files = await storage.list.files()
return Response.json({ files })
}
export async function DELETE(request: Request) {
const { key } = await request.json()
const result = await storage.delete.file(key)
return Response.json(result)
}
```
## Error Handling
```typescript
import { isPushduckError } from 'pushduck/server'
try {
await storage.list.files()
} catch (error) {
if (isPushduckError(error)) {
console.log(error.code, error.context)
}
}
```
## Types
```typescript
import type {
FileInfo,
ListFilesOptions,
ValidationRules
} from 'pushduck/server'
```
# Storage Instance (/docs/api/storage/storage-instance)
## Storage API Instance
The `StorageInstance` provides a clean, object-style API for all S3 operations. It groups related operations under namespaces for better discoverability.
## Getting the Storage Instance
The `storage` instance comes from your upload configuration, not created separately:
```typescript
// lib/upload.ts
import { createUploadConfig } from 'pushduck/server'
const { storage, s3, config } = createUploadConfig()
.provider("cloudflareR2",{
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'auto',
endpoint: process.env.AWS_ENDPOINT_URL,
bucket: process.env.S3_BUCKET_NAME,
accountId: process.env.R2_ACCOUNT_ID,
})
.defaults({
maxFileSize: '10MB',
acl: 'public-read',
})
.paths({
prefix: 'uploads',
generateKey: (file, metadata) => {
const userId = metadata.userId || 'anonymous'
const timestamp = Date.now()
const randomId = Math.random().toString(36).substring(2, 8)
return `${userId}/${timestamp}/${randomId}/${file.name}`
},
})
.build()
// Export the storage instance
export { storage }
```
Then use it in your API routes:
```typescript
// app/api/files/route.ts
import { storage } from '@/lib/upload'
export async function GET() {
const files = await storage.list.files()
return Response.json({ files })
}
```
## API Structure
The storage instance organizes operations into logical namespaces:
```typescript
storage.list.* // File listing operations
storage.metadata.* // File metadata operations
storage.download.* // Download and URL operations
storage.upload.* // Upload operations
storage.delete.* // Delete operations
storage.validation.* // Validation operations
```
## Configuration Methods
### getConfig()
Get the current configuration (read-only):
```typescript
const config = storage.getConfig()
console.log(config.provider.bucket) // 'my-bucket'
```
### getProviderInfo()
Get provider information:
```typescript
const info = storage.getProviderInfo()
// Returns: { provider: 'aws-s3', bucket: 'my-bucket', region: 'us-east-1' }
```
## Error Handling
All storage operations throw structured `PushduckError` instances:
```typescript
import { isPushduckError } from 'pushduck/server'
try {
const files = await storage.list.files()
} catch (error) {
if (isPushduckError(error)) {
console.log(error.code) // Error code
console.log(error.context) // Additional context
}
}
```
## TypeScript Support
The storage instance is fully typed. Import types as needed:
```typescript
import type { FileInfo, ListFilesOptions } from 'pushduck/server'
const options: ListFilesOptions = {
prefix: 'uploads/',
maxResults: 100
}
const files: FileInfo[] = await storage.list.files(options)
```
# Troubleshooting (/docs/api/storage/troubleshooting)
## Storage Issues and Fixes
## Common Issues
### Access Denied (403 Errors)
**Problem**: Getting 403 errors when listing or accessing files.
**Solutions**:
1. **Check credentials**:
```typescript
// Verify your environment variables are set
console.log('Access Key:', process.env.AWS_ACCESS_KEY_ID?.substring(0, 5) + '...')
console.log('Bucket:', process.env.S3_BUCKET_NAME)
```
2. **Test connection**:
```typescript
const isHealthy = await storage.validation.connection()
if (!isHealthy) {
console.log('Connection failed - check credentials and permissions')
}
```
3. **Verify bucket permissions**:
* Ensure your access key has `s3:ListBucket`, `s3:GetObject`, `s3:DeleteObject` permissions
* For Cloudflare R2, check your API token has the necessary permissions
### Empty File Lists
**Problem**: `storage.list.files()` returns empty array but files exist.
**Solutions**:
1. **Check prefix**:
```typescript
// Try without prefix first
const allFiles = await storage.list.files()
console.log('Total files:', allFiles.length)
// Then with specific prefix
const prefixFiles = await storage.list.files({ prefix: 'uploads/' })
console.log('Prefix files:', prefixFiles.length)
```
2. **Verify bucket name**:
```typescript
const info = storage.getProviderInfo()
console.log('Current bucket:', info.bucket)
```
### Presigned URL Errors
**Problem**: Presigned URLs return 403 or expire immediately.
**Solutions**:
1. **Check expiration time**:
```typescript
// Use longer expiration for testing
const url = await storage.download.presignedUrl('file.jpg', 3600) // 1 hour
```
2. **Verify file exists**:
```typescript
const exists = await storage.validation.exists('file.jpg')
if (!exists) {
console.log('File does not exist')
}
```
3. **Check bucket privacy settings**:
* Private buckets require presigned URLs
* Public buckets can use direct URLs
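The distinction in code (both helpers are part of the storage API shown elsewhere in these docs):
```typescript
// Private bucket: access must go through a presigned URL
const privateUrl = await storage.download.presignedUrl('file.jpg', 3600)

// Public bucket: a direct, permanent URL works
const publicUrl = await storage.download.url('file.jpg')
```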
### Large File Operations Timeout
**Problem**: Operations on large datasets timeout or fail.
**Solutions**:
1. **Use pagination**:
```typescript
// Instead of loading all files at once
const allFiles = await storage.list.files() // ❌ May timeout
// Use pagination
const result = await storage.list.paginated({ maxResults: 100 }) // ✅ Better
```
2. **Use async generators for processing**:
```typescript
for await (const batch of storage.list.paginatedGenerator({ maxResults: 50 })) {
console.log(`Processing ${batch.files.length} files`)
// Process batch...
}
```
3. **Batch operations**:
```typescript
// Delete files in batches
const filesToDelete = ['file1.jpg', 'file2.jpg', /* ... many files */]
const batchSize = 100
for (let i = 0; i < filesToDelete.length; i += batchSize) {
const batch = filesToDelete.slice(i, i + batchSize)
await storage.delete.files(batch)
}
```
## Error Handling Patterns
### Graceful Degradation
```typescript
import { isPushduckError } from 'pushduck/server'

async function getFilesSafely() {
try {
const files = await storage.list.files()
return { success: true, files }
} catch (error) {
if (isPushduckError(error)) {
console.log('Storage error:', error.code, error.context)
// Handle specific error types
if (error.code === 'NETWORK_ERROR') {
return { success: false, error: 'Network connection failed', files: [] }
}
if (error.code === 'ACCESS_DENIED') {
return { success: false, error: 'Access denied', files: [] }
}
}
// Fallback for unknown errors
return { success: false, error: 'Unknown error', files: [] }
}
}
```
### Retry Logic
```typescript
async function withRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3,
  delay = 1000
): Promise<T> {
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
return await operation()
} catch (error) {
if (attempt === maxRetries) throw error
console.log(`Attempt ${attempt} failed, retrying in ${delay}ms...`)
await new Promise(resolve => setTimeout(resolve, delay))
delay *= 2 // Exponential backoff
}
}
throw new Error('Max retries exceeded')
}
// Usage
const files = await withRetry(() => storage.list.files())
```
## Performance Optimization
### Efficient File Listing
```typescript
// ❌ Inefficient - loads all metadata
const files = await storage.list.files()

// ✅ Efficient - only get what you need
const files = await storage.list.files({
  maxResults: 50,
  sortBy: 'lastModified',
  sortOrder: 'desc'
})

// ✅ Even better - use pagination for large datasets
const result = await storage.list.paginated({ maxResults: 20 })
```
### Batch Metadata Retrieval
```typescript
// ❌ Inefficient - multiple API calls
const fileInfos = []
for (const key of fileKeys) {
  const info = await storage.metadata.getInfo(key)
  fileInfos.push(info)
}

// ✅ Efficient - single batch call
const fileInfos = await storage.metadata.getBatch(fileKeys)
```
### Smart Caching
```typescript
// Cache file lists for a short time
const cache = new Map()
async function getCachedFiles(prefix?: string) {
const cacheKey = `files:${prefix || 'all'}`
if (cache.has(cacheKey)) {
const { data, timestamp } = cache.get(cacheKey)
if (Date.now() - timestamp < 60000) { // 1 minute cache
return data
}
}
const files = await storage.list.files({ prefix })
cache.set(cacheKey, { data: files, timestamp: Date.now() })
return files
}
```
## Debugging Tips
### Enable Debug Logging
```typescript
// Check if debug mode is enabled in your config
const config = storage.getConfig()
console.log('Debug mode:', config.provider.debug)
// The storage operations will log detailed information when debug is true
```
### Inspect Configuration
```typescript
// Check your current configuration
const config = storage.getConfig()
console.log('Provider:', config.provider.provider)
console.log('Bucket:', config.provider.bucket)
console.log('Region:', config.provider.region)
// Check provider info
const info = storage.getProviderInfo()
console.log('Provider info:', info)
```
### Test Individual Operations
```typescript
// Test each operation individually
console.log('Testing connection...')
const isHealthy = await storage.validation.connection()
console.log('Connection:', isHealthy ? 'OK' : 'Failed')
console.log('Testing file listing...')
const files = await storage.list.files({ maxResults: 1 })
console.log('Files found:', files.length)
console.log('Testing file existence...')
if (files.length > 0) {
const exists = await storage.validation.exists(files[0].key)
console.log('First file exists:', exists)
}
```
## Getting Help
If you're still experiencing issues:
1. **Check the logs** - Look for detailed error messages in your console
2. **Verify environment variables** - Ensure all required variables are set
3. **Test with minimal configuration** - Start with basic setup and add complexity gradually
4. **Check provider documentation** - Verify your bucket/account settings
5. **Use health check** - Run `storage.validation.connection()` to verify basic connectivity
# Authentication (/docs/guides/security/authentication)
import { Callout } from "fumadocs-ui/components/callout";
import { Card, Cards } from "fumadocs-ui/components/card";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
import { TypeTable } from "fumadocs-ui/components/type-table";
## Authentication & Authorization
Secure your file upload endpoints with robust authentication and authorization middleware.
**Important:** Never expose upload endpoints without proper authentication in
production. Unprotected endpoints can lead to storage abuse and security
vulnerabilities.
## Authentication Patterns
### Better Auth (Recommended)
Modern, type-safe authentication with native TypeScript support:
```typescript
import { s3 } from "@/lib/upload";
import { auth } from "@/lib/auth"; // Better Auth instance
const s3Router = s3.createRouter({
userFiles: s3.image()
.maxFileSize("5MB")
.maxFiles(10)
.middleware(async ({ req, metadata }) => {
const session = await auth.api.getSession({ headers: req.headers });
if (!session?.user?.id) {
throw new Error("Authentication required");
}
return {
...metadata,
userId: session.user.id,
userEmail: session.user.email,
};
}),
});
export const { GET, POST } = s3Router.handlers;
```
### NextAuth.js Integration
```typescript
import { s3 } from "@/lib/upload";
import { getServerSession } from "next-auth";
import { authOptions } from "@/lib/auth";
const s3Router = s3.createRouter({
userFiles: s3.image()
.maxFileSize("5MB")
.maxFiles(10)
.middleware(async ({ req, metadata }) => {
const session = await getServerSession(authOptions);
if (!session?.user?.id) {
throw new Error("Authentication required");
}
return {
...metadata,
userId: session.user.id,
userEmail: session.user.email,
};
}),
});
export const { GET, POST } = s3Router.handlers;
```
### JWT Token Validation
```typescript
import jwt from "jsonwebtoken";
const s3Router = s3.createRouter({
protectedUploads: s3.file()
.maxFileSize("10MB")
.maxFiles(5)
.middleware(async ({ req, metadata }) => {
const token = req.headers.get("authorization")?.replace("Bearer ", "");
if (!token) {
throw new Error("Authorization token required");
}
try {
const payload = jwt.verify(token, process.env.JWT_SECRET!) as any;
return {
...metadata,
userId: payload.sub,
roles: payload.roles || [],
};
} catch (error) {
throw new Error("Invalid or expired token");
}
}),
});
```
### Custom Authentication
```typescript
const s3Router = s3.createRouter({
apiKeyUploads: s3.file()
.maxFileSize("25MB")
.maxFiles(1)
.types(['application/pdf', 'application/msword'])
.middleware(async ({ req, metadata }) => {
const apiKey = req.headers.get("x-api-key");
if (!apiKey) {
throw new Error("API key required");
}
// Validate API key against your database
const client = await validateApiKey(apiKey);
if (!client) {
throw new Error("Invalid API key");
}
return {
...metadata,
clientId: client.id,
plan: client.plan,
quotaUsed: client.quotaUsed,
};
}),
});
```
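The `validateApiKey` call above is application code, not part of Pushduck. A minimal sketch, assuming a hypothetical Prisma-style client in `@/lib/db` with an `apiKey` table keyed by a SHA-256 hash:
```typescript
import { createHash } from "node:crypto";
import { db } from "@/lib/db"; // hypothetical database client

// Hypothetical helper - adapt to your own schema and client
export async function validateApiKey(apiKey: string) {
  // Compare against a stored hash so raw keys never live in the database
  const keyHash = createHash("sha256").update(apiKey).digest("hex");
  const record = await db.apiKey.findUnique({ where: { keyHash } });
  if (!record || record.revokedAt) return null;
  return { id: record.clientId, plan: record.plan, quotaUsed: record.quotaUsed };
}
```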
## Simple Role Checking
For apps with admins vs regular users:
```typescript
const s3Router = s3.createRouter({
adminUploads: s3.file()
.maxFileSize("100MB")
.middleware(async ({ req, metadata }) => {
const { userId, role } = await authenticateUser(req);
if (role !== "admin") {
throw new Error("Admin access required");
}
return { ...metadata, userId, role };
}),
});
```
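Here `authenticateUser` is a stand-in for your own session lookup. A sketch that reuses the JWT approach from above (the `role` claim is an assumption about your token shape):
```typescript
import jwt from "jsonwebtoken";

// Hypothetical helper - resolve the requesting user and their role
async function authenticateUser(req: Request): Promise<{ userId: string; role: string }> {
  const token = req.headers.get("authorization")?.replace("Bearer ", "");
  if (!token) throw new Error("Authentication required");
  const payload = jwt.verify(token, process.env.JWT_SECRET!) as { sub: string; role?: string };
  return { userId: payload.sub, role: payload.role ?? "user" };
}
```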
That's it! For more complex permissions (multi-tenant, resource-based), see [advanced patterns](/docs/api/configuration/server-router#middleware).
***
## Next Steps
* Configure cross-origin requests for your S3 bucket
* 8 essential items to go live safely
* Pass context from UI to server securely
# CORS & ACL Setup (/docs/guides/security/cors-and-acl)
## CORS & ACL Setup
This guide covers Cross-Origin Resource Sharing (CORS) configuration for file uploads and provides an overview of Access Control Lists (ACLs) across different cloud storage providers.
## Table of Contents
* [CORS Configuration](#cors-configuration)
* [Understanding ACLs](#understanding-acls)
* [Provider-Specific Considerations](#provider-specific-considerations)
* [Common Issues & Troubleshooting](#common-issues--troubleshooting)
## CORS Configuration
Cross-Origin Resource Sharing (CORS) is essential for allowing your web application to upload files directly to cloud storage from the browser.
### Why CORS is Required
When uploading files directly from the browser to cloud storage, you're making requests from your domain (e.g., `https://myapp.com`) to a different domain (e.g., `https://mybucket.s3.amazonaws.com`). Browsers block these cross-origin requests by default unless the target server explicitly allows them via CORS headers.
### Basic CORS Configuration
#### AWS S3 CORS Configuration
```json
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET",
"PUT",
"POST",
"DELETE",
"HEAD"
],
"AllowedOrigins": [
"https://yourdomain.com",
"https://www.yourdomain.com"
],
"ExposeHeaders": [
"ETag",
"x-amz-meta-custom-header"
],
"MaxAgeSeconds": 3000
}
]
```
#### Setting CORS via AWS CLI
```bash
# Save the above JSON to cors-config.json
aws s3api put-bucket-cors \
--bucket your-bucket-name \
--cors-configuration file://cors-config.json
```
#### Setting CORS via AWS Console
1. Go to S3 Console → Your Bucket → Permissions
2. Scroll to "Cross-origin resource sharing (CORS)"
3. Click "Edit" and paste your CORS configuration
4. Save changes
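#### Setting CORS Programmatically
You can also manage CORS from code. A sketch using the AWS SDK v3 (`@aws-sdk/client-s3`), mirroring the JSON configuration above:
```typescript
import { S3Client, PutBucketCorsCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "us-east-1" });

// Apply the same rules as the JSON example above
await client.send(new PutBucketCorsCommand({
  Bucket: "your-bucket-name",
  CORSConfiguration: {
    CORSRules: [
      {
        AllowedHeaders: ["*"],
        AllowedMethods: ["GET", "PUT", "POST", "DELETE", "HEAD"],
        AllowedOrigins: ["https://yourdomain.com", "https://www.yourdomain.com"],
        ExposeHeaders: ["ETag", "x-amz-meta-custom-header"],
        MaxAgeSeconds: 3000,
      },
    ],
  },
}));
```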
### Development vs Production CORS
#### Development Configuration (Permissive)
```json
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
"AllowedOrigins": [
"http://localhost:3000",
"http://localhost:3001",
"http://127.0.0.1:3000",
"https://yourdomain.com"
],
"ExposeHeaders": ["ETag"],
"MaxAgeSeconds": 3000
}
]
```
#### Production Configuration (Restrictive)
```json
[
{
"AllowedHeaders": [
"Content-Type",
"Content-MD5",
"Authorization",
"x-amz-date",
"x-amz-content-sha256"
],
"AllowedMethods": ["PUT", "POST"],
"AllowedOrigins": [
"https://yourdomain.com",
"https://www.yourdomain.com"
],
"ExposeHeaders": ["ETag"],
"MaxAgeSeconds": 86400
}
]
```
### Provider-Specific CORS Setup
#### Cloudflare R2
```json
[
{
"AllowedOrigins": ["https://yourdomain.com"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
"AllowedHeaders": ["*"],
"ExposeHeaders": ["ETag"],
"MaxAgeSeconds": 3600
}
]
```
Set via Cloudflare Dashboard:
1. Go to R2 Object Storage → Your Bucket
2. Navigate to Settings → CORS policy
3. Add your CORS rules
#### DigitalOcean Spaces
```json
[
{
"AllowedOrigins": ["https://yourdomain.com"],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
"AllowedHeaders": ["*"],
"ExposeHeaders": ["ETag", "x-amz-meta-*"],
"MaxAgeSeconds": 3600
}
]
```
Set via DigitalOcean Control Panel:
1. Go to Spaces → Your Space → Settings
2. Add CORS configuration
#### MinIO (Self-Hosted)
```bash
# Using MinIO Client (mc)
mc admin config set myminio api cors_allow_origin="https://yourdomain.com"
mc admin service restart myminio
```
Or via MinIO Console:
1. Access MinIO Console → Buckets → Your Bucket
2. Navigate to Anonymous → Access Rules
3. Configure CORS policy
### Advanced CORS Configurations
#### Multiple Environment Setup
```json
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["PUT", "POST"],
"AllowedOrigins": [
"https://yourdomain.com",
"https://staging.yourdomain.com"
],
"ExposeHeaders": ["ETag"],
"MaxAgeSeconds": 86400
},
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET"],
"AllowedOrigins": ["*"],
"MaxAgeSeconds": 3600
}
]
```
#### CDN Integration
When using CloudFront or other CDNs:
```json
[
{
"AllowedHeaders": [
"Origin",
"Access-Control-Request-Method",
"Access-Control-Request-Headers"
],
"AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
"AllowedOrigins": [
"https://yourdomain.com",
"https://d1234567890.cloudfront.net"
],
"ExposeHeaders": ["ETag", "x-amz-version-id"],
"MaxAgeSeconds": 86400
}
]
```
### Testing CORS Configuration
#### Browser Developer Tools
1. Open your web app
2. Try uploading a file
3. Check Network tab for CORS errors
4. Look for preflight OPTIONS requests
#### Command Line Testing
```bash
# Test preflight request
curl -X OPTIONS \
-H "Origin: https://yourdomain.com" \
-H "Access-Control-Request-Method: PUT" \
-H "Access-Control-Request-Headers: Content-Type" \
https://yourbucket.s3.amazonaws.com/
# Should return CORS headers if configured correctly
```
#### Programmatic Testing
```javascript
// Test CORS from browser console
fetch('https://yourbucket.s3.amazonaws.com/', {
method: 'OPTIONS',
headers: {
'Origin': window.location.origin,
'Access-Control-Request-Method': 'PUT'
}
})
.then(response => {
  // If the request is blocked by CORS, fetch rejects and we never get here
  console.log('CORS preflight succeeded:', response.status);
})
.catch(error => {
  console.error('CORS Error:', error);
});
```
## Understanding ACLs
Access Control Lists (ACLs) define who can access your uploaded files and what permissions they have. **Important: ACL implementation varies significantly across providers.**
### What are ACLs?
ACLs are permission sets that determine:
* **Who** can access files (users, groups, public)
* **What** they can do (read, write, delete)
* **How** permissions are inherited
### Common ACL Types
#### Public Access Levels
* **`public-read`**: Anyone can download the file
* **`public-read-write`**: Anyone can download and upload
* **`private`**: Only bucket owner has access
* **`authenticated-read`**: Only authenticated users can read
#### AWS S3 ACL Examples
```typescript
// Using Pushduck with S3 ACLs
const uploadConfig = createUploadConfig({
provider: s3({
bucket: "my-bucket",
region: "us-east-1",
acl: "public-read" // File will be publicly accessible
}),
// ... other config
});
```
### Provider-Specific ACL Differences
#### AWS S3
* **Full ACL Support**: Comprehensive ACL system
* **Canned ACLs**: Predefined permission sets
* **Custom ACLs**: Granular user/group permissions
* **Bucket Policies**: Override ACL settings
```typescript
// S3 supports traditional ACLs
const s3Config = s3({
bucket: "my-bucket",
acl: "public-read", // β
Supported
// Custom ACL with specific permissions
customAcl: {
Owner: { ID: "owner-id", DisplayName: "Owner" },
Grants: [
{
Grantee: { Type: "Group", URI: "http://acs.amazonaws.com/groups/global/AllUsers" },
Permission: "READ"
}
]
}
});
```
#### Cloudflare R2
* **Limited ACL Support**: Basic public/private only
* **No Canned ACLs**: Doesn't support AWS-style ACL names
* **Bucket-Level Permissions**: Access controlled at bucket level
```typescript
// R2 has limited ACL support
const r2Config = cloudflareR2({
bucket: "my-bucket",
// β acl: "public-read" // Not supported
// Access controlled via Cloudflare dashboard or API
});
```
#### DigitalOcean Spaces
* **S3-Compatible ACLs**: Supports most S3 ACL types
* **Public/Private Toggle**: Simple public access control
* **CDN Integration**: Automatic CDN for public files
```typescript
// Spaces supports basic S3 ACLs
const spacesConfig = digitalOceanSpaces({
bucket: "my-space",
acl: "public-read", // β
Supported
region: "nyc3"
});
```
#### MinIO (Self-Hosted)
* **Policy-Based**: Uses bucket policies instead of ACLs
* **No Traditional ACLs**: Custom permission system
* **IAM Integration**: Role-based access control
```typescript
// MinIO uses policies, not ACLs
const minioConfig = minio({
endpoint: "https://minio.yourdomain.com",
bucket: "my-bucket",
// β acl: "public-read" // Not supported
// Access controlled via MinIO policies
});
```
### ACL Best Practices
#### Security Considerations
1. **Default to Private**: Start with `private` ACL
2. **Explicit Public Access**: Only make files public when necessary
3. **Regular Audits**: Review public files periodically
4. **Bucket Policies**: Use bucket policies for complex permissions
#### Implementation Strategy
```typescript
// Environment-based ACL configuration
const getAcl = () => {
if (process.env.NODE_ENV === 'development') {
return 'public-read'; // Easy testing
}
if (process.env.FILE_TYPE === 'profile-images') {
return 'public-read'; // Profile images need public access
}
return 'private'; // Default to private
};
const uploadConfig = createUploadConfig({
provider: s3({
bucket: process.env.S3_BUCKET!,
acl: getAcl()
})
});
```
## Provider-Specific Considerations
### AWS S3
* **Bucket Policies Override ACLs**: Bucket policies take precedence
* **Block Public Access**: May prevent ACL-based public access
* **IAM Permissions**: Required for ACL operations
### Cloudflare R2
* **Dashboard Configuration**: Set public access via Cloudflare dashboard
* **Custom Domains**: Use custom domains for public files
* **Workers Integration**: Use Cloudflare Workers for access control
### DigitalOcean Spaces
* **CDN Integration**: Automatic CDN for public files
* **Subdomain Access**: Public files accessible via subdomain
* **CORS + ACL**: Both required for browser uploads
### MinIO
* **Policy-Only**: No ACL support, use bucket policies
* **Admin Configuration**: Set policies via MinIO admin
* **Custom Authentication**: Integrate with your auth system
## Common Issues & Troubleshooting
### CORS Issues
#### Symptom: "CORS policy" errors in browser console
**Solution:**
1. Check CORS configuration includes your domain
2. Verify all required methods are allowed
3. Ensure preflight requests are handled
```bash
# Debug CORS with curl
curl -X OPTIONS \
-H "Origin: https://yourdomain.com" \
-H "Access-Control-Request-Method: PUT" \
-v https://yourbucket.s3.amazonaws.com/
```
#### Symptom: Uploads work locally but fail in production
**Solution:**
1. Add production domain to CORS origins
2. Check for HTTPS vs HTTP mismatches
3. Verify subdomain configurations
### ACL Issues
#### Symptom: Files uploaded but not accessible
**Solution:**
1. Check if bucket has "Block Public Access" enabled
2. Verify ACL permissions match requirements
3. Review bucket policies for conflicts
#### Symptom: ACL settings ignored
**Solution:**
1. Provider may not support ACLs (R2, MinIO)
2. Bucket policies may override ACL settings
3. Check IAM permissions for ACL operations
### Mixed Issues
#### Symptom: Uploads succeed but files have wrong permissions
**Solution:**
```typescript
// Ensure ACL is set correctly for your provider
const config = createUploadConfig({
provider: s3({
bucket: "my-bucket",
acl: "public-read", // Only for S3-compatible providers
}),
onUploadComplete: async ({ file, url }) => {
// Verify file accessibility
const response = await fetch(url, { method: 'HEAD' });
if (!response.ok) {
console.error('File not publicly accessible:', response.status);
}
}
});
```
### Debug Checklist
* [ ] CORS configuration includes all required origins
* [ ] All HTTP methods needed are allowed
* [ ] Headers match what your client sends
* [ ] ACL settings match provider capabilities
* [ ] Bucket policies don't conflict with ACLs
* [ ] IAM permissions allow required operations
* [ ] Block Public Access settings reviewed
* [ ] CDN/proxy configurations considered
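Several of these checks can be scripted. A sketch combining the storage APIs covered earlier (the import path is an assumption about your project layout):
```typescript
import { storage } from "@/lib/upload"; // assumed location of your storage instance

async function runDiagnostics() {
  // 1. Connectivity: credentials, endpoint, and bucket reachable
  const healthy = await storage.validation.connection();
  console.log("Connection:", healthy ? "OK" : "Failed");

  // 2. Listing: permissions allow listing and the bucket has content
  const files = await storage.list.files({ maxResults: 1 });
  console.log("Sample listing:", files.length, "file(s)");

  // 3. Access: a presigned URL for the first file actually resolves
  if (files.length > 0) {
    const url = await storage.download.presignedUrl(files[0].key, 300);
    const res = await fetch(url, { method: "HEAD" });
    console.log("Presigned URL:", res.ok ? "reachable" : `failed (${res.status})`);
  }
}

runDiagnostics().catch(console.error);
```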
### Provider Support Matrix
| Feature         | AWS S3 | Cloudflare R2 | DigitalOcean Spaces | MinIO       |
| --------------- | ------ | ------------- | ------------------- | ----------- |
| CORS            | ✅ Full | ✅ Full       | ✅ Full             | ✅ Full     |
| Canned ACLs     | ✅ Yes  | ❌ No         | ✅ Limited          | ❌ No       |
| Custom ACLs     | ✅ Yes  | ❌ No         | ❌ No               | ❌ No       |
| Bucket Policies | ✅ Yes  | ✅ Yes        | ❌ No               | ✅ Yes      |
| Public Access   | ✅ Yes  | ✅ Dashboard  | ✅ Yes              | ✅ Policies |
This guide should help you configure CORS properly and understand how ACLs work differently across providers. Remember to test your configuration thoroughly in both development and production environments.