Build a Secure TypeScript File Upload Pipeline with Multer, Sharp, and AWS S3
Learn to build a secure TypeScript file upload pipeline with Multer, Sharp, Zod, and AWS S3 for safer, faster image uploads.
I’ve spent years building file upload systems that break in production. Memory leaks from large files, format mismatches, corrupted uploads, and security holes where users could upload PHP shells. I wanted a solution that combined TypeScript safety with practical performance. That’s why I built this end-to-end pipeline using Node.js, Multer, Sharp, and AWS S3. Let me walk you through it.
First, why do we need type safety for file uploads? Because a single wrong assumption about file size or MIME type can crash your server. In my early projects, I trusted client‑side validation alone. Big mistake. A malicious user can craft a request that bypasses frontend checks. With Zod, we validate at the server boundary before any processing begins.
Let’s set up the project. I start with a standard Express app and install Multer for multipart parsing, Sharp for image manipulation, and the AWS SDK for S3. I also add Zod for schema validation and Prisma for storing metadata.
npm install express multer sharp @aws-sdk/client-s3 zod @prisma/client
npm install -D typescript @types/express ...
Now, define a strict Zod schema for the incoming upload request. This schema ensures that only allowed MIME types and sizes pass through. I also allow optional parameters like target dimensions and quality.
import { z } from 'zod';

// Express.Multer.File is a TypeScript interface, not a runtime class,
// so z.custom() is the right tool here rather than z.instanceof().
export const UploadSchema = z.object({
  file: z
    .custom<Express.Multer.File>((f) => typeof f === 'object' && f !== null && 'mimetype' in f)
    .refine(
      (f) => ['image/jpeg', 'image/png', 'image/webp'].includes(f.mimetype),
      'Unsupported file type'
    ),
  quality: z.coerce.number().min(1).max(100).optional().default(80),
  width: z.coerce.number().positive().optional(),
});
The Multer middleware handles parsing the multipart form data. I use memory storage to avoid writing temporary files to disk. That’s important for containerised deployments.
import multer from 'multer';

const upload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: 10 * 1024 * 1024 }, // 10MB
  fileFilter: (_req, file, cb) => {
    if (['image/jpeg', 'image/png', 'image/webp'].includes(file.mimetype)) {
      cb(null, true);
    } else {
      cb(new Error('Only images are allowed'));
    }
  },
});
Now the interesting part: processing the image with Sharp directly from the buffer. I’ve had cases where I needed thumbnails, watermarked versions, or format conversion. Sharp’s pipeline is perfect.
import sharp from 'sharp';

async function processImage(buffer: Buffer, options: { width?: number; quality?: number }) {
  let pipeline = sharp(buffer);
  if (options.width) {
    // Constrain the width while preserving the aspect ratio
    pipeline = pipeline.resize(options.width, undefined, { fit: 'inside' });
  }
  // Normalise every upload to JPEG at the requested quality
  return pipeline.jpeg({ quality: options.quality ?? 80 }).toBuffer();
}
Notice that we’re not saving the original file – we process and upload directly. This saves bandwidth and storage. But hold on: what if the upload fails after processing? We need a transaction.
I use Prisma to record the upload metadata before sending to S3. If S3 throws, I can delete the record or retry.
import { v4 as uuid } from 'uuid'; // npm install uuid
import { prisma } from './prisma';

// Inside the request handler, once Sharp has produced processedBuffer
const record = await prisma.uploadedFile.create({
  data: {
    originalName: file.originalname,
    mimeType: file.mimetype,
    sizeBytes: processedBuffer.length,
    storageKey: `uploads/${uuid()}`,
    status: 'PROCESSING',
  },
});
Now upload to S3. For browser-to-S3 uploads I generate a presigned URL ahead of time so the client never sees my secret keys, but since this is a server‑to‑server upload we can simply use the SDK directly.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });

await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: record.storageKey,
  Body: processedBuffer,
  ContentType: 'image/jpeg',
  CacheControl: 'max-age=31536000', // cache immutable assets for a year
}));
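Since I mentioned presigned URLs: if you later move to direct browser uploads, that approach is a small addition. Here's a minimal sketch, assuming the @aws-sdk/s3-request-presigner package and a hypothetical createUploadUrl helper; the server signs a short-lived URL, the client PUTs the file to it, and the secret keys never leave the server.

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({ region: 'us-east-1' });

// Return a URL the browser can PUT the file to for the next five minutes
async function createUploadUrl(key: string, contentType: string) {
  const command = new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: key,
    ContentType: contentType,
  });
  return getSignedUrl(s3, command, { expiresIn: 300 });
}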
After successful upload, I update the record status to COMPLETED and store the public URL. If anything fails, I set status FAILED and log the error.
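In code, that bookkeeping looks roughly like the sketch below; the publicUrl field and the URL format are my assumptions, so adjust them to your Prisma schema and bucket configuration.

try {
  await s3.send(new PutObjectCommand({
    Bucket: 'my-bucket',
    Key: record.storageKey,
    Body: processedBuffer,
    ContentType: 'image/jpeg',
  }));
  await prisma.uploadedFile.update({
    where: { id: record.id },
    data: {
      status: 'COMPLETED',
      publicUrl: `https://my-bucket.s3.amazonaws.com/${record.storageKey}`, // assumed field and URL shape
    },
  });
} catch (err) {
  // Mark the row FAILED so a cleanup job or a retry can pick it up later
  await prisma.uploadedFile.update({
    where: { id: record.id },
    data: { status: 'FAILED' },
  });
  console.error('S3 upload failed for', record.storageKey, err);
  throw err;
}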
You might ask: what about large files that exceed memory? Memory storage can blow up your server for 100MB PDFs. For those, I switch to a chunked upload strategy. Use Multer’s diskStorage temporarily and stream to S3 using multipart upload. But for most image uploads, memory is fine.
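When I do need that streaming path, @aws-sdk/lib-storage handles the multipart bookkeeping. A minimal sketch, assuming Multer's diskStorage has already written the file to tmpPath:

import { createReadStream } from 'node:fs';
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

const s3 = new S3Client({ region: 'us-east-1' });

// Stream a large file from disk to S3 as a multipart upload,
// so the whole file never sits in memory at once.
async function streamToS3(tmpPath: string, key: string, contentType: string) {
  const upload = new Upload({
    client: s3,
    params: {
      Bucket: 'my-bucket',
      Key: key,
      Body: createReadStream(tmpPath),
      ContentType: contentType,
    },
    partSize: 5 * 1024 * 1024, // 5MB parts, the S3 minimum
    queueSize: 4,              // upload up to 4 parts in parallel
  });
  await upload.done();
}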
Another problem I solved: generating multiple variants. For an e‑commerce site, I need thumbnail, medium, and large versions. I run Sharp multiple times on the same buffer and upload each variant with a different key.
async function generateVariants(buffer: Buffer) {
  // 'as const' keeps the fit values narrowed to Sharp's accepted literals
  const variants = [
    { name: 'thumb', width: 150, height: 150, fit: 'cover' },
    { name: 'medium', width: 600, height: undefined, fit: 'inside' },
  ] as const;

  const results = await Promise.all(variants.map(async (v) => {
    const processed = await sharp(buffer)
      .resize(v.width, v.height, { fit: v.fit })
      .jpeg({ quality: 80 })
      .toBuffer();
    const key = `variants/${v.name}/${uuid()}.jpg`;
    await s3.send(new PutObjectCommand({ Bucket: 'my-bucket', Key: key, Body: processed }));
    return { variantType: v.name, storageKey: key, sizeBytes: processed.length, format: 'jpeg' };
  }));

  // Store variant metadata in the database
  return results;
}
I also added a watermarking step. Users can pass a watermark flag and a logo buffer. Sharp’s composite method overlays the watermark in the bottom‑right corner.
const watermarked = await sharp(buffer)
.composite([{ input: logoBuffer, gravity: 'southeast' }])
.toBuffer();
Now, you might be thinking: how do I test all this without hitting AWS every time? I wrote a simple in‑memory S3 mock for integration tests using vitest. I can also use s3rver or localstack. But for unit tests, mocking the SDK is easier.
import { vi } from 'vitest';

// Replace the real S3 client with a stub that always resolves
vi.mock('@aws-sdk/client-s3', () => ({
  S3Client: vi.fn(() => ({
    send: vi.fn().mockResolvedValue({ ETag: '"abc123"' }),
  })),
  PutObjectCommand: vi.fn(),
}));
Finally, secure the upload endpoint. I use a middleware that validates a signed token from the client; the token encodes the allowed file size and target folder. I also restrict CORS to the frontend domain. For virus scanning, I can pipe the buffer through ClamAV before processing, but that's another article.
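As a rough illustration of that token check, here's a minimal sketch using the jsonwebtoken package; the UPLOAD_TOKEN_SECRET variable, the x-upload-token header, and the claim names (maxBytes, folder) are assumptions for the example, not fixed parts of the pipeline.

import jwt from 'jsonwebtoken';
import type { Request, Response, NextFunction } from 'express';

// Claims the client's signed upload token is expected to carry (illustrative names)
interface UploadClaims {
  maxBytes: number;
  folder: string;
}

export function requireUploadToken(req: Request, res: Response, next: NextFunction) {
  const token = req.header('x-upload-token');
  if (!token) return res.status(401).json({ error: 'Missing upload token' });

  try {
    const payload = jwt.verify(token, process.env.UPLOAD_TOKEN_SECRET!);
    if (typeof payload === 'string') {
      return res.status(403).json({ error: 'Invalid upload token' });
    }
    // Make the size and folder constraints available to the upload handler downstream
    res.locals.uploadClaims = payload as UploadClaims;
    next();
  } catch {
    res.status(403).json({ error: 'Invalid or expired upload token' });
  }
}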
After months of using this system, I never had a corrupted upload or a memory explosion. Type safety with Zod caught 90% of mistakes before they reached the database. Sharp gave me pixel‑perfect control. And S3’s durability means I sleep well at night.
If you’ve been wrestling with file uploads, start with this foundation. Add your own tweaks – maybe convert to AVIF, store metadata in DynamoDB, or add progress events with Socket.IO. The pattern scales.
If this walkthrough saved you time, hit the like button, share it with a friend who still uses base64 for images, and leave a comment about your own upload war stories. I read every one.