I remember the first time I tried to handle file uploads in a Node.js application. I thought it would be simple: accept a multipart form, write the file to disk, and move on. But then the production server started timing out under load, the disk filled up with user avatars, and I discovered that serving static files from the same process that handles API requests is a terrible idea. That is when I learned about AWS S3 presigned URLs. They solved the core problem: keeping the application server out of the data transfer path.
Let me walk you through building a type-safe, production-grade upload system using Node.js, TypeScript, and S3 presigned URLs. I will focus on the parts that actually matter – security, performance, and code that you can use tomorrow.
Why Not Just Send Everything Through Your Server?
Think about what happens when a client uploads a 200 MB video file to your Express app. Your server has to receive the entire stream, buffer or spool it, and then forward it to S3. While that happens, memory, sockets, and connection slots are tied up, and other requests suffer. Plus, you are paying for bandwidth twice – once to your server, once to AWS. A presigned URL flips the model: the client gets a temporary, signed link that allows a PUT request directly to S3. Your server only handles the metadata and the signature generation.
But here is the catch: you must validate everything before issuing the presigned URL. If you do not check file type, size, or user permissions, anyone can upload anything. That is where our type-safe contracts and Zod validation come in.
Setting Up the Shared Type Contracts
The biggest lesson I learned from building distributed systems is that your server and client must agree on shapes. If the server expects {fileName: string} but the client sends {name: string}, things break silently. I use a shared package with TypeScript types and Zod schemas that both ends import.
// packages/shared/src/upload.types.ts
import { z } from 'zod';

export const PresignedUrlRequestSchema = z.object({
  fileName: z.string().min(1).max(255),
  fileType: z.enum([
    'image/jpeg', 'image/png', 'image/webp',
    'application/pdf', 'video/mp4'
  ]),
  fileSize: z.number().int().positive().max(10_000_000), // 10 MB
  folder: z.string().regex(/^[a-z0-9-]+$/),
  checksum: z.string().length(64) // SHA-256 hex
});

export type PresignedUrlRequest = z.infer<typeof PresignedUrlRequestSchema>;
I also define response types and a status enum. This single source of truth prevents mismatched APIs. Every time I change the schema, TypeScript yells at both ends until they match. It sounds trivial, but I have wasted days debugging “upload succeeded but server says it didn’t” because of a missing field.
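For reference, here is a sketch of what those might look like. The response fields mirror what the server endpoint returns later in this article; the exact status values are illustrative.

// packages/shared/src/upload.types.ts (continued) – illustrative sketch
export const PresignedUrlResponseSchema = z.object({
  uploadId: z.string().uuid(),
  presignedUrl: z.string().url(),
  s3Key: z.string(),
  expiresAt: z.number().int() // Unix seconds
});

export type PresignedUrlResponse = z.infer<typeof PresignedUrlResponseSchema>;

// Upload lifecycle states; the exact values are up to you
export const UploadStatusSchema = z.enum(['pending', 'uploading', 'completed', 'failed']);
export type UploadStatus = z.infer<typeof UploadStatusSchema>;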
Server: Generate the Presigned URL
On the server side, I expose a single endpoint that accepts the validated request and returns a signed URL. I use the AWS SDK v3, which is modular – I only import the commands I need.
// packages/server/src/routes/upload.ts
import { Router } from 'express';
import { randomUUID } from 'node:crypto';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { PresignedUrlRequestSchema } from 'shared/src/upload.types';

const router = Router();
const s3 = new S3Client({ region: process.env.AWS_REGION });

router.post('/request-upload', async (req, res) => {
  // 1. Validate input
  const parseResult = PresignedUrlRequestSchema.safeParse(req.body);
  if (!parseResult.success) {
    return res.status(400).json({ error: parseResult.error.issues });
  }
  const { fileName, fileType, fileSize, folder } = parseResult.data;

  // 2. (Optional) check user permissions, rate limit, etc.

  // 3. Generate a unique key
  const key = `${folder}/${Date.now()}-${fileName}`;

  // 4. Create command with restrictions
  const command = new PutObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME,
    Key: key,
    ContentType: fileType,
    ContentLength: fileSize,
    Metadata: {
      originalName: fileName,
      uploadedBy: req.user?.id || 'anonymous' // assumes auth middleware populates req.user
    }
  });

  // 5. Sign URL with 5-minute expiry
  const presignedUrl = await getSignedUrl(s3, command, { expiresIn: 300 });

  // 6. Store upload session in database (optional)

  res.json({
    uploadId: randomUUID(),
    presignedUrl,
    s3Key: key,
    expiresAt: Math.floor(Date.now() / 1000) + 300
  });
});

export default router;
Notice that I enforce ContentType and ContentLength inside the PutObjectCommand. When the client actually PUTs the file using the presigned URL, S3 checks that the headers match. If they try to upload a video file with a PDF content type, S3 rejects it. That is your second layer of defense.
But what about really large files? A single presigned PUT handles objects up to 5 GB, but for big uploads (AWS recommends multipart for anything over roughly 100 MB) you should use S3 multipart upload. The client sends each part independently, and you provide a presigned URL for each part. That is a more advanced scenario, covered below, but the pattern is similar: validate, sign each part, and let the client control the parallelism.
Client: Upload Directly to S3
On the frontend, I use the Fetch API (or axios) to PUT the file to the presigned URL. The key point is that I do not send the file to my own server at all.
// packages/client/src/hooks/useUpload.ts
async function uploadFile(file: File, presignedUrl: string) {
  const response = await fetch(presignedUrl, {
    method: 'PUT',
    body: file,
    headers: {
      // The browser sets Content-Length automatically from the File body;
      // Content-Type must match the value the URL was signed with.
      'Content-Type': file.type
    }
  });
  if (!response.ok) {
    throw new Error(`Upload failed: ${response.statusText}`);
  }
  // Upload done – now notify your server
}
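To see how the two halves connect, here is a sketch of the full client flow, assuming the router above is mounted at /api/upload and reusing the uploadFile helper. The checksum is computed with the Web Crypto API because the shared schema requires it; the route path is an assumption about your setup.

// Sketch: request a presigned URL, then PUT the file straight to S3
async function requestAndUpload(file: File, folder: string) {
  // Compute the SHA-256 checksum the shared schema expects (hex-encoded)
  const digest = await crypto.subtle.digest('SHA-256', await file.arrayBuffer());
  const checksum = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');

  // Step 1: ask the server for a presigned URL
  const res = await fetch('/api/upload/request-upload', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      fileName: file.name,
      fileType: file.type,
      fileSize: file.size,
      folder,
      checksum
    })
  });
  if (!res.ok) throw new Error('Could not get presigned URL');
  const { presignedUrl, s3Key } = await res.json();

  // Step 2: PUT the file directly to S3
  await uploadFile(file, presignedUrl);
  return s3Key;
}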
I can also track upload progress. The Fetch API does not currently expose upload progress, but the browser's XMLHttpRequest does, through its upload.onprogress event. I use that to update a progress bar, and the call to my own server only happens after the S3 PUT completes.
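A minimal sketch of that XHR-based variant; the onProgress callback is a hypothetical hook into your UI.

// Upload with progress reporting via XMLHttpRequest
function uploadWithProgress(file: File, presignedUrl: string, onProgress: (pct: number) => void) {
  return new Promise<void>((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open('PUT', presignedUrl);
    xhr.setRequestHeader('Content-Type', file.type);
    xhr.upload.onprogress = (event) => {
      if (event.lengthComputable) {
        onProgress(Math.round((event.loaded / event.total) * 100));
      }
    };
    xhr.onload = () =>
      xhr.status >= 200 && xhr.status < 300
        ? resolve()
        : reject(new Error(`Upload failed: ${xhr.status}`));
    xhr.onerror = () => reject(new Error('Network error during upload'));
    xhr.send(file);
  });
}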
But why is it important to notify the server afterward? Because the server needs to update its database, maybe generate thumbnails, or trigger a processing pipeline. S3 can send an event notification via SNS or Lambda, but often you want immediate confirmation. So I add a second endpoint that confirms the upload and returns a public URL.
Handling Large Files with Multipart Uploads
Have you ever tried uploading a 1 GB video with a single presigned URL? It works, but if the connection drops, you have to restart the whole thing. Multipart uploads solve that by splitting the file into parts (each at least 5 MB, except the last) and uploading them independently, even in parallel.
The flow changes: first, the client requests to initiate a multipart upload. The server returns an UploadId and a list of presigned URLs for each part. The client uploads each part in any order, then sends a complete request listing the ETags returned by S3.
// Server: Initiate multipart upload (key, fileType, and totalParts come from
// the validated request, just like in the single-PUT endpoint above)
import { CreateMultipartUploadCommand, UploadPartCommand } from '@aws-sdk/client-s3';

const multipartCommand = new CreateMultipartUploadCommand({
  Bucket: process.env.S3_BUCKET_NAME,
  Key: key,
  ContentType: fileType
});
const { UploadId } = await s3.send(multipartCommand);

// Now generate presigned URLs for each part
const urls: string[] = [];
for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
  const partCommand = new UploadPartCommand({
    Bucket: process.env.S3_BUCKET_NAME,
    Key: key,
    PartNumber: partNumber,
    UploadId
  });
  const partUrl = await getSignedUrl(s3, partCommand, { expiresIn: 3600 });
  urls.push(partUrl);
}
The client then uploads each part, keeping track of ETags. At the end, it calls your server with the list of completed parts, and the server calls CompleteMultipartUploadCommand. This pattern is resilient and fast.
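For completeness, here is a sketch of that final server-side step, reusing s3, key, and UploadId from above; parts is the ETag/PartNumber list the client reports back.

// Server: complete the multipart upload once the client reports all parts
import { CompleteMultipartUploadCommand } from '@aws-sdk/client-s3';

// parts: Array<{ ETag: string; PartNumber: number }> collected by the client
await s3.send(new CompleteMultipartUploadCommand({
  Bucket: process.env.S3_BUCKET_NAME,
  Key: key,
  UploadId,
  MultipartUpload: { Parts: parts }
}));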
Validation and Security Considerations
It is tempting to trust the presigned URL once it is issued. But remember: the presigned URL can be used by anyone who intercepts it. So keep the expiry short (5 minutes is usually enough). Also, do not generate URLs for sensitive folders without checking user ownership.
Another trick I use: I embed the file's SHA-256 checksum in the request and verify it on the server after the upload. The client tells me the checksum up front, and once the object lands in S3 I compute the hash of the stored object and compare. This catches corruption, and it also lets me detect a sneaky attacker who managed to upload a different file under the same key.
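Here is a minimal sketch of that verification, assuming a hypothetical verifyChecksum helper that streams the stored object through Node's crypto module and compares it with the hex checksum from the original request.

// Sketch: verify the stored object matches the checksum the client declared
import { createHash } from 'node:crypto';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import type { Readable } from 'node:stream';

async function verifyChecksum(s3: S3Client, bucket: string, key: string, expectedHex: string) {
  const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
  if (!Body) throw new Error('Object not found');
  const hash = createHash('sha256');
  for await (const chunk of Body as Readable) {
    hash.update(chunk);
  }
  return hash.digest('hex') === expectedHex;
}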
You might ask: what stops a user from uploading a 10 GB file through a presigned URL that only allowed 10 MB? The ContentLength restriction set on the PutObjectCommand is enforced by S3: if they send a larger file, S3 rejects the request with a 403. Always set ContentLength on the signed command, not just as a request header.
Type Safety End to End
I cannot stress enough how much shared types reduce errors. Every time I change the upload request structure – say, adding a tags field – TypeScript forces me to update both the server validation and the client UI. No runtime surprises. The Zod schema on the server also acts as documentation.
Here is a small but critical touch: I use z.input and z.output to differentiate between what the client sends and what the server stores. For example, the client sends fileSize as a number, but after validation I might convert it to a BigInt for database storage. The type system keeps me honest.
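A small sketch of that split, building on the shared schema from earlier; StoredUploadSchema and the BigInt transform are illustrative names, not part of the package shown above.

// packages/shared/src/upload.types.ts (continued) – illustrative sketch
export const StoredUploadSchema = PresignedUrlRequestSchema.extend({
  // The client sends a plain number; after parsing, it becomes a bigint
  fileSize: z.number().int().positive().transform((n) => BigInt(n))
});

export type StoredUploadInput = z.input<typeof StoredUploadSchema>;   // fileSize: number (what the client sends)
export type StoredUploadOutput = z.output<typeof StoredUploadSchema>; // fileSize: bigint (what the server stores)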
Handling Post-Upload Processing
After the file lands in S3, you often need to process it – resize images, scan for viruses, extract metadata. The cleanest way is to have your server receive a notification that the upload is done, instead of polling S3. Use S3 event notifications to trigger an SQS queue or a Lambda function. But if you want simplicity, have the client call a “confirm upload” endpoint after the PUT succeeds.
// Client after successful PUT
await fetch('/api/upload/confirm', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ s3Key, originalName: file.name })
});
Your server then queries S3 for the object metadata and updates the database. That is a synchronous pattern that works well for small teams.
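A minimal sketch of that confirm endpoint, reusing the router and s3 client from the upload route earlier; the commented-out saveUploadRecord call is a placeholder for your own persistence layer.

// Server: confirm the upload by checking the object's metadata in S3
import { HeadObjectCommand } from '@aws-sdk/client-s3';

router.post('/confirm', async (req, res) => {
  const { s3Key, originalName } = req.body;
  // Verify the object actually exists and read its metadata
  const head = await s3.send(new HeadObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME,
    Key: s3Key
  }));
  // await saveUploadRecord({ s3Key, originalName, size: head.ContentLength });
  res.json({
    s3Key,
    originalName,
    size: head.ContentLength,
    contentType: head.ContentType
  });
});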
Conclusion
I hope this walkthrough has given you a practical foundation for building secure, scalable file uploads with Node.js and S3. The key takeaways are: keep your server out of the data path, validate everything before signing, enforce constraints through presigned URL parameters, and share types between client and server to avoid silent bugs.
If you found this useful, please like this article, share it with your team, and leave a comment with your biggest file upload challenge. I read every comment and reply with solutions. Let me know what you build!