Production-Ready Direct-to-S3 Uploads with Presigned URLs in Node.js and TypeScript
Learn direct-to-S3 uploads with presigned URLs in Node.js and TypeScript to handle large files securely, reduce server load, and scale.
Have you ever built a file upload feature that worked perfectly in development, only to crash in production when a user uploaded a 2GB video? I’ve been there. That painful moment when your server’s memory spikes, the request times out, and you realize your beautiful Express route that streams the file to S3 is actually a ticking bomb. This is why I want to walk you through a better way – direct-to-S3 uploads using presigned URLs, with full type safety and a production-ready architecture.
The core idea is simple: your backend never touches the file data. Instead, it generates a temporary, signed URL that gives the client permission to upload directly to your S3 bucket. No proxying, no buffering, no double bandwidth costs. But doing this correctly requires careful orchestration – especially when you need to handle large files, enforce file type restrictions, and keep your IAM policies locked down. Let me show you how to build it from scratch.
When you use a traditional upload proxy, your server becomes a middleman that reads the incoming stream and writes it to S3. For small files it’s fine. But when a marketing team uploads a 4K promotional video, your server’s CPU jumps, memory usage grows, and if the connection drops halfway, you have to manage retries yourself. Presigned URLs turn that on its head. Your server issues a short-lived token – usually valid for a few minutes – and the client uses that token to PUT the file directly to S3. Your server only needs to handle the metadata and the URL generation. S3 handles the heavy lifting.
Why does this matter? Because at scale, the proxy approach inflates your server costs and drags down reliability. Dropbox, Notion, and Vercel all use this pattern. The architecture is straightforward: the client requests an upload URL from your API, your backend validates the request (file type, size, user permissions), generates a presigned PUT URL, and returns it. The client then uploads directly to S3 using that URL. After the upload completes, S3 can trigger a Lambda function to process the file – perhaps resize an image, scan for viruses, or update your database.
Let’s get our hands dirty. First, you need an S3 bucket with no public access. I use the AWS CLI or SDK to create it, and I explicitly turn on Block Public Access so that no ACL or bucket policy can ever expose objects; this is critical for security. Your EC2 instance or Lambda function should reach the bucket through an IAM role rather than long-lived access keys. For local development, you can use access keys stored in environment variables, but create an IAM user with minimal permissions: just s3:PutObject and s3:GetObject on the specific prefix your application uses.
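If you prefer the SDK over the CLI, the setup is only a couple of calls. Here is a minimal sketch; the setupBucket helper is my own, and note that outside us-east-1 the CreateBucketCommand also needs a CreateBucketConfiguration with a LocationConstraint:

import {
  S3Client,
  CreateBucketCommand,
  PutPublicAccessBlockCommand,
} from "@aws-sdk/client-s3";

const s3Setup = new S3Client({ region: process.env.AWS_REGION });

// One-time setup: create the bucket, then block every form of public access.
async function setupBucket(bucket: string) {
  await s3Setup.send(new CreateBucketCommand({ Bucket: bucket }));
  await s3Setup.send(
    new PutPublicAccessBlockCommand({
      Bucket: bucket,
      PublicAccessBlockConfiguration: {
        BlockPublicAcls: true,
        IgnorePublicAcls: true,
        BlockPublicPolicy: true,
        RestrictPublicBuckets: true,
      },
    })
  );
}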
Now, the backend setup. I use Node.js with TypeScript and the AWS SDK v3 because it’s modular – you only install @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner. The presigner function is straightforward:
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { v4 as uuid } from "uuid";
const s3 = new S3Client({ region: process.env.AWS_REGION });
async function generateUploadUrl(
  userId: string,
  fileName: string,
  fileType: string
): Promise<{ uploadUrl: string; fileKey: string }> {
  const ext = fileName.split(".").pop();
  const fileKey = `uploads/${userId}/${uuid()}.${ext}`;

  const command = new PutObjectCommand({
    Bucket: process.env.S3_BUCKET_NAME,
    Key: fileKey,
    ContentType: fileType,
  });

  const uploadUrl = await getSignedUrl(s3, command, {
    expiresIn: 3600, // 1 hour
  });

  return { uploadUrl, fileKey };
}
Notice that I set the ContentType in the command. That header becomes part of the signature, so the client has to send the exact same Content-Type when uploading; if it sends anything else, S3 rejects the request with a 403. You must validate the file type and size on the backend before generating the URL. I use Zod to parse the incoming request:
import { z } from "zod";
const uploadRequestSchema = z.object({
  fileName: z.string().min(1).max(255),
  fileType: z.enum(["image/jpeg", "image/png", "image/webp", "application/pdf"]),
  fileSize: z.number().max(100 * 1024 * 1024), // 100 MB
});
If the validation fails, I return a 400. This keeps your bucket clean of unwanted files.
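Here’s roughly how the route that ties the schema and the presigner together can look. A sketch, assuming Express and that requireAuth is whatever authentication middleware you already have, stashing the user’s ID in res.locals:

import express from "express";
import { requireAuth } from "./auth"; // hypothetical middleware that sets res.locals.userId

const app = express();
app.use(express.json());

app.post("/api/uploads", requireAuth, async (req, res) => {
  // Reject anything that doesn't match the schema before touching S3.
  const parsed = uploadRequestSchema.safeParse(req.body);
  if (!parsed.success) {
    return res.status(400).json({ error: parsed.error.flatten() });
  }

  const { fileName, fileType } = parsed.data;
  const { uploadUrl, fileKey } = await generateUploadUrl(
    res.locals.userId,
    fileName,
    fileType
  );

  res.json({ uploadUrl, fileKey });
});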
But what happens when a user uploads a file larger than 100MB and the connection drops? The presigned URL is still valid, so they can retry – but they have to upload the whole thing again. For large files, you should use multipart uploads. AWS lets you upload parts in parallel and then complete them as a single object. The @aws-sdk/lib-storage package provides a higher-level Upload class that handles the part splitting and parallel uploads for you, but it talks to S3 with SDK credentials rather than presigned URLs. If you want to stay in the presigned-URL model, or simply want full control, you can orchestrate multipart uploads from your backend.
Here’s a simplified multipart flow: your backend initiates a multipart upload and gets an UploadId. Then it generates presigned URLs for each part (e.g., 5MB each). The client uploads each part in parallel, and after all parts are uploaded, the backend sends a complete multipart upload request. This approach works well for very large files but adds complexity. For most use cases, a simple presigned URL with a reasonable timeout is sufficient.
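If you do go the backend-orchestrated route, the moving pieces look roughly like this. It’s a sketch that reuses the s3 client and getSignedUrl import from earlier; the three helper functions and their names are mine:

import {
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const Bucket = process.env.S3_BUCKET_NAME;

// 1. Start the multipart upload and hold on to the UploadId.
async function startMultipartUpload(fileKey: string, fileType: string) {
  const { UploadId } = await s3.send(
    new CreateMultipartUploadCommand({ Bucket, Key: fileKey, ContentType: fileType })
  );
  return UploadId!;
}

// 2. Presign one URL per part; the client PUTs each chunk to its own URL
//    and records the ETag from the response headers.
async function presignPartUrl(fileKey: string, uploadId: string, partNumber: number) {
  return getSignedUrl(
    s3,
    new UploadPartCommand({ Bucket, Key: fileKey, UploadId: uploadId, PartNumber: partNumber }),
    { expiresIn: 900 } // 15 minutes per part
  );
}

// 3. Once every part is uploaded, stitch them together.
async function completeMultipartUpload(
  fileKey: string,
  uploadId: string,
  parts: { ETag: string; PartNumber: number }[]
) {
  await s3.send(
    new CompleteMultipartUploadCommand({
      Bucket,
      Key: fileKey,
      UploadId: uploadId,
      MultipartUpload: { Parts: parts },
    })
  );
}

One gotcha if the parts are uploaded from a browser: the client can only read each part’s ETag response header if the bucket’s CORS configuration exposes it (ExposeHeaders including ETag).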
On the frontend, you send the PUT request directly to S3. One prerequisite: because the browser is now talking to S3 rather than your API, the bucket needs a CORS configuration that allows PUT requests from your origin. A plain fetch with the File as the body works fine, but fetch still doesn’t expose upload progress events, so if you want a progress bar, use XMLHttpRequest. I prefer XMLHttpRequest for exactly that reason:
async function uploadFile(file: File, uploadUrl: string): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open("PUT", uploadUrl, true);
    xhr.setRequestHeader("Content-Type", file.type);

    // Report progress while the browser streams the file to S3.
    xhr.upload.onprogress = (e) => {
      if (e.lengthComputable) {
        const progress = (e.loaded / e.total) * 100;
        updateProgressBar(progress);
      }
    };

    xhr.onload = () => {
      if (xhr.status === 200) resolve();
      else reject(new Error(`Upload failed with status ${xhr.status}`));
    };
    xhr.onerror = () => reject(new Error("Network error"));

    xhr.send(file);
  });
}
After the upload finishes, you need to notify your backend that the file is ready. You can do this by calling an API endpoint with the fileKey. Alternatively, let S3 Event Notifications trigger a Lambda function automatically. I prefer the event-driven approach because it decouples the upload from post-processing. You configure the bucket to send events to an SQS queue or directly to Lambda. The Lambda can then access the file, run validation, create database records, or even transform the file (like image resizing).
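To make that concrete, here’s a bare-bones sketch of such a Lambda handler, assuming the @types/aws-lambda type definitions; the actual post-processing is left as a placeholder:

import type { S3Event } from "aws-lambda";

export const handler = async (event: S3Event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // Keys arrive URL-encoded (spaces become "+"), so decode before using them.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    const size = record.s3.object.size;

    // Placeholder for the real work: validate the object, create a database
    // record, kick off an image resize, and so on.
    console.log(`Upload finished: s3://${bucket}/${key} (${size} bytes)`);
  }
};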
One question you might have: how do you ensure that only authenticated users can upload? The presigned URL generation should be protected by your authentication middleware. If a user is logged in, you generate the URL scoped to their user ID path. That way, even if someone steals the URL, they can only upload to that user’s folder. Additionally, you can set a very short expiration – 60 seconds is often enough for the client to initiate the upload. And always enable S3 server-side encryption.
Security goes deeper than permissions. Never trust the client to tell you the file type or size. Always validate on the backend, and if possible, also in the Lambda after upload. Use IAM policies that restrict s3:PutObject to only the prefix your application uses, and don’t grant s3:ListBucket or broad s3:GetObject on the upload prefix, so users can’t enumerate or read each other’s files. You can also add a bucket policy that requires the x-amz-server-side-encryption header.
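As an example of that last point, here’s a sketch of a bucket policy, applied with PutBucketPolicyCommand, that denies any upload to the uploads/ prefix that doesn’t request SSE-S3 encryption; the statement name and prefix are my own choices:

import { PutBucketPolicyCommand } from "@aws-sdk/client-s3";

const bucketName = process.env.S3_BUCKET_NAME;

// Deny any PutObject under uploads/ that doesn't ask for SSE-S3 encryption.
const policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "DenyUnencryptedUploads",
      Effect: "Deny",
      Principal: "*",
      Action: "s3:PutObject",
      Resource: `arn:aws:s3:::${bucketName}/uploads/*`,
      Condition: {
        StringNotEquals: { "s3:x-amz-server-side-encryption": "AES256" },
      },
    },
  ],
};

await s3.send(
  new PutBucketPolicyCommand({ Bucket: bucketName, Policy: JSON.stringify(policy) })
);

If you adopt this, you’ll likely also want to set ServerSideEncryption: "AES256" on the PutObjectCommand you presign, so the signed request itself carries that value.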
Now, how do you test this pipeline end-to-end? I write integration tests that hit the backend, get a presigned URL, upload a dummy file using curl or fetch, then check that the object exists in S3. For multipart uploads, I simulate part uploads with random data. I also test edge cases: what if the presigned URL expires before the upload completes? The client gets a 403 and must request a new URL. What if the file exceeds the size limit? My validation rejects it before generating the URL.
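Here’s the shape of that happy-path test, assuming Vitest or Jest globals, Node 18+ for the built-in fetch, and the generateUploadUrl function and s3 client from earlier:

import { HeadObjectCommand } from "@aws-sdk/client-s3";

// Presign, upload a dummy file, then verify the object actually landed in S3.
test("uploads a file through a presigned URL", async () => {
  const { uploadUrl, fileKey } = await generateUploadUrl(
    "test-user",
    "hello.png",
    "image/png"
  );

  const res = await fetch(uploadUrl, {
    method: "PUT",
    headers: { "Content-Type": "image/png" },
    body: Buffer.from("not really a png, but good enough for the test"),
  });
  expect(res.status).toBe(200);

  // HeadObject throws if the key doesn't exist, so reaching this point means success.
  const head = await s3.send(
    new HeadObjectCommand({ Bucket: process.env.S3_BUCKET_NAME, Key: fileKey })
  );
  expect(head.ContentLength).toBeDefined();
});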
A common pitfall is forgetting to handle the Content-Type correctly on the presigned URL. I’ve seen errors where the client sends a different content type than what was signed, and S3 returns a 403. The solution is either to omit the ContentType when signing, so the header isn’t part of the signature, or to enforce it on both sides. I prefer to enforce it, because it guarantees the object that lands in S3 carries exactly the content type your backend validated.
Another mistake is exposing the presigned URL too early. Always generate it right before the upload, not several minutes beforehand. The URL should be short-lived. If your frontend needs time to prepare the file, do that work first and only fetch the presigned URL at the moment the user actually starts the upload.
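In practice that just means kicking everything off from the click handler. A sketch, assuming the /api/uploads route from earlier and the uploadFile helper above:

async function handleUploadClick(file: File) {
  // Ask the backend for a fresh presigned URL only once the user hits "Upload".
  const res = await fetch("/api/uploads", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      fileName: file.name,
      fileType: file.type,
      fileSize: file.size,
    }),
  });
  if (!res.ok) throw new Error("Could not get an upload URL");

  const { uploadUrl } = await res.json();
  await uploadFile(file, uploadUrl); // the XMLHttpRequest helper from earlier
}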
Let’s talk about scaling. With presigned URLs, your server is only handling lightweight API calls. You can easily handle thousands of upload requests per second because you’re not moving bytes. The bottleneck shifts to S3’s throughput, which is massive. You still need to consider rate limiting on your API to prevent abuse, but overall this pattern is extremely efficient.
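For the rate-limiting piece, something as small as express-rate-limit in front of the presign route goes a long way. A sketch, assuming the Express app from earlier:

import rateLimit from "express-rate-limit";

// Cap how many presigned URLs a single client can mint per minute.
const uploadLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 30, // 30 presign requests per IP per minute
});

app.use("/api/uploads", uploadLimiter);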
In conclusion, building a type-safe file upload system with presigned URLs is not just about following a tutorial – it’s about designing for failure. Your server should never be the data path. By moving the heavy lifting to S3, you reduce costs, improve reliability, and simplify your codebase. Start with basic presigned URLs, add validation, then layer on multipart for large files and event-driven processing for enrichment.
If this article helped you understand the flow, give it a thumbs up, share it with your team, and drop a comment about your own upload horror stories – I’d love to hear how you solved them.