How to Build Secure, Resumable S3 File Uploads with Presigned URLs

Learn to build secure, scalable S3 file uploads with presigned URLs, multipart uploads, and async processing for a better UX.

Have you ever watched a file upload fail at 99%? I have. It happened during a critical client demo, with a spreadsheet that took twenty minutes to process. The progress bar filled, then vanished. No error message, no retry option—just silence and a room full of disappointed faces. That moment sparked a months-long effort to build something better. Not just a working upload, but a resilient, predictable, and secure system. This is what I learned.

The common approach is to send a file directly to your server, which then forwards it to storage like AWS S3. It works for tiny files, but it’s a dead end. Your server becomes a traffic bottleneck, consuming memory and bandwidth. A large file can tie up resources, blocking other requests. What if you could let the user’s browser talk directly to S3, while your server acts only as a trusted gatekeeper? This changes everything.

Here is the core idea. Your backend validates the request and generates a special, time-limited URL. This URL grants the frontend permission to upload one specific file to one specific location in your S3 bucket. Your server never sees the file bytes. It handles the paperwork—authentication, validation, and database records—while S3 handles the heavy lifting of receiving the data. This is called using presigned URLs.

But how do you ensure only valid files are allowed? You cannot check the file itself before it’s uploaded. The answer is to validate everything you can check first. The frontend must provide the file’s name, type, and size. This metadata is your first line of defense. I use Zod, a TypeScript library, to define a strict contract for this data.

```typescript
import { z } from 'zod';

const FileMetadataSchema = z.object({
  filename: z.string().min(1).max(200),
  mimeType: z.string().regex(/^image\/(jpeg|png|gif)$/),
  sizeBytes: z.number().int().positive().max(10_000_000), // 10MB max
});
```

This simple schema blocks invalid types and enforces a size limit. Zod validates the payload at runtime and infers the matching TypeScript type, so the rest of your code works with a known shape. If the data doesn’t match this shape, the request fails instantly. Why trust raw input when you can validate it with a few lines of code?
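To make the contract concrete, here is a dependency-free sketch of the same checks (the function name is illustrative; in practice `FileMetadataSchema.safeParse` does this for you):

```javascript
// Plain-JS mirror of FileMetadataSchema: returns a list of violations.
const MAX_SIZE_BYTES = 10_000_000; // 10MB, matching the schema
const ALLOWED_MIME = /^image\/(jpeg|png|gif)$/;

function validateFileMetadata({ filename, mimeType, sizeBytes }) {
  const errors = [];
  if (typeof filename !== "string" || filename.length < 1 || filename.length > 200) {
    errors.push("filename must be 1-200 characters");
  }
  if (typeof mimeType !== "string" || !ALLOWED_MIME.test(mimeType)) {
    errors.push("mimeType must be image/jpeg, image/png, or image/gif");
  }
  if (!Number.isInteger(sizeBytes) || sizeBytes <= 0 || sizeBytes > MAX_SIZE_BYTES) {
    errors.push("sizeBytes must be a positive integer up to 10MB");
  }
  return errors; // empty array means the metadata passed
}
```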

Now, let’s create the presigned URL. After validating the metadata, your server talks to AWS.

```javascript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3Client = new S3Client({ region: "us-east-1" });

const generateUploadUrl = async (fileName, fileType) => {
  const command = new PutObjectCommand({
    Bucket: "your-secure-bucket",
    // Prefix with a timestamp to avoid collisions. fileName should already
    // have passed the metadata validation above before it reaches this point.
    Key: `uploads/${Date.now()}_${fileName}`,
    ContentType: fileType,
  });

  // The signed URL is only valid for one hour (3600 seconds).
  const url = await getSignedUrl(s3Client, command, { expiresIn: 3600 });
  return url; // This URL is sent to the frontend.
};
```

The frontend receives this URL and can use a standard fetch PUT request to upload the file binary directly. Your server’s job is done in milliseconds. Have you considered what stops someone from uploading a massive file if they bypass your frontend? A plain presigned PUT does not enforce the size you validated, but S3 can help with that too: presigned POST policies support a content-length-range condition, and bucket policies add another guardrail.
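The upload itself is then a single request from the browser. A minimal sketch (the helper name `uploadToPresignedUrl` is illustrative; the Content-Type must match what the URL was signed with):

```javascript
// PUT the raw file bytes straight to S3 via the presigned URL.
async function uploadToPresignedUrl(url, file, contentType) {
  const response = await fetch(url, {
    method: "PUT",
    headers: { "Content-Type": contentType }, // must match the signed ContentType
    body: file, // a File, Blob, or byte buffer; no FormData wrapper
  });
  if (!response.ok) {
    throw new Error(`Upload failed with status ${response.status}`);
  }
  return response;
}
```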

Security is not an afterthought. A presigned URL is powerful; it grants temporary permission. You must scope this permission tightly. I define my S3 bucket policy to only accept uploads with specific conditions, like a mandatory encryption header. This adds a second layer of security at the cloud infrastructure level, independent of your application code.
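As one example of such a condition, a bucket policy can deny any PutObject request that omits the server-side encryption header. A sketch, reusing the your-secure-bucket name from earlier (newer buckets encrypt by default, so treat this as a belt-and-suspenders layer):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-secure-bucket/*",
      "Condition": {
        "StringNotEquals": { "s3:x-amz-server-side-encryption": "AES256" }
      }
    }
  ]
}
```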

What about files bigger than, say, 100MB? A single HTTP upload can be unreliable. The solution is multipart uploads. You split the file into smaller pieces, upload each piece independently, and then tell S3 to assemble them. If one piece fails, you only retry that piece, not the whole file. It makes uploads resumable.

The process has three steps. First, your server asks S3 to start a multipart upload, getting a unique ID. Second, it generates a presigned URL for each part. The frontend uploads all parts. Finally, the server tells S3 to combine the parts. This logic is more complex but essential for a good user experience with large media.
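The part math on the frontend is the simple piece of this. A sketch of the slicing logic (`planParts` is an illustrative helper; S3 requires every part except the last to be at least 5MB):

```javascript
// Compute byte ranges for multipart upload parts. S3 part numbers start at 1,
// and every part except the last must be at least 5MB.
const PART_SIZE = 5 * 1024 * 1024;

function planParts(totalBytes, partSize = PART_SIZE) {
  const parts = [];
  for (let start = 0, n = 1; start < totalBytes; start += partSize, n++) {
    parts.push({ partNumber: n, start, end: Math.min(start + partSize, totalBytes) });
  }
  return parts;
}
// Each part is then uploaded as file.slice(start, end) to its own presigned URL,
// and a failed part is retried individually.
```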

Handling the upload is only half the story. Once a file lands in S3, you often need to process it: create thumbnails, scan for viruses, or extract metadata. You don’t want your API to do this synchronously. Instead, you can use S3 Event Notifications. When a new file is uploaded, S3 can automatically send a message to another service, like an AWS Lambda function, to handle processing asynchronously. Your API stays fast and responsive.

Here’s a tiny example of a Lambda function trigger in a serverless setup.

```yaml
# serverless.yml excerpt
functions:
  processUpload:
    handler: src/processor.handler
    events:
      - s3:
          bucket: ${self:custom.uploadsBucket}
          event: s3:ObjectCreated:*
          rules:
            - prefix: uploads/
```

This setup is incredibly powerful. The user gets a quick “upload successful” message, and minutes later, their video has a thumbnail generated automatically. The system feels alive and responsive.
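The processor on the other end receives a standard S3 event payload. A minimal skeleton of the handler referenced as src/processor.handler (the thumbnail work itself is stubbed out here; note that S3 URL-encodes object keys in the event):

```javascript
// Minimal Lambda-style handler for S3 ObjectCreated events.
// In the serverless config this would be exported as the module's `handler`.
async function handler(event) {
  const processed = [];
  for (const record of event.Records ?? []) {
    const bucket = record.s3.bucket.name;
    // S3 URL-encodes keys in event payloads, with '+' standing for a space.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    // Real processing (thumbnails, virus scan, metadata extraction) goes here.
    processed.push({ bucket, key });
  }
  return { processed };
}
```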

Building this requires a shift in thinking. Your server is not a file courier. It is an air traffic controller. It doesn’t carry the cargo; it coordinates the landing, ensuring every plane has the right clearance, follows the correct path, and reports to the tower upon arrival. This separation of concerns is what makes modern applications scalable and robust.

Does this seem like a lot of moving parts? It is. The complexity isn’t in any one piece but in their coordination. The payoff is a system that rarely fails, scales effortlessly with users, and provides clear feedback at every step. No more vanishing progress bars.

I built this system piece by piece after that failed demo. The frustration of that moment was the best teacher. It forced me to look past the simple “it works on my machine” solution and build for the real world—a world of spotty connections, impatient users, and bad actors. The result wasn’t just code; it was confidence.

If this journey from a broken upload to a resilient pipeline helps you, please share it with another developer. Have you faced similar upload nightmares? What was your breaking point? Let me know in the comments.







