Scalable S3 File Uploads with Presigned URLs in Node.js and TypeScript

Learn direct-to-S3 uploads with presigned URLs in Node.js and TypeScript to cut server load, improve scale, and secure file handling.


I remember the first time I built a file upload feature. It seemed simple enough: accept a file from a user, save it somewhere. I used Multer, streamed the file to an Express endpoint, then uploaded it to S3 from the server. It worked. For about ten users. Then came the day when someone tried to upload a 500 MB video, my server froze, and my AWS bill doubled overnight. That’s when I realized the classic server-relayed pattern is an anti-pattern at scale. Every byte travels through your Node.js process, tying up CPU, memory, and sockets for the duration of the transfer, and you pay for the extra compute needed to shuttle bytes that never had to touch your server. Your server becomes the bottleneck, not S3.

So I switched to presigned URLs. The idea is elegant: you give the client a temporary, signed URL that allows it to upload a file directly to your S3 bucket, without the file ever touching your server. Your API only orchestrates the permission—issue the URL, validate the request, and handle post‑upload processing. The client does the heavy lifting. In this guide, I’ll walk you through building that exact pipeline with Node.js, TypeScript, and the AWS SDK v3. We’ll keep everything type‑safe, include code snippets you can adapt, and I’ll share a few pitfalls I stumbled into.

But before we dive into code, ask yourself: is your current upload path going to scale? If you’re already issuing presigned URLs, great. If not, this is the same direct-to-storage architecture that most large file services rely on.


Setting the foundation

Let’s start with the environment. I use Zod to validate my environment variables because misconfigured credentials can lead to hours of debugging. Here’s a minimal validation snippet for your env.ts:

import { z } from "zod";
import dotenv from "dotenv";
dotenv.config();

const schema = z.object({
  AWS_REGION: z.string().min(1),
  S3_BUCKET_NAME: z.string().min(1),
  PRESIGNED_URL_EXPIRES_IN: z.coerce.number().positive().default(3600),
  MAX_FILE_SIZE_MB: z.coerce.number().positive().default(100),
});

export const env = schema.parse(process.env);

Why do I do this? Because a typo in .env can fail silently until some request blows up at runtime. Zod gives a clear error at startup if something is missing or malformed. That saved me more than once.

Next, configure the S3 client. With AWS SDK v3, you import only what you need, keeping bundle size small:

import { S3Client } from "@aws-sdk/client-s3";
import { env } from "./env";

export const s3Client = new S3Client({
  region: env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
  maxAttempts: 3, // retry on transient errors
});

In production, I use IAM roles instead of hard‑coded keys. The SDK automatically picks them up from the EC2 instance metadata service. That’s one less credential to leak.
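Concretely, that just means dropping the credentials block. A minimal sketch of the production variant, relying on the SDK’s default credential provider chain:

import { S3Client } from "@aws-sdk/client-s3";
import { env } from "./env";

// With no explicit credentials, the SDK walks the default provider chain:
// environment variables, shared config files, then the IAM role attached
// to the EC2 instance, ECS task, or Lambda function.
export const s3Client = new S3Client({
  region: env.AWS_REGION,
  maxAttempts: 3,
});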


Generating the presigned URL

Now the core: creating a presigned PUT URL. The server first validates the client’s request—file type, size, and maybe authentication. Only then do we sign a URL. Here’s a service function using @aws-sdk/s3-request-presigner:

import { PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { s3Client } from "../config/aws";
import { env } from "../config/env";
import { v4 as uuid } from "uuid";

export async function createUploadUrl(
  contentType: string,
  fileSize: number
): Promise<{ url: string; key: string }> {
  const key = `uploads/${uuid()}`;
  const command = new PutObjectCommand({
    Bucket: env.S3_BUCKET_NAME,
    Key: key,
    ContentType: contentType,
    // Signing ContentLength pins the upload to the validated size: the
    // client must send exactly this Content-Length header.
    ContentLength: fileSize,
  });

  const url = await getSignedUrl(s3Client, command, {
    expiresIn: env.PRESIGNED_URL_EXPIRES_IN, // seconds
  });

  return { url, key };
}

Notice I sign ContentLength inside the command? A presigned PUT requires the request to match every signed header, so an upload whose Content-Length doesn’t match fails the signature check with a 403. (The Conditions array with content-length-range that you may have seen elsewhere belongs to presigned POST, not presigned PUT.) Much safer than relying on the client to behave.
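If you want S3 itself to enforce a size range rather than an exact byte count, that’s what presigned POST is for. Here’s a minimal sketch using @aws-sdk/s3-presigned-post, assuming the same s3Client and env modules from earlier:

import { createPresignedPost } from "@aws-sdk/s3-presigned-post";
import { s3Client } from "../config/aws";
import { env } from "../config/env";

export async function createUploadPost(contentType: string, key: string) {
  return createPresignedPost(s3Client, {
    Bucket: env.S3_BUCKET_NAME,
    Key: key,
    Conditions: [
      ["content-length-range", 0, 10_485_760], // S3 enforces 0–10 MB
      ["eq", "$Content-Type", contentType],
    ],
    Fields: { "Content-Type": contentType },
    Expires: 300, // seconds
  });
}

It returns a URL plus a set of form fields; the client submits the file as multipart/form-data instead of a raw PUT.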

But wait – do you trust the client’s claimed content type? I don’t. I validate it server‑side before signing. The client can send anything; my server decides what’s allowed. That’s a key security principle.


Validating the upload request

Before creating a presigned URL, I run a validation middleware. Here’s a simple Zod schema for the incoming request body:

import { z } from "zod";
import { env } from "../config/env";

export const createUploadSchema = z.object({
  contentType: z.enum([
    "image/jpeg",
    "image/png",
    "image/webp",
    "image/gif",
    "application/pdf",
  ]),
  fileSize: z.number().positive().max(env.MAX_FILE_SIZE_MB * 1024 * 1024),
});

In the route handler, if validation passes, I generate the URL. Otherwise, I return a 400 with a clear error message. This keeps your upload pipeline clean.
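Wired into Express, that looks roughly like this (the route path and import locations are mine; adapt them to your project layout):

import express from "express";
import { createUploadSchema } from "./schemas/upload";
import { createUploadUrl } from "./services/upload";

const app = express();
app.use(express.json());

app.post("/uploads", async (req, res) => {
  // Reject anything outside the allow-list before we sign anything.
  const parsed = createUploadSchema.safeParse(req.body);
  if (!parsed.success) {
    return res.status(400).json({ errors: parsed.error.flatten() });
  }

  const { contentType, fileSize } = parsed.data;
  const { url, key } = await createUploadUrl(contentType, fileSize);
  res.json({ url, key });
});

app.listen(3000);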


Post‑upload: scan and store metadata

Once the client uploads directly to S3, you need to know about it. I set up an S3 Event Notification that publishes to an SQS queue. A Lambda function picks up the event, scans the file with ClamAV (via a Node.js ClamAV binding), and moves the file to a “quarantine” folder if infected, or to a final folder if clean. The Lambda then updates a DynamoDB table with the file’s status.

You don’t need to implement all that in a single article, but the principle is straightforward: never trust the uploaded bytes until you’ve scanned them. Your API only creates a database record after you receive a confirmation from the scanner. That way, a malicious file never reaches your CDN.
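To make the shape of that Lambda concrete, here’s a skeleton where scanObject is a placeholder for whatever ClamAV integration you choose, and the clean/quarantine prefixes are my own naming:

import {
  S3Client,
  CopyObjectCommand,
  DeleteObjectCommand,
} from "@aws-sdk/client-s3";
import type { SQSHandler } from "aws-lambda";

const s3 = new S3Client({});

// Placeholder: wire this up to your ClamAV integration of choice.
async function scanObject(bucket: string, key: string): Promise<"clean" | "infected"> {
  return "clean";
}

export const handler: SQSHandler = async (event) => {
  for (const record of event.Records) {
    // Each SQS message body is an S3 event notification payload.
    const s3Event = JSON.parse(record.body);
    for (const rec of s3Event.Records ?? []) {
      const bucket = rec.s3.bucket.name;
      const key = decodeURIComponent(rec.s3.object.key.replace(/\+/g, " "));
      const verdict = await scanObject(bucket, key);
      const destPrefix = verdict === "clean" ? "clean/" : "quarantine/";
      await s3.send(
        new CopyObjectCommand({
          Bucket: bucket,
          CopySource: `${bucket}/${key}`, // URL-encode keys with special characters
          Key: key.replace(/^uploads\//, destPrefix),
        })
      );
      await s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: key }));
    }
  }
};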


Delivering files via CloudFront

After scanning, I serve the file through a CloudFront distribution with signed URLs (signed cookies work the same way when you want to cover many files at once). The presigned URL pattern works for uploads, but for downloads I want to restrict access to authenticated users. Here’s a quick snippet using @aws-sdk/cloudfront-signer:

import { getSignedUrl } from "@aws-sdk/cloudfront-signer";

// Note: this getSignedUrl is CloudFront’s; alias the import if the same
// file also uses the S3 request presigner.
const signedUrl = getSignedUrl({
  url: "https://d123.cloudfront.net/uploads/file.pdf",
  keyPairId: process.env.CLOUDFRONT_KEY_PAIR_ID!,
  privateKey: process.env.CLOUDFRONT_PRIVATE_KEY!,
  dateLessThan: new Date(Date.now() + 1000 * 60 * 60).toISOString(), // 1 hour
});

Now the file is accessible only to users who have a valid signed URL. Combine that with your auth layer, and you have a secure end‑to‑end pipeline.


Handling large files with multipart uploads

For files bigger than 100 MB, a single PUT gets fragile: timeouts, no retry granularity, and S3 caps a single PUT at 5 GB anyway. AWS supports multipart uploads, and you can presign each part separately. The server creates a multipart upload, generates presigned URLs for each part, and the client uploads the parts in parallel. Then the client sends a complete-multipart-upload request. Here’s a high-level flow:

  1. createMultipartUpload returns an UploadId.
  2. For each part number, getSignedUrl with UploadPartCommand.
  3. Client uploads each part directly to S3 with the part number and upload ID.
  4. Client calls completeMultipartUpload with the list of ETags.

The @aws-sdk/lib-storage package handles the multipart logic for you when the uploader holds AWS credentials, so it fits server-to-S3 transfers or trusted clients. For a browser uploader working off presigned part URLs, you implement the flow yourself: slice the File with Blob.slice and PUT each chunk with fetch. On the server side, the sketch below covers steps 1, 2, and 4.
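A minimal server-side sketch, again assuming the s3Client and env modules from earlier (the function names are mine):

import {
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { s3Client } from "../config/aws";
import { env } from "../config/env";

const PART_SIZE = 10 * 1024 * 1024; // 10 MB parts (S3's minimum part size is 5 MB)

// Steps 1–2: start the upload and presign one URL per part.
export async function startMultipartUpload(key: string, fileSize: number) {
  const { UploadId } = await s3Client.send(
    new CreateMultipartUploadCommand({ Bucket: env.S3_BUCKET_NAME, Key: key })
  );
  const partCount = Math.ceil(fileSize / PART_SIZE);
  const urls = await Promise.all(
    Array.from({ length: partCount }, (_, i) =>
      getSignedUrl(
        s3Client,
        new UploadPartCommand({
          Bucket: env.S3_BUCKET_NAME,
          Key: key,
          UploadId,
          PartNumber: i + 1, // part numbers are 1-based
        }),
        { expiresIn: env.PRESIGNED_URL_EXPIRES_IN }
      )
    )
  );
  return { uploadId: UploadId!, urls };
}

// Step 4: the client reports back the ETag S3 returned for each part.
export async function finishMultipartUpload(
  key: string,
  uploadId: string,
  parts: { PartNumber: number; ETag: string }[]
) {
  await s3Client.send(
    new CompleteMultipartUploadCommand({
      Bucket: env.S3_BUCKET_NAME,
      Key: key,
      UploadId: uploadId,
      MultipartUpload: { Parts: parts },
    })
  );
}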


Testing the pipeline

I use Jest along with aws-sdk-client-mock to intercept commands sent through the S3 client, plus jest.mock for the presigner itself (getSignedUrl signs locally, so it isn’t an S3 network call). That way, I can test my presigned URL generation without hitting AWS. Here’s a sample test, assuming createUploadUrl lives in services/upload:

import { mockClient } from "aws-sdk-client-mock";
import { S3Client } from "@aws-sdk/client-s3";
import { createUploadUrl } from "../services/upload";

// Mock the presigner so the test needs no real credentials or signing.
jest.mock("@aws-sdk/s3-request-presigner", () => ({
  getSignedUrl: jest.fn().mockResolvedValue("https://fake-url.com"),
}));

// Intercepts any commands actually sent (e.g. CreateMultipartUpload).
const s3Mock = mockClient(S3Client);

it("returns a presigned URL and a generated key", async () => {
  const result = await createUploadUrl("image/png", 1000);
  expect(result.url).toBe("https://fake-url.com");
  expect(result.key).toMatch(/^uploads\//);
});

This keeps my tests fast and deterministic.


Putting it all together

You might wonder: do you need all this for a simple blog app? Probably not. But if you’re building a SaaS that handles user‑generated content, a presigned URL pipeline is the difference between a 5‑line script and a production system that survives a traffic spike.

I’ve seen teams spend weeks debugging file upload failures caused by proxy timeouts or server memory exhaustion. With direct‑to‑S3 uploads, those problems vanish. Your Node.js API stays lean, your S3 bucket absorbs the load, and your users get fast uploads.


The final step: make your uploads resilient

One last thing I always add: a callback URL in the upload request. When the client finishes uploading, it calls your API with the key and a checksum. Your server verifies the checksum, marks the file as “ready,” and triggers any downstream workflows (thumbnail generation, virus scan, etc.). This event‑driven pattern decouples the upload from the processing, letting you scale each piece independently.
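A sketch of that confirmation endpoint, assuming the client sends the hex MD5 of the file it uploaded (the route path and markFileReady are hypothetical):

import express from "express";
import { HeadObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "../config/aws";
import { env } from "../config/env";

export const uploadsRouter = express.Router();

uploadsRouter.post("/uploads/confirm", express.json(), async (req, res) => {
  const { key, checksum } = req.body as { key: string; checksum: string };
  try {
    const head = await s3Client.send(
      new HeadObjectCommand({ Bucket: env.S3_BUCKET_NAME, Key: key })
    );
    // For a single-part PUT (without SSE-KMS), the ETag is the hex MD5 of the body.
    const etag = head.ETag?.replace(/"/g, "");
    if (etag !== checksum) {
      return res.status(409).json({ error: "checksum mismatch" });
    }
    // await markFileReady(key); // hypothetical persistence call
    return res.json({ status: "ready" });
  } catch {
    return res.status(404).json({ error: "object not found" });
  }
});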

Now, if you’ve been following along, you have the foundation for a type‑safe, scalable upload service. Try it out. Start by generating a presigned URL from an Express route. Then upload a file with curl using that URL. Once you see how clean it is, you’ll never go back to server‑relayed uploads.

I’d love to hear about your experience. Did the code work out of the box? Did you hit any edge cases? Drop a comment below – your story might help someone else avoid the same mistake. And if you found this guide useful, like and share it with your team. Good upload architecture is worth spreading.

