Lately, I’ve been thinking a lot about the quiet, frustrating parts of building web applications—the features that are absolutely essential but rarely exciting to build. One of those is file uploads. We all need them, but getting them right—truly right for a production environment—is a different story. It’s not just about saving a file. It’s about security, performance, user experience, and maintainable code. So, I decided to build a system from the ground up that handles all of it, and I want to show you exactly how I did it. This is that guide.
Let’s start with a simple question: when a user uploads a profile picture, what is actually happening behind the scenes? Most of the time, the server receives a raw stream of data. Our first job is to safely intercept that data. I use Multer, but with a specific configuration. Instead of saving files to disk immediately, I keep them in memory. This lets me inspect, validate, and transform the file before it ever touches a permanent storage location. It’s a crucial first step for control and security.
Here’s how I set up that Multer instance. Notice how I define exactly which file types are allowed. Guessing based on a file extension isn’t safe; we’ll do real validation a bit later.
// config/multer.ts
import multer from 'multer';
import { Request } from 'express';
import { AllowedMimeType } from '../types/upload.types';

const MAX_SIZE = 10 * 1024 * 1024; // 10MB

const upload = multer({
  storage: multer.memoryStorage(), // Keep file in memory as a Buffer
  limits: { fileSize: MAX_SIZE },
  fileFilter: (_req: Request, file, cb) => {
    const allowed: AllowedMimeType[] = ['image/jpeg', 'image/png', 'image/webp'];
    if (allowed.includes(file.mimetype as AllowedMimeType)) {
      cb(null, true);
    } else {
      cb(new Error('File type not permitted.'));
    }
  },
});

export const singleUpload = upload.single('file');
But what if someone renames a .exe file to .jpg? The mimetype from the request can be faked. This is where “magic bytes” or file signatures come in. A file’s first few bytes often reveal its true format. I use the file-type package to check this against the buffer we have in memory.
// middleware/fileValidator.ts
// Note: file-type v17+ is ESM-only and exports this as `fileTypeFromBuffer`;
// the named import below matches v16.
import { fromBuffer } from 'file-type';

export async function validateFile(buffer: Buffer): Promise<boolean> {
  const fileInfo = await fromBuffer(buffer);
  if (!fileInfo) return false; // Signature not recognized at all
  // Keep this list in sync with the types allowed in the Multer config
  const validTypes = ['image/jpeg', 'image/png', 'image/webp'];
  return validTypes.includes(fileInfo.mime);
}
// Use this function in your route before processing the upload.
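To make the “magic bytes” idea concrete, here’s a minimal hand-rolled sketch of the same check for just JPEG and PNG. The names (`sniffMime`, `SIGNATURES`) are hypothetical and this isn’t a replacement for file-type’s much broader detection; it only shows what the library is doing under the hood:

```typescript
// A sketch of signature sniffing: compare the buffer's leading bytes
// against known magic numbers for each format.
const SIGNATURES: Array<{ mime: string; bytes: number[] }> = [
  { mime: 'image/jpeg', bytes: [0xff, 0xd8, 0xff] },       // JPEG files start FF D8 FF
  { mime: 'image/png', bytes: [0x89, 0x50, 0x4e, 0x47] },  // PNG files start with ".PNG"
];

export function sniffMime(buffer: Buffer): string | null {
  for (const sig of SIGNATURES) {
    if (
      buffer.length >= sig.bytes.length &&
      sig.bytes.every((b, i) => buffer[i] === b)
    ) {
      return sig.mime;
    }
  }
  return null; // Unrecognized — treat as invalid
}
```

A renamed .exe fails this check immediately, because its content still begins with its own signature, not a JPEG or PNG one.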
Now, let’s say the file is a valid image. The user might have uploaded a massive 12-megabyte photo from their modern phone. Serving that to every visitor is a bandwidth nightmare. This is where Sharp, an incredible Node.js module, enters the picture. It lets you resize, compress, and convert images on the fly, directly in your server’s memory. Have you considered how much data you could save for your users?
I create a simple service to handle common transformations. The goal is to produce a much smaller, web-optimized image without a noticeable loss in quality.
// services/imageService.ts
import sharp from 'sharp';

export async function optimizeImage(inputBuffer: Buffer): Promise<Buffer> {
  return await sharp(inputBuffer)
    .resize(1200, 1200, { fit: 'inside', withoutEnlargement: true }) // Fit within 1200x1200, don't enlarge small images
    .jpeg({ quality: 80 }) // Convert to JPEG at 80% quality
    .toBuffer();
}

// You can also create a thumbnail in the same process.
export async function createThumbnail(inputBuffer: Buffer): Promise<Buffer> {
  return await sharp(inputBuffer)
    .resize(200, 200, { fit: 'cover' })
    .jpeg({ quality: 70 })
    .toBuffer();
}
We have a validated, optimized image buffer. Now it needs a home. Writing files directly to your server’s filesystem doesn’t scale and makes backups harder. Object storage like AWS S3 is the standard solution. I use the modern AWS SDK v3 to upload our buffer directly. The key is to generate a unique filename to avoid collisions. A timestamp or a UUID works well.
But here’s a subtle point: should you upload the original filename? Usually, no. It can contain special characters, spaces, or path segments that cause problems. I strip it and use my own generated key.
// services/s3Service.ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { v4 as uuidv4 } from 'uuid';

const s3Client = new S3Client({ region: process.env.AWS_REGION ?? 'us-east-1' });
const Bucket = process.env.S3_BUCKET!;

// Derive the extension from the MIME type so the key matches the content,
// instead of hardcoding .jpg while ContentType varies.
const EXTENSIONS: Record<string, string> = {
  'image/jpeg': 'jpg',
  'image/png': 'png',
  'image/webp': 'webp',
};

export async function uploadToS3(
  buffer: Buffer,
  mimeType: string,
  folder: string = 'uploads'
): Promise<string> {
  const ext = EXTENSIONS[mimeType] ?? 'bin';
  const Key = `${folder}/${uuidv4()}.${ext}`; // Generate a unique key
  const command = new PutObjectCommand({
    Bucket,
    Key,
    Body: buffer,
    ContentType: mimeType,
  });
  await s3Client.send(command);
  return Key; // Return the S3 object key for later reference
}
The file is now securely in S3. But how do you let a user’s browser view it? You could make the object public, but that’s rarely a good idea. A better method is to generate a pre-signed URL. This is a temporary URL that grants access to the private S3 object for a limited time, like 15 minutes or an hour. It’s secure, controlled, and doesn’t expose your bucket structure.
Think about a social media site. When you load your feed, it doesn’t load hundreds of public images instantly; it loads temporary URLs for just the images on your screen. How much harder would it be to build that feature without this mechanism?
// services/s3Service.ts - continued
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { GetObjectCommand } from '@aws-sdk/client-s3';

export async function getPresignedUrl(key: string): Promise<string> {
  const command = new GetObjectCommand({ Bucket, Key: key });
  // URL expires in 1 hour (3600 seconds)
  return await getSignedUrl(s3Client, command, { expiresIn: 3600 });
}
All these pieces need to come together in a TypeScript controller. Type safety here is a lifesaver. It prevents you from accidentally passing a file buffer where a file path is expected, or mixing up argument orders. I define clear interfaces for what data flows between validation, processing, and storage.
// types/upload.types.ts
export type AllowedMimeType = 'image/jpeg' | 'image/png' | 'image/webp';

export interface UploadResult {
  key: string;
  url: string; // The pre-signed URL
  size: number;
  mimeType: string;
}
// controllers/uploadController.ts
import { Request, Response } from 'express';
import { singleUpload } from '../config/multer';
import { validateFile } from '../middleware/fileValidator';
import { optimizeImage } from '../services/imageService';
import { uploadToS3, getPresignedUrl } from '../services/s3Service';
import { UploadResult } from '../types/upload.types';

export const uploadImage = async (req: Request, res: Response) => {
  try {
    // 1. Run the Multer middleware, promisified so we can await it
    await new Promise<void>((resolve, reject) => {
      singleUpload(req, res, (err: unknown) => {
        if (err) reject(err);
        else resolve();
      });
    });
    if (!req.file) {
      return res.status(400).json({ error: 'No file provided.' });
    }
    // 2. Validate with magic bytes
    const isValid = await validateFile(req.file.buffer);
    if (!isValid) {
      return res.status(400).json({ error: 'Invalid file format.' });
    }
    // 3. Optimize with Sharp (output is always JPEG, per imageService)
    const optimizedBuffer = await optimizeImage(req.file.buffer);
    // 4. Upload to S3
    const s3Key = await uploadToS3(optimizedBuffer, 'image/jpeg', 'profile-images');
    // 5. Generate a temporary URL for the client
    const presignedUrl = await getPresignedUrl(s3Key);
    const result: UploadResult = {
      key: s3Key,
      url: presignedUrl,
      size: optimizedBuffer.byteLength,
      mimeType: 'image/jpeg',
    };
    res.status(201).json(result);
  } catch (error) {
    console.error('Upload failed:', error);
    res.status(500).json({ error: 'File processing failed.' });
  }
};
This approach creates a robust pipeline. It’s secure from fake file types, efficient through image optimization, scalable using S3, and controlled via temporary URLs. Adding more features, like support for PDFs, creating multiple image sizes, or even handling large files in chunks, builds naturally on this foundation.
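As a small sketch of that extensibility (the names here, like `policyFor` and `TYPE_POLICIES`, are hypothetical, not from the code above): new file types can be described as data rather than new branches in the pipeline, so adding PDF support becomes one table entry plus a processing hint.

```typescript
// Describe each permitted type as data: its S3 key extension and how the
// pipeline should treat it. Images go through Sharp; PDFs are stored as-is.
type ProcessingHint = 'optimize-image' | 'store-as-is';

interface TypePolicy {
  extension: string;
  hint: ProcessingHint;
}

const TYPE_POLICIES: Record<string, TypePolicy> = {
  'image/jpeg': { extension: 'jpg', hint: 'optimize-image' },
  'image/png': { extension: 'png', hint: 'optimize-image' },
  'image/webp': { extension: 'webp', hint: 'optimize-image' },
  'application/pdf': { extension: 'pdf', hint: 'store-as-is' },
};

// Returns the policy for a detected MIME type, or null if the type
// isn't permitted at all.
export function policyFor(mime: string): TypePolicy | null {
  return TYPE_POLICIES[mime] ?? null;
}
```

The controller would then branch once on the hint, and the validator, key generator, and upload step all read from the same table, so the three lists can never drift out of sync.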
Building this was about solving a real problem elegantly. It’s the kind of backend work that users never see but always appreciate when it’s fast and reliable. I hope walking through my process gives you a solid template for your own projects.