I’ve been thinking about file uploads lately. Not the simple ones, but the kind that power real applications. You know, when you upload a large video, a batch of high-resolution photos, or important documents. These aren’t just clicks; they’re critical operations. If they fail, users get frustrated and data can be lost. This got me thinking: how do the big platforms make it so seamless? The answer often involves smart architecture, not just code. Let’s build something robust together.
Think about the last time you uploaded a file that failed halfway through. Annoying, right? You had to start over. A good system shouldn’t do that. It should pick up where it left off. It should also be fast, no matter where the user is located. And it must be secure, checking files before they touch your storage. These are the problems we’re solving today.
We’ll use tools that give us flexibility. Express.js is our reliable framework. For storage, we’ll use Minio. It speaks the same language as Amazon S3, which means our code will work with many cloud providers. This is a powerful idea. You’re not locked into one vendor. You can run Minio on your own servers for development or use a compatible service in production.
First, let’s set up our project. We need a solid foundation. Create a new directory and initialize it.
mkdir robust-upload-system && cd robust-upload-system
npm init -y
Now, install the core packages. We’ll need Express, the Minio client, Multer for handling uploads, uuid for generating unique identifiers, and TypeScript for better code quality.
npm install express minio multer uuid
npm install -D typescript @types/node @types/express @types/multer nodemon
Create a tsconfig.json file for TypeScript settings.
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true
  }
}
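We also installed nodemon, so let’s wire up a couple of npm scripts to use it. This is just one possible setup, assuming your entry point compiles from src/index.ts to dist/index.js; the script names are a convention, not a requirement.
"scripts": {
  "build": "tsc",
  "start": "node dist/index.js",
  "dev": "nodemon --watch src --ext ts --exec \"npm run build && npm start\""
}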
Our storage needs to be ready. Let’s run Minio locally using Docker. It’s like having a private S3 bucket on your machine. Create a docker-compose.yml file.
version: '3.8'
services:
  minio:
    image: minio/minio
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: myaccesskey
      MINIO_ROOT_PASSWORD: mysecretkey
    command: server /data --console-address ":9001"
    volumes:
      - minio_data:/data
volumes:
  minio_data:
Run docker-compose up -d to start it. You can now access the Minio console at http://localhost:9001. Use the credentials above to log in and create a bucket named uploads.
Now, the core connection. How do we talk to Minio from our Node.js app? We create a configuration service. In src/services/storage.ts, we set up the client.
import { Client } from 'minio';
const minioClient = new Client({
  endPoint: 'localhost',
  port: 9000,
  useSSL: false,
  accessKey: 'myaccesskey',
  secretKey: 'mysecretkey'
});
export const checkBucketExists = async (bucketName: string): Promise<boolean> => {
  try {
    return await minioClient.bucketExists(bucketName);
  } catch (error) {
    console.error('Bucket check failed:', error);
    return false;
  }
};
// Ensure our bucket is ready on startup
checkBucketExists('uploads').then(async (exists) => {
  if (!exists) {
    try {
      await minioClient.makeBucket('uploads', 'us-east-1');
      console.log('Bucket "uploads" created.');
    } catch (err) {
      console.error('Create bucket error:', err);
    }
  }
});
export { minioClient };
This code does two things. It creates a client object with our connection details. It also has a helper function to check if a bucket exists. The startup check ensures our uploads bucket is ready for action. Notice the error handling? It’s simple but crucial. We log the error instead of crashing the app.
Basic uploads are a start, but they have limits. What if the file is 5 gigabytes? Sending it all at once can time out or use too much memory. The solution is to break it into pieces, or chunks. This allows pausing and resuming. It’s how services like YouTube or Dropbox handle large files.
We need to track these chunks. Let’s design a simple flow. The frontend splits a file into parts, say 10MB each. It uploads each part with a unique ID and a part number. The backend receives these parts and tells Minio to store them as a “multipart upload.” Once all parts are sent, we finalize it.
Here’s a simplified version of that backend logic. We create an endpoint to start the process.
import express from 'express';
import { minioClient } from './services/storage';
import { v4 as uuidv4 } from 'uuid';
const router = express.Router();
// Start a multipart upload
router.post('/upload/start', async (req, res) => {
  const { filename, fileType } = req.body;
  const uploadId = uuidv4();
  const objectKey = `multipart/${uploadId}/${filename}`;
  try {
    // Ask Minio to open a multipart upload session, recording the content type as object metadata
    const uploadIdFromMinio = await minioClient.initiateNewMultipartUpload('uploads', objectKey, { 'Content-Type': fileType });
    // Store this uploadId and objectKey in your database, linked to the user's session
    res.json({ uploadId: uploadIdFromMinio, objectKey, multipartId: uploadId });
  } catch (err) {
    res.status(500).json({ error: 'Failed to start upload' });
  }
});
The frontend gets back an uploadId and an objectKey. It then uploads each chunk to a separate endpoint, sending the part number and the chunk data. After all parts are uploaded, the frontend calls a ‘complete’ endpoint. The backend then tells Minio to assemble the final file from all the parts.
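To make the frontend side concrete, here is a rough sketch of that loop. The /upload/part and /upload/complete endpoints are assumed counterparts to the start endpoint above, and the 10MB chunk size and field names are example choices, not a fixed contract.
// Sketch: split a File into parts and send them one at a time.
// /upload/part and /upload/complete are assumed backend endpoints.
const CHUNK_SIZE = 10 * 1024 * 1024; // 10MB per part
async function uploadInChunks(file: File, uploadId: string, objectKey: string) {
  const totalParts = Math.ceil(file.size / CHUNK_SIZE);
  for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
    const start = (partNumber - 1) * CHUNK_SIZE;
    const chunk = file.slice(start, start + CHUNK_SIZE);
    const form = new FormData();
    form.append('uploadId', uploadId);
    form.append('objectKey', objectKey);
    form.append('partNumber', String(partNumber));
    form.append('chunk', chunk);
    // If this request fails, we can retry or resume from this part number later
    await fetch('/upload/part', { method: 'POST', body: form });
  }
  // All parts are in; ask the backend to assemble the final object
  await fetch('/upload/complete', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ uploadId, objectKey })
  });
}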
But what about user location? If your server is in the US and a user in Asia is uploading, it could be slow. A multi-region strategy helps. You deploy Minio or S3-compatible storage in different geographic areas. The user’s upload is directed to the closest region. This reduces latency and improves speed.
How do we decide the region? A simple way is to have the frontend perform a quick latency test to a few endpoints when the app loads. It then tags all upload requests with the chosen region code. Our backend reads this header and uses the corresponding Minio client configuration for that region.
// Pseudo-code for region routing
const regionClients = {
  'us-east': minioClientUS,       // each entry is a Minio Client configured for that region's endpoint
  'eu-west': minioClientEU,
  'asia-pacific': minioClientAP
};
// Inside a request handler, read the header the frontend set after its latency test
const region = req.headers['x-upload-region'] || 'us-east';
const clientForUpload = regionClients[region] || regionClients['us-east']; // fall back if the header holds an unknown region
// Use clientForUpload for all storage operations for this request
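On the frontend, the latency test can be as simple as timing a small request to each region’s endpoint when the app loads and keeping the fastest one. Here is a rough sketch; the endpoint URLs are placeholders for wherever your regional gateways actually live.
// Sketch: probe each region once at startup and remember the fastest.
// The URLs below are placeholders; replace them with your real regional endpoints.
const REGION_ENDPOINTS: Record<string, string> = {
  'us-east': 'https://us-east.uploads.example.com/health',
  'eu-west': 'https://eu-west.uploads.example.com/health',
  'asia-pacific': 'https://ap.uploads.example.com/health'
};
async function pickFastestRegion(): Promise<string> {
  const timings = await Promise.all(
    Object.entries(REGION_ENDPOINTS).map(async ([region, url]) => {
      const startedAt = performance.now();
      try {
        await fetch(url, { method: 'HEAD' });
        return { region, ms: performance.now() - startedAt };
      } catch {
        return { region, ms: Number.POSITIVE_INFINITY }; // unreachable regions lose
      }
    })
  );
  timings.sort((a, b) => a.ms - b.ms);
  return timings[0].region; // send this value in the x-upload-region header
}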
Security is non-negotiable. You can’t just accept any file. We need validation. Check the file extension and MIME type. Limit the file size. For an extra layer of safety, consider integrating a virus scanning service. This can be done by temporarily storing the file, scanning it with a tool like ClamAV, and only then moving it to the final bucket if it’s clean.
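Multer, which we installed at the start, can enforce the size limit and MIME type check before a file ever reaches our handler. Here is a minimal sketch; the 50MB cap and the allowed types are example values, not recommendations.
import multer from 'multer';
// Example values: tune the size limit and the allowed types to your application
const MAX_FILE_SIZE = 50 * 1024 * 1024; // 50MB
const ALLOWED_MIME_TYPES = ['image/jpeg', 'image/png', 'application/pdf', 'video/mp4'];
const upload = multer({
  storage: multer.memoryStorage(),     // keep the part in memory before handing it to Minio
  limits: { fileSize: MAX_FILE_SIZE }, // reject anything larger before it fills memory or disk
  fileFilter: (req, file, cb) => {
    if (ALLOWED_MIME_TYPES.includes(file.mimetype)) {
      cb(null, true);
    } else {
      cb(new Error(`File type ${file.mimetype} is not allowed`));
    }
  }
});
// Usage: router.post('/upload/part', upload.single('chunk'), partHandler);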
Also, never trust the filename from the user. They might send a file named invoice.pdf.exe. Always generate a safe, unique name on the server.
import path from 'path';
import { v4 as uuidv4 } from 'uuid';
const generateSafeName = (originalName: string): string => {
  // The extension still comes from user input, so pair this with the extension and MIME checks above
  const ext = path.extname(originalName); // e.g., .pdf
  const uniqueId = uuidv4();
  return `${uniqueId}${ext}`; // e.g., '123e4567-e89b-12d3-a456-426614174000.pdf'
};
Sometimes, you don’t want files to go through your server at all due to load. You can generate a pre-signed URL. This is a special, time-limited URL that gives the frontend permission to upload directly to Minio. Your server generates this URL and sends it to the client. The client then uses it to PUT the file directly into the storage bucket. This offloads the bandwidth from your application server.
router.get('/presigned-url', async (req, res) => {
  const { filename } = req.query;
  if (typeof filename !== 'string' || filename.length === 0) {
    return res.status(400).json({ error: 'filename is required' });
  }
  const objectName = generateSafeName(filename);
  const expirySeconds = 60 * 15; // URL expires in 15 minutes
  try {
    const url = await minioClient.presignedPutObject('uploads', objectName, expirySeconds);
    res.json({ url, objectName });
  } catch (err) {
    res.status(500).json({ error: 'Could not generate URL' });
  }
});
The frontend fetches this URL from your server, then uses it to upload the file directly to https://your-minio-url/.... It’s efficient and scalable.
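The client-side half of this is just a plain PUT. Here is a small sketch using fetch, assuming the /presigned-url route shown above.
// Sketch: ask our server for a pre-signed URL, then PUT the file straight to storage
async function uploadDirect(file: File): Promise<string> {
  const response = await fetch(`/presigned-url?filename=${encodeURIComponent(file.name)}`);
  const { url, objectName } = await response.json();
  // The file body goes directly to Minio; it never passes through our Express server
  const putResponse = await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file
  });
  if (!putResponse.ok) {
    throw new Error(`Direct upload failed with status ${putResponse.status}`);
  }
  return objectName; // store this reference in your database
}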
Building this requires attention to detail. Test with different file sizes and network conditions. Monitor your storage performance and error logs. Use tools like Winston for structured logging so you can track failed uploads and diagnose issues.
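As one example, a Winston logger with a JSON format makes failed uploads easy to filter later. Winston isn’t in our install commands above, so add it with npm install winston first; the field names in the log call are just a convention I’m assuming, not a standard.
import winston from 'winston';
// Structured JSON logs so failed uploads can be searched and counted later
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [new winston.transports.Console()]
});
// Example usage inside an upload handler's catch block
logger.error('upload_failed', { objectKey: 'multipart/abc/video.mp4', partNumber: 3, reason: 'timeout' });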
Remember, the goal is a system that feels invisible to the user. It just works—reliably, quickly, and securely. We’ve covered chunking for reliability, multi-region for speed, and validation for security. These concepts combine to form a production-ready backbone.
What part of this process seems most challenging to implement in your own projects? Is it the resume functionality, or perhaps managing the storage across different regions? I’d love to hear what you’re working on.
Putting these pieces together takes effort, but the result is worth it. You’ll have a file handling system that can scale with your needs and provide a great user experience. Start with the basics, get a simple upload working, then incrementally add chunking and direct uploads. Each step makes your system more resilient.
I hope this guide gives you a clear path forward. Building robust systems is a journey. If you found this walkthrough helpful, please share it with other developers who might be facing similar challenges. Have you tried a different approach for handling large files? Let me know in the comments—I’m always interested in learning about other methods and solutions.