I was building a high-traffic API last month when I hit a wall. The database was groaning under the load, response times were climbing, and users were getting frustrated. That’s when I realized I had been treating caching as an afterthought—a simple set and get. It wasn’t enough. The real challenge wasn’t just storing data in Redis; it was knowing what to store, when to store it, and how to keep it fresh without bringing the system to its knees. This experience sent me down a path of research and experimentation, leading to the strategies I want to share with you today. If you’ve ever wondered why your cache isn’t delivering the performance boost you expected, you’re in the right place.
Let’s start with a fundamental shift in perspective. Caching isn’t just a tool; it’s a strategy. The most common pattern is Cache-Aside, often called lazy loading. The application code is in charge: it tries to read from the cache first, and on a miss it fetches the data from the primary database, stores it in the cache, and then returns it. It’s simple and effective.
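Here’s what that looks like with the ioredis client. Treat this as a minimal sketch: fetchFromDb stands in for whatever function loads the record from your database, and the snippets that follow reuse this redis client.

const Redis = require('ioredis');
const redis = new Redis(); // Assumes Redis on localhost:6379

// Cache-Aside: the application owns the read path.
async function getCacheAside(key, fetchFromDb) {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached); // Cache hit

  const data = await fetchFromDb(); // Cache miss: go to the source of truth
  await redis.set(key, JSON.stringify(data), 'EX', 3600); // Lazily populate, 1-hour TTL
  return data;
}

But this simplicity hides a problem. What happens if the cache is empty and a thousand users request the same data at once?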
This is called a cache stampede, or thundering herd. A thousand simultaneous database queries can crash your service. The solution? A lock. When the first request misses the cache, it acquires a lock. Other concurrent requests see the lock and wait, or get a stale version of the data, instead of hammering the database. Here’s a basic way to implement that logic.
async function getWithLock(key, fetchFromDb) {
  // 1. Try to get cached data
  let data = await redis.get(key);
  if (data) return JSON.parse(data);

  // 2. Try to acquire a lock (NX: only if absent; PX: auto-expires in 5 seconds)
  const lockKey = `lock:${key}`;
  const lockAcquired = await redis.set(lockKey, '1', 'PX', 5000, 'NX');

  if (lockAcquired) {
    try {
      // 3. I have the lock, so I fetch from DB
      data = await fetchFromDb();
      await redis.set(key, JSON.stringify(data), 'EX', 3600); // Cache for 1 hour
    } finally {
      // 4. Always release the lock. (In production, store a unique token and
      // delete only if it still matches, so a slow holder whose lock already
      // expired can't delete someone else's lock.)
      await redis.del(lockKey);
    }
  } else {
    // 5. I didn't get the lock: wait briefly, then retry (re-checks the cache first)
    await new Promise(resolve => setTimeout(resolve, 100));
    return getWithLock(key, fetchFromDb);
  }
  return data;
}
But what about writing data? Cache-Aside only handles reads. For writes, we need different patterns. Write-Through is one approach. Every time you write to the database, you also write to the cache. This keeps the cache very fresh, but it makes every write operation slower because it has to complete two actions. Is the consistency worth the latency cost for your use case?
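As a sketch, Write-Through is just two awaited writes in sequence. Here saveToDb is a hypothetical helper that persists to your primary database:

// Write-Through: the caller waits for BOTH stores, which is the latency cost.
async function writeThrough(key, value, saveToDb) {
  await saveToDb(value); // 1. The source of truth first
  await redis.set(key, JSON.stringify(value), 'EX', 3600); // 2. Then refresh the cache
  return value;
}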
Then there’s Write-Behind. This is more complex but powerful. The application writes to the cache immediately and returns a fast response to the user. The cache then batches these writes and updates the database asynchronously in the background. It’s incredibly fast for users, but it risks data loss if the cache fails before the batch is written. Would you trade some durability for massive speed?
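Here’s a minimal sketch of the idea, using a Redis list as the buffer. Note the assumptions: saveBatchToDb is a hypothetical bulk writer, and LPOP with a count argument needs Redis 6.2 or newer.

// Write-Behind: acknowledge immediately, persist in the background.
async function writeBehind(key, value) {
  await redis.set(key, JSON.stringify(value), 'EX', 3600); // 1. Fast cache write
  await redis.rpush('write_queue', JSON.stringify({ key, value })); // 2. Queue the DB write
}

// Background worker: drain the queue in batches of up to 100 every second.
setInterval(async () => {
  const batch = await redis.lpop('write_queue', 100); // Requires Redis >= 6.2
  if (batch && batch.length > 0) {
    // If the process dies here, these popped entries are lost: the durability trade-off.
    await saveBatchToDb(batch.map(item => JSON.parse(item)));
  }
}, 1000);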
Invalidation is where many strategies fall apart. You have cached a user’s profile. The user updates their name. If you only invalidate the cache key for that specific profile, you’re safe. But what if you have a cached list of “Top 10 Users” that includes this person? That list is now stale. You need a way to tag related data. Redis can help with sets. You can store a tag like user:123 and associate it with every cache key that contains data about that user. When the user updates their profile, you find all keys tagged with user:123 and delete them.
// When caching data, add a tag (jsonData is the serialized "Top 10 Users" payload)
await redis.set('data:top_users', jsonData, 'EX', 600);
await redis.sadd('tag:user:123', 'data:top_users'); // Tag this key

// Later, when user 123 updates their profile, invalidate all tagged keys
const keysToDelete = await redis.smembers('tag:user:123');
if (keysToDelete.length > 0) {
  await redis.del(...keysToDelete); // Delete all cached data for this user
}
await redis.del('tag:user:123'); // Clean up the tag set
Have you considered a multi-layered cache? Not all data is equal. Some is accessed so frequently it should live in the application’s own memory (L1 cache). Less frequently accessed, but still shared, data goes in Redis (L2 cache). The node-cache package is a simple option for the in-memory layer. The lookup order is: check memory first, then Redis, then the database. This eliminates network calls to Redis for the hottest data.
const NodeCache = require('node-cache');
const localCache = new NodeCache({ stdTTL: 30 }); // Short TTL for L1

async function getMultiLayer(key) {
  // 1. Check Local (L1) Cache
  let data = localCache.get(key);
  if (data) {
    console.log('L1 Cache Hit');
    return data;
  }

  // 2. Check Redis (L2) Cache
  data = await redis.get(key);
  if (data) {
    console.log('L2 Cache Hit');
    data = JSON.parse(data);
    localCache.set(key, data); // Populate L1
    return data;
  }

  // 3. Hit Database (fetchFromDb: the same assumed DB loader as earlier)
  console.log('Cache Miss');
  data = await fetchFromDb(key);

  // 4. Set in both caches
  await redis.setex(key, 3600, JSON.stringify(data));
  localCache.set(key, data);
  return data;
}
Monitoring is non-negotiable. You must know your cache hit ratio. A low ratio means your cache isn’t working well—maybe you’re caching the wrong things or TTLs are too short. Redis provides the INFO command. You can track keyspace_hits and keyspace_misses. Calculate the ratio: hits / (hits + misses). Aim for above 0.9, or 90%. How would you know if you’re not measuring?
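Here is a small sketch that pulls those two counters out of INFO with ioredis and computes the ratio:

async function cacheHitRatio() {
  const stats = await redis.info('stats'); // The "Stats" section as plain text
  const read = (field) => {
    const match = stats.match(new RegExp(`${field}:(\\d+)`));
    return match ? Number(match[1]) : 0;
  };
  const hits = read('keyspace_hits');
  const misses = read('keyspace_misses');
  return hits / ((hits + misses) || 1); // Guard against division by zero on a cold cache
}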
Finally, remember that a cache is a copy of data, not the source of truth. Your system must work correctly even if the cache is completely empty. This is the golden rule: cache failures should lead to slower performance, not broken features.
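One way to enforce that rule is to make every cache call fail open. A defensive sketch of the read path, with the same assumed fetchFromDb loader:

// Fail open: a Redis outage degrades to a slower DB read, never to an error.
async function getFailOpen(key, fetchFromDb) {
  try {
    const cached = await redis.get(key);
    if (cached) return JSON.parse(cached);
  } catch (err) {
    console.warn(`Cache read failed for ${key}, falling back to DB:`, err.message);
  }

  const data = await fetchFromDb();
  try {
    await redis.set(key, JSON.stringify(data), 'EX', 3600);
  } catch (err) {
    // Swallow it: failing to repopulate the cache must not break the request.
  }
  return data;
}

Start simple with Cache-Aside, add locking to prevent stampedes, then explore invalidation tags and layered approaches as your needs grow.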
I hope walking through these patterns gives you a clearer map for your own projects. Caching, done right, is what separates a sluggish application from a snappy, scalable one. What’s the first caching problem you’ll tackle with these ideas? If this guide helped clarify the complex world of caching strategies, please share it with a fellow developer who might be facing similar challenges. I’d also love to hear about your experiences and questions in the comments below.