I was building a microservice that needed to handle 10,000 requests per second without breaking a sweat. The database was groaning, response times were climbing, and I knew I needed a better solution. That’s when I started looking seriously at distributed caching. If you’ve ever watched your application slow to a crawl under load, you know exactly why we’re talking about this today. Let’s build something fast together.
Why KeyDB? It’s simple: more speed with less pain. While Redis is fantastic, KeyDB takes the same great concepts and makes them work harder on modern hardware. It uses multiple threads, so it can actually take advantage of all those CPU cores your server has. Think of it as Redis with a turbocharger. It speaks the same language, so your existing Redis code works, but everything just runs quicker.
Getting started is straightforward. We’ll use Docker to run everything. Here’s a basic setup to get KeyDB and a database running.
# docker-compose.yml
services:
  keydb:
    image: eqalpha/keydb:latest
    ports:
      - "6379:6379"
    command: keydb-server --server-threads 4
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: appdb
      POSTGRES_PASSWORD: postgres  # required by the official image; use a real secret outside local dev
With our services ready, let’s connect to them from a Fastify application. Fastify is my go-to Node.js framework because it’s built for speed, just like our cache.
// src/app.ts
import Fastify from 'fastify';
import Redis from 'ioredis';

const app = Fastify({ logger: true });
const redis = new Redis('redis://localhost:6379');

app.get('/health', async () => {
  return { status: 'ok', cache: await redis.ping() };
});

await app.listen({ port: 3000 });
Now, the real question is: how do we decide what to cache? Not everything belongs in cache. You typically cache data that is expensive to get, doesn’t change too often, and is read frequently. User profiles, product catalogs, and session data are classic examples.
Let’s implement a basic cache-aside pattern. This is where your application code is responsible for loading data into the cache.
// src/services/product.service.ts
async function getProduct(id: string) {
  // 1. Try to get from cache first
  const cached = await redis.get(`product:${id}`);
  if (cached) {
    return JSON.parse(cached);
  }

  // 2. If not in cache, get from database
  const product = await db.products.findUnique({ where: { id } });
  if (product) {
    // 3. Store in cache for future requests (5 minute TTL)
    await redis.setex(`product:${id}`, 300, JSON.stringify(product));
  }
  return product;
}
See the pattern? Check cache, miss, get from source, populate cache. It’s simple but powerful. But what happens when ten thousand requests ask for the same uncached product at once? They all miss the cache and hit the database simultaneously—a “cache stampede.” We need to prevent that.
One way is to use a mutex, or lock, so only the first request fetches the data.
async function getProductWithLock(id: string) {
  const cacheKey = `product:${id}`;
  const lockKey = `lock:${cacheKey}`;

  // Try to get cached data
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // Try to acquire a lock for 1 second (NX = only set if it doesn't already exist)
  const lockAcquired = await redis.set(lockKey, '1', 'PX', 1000, 'NX');
  if (!lockAcquired) {
    // Someone else is fetching, wait and retry
    await new Promise(resolve => setTimeout(resolve, 50));
    return getProductWithLock(id);
  }

  try {
    // We have the lock, fetch from DB
    const product = await db.products.findUnique({ where: { id } });
    if (product) {
      await redis.setex(cacheKey, 300, JSON.stringify(product));
    }
    return product;
  } finally {
    // Always release the lock
    await redis.del(lockKey);
  }
}
This ensures only one request per key does the expensive database fetch. The others wait briefly and then get the cached result. But can we do better? What if we could serve slightly old data while fetching fresh data in the background? That’s the stale-while-revalidate pattern.
Imagine a product page. It’s okay if the price is 30 seconds old, but we want it updated eventually. We can store two values: the data and a timestamp.
async function getProductStale(id: string) {
  const cacheKey = `product:${id}`;
  const metaKey = `meta:${id}`;

  const [cachedData, cachedMeta] = await redis.mget(cacheKey, metaKey);
  if (cachedData && cachedMeta) {
    const meta = JSON.parse(cachedMeta);
    const now = Date.now();

    // If data is fresh (less than 60 seconds old), return it
    if (now - meta.timestamp < 60000) {
      return JSON.parse(cachedData);
    }

    // Data is stale, but return it anyway and refresh in the background
    setTimeout(() => refreshProductCache(id), 0);
    return JSON.parse(cachedData);
  }

  // No cache at all, fetch synchronously
  return fetchAndCacheProduct(id);
}
The user gets a response immediately, even if it’s slightly old, and a background job updates the cache for the next user. This is great for user experience.
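One thing to note: the snippet above leans on two helpers I didn't show. Here's a minimal sketch of what they might look like, assuming the same redis and db handles and key names from the earlier examples; the exact error handling and TTLs are up to you.

// Hypothetical helpers used by getProductStale — adapt to your own db/redis setup
async function fetchAndCacheProduct(id: string) {
  const product = await db.products.findUnique({ where: { id } });
  if (product) {
    // Write the data and its freshness timestamp together so reads stay consistent
    await redis
      .multi()
      .setex(`product:${id}`, 300, JSON.stringify(product))
      .setex(`meta:${id}`, 300, JSON.stringify({ timestamp: Date.now() }))
      .exec();
  }
  return product;
}

async function refreshProductCache(id: string) {
  try {
    await fetchAndCacheProduct(id);
  } catch (err) {
    // A failed background refresh just means we serve stale data a little longer
    console.error('background cache refresh failed', err);
  }
}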
Now, let’s talk about cache invalidation. It’s famously one of the hard problems in computer science. When a product’s price changes in the database, our cache is now wrong. We need to remove or update that cached entry.
A simple approach is to delete the cache key on update.
async function updateProduct(id: string, data: ProductUpdate) {
  // 1. Update the primary database
  const updated = await db.products.update({ where: { id }, data });

  // 2. Invalidate the cache
  await redis.del(`product:${id}`);

  // Optional: 3. Warm the cache with the new data
  await redis.setex(`product:${id}`, 300, JSON.stringify(updated));

  return updated;
}
But what if you have many related keys? If you cache product lists by category, updating one product invalidates all those list caches. This is where cache tagging helps. You can associate a tag, like cat:electronics, with a cache entry. When a product in that category updates, you can find and delete all keys tagged with cat:electronics.
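There's no single library required for this. A minimal sketch using a plain Redis set per tag works: remember which cache keys belong to each tag, and on invalidation delete the members and the set itself. The helper names below (setWithTags, invalidateTag) are my own, not a standard API.

// Minimal tag-based invalidation sketch, assuming the same `redis` (ioredis) client as before
async function setWithTags(key: string, value: unknown, ttl: number, tags: string[]) {
  const pipeline = redis.pipeline();
  pipeline.setex(key, ttl, JSON.stringify(value));
  for (const tag of tags) {
    // Record that this key belongs to the tag, e.g. tag:cat:electronics
    pipeline.sadd(`tag:${tag}`, key);
    pipeline.expire(`tag:${tag}`, ttl);
  }
  await pipeline.exec();
}

async function invalidateTag(tag: string) {
  const keys = await redis.smembers(`tag:${tag}`);
  if (keys.length > 0) {
    await redis.del(...keys);
  }
  await redis.del(`tag:${tag}`);
}

On a product update you'd call invalidateTag('cat:electronics') right after writing to the database, and every cached list for that category disappears in one shot.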
How do we know if our caching is actually working? We need metrics. Let’s add some simple counters to see our cache hit rate.
// src/plugins/metrics.ts
import client from 'prom-client';

const cacheHits = new client.Counter({
  name: 'cache_hits_total',
  help: 'Total number of cache hits',
});

const cacheMisses = new client.Counter({
  name: 'cache_misses_total',
  help: 'Total number of cache misses',
});

// In our getProduct function
if (cached) {
  cacheHits.inc();
  return JSON.parse(cached);
} else {
  cacheMisses.inc();
  // ... fetch from DB
}
A good cache hit rate is often above 90%. If you’re seeing lower numbers, you might be caching the wrong things or your TTLs (Time-To-Live) are too short.
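To actually watch that number, expose the counters over HTTP. Here's a minimal sketch of a metrics endpoint using prom-client's default registry; the /metrics path is just a convention, nothing in the earlier code requires it. Hit rate is then hits / (hits + misses), which you can graph in Prometheus or eyeball by hand.

// src/plugins/metrics.ts (continued) — expose counters for Prometheus or manual inspection
app.get('/metrics', async (request, reply) => {
  reply.header('Content-Type', client.register.contentType);
  return client.register.metrics();
});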
Finally, let’s structure our Fastify app cleanly. We can create a cache decorator to make our routes easy to read.
// src/decorators/cache.decorator.ts
// Note: this uses TypeScript's legacy decorators ("experimentalDecorators": true in tsconfig.json)
export function cache(ttlSeconds: number) {
  return function (target: any, propertyKey: string, descriptor: PropertyDescriptor) {
    const originalMethod = descriptor.value;

    descriptor.value = async function (...args: any[]) {
      // Build a key from the method name and its arguments
      const cacheKey = `func:${propertyKey}:${JSON.stringify(args)}`;

      const cached = await redis.get(cacheKey);
      if (cached) {
        return JSON.parse(cached);
      }

      const result = await originalMethod.apply(this, args);
      await redis.setex(cacheKey, ttlSeconds, JSON.stringify(result));
      return result;
    };

    return descriptor;
  };
}

// Using it in a service
class ProductService {
  @cache(300) // Cache for 5 minutes
  async findFeatured() {
    return db.products.findMany({ where: { featured: true } });
  }
}
This approach keeps your business logic clean and separates caching concerns. It’s a pattern that scales well as your application grows.
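For completeness, here's one way the decorated service might be wired into a route. This wiring is my own sketch; any instantiation style (manual, a DI container, a Fastify plugin) works just as well.

// Hypothetical wiring — the first call hits the database, later calls within the TTL are served from KeyDB
const productService = new ProductService();

app.get('/products/featured', async () => {
  return productService.findFeatured();
});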
We’ve covered a lot: from basic patterns to advanced problems like stampedes and stale data. The goal is always the same: make your application respond faster and handle more load with the same resources. Start with a simple cache-aside pattern, measure your hit rate, and add complexity only when you need it.
Did this help you see caching in a new light? What’s the first endpoint in your current project that you would cache? Building these systems has transformed how I think about performance. If you found this walkthrough useful, please share it with another developer who might be battling slow APIs. Drop a comment below with your biggest caching challenge—I’d love to hear how you’re tackling these problems in your own projects.