Nuxt 3 on Vercel - Store cache in memory vs Vercel KV?

Hi everyone, I've developed this habit of always tying every new Nuxt project to a Vercel KV store. But every now and then I run into an issue where the store reaches 5-7k keys (300 B to 12 kB each), even though some of the websites only have 50 pages, and I can't seem to delete a single key or even flush the entire store using useStorage().removeItem() or useStorage().clear() from my Nitro event handler. I don't remember the exact error message, but it was a Redis error about the store being too big or having too many keys.

I use this setup to store my API responses so I don't have to fetch the same content over and over. I set up a process to clear some cache entries on content updates in the CMS and also on Vercel deployments. But sometimes it's hard to know every single part of the website you need to clear the cache for, so I get some pages that don't reflect the latest changes.

Where am I going with all this... I'm suddenly asking myself if this whole KV thing actually gives me any advantage over simple memory storage, given that I don't need this data to persist between deployments. With the memory driver, I could still invalidate page caches the same way I do now, and the entire cache would be flushed on every new Vercel deployment. It would work perfectly for me, and I don't think I really need a KV store, but I wonder if I'd feel a big difference in performance. I'm also looking at the LRU cache driver. I'd greatly appreciate your input on this.
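For context, the kind of mount I'm talking about looks roughly like this; a sketch assuming Nitro's built-in vercelKV driver (the exact driver name and options in my real config may differ):

```ts
// nuxt.config.ts (sketch, not my exact config)
export default defineNuxtConfig({
  nitro: {
    storage: {
      cache: {
        // Nitro's built-in Vercel KV driver; it reads KV_REST_API_URL and
        // KV_REST_API_TOKEN from the environment
        driver: 'vercelKV'
      }
    }
  }
})
```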
2 Replies
kapa.ai · 6d ago
Hi @CedsTrash. I'm kapa.ai, your friendly support bot. I will try to answer your question by searching through the knowledge base. :wave:

Based on your situation, it sounds like you're facing challenges with Vercel KV storage for caching in your Nuxt 3 application. Let me help you understand the tradeoffs between memory storage and Vercel KV for your use case.

Memory vs. Vercel KV for Caching

Since you mentioned you don't need data to persist between deployments, using the memory driver could indeed be a simpler solution. The memory driver keeps data in memory using JavaScript's Map object and is the default storage driver in unstorage (Memory Driver).

Memory Storage Benefits

1. Simplicity: no external service to configure or manage
2. Automatic clearing on deployment: as you noted, memory cache is cleared with each deployment
3. No key limits: you won't hit the Redis key limits you're experiencing

The memory driver is particularly suitable when:

- You don't need persistence between server restarts
- Your cache invalidation strategy is tied to deployments
- You're dealing with a manageable amount of data
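To make that concrete, here's a minimal standalone sketch of the memory driver used directly through unstorage (in Nuxt you'd normally mount it via nitro.storage instead, as shown further down; the api:posts key is just an illustration):

```ts
import { createStorage } from "unstorage";
import memoryDriver from "unstorage/drivers/memory";

// Plain in-memory storage backed by a Map: data lives only as long as the
// process (or serverless instance) does.
const storage = createStorage({
  driver: memoryDriver(),
});

await storage.setItem("api:posts", { fetchedAt: Date.now() });
const posts = await storage.getItem("api:posts");
```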
LRU Cache Alternative

The LRU (Least Recently Used) cache driver could be a good middle ground. It keeps cached data in memory but with size limits:

```ts
import { createStorage } from "unstorage";
import lruCacheDriver from "unstorage/drivers/lru-cache";

const storage = createStorage({
  driver: lruCacheDriver({
    // Default max is 1000 items
    max: 500,
  }),
});
```

This would automatically evict the least recently used items when you reach your configured limit (LRU Cache Driver). Since your entries range from roughly 300 B to 12 kB, capping by total byte size rather than item count may fit better; see the sketch below.
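This is a hedged sketch assuming the unstorage LRU driver forwards its options to the underlying lru-cache package; maxSize and sizeCalculation are lru-cache options, not something from your current setup:

```ts
import { createStorage } from "unstorage";
import lruCacheDriver from "unstorage/drivers/lru-cache";

const storage = createStorage({
  driver: lruCacheDriver({
    // Evict once the estimated total payload size exceeds ~50 MB
    maxSize: 50 * 1024 * 1024,
    // Rough per-entry size estimate in bytes (entries of 300 B to 12 kB
    // vary too widely for a plain item count to be meaningful)
    sizeCalculation: (value, key) => JSON.stringify(value).length + key.length,
  }),
});
```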
Performance Considerations

In terms of performance, memory storage will generally be faster than Vercel KV since:

1. There's no network latency to an external service
2. There's no serialization/deserialization overhead
3. Direct memory access is faster than Redis operations

However, there are tradeoffs:

1. Memory limitations: serverless functions have memory limits
2. Cold starts: each serverless instance starts with an empty cache
3. No sharing: cache isn't shared between function instances

Implementation Example

If you decide to switch to memory storage, you can configure it in your Nuxt config:
```ts
export default defineNuxtConfig({
  nitro: {
    storage: {
      cache: {
        driver: 'memory'
        // Or use LRU cache with limits:
        // driver: 'lru-cache',
        // options: {
        //   max: 1000
        // }
      }
    }
  }
})
```
Then access it in your code:
```ts
// In your API handlers
const cache = useStorage('cache')
```
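Since your invalidation flow can stay the same, here's a minimal sketch of a purge endpoint; the /api/revalidate route name and the cms:page: key prefix are hypothetical, not taken from your setup:

```ts
// server/api/revalidate.post.ts (hypothetical route, e.g. called by a CMS webhook)
export default defineEventHandler(async (event) => {
  const { path } = await readBody<{ path?: string }>(event)
  const cache = useStorage('cache')

  if (path) {
    // Drop the cached entry for a single page
    await cache.removeItem(`cms:page:${path}`)
  } else {
    // Flush everything under the cache mount
    await cache.clear()
  }

  return { revalidated: path ?? 'all' }
})
```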
Conclusion

Based on your requirements (no need for persistence between deployments, issues with key limits), switching to memory storage or LRU cache seems like a reasonable approach. You'll likely see performance improvements for cache operations, and your cache invalidation strategy can remain similar, just without the Redis-related issues. If you're concerned about memory usage in serverless functions, the LRU cache driver gives you more control by limiting the number of cached items.

Mannilov's YouTube video on storage in Nuxt and Nitro provides more examples of using different storage options that might be helpful (YouTube: Store files and K/Vs in Nuxt and Nitro).