Bug!? Server-side calls are not being cached
The code below causes 2 API requests. One of the key patterns in the app router is being able to call API functions anywhere and have the response cached across the entire request.
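Roughly this shape (hypothetical sketch, t3-style server caller assumed):
import { api } from "~/trpc/server"; // assumed t3-style server-side caller

async function Header() {
  const hello = await api.post.hello(); // call #1
  return <h1>{JSON.stringify(hello)}</h1>;
}

async function Footer() {
  const hello = await api.post.hello(); // call #2, runs the procedure again
  return <footer>{JSON.stringify(hello)}</footer>;
}

export default function Page() {
  return (
    <>
      <Header />
      <Footer />
    </>
  );
}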
I've been considering moving back to the pages router because the app router didn't feel completely ready; this will definitely convince me though
Works as expected when manually wrapping everything in a cache function:
import { cache } from "react";
export const getHello = cache(() => api.post.hello());
Data Fetching: Data Fetching Patterns and Best Practices | Next.js
If you need to use the same data (e.g. current user) in multiple components in a tree, you do not have to fetch data globally, nor forward props between components. Instead, you can use fetch or React cache in the component that needs the data without worrying about the performance implications of making multiple requests for the same data.
Are you checking in the dev build (npm run dev), or the production build (npm run build)? In the dev build, the caching is not applied by default. This is intentional.
@noblica I'm checking in production
Very easy to recreate: just create 2 server functions that make the same call to a query that logs 'hello' (with no input) and you'll see it printed twice
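e.g. something like this on the router side (sketch, t3-style helpers assumed):
import { createTRPCRouter, publicProcedure } from "~/server/api/trpc"; // assumed t3-style helpers

export const postRouter = createTRPCRouter({
  hello: publicProcedure.query(() => {
    console.log("hello"); // printed twice per page request when two components call it
    return { greeting: "hello" };
  }),
});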
I can't find anyone mentioning this, which is surprising / concerning
only fetch has this behaviour by default. If you want it for ORM calls or any other function, you need to wrap it in cache() from React yourself
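e.g. for an ORM call, something like this (sketch, Prisma-style client and path assumed):
import { cache } from "react";
import { db } from "~/server/db"; // assumed Prisma-style client

// deduped across the server component tree for the current request, like fetch()
export const getUserById = cache(async (id: string) => {
  return db.user.findUnique({ where: { id } });
});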
If I have to do that, I might as well not use tRPC
ooh sorry just saw you already mentioned that :KEKW:
not related to tRPC, but the most annoying thing I ran into is that Object.is is used on the function args to check if it's the same call
which makes sense ofc
but also means it doesn't work at all when passing objects as arguments
What function args?
well say you have
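(roughly this, hypothetical names:)
import { cache } from "react";
import { db } from "~/server/db"; // assumed Prisma-style client

export const getPost = cache(async (opts: { id: string; includeAuthor: boolean }) => {
  return db.post.findUnique({
    where: { id: opts.id },
    include: { author: opts.includeAuthor },
  });
});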
if you call it from 2 components with the same values
it still won't cache em cause they're not the same object
in this case obviously you can just accept id as a string
Well that's shit
and then it does work
So you'd need to use the same reference all over
If you're doing it with an object, yes, but that's really not practical at all
so I rewrote quite a bit of code to just make more "focussed" functions that just take in an id for example
instead of an id and include params
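something like this instead (sketch):
import { cache } from "react";
import { db } from "~/server/db"; // assumed

// Object.is("1", "1") is true, so two components calling getPostById("1")
// in the same request share one query
export const getPostById = cache(async (id: string) => {
  return db.post.findUnique({ where: { id }, include: { author: true } });
});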
You can always just write your own cache function anyway though right?
you could yes
I can't see that it does anything special
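e.g. a rough sketch that serializes the args so object inputs dedupe too, while still leaning on React's cache() so the memo stays scoped to the current request (hypothetical names):
import { cache } from "react";
import { db } from "~/server/db"; // assumed

// keyed by a string, and equal strings pass the Object.is check;
// a plain module-level Map would leak between requests, hence still wrapping cache()
const getPostCached = cache(async (key: string) => {
  const opts = JSON.parse(key) as { id: string; includeAuthor: boolean };
  return db.post.findUnique({
    where: { id: opts.id },
    include: { author: opts.includeAuthor },
  });
});

export const getPost = (opts: { id: string; includeAuthor: boolean }) =>
  getPostCached(JSON.stringify(opts));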
also another thing I've noticed, not sure cause need to triple check, but I don't think that the cache works properly when using Next's metadata functions and such
even tho I'm pretty sure they do say it should
so like if you export a generateMetadata function and a page, and have them both call the same cached function, it still runs twice
it definitely does this during build, but I'm not 100% sure if it also does it at runtime
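the shape I mean is roughly this (sketch, reusing a cached getPostById helper, assumed path):
import type { Metadata } from "next";
import { getPostById } from "~/server/queries"; // assumed location of the cached helper

export async function generateMetadata({ params }: { params: { id: string } }): Promise<Metadata> {
  const post = await getPostById(params.id);
  return { title: post?.title };
}

export default async function Page({ params }: { params: { id: string } }) {
  const post = await getPostById(params.id);
  return <h1>{post?.title}</h1>;
}

// during next build the underlying query seems to run twice for the same id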
I'm moving back to the pages router; the juice isn't worth the squeeze tbh
eh idk this is just an extra, you don't have to use it
I personally do really like app router
Having a functional cache is pretty important; thankfully I tested this, because it would have meant 6x the serverless function usage
Or at least, Vercel shouldn't list this as a pattern for sharing data
@v-for-v how would switching back to the pages router solve the server side caching issue though?
It forces you to use react-query in most cases, which caches requests
I could make everything a client component and continue using react-query, which would work the same, but then I'm not getting any benefit for the increased complexity of using the app router
You can always do API calls from client components using React Query, which handles your caching, and still use the app router to take advantage of RSC for things you don't need to cache, plus the new routing features like layouts etc
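e.g. (sketch, t3-style React Query client assumed):
"use client";

import { api } from "~/trpc/react"; // assumed t3-style React Query client

export function Greeting() {
  // React Query caches and dedupes this on the client across components
  const hello = api.post.hello.useQuery();
  if (!hello.data) return <p>Loading...</p>;
  return <p>{JSON.stringify(hello.data)}</p>;
}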
I thought the problem was server side caching, not client side?
React Query might cache these requests on the client's machine, but that's only one client. Your server will still make x amount of requests for x amount of clients requesting the page. Seems like the issue will still be there, or did I misunderstand?
Yeah, I'm not talking about server caching across multiple devices
The issue is that the tRPC calls aren't being cached within the same server request as they should be
Yea, the existing tRPC docs refer to a caching method (setting cache-control headers) which is ignored with the app router.
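The method they describe is roughly this, in the pages-router adapter (sketch from memory of the tRPC response-caching docs; paths assumed):
// pages/api/trpc/[trpc].ts
import { createNextApiHandler } from "@trpc/server/adapters/next";
import { appRouter } from "~/server/api/root"; // assumed
import { createTRPCContext } from "~/server/api/trpc"; // assumed

export default createNextApiHandler({
  router: appRouter,
  createContext: createTRPCContext,
  responseMeta({ type, errors }) {
    // only cache successful queries
    if (type === "query" && errors.length === 0) {
      return { headers: { "cache-control": "s-maxage=60, stale-while-revalidate=86400" } };
    }
    return {};
  },
});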
I've found no way to control the tRPC cache with the app router
I essentially have just removed server components from my app and have kept the app router
Any performance gained for me would have been lost with the multiple requests
Yea the endpoints would still be exposed and uncached in prod though - so greater potential for a DDoS?
Why would it be a greater potential for DDoS when it's making fewer requests than a server component would?
Apologies, 'greater' is misleading here. Let me rephrase:
Your tRPC endpoints can still be queried directly, and will have the previous issue of being uncached.
Thus, your tRPC endpoints themselves are a DDoS vector (regardless of how you're querying them in your app)
I would need to have some client side calls in the app regardless of whether I went the server route
It's no different than using the pages router