Efficient caching strategy for querying hundreds of records
My use case involves clients querying an API to retrieve records for up to 200 IDs at a time. I want to minimize the number of requests without relying on HTTP/2 multiplexing for hundreds of GETs, so what's the optimal strategy for caching each individual ID as it's retrieved, so I'm not hitting the DB every time? Should I send POST batches of 50 IDs and have the Worker perform 50 cacheable GET subqueries?
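Roughly what I have in mind is something like this minimal sketch, using the Workers Cache API to key each record individually (`lookupFromDb` and the `/records/<id>` cache-key URL are placeholders, not real code):

```ts
// Sketch of the batch-POST / per-ID cache idea using the Workers Cache API.
// `lookupFromDb` and the /records/<id> cache-key URL are placeholders.
interface Env {}

async function lookupFromDb(env: Env, id: string): Promise<unknown> {
  // Placeholder: replace with the real DB / origin API lookup for one record.
  return { id, value: `record-${id}` };
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    // Client POSTs { "ids": [...] } with up to ~200 IDs.
    const { ids } = (await request.json()) as { ids: string[] };
    const cache = caches.default;

    const records = await Promise.all(
      ids.map(async (id) => {
        // Synthetic GET request used purely as a per-ID cache key.
        const cacheKey = new Request(`https://example.com/records/${encodeURIComponent(id)}`);

        const hit = await cache.match(cacheKey);
        if (hit) return hit.json();

        // Miss: fetch the single record, then cache it so later batches
        // containing this ID are served from cache (in the same data
        // center, until it gets evicted).
        const record = await lookupFromDb(env, id);
        const response = new Response(JSON.stringify(record), {
          headers: {
            "Content-Type": "application/json",
            "Cache-Control": "public, max-age=3600",
          },
        });
        ctx.waitUntil(cache.put(cacheKey, response));
        return record;
      })
    );

    return Response.json(records);
  },
};
```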
6 Replies
Workers KV would hit overage far quicker as far as I can see.
Are the IDs they query always the same? One of the big issues I see with this is that if most of the queries are unique, or aren't requested very often, they will be evicted before they are ever reused.
The IDs are variable and unrelated to each other, so simply caching the batch request would be useless.
I also can't find an exact answer on how many responses can be cached at one time on either the Free or Pro plans, so I don't know if that subquery plan would be viable either.
Unlimited cached responses, but they can be evicted at any time
Is eviction effectively random or does it depend on things like total cache usage?
Here’s a post from today: https://discord.com/channels/595317990191398933/874727019479437372/1124080823524671590