Where does Cache fit into the Traffic Sequence?

I'm wondering where retrieving items from the CF Cache fits into the traffic sequence. Would it be at the very end, between Load Balancing and the Origin Server? (And it would be Lower->Upper->R2 Reserve prior to Origin?)
Erisa
Erisa•13mo ago
It's technically both just after and at the same stage as Workers, depending on what happens, because Workers can manipulate the cache directly using the Cache API, or make fetch calls with default or custom caching options. The cache doesn't care about your load balancer and will take action before the request reaches it, but it will obey all the other rules along the way.
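A minimal sketch of the two paths described above, assuming a Workers environment (the TTL value is illustrative, and `caches.default` only exists inside the Workers runtime, so the handler is a sketch rather than something runnable outside a Worker):

```javascript
// Path 1: fetch with custom caching options. The `cf` object is
// Workers-specific and is ignored by fetch outside that runtime.
function cachedFetchOptions(ttlSeconds) {
  return { cf: { cacheTtl: ttlSeconds, cacheEverything: true } };
}

// Path 2: direct cache manipulation via the Cache API, inside a
// Worker's fetch handler (sketch only; `caches.default` is a
// Workers runtime global).
async function handleRequest(request) {
  const cache = caches.default;
  let response = await cache.match(request);
  if (!response) {
    response = await fetch(request, cachedFetchOptions(3600));
    // A real Worker would typically use event.waitUntil() here so the
    // client response isn't blocked on the cache write.
    await cache.put(request, response.clone());
  }
  return response;
}
```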
nickchomey
nickchomeyOP•13mo ago
Thank you. So, if a worker is registered for the route (after the request has passed through all the preceding transform rules, etc.), the worker will pick up the request, do its thing, and return to the client. If the cache isn't used (be it for read or write), so be it. If no worker is registered for that route, the Smart Tiered Cache (lower->upper->R2 Reserve) is checked, and if still unsuccessful, the request hits the load balancer (if in use), which sends it (optionally via Argo Smart Routing) to the origin to be generated, then cached in R2 Reserve (if used), upper, local lower, and finally returned to the client? Or, if a worker is registered AND makes a request through the cache (via the Cache API or fetch), then it follows the same lower->upper->reserve->LB->origin->LB->reserve->upper->lower->worker->client path?
Erisa
Erisa•13mo ago
When you put it like that it suddenly seems a lot more complicated 😄
nickchomey
nickchomeyOP•13mo ago
Hahaha. But is it accurate? I'm just trying to get an understanding of the whole Traffic Sequence as it relates to the various products.
Erisa
Erisa•13mo ago
Yes, that sounds correct and aligns with my understanding of the internals, though the origin data is returned to the client asynchronously: it doesn't wait for the inserts into the various caches to finish first. Which is why you can get fun cache statuses like UPDATING if you have a lot of traffic.
nickchomey
nickchomeyOP•13mo ago
Right, I read that yesterday somewhere, which makes sense: get the response to the client ASAP, and then write to the cache when able. The UPDATING status comes when a simultaneous request arrives before the cache has finished updating? What would happen in that situation: go to origin, or wait for the update?
Erisa
Erisa•13mo ago
It's described here: https://developers.cloudflare.com/cache/concepts/default-cache-behavior/#cloudflare-cache-responses
The resource was served from Cloudflare’s cache and was expired, but the origin web server is updating the resource. UPDATING is typically only seen for very popular cached resources.
It still serves from the old cache. I think it's used in stale-while-revalidate cases, just something you don't see every day. I think if it was the first instance and there wasn't anything there but something was in the process of being inserted, you would just get another MISS.
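For reference, serve-stale behavior like this is opted into with a standard HTTP Cache-Control directive (RFC 5861). A hedged example, with illustrative numbers (serve from cache for 60 s, then serve stale for up to 30 s while revalidating in the background):

```javascript
// Example response headers an origin might send to allow
// stale-while-revalidate behavior. The specific values here are
// illustrative, not taken from this thread.
const headers = new Headers({
  "Cache-Control": "max-age=60, stale-while-revalidate=30",
});
```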
nickchomey
nickchomeyOP•13mo ago
Perfect, thanks for the link. I'll read more now. And thanks for all the clarifications/confirmations! You guys are super helpful. I can't wait to finally implement my application architecture to the greatest degree possible in CF! I'm also glad that I've been able to significantly simplify the planned architecture by making use of Cache rather than heavy use of KV.
nickchomey
nickchomeyOP•13mo ago
Perhaps one point of feedback - not sure if it should go here, in GitHub, or somewhere else. I found it VERY confusing/difficult to wrap my head around the capabilities of Cache from Workers. The docs make it seem very limited, but that's just the Workers Cache API; it seems we have full control via fetch to the general CF API. I'm specifically talking about purging a single URL globally. I had written off Cache for many months because I thought it wouldn't be possible to purge globally, so I started planning around the eventual consistency of KV. But the other day I revisited it all and discovered that we can purge globally with fetch from the worker (or anywhere). I think it would fix the problem if a single sentence could be added to this section noting that fetch from a worker can purge globally. https://developers.cloudflare.com/workers/learning/how-the-cache-works/#single-file-purge--assets-cached-by-a-worker
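A sketch of the global single-URL purge being described, via the Cloudflare REST API's `purge_cache` endpoint. The zone ID and token here are placeholders; the request shape (a `files` array of URLs) follows the documented endpoint:

```javascript
// Build a global single-file purge request for the Cloudflare API
// (POST /client/v4/zones/:zone_id/purge_cache). Callable from a
// Worker or anywhere else with fetch; credentials are placeholders.
function buildPurgeRequest(zoneId, apiToken, urls) {
  return {
    url: `https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`,
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ files: urls }),
  };
}

// Usage (inside a Worker or any fetch-capable environment):
// const req = buildPurgeRequest(ZONE_ID, API_TOKEN,
//   ["https://example.com/page.html"]);
// await fetch(req.url, { method: req.method, headers: req.headers, body: req.body });
```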
nickchomey
nickchomeyOP•13mo ago
Also, while I understand that they're very different products, it might be helpful to add something somewhere - perhaps in the KV docs - to say that while KV can take up to a minute to update values globally, you could alternatively store things in the Cache and it would "propagate" essentially instantly, because the value would be stored in the upper cache and then copied globally as requested. I'm largely just looking to store HTML files/values on the edge. Obviously Cache is ideal for this, but because of the above confusion, I was trying to figure out how to get KV to store the HTML instead, and the 60s delay was a huge hurdle. Whereas if you can accept the latency of pulling from upper into unpopulated lower caches, you could in theory propagate that HTML file to all DCs in seconds. Sorry if I'm being unclear, or off-topic.
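The pattern above (storing rendered HTML in the cache instead of KV) can be sketched as follows. The hostname, path, and TTL are illustrative, and `caches.default` exists only inside the Workers runtime, so `storePage` is a sketch rather than something runnable elsewhere:

```javascript
// Wrap an HTML string in a cacheable Response with an illustrative TTL.
function htmlResponse(body) {
  return new Response(body, {
    headers: {
      "Content-Type": "text/html",
      "Cache-Control": "max-age=86400",
    },
  });
}

// Store a rendered page in the Cloudflare cache under a synthetic
// request URL (sketch: `caches.default` is a Workers runtime global).
// Tiered caching then fills other colos on demand from the upper tier.
async function storePage(pathname, html) {
  const cacheKey = new Request(`https://example.com${pathname}`);
  await caches.default.put(cacheKey, htmlResponse(html));
}
```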